
Introducing Vellum for Agents
Today we're introducing Vellum. All you do is chat and let Vellum build reliable Agents for you.

Workflow triggers, multimodal outputs, 40+ integrations, and other updates making agent building easier and faster.

Native integrations, Agent Builder Threads, and upgrades that make agent building faster than ever in Vellum.

Agent Builder (beta), Custom Nodes, AI Apps, and more for faster and more complex agent building in Vellum.

AI Apps turn your deployed Workflows into no-code apps your whole team can share to use directly in Vellum.

Introducing Agent Node: Multi-tool use with automatic schema, loop logic and context tracking.

MCP-powered Agent Nodes, public Workflow sharing, and a new Workflow Console for easier, collaborative building.

Upgraded Environments, Workflow, and Prompt Builder plus a new Agent Node for faster and easier building on Vellum.

Go from idea to AI workflow in seconds and continue to build in the UI or your IDE.

A first-class way to manage your work across Development, Staging, and Production.

Complete control over the business logic and runtime of your AI workflows in Vellum.

Full control in code and real-time visibility in UI, built for teams shipping reliable AI.

AI-powered features and easier ways to customize and build together, across both the SDK and visual builder.

We have a bunch of quality-of-life upgrades including protected tags, smoother Workflows, and more!

Our biggest product feature drop ever: 27 updates in a single month (a Vellum record!)

This month we improved how you find models, preview Workflows SDK code, and more!

Support for IBM Granite models in Vellum.

Vellum 2025: Workflows SDK Beta, self-serve org setup, and new model support!

Unwrap Vellum's latest features: optional inputs, error handling, JSON indexing!

Capture and use end-user feedback as ground truth data to improve your AI system’s accuracy.

Now you can run Llama 3.1 405B at 200 t/s via SambaNova on Vellum!

Something special is coming, plus new models and quality of life improvements

Learn how to build modular, reusable, and version-controlled tools (subworkflows) to keep your workflows efficient.

Write and execute Python or TypeScript directly in your workflow

New debugging features for AI workflows to get visibility down to every decision and detail

Workflow execution timeline revamp, higher performance for evals, improved Map node debugging and more

Starting today, you can unlock 2,100 t/s with Llama 3.1 70B in Vellum for real-time AI apps.

More control with workflow replays, cost and latency tracking, and new Workflow Editor UI

Learn about the latest features and improvements shipped by the Vellum team in July.

Learn more about the latest updates at Vellum: Map Nodes, Inline Subworkflows, API updates and more

Run Workflows from Node, evaluate function call outputs, Guardrail nodes, RAGAS metrics, image support & more.

Prompt editor, prompt blocks, reusable evaluation metrics, new models, and more.

Subworkflow nodes, image support in the UI, error nodes, node mocking, workflow graphs and so much more.

SOC 2 Type 2 Compliant, Prompt Node retries, Evaluation reports, Custom release tags, Cloning workflow nodes & more.

Enhanced prompt comparison, more metrics, flexibility, and new reports for effective LLM evaluation.

Introducing a new way to invoke your Vellum stored prompts!

December: fine-grained control over your prompt release process, powerful new APIs for executing Prompts, and more

November: major Test Suite improvements, arbitrary code execution, and new models!

October: universal LLM support, new Test Suite metrics, and performance improvements

September is full of enhancements to Workflows, Security, Support, and more!

August brings the introduction of Vellum Workflows, Metadata Filtering in Search, and a new design

Vellum Workflows help you quickly prototype, deploy, and manage complex chains of LLM calls

We've continued building out our platform; here's a look at the latest from us and a sneak peek of what's coming!

We've raised $5m to double down on our mission to help companies build production use cases of LLMs

We've shipped a lot of features recently; here's a look at the latest updates from us!

Details on how best to leverage the Vellum <> LlamaIndex integration

Use Vellum Test Suites to test the quality of prompts in bulk before production. Unit testing for LLMs is here!

Compare model quality across OpenAI's GPT-4, Anthropic's Claude and now Google's PaLM LLM in our platform

Vellum Search, the latest addition to our platform helps companies use proprietary data in LLM applications

We’re excited to publicly announce the start of our new adventure: Vellum