Vellum Product Update | March 2025

Our biggest product feature drop ever: 27 updates in a single month (a Vellum record!)

Written by Sharon Toh

🌸 Spring has sprung, and with it, our biggest feature drop ever: 27 updates in a single month (a Vellum record! 🚢). From Prompt Diffing and real-time monitoring integrations to GA of our Workflows SDK and PDF inputs, March was packed with upgrades to help you build, test, and ship faster than ever.

Let’s dig into what’s new.

🆕 Key New Features

Prompt Comparison / Diffing

This one’s been at the top of many of our customers' wishlists — and it’s finally here. You can now view side-by-side diffs between prompt versions, so you never have to guess what changed again.


Whether you’re reviewing edits, debugging issues, or approving updates before deployment, this highly requested feature gives you full visibility into every change, line by line.

Deployment Release Reviews

Inspired by GitHub PR reviews, this feature allows team members to review, approve, or request changes to Prompt and Workflow Deployments. Perfect for Enterprise teams that require a formal approval process to comply with SOC 2 requirements. Watch Noa break it down:

Native Retry & Try functionality

You can now “wrap” any node with Try or Retry Adornments directly from the side panel — giving you first-class error handling.


Retry will keep invoking the wrapped node until it succeeds (or hits the max attempts). Try will attempt once and continue gracefully even if it fails.

Bonus: these show up cleanly in your monitoring view, just like a single-node Sub-workflow.
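
If you're curious how the two adornments differ in practice, here's a minimal, hypothetical Python sketch of the semantics (illustrative only, not the Vellum SDK itself): Retry re-runs the wrapped step up to a maximum number of attempts, while Try runs it once and lets the Workflow continue even on failure.

```python
# Hypothetical sketch of Retry vs. Try semantics; illustrative only, not the Vellum SDK.

def with_retry(node_fn, max_attempts=3):
    """Keep invoking node_fn until it succeeds or max_attempts is exhausted."""
    last_error = None
    for _ in range(max_attempts):
        try:
            return node_fn()          # success: return the node's output immediately
        except Exception as err:      # failure: remember the error and try again
            last_error = err
    raise last_error                  # every attempt failed: surface the final error

def with_try(node_fn):
    """Attempt node_fn once; on failure, return the error so the Workflow can continue."""
    try:
        return node_fn(), None
    except Exception as err:
        return None, err
```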

Monitoring View Overhaul

For our VPC and self-hosted customers — this update is for you! With a brand new Grafana-based implementation, the revamped Monitoring View offers faster load times, smoother zooming, and better filters for things like date ranges and Release Tags. It’s everything you need to analyze performance at scale, now wherever you're deployed.


Webhooks + Datadog Integration

You can now configure Webhooks to receive real-time Vellum event updates — perfect for syncing with external tools like a data warehouse or custom health dashboard.
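
If you want to fan those events into your own systems, the receiving side can be a small HTTP endpoint. Below is a minimal sketch using FastAPI; the payload fields shown are assumptions for illustration, so check the Webhooks documentation for the actual event schema.

```python
# Minimal webhook receiver sketch (FastAPI). The payload fields accessed below are
# assumptions for illustration; see Vellum's Webhooks docs for the real event schema.
from fastapi import FastAPI, Request

app = FastAPI()

@app.post("/vellum/events")
async def handle_vellum_event(request: Request):
    event = await request.json()
    # Forward to a data warehouse, health dashboard, or alerting pipeline here.
    print("Received Vellum event:", event.get("name"), event.get("id"))
    return {"ok": True}
```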


You can emit those same Vellum events in near-real-time to Datadog for deeper observability!


Workflows SDK General Availability

All newly created Workflows are now SDK-enabled by default! The Vellum Workflows SDK makes it easier to build predictable AI systems and collaborate with non-technical teammates by letting you build your AI Workflows in code or in the UI. Changes stay in sync by pushing and pulling between code and UI. Try our 5-minute quickstart.
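
To give a flavor of the code-first side, here's a minimal sketch in the spirit of the quickstart. The import paths and base-class names are assumptions based on the public docs, so defer to the quickstart for the canonical example.

```python
# Minimal Workflows SDK-style sketch. Import paths and class names are assumptions
# based on the public quickstart; see the SDK documentation for the canonical version.
from vellum.workflows import BaseWorkflow
from vellum.workflows.nodes.bases import BaseNode


class SayHello(BaseNode):
    class Outputs(BaseNode.Outputs):
        greeting: str

    def run(self) -> "SayHello.Outputs":
        return self.Outputs(greeting="Hello from a code-defined Workflow!")


class HelloWorkflow(BaseWorkflow):
    graph = SayHello  # a single-node graph; larger flows chain multiple nodes together

    class Outputs(BaseWorkflow.Outputs):
        greeting = SayHello.Outputs.greeting


if __name__ == "__main__":
    final_event = HelloWorkflow().run()
    print(final_event.outputs.greeting)
```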

PDFs as a Prompt Input

You can now pass PDFs directly into Prompts — perfect for extracting structured data from documents and powering downstream workflows. Just drag and drop a PDF into a Chat History variable, and if the model supports it (like Anthropic’s), you’re good to go. It’s like multi-modal inputs… but for documents.

Since PDFs are handled as images under the hood, this pairs perfectly with Vellum’s support for image inputs. Vellum supports images for OpenAI’s vision models like GPT-4 Turbo with Vision — both via API and in the UI. Read more about it here.
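
For reference, here's roughly what the same idea looks like if you call a PDF-capable model (Anthropic, in this case) directly instead of through the Vellum UI; treat the model ID and prompt as placeholders.

```python
# Rough sketch of sending a PDF straight to a PDF-capable model (Anthropic shown as an
# example). In Vellum you'd simply drop the file into a Chat History variable instead.
import base64

import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

with open("invoice.pdf", "rb") as f:
    pdf_b64 = base64.standard_b64encode(f.read()).decode()

response = client.messages.create(
    model="claude-3-5-sonnet-latest",  # placeholder: any PDF-capable model
    max_tokens=1024,
    messages=[{
        "role": "user",
        "content": [
            {"type": "document",
             "source": {"type": "base64", "media_type": "application/pdf", "data": pdf_b64}},
            {"type": "text", "text": "Extract the invoice number and total as JSON."},
        ],
    }],
)
print(response.content[0].text)
```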

Workflow Deployment Executions – Cost Column

You’ll now see a Cost column in the Workflow Deployment Executions view — helping you track compute spend at a glance. This column breaks down the total cost per execution, summing up all Prompt invocations — so you get a clear picture of what’s driving spend across each run.


🔧 Quality of Life Improvements

Global Search

You can now search across all your Prompts, Workflows, Document Indexes, and more with Global Search.


This long-awaited feature lets you quickly find and jump to any resource in your Workspace — no more clicking around to track things down.

New Workflow Deployment APIs

You can now use two new APIs to List Workflow Deployment Executions for a specific Workflow Deployment or Retrieve Workflow Deployment Execution for any single execution — making it easier to programmatically track and analyze Workflow runs outside of Vellum.
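
As a sketch of what that could look like over HTTP: the endpoint path, auth header, and response fields below are placeholders, so check the API reference for the exact routes and schemas.

```python
# Hypothetical sketch of listing executions for a Workflow Deployment over HTTP.
# The endpoint path, auth header, and response fields are placeholders; consult
# Vellum's API reference for the exact routes and schemas.
import os

import requests

API_KEY = os.environ["VELLUM_API_KEY"]
DEPLOYMENT_ID = "your-workflow-deployment-id"  # placeholder

resp = requests.get(
    f"https://api.vellum.ai/v1/workflow-deployments/{DEPLOYMENT_ID}/executions",  # placeholder path
    headers={"X-API-KEY": API_KEY},  # verify the auth header name in the docs
)
resp.raise_for_status()

for execution in resp.json().get("results", []):
    print(execution.get("id"), execution.get("state"))
```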

Automatic Evaluations Setup

Vellum now auto-generates a Test Suite with one Test Case per Scenario the first time you visit the Evaluations tab, so you can start adding Metrics and Ground Truth instantly.


🧠 Model & API Support

  • Gemini 2.5 Pro: Added support for Gemini 2.5 Pro Experimental (03-25 version), which supports a 1M input token context window and 64k output tokens via Google’s Gemini API.
  • Llama 3.3 70B via Cerebras: Added support for Llama 3.3 70B through Cerebras.
  • Qwen models via Groq: Added support for QwQ 32B, Qwen 2.5 Coder 32B, and Qwen 2.5 32B, all via Groq’s preview models.
  • Qwen QwQ 32B via Fireworks AI: Added support for Qwen QwQ 32B through Fireworks AI.
  • PDF support for Gemini 2.0 Flash models: Drag-and-drop PDF support added for Gemini 2.0 Flash Experimental, Gemini 2.0 Flash Thinking Experimental, and Gemini 2.0 Flash.

That’s a wrap on March! From fresh debugging views to friendlier editors and deeper integrations, this month was all about helping you move faster with more clarity. We’ll be back in April with even more. Until then — happy building! 🚀

Changelog: https://docs.vellum.ai/changelog/2025/2025-03
