Vercel Queues entered public beta on February 27, 2026 — and it's the missing infrastructure layer Zero-Human Companies have been waiting for. Durable event streaming with automatic retries, delivery guarantees, and zero manual intervention.

What Just Shipped

Vercel announced Queues in public beta for all teams. It's a durable event streaming system built on Fluid Compute that gives serverless functions something they've always lacked: reliable asynchronous execution.

  • At-least-once delivery semantics — messages persist until processed or expired
  • Automatic retries with visibility timeouts — no more failed jobs falling through cracks
  • Multi-AZ synchronous replication — durability across availability zones
  • Idempotency keys — prevent duplicate processing
  • Delayed delivery — schedule tasks up to the retention period
  • Concurrency control — process at controlled rates during traffic spikes

The ZHC Economics

Pricing That Enables Automation

Vercel Queues is priced at $0.60 per 1M operations. For context:

  • 1 million queued tasks cost $0.60
  • Fluid compute charges apply for push mode invocations (existing rates)
  • No minimums, no provisioning overhead

This changes the math for ZHC automation. A company processing 10 million background tasks monthly pays $6 for queue operations. The alternative — building retry logic, dead letter queues, and reconciliation systems — costs engineering time that humans must spend.
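The math is simple enough to sanity-check in a few lines of TypeScript. This is just the quoted $0.60-per-1M rate applied as arithmetic, not anything from the Vercel SDK:

```typescript
// Back-of-the-envelope queue cost, assuming the quoted
// rate of $0.60 per 1 million operations.
const RATE_PER_MILLION_USD = 0.6;

function queueCostUsd(operations: number): number {
  return (operations / 1_000_000) * RATE_PER_MILLION_USD;
}

console.log(queueCostUsd(10_000_000)); // 6
```

Ten million monthly operations comes out to $6, before any Fluid compute charges for the invocations themselves.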

What You No Longer Need to Build

  • Retry loops with exponential backoff
  • Dead letter queues for failed operations
  • Reconciliation jobs to find missed work
  • Manual intervention dashboards for stuck tasks
  • Deployment-time queue draining scripts
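For a sense of what the first two bullets mean in practice, here is the kind of hand-rolled helper teams typically write without a durable queue. It is a generic sketch (names and defaults are illustrative), not code from the Vercel SDK:

```typescript
// The retry-with-exponential-backoff boilerplate a durable
// queue makes unnecessary. `task` is any async operation.
async function withRetries<T>(
  task: () => Promise<T>,
  maxAttempts = 5,
  baseDelayMs = 100,
): Promise<T> {
  let lastError: unknown;
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    try {
      return await task();
    } catch (err) {
      lastError = err;
      // Exponential backoff: 100ms, 200ms, 400ms, ...
      const delay = baseDelayMs * 2 ** attempt;
      await new Promise((resolve) => setTimeout(resolve, delay));
    }
  }
  // Without a managed queue, this is where you'd hand the
  // payload to a self-built dead letter queue.
  throw lastError;
}
```

Every function that touches a flaky dependency ends up wrapped in something like this, and the dead-letter path at the bottom is where work silently gets lost.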

Agent-Native Patterns

For AI agents operating autonomously, Vercel Queues enables patterns that previously required complex orchestration:

1. Fire-and-Forget Agent Tasks

An agent can queue 1000 background jobs and move on. The queue guarantees execution even if the agent's session ends:

// Agent queues tasks without waiting
const { messageId } = await send('process-leads', {
  leadId: lead.id,
  agentContext: session.context
});
// Agent continues immediately — no blocking
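Queuing 1000 jobs is the same call fanned out in parallel. A minimal sketch, with `send` stubbed locally so the example is self-contained (in real code it would be the `@vercel/queue` import used above):

```typescript
// Stand-in for the SDK's send(), so this sketch runs on its own.
let counter = 0;
async function send(
  topic: string,
  payload: unknown,
): Promise<{ messageId: string }> {
  return { messageId: `msg-${++counter}` };
}

// Enqueue a batch of lead-processing jobs in parallel
// and collect the message ids for logging.
async function enqueueLeads(leadIds: string[]): Promise<string[]> {
  const results = await Promise.all(
    leadIds.map((leadId) => send("process-leads", { leadId })),
  );
  return results.map((r) => r.messageId);
}
```

The agent awaits only the enqueue round-trips, not the work itself; once the messages are accepted, processing survives the end of the session.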

2. Deployment-Resilient Workflows

Traditional serverless functions lose in-flight work during deployments. Queues persist messages and resume processing automatically. Agents can deploy new versions without draining or coordinating ongoing tasks.

3. Scheduled Agent Actions

Delayed delivery enables agent scheduling without cron infrastructure:

// Schedule follow-up for 24 hours later
await send('follow-up', { customerId }, {
  delayMs: 24 * 60 * 60 * 1000
});
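The inline arithmetic works, but named duration constants keep `delayMs` values readable in agent code. Plain TypeScript, no SDK involved:

```typescript
// Named duration constants so delayMs values stay readable.
const SECOND_MS = 1_000;
const MINUTE_MS = 60 * SECOND_MS;
const HOUR_MS = 60 * MINUTE_MS;
const DAY_MS = 24 * HOUR_MS; // 86_400_000, the 24h delay used above
```

The follow-up call then reads as `{ delayMs: DAY_MS }` instead of a chain of multiplications.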

4. Cross-Agent Communication

Multiple agent types can subscribe to the same topic, enabling pub/sub patterns between specialized agents (sales, support, analytics) without direct coupling.
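One way this fan-out could be wired, reusing the vercel.json trigger syntax from the implementation section below. The route names here are hypothetical, and how per-subscriber consumer groups are declared is worth verifying against the Queues docs:

```json
{
  "functions": {
    "app/api/queues/notify-sales/route.ts": {
      "experimentalTriggers": [
        { "type": "queue/v2beta", "topic": "orders" }
      ]
    },
    "app/api/queues/update-analytics/route.ts": {
      "experimentalTriggers": [
        { "type": "queue/v2beta", "topic": "orders" }
      ]
    }
  }
}
```

Each handler receives its own copy of every message on the topic, so the sales and analytics agents never need to know about each other.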

Implementation for ZHCs

The setup is minimal — three files and you're operational:

1. Producer (Any Route Handler)

import { send } from '@vercel/queue';

export async function POST(request: Request) {
  const order = await request.json();
  const { messageId } = await send('orders', order);
  return Response.json({ messageId });
}

2. Consumer (Queue Handler)

import { handleCallback } from '@vercel/queue';

export const POST = handleCallback(async (order, metadata) => {
  console.log('Processing order', metadata.messageId);
  // Agent logic here — automatic retries on failure
});

3. Configuration (vercel.json)

{
  "functions": {
    "app/api/queues/fulfill-order/route.ts": {
      "experimentalTriggers": [
        { "type": "queue/v2beta", "topic": "orders" }
      ]
    }
  }
}

The consumer route becomes private — no public URL, only Vercel's queue infrastructure can invoke it. This is security-by-default for agent workloads.

Relationship to Vercel Workflow

Queues is the lower-level primitive that powers Vercel Workflow. The distinction matters for ZHC builders:

  • Use Workflow for multi-step orchestration with sleeps, hooks, and durable state
  • Use Queues directly when you need fine-grained control over message publishing, routing, and consumption

For most ZHC agent systems, Queues provides the right abstraction level — simple enough to reason about, powerful enough to handle production workloads.

The Bigger Picture

This release matters beyond the feature list. It's infrastructure built for the autonomous era — where agents, not humans, handle failure cases and retries.

Before Vercel Queues, building reliable agent automation required:

  • Queue infrastructure (SQS, RabbitMQ, Redis)
  • Retry logic in every function
  • Monitoring and alerting for stuck tasks
  • Human runbooks for edge cases

After Vercel Queues:

  • One SDK call to queue work
  • Automatic retries with exponential backoff
  • Built-in observability
  • Agents handle their own failures

This is the infrastructure layer that makes zero-human operation economically viable at small scale. No ops team required. No queue expertise needed. Just reliable, durable execution for agent-driven workloads.

Availability & Next Steps

Vercel Queues is in public beta as of February 27, 2026. Available for all teams (Hobby, Pro, Enterprise). The API is stable — Vercel has been running this internally to power Workflow.

Resources: