  • Trusted Sources for Deployment Protection


    Trusted Sources lets protected deployments accept short-lived identity tokens (OIDC) from Vercel projects and external services you authorize, so you no longer have to share a long-lived Protection Bypass for Automation secret. Trusted Sources is the recommended approach, but Protection Bypass for Automation continues to work.

    Callers attach an OIDC token in the x-vercel-trusted-oidc-idp-token header. Vercel then verifies the signature, checks the claims you configured, and confirms the environment matches the rule.

    Authorize Vercel projects

    By default, the Vercel OIDC token for a project can call its own deployments. To authorize another project in the same team, add it to Trusted Sources.

    Self-access and cross-project rules are both customizable with from/to environment pairs. To authenticate a request from a project, forward its Vercel OIDC token:

    function.ts
    import { getVercelOidcToken } from '@vercel/oidc';

    await fetch('https://protected-project.vercel.app/api/data', {
      headers: { 'x-vercel-trusted-oidc-idp-token': await getVercelOidcToken() },
    });

    Vercel Function example

    Authorize external services

    You can authorize any custom OIDC provider as a trusted external service, such as GitHub Actions or a Vercel project in another team.

    workflow.yaml
    - uses: actions/github-script@v7
      id: token
      with:
        script: |
          const token = await core.getIDToken();
          core.setSecret(token);
          core.setOutput('token', token);
    - run: |
        curl -sSf https://protected-project.vercel.app/api/data \
          -H "x-vercel-trusted-oidc-idp-token: ${{ steps.token.outputs.token }}"

    GitHub Action example

    Read the documentation to learn more.

    Kit Foster, Marc Greenstock, Tim White

  • Create Vercel Firewall rules with natural language

    Vercel Firewall now lets you create WAF custom rules using natural language. Describe the behavior you need and the dashboard will generate the rule.

    Visit the firewall custom rules page to try creating a rate-limiting rule, or use the Vercel CLI:

    vercel firewall rules add --ai "Rate limit /api to 100 requests per minute by IP"

    WAF custom rules let you control traffic to your site by logging, blocking, challenging, rate limiting, or redirecting requests based on conditions like IP address, path, country, user agent, and more.

    For example, you can:

    • Log all requests to /api/webhook with a missing authorization header

    • Block all requests to /wp-admin

    • Challenge all traffic to /checkout that doesn't come from the US
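
    Each of the rules above can be described as a prompt to vercel firewall rules add --ai (the prompts are illustrative free-text descriptions, not fixed syntax):

```shell
vercel firewall rules add --ai "Log all requests to /api/webhook with a missing authorization header"
vercel firewall rules add --ai "Block all requests to /wp-admin"
vercel firewall rules add --ai "Challenge all traffic to /checkout that doesn't come from the US"
```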

    Generate your first rule or learn more in the documentation.

  • Fast mode for Opus 4.7 available on AI Gateway

    Fast mode for Claude Opus 4.7 is now available on AI Gateway in research preview.

    Fast mode delivers ~2.5x faster output token generation with full Opus 4.7 intelligence. This is an early, experimental feature.

    To enable fast mode, pass speed: 'fast' in the anthropic provider options with anthropic/claude-opus-4.7.

    import { streamText } from "ai";

    const result = streamText({
      model: "anthropic/claude-opus-4.7",
      prompt: "Analyze this codebase structure and create a plan to add user auth.",
      providerOptions: {
        anthropic: {
          speed: "fast",
        },
      },
    });

    const text = await result.text;

    You can use fast mode with Claude Code via AI Gateway by setting the CLAUDE_CODE_SKIP_FAST_MODE_ORG_CHECK and CLAUDE_CODE_ENABLE_OPUS_4_7_FAST_MODE variables in your shell configuration file or in ~/.claude/settings.json.

    export CLAUDE_CODE_ENABLE_OPUS_4_7_FAST_MODE=1
    export CLAUDE_CODE_SKIP_FAST_MODE_ORG_CHECK=1

    {
      "env": {
        "CLAUDE_CODE_SKIP_FAST_MODE_ORG_CHECK": "1",
        "CLAUDE_CODE_ENABLE_OPUS_4_7_FAST_MODE": "1"
      }
    }

    Fast mode is priced at 6x standard Opus rates.

    Standard: Input $5 / 1M tokens, Output $25 / 1M tokens

    Fast Mode: Input $30 / 1M tokens, Output $150 / 1M tokens

    All standard pricing multipliers (e.g., prompt caching) apply on top of these rates.
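
    As a quick sanity check of the 6x multiplier, here is an illustrative cost calculation using the rates from the table above (the helper function is hypothetical, for arithmetic only):

```typescript
// Per-million-token rates from the pricing table (USD)
const STANDARD = { input: 5, output: 25 };
const FAST = { input: 30, output: 150 }; // 6x standard

// Cost in USD for a given token usage at the given rates
function cost(
  rates: { input: number; output: number },
  inputTokens: number,
  outputTokens: number,
): number {
  return (
    (inputTokens / 1_000_000) * rates.input +
    (outputTokens / 1_000_000) * rates.output
  );
}

// 200k input + 50k output tokens:
console.log(cost(STANDARD, 200_000, 50_000)); // 2.25 (1 + 1.25)
console.log(cost(FAST, 200_000, 50_000));     // 13.5 (6 + 7.5)
```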

  • AI Gateway: Track top AI models by usage

    The AI Gateway model leaderboard ranks the most used models over time by total token volume across all traffic through the Gateway. The leaderboard updates regularly.

    View the leaderboard

  • Manage Vercel Firewall in the CLI

    You can now manage the Vercel Firewall directly from the CLI.

    Using the vercel firewall command, you can configure custom rules, IP blocks, system bypasses, attack mode, and system mitigations.

    vercel firewall rules add --ai "Rate limit /api to 100 requests per minute by IP"
    vercel firewall ip-blocks block 1.2.3.4
    vercel firewall system-bypass add 10.0.0.1
    vercel firewall attack-mode enable --duration 1h
    vercel firewall system-mitigations pause

    Manage Vercel Firewall functionality from the CLI

    Building on the new CLI commands, the Vercel Firewall skill lets agents interact with the Firewall and includes best practices for rolling out new Firewall rules safely.

    npx skills add vercel/vercel-plugin --skill vercel-firewall

    Update to the latest CLI version and run vercel firewall to get started. Learn more about the Vercel Firewall CLI commands.

  • Node.js 26.x now available on Vercel Sandboxes

    Vercel Sandbox now supports Node.js version 26.

    To run a Sandbox with Node.js 26, upgrade @vercel/sandbox to 1.10.2 or later (or 2.0.0-beta.19 or later if you're using v2), and set the runtime property to node26:

    main.ts
    import { Sandbox } from "@vercel/sandbox";

    const sandbox = await Sandbox.create({ runtime: "node26" });
    const version = await sandbox.runCommand("node", ["-v"]);

    console.log(`Node.js version: ${await version.stdout()}`);

    Get started today and learn more in the documentation.

    Andy Waller

  • Automate progressive rollouts with Vercel Flags

    You can now use Vercel Flags to roll out a feature to a growing percentage of users on a schedule, with progressive rollouts.

    Unlike weighted splits, which hold a fixed distribution (for example, 50/50) for experiments, a progressive rollout follows a predefined schedule that gradually shifts the traffic percentage to the new variant. Each stage has a target percentage and a duration.

    Exposing a change in stages lets you catch a regression on a small slice of users before it hits everyone.
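
    To illustrate how staged percentages work, here is a minimal, generic sketch of evaluating a rollout schedule at a point in time (this is not Vercel's implementation; the type and function names are assumptions):

```typescript
// A progressive rollout schedule: each stage holds a target
// percentage for a fixed duration before advancing to the next.
type Stage = { targetPercent: number; durationMs: number };

// Hypothetical helper: which percentage is live at a given time?
function currentPercent(stages: Stage[], startedAt: number, now: number): number {
  let elapsed = now - startedAt;
  for (const stage of stages) {
    if (elapsed < stage.durationMs) return stage.targetPercent;
    elapsed -= stage.durationMs;
  }
  // Past the final stage: fully rolled out.
  return 100;
}

const schedule: Stage[] = [
  { targetPercent: 1, durationMs: 60 * 60 * 1000 },  // 1% for 1 hour
  { targetPercent: 10, durationMs: 60 * 60 * 1000 }, // 10% for 1 hour
  { targetPercent: 50, durationMs: 60 * 60 * 1000 }, // 50% for 1 hour
];

console.log(currentPercent(schedule, 0, 30 * 60 * 1000)); // 1
```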

    Progressive rollouts are now available in the dashboard and through the new vercel flags rollout CLI command.

    Learn more in the Vercel Flags documentation.

  • Vercel Sandbox firewall now supports request proxying and filtering

    The Vercel Sandbox firewall now supports forwarding specific HTTP requests to a proxy you control. You can also use matchers to filter forwarding and credentials brokering to only the requests that need it.

    Request proxying

    You can now route outbound sandbox traffic through your own proxy for logging, debugging, or transforming requests and responses. Set a forwardURL on any allowed domain, and the firewall will forward matching HTTPS requests to your server.

    The proxy receives the original request along with additional headers to identify the source:

    • vercel-forwarded-host: The original request's SNI

    • vercel-forwarded-scheme: The original request's scheme

    • vercel-forwarded-port: The original request's port

    • vercel-sandbox-oidc-token: A Vercel-issued OIDC token that the proxy can use to authenticate the request and identify the source team, project, or sandbox. Learn more about it in the docs.

    import { Sandbox } from '@vercel/sandbox';

    // Sandbox has access to everything, with a proxy for requests to github.com
    const sandbox = await Sandbox.create({
      networkPolicy: {
        allow: {
          "github.com": [{
            forwardURL: "https://my-custom-proxy.vercel.app/api/proxy"
          }],
          // Allow traffic to all other domains. If unset, only the listed domains are reachable.
          "*": []
        }
      }
    });
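
    On the receiving end, a proxy can inspect the forwarded headers before relaying the request. Here is a minimal sketch using web-standard Request/Response (the handler shape and relay logic are assumptions for illustration, not Vercel's API):

```typescript
// Hypothetical proxy handler. It reads the metadata headers the
// Sandbox firewall attaches, then relays the request to the
// original destination.
export async function handleProxy(request: Request): Promise<Response> {
  const host = request.headers.get("vercel-forwarded-host");
  const scheme = request.headers.get("vercel-forwarded-scheme") ?? "https";
  const port = request.headers.get("vercel-forwarded-port");

  if (!host) {
    return new Response("missing vercel-forwarded-host", { status: 400 });
  }

  // Log (or verify the vercel-sandbox-oidc-token) before forwarding.
  console.log(`sandbox request for ${scheme}://${host}${port ? `:${port}` : ""}`);

  // Rebuild the original destination URL and relay the request.
  const target = new URL(request.url);
  target.protocol = `${scheme}:`;
  target.host = port ? `${host}:${port}` : host;

  return fetch(new Request(target, request));
}
```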

    Filtering

    Additionally, you can now use matchers to filter request forwarding or credentials brokering to requests matching a specific path, method, query string, or headers. This gives you fine-grained control over which requests get transformed; for example, only forwarding POST requests to a specific API path while allowing all other traffic through untouched.

    import { Sandbox } from '@vercel/sandbox';

    // Sandbox has access to everything, with a proxy for POST requests to api.github.com/v1/*
    // Other requests to api.github.com are allowed and not proxied
    const sandbox = await Sandbox.create({
      networkPolicy: {
        allow: {
          "api.github.com": [{
            match: {
              path: { startsWith: "/v1" },
              method: ["POST"]
            },
            forwardURL: "https://my-custom-proxy.vercel.app/api/proxy"
          }],
          // Allow traffic to all other domains. If unset, only the listed domains are reachable.
          "*": []
        }
      }
    });

    These features are available in beta for Pro and Enterprise plans. Get started by installing the @vercel/sandbox@beta SDK, and learn more in the docs about request proxying and matchers.

    Tom Lienard, Valerian Roche, Brandon Tuttle

  • Chat SDK adds Messenger adapter support


    Chat SDK now supports Messenger as a chat adapter.

    Build agents that support messages, reactions, multimedia downloads, postback buttons, and direct conversations, with display names fetched automatically from user profiles.

    lib/bot.ts
    import { Chat } from "chat";
    import { createMessengerAdapter } from "@chat-adapter/messenger";

    const bot = new Chat({
      userName: "mybot",
      adapters: {
        messenger: createMessengerAdapter(),
      },
    });

    bot.onDirectMessage(async (thread, message) => {
      await thread.post(`You said: ${message.text}`);
    });

    Echo each direct message back to the sender

    Read the Chat SDK documentation to get started, browse the supported adapters, or learn how to build your own.

    Special thanks to @mitkodkn, whose community contribution in PR #461 laid the groundwork for this adapter.