• Support for Elysia


    Elysia, a popular ergonomic TypeScript framework with end-to-end type safety, can now be deployed instantly on Vercel.

When you deploy, Vercel automatically detects that your app is running Elysia and provisions the optimal resources to run it efficiently.

import { Elysia } from "elysia";

const app = new Elysia()
  .get("/", () => `Hello from Elysia, running on Vercel!`);

export default app;
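Elysia's end-to-end type safety carries over to route params and validation. As a brief sketch (the /user/:id route and its schema are illustrative, using Elysia's built-in t helpers):

import { Elysia, t } from "elysia";

const app = new Elysia()
  // Illustrative route: ":id" is validated and coerced to a
  // number before the handler runs.
  .get("/user/:id", ({ params: { id } }) => `User #${id}`, {
    params: t.Object({ id: t.Numeric() }),
  });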

By default, Elysia apps on Vercel run on Node.js. You can opt in to the Bun runtime by adding the bunVersion field below to your vercel.json.

    vercel.json
{
  "$schema": "https://openapi.vercel.sh/vercel.json",
  "bunVersion": "1.x"
}

Backends on Vercel use Fluid compute with Active CPU pricing by default, so you only pay for the time your code is actively using CPU.

    Deploy Elysia on Vercel, or visit the documentation for Elysia or Bun Runtime at Vercel.


    Jeff S, Marcos G, Austin M, Anthony S

  • Bulk redirects are now generally available

    Vercel now supports bulk redirects, allowing up to one million static URL redirects per project.

    This feature adds import options for formats like CSV and JSON, so teams can more easily manage large-scale migrations, fix broken links, handle expired pages, and more.

    To use bulk redirects, set the bulkRedirectsPath field in your vercel.json to a file or folder containing your redirects. These will be automatically imported at build time.

    redirects.csv
    source,destination,statusCode
    /product/old,/product/new,308

    vercel.json
{
  "bulkRedirectsPath": "redirects.csv"
}
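JSON files are accepted as well. As a sketch, assuming the JSON fields mirror the CSV columns above:

redirects.json
[
  {
    "source": "/product/old",
    "destination": "/product/new",
    "statusCode": 308
  }
]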

This feature is available for Pro and Enterprise customers, with the following included allotments and rates for additional capacity:

    • Pro: 1,000 bulk redirects included per project

    • Enterprise: 10,000 bulk redirects included per project

    • Additional capacity: starts at $50/month per 25,000 redirects

    Get started with bulk redirects.

• GPT-5.1 Codex models now available in Vercel AI Gateway

You can now access OpenAI's latest Codex models, GPT-5.1 Codex and GPT-5.1 Codex mini, using Vercel's AI Gateway with no other provider accounts required. These Codex models are optimized for long-running, agentic coding tasks and can maintain context and reasoning over longer sessions without degradation.

    To use these models with the AI SDK, set the model to openai/gpt-5.1-codex or openai/gpt-5.1-codex-mini:

import { streamText } from 'ai';

const result = streamText({
  model: 'openai/gpt-5.1-codex',
  prompt:
    `Create a command-line tool that reads a text file,
    counts word frequencies, and prints the ten most common
    words with counts. Use standard libraries only.`,
});
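Because streamText returns a streaming result, you can consume output as it is generated, for example via the AI SDK's textStream async iterable:

for await (const chunk of result.textStream) {
  // Write each token to stdout as soon as the model emits it.
  process.stdout.write(chunk);
}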

    AI Gateway provides a unified API for calling models, tracking usage and cost, and configuring retries, failover, and performance optimizations for higher-than-provider uptime. It includes built-in observability, Bring Your Own Key support, and intelligent provider routing with automatic retries.

    Learn more about AI Gateway, view the AI Gateway model leaderboard or try it in our model playground.


• GPT-5.1 models now available in Vercel AI Gateway

    You can now access OpenAI's latest models, GPT-5.1 Instant and GPT-5.1 Thinking, using Vercel's AI Gateway with no other provider accounts required.

    • GPT-5.1 Instant offers improved instruction following, adaptive reasoning, and warmer, more conversational responses.

    • GPT-5.1 Thinking builds on GPT-5 Thinking with dynamic performance tuning that prioritizes speed for simple tasks and deeper reasoning for complex ones.

    To use these models with the AI SDK, set the model to openai/gpt-5.1-instant or openai/gpt-5.1-thinking:

import { streamText } from 'ai';

const result = streamText({
  model: 'openai/gpt-5.1-instant',
  prompt: 'What are the benefits of adaptive reasoning?',
});

    AI Gateway provides a unified API for calling models, tracking usage and cost, and configuring retries, failover, and performance optimizations for higher-than-provider uptime. It includes built-in observability, Bring Your Own Key support, and intelligent provider routing with automatic retries.

    Learn more about AI Gateway, view the AI Gateway model leaderboard or try it in our model playground.


  • Rollbar joins the Vercel Marketplace


    Rollbar is now available as a native integration on the Vercel Marketplace, bringing real-time error monitoring and code-first observability directly into your Vercel workflow.

With Rollbar, developers can automatically detect, track, debug, and resolve errors faster across deployments, connecting every issue back to the exact release and commit that introduced it. This helps teams move quickly while staying confident in production.

    In just a few clicks, you can:

    • Manage accounts and billing in one place

    • Connect Rollbar to one or many Vercel projects in minutes

    • Automatically track deployments and tie errors to the specific revision that caused them

    • Keep environments and source maps aligned across Rollbar and Vercel for clean, readable stack traces

    Install Rollbar from the Vercel Marketplace.

    Hedi Zandi

  • Model fallbacks now available in Vercel AI Gateway

Vercel's AI Gateway now supports fallback models for cases where a model fails or is unavailable. In addition to safeguarding against provider-level failures, model fallbacks can help with errors and capability mismatches between models (e.g., multimodal, tool-calling, etc.).

    Fallback models will be tried in the specified order until a request succeeds or no options remain. Any error, such as context limits, unsupported inputs, or provider outages, can trigger a fallback. Requests are billed based on the model that completes successfully.

The example below shows a case where the primary model lacks multimodal capabilities, falling back to models that support them. To use fallbacks, specify them in the models array within providerOptions:

import { streamText } from 'ai';

const result = streamText({
  model: 'openai/gpt-oss-120b', // Primary model
  prompt:
    'Parse the attached PDF for tables and graphs, ' +
    'and return the highest performing categories this year',
  providerOptions: {
    gateway: {
      models: [
        'google/gemini-2.5-pro',
        'anthropic/claude-sonnet-4.5',
        'meta/llama-3.1-8b',
      ], // Fallback models
    },
  },
})

To combine provider routing with model fallbacks, specify both models and a provider preference (order or only) in providerOptions:

import { streamText } from 'ai';

const result = streamText({
  model: 'openai/gpt-5-nano', // Primary model
  prompt:
    'Parse the attached PDF for tables and graphs, ' +
    'and return the highest performing categories this year',
  providerOptions: {
    gateway: {
      order: ['vertex', 'cerebras'], // Provider routing order
      models: [
        'google/gemini-2.5-flash',
        'openai/gpt-oss-120b',
      ], // Fallback models
    },
  },
})
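The only option restricts requests to the listed providers rather than just preferring them. A minimal sketch, assuming only accepts the same provider slugs as order:

import { streamText } from 'ai';

const result = streamText({
  model: 'openai/gpt-oss-120b',
  prompt: 'Summarize this quarter in three bullet points',
  providerOptions: {
    gateway: {
      // Route to these providers only; others are never tried.
      only: ['vertex', 'cerebras'],
    },
  },
})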

AI Gateway also includes built-in observability and Bring Your Own Key support, and offers an OpenAI-compatible API.
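That compatibility means an existing OpenAI SDK client can point at the Gateway. A minimal sketch, assuming the Gateway's OpenAI-compatible base URL of https://ai-gateway.vercel.sh/v1 and an AI_GATEWAY_API_KEY environment variable:

import OpenAI from 'openai';

const client = new OpenAI({
  // Assumed endpoint and key; check the AI Gateway docs for your setup.
  apiKey: process.env.AI_GATEWAY_API_KEY,
  baseURL: 'https://ai-gateway.vercel.sh/v1',
});

const completion = await client.chat.completions.create({
  model: 'openai/gpt-5.1-instant',
  messages: [{ role: 'user', content: 'Hello from the Gateway!' }],
});

console.log(completion.choices[0].message.content);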

  • Support for TanStack Start


Vercel now detects and supports TanStack Start, a full-stack framework powered by TanStack Router for React and Solid.

Create a new TanStack Start app, or add nitro() to the vite.config.ts of an existing application, to deploy your project:

    vite.config.ts
import { tanstackStart } from '@tanstack/react-start/plugin/vite'
import { defineConfig } from 'vite'
import viteReact from '@vitejs/plugin-react'
import { nitro } from 'nitro/vite'

export default defineConfig({
  plugins: [
    tanstackStart(),
    nitro(),
    viteReact(),
  ],
})

    TanStack Start apps on Vercel use Fluid compute with Active CPU pricing by default. This means your TanStack Start app will automatically scale up and down based on traffic, and you only pay for what you use, not for idle function time.

Visit the TanStack Start on Vercel documentation to learn more.

    Austin Merrick, Marcos Grappeggia

  • Vercel now supports post-quantum cryptography

    HTTPS connections to the Vercel network are now secured with post-quantum cryptography.

    Most web encryption today could be broken by future quantum computers. While this threat isn’t immediate, attackers can capture encrypted traffic today and decrypt it later as quantum technology advances.

    Vercel now supports post-quantum encryption during TLS handshakes, protecting applications against these future risks. Modern browsers will automatically use it with no configuration or additional cost required.

    Read more about encryption and how we secure your deployments.