
    Featured articles

  • Jul 10

    The AI Cloud: A unified platform for AI workloads

    For over a decade, Vercel has helped teams develop, preview, and ship everything from static sites to full-stack apps. That mission shaped the Frontend Cloud, now relied on by millions of developers and powering some of the largest sites and apps in the world. Now, AI is changing what and how we build. Interfaces are becoming conversations and workflows are becoming autonomous. We've seen this firsthand while building v0 and working with AI teams like Browserbase and Decagon. The pattern is clear: developers need expanded tools, new infrastructure primitives, and even more protections for their intelligent, agent-powered applications. At Vercel Ship, we introduced the AI Cloud: a unified platform that lets teams build AI features and apps with the right tools to stay flexible, move fast, and be secure, all while focusing on their products, not infrastructure.

    Dan Fein
  • May 20

    Introducing the AI Gateway

    The Vercel AI Gateway is now available for alpha testing. Built on the AI SDK 5 alpha, the Gateway lets you switch between ~100 AI models without needing to manage API keys, rate limits, or provider accounts. The Gateway handles authentication, usage tracking, and, in the future, billing. Get started with AI SDK 5 and the Gateway, or continue reading to learn more. Why we’re building the AI Gateway: AI development is fast and only getting faster, with a new state-of-the-art model released almost every week. Frustratingly, this means developers have been locked into a specific provider or model API in their application code. We want to help developers ship fast and keep up with AI progress, without needing 10 different API keys and provider accounts. Prod...

    Walter and Lars
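The Gateway teaser above describes switching between models without per-provider keys; with the AI SDK, a model is addressed by a single `provider/model` string. The helper below is a hypothetical sketch of picking the first preferred model from an allowlist, not part of the Gateway itself:

```typescript
// Hypothetical helper: choose the first preferred model that the
// Gateway's allowlist contains, falling back to a default.
// Model IDs follow the "provider/model" convention used by the AI SDK.
type ModelId = `${string}/${string}`;

function pickModel(
  preferences: ModelId[],
  available: Set<ModelId>,
  fallback: ModelId,
): ModelId {
  for (const id of preferences) {
    if (available.has(id)) return id;
  }
  return fallback;
}

// With AI SDK 5, the chosen ID can then be passed to generateText, e.g.:
//   const { text } = await generateText({ model: pickModel(...), prompt });
```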
  • Jun 25

    Introducing Active CPU pricing for Fluid compute

    Fluid compute exists for a new class of workloads: I/O-bound backends like AI inference, agents, MCP servers, and anything that needs to scale instantly but often sits idle between operations. These workloads do not follow traditional, quick request-response patterns. They’re long-running, unpredictable, and use cloud resources in new ways. Fluid quickly became the default compute model on Vercel, helping teams cut costs by up to 85% through optimizations like in-function concurrency. Today, we’re taking the efficiency and cost savings further with a new pricing model: you pay CPU rates only when your code is actively using CPU.

    Dan and Mariano
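The pricing model above bills CPU time only while code is actively on-CPU; idle wall-clock time accrues only memory charges. A back-of-the-envelope sketch, with hypothetical rates (real rates are on Vercel's pricing page):

```typescript
// Hypothetical rates for illustration only; real Vercel rates differ.
const ACTIVE_CPU_RATE = 0.128; // $ per active CPU-hour (assumed)
const MEM_RATE = 0.0106;       // $ per GB-hour of provisioned memory (assumed)

function estimateCost(
  wallClockHours: number, // total time the instance is alive
  activeCpuHours: number, // time the CPU is actually busy
  memoryGb: number,
): number {
  // Under Active CPU pricing, idle time accrues no CPU charges:
  // only the active CPU-hours are billed at the CPU rate.
  return activeCpuHours * ACTIVE_CPU_RATE + wallClockHours * memoryGb * MEM_RATE;
}
```

For a mostly idle AI agent (10 hours alive, 1 hour actively computing), the CPU portion of the bill covers only that one active hour.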

    Latest news.

  • v0
    Aug 22

    AI-powered prototyping with design systems

    Prototyping with AI should feel fast, collaborative, and on brand. Most AI tools have cracked the "fast" and "collaborative" parts, but can struggle with feeling "on-brand". This disconnect usually stems from a lack of context. For v0 to produce output that looks and feels right, it needs to understand your components. That includes how things should look, how they should behave, how they work together, and all of the other nuances. Most design systems aren’t built to support that kind of reasoning. However, a design system built for AI enables you to generate brand-aware prototypes that look and feel production ready. Let's look at why giving v0 this context creates on-brand prototypes and how you can get started.

    Will Sather
  • General
    Aug 21

    AI Gateway: Production-ready reliability for your AI apps

    Building an AI app can now take just minutes. With developer tools like the AI SDK, teams can build both AI frontends and backends that accept prompts and context, reason with an LLM, call actions, and stream back results. But going to production requires reliability and stability at scale. Teams that connect directly to a single LLM provider for inference create a fragile dependency: if that provider goes down or hits rate limits, so does the app. As AI workloads become mission-critical, the focus shifts from integration to reliability and consistent model access. Fortunately, there's a better way to run. AI Gateway, now generally available, ensures availability when a provider fails, avoiding low rate limits and providing consistent reliability for AI workloads. It's the same system that has powered v0.app for millions of users, now battle-tested, stable, and ready for production for our customers.

    Walter and Harpreet
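The reliability pattern described above, falling over to another provider when one is down or rate-limited, can be sketched as a simple failover loop. This is a hypothetical illustration, not the Gateway's actual implementation:

```typescript
// Try each provider in order; return the first successful result.
// Providers are modeled as async functions so the sketch stays self-contained.
type Provider<T> = () => Promise<T>;

async function withFailover<T>(providers: Provider<T>[]): Promise<T> {
  let lastError: unknown;
  for (const call of providers) {
    try {
      return await call(); // success: stop here
    } catch (err) {
      lastError = err; // e.g. outage or 429 rate limit: try the next provider
    }
  }
  throw lastError; // every provider failed
}
```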
  • Customers
    Aug 20

    Rethinking prototyping, requirements, and project delivery at Code and Theory

    Code and Theory is a digital-first creative and technology agency that blends strategy, design, and engineering. With a team structure split evenly between creatives and engineers, the agency builds systems for global brands like Microsoft, Amazon, and NBC that span media, ecommerce, and enterprise tooling. With their focus on delivering expressive, scalable digital experiences, the team uses v0 to shorten the path from idea to working software.

    Peri Langlois
  • General
    Aug 20

    <script type="text/llms.txt">

    How do you tell an AI agent what it needs to do when it hits a protected page? Most systems rely on external documentation or pre-configured knowledge, but there's a simpler approach. What if the instructions were right there in the HTML response? llms.txt is an emerging standard for making content such as docs available for direct consumption by AIs. We’re proposing a convention to include such content directly in HTML responses as <script type="text/llms.txt">.

    Malte Ubl
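The proposal above embeds agent-readable instructions directly in the HTML response. A minimal sketch of emitting such a block from a server-rendered page; the element shape follows the proposed convention, while the helper itself is hypothetical:

```typescript
// Render an inline llms.txt block for inclusion in an HTML response.
// Agents that understand the convention can read instructions without
// fetching a separate /llms.txt file.
function renderLlmsScript(instructions: string): string {
  // Escape "</script" so the content cannot terminate the element early.
  const safe = instructions.replace(/<\/script/gi, "<\\/script");
  return `<script type="text/llms.txt">\n${safe}\n</script>`;
}
```

Because the `type` attribute is not a recognized script MIME type, browsers ignore the element's contents, so it is invisible to human visitors.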
  • General
    Aug 18

    If agents are building your app, who gets the W-2?

    Autonomous coding agents are not the future. They are already here. Agents can now design, build, test, and deploy an entire full-stack feature from front end to back end without a human touching the keyboard. The reality is that while this technology has advanced quickly, Generally Accepted Accounting Principles (GAAP) have not traditionally focused on the cost of tools used in development. Under current U.S. GAAP, you can capitalize certain third-party software costs if they are a direct cost of creating software during the application development stage. Historically, though, developer tools were treated as overhead because their cost could not be directly tied to capitalizable work. Under GAAP, work that meets the criteria should be capitalized. When agents perform that work, they should be treated no differently than salaried engineers.

    Keith and Werner
  • General
    Aug 13

    The real serverless compute to database connection problem, solved

    There is a long-standing myth that serverless compute inherently requires more connections to traditional databases. The real issue is not the number of connections needed during normal operation, but that some serverless platforms can leak connections when functions are suspended. In this post, we show why this belief is incorrect, explain the actual cause of the problem, and provide a straightforward, simple-to-use solution.

    Malte Ubl
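The leak described above occurs when a function instance is suspended while still holding an open connection. The defensive pattern, releasing the client in the same invocation that acquired it, can be sketched as follows; the pool interface here is a stand-in, not a specific driver's API:

```typescript
// Stand-in pool interface; real drivers (e.g. node-postgres) look similar.
interface Pool<C> {
  acquire(): Promise<C>;
  release(client: C): void;
}

// Acquire per invocation and always release before returning, so a
// suspended function instance never strands an open connection.
async function withConnection<C, T>(
  pool: Pool<C>,
  work: (client: C) => Promise<T>,
): Promise<T> {
  const client = await pool.acquire();
  try {
    return await work(client);
  } finally {
    pool.release(client); // runs even if work() throws
  }
}
```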
  • General
    Aug 13

    The three types of AI bot traffic and how to handle them

    AI bot traffic is growing across the web. We track this in real time, and the data reveals three types of AI-driven crawlers that often work independently but together create a discovery flywheel that many teams disrupt without realizing it. Not all bots are harmful. Crawlers have powered search engines for decades, and we've spent just as long optimizing for them. Now, large language models (LLMs) need training data, and the AI tools built on them need timely, relevant updates. This is the next wave of discoverability, and getting it right from the start can determine whether AI becomes a growth channel or a missed opportunity. Blocking AI crawlers today is like blocking search engines in the early days and then wondering why organic traffic vanishes. As users shift from Googling for web pages to prompting for direct answers and cited sources, the advantage will go to sites that understand each type of bot and choose where access creates value.

    Kevin Corbett
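Distinguishing the three crawler types starts with a triage step: bucket known AI bots by purpose before deciding access. A coarse sketch; the user-agent substrings below are illustrative examples, and a real list would come from your own traffic data:

```typescript
// Buckets for AI-driven crawlers; "other" covers unrecognized agents.
type BotKind = "training" | "search-index" | "user-fetch" | "other";

// Example user-agent substrings per bucket (illustrative, not exhaustive).
const BOT_BUCKETS: Record<Exclude<BotKind, "other">, string[]> = {
  training: ["GPTBot", "CCBot"],
  "search-index": ["OAI-SearchBot", "PerplexityBot"],
  "user-fetch": ["ChatGPT-User"],
};

function classifyBot(userAgent: string): BotKind {
  for (const [kind, needles] of Object.entries(BOT_BUCKETS)) {
    if (needles.some((n) => userAgent.includes(n))) return kind as BotKind;
  }
  return "other";
}
```

With the bucket known, a site can choose per-category policy, e.g. allow user-triggered fetches and search indexing while rate-limiting training crawls.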
  • Customers
    Aug 13

    How Coxwave delivers GenAI value faster with Vercel

    Coxwave helps enterprises build GenAI products that work at scale. With their consulting arm, AX, and their analytics platform, Align, they support some of the world’s most technically sophisticated companies, including Anthropic, Meta, Microsoft, and PwC. Since the company’s founding in 2021, speed has been a defining trait. But speed doesn’t just mean fast models. For Coxwave, it means fast iteration, fast validation, and fast value delivery. To meet that bar, Coxwave reimagined their web app strategy with Next.js and Vercel.

    Peri Langlois
  • Customers
    Aug 12

    Cutting delivery times in half with v0

    Ready.net is a core platform that helps utility companies manage their financing and compliance, and the company works with a wide network of state-level stakeholders. New feature requirements come in fast, often vague, and always critical. With limited design resources supporting three teams, the company needed a way to speed up the loop between ideation, validation, and delivery. That’s where v0 came in.

    Peri Langlois
  • v0
    Aug 11

    v0.dev -> v0.app

    With a single prompt, anyone can go from idea to deployed app with UI, content, backend, and logic included. v0 is now agentic, helping you research, reason, debug, and plan. It can collaborate with you or take on the work end-to-end. From product managers writing specs to recruiters launching job boards, v0 is changing how teams operate.

    Zeb Hermann
  • Customers
    Aug 8

    How Zapier scales product partnerships with v0

    Zapier is the leading AI orchestration platform, helping businesses turn intelligent insights into automated actions across nearly 8,000 apps. As AI tools and agents become more capable, Zapier provides the connective tissue to operationalize them, bridging the gap between decision and execution. Powered by Zapier extends this capability to partners. It enables SaaS and AI companies to embed Zapier’s automation engine directly into their products without needing to build or maintain thousands of integrations in-house. But explaining to partners what that experience could look like in their product was a challenge. Moving quickly with finite resources, the Zapier team used to need a few weeks to design and build a clickable prototype. Now, with v0, the Powered by Zapier team can generate high-fidelity demos in just a few hours. The result: better conversations with partners, faster implementation cycles, and more integrations shipped for end users.

    Peri Langlois
  • General
    Aug 7

    Vercel collaborates with OpenAI for GPT-5 launch

    The GPT-5 family of models, released today, is now available through AI Gateway and in production on v0.dev. Thanks to OpenAI, Vercel has been testing these models over the past few weeks in v0, Next.js, AI SDK, and Vercel Sandbox. From our testing, GPT-5 is noticeably better at frontend design than previous models. It generates polished, balanced UIs with clean, composable code. Internally, we’ve already started using GPT-5 for Vercel's in-dashboard Agent and for v0.dev/gpt-5. GPT-5 shows strong performance in agent-based workflows. Its long-context reasoning and ability to handle multiple tools in parallel have been especially effective in powering Vercel Agent.

    Aparna, Harpreet, and 4 others

