
Video Generation

Last updated February 22, 2026

Video generation requires AI SDK v6 and uses the experimental_generateVideo function. This API is experimental and subject to change in future releases.

AI Gateway supports video generation, letting you create videos from text prompts, images, or video input. You can control resolution, duration, aspect ratio, and audio through a unified API across multiple providers.

To see all supported video models, use the Video filter on the AI Gateway Models page.

Some video models are tagged by capability in their model name. You can also see capability tags on the AI Gateway Models page or via the /v1/models endpoint, which is useful for models that support multiple capabilities:

| Tag | Capability | Description |
| --- | --- | --- |
| t2v | Text-to-video | Generate video from a text prompt |
| i2v | Image-to-video | Animate a static image into a video |
| r2v | Reference-to-video | Generate video featuring characters from reference images or videos |
| - | Video editing | Edit existing videos using text prompts |

For example, klingai/kling-v2.6-t2v is a text-to-video model, and alibaba/wan-v2.6-i2v is an image-to-video model.
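
To inspect these tags programmatically, you can query the models endpoint and filter on the model ID suffix. The sketch below makes some assumptions: that the endpoint lives at https://ai-gateway.vercel.sh/v1/models, that it returns an OpenAI-compatible { data: [...] } shape, and that your key is in the AI_GATEWAY_API_KEY environment variable.

list-models.ts
// Sketch: list image-to-video models by capability tag.
// Assumptions: the models endpoint is at https://ai-gateway.vercel.sh/v1/models,
// returns an OpenAI-compatible { data: [{ id, ... }] } shape, and
// AI_GATEWAY_API_KEY holds your gateway key.
const res = await fetch('https://ai-gateway.vercel.sh/v1/models', {
  headers: { Authorization: `Bearer ${process.env.AI_GATEWAY_API_KEY}` },
});
const { data } = (await res.json()) as { data: { id: string }[] };
 
// Capability tags appear as suffixes in the model ID, e.g. `-t2v`, `-i2v`, `-r2v`.
console.log(data.filter((m) => m.id.endsWith('-i2v')).map((m) => m.id));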

These parameters work across video models, though support for each varies by provider and model.

| Parameter | Type | Description |
| --- | --- | --- |
| prompt | string or { image, text } | Text description of the video. For image-to-video, use the object form with image and text |
| duration | number | Video length in seconds. Supported range varies by model |
| aspectRatio | string | Aspect ratio as {width}:{height} (e.g., '16:9', '9:16') |
| resolution | string | Resolution as {width}x{height} (e.g., '1920x1080', '1280x720') |
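
For example, a text-to-video request that pins duration, aspect ratio, and resolution might look like the following sketch. Whether a given model honors aspectRatio and resolution together is provider-specific, and the model choice here is illustrative.

generate-with-options.ts
import { experimental_generateVideo as generateVideo } from 'ai';
 
// Sketch: request a 6-second vertical clip at 720x1280.
// Supported duration, aspect ratio, and resolution values vary by model.
const { videos } = await generateVideo({
  model: 'klingai/kling-v2.6-t2v',
  prompt: 'A paper boat drifting down a rain-soaked street',
  duration: 6,
  aspectRatio: '9:16',
  resolution: '720x1280',
});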

Video models return results in result.videos. Each video object contains:

  • uint8Array: Raw video data as Uint8Array
  • base64: Base64-encoded video data
save-video.ts
import { experimental_generateVideo as generateVideo } from 'ai';
import fs from 'node:fs';
 
const result = await generateVideo({
  model: 'google/veo-3.1-generate-001',
  prompt: 'A serene mountain landscape at sunset',
  duration: 8, // seconds
});
 
// Write the raw video bytes to disk.
fs.writeFileSync('output.mp4', result.videos[0].uint8Array);
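
For image-to-video models, pass the prompt in object form as described above. A sketch, assuming the image can be supplied as raw bytes read from disk (accepted input types may vary by model and provider):

animate-image.ts
import { experimental_generateVideo as generateVideo } from 'ai';
import fs from 'node:fs';
 
// Sketch: animate a still image with an image-to-video (i2v) model.
// Assumption: `image` accepts raw bytes (Uint8Array); accepted input
// types (bytes, URL, base64) may vary by model and provider.
const result = await generateVideo({
  model: 'alibaba/wan-v2.6-i2v',
  prompt: {
    image: fs.readFileSync('input.png'),
    text: 'The camera slowly pushes in as snow begins to fall',
  },
  duration: 5,
});
 
// The same video is also available base64-encoded.
fs.writeFileSync('animated.mp4', Buffer.from(result.videos[0].base64, 'base64'));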

Video generation can take several minutes. In Node.js, the default fetch implementation (provided by Undici) enforces 5-minute header and body timeouts, which can cause requests to fail before the video finishes generating.

To extend these timeouts, create a custom gateway instance whose fetch uses an Undici Agent with longer timeouts:

lib/gateway.ts
import { createGateway } from 'ai';
import { Agent } from 'undici';
 
// One shared Agent with extended timeouts, rather than a new Agent per request.
const agent = new Agent({
  headersTimeout: 15 * 60 * 1000, // 15 minutes
  bodyTimeout: 15 * 60 * 1000,
});
 
export const gateway = createGateway({
  fetch: (url, init) =>
    fetch(url, {
      ...init,
      dispatcher: agent,
    } as RequestInit),
});

Then use the custom gateway instance:

generate.ts
import { experimental_generateVideo as generateVideo } from 'ai';
import { gateway } from './lib/gateway';
 
const { videos } = await generateVideo({
  model: gateway.video('google/veo-3.1-generate-001'),
  prompt: 'A timelapse of a flower blooming',
  duration: 8,
});

To use plain string model IDs with extended timeouts, set your custom gateway as the global default provider. In a Next.js app, add this to instrumentation.ts:

instrumentation.ts
import { createGateway } from 'ai';
import { Agent } from 'undici';
 
export async function register() {
  // One shared Agent with extended timeouts, rather than a new Agent per request.
  const agent = new Agent({
    headersTimeout: 15 * 60 * 1000, // 15 minutes
    bodyTimeout: 15 * 60 * 1000,
  });
 
  globalThis.AI_SDK_DEFAULT_PROVIDER = createGateway({
    fetch: (url, init) =>
      fetch(url, {
        ...init,
        dispatcher: agent,
      } as RequestInit),
  });
}
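
Once the global provider is registered, plain string model IDs anywhere in the app resolve through it and inherit the extended timeouts. A minimal sketch (the file name is illustrative):

create-video.ts
import { experimental_generateVideo as generateVideo } from 'ai';
 
// The plain string model ID now resolves through the global provider,
// so this request uses the extended Undici timeouts.
const { videos } = await generateVideo({
  model: 'google/veo-3.1-generate-001',
  prompt: 'Waves crashing against a lighthouse at dawn',
  duration: 8,
});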
