Vercel Functions can time out from long-running tasks, inefficient code, or upstream service issues.
The best way to avoid hitting these limits is to take advantage of Fluid Compute, which blends serverless flexibility with server-like capabilities, including maximum function durations of 1 minute on free plans and 800 seconds (roughly 13 minutes) on paid plans.
Fluid Compute offers a hybrid solution that removes many of the traditional limitations of serverless functions:
- Optimized concurrency: Multiple function invocations can run on a single instance, reducing cold starts and improving resource utilization.
- Extended durations: Enjoy longer maximum durations (up to 800 seconds on Pro and Enterprise) without additional configurations.
- Dynamic scaling & automatic cold start optimizations: Scale seamlessly during traffic spikes and benefit from bytecode caching to reduce cold starts.
- Background processing: Use `waitUntil` (or `after` with Next.js) to continue tasks after sending an initial response.
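To make the background-processing point concrete, here is a minimal, self-contained sketch of the `waitUntil` pattern. On Vercel you would import `waitUntil` from `@vercel/functions`; the stub below (along with `handler` and `slowLog`) is hypothetical and exists only so the example runs on its own.

```typescript
// Sketch: waitUntil keeps the function instance alive until the given
// promise settles, so work can continue after the response is sent.
// This stub stands in for the real import from '@vercel/functions'.
const pending: Promise<unknown>[] = [];
function waitUntil(p: Promise<unknown>): void {
  pending.push(p);
}

// Hypothetical request handler: respond immediately, log in the background.
async function handler(): Promise<string> {
  waitUntil(slowLog("request received")); // continues after the response
  return "OK"; // the caller gets this right away
}

// Simulated slow background task (e.g., analytics or audit logging).
async function slowLog(msg: string): Promise<void> {
  await new Promise((resolve) => setTimeout(resolve, 10));
  console.log(msg);
}
```

The key design point: the response is not delayed by the logging call, but the platform still waits for the registered promise before tearing the instance down.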
If you frequently encounter timeouts or want to run network-intensive tasks, enabling Fluid Compute is your most effective long-term solution.
- Open your project in the Vercel dashboard.
- Click Settings and select the Functions section.
- Scroll to the Fluid Compute section and enable the toggle.
- Redeploy your project to apply the change.
Fluid Compute is currently supported by the Node.js and Python runtimes.
- By handling multiple invocations within a single function instance, you spend less time waiting for cold starts and better utilize idle compute resources.
- This is especially helpful for I/O-bound workloads (e.g., calling external APIs or databases).
- Fluid Compute has higher default and maximum duration limits than traditional serverless functions, making it less likely you’ll hit a timeout for long-running tasks.
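The benefit for I/O-bound work comes from overlapping waits. The sketch below is not Vercel-specific; it simply simulates ten I/O-bound calls (a hypothetical `fakeApiCall`) to show that when waits overlap, total wall-clock time is close to one call's latency rather than the sum of all ten.

```typescript
// Simulated I/O-bound call: ~50 ms spent waiting, not computing.
async function fakeApiCall(): Promise<number> {
  await new Promise((resolve) => setTimeout(resolve, 50));
  return 1;
}

// Run ten calls concurrently; their waits overlap instead of queuing.
async function main(): Promise<number> {
  const start = Date.now();
  const results = await Promise.all(
    Array.from({ length: 10 }, () => fakeApiCall()),
  );
  const elapsed = Date.now() - start;
  console.log(`${results.length} calls in ~${elapsed} ms`);
  return elapsed;
}
```

This mirrors what optimized concurrency does at the instance level: while one invocation is blocked on an external API or database, the same compute can serve others.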
If you’re still encountering timeouts even after enabling Fluid Compute, review the following common causes:
- Check API or database call durations: If these calls exceed your default or configured duration, you’ll see timeouts.
- Extend your function’s duration if needed: You can override the Fluid Compute defaults for longer-running functions. For AI applications, we recommend streaming responses instead of waiting for the full result.
- Always send an HTTP response (even an error). If your function never returns anything, Vercel will wait until the maximum duration has elapsed and then time out.
- Inspect your logic for loops or recursive calls that never terminate; this is a common cause of unintended long-running functions. Review your function’s runtime logs and observability data to pinpoint where time is spent.
- Verify third-party integrations or database connections. If upstream services fail to respond, make sure you handle the error gracefully and return a response (e.g., an error message) rather than waiting indefinitely.
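The checklist above can be sketched in one route handler. This is a hedged example, not the only correct shape: `maxDuration` is the Next.js route segment config for raising a function's limit, `AbortSignal.timeout` bounds the upstream call, and the `catch` branch guarantees a response is always sent. `UPSTREAM_URL` is a placeholder for your own service; the fallback address here is deliberately unreachable for illustration.

```typescript
// app/api/long-task/route.ts (illustrative path)
// Raise this function's maximum duration (in seconds) beyond the default.
export const maxDuration = 300;

// Placeholder upstream; replace with your real API or database endpoint.
const UPSTREAM_URL =
  process.env.UPSTREAM_URL ?? "http://127.0.0.1:9/unreachable";

export async function GET(): Promise<Response> {
  try {
    // Bound the upstream call well below the function's own limit,
    // so a slow dependency can't consume the whole duration budget.
    const upstream = await fetch(UPSTREAM_URL, {
      signal: AbortSignal.timeout(5_000),
    });
    return new Response(await upstream.text(), { status: upstream.status });
  } catch {
    // Upstream failed or timed out: return an error response instead of
    // hanging until the platform's maximum duration elapses.
    return new Response("Upstream service unavailable", { status: 504 });
  }
}
```

Because every code path ends in a `Response`, the function never sits idle until the platform timeout, and failures surface as a clear 504 in your logs rather than a generic timeout.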