Observability Insights
Vercel organizes Observability into sections, each corresponding to a feature or traffic source that you can view, monitor, and filter.
The Vercel Functions tab provides a detailed view of the performance of your Vercel Functions. You can see the number of invocations and the error rate of your functions. You can also see the performance of your functions broken down by route.
For more information, see Vercel Functions. To learn how to optimize your functions, see Understand the cost impact of function invocations.
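As a point of reference, a minimal route handler like the sketch below (a Next.js App Router example; the `/api/hello` path is illustrative) is what these metrics track: each request counts as an invocation, and thrown errors or 5xx responses feed the error rate.

```ts
// app/api/hello/route.ts — a minimal Vercel Function.
// Each request to /api/hello counts as one invocation in Observability,
// and a thrown error or 5xx response counts toward the route's error rate.
export async function GET() {
  return Response.json({ message: 'Hello from a Vercel Function' });
}
```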
The Edge Functions tab provides a detailed view of the performance of your Edge Functions. You can see the number of invocations and the error rate of your functions. You can also see the performance of your functions broken down by route.
For more information, see Vercel Functions.
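For example, opting a route into the Edge runtime (a hedged Next.js sketch; the route path and header usage are illustrative) moves its invocations from the Vercel Functions tab to this one:

```ts
// app/api/geo/route.ts — declaring the Edge runtime means this route's
// invocations and error rate surface under the Edge Functions tab.
export const runtime = 'edge';

export async function GET(request: Request) {
  // Vercel populates geo headers such as x-vercel-ip-country on requests.
  const country = request.headers.get('x-vercel-ip-country') ?? 'unknown';
  return Response.json({ country });
}
```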
You can use the External APIs tab to understand requests from your functions to external APIs. You can sort by number of requests, p75 latency, and error rate to help you identify potential causes of slow upstream responses or timeouts.
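Concretely, any outbound `fetch` from a function is attributed to its upstream host. In a sketch like the one below (hypothetical; `api.example.com` is a placeholder), a slow or failing upstream would show up as elevated p75 latency or error rate for that host:

```ts
// app/api/weather/route.ts — the outbound fetch below is what the
// External APIs tab measures: request count, p75 latency, and error
// rate, grouped by upstream host.
export async function GET() {
  const upstream = await fetch('https://api.example.com/weather');
  if (!upstream.ok) {
    // Failed upstream responses contribute to the external error rate.
    return new Response('Upstream error', { status: 502 });
  }
  return Response.json(await upstream.json());
}
```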
The Middleware observability tab shows invocation counts and performance metrics for your application's middleware; a short middleware sketch follows the list below.
Observability Plus users receive additional insights and tooling:
- Analyze invocations by request path, matched against your middleware config
- Break down middleware actions by type (e.g., redirect, rewrite)
- View rewrite targets and frequency
- Query middleware invocations using the query builder
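The sketch below (a hedged example; the paths and cookie name are illustrative) shows how these breakdowns map onto a typical middleware file: the `matcher` config drives path attribution, and each returned action is counted by type:

```ts
// middleware.ts — invocations are attributed against the matcher config,
// and each action (redirect, rewrite, next) is counted by type.
import { NextResponse } from 'next/server';
import type { NextRequest } from 'next/server';

export const config = {
  // Only requests matching these patterns invoke the middleware.
  matcher: ['/dashboard/:path*', '/old-blog/:path*'],
};

export function middleware(request: NextRequest) {
  // Counted under the "redirect" action type.
  if (request.nextUrl.pathname.startsWith('/old-blog')) {
    return NextResponse.redirect(new URL('/blog', request.url));
  }
  // Counted under the "rewrite" action type; the target URL feeds the
  // rewrite-target and frequency view.
  if (!request.cookies.has('session')) {
    return NextResponse.rewrite(new URL('/login', request.url));
  }
  return NextResponse.next();
}
```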
You can use the Edge Requests tab to understand the requests made to your static and dynamic routes through the edge network. This includes the number of requests, the regions they were served from, and how many requests were cached for each route.
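One common way to raise the cached share for a route is to mark responses as edge-cacheable, as in this hedged sketch (the route and durations are illustrative):

```ts
// app/api/products/route.ts — the s-maxage directive below lets
// Vercel's edge network cache this response, which raises the
// cached share shown per route in the Edge Requests tab.
export async function GET() {
  return Response.json(
    { products: [] },
    {
      headers: {
        // Cache at the edge for 60s, then serve stale while revalidating.
        'Cache-Control': 's-maxage=60, stale-while-revalidate=300',
      },
    }
  );
}
```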
The Edge Requests tab also provides detailed breakdowns for individual bots and bot categories, including AI crawlers and search engines.
Additionally, Observability Plus users can:
- Filter traffic by bot category, such as AI
- View metrics for individual bots
- Break down traffic by bot or category in the query builder
You can use the Fast Data Transfer tab to understand how data is being transferred within the edge network for your project.
For more information, see Fast Data Transfer.
The Image Optimization tab provides deeper insights into image transformations and efficiency.
It contains:
- Transformation insights: View formats, quality settings, and width adjustments
- Optimization analysis: Identify high-frequency transformations to help inform caching strategies
- Bandwidth savings: Compare transformed images against their original sources to measure bandwidth reduction and efficiency
- Image-specific views: See all referrers and unique variants of an optimized image in one place
For more information, see Image Optimization.
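For context, the transformations this tab reports are driven by how images are requested, for example via `next/image` (a hedged sketch; the source path and values are illustrative):

```tsx
// A next/image sketch: the width and quality values below determine the
// transformations (format, quality, width) reported per source image,
// and the original /hero.jpg is the baseline for bandwidth savings.
import Image from 'next/image';

export default function Hero() {
  return (
    <Image
      src="/hero.jpg"
      alt="Hero banner"
      width={1200}
      height={630}
      quality={75}
    />
  );
}
```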
You can use the ISR tab to understand your revalidations and cache hit ratio, helping you optimize toward serving cached responses by default.
For more information on ISR, see Incremental Static Regeneration.
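As a minimal illustration (a Next.js App Router sketch; the revalidation window and data source are placeholders), the `revalidate` export below is what produces the revalidations and cache hits the tab reports:

```tsx
// app/blog/page.tsx — with revalidate set, requests within the window
// are cache hits; the first request after it expires triggers a
// revalidation, both of which the ISR tab tracks.
export const revalidate = 3600; // seconds

export default async function Blog() {
  // cms.example.com is a placeholder data source.
  const posts: { title: string }[] = await fetch(
    'https://cms.example.com/posts'
  ).then((res) => res.json());
  return (
    <ul>
      {posts.map((post) => (
        <li key={post.title}>{post.title}</li>
      ))}
    </ul>
  );
}
```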
Use the Vercel Blob tab to gain visibility into how Blob stores are used across your applications. It allows you to understand usage patterns, identify inefficiencies, and optimize how your application stores and serves assets.
At the team level, you can access:
- Total data transfer
- Download volume
- Cache activity
- API operations
You can also drill into activity by user agent, edge region, and client IP.
Learn more about Vercel Blob.
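For reference, a minimal upload route like the sketch below (hypothetical; the pathname is illustrative) generates the kinds of activity these metrics cover:

```ts
// app/api/upload/route.ts — a minimal @vercel/blob sketch.
import { put } from '@vercel/blob';

export async function POST(request: Request) {
  const body = await request.blob();
  // put() is an API operation; later downloads of blob.url contribute
  // to download volume, data transfer, and cache activity.
  const blob = await put('uploads/avatar.png', body, { access: 'public' });
  return Response.json({ url: blob.url });
}
```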
You can use the Build Diagnostics tab to view the performance of your builds. You can see the build time and resource usage for each of your builds. In addition, you can see the build time broken down by each step in the build and deploy process.
To learn more, see Builds.
With the AI Gateway (currently in alpha for all users), you can switch between ~100 AI models without needing to manage API keys, rate limits, or provider accounts.
The AI tab surfaces metrics related to the AI Gateway, and provides visibility into:
- Requests by model
- Time to first token (TTFT)
- Request duration
- Input/output token count
- Cost per request (free while in alpha)
You can view these metrics across all projects or drill into per-project and per-model usage to understand which models are performing well, how they compare on latency, and what each request would cost in production.
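As a hedged sketch of what generates these metrics (assuming the AI SDK's gateway support, where a `'provider/model'` string is routed through the gateway; check the AI Gateway docs for the exact current API):

```ts
// Each call through the gateway produces the per-model request count,
// TTFT, duration, token, and cost metrics described above.
import { generateText } from 'ai';

const { text, usage } = await generateText({
  // Swapping this string switches providers/models without new API keys.
  model: 'openai/gpt-4o',
  prompt: 'Summarize the benefits of edge caching in one sentence.',
});

console.log(text);
console.log(usage); // input/output token counts
```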
For more information, see the AI Gateway announcement.