Usage & Pricing for Functions
Learn about usage and pricing for Vercel Functions.

Functions using the Node.js runtime are measured in GB-hours: the memory allocated to each Function in GB, multiplied by the time in hours it was running. For example, a function configured with 3 GB of memory that executes for 1 second is billed at 3 GB-seconds, so it takes 1,200 such executions to reach a full GB-Hr.
A function can use up to 50 ms of CPU time per execution unit. If a function uses more than 50 ms, it will be divided into multiple 50 ms units for billing purposes.
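The GB-hour arithmetic above can be sketched with a small helper (illustrative only, not a Vercel API):

```typescript
// Illustrative helper, not a Vercel API: converts executions of a
// Node.js runtime function into GB-hours, the unit duration is billed in.
function gbHours(memoryGb: number, durationSeconds: number, executions: number): number {
  const gbSeconds = memoryGb * durationSeconds * executions;
  return gbSeconds / 3600; // 3,600 GB-seconds per GB-hour
}

// 3 GB x 1 s = 3 GB-s per execution, so 1,200 executions reach one GB-hour:
console.log(gbHours(3, 1, 1200)); // 1
```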
See viewing function usage for more information on how to track your usage.
The following table outlines the price for each resource according to the plan you are on, and the runtime your function is using.
Vercel Functions are available for free with the included usage limits. If you exceed the included usage and are on the Pro plan, you will be charged for the additional usage according to the on-demand costs:
Resource | Hobby Included | Pro Included | Pro Additional |
---|---|---|---|
Function Duration | First 100 GB-Hours | First 1,000 GB-Hours | $0.18 per 1 GB-Hour |
Function Invocations | First 100,000 | First 1,000,000 | $0.60 per 1,000,000 Invocations |
Vercel will email you as you near your usage limits. On the Hobby plan you will not pay for any additional usage; however, your account may be paused if you exceed the limits.
When your Hobby team is set to paused, it remains in this state indefinitely unless you take action. This means all new and existing deployments will be paused.
If you have reached this state, your application is likely a good candidate for a Pro account.
To unpause your account, you have two main options:
- Contact Support: You can reach out to our support team to discuss the reason for the pause and potential resolutions
- Transfer to a Pro team:
If your Hobby team is paused, you won't have the option to initiate a Pro trial. Instead, you can set up a Pro team:
- Create a Pro team account
- Add a valid credit card to this account. Select the Settings tab, then select Billing and Payment Method
Once set up, a transfer modal will appear, prompting you to transfer your previous Hobby projects to this new team. After transferring, you can continue with your projects as usual.
For teams on a Pro trial, the trial will end when your team reaches the trial limits.
Once your team exceeds the included usage, you will continue to be charged the on-demand costs going forward.
Pro teams can set up Spend Management to get notified or to automatically take action, such as using a webhook or pausing your projects when your usage hits a set spend amount.
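As a sketch, a Spend Management webhook could be received by a function like the following; the payload fields shown are hypothetical and not Vercel's documented schema:

```typescript
// Hypothetical Spend Management webhook receiver (sketch).
// The payload shape below is assumed for illustration; check the
// Vercel docs for the actual schema before relying on any field.
export async function POST(request: Request): Promise<Response> {
  const event = await request.json(); // e.g. { spendAmount: 95, budgetAmount: 100 } (assumed fields)
  console.log("Spend alert received:", JSON.stringify(event));
  // Take action here, e.g. notify your team or pause non-critical projects.
  return new Response("ok", { status: 200 });
}
```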
Enterprise agreements provide custom usage and pricing for Vercel Functions, including:
- Custom execution units
- Increased maximum duration up to 900 seconds
- Multi-region deployments
- Vercel Function failover
See Vercel Enterprise plans for more information.
Usage metrics can be found in the Usage tab on your dashboard. Functions are invoked for every request that is served.
You can see the usage for functions using the Node.js runtime on the Serverless Functions section of the Usage tab.
Metric | Description | Priced | Optimize |
---|---|---|---|
Function Invocations | The number of times your Functions have been invoked | Yes | Learn More |
Function Duration | The time your Vercel Functions have spent responding to requests | Yes | Learn More |
Throttling | The number of instances where Functions did not execute due to concurrency limits being reached | No | N/A |
You are charged based on the number of times your functions are invoked, including both successful and errored invocations, excluding cache hits. The number of invocations is calculated by the number of times your function is called, regardless of the response status code.
When using Incremental Static Regeneration with Next.js, both the `revalidate` option for `getStaticProps` and `fallback` for `getStaticPaths` will result in a Function invocation on revalidation, not for every user request.
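A minimal Next.js Pages Router sketch of both options (the 60-second interval is an arbitrary illustrative value):

```typescript
// With ISR, this runs as a Vercel Function only when the page
// revalidates (at most once per 60 s here), not on every request.
export async function getStaticProps() {
  return {
    props: { generatedAt: new Date().toISOString() },
    revalidate: 60, // seconds between background regenerations
  };
}

// For dynamic routes: paths not rendered at build time are generated
// on demand by a Function invocation, then cached.
export async function getStaticPaths() {
  return { paths: [], fallback: "blocking" };
}
```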
When viewing your Functions Invocations graph, you can group by Ratio to see a total of all invocations across your team's projects that finished successfully, errored, or timed out.
Executing a Serverless Function will increase Edge Request usage as well. Caching your Serverless Function reduces the GB-hours of your functions but does not reduce the Edge Request usage that comes with executing it.
- Use the Projects option to identify which projects have the most invocations and where you can optimize.
- Cache your responses using edge caching and Cache-Control headers to reduce the number of invocations and speed up responses for users.
- See How can I reduce my Serverless Execution usage on Vercel? for more general information on how to reduce your Serverless Functions usage.
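As a sketch, a Vercel Function using the Node.js runtime can set a `Cache-Control` header so repeat requests are served from the edge cache instead of invoking the function again (the directive values here are illustrative):

```typescript
// Cached API route (sketch): s-maxage lets the edge cache serve this
// response for 60 s; stale-while-revalidate serves a stale copy while
// the function regenerates it in the background.
export function GET(_request: Request): Response {
  return new Response(JSON.stringify({ message: "hello" }), {
    headers: {
      "Content-Type": "application/json",
      "Cache-Control": "s-maxage=60, stale-while-revalidate=300",
    },
  });
}
```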
You are charged based on the duration your Serverless Functions have run. This is sometimes called "wall-clock time", which refers to the actual time elapsed during a process, as you would measure it on a wall clock. It includes all time from the start to the finish of the process, whether that time was spent actively processing or waiting for a streamed response. Function Duration is calculated in GB-Hours: the memory allocated to each Function in GB, multiplied by the time in hours it was running.
For example, if a function has 1.7 GB (1769 MB) of memory and is executed 1 million times at a 1-second duration:
- Total Seconds: 1M * (1s) = 1,000,000 Seconds
- Total GB-Seconds: 1769/1024 GB * 1,000,000 Seconds = 1,727,539.06 GB-Seconds
- Total GB-Hrs: 1,727,539.06 GB-Seconds / 3600 = 479.87 GB-Hrs
- The total Serverless Function Execution is 479.87 GB-Hrs.
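The arithmetic above can be reproduced directly:

```typescript
// Worked example: 1,769 MB of memory, 1 million invocations,
// 1-second average duration.
const memoryGb = 1769 / 1024;              // ≈ 1.7275 GB
const totalSeconds = 1_000_000 * 1;        // 1,000,000 s
const gbSeconds = memoryGb * totalSeconds; // ≈ 1,727,539.06 GB-s
const gbHrs = gbSeconds / 3600;            // ≈ 479.87 GB-Hrs
console.log(gbHrs.toFixed(2)); // "479.87"
```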
To see your current usage, navigate to the Usage tab on your team's Dashboard and go to Serverless Functions > Duration. You can use the Ratio option to see the total amount of execution time across all projects within your team, including the completions, errors, and timeouts.
- Use the Projects option to identify which projects have the most execution time and where you can optimize.
- Enable fluid compute to increase function durations, reduce cold starts, and improve performance for I/O-bound workloads. See how to enable fluid compute for more information.
- You can also adjust the maximum duration for your functions to prevent excessive run times.
- To reduce the GB-hours (Execution) of your functions, ensure you are using edge caching and Cache-Control headers. If using Incremental Static Regeneration, note that Vercel counts Function invocations on page revalidation towards both GB-hours and Fast Origin Transfer.
- For troubleshooting issues causing functions to run longer than expected or timeout, see What can I do about Vercel Serverless Functions timing out?.
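For example, in a Next.js App Router route handler, the maximum duration can be set with the `maxDuration` route segment config (30 seconds is an arbitrary illustrative value):

```typescript
// Cap this route's Vercel Function at 30 seconds so a hung upstream
// call fails fast instead of running to the plan's maximum duration.
export const maxDuration = 30;

export function GET(): Response {
  return new Response("ok");
}
```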
This counts the number of times that a request to your Functions could not be served because the concurrency limit was hit.
While this is not a chargeable metric, it will cause a `503: FUNCTION_THROTTLED` error. To learn more, see What should I do if I receive a 503 error on Vercel?.