Anatomy of the Checks API
The Checks API extends the build and deploy process once your deployment is ready. Each check behaves like a webhook that is triggered by specific events, such as deployment.created, deployment.ready, and deployment.succeeded. The tests are verified before domains are assigned.
To learn more, see the Supported Webhooks Events docs.
The workflow for registering and running a check is as follows:
- A check is created after the deployment.created event
- When the deployment.ready event triggers, the check updates its status to running
- When the check is finished, the status updates to completed
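As a rough sketch of this workflow, an integration's webhook handler might look like the following. The endpoint paths, response shape, check name, and environment variable are assumptions for illustration, not copied from the API reference:

```typescript
// Sketch: register a check on deployment.created and start it on deployment.ready.
const VERCEL_API = "https://api.vercel.com";
const TOKEN = process.env.VERCEL_ACCESS_TOKEN; // placeholder access token
const checkIds = new Map<string, string>();    // deploymentId -> checkId

interface DeploymentEvent {
  type: string;
  payload: { deployment: { id: string } }; // assumed payload shape
}

export async function handleWebhook(event: DeploymentEvent): Promise<void> {
  const deploymentId = event.payload.deployment.id;
  const headers = {
    Authorization: `Bearer ${TOKEN}`,
    "Content-Type": "application/json",
  };

  if (event.type === "deployment.created") {
    // Register the check as soon as the deployment exists.
    const res = await fetch(`${VERCEL_API}/v1/deployments/${deploymentId}/checks`, {
      method: "POST",
      headers,
      body: JSON.stringify({ name: "Performance check", blocking: true }),
    });
    const { id } = (await res.json()) as { id: string }; // assumed response shape
    checkIds.set(deploymentId, id);
  }

  if (event.type === "deployment.ready") {
    // The deployment is ready, so move the check's status to "running".
    const checkId = checkIds.get(deploymentId);
    if (!checkId) return;
    await fetch(`${VERCEL_API}/v1/deployments/${deploymentId}/checks/${checkId}`, {
      method: "PATCH",
      headers,
      body: JSON.stringify({ status: "running" }),
    });
  }
}
```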
If a check is "rerequestable", your integration users get an option to rerequest and rerun the failing checks.
Depending on the type, checks can block the domain assignment stage of deployments.
- Blocking Checks: Prevent a successful deployment and return a conclusion with a state value of canceled or failed. For example, a Core Check returning a 404 error results in a failed conclusion for a deployment
- Non-blocking Checks: Return test results with a successful deployment regardless of the conclusion
A blocking check with a failed state is configured by the developer (and not the integration).
Checks are always associated with a specific deployment that is tested and validated.
| Attributes | Format | Purpose |
|---|---|---|
| blocking | Boolean | Tells Vercel if this check needs to block the deployment |
| name | String | Name of the check |
| detailsUrl | String (optional) | URL to display in the Vercel dashboard |
| externalId | String (optional) | ID used for external use |
| path | String (optional) | Path of the page that is being checked |
| rerequestable | Boolean (optional) | Tells Vercel if the check can rerun. Users can trigger a deployment.check-rerequested webhook through a button on the deployment page |
| conclusion | String (optional) | The result of a running check. For blocking checks the values can be canceled, failed, neutral, succeeded, or skipped. canceled and failed cause the deployment to fail |
| status | String (optional) | Tells Vercel the status of the check, with values running and completed |
| output | Object (optional) | Details about the result of the check. Vercel uses this data to display actionable information for developers, which helps them debug failed checks |
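For reference, these attributes can be summarized as a type. The interface below simply mirrors the table; the name is illustrative, not an official SDK type:

```typescript
// Illustrative shape of a check's attributes, mirroring the table above.
interface CheckAttributes {
  name: string;                     // Name of the check
  blocking: boolean;                // Whether the check blocks domain assignment
  detailsUrl?: string;              // URL shown in the Vercel dashboard
  externalId?: string;              // ID used for external use
  path?: string;                    // Path of the page being checked
  rerequestable?: boolean;          // Whether users may rerequest the check
  status?: "running" | "completed";
  conclusion?: "canceled" | "failed" | "neutral" | "succeeded" | "skipped";
  output?: Record<string, unknown>; // Details about the result, e.g. metrics
}
```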
A check becomes stale if its status is registered and it receives no status update for more than one hour, or if it stays running for more than five minutes.
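One way to stay within these limits is sketched below; the helper signatures are illustrative and the time budget mirrors the limits just described:

```typescript
// Sketch: respect the staleness limits when driving a check.
const RUNNING_BUDGET_MS = 5 * 60 * 1000; // running checks go stale after five minutes

async function runWithBudget(
  updateCheck: (fields: Record<string, unknown>) => Promise<void>, // illustrative updater
  runTests: () => Promise<"succeeded" | "failed">                  // illustrative test runner
): Promise<void> {
  // Move to "running" promptly: a registered check goes stale after one hour
  // without a status update.
  await updateCheck({ status: "running" });

  // Conclude within the five-minute running budget, cancelling on timeout.
  let timer: ReturnType<typeof setTimeout> | undefined;
  const timeout = new Promise<"canceled">((resolve) => {
    timer = setTimeout(() => resolve("canceled"), RUNNING_BUDGET_MS);
  });
  const conclusion = await Promise.race([runTests(), timeout]);
  if (timer) clearTimeout(timer);

  await updateCheck({ status: "completed", conclusion });
}
```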
| Response | Format | Purpose |
|---|---|---|
| status | String | The status of the check. It expects specific values like running or completed |
| state | String | Tells the current state of the connection |
| connectedAt | Number | Timestamp (in milliseconds) of when the configuration was connected |
| type | String | Name of the integrator performing the check |
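Put together, these fields describe a shape like the following; it is inferred from the table above rather than copied from an actual API response:

```typescript
// Illustrative response shape inferred from the table above.
interface CheckResponse {
  status: "running" | "completed"; // status of the check
  state: string;                   // current state of the connection
  connectedAt: number;             // millisecond timestamp of when the configuration was connected
  type: string;                    // name of the integrator performing the check
}
```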
| Status | Outcome |
|---|---|
| 200 | Success |
| 400 | One of the provided values in the request body is invalid, OR one of the provided values in the request query is invalid |
| 403 | The provided token is not from an OAuth2 client, OR you do not have permission to access this resource, OR the API token doesn't have permission to perform the request |
| 404 | The check was not found, OR the deployment was not found |
| 413 | The output provided is too large |
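A small sketch of handling these status codes when calling a checks endpoint; the URL, token variable, and error messages are placeholders based on the table above:

```typescript
// Sketch: map the documented HTTP status codes to actionable errors.
async function createCheck(deploymentId: string, body: unknown): Promise<unknown> {
  const res = await fetch(
    `https://api.vercel.com/v1/deployments/${deploymentId}/checks`,
    {
      method: "POST",
      headers: {
        Authorization: `Bearer ${process.env.VERCEL_ACCESS_TOKEN}`, // placeholder token
        "Content-Type": "application/json",
      },
      body: JSON.stringify(body),
    }
  );

  switch (res.status) {
    case 200:
      return res.json(); // success
    case 400:
      throw new Error("A value in the request body or query is invalid");
    case 403:
      throw new Error("Token is not from an OAuth2 client or lacks permission");
    case 404:
      throw new Error("Check or deployment not found");
    case 413:
      throw new Error("The provided output is too large");
    default:
      throw new Error(`Unexpected status ${res.status}`);
  }
}
```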
The output property can store any data like Web Vitals and Virtual Experience Score. It is defined under a metrics field:
| Key | Type | Description |
|---|---|---|
| TBT | Map | The Total Blocking Time, measured by the check |
| LCP | Map | The Largest Contentful Paint, measured by the check |
| FCP | Map | The First Contentful Paint, measured by the check |
| CLS | Map | The Cumulative Layout Shift, measured by the check |
| virtualExperienceScore | Map | The overall Virtual Experience Score, measured by the check |
Each of these keys has the following properties:
| Key | Type | Description |
|---|---|---|
| value | Float | The value measured for a particular metric, in milliseconds. For virtualExperienceScore this value is a score between 0 and 1 |
| previousValue | Float | A previous value for comparison purposes |
| source | Enum | web-vitals |
metrics makes Web Vitals visible on checks. It is defined inside output as follows:
```json
{
  "path": "/",
  "output": {
    "metrics": {
      "FCP": {
        "value": 1200,
        "previousValue": 1400,
        "source": "web-vitals"
      },
      "LCP": {
        "value": 1200,
        "previousValue": 1400,
        "source": "web-vitals"
      },
      "CLS": {
        "value": 1200,
        "previousValue": 1400,
        "source": "web-vitals"
      },
      "TBT": {
        "value": 1200,
        "previousValue": 1400,
        "source": "web-vitals"
      }
    }
  }
}
```
All fields are required except previousValue. If previousValue is present, the delta will be shown.
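As a sketch, reporting these metrics is a matter of including them when the check is updated. The PATCH endpoint, token variable, and metric values below are assumptions and placeholders:

```typescript
// Sketch: report Web Vitals on a check by updating its output.metrics.
async function reportMetrics(deploymentId: string, checkId: string): Promise<void> {
  await fetch(
    `https://api.vercel.com/v1/deployments/${deploymentId}/checks/${checkId}`,
    {
      method: "PATCH",
      headers: {
        Authorization: `Bearer ${process.env.VERCEL_ACCESS_TOKEN}`, // placeholder token
        "Content-Type": "application/json",
      },
      body: JSON.stringify({
        path: "/",
        status: "completed",
        conclusion: "succeeded",
        output: {
          metrics: {
            // previousValue is optional; when present, the delta is shown.
            FCP: { value: 1200, previousValue: 1400, source: "web-vitals" },
            LCP: { value: 1200, previousValue: 1400, source: "web-vitals" },
            CLS: { value: 1200, previousValue: 1400, source: "web-vitals" },
            TBT: { value: 1200, previousValue: 1400, source: "web-vitals" },
          },
        },
      }),
    }
  );
}
```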
A check can be "rerequested" using the deployment.check-rerequested webhook. Set the rerequestable attribute, and users can rerequest failed checks.
A rerequested check triggers the deployment.check-rerequested webhook. It updates the check status to running and resets the conclusion, detailsUrl, externalId, and output fields.
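A handler for that webhook might look like the sketch below; the payload shape, endpoint path, and placeholder test result are assumptions:

```typescript
// Sketch: finish a rerequested check after Vercel resets it to "running".
async function onCheckRerequested(event: {
  type: "deployment.check-rerequested";
  payload: { deployment: { id: string }; check: { id: string } }; // assumed shape
}): Promise<void> {
  const deploymentId = event.payload.deployment.id;
  const checkId = event.payload.check.id;

  // Re-run the integration's tests here, then report a fresh result.
  const conclusion: "succeeded" | "failed" = "succeeded"; // placeholder result

  await fetch(
    `https://api.vercel.com/v1/deployments/${deploymentId}/checks/${checkId}`,
    {
      method: "PATCH",
      headers: {
        Authorization: `Bearer ${process.env.VERCEL_ACCESS_TOKEN}`, // placeholder token
        "Content-Type": "application/json",
      },
      body: JSON.stringify({ status: "completed", conclusion }),
    }
  );
}
```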
You can "Skip" to stop and ignore check results without affecting the alias assignment. You cannot skip active checks. They continue running until built successfully, and assign domains as the last step.
For "Running Checks", only the Automatic Deployment URL is available. Automatic Branch URL and Custom Domains will apply once the checks finish.
Checks may take different amounts of time to run. Each integrator determines the running order of its checks, while the Vercel REST API determines the order of the check results.
When the Checks API begins running on your deployment, the status is set to running. Once a check gets a conclusion, the status updates to completed. This results in a successful deployment.
However, your deployment will fail if the conclusion updates to one of the following values:
| Conclusion | Fails the deployment when blocking=true |
|---|---|
| canceled | Yes |
| failed | Yes |
| neutral | No |
| succeeded | No |
| skipped | No |
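The table boils down to a simple rule, sketched here with an illustrative function name:

```typescript
// Sketch: whether a completed check blocks domain assignment, per the table above.
type Conclusion = "canceled" | "failed" | "neutral" | "succeeded" | "skipped";

function failsDeployment(blocking: boolean, conclusion: Conclusion): boolean {
  // Only blocking checks can fail a deployment, and only with these conclusions.
  return blocking && (conclusion === "canceled" || conclusion === "failed");
}
```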