LiteLLM
LiteLLM is an open-source library that provides a unified interface to call LLMs. This guide demonstrates how to integrate Vercel AI Gateway with LiteLLM to access various AI models and providers.
First, create a new directory for your project:
```bash
mkdir litellm-ai-gateway
cd litellm-ai-gateway
```
Install the required LiteLLM Python package:
```bash
pip install litellm python-dotenv
```
Create a `.env` file with your Vercel AI Gateway API key:

```
VERCEL_AI_GATEWAY_API_KEY=your-api-key-here
```
If you're using the AI Gateway from within a Vercel deployment, you can also use the `VERCEL_OIDC_TOKEN` environment variable, which will be provided automatically.
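LiteLLM reads these variables from the environment, but you can also resolve the key yourself and pass it explicitly through the `api_key` parameter of `litellm.completion`. A minimal sketch; the fallback order below is an illustration, not part of this guide:

```python
import os

import litellm

# Assumption for illustration: prefer an explicit gateway key, and
# otherwise fall back to the OIDC token Vercel injects into deployments.
api_key = os.getenv("VERCEL_AI_GATEWAY_API_KEY") or os.getenv("VERCEL_OIDC_TOKEN")

response = litellm.completion(
    model="vercel_ai_gateway/openai/gpt-4o",
    messages=[{"role": "user", "content": "Ping"}],
    api_key=api_key,  # an explicit key overrides the environment lookup
)
```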
Create a new file called `main.py` with the following code:

```python
import os

import litellm
from dotenv import load_dotenv

# Load environment variables from the .env file
load_dotenv()

os.environ["VERCEL_AI_GATEWAY_API_KEY"] = os.getenv("VERCEL_AI_GATEWAY_API_KEY")

# Define messages
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Tell me about the food scene in San Francisco."},
]

response = litellm.completion(
    model="vercel_ai_gateway/openai/gpt-4o",
    messages=messages,
)

print(response.choices[0].message.content)
```
This code:

- Uses LiteLLM's `completion` function to make requests through Vercel AI Gateway
- Specifies the model using the `vercel_ai_gateway/` prefix (see the example after this list)
- Makes a chat completion request and prints the response
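The same prefix routes to any provider available through the gateway, so switching models only means changing the model string. A sketch; the Anthropic slug below is an assumption, so check the AI Gateway model list for exact names:

```python
import litellm

# Same call shape, different upstream provider behind the gateway.
# "anthropic/claude-sonnet-4" is an assumed slug used for illustration.
response = litellm.completion(
    model="vercel_ai_gateway/anthropic/claude-sonnet-4",
    messages=[{"role": "user", "content": "Hello!"}],
)

print(response.choices[0].message.content)
```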
Run your Python application:
```bash
python main.py
```
You should see a response from the AI model in your console.
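If you'd rather print the output as it is generated, LiteLLM also supports streaming by passing `stream=True`. A minimal sketch reusing the model from this guide:

```python
import litellm

# Stream the reply chunk-by-chunk instead of waiting for the full response.
stream = litellm.completion(
    model="vercel_ai_gateway/openai/gpt-4o",
    messages=[{"role": "user", "content": "Tell me about the food scene in San Francisco."}],
    stream=True,
)

for chunk in stream:
    delta = chunk.choices[0].delta.content
    if delta:  # deltas can be None between events
        print(delta, end="", flush=True)
print()
```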