You can build an AI-powered Slack agent that responds to mentions, maintains conversation history, and calls tools autonomously using Chat SDK and AI SDK. Chat SDK handles the platform integration (webhooks, message formatting, thread tracking), while AI SDK's ToolLoopAgent manages the reasoning loop that lets your agent call tools and act on results. Together with Vercel AI Gateway and Redis for state, you get a production-ready Slack agent without managing infrastructure or juggling provider SDKs.
This guide will walk you through building a Slack agent with Chat SDK, AI SDK's ToolLoopAgent, and Claude via the Vercel AI Gateway. You'll wire up streaming responses, tool calling, and multi-turn conversation history, then scale your tool set for production with toolpick.
Before you begin, make sure you have:
- Node.js 18+
- pnpm (or npm/yarn)
- A Slack workspace where you can install apps
- A Redis instance (local or hosted, such as Upstash)
- A Vercel account with an AI Gateway API key
Chat SDK is a unified TypeScript SDK for building chatbots across Slack, Teams, Discord, and other platforms. You register event handlers (like onNewMention and onSubscribedMessage), and the SDK routes incoming webhooks to them. The Slack adapter handles webhook verification, message parsing, and the Slack API. The Redis state adapter tracks which threads your bot has subscribed to and manages distributed locking for concurrent message handling.
AI SDK's ToolLoopAgent wraps a language model with tools and runs an autonomous loop: the model generates text or calls a tool, the SDK executes the tool, feeds the result back, and repeats until the model finishes. When you pass a model string like "anthropic/claude-sonnet-4.6" and host your application on Vercel, the AI SDK routes the request through the AI Gateway automatically.
Chat SDK accepts any AsyncIterable<string> as a message, so you can pass the agent's fullStream directly to thread.post() for real-time streaming in Slack.
Create a new Next.js app and add the Chat SDK, AI SDK, and adapter packages:
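A typical setup looks like the following (the create-next-app prompts and flags may differ slightly depending on your pnpm and Next.js versions):

```shell
# Scaffold a new Next.js app (accept the defaults when prompted)
pnpm create next-app my-slack-agent
cd my-slack-agent

# Chat SDK core, the Slack and Redis adapters, AI SDK, and Zod
pnpm add chat @chat-adapter/slack @chat-adapter/state-redis ai zod
```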
The chat package is the Chat SDK core. The @chat-adapter/slack and @chat-adapter/state-redis packages are the Slack platform adapter and Redis state adapter. The ai package is the AI SDK, which includes the AI Gateway provider and ToolLoopAgent. zod is used to define tool input schemas.
Go to api.slack.com/apps, click Create New App, then From a manifest.
Select your workspace and paste this manifest:
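A manifest along these lines covers the events and scopes this guide relies on. The exact scope and event list is an assumption based on what the bot does (read mentions, read thread messages, post replies); adjust it to your needs, and leave the request_url placeholders for now:

```yaml
display_information:
  name: AI Agent
features:
  bot_user:
    display_name: AI Agent
    always_online: true
oauth_config:
  scopes:
    bot:
      - app_mentions:read
      - chat:write
      - channels:history
      - groups:history
      - im:history
      - mpim:history
settings:
  event_subscriptions:
    request_url: https://YOUR_DOMAIN/api/webhooks/slack
    bot_events:
      - app_mention
      - message.channels
      - message.groups
      - message.im
      - message.mpim
  interactivity:
    is_enabled: true
    request_url: https://YOUR_DOMAIN/api/webhooks/slack
```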
After creating the app:
- Go to Install App, and install the app to your workspace
- Go to OAuth & Permissions > OAuth Tokens and copy the Bot User OAuth Token
- Go to Basic Information > App Credentials and copy the Signing Secret
You'll replace the request_url placeholders with your real domain after deploying (or a tunnel URL for local testing).
Create a .env.local file in your project root:
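The variable names below are the ones the adapters read (see the next paragraph); fill in your own values:

```shell
# .env.local
SLACK_BOT_TOKEN=xoxb-your-bot-token
SLACK_SIGNING_SECRET=your-signing-secret
REDIS_URL=redis://localhost:6379
AI_GATEWAY_API_KEY=your-gateway-key
```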
The Slack adapter reads SLACK_BOT_TOKEN and SLACK_SIGNING_SECRET automatically. The Redis state adapter reads REDIS_URL. AI SDK uses AI_GATEWAY_API_KEY to authenticate with the Vercel AI Gateway, or alternatively, use OIDC authentication.
To create an AI Gateway API key, open your Vercel dashboard, go to AI Gateway, and click Create an API Key.
Create lib/tools.ts with the tools your agent can call. This example defines a weather tool and docs tool, but you can add any tools your use case requires:
Each tool has a description (which tells the model when to use it), an inputSchema (a Zod schema that the model fills in), and an execute function that runs when the tool is called.
Create lib/bot.ts with a ToolLoopAgent and a Chat instance:
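A sketch of the wiring. The handler names (onNewMention, onSubscribedMessage) and the thread helpers (subscribe, allMessages, toAiMessages, post) come from this guide; the constructor shapes, payload fields, and option names are assumptions, so check them against the Chat SDK docs:

```typescript
// lib/bot.ts
import { ToolLoopAgent } from "ai";
import { Chat } from "chat";
import { SlackAdapter } from "@chat-adapter/slack";
import { RedisState } from "@chat-adapter/state-redis";
import { tools } from "./tools";

// ToolLoopAgent runs the generate -> tool call -> result loop autonomously.
const agent = new ToolLoopAgent({
  model: "anthropic/claude-sonnet-4.6", // routed through AI Gateway on Vercel
  system: "You are a helpful Slack assistant. Keep answers concise.",
  tools,
});

export const chat = new Chat({
  adapters: { slack: new SlackAdapter() }, // reads SLACK_BOT_TOKEN / SLACK_SIGNING_SECRET
  state: new RedisState(),                 // reads REDIS_URL
});

// First @mention in a thread: subscribe, then stream a reply.
chat.onNewMention(async ({ thread, message }) => {
  await thread.subscribe();
  const result = agent.stream({ prompt: message.text });
  await thread.post(result.fullStream);
});

// Follow-ups in subscribed threads: replay the full history to the agent.
chat.onSubscribedMessage(async ({ thread }) => {
  const history = await thread.allMessages();
  const result = agent.stream({ messages: thread.toAiMessages(history) });
  await thread.post(result.fullStream);
});
```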
When someone @mentions the bot, onNewMention fires. The handler subscribes to the thread (to track future messages in that thread) and streams the agent's response. For follow-up messages, onSubscribedMessage retrieves the full thread history using thread.allMessages, converts it to the AI SDK message format with toAiMessages, and passes it to the agent so it has the complete conversation context.
The fullStream is preferred over textStream because it preserves paragraph breaks between tool-calling steps. Chat SDK auto-detects the stream type and handles Slack's native streaming API for real-time updates.
Create the API route at app/api/webhooks/[platform]/route.ts:
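A sketch of the route. The handler-factory name is an assumption (check the Chat SDK docs for the exact export); waitUntil itself is the standard helper from @vercel/functions:

```typescript
// app/api/webhooks/[platform]/route.ts
import { waitUntil } from "@vercel/functions";
import { chat } from "@/lib/bot";

// Hypothetical factory name: Chat SDK turns the Chat instance into a
// request handler, and waitUntil keeps event processing alive after
// the HTTP response is sent.
export const POST = chat.webhookHandler({ waitUntil });
```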
This creates a POST /api/webhooks/slack endpoint. The waitUntil option ensures your event handlers finish processing after the HTTP response is sent, which is required on serverless platforms where the function would otherwise terminate early.
- Start the dev server:
- Expose it with a tunnel:
- Copy the tunnel URL (for example, https://abc123.ngrok-free.dev) and update both the Event Subscriptions and Interactivity Request URLs in your Slack app settings to https://abc123.ngrok-free.dev/api/webhooks/slack
- Invite the bot to a channel (/invite @AI Agent)
- @mention the bot with a question. You should see a streaming response appear in the thread. Try asking it to use one of your tools, such as "What's the weather in San Francisco?"
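For the first two steps, a typical local setup looks like this (ngrok is one option; any tunnel that forwards HTTPS to localhost works):

```shell
# Terminal 1: start the Next.js dev server
pnpm dev

# Terminal 2: expose localhost:3000 through a tunnel
ngrok http 3000
```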
First, link your project and add your environment variables:
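For example, with the Vercel CLI (each `vercel env add` prompts you for the value):

```shell
vercel link

vercel env add SLACK_BOT_TOKEN
vercel env add SLACK_SIGNING_SECRET
vercel env add REDIS_URL
vercel env add AI_GATEWAY_API_KEY
```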
Alternatively, add them in the Vercel dashboard under Settings > Environment Variables.
Then deploy:
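```shell
vercel --prod
```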
Update the Event Subscriptions and Interactivity Request URLs in your Slack app settings to your production URL, for example https://my-slack-agent.vercel.app/api/webhooks/slack.
When deployed to Vercel, AI Gateway supports OIDC-based authentication, so you can also authenticate without a static API key. See the AI Gateway authentication docs.
Check that your Slack app has the app_mentions:read scope and that the Event Subscriptions Request URL is correct. Slack sends a challenge request when you first set the URL, so your server must be running.
Chat SDK uses Slack's native streaming API for smooth updates. If streamed replies stall, duplicate, or arrive out of order, check that your Redis connection is stable, as the SDK uses distributed locks to manage concurrent messages.
If the agent calls a tool but no result appears, check for errors in your tool's execute function. AI SDK surfaces tool execution errors back to the model, which may attempt to recover. Add error handling in your tools and check your server logs for details.
For long-running threads, the conversation history can exceed the model's context window. Consider limiting the number of messages you pass to the agent by slicing the history array or by using a summarization step for older messages.
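A minimal sketch of the slicing approach, with a simplified message type standing in for whatever shape your history uses:

```typescript
// Bound the history to the most recent N messages before handing
// it to the agent, so long threads stay within the context window.
type ChatMessage = { role: "user" | "assistant"; content: string };

export function truncateHistory(
  messages: ChatMessage[],
  maxMessages = 30,
): ChatMessage[] {
  if (messages.length <= maxMessages) return messages;
  // slice(-n) keeps the last n messages in order.
  return messages.slice(-maxMessages);
}
```

A fancier variant would summarize the dropped prefix into a single synthetic message instead of discarding it.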
The agent in this guide has two tools. In production, a Slack agent often grows to 15, 20, or 30 tools as you integrate services like GitHub, Linear, Upstash, calendars, and deploy pipelines. At that scale, every tool definition is sent to the model on every step, which increases token costs and makes it harder for the model to pick the right tool.
toolpick solves this by indexing your tools at startup and selecting only the most relevant ones for each step. It hooks into ToolLoopAgent via the prepareStep option, so you don't need to change your handler logic.
Build an index from your full tool set. toolpick uses a combination of keyword matching and semantic embeddings to find the best tools for each step:
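Something along these lines; the constructor and option names here are assumptions, since this guide only names toolpick's prepareStep, warmUp, and fileCache features, so verify the exact API against the toolpick docs:

```typescript
// lib/tool-index.ts (sketch; createToolIndex is an assumed name)
import { createToolIndex } from "toolpick";
import { tools } from "./tools";

export const toolIndex = createToolIndex({
  tools,    // the full tool set; keyword + embedding search runs over it
  topK: 8,  // assumed option: how many tools to surface per step
});
```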
For higher accuracy with vague queries (like "ship it" or "ping the team"), add a re-ranker model that uses a cheap LLM to pick the final candidates:
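For example (the reranker option name is an assumption; gpt-4o-mini as the re-rank model matches the cost figures discussed later in this guide):

```typescript
// Sketch: add a cheap LLM re-ranker on top of keyword + embedding search.
const toolIndex = createToolIndex({
  tools,
  reranker: { model: "openai/gpt-4o-mini" }, // assumed option name
});
```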
Pass toolIndex.prepareStep() to your ToolLoopAgent. This sets activeTools on each step, so the model only sees the tools it needs, while all tools remain available for execution:
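The wiring is a one-line change to the agent definition. prepareStep and activeTools are real AI SDK options; toolIndex here is the toolpick index from your index module (module path assumed):

```typescript
import { ToolLoopAgent } from "ai";
import { tools } from "./tools";
import { toolIndex } from "./tool-index"; // assumed module

const agent = new ToolLoopAgent({
  model: "anthropic/claude-sonnet-4.6",
  tools,                                // all tools stay executable
  prepareStep: toolIndex.prepareStep(), // narrows activeTools each step
});
```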
If the model can't find a relevant tool in the current selection, toolpick automatically moves to the next page of results. After two misses, it exposes all tools as a fallback. Your agent never gets stuck in a loop, unable to find the right tool.
For an extra accuracy boost, enable enrichDescriptions to expand your tool descriptions with synonyms and alternative phrasings. This runs a one-time LLM call during warmUp() at server startup. You can also persist the computed embeddings to disk with fileCache so subsequent restarts skip the embedding API call entirely:
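Sketched below with the enrichDescriptions, fileCache, and warmUp names this guide uses; the constructor and exact option shapes remain assumptions:

```typescript
const toolIndex = createToolIndex({
  tools,
  enrichDescriptions: true,           // one-time LLM expansion during warmUp()
  fileCache: ".toolpick-cache.json",  // persist embeddings across restarts
});

// Run once at server startup so the first request pays no indexing cost.
await toolIndex.warmUp();
```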
This setup is optional for agents with a handful of tools, but becomes worthwhile as your tool set grows. The per-step cost of re-ranking with gpt-4o-mini is approximately $0.0001, which is negligible compared to the token savings from sending fewer tool definitions to the primary model.
Chat SDK supports multiple platforms from a single codebase. The event handlers and agent logic you've already defined work identically across all of them, since the SDK normalizes messages, threads, and reactions into a consistent format.
To add Microsoft Teams or another platform, register an additional adapter:
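For example (the Teams adapter package name and constructor are assumptions patterned after the Slack adapter; check the adapter directory for the real one):

```typescript
import { Chat } from "chat";
import { SlackAdapter } from "@chat-adapter/slack";
import { TeamsAdapter } from "@chat-adapter/teams"; // assumed package name
import { RedisState } from "@chat-adapter/state-redis";

export const chat = new Chat({
  adapters: {
    slack: new SlackAdapter(),
    teams: new TeamsAdapter(), // reads its platform credentials from env
  },
  state: new RedisState(),
});
```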
The existing webhook route at app/api/webhooks/[platform]/route.ts already uses a dynamic [platform] segment, so Teams webhooks are handled at /api/webhooks/teams with no additional routing code.
Streaming behavior varies by platform. Slack uses its native streaming API for smooth real-time updates, while Teams, Discord, and Google Chat fall back to a post-then-edit pattern that throttles updates to avoid rate limits. You can adjust the update interval with the streamingUpdateIntervalMs option when creating your Chat instance.
See the Chat SDK adapter directory for the full list of supported platforms.