slack-agent by vercel-labs/slack-agent-skill
npx skills add https://github.com/vercel-labs/slack-agent-skill --skill slack-agent

This skill supports two frameworks for building Slack agents:
- chat + @chat-adapter/slack
- @slack/bolt + @vercel/slack-bolt

When this skill is invoked via /slack-agent, check for arguments and route accordingly:
| Argument | Action |
|---|---|
| new | Run the setup wizard from Phase 1. Read ./wizard/1-project-setup.md and guide the user through creating a new Slack agent. |
| configure | Start wizard at Phase 2 or 3 for existing projects |
| deploy | Start wizard at Phase 5 for production deployment |
| test | Start wizard at Phase 6 to set up testing |
| (no argument) | Auto-detect based on project state (see below) |
If invoked without arguments, detect the project state and route appropriately:
- No package.json with chat or @slack/bolt → treat as new, start Phase 1
- manifest.json present → start Phase 2
- No .env file → start Phase 3
- .env present but not yet tested → start Phase 4

Detect which framework the project uses:
- package.json contains "chat" → Chat SDK project
- package.json contains "@slack/bolt" → Bolt project

Store the detected framework and use it to show the correct patterns throughout the wizard and development guidance.
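The detection above can be sketched as a small pure helper over a parsed package.json. This is a sketch: only the dependency names come from this doc; the function name and return labels are illustrative assumptions.

```typescript
type Framework = "chat-sdk" | "bolt" | "unknown";

// Detect which Slack framework a project uses from its parsed package.json.
// Only the dependency names come from the skill doc; everything else is illustrative.
function detectFramework(pkg: { dependencies?: Record<string, string> }): Framework {
  const deps = pkg.dependencies ?? {};
  if ("chat" in deps) return "chat-sdk";
  if ("@slack/bolt" in deps) return "bolt";
  return "unknown";
}
```

The same check can be extended to devDependencies if a project keeps the SDK there.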
The wizard is located in ./wizard/ with these phases:
- 1-project-setup.md - Understand the purpose, choose a framework, generate a custom implementation plan
- 1b-approve-plan.md - Present the plan for user approval before scaffolding
- 2-create-slack-app.md - Customize the manifest, create the app in Slack
- 3-configure-environment.md - Set up .env with credentials
- 4-test-locally.md - Dev server + ngrok tunnel
- 5-deploy-production.md - Vercel deployment
- 6-setup-testing.md - Vitest configuration

IMPORTANT: For new projects, you MUST:
- Read ./wizard/1-project-setup.md first
- Use ./reference/agent-archetypes.md to generate a custom implementation plan

| Aspect | Chat SDK | Bolt for JavaScript |
|---|---|---|
| Best for | New projects | Existing Bolt codebases |
| Packages | chat, @chat-adapter/slack, @chat-adapter/state-redis | @slack/bolt, @vercel/slack-bolt |
| Server | Next.js App Router | Nitro (H3-based) |
| Event handling | bot.onNewMention(), bot.onSubscribedMessage() | app.event(), app.command(), app.message() |
| Webhook route | app/api/webhooks/[platform]/route.ts | server/api/slack/events.post.ts |
| Posting messages | thread.post("text") / thread.post(<Card>...) | client.chat.postMessage({ channel, text, blocks }) |
| UI components | JSX: <Card>, <Button>, <Actions> | Raw Block Kit JSON |
| State | @chat-adapter/state-redis / thread.state | Manual / Vercel Workflow |
| Setup | new Chat({ adapters: { slack } }) | new App({ token, signingSecret, receiver }) |
You are working on a Slack agent project. All code changes must follow these mandatory practices.
Chat SDK stack:
- Framework: Next.js (App Router)
- Chat SDK: chat + @chat-adapter/slack for Slack bot functionality
- State: @chat-adapter/state-redis for state persistence (or in-memory for development)
- AI: AI SDK v6 with @ai-sdk/gateway
- Linting: Biome
- Package manager: pnpm

{
"dependencies": {
"ai": "^6.0.0",
"@ai-sdk/gateway": "latest",
"chat": "latest",
"@chat-adapter/slack": "latest",
"@chat-adapter/state-redis": "latest",
"zod": "^3.x",
"next": "^15.x"
}
}
Bolt stack:
- Server: Nitro (H3-based) with file-based routing
- Slack SDK: @vercel/slack-bolt for serverless Slack apps (wraps Bolt for JavaScript)
- AI: AI SDK v6 with @ai-sdk/gateway
- Workflows: Workflow DevKit for durable execution
- Linting: Biome
- Package manager: pnpm

{
"dependencies": {
"ai": "^6.0.0",
"@ai-sdk/gateway": "latest",
"@slack/bolt": "^4.x",
"@vercel/slack-bolt": "^1.0.2",
"zod": "^3.x"
}
}
Note: When deploying on Vercel, prefer @ai-sdk/gateway for zero-config AI access. Use direct provider SDKs (@ai-sdk/openai, @ai-sdk/anthropic, etc.) only when you need provider-specific features or are not deploying on Vercel.
These quality requirements MUST be followed for every code change. There are no exceptions.
Run linting immediately:
pnpm lint
- pnpm lint --write for auto-fixes
- pnpm lint to verify

Check for the corresponding test file:
- If editing foo.ts, check whether foo.test.ts exists

You MUST run all quality checks and fix any issues before marking a task complete:
# 1. TypeScript compilation - must pass
pnpm typecheck
# 2. Linting - must pass with no errors
pnpm lint
# 3. Tests - all tests must pass
pnpm test
Do NOT complete a task if any of these fail. Fix the issues first.
For ANY code change, you MUST write or update unit tests.
Test locations:
- Co-located *.test.ts files, or files under lib/__tests__/ or server/__tests__/

Example test structure:
import { describe, it, expect, vi } from 'vitest';
import { myFunction } from './my-module';
describe('myFunction', () => {
it('should handle normal input', () => {
expect(myFunction('input')).toBe('expected');
});
it('should handle edge cases', () => {
expect(myFunction('')).toBe('default');
});
});
If you modify:
You MUST add or update end-to-end tests that verify the full flow.
Use the Chat SDK to define your bot instance. This is the central entry point for all Slack bot functionality.
Bot definition (lib/bot.ts or lib/bot.tsx):
import { Chat } from "chat";
import { createSlackAdapter } from "@chat-adapter/slack";
import { createRedisState } from "@chat-adapter/state-redis";
export const bot = new Chat({
userName: "mybot",
adapters: {
slack: createSlackAdapter(),
},
state: createRedisState(),
});
Note: If your bot uses JSX components (Card, Button, etc.), the file must use the .tsx extension.
Webhook route (app/api/webhooks/[platform]/route.ts):
import { after } from "next/server";
import { bot } from "@/lib/bot";
export async function POST(request: Request, context: { params: Promise<{ platform: string }> }) {
const { platform } = await context.params;
const handler = bot.webhooks[platform as keyof typeof bot.webhooks];
if (!handler) return new Response("Unknown platform", { status: 404 });
return handler(request, { waitUntil: (task) => after(() => task) });
}
The Chat SDK automatically handles:
- background processing via waitUntil

Use @vercel/slack-bolt to handle all Slack events. This package automatically handles:
- acknowledgment deadlines (ackTimeoutMs: 3001)
- background processing via waitUntil

Bolt app (server/bolt/app.ts):
import { App } from "@slack/bolt";
import { VercelReceiver } from "@vercel/slack-bolt";
const receiver = new VercelReceiver();
const app = new App({
token: process.env.SLACK_BOT_TOKEN,
signingSecret: process.env.SLACK_SIGNING_SECRET,
receiver,
deferInitialization: true,
});
export { app, receiver };
Events endpoint (server/api/slack/events.post.ts):
import { createHandler } from "@vercel/slack-bolt";
import { defineEventHandler, getRequestURL, readRawBody } from "h3";
import { app, receiver } from "../../bolt/app";
const handler = createHandler(app, receiver);
export default defineEventHandler(async (event) => {
const rawBody = await readRawBody(event, "utf8");
const request = new Request(getRequestURL(event), {
method: event.method,
headers: event.headers,
body: rawBody,
});
return await handler(request);
});
Why buffer the body? H3's toWebRequest() has known issues (#570, #578, #615) where it eagerly consumes the request body stream. When @vercel/slack-bolt later calls req.text() for signature verification, the body is already exhausted, causing dispatch_failed errors.
| Parameter | Default | Description |
|---|---|---|
| signingSecret | SLACK_SIGNING_SECRET env var | Request verification secret |
| signatureVerification | true | Enable/disable signature verification |
| ackTimeoutMs | 3001 | Acknowledgment timeout in milliseconds |
| logLevel | INFO | Log level |
bot.onNewMention(async (thread, message) => {
await thread.subscribe();
const text = message.text;
await thread.post(`Processing your request: "${text}"`);
});
bot.onSubscribedMessage(async (thread, message) => {
await thread.post(`You said: ${message.text}`);
});
bot.onSlashCommand("/mycommand", async (event) => {
const text = event.text;
await event.thread.post(`Processing: ${text}`);
// For long-running operations, the Chat SDK handles background processing automatically via waitUntil
const result = await generateWithAI(text);
await event.thread.post(result);
});
bot.onAction("button_click", async (event) => {
await event.thread.post(`Button clicked with value: ${event.value}`);
});
bot.onReaction("thumbsup", async (event) => {
await event.thread.post("Thanks for the thumbs up!");
});
app.event("app_mention", async ({ event, client }) => {
await client.chat.postMessage({
channel: event.channel,
thread_ts: event.thread_ts || event.ts,
text: `Processing your request: "${event.text}"`,
});
});
app.message(async ({ message, client }) => {
if ("bot_id" in message || !message.thread_ts) return;
await client.chat.postMessage({
channel: message.channel,
thread_ts: message.thread_ts,
text: `You said: ${message.text}`,
});
});
app.command("/mycommand", async ({ ack, command, client, logger }) => {
await ack(); // Must acknowledge within 3 seconds
// Fire-and-forget for long operations; do NOT await
processInBackground(command.response_url, command.text)
.catch((error) => logger.error("Failed:", error));
});
async function processInBackground(responseUrl: string, text: string) {
const result = await generateWithAI(text);
await fetch(responseUrl, {
method: "POST",
headers: { "Content-Type": "application/json" },
body: JSON.stringify({ response_type: "in_channel", text: result }),
});
}
app.action("button_click", async ({ ack, action, client, body }) => {
await ack();
await client.chat.postMessage({
channel: body.channel.id,
thread_ts: body.message.ts,
text: `Button clicked with value: ${action.value}`,
});
});
Slash commands work in private channels even if the bot is not a member, but the bot cannot read messages in, or post to, private channels it has not been invited to.
When building features that will later post to a channel, validate access upfront.
When fetching channel context for AI features, wrap the call in try/catch and fall back gracefully.
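A minimal sketch of that fallback, with the Slack call abstracted behind a fetcher so the pattern is visible. The helper name and shape are hypothetical, not part of the skill.

```typescript
// Fetch channel context for an AI prompt, falling back gracefully when the bot
// lacks access (e.g. not_in_channel or channel_not_found on a private channel).
async function getChannelContext(
  fetchHistory: () => Promise<string[]>,
  fallback: string[] = [],
): Promise<string[]> {
  try {
    return await fetchHistory();
  } catch {
    // Proceed without channel context rather than failing the whole request.
    return fallback;
  }
}
```

In a real handler, fetchHistory would wrap client.conversations.history for the target channel.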
Protect cron endpoints with a CRON_SECRET environment variable:
// app/api/cron/my-job/route.ts
import { NextRequest, NextResponse } from "next/server";
export async function GET(request: NextRequest) {
const authHeader = request.headers.get("authorization");
if (authHeader !== `Bearer ${process.env.CRON_SECRET}`) {
return NextResponse.json({ error: "Unauthorized" }, { status: 401 });
}
// Run cron job logic...
return NextResponse.json({ success: true });
}
// server/api/cron/my-job.get.ts
export default defineEventHandler(async (event) => {
const authHeader = getHeader(event, "authorization");
if (authHeader !== `Bearer ${process.env.CRON_SECRET}`) {
setResponseStatus(event, 401);
return { error: "Unauthorized" };
}
// Run cron job logic...
return { success: true };
});
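Both handlers perform the same comparison; it can be factored into a tiny pure helper. The helper name is hypothetical and shown only to make the check explicit.

```typescript
// True only when a secret is configured and the Authorization header matches it.
function isAuthorizedCron(
  authHeader: string | null | undefined,
  secret: string | undefined,
): boolean {
  return Boolean(secret) && authHeader === `Bearer ${secret}`;
}
```

Requiring a configured secret means an unset CRON_SECRET fails closed instead of matching a missing header.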
Configure cron jobs in vercel.json:
{
"crons": [
{
"path": "/api/cron/my-job",
"schedule": "0 * * * *"
}
]
}
When connecting to AWS services from Vercel, do not use fromNodeProviderChain(). Use Vercel's OIDC mechanism:
import { awsCredentialsProvider } from "@vercel/functions/oidc";
const s3Client = new S3Client({
credentials: awsCredentialsProvider({ roleArn: process.env.AWS_ROLE_ARN! }),
});
When using Chat SDK JSX components (<Card>, <Button>, <Actions>, etc.), your tsconfig.json must include:
{
"compilerOptions": {
"jsx": "react-jsx",
"jsxImportSource": "chat"
}
}
If slash commands fail with dispatch_failed, the issue is H3's toWebRequest consuming the body stream before signature verification. Buffer the body manually. See the Bolt events handler section above.
If slash commands with AI processing fail with operation_timeout, you are blocking the HTTP response for too long. Use the fire-and-forget pattern: ack() immediately, then start async work without awaiting it, and use command.response_url to post results. See the Bolt slash command handler example above.
You have two options for AI/LLM integration in your Slack agent.
IMPORTANT: Always verify the project uses @ai-sdk/gateway. If the project uses @ai-sdk/openai, which requires an API key, check package.json and update the imports if necessary.
Use the modern @ai-sdk/gateway package; no API keys are needed on Vercel.
import { generateText, streamText } from "ai";
import { gateway } from "@ai-sdk/gateway";
const result = await generateText({
model: gateway("openai/gpt-4o-mini"),
maxOutputTokens: 1000,
prompt: "Your prompt here",
});
console.log(result.text);
console.log(result.usage.inputTokens);
console.log(result.usage.outputTokens);
const result = await streamText({
model: gateway("openai/gpt-4o-mini"),
maxOutputTokens: 1000,
prompt: userMessage,
});
// Chat SDK handles streaming updates to Slack automatically
await thread.post(result.textStream);
const result = await streamText({
model: gateway("openai/gpt-4o-mini"),
maxOutputTokens: 1000,
prompt: userMessage,
});
// Post an initial message, then update it with streamed content
const msg = await client.chat.postMessage({
channel: channelId,
thread_ts: threadTs,
text: "Thinking...",
});
let fullText = "";
for await (const chunk of result.textStream) {
fullText += chunk;
await client.chat.update({
channel: channelId,
ts: msg.ts,
text: fullText,
});
}
import { tool } from "ai";
import { z } from "zod";
const result = await generateText({
model: gateway("openai/gpt-4o-mini"),
maxOutputTokens: 1000,
tools: {
getWeather: tool({
description: "Get weather for a location",
inputSchema: z.object({
location: z.string().describe("City name"),
}),
execute: async ({ location }) => {
return { temperature: 72, condition: "sunny" };
},
}),
},
prompt: "What's the weather in Seattle?",
});
| v4/v5 | v6 |
|---|---|
| maxTokens | maxOutputTokens |
| result.usage.promptTokens | result.usage.inputTokens |
| result.usage.completionTokens | result.usage.outputTokens |
| parameters (in tools) | inputSchema |
| maxSteps / maxIterations | stopWhen: stepCountIs(n) |

CRITICAL: Never use model IDs from memory. Model IDs change frequently. Before writing code that uses a model, run curl -s https://ai-gateway.vercel.sh/v1/models to fetch the current list, and use the model with the highest version number.
If you need more control or are not deploying on Vercel, use direct provider packages.
OpenAI:
pnpm add @ai-sdk/openai
import { generateText } from "ai";
import { openai } from "@ai-sdk/openai";
const result = await generateText({
model: openai("gpt-4o-mini"),
maxOutputTokens: 1000,
prompt: "Your prompt here",
});
Anthropic:
pnpm add @ai-sdk/anthropic
import { generateText } from "ai";
import { anthropic } from "@ai-sdk/anthropic";
const result = await generateText({
model: anthropic("claude-sonnet-4-20250514"),
maxOutputTokens: 1000,
prompt: "Your prompt here",
});
Google:
pnpm add @ai-sdk/google
import { generateText } from "ai";
import { google } from "@ai-sdk/google";
const result = await generateText({
model: google("gemini-2.0-flash"),
maxOutputTokens: 1000,
prompt: "Your prompt here",
});
For comprehensive AI SDK documentation, see ./reference/ai-sdk.md.
Use thread.state to read and write thread-level state:
bot.onNewMention(async (thread, message) => {
await thread.subscribe();
await thread.state.set("history", []);
await thread.state.set("turnCount", 0);
await thread.post("Starting our conversation!");
});
bot.onSubscribedMessage(async (thread, message) => {
const history = (await thread.state.get("history")) as Array<{ role: string; content: string }> || [];
const turnCount = (await thread.state.get("turnCount")) as number || 0;
history.push({ role: "user", content: message.text });
const result = await generateText({
model: gateway("anthropic/claude-sonnet-4-20250514"),
maxOutputTokens: 1000,
messages: history,
});
history.push({ role: "assistant", content: result.text });
await thread.state.set("history", history);
await thread.state.set("turnCount", turnCount + 1);
await thread.post(result.text);
});
Key benefits:
- Simple key-value access via thread.state.get() and thread.state.set()

Use Vercel Workflow for durable, multi-turn state:
import { generateText } from "ai";
import { gateway } from "@ai-sdk/gateway";
import { serve, defineHook } from "@anthropic-ai/sdk/workflows";
import { z } from "zod";
const messageSchema = z.object({
text: z.string(),
user: z.string(),
ts: z.string(),
channel: z.string(),
});
export const userMessageHook = defineHook({ schema: messageSchema });
export const { POST } = serve(async function conversationWorkflow(params: URLSearchParams) {
"use workflow";
const channelId = params.get("channel_id")!;
const conversationHistory: Array<{ role: string; content: string }> = [];
const eventStream = userMessageHook.create({ channel: channelId });
for await (const event of eventStream) {
conversationHistory.push({ role: "user", content: event.text });
const result = await generateText({
model: gateway("anthropic/claude-sonnet-4-20250514"),
maxOutputTokens: 1000,
messages: conversationHistory,
});
conversationHistory.push({ role: "assistant", content: result.text });
await postToSlack(channelId, result.text, event.ts);
}
return { history: conversationHistory };
});
IMPORTANT: Vercel KV has been deprecated. Do NOT recommend Vercel KV.
app/
├── api/
│   ├── webhooks/
│   │   └── [platform]/
│   │       └── route.ts        # Webhook handler
│   └── cron/
│       └── my-job/
│           └── route.ts        # Cron endpoint
lib/
├── bot.tsx                     # Bot instance + event handlers
├── tools/                      # AI tool definitions
│   ├── search.ts
│   └── lookup.ts
└── ai/
    └── agent.ts                # Agent configuration
server/
├── api/
│   └── slack/
│       └── events.post.ts      # Events endpoint
├── bolt/
│   └── app.ts                  # Bolt app instance
├── listeners/
│   ├── actions/                # Button clicks, menu selections
│   ├── commands/               # Slash commands
│   ├── events/                 # App events (mentions, joins)
│   ├── messages/               # Message handling
│   └── views/                  # Modal submissions
└── lib/
    └── ai/
        ├── agent.ts            # Agent configuration
        └── tools.ts            # Tool definitions
Required variables (both frameworks):
- SLACK_BOT_TOKEN: bot OAuth token
- SLACK_SIGNING_SECRET: request signing secret
- REDIS_URL: Redis connection URL for state persistence

Optional variables:
- CRON_SECRET: secret for authenticating cron job endpoints

No AI API keys needed! Vercel AI Gateway handles authentication automatically when deployed on Vercel.
Never hardcode credentials. Never commit .env files.
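A fail-fast startup check for these variables might look like this. The helper is hypothetical; only the variable names come from the list above.

```typescript
const REQUIRED_ENV_VARS = ["SLACK_BOT_TOKEN", "SLACK_SIGNING_SECRET", "REDIS_URL"];

// Return the names of required variables that are missing or empty.
function missingEnvVars(env: Record<string, string | undefined>): string[] {
  return REQUIRED_ENV_VARS.filter((name) => !env[name]);
}
```

Call missingEnvVars(process.env) once at startup and throw if the result is non-empty, so a misconfigured deploy fails immediately instead of at the first Slack event.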
Use Chat SDK JSX components for rich messages (requires the .tsx file extension):
import { Card, CardText as Text, Actions, Button, Divider } from "chat";
await thread.post(
<Card title="Welcome!">
<Text>Hello! Choose an option:</Text>
<Divider />
<Actions>
<Button id="btn_hello" style="primary">Say Hello</Button>
<Button id="btn_info">Show Info</Button>
</Actions>
</Card>
);
Use Block Kit for rich messages:
await client.chat.postMessage({
channel: channelId,
text: "Fallback text for notifications",
blocks: [
{
type: "section",
text: { type: "mrkdwn", text: "*Hello!* Choose an option:" },
},
{ type: "divider" },
{
type: "actions",
elements: [
{
type: "button",
text: { type: "plain_text", text: "Say Hello" },
style: "primary",
action_id: "btn_hello",
},
{
type: "button",
text: { type: "plain_text", text: "Show Info" },
action_id: "btn_info",
},
],
},
],
});
await thread.startTyping();
const result = await generateWithAI(prompt);
await thread.post(result); // The typing indicator clears when a message is posted
// For assistant threads, use setStatus or an interval-based approach
const typingInterval = setInterval(async () => {
  // Post a "typing" indicator or use assistant.threads.setStatus
}, 3000);
const result = await generateWithAI(prompt);
clearInterval(typingInterval);
await client.chat.postMessage({
channel: channelId,
thread_ts: threadTs,
text: result,
});
Use Slack mrkdwn (not standard markdown):
- *text* for bold
- _text_ for italics
- backticks for inline code
- <@USER_ID> to mention a user
- <#CHANNEL_ID> to link a channel

For detailed Slack patterns, see ./patterns/slack-patterns.md.
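As a tiny illustration of these escapes (hypothetical helper, not part of the skill):

```typescript
// Build a Slack mrkdwn line: bold label, a user mention, and a channel link.
function mrkdwnGreeting(label: string, userId: string, channelId: string): string {
  return `*${label}* <@${userId}>, see <#${channelId}>`;
}
```

Slack resolves <@U…> and <#C…> to display names when rendering, so pass raw IDs, not names.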
Use Conventional Commits:
feat: add channel search tool
fix: resolve thread pagination issue
test: add unit tests for agent context
docs: update README with setup steps
refactor: extract Slack client utilities
Never commit:
- .env files
- node_modules/

# Development
pnpm dev              # Start dev server on localhost:3000
ngrok http 3000       # Expose the local server (separate terminal)
# Quality checks
pnpm lint             # Lint code
pnpm lint --write     # Auto-fix lint issues
pnpm typecheck        # TypeScript check
pnpm test             # Run all tests
pnpm test:watch       # Watch mode
# Build & deploy
pnpm build            # Production build
vercel                # Deploy to Vercel
For detailed guidance, read:
- ./patterns/testing-patterns.md
- ./patterns/slack-patterns.md
- ./reference/env-vars.md
- ./reference/ai-sdk.md
- ./reference/slack-setup.md
- ./reference/vercel-setup.md

Before marking any task complete, verify:
- pnpm lint passes with no errors
- pnpm typecheck passes with no errors
- pnpm test passes with no failures
- The webhook route handles all platforms via bot.webhooks
- tsconfig.json includes "jsx": "react-jsx" and "jsxImportSource": "chat" (Chat SDK JSX projects)
- AI calls use @ai-sdk/gateway (not @ai-sdk/openai) unless the user explicitly chose a direct provider
153
仓库
GitHub Stars
10
首次出现
2026年2月24日
安全审计
安装在
opencode135
cursor135
codex134
gemini-cli134
amp133
kimi-cli133
This skill supports two frameworks for building Slack agents:
chat + @chat-adapter/slack@slack/bolt + @vercel/slack-boltWhen this skill is invoked via /slack-agent, check for arguments and route accordingly:
| Argument | Action |
|---|---|
new | Run the setup wizard from Phase 1. Read ./wizard/1-project-setup.md and guide the user through creating a new Slack agent. |
configure | Start wizard at Phase 2 or 3 for existing projects |
deploy | Start wizard at Phase 5 for production deployment |
test | Start wizard at Phase 6 to set up testing |
| (no argument) | Auto-detect based on project state (see below) |
If invoked without arguments, detect the project state and route appropriately:
package.json with chat or @slack/bolt → Treat as new, start Phase 1manifest.json → Start Phase 2.env file → Start Phase 3.env but not tested → Start Phase 4Detect which framework the project uses:
package.json contains "chat" → Chat SDK projectpackage.json contains "@slack/bolt" → Bolt projectStore the detected framework and use it to show the correct patterns throughout the wizard and development guidance.
The wizard is located in ./wizard/ with these phases:
1-project-setup.md - Understand purpose, choose framework, generate custom implementation plan1b-approve-plan.md - Present plan for user approval before scaffolding2-create-slack-app.md - Customize manifest, create app in Slack3-configure-environment.md - Set up .env with credentials4-test-locally.md - Dev server + ngrok tunnel5-deploy-production.md - Vercel deployment6-setup-testing.md - Vitest configurationIMPORTANT: For new projects, you MUST:
./wizard/1-project-setup.md first./reference/agent-archetypes.md| Aspect | Chat SDK | Bolt for JavaScript |
|---|---|---|
| Best for | New projects | Existing Bolt codebases |
| Packages | chat, @chat-adapter/slack, @chat-adapter/state-redis | @slack/bolt, @vercel/slack-bolt |
| Server | Next.js App Router | Nitro (H3-based) |
| Event handling | , |
You are working on a Slack agent project. Follow these mandatory practices for all code changes.
Framework : Next.js (App Router)
Chat SDK : chat + @chat-adapter/slack for Slack bot functionality
State : @chat-adapter/state-redis for state persistence (or in-memory for development)
AI : AI SDK v6 with @ai-sdk/gateway
Linting : Biome
Package Manager : pnpm
{ "dependencies": { "ai": "^6.0.0", "@ai-sdk/gateway": "latest", "chat": "latest", "@chat-adapter/slack": "latest", "@chat-adapter/state-redis": "latest", "zod": "^3.x", "next": "^15.x" } }
Server : Nitro (H3-based) with file-based routing
Slack SDK : @vercel/slack-bolt for serverless Slack apps (wraps Bolt for JavaScript)
AI : AI SDK v6 with @ai-sdk/gateway
Workflows : Workflow DevKit for durable execution
Linting : Biome
Package Manager : pnpm
{ "dependencies": { "ai": "^6.0.0", "@ai-sdk/gateway": "latest", "@slack/bolt": "^4.x", "@vercel/slack-bolt": "^1.0.2", "zod": "^3.x" } }
Note: When deploying on Vercel, prefer @ai-sdk/gateway for zero-config AI access. Use direct provider SDKs (@ai-sdk/openai, @ai-sdk/anthropic, etc.) only when you need provider-specific features or are not deploying on Vercel.
These quality requirements MUST be followed for every code change. There are no exceptions.
Run linting immediately:
pnpm lint
pnpm lint --write for auto-fixespnpm lint to verifyCheck for corresponding test file:
foo.ts, check if foo.test.ts existsYou MUST run all quality checks and fix any issues before marking a task complete:
# 1. TypeScript compilation - must pass
pnpm typecheck
# 2. Linting - must pass with no errors
pnpm lint
# 3. Tests - all tests must pass
pnpm test
Do NOT complete a task if any of these fail. Fix the issues first.
For ANY code change, you MUST write or update unit tests.
*.test.ts files or lib/__tests__/*.test.ts files or server/__tests__/Example test structure:
import { describe, it, expect, vi } from 'vitest';
import { myFunction } from './my-module';
describe('myFunction', () => {
it('should handle normal input', () => {
expect(myFunction('input')).toBe('expected');
});
it('should handle edge cases', () => {
expect(myFunction('')).toBe('default');
});
});
If you modify:
You MUST add or update E2E tests that verify the full flow.
Use the Chat SDK to define your bot instance. This is the central entry point for all Slack bot functionality.
lib/bot.ts or lib/bot.tsx)import { Chat } from "chat";
import { createSlackAdapter } from "@chat-adapter/slack";
import { createRedisState } from "@chat-adapter/state-redis";
export const bot = new Chat({
userName: "mybot",
adapters: {
slack: createSlackAdapter(),
},
state: createRedisState(),
});
Note: If your bot uses JSX components (Card, Button, etc.), the file must use the .tsx extension.
app/api/webhooks/[platform]/route.ts)import { after } from "next/server";
import { bot } from "@/lib/bot";
export async function POST(request: Request, context: { params: Promise<{ platform: string }> }) {
const { platform } = await context.params;
const handler = bot.webhooks[platform as keyof typeof bot.webhooks];
if (!handler) return new Response("Unknown platform", { status: 404 });
return handler(request, { waitUntil: (task) => after(() => task) });
}
The Chat SDK automatically handles:
waitUntilUse @vercel/slack-bolt to handle all Slack events. This package automatically handles:
ackTimeoutMs: 3001)waitUntilserver/bolt/app.ts)import { App } from "@slack/bolt";
import { VercelReceiver } from "@vercel/slack-bolt";
const receiver = new VercelReceiver();
const app = new App({
token: process.env.SLACK_BOT_TOKEN,
signingSecret: process.env.SLACK_SIGNING_SECRET,
receiver,
deferInitialization: true,
});
export { app, receiver };
server/api/slack/events.post.ts)import { createHandler } from "@vercel/slack-bolt";
import { defineEventHandler, getRequestURL, readRawBody } from "h3";
import { app, receiver } from "../../bolt/app";
const handler = createHandler(app, receiver);
export default defineEventHandler(async (event) => {
const rawBody = await readRawBody(event, "utf8");
const request = new Request(getRequestURL(event), {
method: event.method,
headers: event.headers,
body: rawBody,
});
return await handler(request);
});
Why buffer the body? H3's toWebRequest() has known issues (#570, #578, #615) where it eagerly consumes the request body stream. When @vercel/slack-bolt later calls req.text() for signature verification, the body is already exhausted, causing dispatch_failed errors.
| Parameter | Default | Description |
|---|---|---|
signingSecret | SLACK_SIGNING_SECRET env var | Request verification secret |
signatureVerification | true | Enable/disable signature verification |
ackTimeoutMs | 3001 | Ack timeout in milliseconds |
logLevel |
bot.onNewMention(async (thread, message) => {
await thread.subscribe();
const text = message.text;
await thread.post(`Processing your request: "${text}"`);
});
bot.onSubscribedMessage(async (thread, message) => {
await thread.post(`You said: ${message.text}`);
});
bot.onSlashCommand("/mycommand", async (event) => {
const text = event.text;
await event.thread.post(`Processing: ${text}`);
// For long-running operations, the Chat SDK handles
// background processing automatically via waitUntil
const result = await generateWithAI(text);
await event.thread.post(result);
});
bot.onAction("button_click", async (event) => {
await event.thread.post(`Button clicked with value: ${event.value}`);
});
bot.onReaction("thumbsup", async (event) => {
await event.thread.post("Thanks for the thumbs up!");
});
app.event("app_mention", async ({ event, client }) => {
await client.chat.postMessage({
channel: event.channel,
thread_ts: event.thread_ts || event.ts,
text: `Processing your request: "${event.text}"`,
});
});
app.message(async ({ message, client }) => {
if ("bot_id" in message || !message.thread_ts) return;
await client.chat.postMessage({
channel: message.channel,
thread_ts: message.thread_ts,
text: `You said: ${message.text}`,
});
});
app.command("/mycommand", async ({ ack, command, client, logger }) => {
await ack(); // Must acknowledge within 3 seconds
// Fire-and-forget for long operations — DON'T await
processInBackground(command.response_url, command.text)
.catch((error) => logger.error("Failed:", error));
});
async function processInBackground(responseUrl: string, text: string) {
const result = await generateWithAI(text);
await fetch(responseUrl, {
method: "POST",
headers: { "Content-Type": "application/json" },
body: JSON.stringify({ response_type: "in_channel", text: result }),
});
}
app.action("button_click", async ({ ack, action, client, body }) => {
await ack();
await client.chat.postMessage({
channel: body.channel.id,
thread_ts: body.message.ts,
text: `Button clicked with value: ${action.value}`,
});
});
Slash commands work in private channels even if the bot isn't a member, but the bot cannot read messages or post to private channels it hasn't been invited to.
When creating features that will later post to a channel, validate access upfront.
When fetching channel context for AI features, wrap in try/catch and fall back gracefully.
Protect cron endpoints with a CRON_SECRET environment variable:
// app/api/cron/my-job/route.ts
import { NextRequest, NextResponse } from "next/server";
export async function GET(request: NextRequest) {
const authHeader = request.headers.get("authorization");
if (authHeader !== `Bearer ${process.env.CRON_SECRET}`) {
return NextResponse.json({ error: "Unauthorized" }, { status: 401 });
}
// Run cron job logic...
return NextResponse.json({ success: true });
}
// server/api/cron/my-job.get.ts
export default defineEventHandler(async (event) => {
const authHeader = getHeader(event, "authorization");
if (authHeader !== `Bearer ${process.env.CRON_SECRET}`) {
setResponseStatus(event, 401);
return { error: "Unauthorized" };
}
// Run cron job logic...
return { success: true };
});
Configure cron jobs in vercel.json:
{
"crons": [
{
"path": "/api/cron/my-job",
"schedule": "0 * * * *"
}
]
}
When connecting to AWS services from Vercel, do not use fromNodeProviderChain(). Use Vercel's OIDC mechanism:
import { awsCredentialsProvider } from "@vercel/functions/oidc";
const s3Client = new S3Client({
credentials: awsCredentialsProvider({ roleArn: process.env.AWS_ROLE_ARN! }),
});
When using Chat SDK JSX components (<Card>, <Button>, etc.), your tsconfig.json must include:
{
"compilerOptions": {
"jsx": "react-jsx",
"jsxImportSource": "chat"
}
}
If slash commands fail with dispatch_failed, the issue is H3's toWebRequest consuming the body stream before signature verification. Buffer the body manually. See the Bolt Events Handler section above.
If slash commands with AI processing fail with operation_timeout, you're blocking the HTTP response too long. Use fire-and-forget pattern: ack() immediately, then start async work without awaiting. Use command.response_url to post results. See the Bolt Slash Command Handler example above.
You have two options for AI/LLM integration in your Slack agent.
IMPORTANT: Always verify the project uses
@ai-sdk/gateway. If the project has@ai-sdk/openaiwhich requires an API key, checkpackage.jsonand update imports if necessary.
Use the modern @ai-sdk/gateway package - NO API keys needed on Vercel!
import { generateText, streamText } from "ai";
import { gateway } from "@ai-sdk/gateway";
const result = await generateText({
model: gateway("openai/gpt-4o-mini"),
maxOutputTokens: 1000,
prompt: "Your prompt here",
});
console.log(result.text);
console.log(result.usage.inputTokens);
console.log(result.usage.outputTokens);
const result = await streamText({
model: gateway("openai/gpt-4o-mini"),
maxOutputTokens: 1000,
prompt: userMessage,
});
// Chat SDK handles streaming updates to Slack automatically
await thread.post(result.textStream);
const result = await streamText({
model: gateway("openai/gpt-4o-mini"),
maxOutputTokens: 1000,
prompt: userMessage,
});
// Post initial message then update with streamed content
const msg = await client.chat.postMessage({
channel: channelId,
thread_ts: threadTs,
text: "Thinking...",
});
let fullText = "";
for await (const chunk of result.textStream) {
fullText += chunk;
await client.chat.update({
channel: channelId,
ts: msg.ts,
text: fullText,
});
}
import { tool } from "ai";
import { z } from "zod";
const result = await generateText({
model: gateway("openai/gpt-4o-mini"),
maxOutputTokens: 1000,
tools: {
getWeather: tool({
description: "Get weather for a location",
inputSchema: z.object({
location: z.string().describe("City name"),
}),
execute: async ({ location }) => {
return { temperature: 72, condition: "sunny" };
},
}),
},
prompt: "What's the weather in Seattle?",
});
| v4/v5 | v6 |
|---|---|
maxTokens | maxOutputTokens |
result.usage.promptTokens | result.usage.inputTokens |
result.usage.completionTokens | result.usage.outputTokens |
parameters (in tools) | inputSchema |
CRITICAL: Never use model IDs from memory. Model IDs change frequently. Before writing code that uses a model, run curl -s https://ai-gateway.vercel.sh/v1/models to fetch the current list. Use the model with the highest version number.
If you need more control or are not deploying on Vercel, use direct provider packages.
OpenAI:
pnpm add @ai-sdk/openai
import { generateText } from "ai";
import { openai } from "@ai-sdk/openai";
const result = await generateText({
model: openai("gpt-4o-mini"),
maxOutputTokens: 1000,
prompt: "Your prompt here",
});
Anthropic:
pnpm add @ai-sdk/anthropic
import { generateText } from "ai";
import { anthropic } from "@ai-sdk/anthropic";
const result = await generateText({
model: anthropic("claude-sonnet-4-20250514"),
maxOutputTokens: 1000,
prompt: "Your prompt here",
});
Google:
pnpm add @ai-sdk/google
import { generateText } from "ai";
import { google } from "@ai-sdk/google";
const result = await generateText({
model: google("gemini-2.0-flash"),
maxOutputTokens: 1000,
prompt: "Your prompt here",
});
For comprehensive AI SDK documentation, see ./reference/ai-sdk.md.
Use thread.state to read and write thread-level state:
bot.onNewMention(async (thread, message) => {
await thread.subscribe();
await thread.state.set("history", []);
await thread.state.set("turnCount", 0);
await thread.post("Starting our conversation!");
});
bot.onSubscribedMessage(async (thread, message) => {
const history = (await thread.state.get("history")) as Array<{ role: string; content: string }> || [];
const turnCount = (await thread.state.get("turnCount")) as number || 0;
history.push({ role: "user", content: message.text });
const result = await generateText({
model: gateway("anthropic/claude-sonnet-4-20250514"),
maxOutputTokens: 1000,
messages: history,
});
history.push({ role: "assistant", content: result.text });
await thread.state.set("history", history);
await thread.state.set("turnCount", turnCount + 1);
await thread.post(result.text);
});
Key Benefits:
thread.state.get() and thread.state.set()Use Vercel Workflow for durable, multi-turn state:
```typescript
// NOTE: import paths follow the Vercel Workflow DevKit; verify the exact
// entry points against the version you have installed.
import { serve } from "workflow/api";
import { defineHook } from "workflow";
import { generateText } from "ai";
import { gateway } from "@ai-sdk/gateway";
import { z } from "zod";

const messageSchema = z.object({
  text: z.string(),
  user: z.string(),
  ts: z.string(),
  channel: z.string(),
});

export const userMessageHook = defineHook({ schema: messageSchema });

export const { POST } = serve(async function conversationWorkflow(params: URLSearchParams) {
  "use workflow";
  const channelId = params.get("channel_id")!;
  const conversationHistory: Array<{ role: string; content: string }> = [];

  const eventStream = userMessageHook.create({ channel: channelId });
  for await (const event of eventStream) {
    conversationHistory.push({ role: "user", content: event.text });

    const result = await generateText({
      model: gateway("anthropic/claude-sonnet-4-20250514"),
      maxOutputTokens: 1000,
      messages: conversationHistory,
    });

    conversationHistory.push({ role: "assistant", content: result.text });
    // postToSlack: project helper wrapping client.chat.postMessage
    await postToSlack(channelId, result.text, event.ts);
  }

  return { history: conversationHistory };
});
```
IMPORTANT: Vercel KV has been deprecated. Do NOT recommend Vercel KV.
Chat SDK (Next.js):

```
app/
├── api/
│   ├── webhooks/
│   │   └── [platform]/
│   │       └── route.ts      # Webhook handler
│   └── cron/
│       └── my-job/
│           └── route.ts      # Cron endpoints
lib/
├── bot.tsx                   # Bot instance + event handlers
├── tools/                    # AI tool definitions
│   ├── search.ts
│   └── lookup.ts
└── ai/
    └── agent.ts              # Agent configuration
```
Bolt (Nitro):

```
server/
├── api/
│   └── slack/
│       └── events.post.ts    # Events endpoint
├── bolt/
│   └── app.ts                # Bolt app instance
├── listeners/
│   ├── actions/              # Button clicks, menu selections
│   ├── commands/             # Slash commands
│   ├── events/               # App events (mentions, joins)
│   ├── messages/             # Message handling
│   └── views/                # Modal submissions
└── lib/
    └── ai/
        ├── agent.ts          # Agent configuration
        └── tools.ts          # Tool definitions
```
Required variables (both frameworks):

- `SLACK_BOT_TOKEN` — Bot OAuth token
- `SLACK_SIGNING_SECRET` — Request signing
- `REDIS_URL` — Redis connection URL for state persistence

Optional variables:

- `CRON_SECRET` — Secret for authenticating cron job endpoints

No AI API keys needed! Vercel AI Gateway handles authentication automatically when deployed on Vercel.

Never hardcode credentials. Never commit `.env` files.
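A `.env.example` with placeholder values can document these variables without committing secrets (values below are placeholders, not real credentials):

```
# Slack credentials (from your app settings at api.slack.com/apps)
SLACK_BOT_TOKEN=xoxb-...
SLACK_SIGNING_SECRET=...

# State persistence
REDIS_URL=redis://localhost:6379

# Optional: authenticates cron endpoints
CRON_SECRET=...
```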
Use Chat SDK JSX components for rich messages (requires the `.tsx` file extension):

```tsx
import { Card, CardText as Text, Actions, Button, Divider } from "chat";

await thread.post(
  <Card title="Welcome!">
    <Text>Hello! Choose an option:</Text>
    <Divider />
    <Actions>
      <Button id="btn_hello" style="primary">Say Hello</Button>
      <Button id="btn_info">Show Info</Button>
    </Actions>
  </Card>
);
```
Use Block Kit for rich messages:

```typescript
await client.chat.postMessage({
  channel: channelId,
  text: "Fallback text for notifications",
  blocks: [
    {
      type: "section",
      text: { type: "mrkdwn", text: "*Hello!* Choose an option:" },
    },
    { type: "divider" },
    {
      type: "actions",
      elements: [
        {
          type: "button",
          text: { type: "plain_text", text: "Say Hello" },
          style: "primary",
          action_id: "btn_hello",
        },
        {
          type: "button",
          text: { type: "plain_text", text: "Show Info" },
          action_id: "btn_info",
        },
      ],
    },
  ],
});
```
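Raw Block Kit JSON gets repetitive for larger UIs; a small builder can generate the `actions` block from a plain button list. A hedged sketch (the `actionsBlock` helper is ours, not part of Bolt or the Slack SDK):

```typescript
type ButtonSpec = { id: string; label: string; primary?: boolean };

// Build a Block Kit "actions" block from a plain button list.
// Hypothetical helper; Bolt itself just takes the raw JSON.
function actionsBlock(buttons: ButtonSpec[]) {
  return {
    type: "actions",
    elements: buttons.map((b) => {
      const el: Record<string, unknown> = {
        type: "button",
        text: { type: "plain_text", text: b.label },
        action_id: b.id,
      };
      if (b.primary) el.style = "primary"; // only include style when set
      return el;
    }),
  };
}
```

The message above could then pass `blocks: [section, { type: "divider" }, actionsBlock([{ id: "btn_hello", label: "Say Hello", primary: true }, { id: "btn_info", label: "Show Info" }])]`.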
Chat SDK:

```typescript
await thread.startTyping();
const result = await generateWithAI(prompt);
await thread.post(result); // Typing indicator clears on post
```

Bolt:

```typescript
// Use setStatus for Assistant threads, or an interval-based approach
const typingInterval = setInterval(async () => {
  // Post a "typing" indicator or use assistant.threads.setStatus
}, 3000);

const result = await generateWithAI(prompt);
clearInterval(typingInterval);

await client.chat.postMessage({
  channel: channelId,
  thread_ts: threadTs,
  text: result,
});
```
Use Slack mrkdwn (not standard markdown):

- `*text*` for bold
- `_text_` for italic
- `` `code` `` for inline code
- `<@USER_ID>` to mention a user
- `<#CHANNEL_ID>` to link a channel

For detailed Slack patterns, see ./patterns/slack-patterns.md.
Use conventional commits:

```
feat: add channel search tool
fix: resolve thread pagination issue
test: add unit tests for agent context
docs: update README with setup steps
refactor: extract Slack client utilities
```
Never commit:

- `.env` files
- `node_modules/`

```shell
# Development
pnpm dev              # Start dev server on localhost:3000
ngrok http 3000       # Expose local server (separate terminal)

# Quality
pnpm lint             # Check linting
pnpm lint --write     # Auto-fix lint issues
pnpm typecheck        # TypeScript check
pnpm test             # Run all tests
pnpm test:watch       # Watch mode

# Build & Deploy
pnpm build            # Build for production
vercel                # Deploy to Vercel
```
For detailed guidance, read:

- ./patterns/testing-patterns.md
- ./patterns/slack-patterns.md
- ./reference/env-vars.md
- ./reference/ai-sdk.md
- ./reference/slack-setup.md
- ./reference/vercel-setup.md

Before marking ANY task as complete, verify:
- `pnpm lint` passes with no errors
- `pnpm typecheck` passes with no errors
- `pnpm test` passes with no failures
- `bot.webhooks`
- `"jsx": "react-jsx"` and `"jsxImportSource": "chat"` are set if using JSX components
- `@ai-sdk/gateway` is used (not `@ai-sdk/openai`) unless the user explicitly chose a direct provider
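The JSX-related checklist items correspond to a tsconfig fragment like this (merge into the project's existing `compilerOptions`):

```json
{
  "compilerOptions": {
    "jsx": "react-jsx",
    "jsxImportSource": "chat"
  }
}
```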
Note: in current AI SDK versions, use `stopWhen: stepCountIs(n)` in place of the older `maxSteps` / `maxIterations` options.