vercel-ai-sdk by existential-birds/beagle
npx skills add https://github.com/existential-birds/beagle --skill vercel-ai-sdk
The Vercel AI SDK provides React hooks and server utilities for building streaming chat interfaces with support for tool calls, file attachments, and multi-step reasoning.
import { useChat } from '@ai-sdk/react';
const { messages, status, sendMessage, stop, regenerate } = useChat({
  id: 'chat-id',
  messages: initialMessages,
  onFinish: ({ message, messages, isAbort, isError }) => {
    console.log('Chat finished');
  },
  onError: (error) => {
    console.error('Chat error:', error);
  }
});
// Send a message
sendMessage({ text: 'Hello', metadata: { createdAt: Date.now() } });
// Send with files
sendMessage({
text: 'Analyze this',
files: fileList // FileList or FileUIPart[]
});
The status field indicates the current state of the chat:
- ready: Chat is idle and ready to accept new messages
- submitted: Message sent to the API, awaiting the start of the response stream
- streaming: Response actively streaming from the API
- error: An error occurred during the request
Messages use the UIMessage type with a parts-based structure:
interface UIMessage {
id: string;
role: 'system' | 'user' | 'assistant';
metadata?: unknown;
parts: Array<UIMessagePart>; // text, file, tool-*, reasoning, etc.
}
Part types include:
- text: Text content with optional streaming state
- file: File attachments (images, documents)
- tool-{toolName}: Tool invocations with a state machine
- reasoning: AI reasoning traces
- data-{typeName}: Custom data parts
import { streamText, tool, convertToModelMessages } from 'ai';
import { openai } from '@ai-sdk/openai';
import { z } from 'zod';
const result = streamText({
  model: openai('gpt-4'),
  messages: convertToModelMessages(uiMessages),
  tools: {
    getWeather: tool({
      description: 'Get weather',
      inputSchema: z.object({ city: z.string() }),
      execute: async ({ city }) => {
        return { temperature: 72, weather: 'sunny' };
      }
    })
  }
});
return result.toUIMessageStreamResponse({
  originalMessages: uiMessages,
  onFinish: ({ messages }) => {
    // Save to database
  }
});
Client-Side Tool Execution:
const { addToolOutput } = useChat({
  onToolCall: async ({ toolCall }) => {
    if (toolCall.toolName === 'getLocation') {
      addToolOutput({
        tool: 'getLocation',
        toolCallId: toolCall.toolCallId,
        output: 'San Francisco'
      });
    }
  }
});
Rendering Tool States:
{message.parts.map(part => {
  if (part.type === 'tool-getWeather') {
    switch (part.state) {
      case 'input-streaming':
        return <pre>{JSON.stringify(part.input, null, 2)}</pre>;
      case 'input-available':
        return <div>Getting weather for {part.input.city}...</div>;
      case 'output-available':
        return <div>Weather: {part.output.weather}</div>;
      case 'output-error':
        return <div>Error: {part.errorText}</div>;
    }
  }
})}
Error Handling:
const { error, clearError } = useChat({
onError: (error) => {
toast.error(error.message);
}
});
// Clear error and reset to ready state
if (error) {
clearError();
}
const { regenerate } = useChat();
// Regenerate last assistant message
await regenerate();
// Regenerate specific message
await regenerate({ messageId: 'msg-123' });
import { DefaultChatTransport } from 'ai';
const { messages } = useChat({
  transport: new DefaultChatTransport({
    api: '/api/chat',
    prepareSendMessagesRequest: ({ id, messages, trigger, messageId }) => ({
      body: {
        chatId: id,
        lastMessage: messages[messages.length - 1],
        trigger,
        messageId
      }
    })
  })
});
// Throttle UI updates to reduce re-renders
const chat = useChat({
experimental_throttle: 100 // Update max once per 100ms
});
import { lastAssistantMessageIsCompleteWithToolCalls } from 'ai';
const chat = useChat({
sendAutomaticallyWhen: lastAssistantMessageIsCompleteWithToolCalls
// Automatically resend when all tool calls have outputs
});
The SDK provides full type inference for tools and messages:
import { tool, type InferUITools, type UIMessage, type UIDataTypes } from 'ai';
import { z } from 'zod';
const tools = {
  getWeather: tool({
    inputSchema: z.object({ city: z.string() }),
    execute: async ({ city }) => ({ weather: 'sunny' })
  })
};
type MyMessage = UIMessage<
  { createdAt: number }, // Metadata type
  UIDataTypes,
  InferUITools<typeof tools> // Tool types
>;
const { messages } = useChat<MyMessage>();
Messages use a parts array instead of a single content field. This allows a single message to interleave text, tool invocations, reasoning traces, file attachments, and custom data in order.
Tool parts progress through states:
- input-streaming: Tool input streaming (optional)
- input-available: Tool input complete
- approval-requested: Waiting for user approval (optional)
- approval-responded: User approved or denied (optional)
- output-available: Tool execution complete
- output-error: Tool execution failed
- output-denied: User denied approval
The SDK uses Server-Sent Events (SSE) with UIMessageChunk types:
- text-start, text-delta, text-end
- tool-input-available, tool-output-available
- reasoning-start, reasoning-delta, reasoning-end
- start, finish, abort
Server-side tools have an execute function and run on the API route.
Client-side tools omit execute and are handled via onToolCall and addToolOutput.
Best practices:
- Handle the error state and provide user feedback
- Use experimental_throttle for high-frequency updates
- Use status to implement appropriate loading states
- Use sendAutomaticallyWhen for multi-turn tool workflows
- Expose stop() to let users cancel long-running requests
- Validate incoming messages with validateUIMessages on the server
Weekly Installs: 63
Repository: https://github.com/existential-birds/beagle
GitHub Stars: 42
First Seen: Jan 20, 2026
Security Audits:
- Gen Agent Trust Hub: Pass
- Socket: Pass
- Snyk: Warn
Installed on:
- gemini-cli: 52
- opencode: 52
- claude-code: 49
- codex: 48
- github-copilot: 44
- cursor: 43