vercel-ai-sdk by wsimmonds/claude-nextjs-skills
npx skills add https://github.com/wsimmonds/claude-nextjs-skills --skill vercel-ai-sdk
Use this skill when:
- Implementing AI chat interfaces with the useChat hook

<critical>
NEVER accept "Module not found" errors as environment issues
YOU must install the required packages with the CORRECT package manager

Common packages needed:
- ai (core AI SDK)
- @ai-sdk/openai (OpenAI provider)
- @ai-sdk/anthropic (Anthropic provider)
- @modelcontextprotocol/sdk (MCP integration)
- zod (for tool schemas)
</critical>
<critical>
"Code is correct" is NOT enough
You must achieve FULL PASSING status
This is what it means to be an autonomous agent
</critical>
You are not just writing code - you are COMPLETING TASKS AUTONOMOUSLY.
This means:
❌ WRONG: "The code is correct, but the package isn't installed - that's an environment issue"
✅ CORRECT: "Build failed due to missing package - installing it now with npm install [package]"

❌ WRONG: "Tests pass but build fails - not my problem"
✅ CORRECT: "Build is failing - debugging the error and fixing it now"

❌ WRONG: "There's a framework bug, can't fix it"
✅ CORRECT: "Framework error detected - researching the issue, trying workarounds, debugging until I find a solution"

❌ WRONG: "The implementation is complete" (with failing tests)
✅ CORRECT: "Tests are failing - debugging and fixing until they all pass"
When you encounter "Module not found" errors:
Detect the package manager FIRST - Check for lockfiles:
ls -la | grep -E "lock"
# Look for: pnpm-lock.yaml, package-lock.json, yarn.lock, bun.lockb
Identify the package from the import statement
Error: Cannot find module '@ai-sdk/openai'
Import: import { openai } from '@ai-sdk/openai'
Package needed: @ai-sdk/openai
Install with the CORRECT package manager
# If pnpm-lock.yaml exists (MOST COMMON for Next.js evals):
pnpm install @ai-sdk/openai
# or
pnpm add @ai-sdk/openai
# If package-lock.json exists:
npm install @ai-sdk/openai
# If yarn.lock exists:
yarn add @ai-sdk/openai
# If bun.lockb exists:
bun install @ai-sdk/openai
Re-run build to verify
npm run build
# or pnpm run build, yarn build, bun run build
Fix any new errors that appear
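The detection-and-install flow above can be sketched as a small script (a sketch, assuming a POSIX shell; the package name is illustrative):

```shell
# detect_pm: print the package manager implied by the lockfile in directory $1.
detect_pm() {
  if [ -f "$1/pnpm-lock.yaml" ]; then echo pnpm
  elif [ -f "$1/package-lock.json" ]; then echo npm
  elif [ -f "$1/yarn.lock" ]; then echo yarn
  elif [ -f "$1/bun.lockb" ]; then echo bun
  else echo npm   # no lockfile found - fall back to npm
  fi
}

# Print (rather than run) the install command for a missing module.
PKG="@ai-sdk/openai"   # illustrative package name
case "$(detect_pm .)" in
  pnpm) echo "pnpm add $PKG" ;;
  yarn) echo "yarn add $PKG" ;;
  bun)  echo "bun install $PKG" ;;
  *)    echo "npm install $PKG" ;;
esac
```

Swap the echo for the real command once the detected manager matches the project's lockfile.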
⚠️ CRITICAL WARNING: Using the WRONG package manager (e.g., npm when the project uses pnpm) will:
When build fails:
When tests fail:
Task is ONLY complete when:
- Build passes (npm run build succeeds)
- Lint passes (npm run lint succeeds)
- Tests pass (npm run test succeeds)

NEVER stop at "code is correct" - achieve FULL PASSING status!
When implementing tool calling, you MUST use the tool() helper function from the 'ai' package.
// DO NOT DO THIS - This pattern is INCORRECT
import { z } from 'zod';
tools: {
myTool: {
description: 'My tool',
parameters: z.object({...}), // ❌ WRONG - "parameters" doesn't exist in v5
execute: async ({...}) => {...},
}
}
This will fail with: Type '{ description: string; parameters: ... }' is not assignable to type '{ inputSchema: FlexibleSchema<any>; ... }'
// ALWAYS DO THIS - This is the ONLY correct pattern
import { tool } from 'ai'; // ⚠️ MUST import tool
import { z } from 'zod';
tools: {
myTool: tool({ // ⚠️ MUST wrap with tool()
description: 'My tool',
inputSchema: z.object({...}), // ⚠️ MUST use "inputSchema" (not "parameters")
execute: async ({...}) => {...},
}),
}
Before implementing any tool, verify:
- Import tool from the 'ai' package: import { tool } from 'ai';
- Wrap the tool definition with tool({ ... })
- Use the inputSchema property (NOT parameters)
- Define the schema with z.object({ ... })
- Provide an execute function with async callback
- Provide a description string for the tool

❌ WRONG (v4 pattern):
const { messages, input, setInput, append } = useChat();
// Sending message
append({ content: text, role: 'user' });
✅ CORRECT (v5 pattern):
const { messages, sendMessage } = useChat();
const [input, setInput] = useState('');
// Sending message
sendMessage({ text: input });
❌ WRONG (v4 simple content):
<div>{message.content}</div>
✅ CORRECT (v5 parts-based):
<div>
{message.parts.map((part, index) =>
part.type === 'text' ? <span key={index}>{part.text}</span> : null
)}
</div>
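When you only need the plain text of a v5 message (for logging, persistence, or tests), a small helper can flatten the parts. The Part type below is a simplified stand-in for the SDK's richer part union:

```typescript
// Simplified stand-in for the v5 parts union (the real UIMessage
// has more part types: tool calls, reasoning, files, ...).
type TextPart = { type: 'text'; text: string };
type Part = TextPart | { type: string };

// Concatenate the text parts of a message, ignoring non-text parts.
function messageText(parts: Part[]): string {
  return parts
    .filter((p): p is TextPart => p.type === 'text')
    .map((p) => p.text)
    .join('');
}
```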
✅ PREFER: String-based (v5 recommended):
import { generateText } from 'ai';
const result = await generateText({
model: 'openai/gpt-4o', // String format
prompt: 'Hello',
});
✅ ALSO WORKS: Function-based (legacy support):
import { openai } from '@ai-sdk/openai';
import { generateText } from 'ai';
const result = await generateText({
model: openai('gpt-4o'), // Function format
prompt: 'Hello',
});
Purpose: Generate text for non-interactive use cases (email drafts, summaries, agents with tools).
Signature:
import { generateText } from 'ai';
const result = await generateText({
model: 'openai/gpt-4o', // String format: 'provider/model-id'
prompt: 'Your prompt here', // User input
system: 'Optional system message', // Optional system instructions
tools?: { ... }, // Optional tool calling
maxSteps?: 5, // For multi-step tool calling
});
Return Value:
{
text: string; // Generated text output
toolCalls: ToolCall[]; // Tool invocations made
finishReason: string; // Why generation stopped
usage: TokenUsage; // Token consumption
response: RawResponse; // Raw provider response
warnings: Warning[]; // Provider-specific alerts
}
Example:
// app/api/generate/route.ts
import { generateText } from 'ai';
export async function GET() {
const result = await generateText({
model: 'anthropic/claude-4-sonnet',
prompt: 'Why is the sky blue?',
});
return Response.json({ text: result.text });
}
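Beyond result.text, the other result fields are useful for logging and guards. A sketch of a small consumer, using a simplified stand-in for the result type (field names are illustrative, not the SDK's exact shape):

```typescript
// Simplified stand-in for the generateText result shape.
interface GenerateTextResultLike {
  text: string;
  finishReason: string;
  usage: { totalTokens: number };
}

// Summarize a result for logging; flag generations cut off by token limits.
function summarizeResult(r: GenerateTextResultLike): string {
  const truncated = r.finishReason === 'length' ? ' (truncated)' : '';
  return `${r.usage.totalTokens} tokens${truncated}: ${r.text.slice(0, 40)}`;
}
```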
Purpose: Stream responses for interactive chat applications.
Signature:
import { streamText } from 'ai';
const result = streamText({
model: 'openai/gpt-4o',
prompt: 'Your prompt here',
system: 'Optional system message',
messages?: ModelMessage[], // For chat history
tools?: { ... },
onFinish?: async (result) => { ... },
onError?: async (error) => { ... },
});
Return Methods:
// For chat applications with useChat hook
result.toUIMessageStreamResponse();
// For simple text streaming
result.toTextStreamResponse();
Example - Chat API Route:
// app/api/chat/route.ts
import { streamText, convertToModelMessages } from 'ai';
import type { UIMessage } from 'ai';
export async function POST(req: Request) {
const { messages }: { messages: UIMessage[] } = await req.json();
const result = streamText({
model: 'openai/gpt-4o',
system: 'You are a helpful assistant.',
messages: convertToModelMessages(messages),
});
return result.toUIMessageStreamResponse();
}
Purpose: Build interactive chat UIs with streaming support.
Signature:
import { useChat } from '@ai-sdk/react';
const {
messages, // Array of UIMessage with parts-based structure
sendMessage, // Function to send messages (replaces append)
status, // 'submitted' | 'streaming' | 'ready' | 'error'
stop, // Abort current streaming
regenerate, // Reprocess last message
setMessages, // Manually modify history
error, // Error object if request fails
reload, // Retry after error
} = useChat({
api: '/api/chat', // API endpoint
onFinish?: (message) => { ... },
onError?: (error) => { ... },
});
Complete Example:
'use client';
import { useChat } from '@ai-sdk/react';
import { useState } from 'react';
export default function ChatPage() {
const { messages, sendMessage, status } = useChat();
const [input, setInput] = useState('');
const handleSubmit = (e: React.FormEvent) => {
e.preventDefault();
if (!input.trim()) return;
sendMessage({ text: input });
setInput('');
};
return (
<div>
<div>
{messages.map((message) => (
<div key={message.id}>
<strong>{message.role}:</strong>
{message.parts.map((part, index) =>
part.type === 'text' ? (
<span key={index}>{part.text}</span>
) : null
)}
</div>
))}
</div>
<form onSubmit={handleSubmit}>
<input
value={input}
onChange={(e) => setInput(e.target.value)}
placeholder="Type a message..."
disabled={status === 'streaming'}
/>
<button type="submit" disabled={status === 'streaming'}>
Send
</button>
</form>
</div>
);
}
Purpose: Enable AI models to call functions with structured parameters.
Defining Tools:
import { tool } from 'ai';
import { z } from 'zod';
const weatherTool = tool({
description: 'Get the weather in a location',
inputSchema: z.object({
location: z.string().describe('The location to get the weather for'),
unit: z.enum(['C', 'F']).describe('Temperature unit'),
}),
execute: async ({ location, unit }) => {
// Fetch or mock weather data
return {
location,
temperature: 24,
unit,
condition: 'Sunny',
};
},
});
Using Tools with generateText/streamText:
// app/api/chat/route.ts
import { streamText, convertToModelMessages, tool } from 'ai';
import { z } from 'zod';
import type { UIMessage } from 'ai';
export async function POST(req: Request) {
const { messages }: { messages: UIMessage[] } = await req.json();
const result = streamText({
model: 'openai/gpt-4o',
messages: convertToModelMessages(messages),
tools: {
getWeather: tool({
description: 'Get the weather for a location',
inputSchema: z.object({
city: z.string().describe('The city to get the weather for'),
unit: z.enum(['C', 'F']).describe('The unit to display the temperature in'),
}),
execute: async ({ city, unit }) => {
// Mock response
return `It is currently 24°${unit} and Sunny in ${city}!`;
},
}),
},
});
return result.toUIMessageStreamResponse();
}
Multi-Step Tool Calling:
const result = await generateText({
model: 'openai/gpt-4o',
tools: {
weather: weatherTool,
search: searchTool,
},
prompt: 'What is the weather in San Francisco and find hotels there?',
maxSteps: 5, // Allow up to 5 tool call steps
});
Purpose: Convert text into numerical vectors for semantic search, RAG, or similarity.
Signature:
import { embed } from 'ai';
import { openai } from '@ai-sdk/openai';
const result = await embed({
model: openai.textEmbeddingModel('text-embedding-3-small'),
value: 'Text to embed',
});
Return Value:
{
embedding: number[]; // Numerical array representing the text
usage: { tokens: number }; // Token consumption
response: RawResponse; // Raw provider response
}
Example - Embedding API Route:
// app/api/embed/route.ts
import { embed } from 'ai';
import { openai } from '@ai-sdk/openai';
export async function GET() {
const { embedding, usage } = await embed({
model: openai.textEmbeddingModel('text-embedding-3-small'),
value: 'sunny day at the beach',
});
return Response.json({ embedding, usage });
}
Batch Embeddings:
import { embedMany } from 'ai';
const { embeddings, usage } = await embedMany({
model: openai.textEmbeddingModel('text-embedding-3-small'),
values: [
'sunny day at the beach',
'rainy afternoon in the city',
'snowy mountain landscape',
],
});
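The embeddings themselves are just numeric vectors; similarity is typically scored with cosine similarity. A self-contained sketch with no SDK dependency:

```typescript
// Cosine similarity between two embedding vectors:
// dot(a, b) / (|a| * |b|), ranging from -1 to 1.
function cosineSimilarity(a: number[], b: number[]): number {
  if (a.length !== b.length) throw new Error('Vector length mismatch');
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// Rank candidate embeddings against a query embedding (highest first).
function rankBySimilarity(
  query: number[],
  candidates: { text: string; embedding: number[] }[],
): { text: string; score: number }[] {
  return candidates
    .map((c) => ({ text: c.text, score: cosineSimilarity(query, c.embedding) }))
    .sort((x, y) => y.score - x.score);
}
```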
convertToModelMessages: Converts UI messages from useChat into ModelMessage objects for AI functions.
import { convertToModelMessages } from 'ai';
import type { UIMessage } from 'ai';
export async function POST(req: Request) {
const { messages }: { messages: UIMessage[] } = await req.json();
const result = streamText({
model: 'openai/gpt-4o',
messages: convertToModelMessages(messages), // Convert for model
});
return result.toUIMessageStreamResponse();
}
Purpose: Connect to external MCP servers for dynamic tool access.
Example:
// app/api/chat/route.ts
import { experimental_createMCPClient, streamText } from 'ai';
import { StreamableHTTPClientTransport } from '@modelcontextprotocol/sdk/client/streamableHttp.js';
export async function POST(req: Request) {
const { prompt }: { prompt: string } = await req.json();
try {
// Connect to MCP server
const httpTransport = new StreamableHTTPClientTransport(
new URL('http://localhost:3000/mcp')
);
const httpClient = await experimental_createMCPClient({
transport: httpTransport,
});
// Fetch tools from MCP server
const tools = await httpClient.tools();
const response = streamText({
model: 'openai/gpt-4o',
tools,
prompt,
onFinish: async () => {
await httpClient.close(); // Clean up
},
onError: async () => {
await httpClient.close(); // Clean up on error
},
});
return response.toTextStreamResponse();
} catch (error) {
return new Response('Internal Server Error', { status: 500 });
}
}
Key Points:
- Use experimental_createMCPClient (note: experimental API)
- Close the client in both onFinish and onError
- Fetch tools dynamically with httpClient.tools()
- Requires the @modelcontextprotocol/sdk package

// Format: 'provider/model-id'
model: 'openai/gpt-4o'
model: 'anthropic/claude-4-sonnet'
model: 'google/gemini-2.0-flash'
import { openai } from '@ai-sdk/openai';
import { anthropic } from '@ai-sdk/anthropic';
model: openai('gpt-4o')
model: anthropic('claude-4-sonnet')
import { openai } from '@ai-sdk/openai';
// Text embeddings use a different method
openai.textEmbeddingModel('text-embedding-3-small')
openai.textEmbeddingModel('text-embedding-3-large')
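The string format is simply 'provider/model-id'. A tiny parser illustrating the convention (a hypothetical helper, not part of the SDK):

```typescript
// Split a 'provider/model-id' string into its two components.
// Model ids may themselves contain '/', so split on the first one only.
function parseModelSpec(spec: string): { provider: string; modelId: string } {
  const idx = spec.indexOf('/');
  if (idx === -1) throw new Error(`Invalid model spec: ${spec}`);
  return { provider: spec.slice(0, idx), modelId: spec.slice(idx + 1) };
}
```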
import type {
UIMessage, // Message type from useChat
ModelMessage, // Message type for model functions
ToolCall, // Tool call information
TokenUsage, // Token consumption data
} from 'ai';
import { tool } from 'ai';
import { z } from 'zod';
// Tool helper infers execute parameter types
const myTool = tool({
description: 'My tool',
inputSchema: z.object({
param1: z.string(),
param2: z.number(),
}),
execute: async ({ param1, param2 }) => {
// param1 is inferred as string
// param2 is inferred as number
return { result: 'success' };
},
});
// app/api/chat/route.ts
import type { UIMessage } from 'ai';
export async function POST(req: Request): Promise<Response> {
const { messages }: { messages: UIMessage[] } = await req.json();
// ... implementation
}
Client (app/page.tsx):
'use client';
import { useChat } from '@ai-sdk/react';
import { useState } from 'react';
export default function Chat() {
const { messages, sendMessage, status } = useChat();
const [input, setInput] = useState('');
return (
<div>
{messages.map((m) => (
<div key={m.id}>
<strong>{m.role}:</strong>
{m.parts.map((part, i) =>
part.type === 'text' ? <span key={i}>{part.text}</span> : null
)}
</div>
))}
<form onSubmit={(e) => {
e.preventDefault();
sendMessage({ text: input });
setInput('');
}}>
<input value={input} onChange={(e) => setInput(e.target.value)} />
<button disabled={status === 'streaming'}>Send</button>
</form>
</div>
);
}
Server (app/api/chat/route.ts):
import { streamText, convertToModelMessages } from 'ai';
import type { UIMessage } from 'ai';
export async function POST(req: Request) {
const { messages }: { messages: UIMessage[] } = await req.json();
const result = streamText({
model: 'openai/gpt-4o',
system: 'You are a helpful assistant.',
messages: convertToModelMessages(messages),
});
return result.toUIMessageStreamResponse();
}
Server with tool calling:
import { streamText, convertToModelMessages, tool } from 'ai';
import { z } from 'zod';
import type { UIMessage } from 'ai';
export async function POST(req: Request) {
const { messages }: { messages: UIMessage[] } = await req.json();
const result = streamText({
model: 'openai/gpt-4o',
messages: convertToModelMessages(messages),
tools: {
getWeather: tool({
description: 'Get weather for a city',
inputSchema: z.object({
city: z.string(),
}),
execute: async ({ city }) => {
// API call or mock data
return { city, temp: 72, condition: 'Sunny' };
},
}),
searchWeb: tool({
description: 'Search the web',
inputSchema: z.object({
query: z.string(),
}),
execute: async ({ query }) => {
// Search implementation
return { results: ['...'] };
},
}),
},
});
return result.toUIMessageStreamResponse();
}
// app/api/summarize/route.ts
import { generateText } from 'ai';
export async function POST(req: Request) {
const { text } = await req.json();
const result = await generateText({
model: 'anthropic/claude-4-sonnet',
system: 'You are a summarization expert.',
prompt: `Summarize this text:\n\n${text}`,
});
return Response.json({ summary: result.text });
}
// app/api/search/route.ts
import { embed } from 'ai';
import { openai } from '@ai-sdk/openai';
export async function POST(req: Request) {
const { query } = await req.json();
// Generate embedding for search query
const { embedding } = await embed({
model: openai.textEmbeddingModel('text-embedding-3-small'),
value: query,
});
// Use embedding for similarity search in vector database
// const results = await vectorDB.search(embedding);
return Response.json({ embedding, results: [] });
}
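For the similarity-search step, ranking documents by cosine similarity against the query embedding is a common approach (the ai package also exports a cosineSimilarity helper you can use instead). A standalone sketch of the underlying math:

```typescript
// Cosine similarity of two equal-length embedding vectors:
// dot(a, b) / (|a| * |b|), ranging from -1 to 1.
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0;
  let normA = 0;
  let normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

cosineSimilarity([1, 0], [1, 0]); // → 1
```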
This is the most common and critical mistake. Always use the tool() helper!
// ❌ WRONG - Plain object (WILL CAUSE BUILD FAILURE)
import { z } from 'zod';
tools: {
myTool: {
description: 'My tool',
parameters: z.object({ // ❌ Wrong property name
city: z.string(),
}),
execute: async ({ city }) => { ... },
},
}
// Build error: Type '{ description: string; parameters: ... }' is not assignable
// ✅ CORRECT - Use tool() helper (REQUIRED)
import { tool } from 'ai'; // ⚠️ MUST import tool
import { z } from 'zod';
tools: {
myTool: tool({ // ⚠️ MUST use tool() wrapper
description: 'My tool',
inputSchema: z.object({ // ⚠️ Use inputSchema (not parameters)
city: z.string(),
}),
execute: async ({ city }) => { ... },
}),
}
// ❌ WRONG - v4 pattern
const { input, setInput, append } = useChat();
append({ content: 'Hello', role: 'user' });
// ✅ CORRECT - v5 pattern
const { sendMessage } = useChat();
const [input, setInput] = useState('');
sendMessage({ text: 'Hello' });
// ❌ WRONG - v4 pattern
<div>{message.content}</div>
// ✅ CORRECT - v5 parts-based
<div>
{message.parts.map((part, i) =>
part.type === 'text' ? <span key={i}>{part.text}</span> : null
)}
</div>
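If v4-era messages are persisted somewhere (a single content string per message), they can be adapted into the v5 parts shape before rendering. A hypothetical adapter with simplified stand-in types, not the SDK's own:

```typescript
// Simplified shapes for illustration; the real SDK types carry more fields.
type V4Message = { id: string; role: 'user' | 'assistant'; content: string };
type V5Message = {
  id: string;
  role: 'user' | 'assistant';
  parts: { type: 'text'; text: string }[];
};

// Wrap the old content string in a single text part.
function toV5Message(m: V4Message): V5Message {
  return { id: m.id, role: m.role, parts: [{ type: 'text', text: m.content }] };
}
```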
// ❌ WRONG - passing UIMessages directly
const result = streamText({
model: 'openai/gpt-4o',
messages: messages, // UIMessage[] - type error
});
// ✅ CORRECT - convert to ModelMessage[]
const result = streamText({
model: 'openai/gpt-4o',
messages: convertToModelMessages(messages),
});
// ❌ WRONG - no cleanup
const httpClient = await experimental_createMCPClient({
transport: httpTransport,
});
const tools = await httpClient.tools();
const response = streamText({ model, tools, prompt });
return response.toTextStreamResponse();
// ✅ CORRECT - cleanup in callbacks
const response = streamText({
model,
tools,
prompt,
onFinish: async () => {
await httpClient.close();
},
onError: async () => {
await httpClient.close();
},
});
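Since either callback may end up releasing the same client, an idempotent close guard (a hypothetical helper, not part of the SDK) keeps it safe to call close from both onFinish and onError:

```typescript
// Wrap a close function so repeated calls release the resource only once.
function closeOnce(close: () => Promise<void>): () => Promise<void> {
  let closed = false;
  return async () => {
    if (closed) return;
    closed = true;
    await close();
  };
}

// Usage sketch: const safeClose = closeOnce(() => httpClient.close());
// then pass safeClose to both the onFinish and onError callbacks.
```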
// ❌ WRONG - using text stream for useChat
return result.toTextStreamResponse(); // Won't work with useChat hook
// ✅ CORRECT - use UI message stream for useChat
return result.toUIMessageStreamResponse();
// ✅ ALSO CORRECT - text stream for non-chat scenarios
// For simple text streaming (not using useChat hook)
return result.toTextStreamResponse();
// ❌ WRONG - using regular model method
const { embedding } = await embed({
model: openai('text-embedding-3-small'), // Wrong method
value: 'text',
});
// ✅ CORRECT - use textEmbeddingModel
const { embedding } = await embed({
model: openai.textEmbeddingModel('text-embedding-3-small'),
value: 'text',
});
When migrating from v4 to v5, update:
- Replace append with sendMessage in useChat
- Remove input, setInput, handleInputChange from the useChat destructuring
- Manage input locally with const [input, setInput] = useState('')
- Change message.content rendering to message.parts.map(...)
- Send messages with the { text: input } structure
- Ensure convertToModelMessages is used in API routes

When implementing AI SDK features, ask:
Is this client-side or server-side?
- Client-side: the useChat hook
- Server-side: generateText or streamText

Do I need streaming or non-streaming?
- Streaming chat: streamText + toUIMessageStreamResponse()
- Non-streaming: generateText
- Streaming plain text (no chat UI): streamText + toTextStreamResponse()

| Task | Function | Key Parameters |
|---|---|---|
| Generate text | generateText() | model, prompt, system, tools |
| Stream text | streamText() | model, messages, tools, onFinish |
When in doubt, check the official AI SDK documentation.
Remember: AI SDK v5 uses string-based model specification, parts-based messages, sendMessage instead of append, and requires convertToModelMessages in API routes.
Before finishing, verify:

- toUIMessageStreamResponse() is used (not v4 streaming methods)
- Tools use the tool() helper and inputSchema (zod)
- Models are specified as strings ('provider/model-id')
- The correct types are imported (UIMessage, ModelMessage)
- Server routes use generateText or streamText

Am I using the correct message format?

- Client side: UIMessage[] with a parts property
- Server side: converted with convertToModelMessages() to ModelMessage[]
- Rendering: message.parts.map(...)

Is my model specification correct?

- Use the string form: 'openai/gpt-4o' (not openai('gpt-4o'))
- Embedding models use openai.textEmbeddingModel('text-embedding-3-small')

Do I need embeddings?

- embed for single values
- embedMany for batches
- Always via the textEmbeddingModel() method

| Task | Function | Key Parameters |
|---|---|---|
| Chat UI | useChat() | api, onFinish, onError |
| Tool calling | tool() | description, inputSchema, execute |
| Text embedding | embed() | model, value |
| Batch embedding | embedMany() | model, values |
| Message conversion | convertToModelMessages() | messages (UIMessage[]) |
| MCP integration | experimental_createMCPClient() | transport |