vercel-ai-sdk by fluid-tools/claude-skills
npx skills add https://github.com/fluid-tools/claude-skills --skill vercel-ai-sdk

Use this skill when:
- implementing AI chat interfaces with the useChat hook
- building agent applications with ToolLoopAgent
- generating structured output with Output.object(), Output.array(), etc.

NEVER accept "Module not found" errors as environment issues
YOU must install the required packages with the CORRECT package manager
Common packages needed:
- ai (core AI SDK)
- @ai-sdk/openai (OpenAI provider)
- @ai-sdk/anthropic (Anthropic provider)
- @ai-sdk/mcp (MCP integration)
- @modelcontextprotocol/sdk (MCP client SDK)
- zod (for tool schemas)
"Code is correct" is NOT enough
You must achieve FULL PASSING status
This is what it means to be an autonomous agent
You are not just writing code - you are COMPLETING TASKS AUTONOMOUSLY.
This means:
❌ WRONG: "The code is correct, but the package isn't installed - that's an environment issue" ✅ CORRECT: "Build failed due to a missing package - installing it now with npm install [package]"
❌ WRONG: "Tests pass but the build fails - not my problem" ✅ CORRECT: "The build is failing - debugging the error and fixing it now"
❌ WRONG: "There's a framework bug, can't fix it" ✅ CORRECT: "Framework error detected - researching the issue, trying workarounds, debugging until I find a solution"
❌ WRONG: "The implementation is complete" (with failing tests) ✅ CORRECT: "Tests are failing - debugging and fixing until they all pass"
When you encounter "Module not found" errors:
Detect the package manager FIRST - check for lockfiles:
ls -la | grep -E "lock"
# Look for: pnpm-lock.yaml, package-lock.json, yarn.lock, bun.lockb
Identify the package from the import statement
Error: Cannot find module '@ai-sdk/anthropic'
Import: import { anthropic } from '@ai-sdk/anthropic'
Package needed: @ai-sdk/anthropic
Install with the CORRECT package manager
# If pnpm-lock.yaml exists (most common for Next.js evals):
pnpm install @ai-sdk/anthropic
# or
pnpm add @ai-sdk/anthropic
# If package-lock.json exists:
npm install @ai-sdk/anthropic
# If yarn.lock exists:
yarn add @ai-sdk/anthropic
# If bun.lockb exists:
bun install @ai-sdk/anthropic
Re-run the build to verify
npm run build
# or pnpm run build, yarn build, bun run build
Fix any new errors that appear
⚠️ CRITICAL WARNING: Using the WRONG package manager (e.g., npm when the project uses pnpm) will cause:
When the build fails:
When tests fail:
The task is ONLY complete when:
- the build passes (npm run build succeeds)
- lint passes (npm run lint succeeds)
- tests pass (npm run test succeeds)

NEVER stop at "code is correct" - achieve FULL PASSING status!
In v6, generateObject and streamObject are DEPRECATED. Use generateText/streamText with Output helpers instead.
// DO NOT USE - DEPRECATED in v6
import { generateObject } from "ai";
const result = await generateObject({
model: anthropic("claude-sonnet-4-5"),
schema: z.object({
sentiment: z.enum(["positive", "neutral", "negative"]),
}),
prompt: "Analyze sentiment",
});
// USE THIS INSTEAD - v6 pattern
import { generateText, Output } from "ai";
import { anthropic } from "@ai-sdk/anthropic";
import { z } from "zod";
const { output } = await generateText({
model: anthropic("claude-sonnet-4-5"),
output: Output.object({
schema: z.object({
sentiment: z.enum(["positive", "neutral", "negative"]),
topics: z.array(z.string()),
}),
}),
prompt: "Analyze this feedback...",
});
// Access typed output
console.log(output.sentiment); // 'positive' | 'neutral' | 'negative'
console.log(output.topics); // string[]
| Helper | Purpose | Example |
|---|---|---|
| Output.object() | Generate a typed object | Output.object({ schema: z.object({...}) }) |
| Output.array() | Generate a typed array | Output.array({ schema: z.string() }) |
| Output.choice() | Generate an enum value | Output.choice({ choices: ['A', 'B', 'C'] }) |
| Output.json() | Unstructured JSON | Output.json() |
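Output.choice() constrains the result to one of the supplied literals: the SDK accepts the model's raw answer only when it matches an allowed choice. A pure-TypeScript sketch of that check (validateChoice is a hypothetical stand-in, not an SDK export):

```typescript
// Hypothetical stand-in for the validation Output.choice() performs:
// accept the model's raw answer only if it is one of the allowed choices.
function validateChoice(choices: string[], raw: string): string {
  if (!choices.includes(raw)) {
    throw new Error(`Model returned "${raw}", not one of: ${choices.join(", ")}`);
  }
  return raw;
}

console.log(validateChoice(["positive", "neutral", "negative"], "neutral")); // "neutral"
```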
When implementing tool calling, you MUST use the tool() helper function from the 'ai' package.
// DO NOT DO THIS - this pattern is INCORRECT
import { z } from 'zod';
tools: {
myTool: {
description: 'My tool',
parameters: z.object({...}), // ❌ WRONG - "parameters" doesn't exist in v6
execute: async ({...}) => {...},
}
}
This will fail with: Type '{ description: string; parameters: ... }' is not assignable to type '{ inputSchema: FlexibleSchema<any>; ... }'
// ALWAYS DO THIS - the ONLY correct pattern
import { tool } from 'ai'; // ⚠️ MUST import tool
import { z } from 'zod';
tools: {
myTool: tool({ // ⚠️ MUST wrap with tool()
description: 'My tool',
inputSchema: z.object({...}), // ⚠️ MUST use "inputSchema" (not "parameters")
execute: async ({...}) => {...},
}),
}
Before implementing any tool, verify:
- Imported tool from the 'ai' package: import { tool } from 'ai';
- Wrapped the tool definition with tool({ ... })
- Used the inputSchema property (NOT parameters)
- Defined the schema with z.object({ ... })
- Provided an execute function with an async callback
- Provided a description string for the tool

import { ToolLoopAgent, tool, stepCountIs } from "ai";
import { anthropic } from "@ai-sdk/anthropic";
import { z } from "zod";
const myAgent = new ToolLoopAgent({
model: anthropic("claude-sonnet-4-5"),
instructions: "You are a helpful assistant that can search and analyze data.",
tools: {
getData: tool({
description: "Fetch data from API",
inputSchema: z.object({
query: z.string(),
}),
execute: async ({ query }) => {
// Implement data fetching
return { result: "data for " + query };
},
}),
analyzeData: tool({
description: "Analyze fetched data",
inputSchema: z.object({
data: z.string(),
}),
execute: async ({ data }) => {
return { analysis: "Analysis of " + data };
},
}),
},
stopWhen: stepCountIs(20), // Stop after at most 20 steps
});
// Non-streaming execution
const { text, toolCalls } = await myAgent.generate({
prompt: "Find and analyze user data",
});
// Streaming execution
const stream = myAgent.stream({ prompt: "Find and analyze user data" });
for await (const chunk of stream) {
// Handle streaming chunks
}
// app/api/agent/route.ts
import { createAgentUIStreamResponse } from "ai";
import { myAgent } from "@/agents/my-agent";
export async function POST(request: Request) {
const { messages } = await request.json();
return createAgentUIStreamResponse({
agent: myAgent,
uiMessages: messages,
});
}
| Parameter | Purpose | Example |
|---|---|---|
| model | AI model to use | anthropic('claude-sonnet-4-5') |
| instructions | System prompt | 'You are a helpful assistant.' |
| tools | Available tools | { toolName: tool({...}) } |
| stopWhen | Termination condition | stepCountIs(20) |
| toolChoice | Tool usage mode | 'auto', 'required', 'none' |
| output | Structured output mode | Output.object({...}) |
| prepareStep | Dynamic per-step adjustments | Function returning step config |
| prepareCall | Runtime option injection | Async function for RAG, etc. |
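prepareStep is an ordinary function the agent calls before each step, so its logic can be sketched and checked in isolation. The sketch below assumes prepareStep receives { stepNumber } and may return partial setting overrides; the exact hook signature and override fields should be checked against the SDK docs:

```typescript
// Sketch of a per-step hook: switch to a cheaper model once the first
// few planning steps are done. The { stepNumber } input shape and the
// override fields below are assumptions, not verified SDK types.
type StepOverrides = { model?: string; temperature?: number };

function prepareStep({ stepNumber }: { stepNumber: number }): StepOverrides {
  if (stepNumber > 3) {
    // Later steps are mostly tool-result summarization; a small model suffices.
    return { model: "claude-haiku-4-5", temperature: 0.2 };
  }
  return {}; // keep the agent's defaults
}

console.log(prepareStep({ stepNumber: 1 })); // {} (keep defaults)
console.log(prepareStep({ stepNumber: 5 })); // override applied
```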
❌ WRONG (v5 pattern):
const { messages, input, setInput, append } = useChat();
// Send a message
append({ content: text, role: "user" });
✅ CORRECT (v6 pattern):
const { messages, sendMessage, status, addToolOutput } = useChat();
const [input, setInput] = useState('');
// Send a message
sendMessage({ text: input });
// New in v6: handle tool outputs
addToolOutput({ toolCallId: 'xxx', result: { ... } });
❌ WRONG (v5 simple content):
<div>{message.content}</div>
✅ CORRECT (v6 parts-based):
<div>
{message.parts.map((part, index) =>
part.type === 'text' ? <span key={index}>{part.text}</span> : null
)}
</div>
❌ WRONG (v5):
return result.toDataStreamResponse();
✅ CORRECT (v6):
return result.toUIMessageStreamResponse();
import { anthropic } from "@ai-sdk/anthropic";
import { openai } from "@ai-sdk/openai";
// Use provider functions (direct provider access)
model: anthropic("claude-sonnet-4-5");
model: anthropic("claude-opus-4-5");
model: anthropic("claude-haiku-4-5");
model: openai("gpt-4o");
model: openai("gpt-4o-mini");
Purpose: Use the Vercel AI Gateway for unified model access, rate limiting, caching, and observability across multiple providers.
Import:
import { gateway } from "ai";
Anthropic models available via the gateway:
model: gateway("anthropic/claude-sonnet-4-5");
model: gateway("anthropic/claude-haiku-4-5");
model: gateway("anthropic/claude-opus-4-5");
When to use the gateway:
When to use a direct provider:
Example:
import { generateText, gateway } from "ai";
const result = await generateText({
model: gateway("anthropic/claude-sonnet-4-5"),
prompt: "Hello, world!",
});
Comparison:
// Option 1: direct provider
import { anthropic } from "@ai-sdk/anthropic";
model: anthropic("claude-sonnet-4-5");
// Option 2: gateway (recommended for production)
import { gateway } from "ai";
model: gateway("anthropic/claude-sonnet-4-5");
Purpose: Generate text for non-interactive use cases (email drafts, summaries, agents with tools).
Signature:
import { generateText, Output } from 'ai';
import { anthropic } from '@ai-sdk/anthropic';
const result = await generateText({
model: anthropic('claude-sonnet-4-5'),
prompt: 'Your prompt here',
system: 'Optional system message',
tools?: { ... },
maxSteps?: 5,
output?: Output.object({ schema: z.object({...}) }),
});
Return value:
{
text: string; // Generated text output
output?: T; // Typed structured output (if Output is specified)
toolCalls: ToolCall[]; // Tool invocations made
finishReason: string; // Why generation stopped
usage: TokenUsage; // Token consumption
response: RawResponse; // Raw provider response
warnings: Warning[]; // Provider-specific warnings
}
Example:
// app/api/generate/route.ts
import { generateText } from "ai";
import { anthropic } from "@ai-sdk/anthropic";
export async function GET() {
const result = await generateText({
model: anthropic("claude-sonnet-4-5"),
prompt: "Why is the sky blue?",
});
return Response.json({ text: result.text });
}
Purpose: Stream responses for interactive chat applications.
Signature:
import { streamText } from 'ai';
import { anthropic } from '@ai-sdk/anthropic';
const result = streamText({
model: anthropic('claude-sonnet-4-5'),
prompt: 'Your prompt here',
system: 'Optional system message',
messages?: ModelMessage[],
tools?: { ... },
onChunk?: (chunk) => { ... },
onStepFinish?: (step) => { ... },
onFinish?: async (result) => { ... },
onError?: async (error) => { ... },
});
Return methods:
// For chat applications using the useChat hook
result.toUIMessageStreamResponse();
// For simple text streaming
result.toTextStreamResponse();
Example - chat API route:
// app/api/chat/route.ts
import { streamText, convertToModelMessages } from "ai";
import { anthropic } from "@ai-sdk/anthropic";
import type { UIMessage } from "ai";
export async function POST(req: Request) {
const { messages }: { messages: UIMessage[] } = await req.json();
const result = streamText({
model: anthropic("claude-sonnet-4-5"),
system: "You are a helpful assistant.",
messages: convertToModelMessages(messages),
});
return result.toUIMessageStreamResponse();
}
Purpose: Build interactive chat UIs with streaming support.
Signature:
import { useChat } from '@ai-sdk/react';
const {
messages, // Array of UIMessages with parts-based structure
sendMessage, // Function to send messages (replaces append)
status, // 'submitted' | 'streaming' | 'ready' | 'error'
stop, // Abort the current stream
regenerate, // Reprocess the last message
setMessages, // Manually modify history
error, // Error object if the request fails
clearError, // Clear error state
addToolOutput, // Submit tool results (NEW in v6)
resumeStream, // Resume an interrupted stream (NEW in v6)
} = useChat({
api: '/api/chat',
id?: 'chat-id',
messages?: initialMessages,
onToolCall?: async (toolCall) => { ... },
onFinish?: (message) => { ... },
onError?: (error) => { ... },
sendAutomaticallyWhen?: (messages) => boolean,
resume?: true,
});
Complete example:
'use client';
import { useChat } from '@ai-sdk/react';
import { useState } from 'react';
export default function ChatPage() {
const { messages, sendMessage, status, addToolOutput } = useChat({
onToolCall: async ({ toolCall }) => {
// Handle client-side tool execution
if (toolCall.name === 'confirm') {
const result = await showConfirmDialog(toolCall.args);
addToolOutput({ toolCallId: toolCall.id, result });
}
},
});
const [input, setInput] = useState('');
const handleSubmit = (e: React.FormEvent) => {
e.preventDefault();
if (!input.trim()) return;
sendMessage({ text: input });
setInput('');
};
return (
<div>
<div>
{messages.map((message) => (
<div key={message.id}>
<strong>{message.role}:</strong>
{message.parts.map((part, index) => {
switch (part.type) {
case 'text':
return <span key={index}>{part.text}</span>;
case 'tool-call':
return <div key={index}>Tool: {part.name}</div>;
default:
return null;
}
})}
</div>
))}
</div>
<form onSubmit={handleSubmit}>
<input
value={input}
onChange={(e) => setInput(e.target.value)}
placeholder="Type a message..."
disabled={status === 'streaming'}
/>
<button type="submit" disabled={status === 'streaming'}>
Send
</button>
</form>
</div>
);
}
Purpose: Enable AI models to call functions with structured parameters.
Defining tools:
import { tool } from "ai";
import { z } from "zod";
const weatherTool = tool({
description: "Get the weather in a location",
inputSchema: z.object({
location: z.string().describe("The location to get the weather for"),
unit: z.enum(["C", "F"]).describe("Temperature unit"),
}),
outputSchema: z.object({
temperature: z.number(),
condition: z.string(),
}),
execute: async ({ location, unit }) => {
// Fetch or mock weather data
return {
temperature: 24,
condition: "Sunny",
};
},
});
Using tools with generateText/streamText:
// app/api/chat/route.ts
import { streamText, convertToModelMessages, tool } from "ai";
import { anthropic } from "@ai-sdk/anthropic";
import { z } from "zod";
import type { UIMessage } from "ai";
export async function POST(req: Request) {
const { messages }: { messages: UIMessage[] } = await req.json();
const result = streamText({
model: anthropic("claude-sonnet-4-5"),
messages: convertToModelMessages(messages),
tools: {
getWeather: tool({
description: "Get the weather for a location",
inputSchema: z.object({
city: z.string().describe("The city to get the weather for"),
unit: z
.enum(["C", "F"])
.describe("The unit to display the temperature in"),
}),
execute: async ({ city, unit }) => {
// API call or mock data
return `It is currently 24°${unit} and Sunny in ${city}!`;
},
}),
},
toolChoice: "auto", // 'auto' | 'required' | 'none' | { type: 'tool', toolName: 'xxx' }
});
return result.toUIMessageStreamResponse();
}
Multi-step tool calling:
const result = await generateText({
model: anthropic("claude-sonnet-4-5"),
tools: {
weather: weatherTool,
search: searchTool,
},
prompt: "What is the weather in San Francisco and find hotels there?",
maxSteps: 5, // Allow up to 5 tool-calling steps
});
Purpose: Convert text into numeric vectors for semantic search, RAG, or similarity computations.
Signature:
import { embed, embedMany } from "ai";
import { openai } from "@ai-sdk/openai";
// Single embedding
const result = await embed({
model: openai.textEmbeddingModel("text-embedding-3-small"),
value: "Text to embed",
});
// Batch embeddings
const batchResult = await embedMany({
model: openai.textEmbeddingModel("text-embedding-3-small"),
values: ["Text 1", "Text 2", "Text 3"],
});
Return value:
{
embedding: number[]; // Numeric array representing the text
usage: { tokens: number }; // Token consumption
response: RawResponse; // Raw provider response
}
Example - embedding API route:
// app/api/embed/route.ts
import { embed } from "ai";
import { openai } from "@ai-sdk/openai";
export async function POST(req: Request) {
const { text } = await req.json();
const { embedding, usage } = await embed({
model: openai.textEmbeddingModel("text-embedding-3-small"),
value: text,
});
return Response.json({ embedding, usage });
}
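Since embeddings are plain number[] vectors, similarity for semantic search can be computed locally with cosine similarity. A self-contained version (the 'ai' package also exports a cosineSimilarity helper):

```typescript
// Cosine similarity between two embedding vectors.
// Returns a value in [-1, 1]; closer to 1 means more semantically similar.
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0;
  let normA = 0;
  let normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// Toy 2-dimensional vectors (real embeddings have hundreds of dimensions):
console.log(cosineSimilarity([1, 0], [1, 0])); // 1
console.log(cosineSimilarity([1, 0], [0, 1])); // 0
```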
Purpose: Intercept and modify model behavior for logging, caching, guardrails, RAG, and more.
Built-in middleware:
import {
extractReasoningMiddleware,
simulateStreamingMiddleware,
defaultSettingsMiddleware,
wrapLanguageModel,
} from "ai";
// Extract reasoning from models like Claude
const modelWithReasoning = wrapLanguageModel({
model: anthropic("claude-sonnet-4-5"),
middleware: extractReasoningMiddleware({ tagName: "thinking" }),
});
// Apply default settings
const modelWithDefaults = wrapLanguageModel({
model: anthropic("claude-sonnet-4-5"),
middleware: defaultSettingsMiddleware({
temperature: 0.7,
maxOutputTokens: 1000,
}),
});
Custom middleware:
import { LanguageModelMiddleware, wrapLanguageModel } from "ai";
// Logging middleware
const loggingMiddleware: LanguageModelMiddleware = {
transformParams: async ({ params }) => {
console.log("Request params:", params);
return params;
},
wrapGenerate: async ({ doGenerate, params }) => {
const result = await doGenerate();
console.log("Response:", result);
return result;
},
};
// Caching middleware
const cache = new Map<string, string>();
const cachingMiddleware: LanguageModelMiddleware = {
wrapGenerate: async ({ doGenerate, params }) => {
const cacheKey = JSON.stringify(params.prompt);
if (cache.has(cacheKey)) {
return { text: cache.get(cacheKey)! };
}
const result = await doGenerate();
cache.set(cacheKey, result.text);
return result;
},
};
// RAG middleware
const ragMiddleware: LanguageModelMiddleware = {
transformParams: async ({ params }) => {
const relevantDocs = await vectorSearch(params.prompt);
return {
...params,
prompt: `Context: ${relevantDocs}\n\nQuery: ${params.prompt}`,
};
},
};
// Apply multiple middleware
const enhancedModel = wrapLanguageModel({
model: anthropic("claude-sonnet-4-5"),
middleware: [loggingMiddleware, cachingMiddleware, ragMiddleware],
});
Purpose: Connect to external MCP servers for dynamic tool access.
Installation:
bun add @ai-sdk/mcp @modelcontextprotocol/sdk
HTTP transport (production):
import { createMCPClient } from "@ai-sdk/mcp";
import { StreamableHTTPClientTransport } from "@modelcontextprotocol/sdk/client/streamableHttp.js";
import { streamText } from "ai";
import { anthropic } from "@ai-sdk/anthropic";
export async function POST(req: Request) {
const { prompt } = await req.json();
const httpTransport = new StreamableHTTPClientTransport(
new URL("https://mcp-server.example.com/mcp"),
{ headers: { Authorization: `Bearer ${process.env.MCP_TOKEN}` } }
);
const mcpClient = await createMCPClient({ transport: httpTransport });
try {
const tools = await mcpClient.tools();
const response = streamText({
model: anthropic("claude-sonnet-4-5"),
tools,
prompt,
onFinish: async () => {
await mcpClient.close();
},
onError: async () => {
await mcpClient.close();
},
});
return response.toTextStreamResponse();
} catch (error) {
await mcpClient.close();
return new Response("Internal Server Error", { status: 500 });
}
}
Stdio transport (development):
import { createMCPClient } from "@ai-sdk/mcp";
import { Experimental_StdioMCPTransport } from "@ai-sdk/mcp";
const stdioTransport = new Experimental_StdioMCPTransport({
command: "npx",
args: [
"-y",
"@modelcontextprotocol/server-filesystem",
"/path/to/allowed/dir",
],
});
const mcpClient = await createMCPClient({ transport: stdioTransport });
Key points:
- Close the client in both onFinish and onError
- Fetch tools dynamically with mcpClient.tools()

convertToModelMessages: Converts UI messages from useChat into ModelMessage objects for the AI functions.
import { convertToModelMessages } from "ai";
import type { UIMessage } from "ai";
export async function POST(req: Request) {
const { messages }: { messages: UIMessage[] } = await req.json();
const result = streamText({
model: anthropic("claude-sonnet-4-5"),
messages: convertToModelMessages(messages),
});
return result.toUIMessageStreamResponse();
}
async function sequentialWorkflow(input: string) {
// Step 1: generate the initial content
const { text: draft } = await generateText({
model: anthropic("claude-sonnet-4-5"),
prompt: `Write marketing copy for: ${input}`,
});
// Step 2: evaluate quality
const { output: evaluation } = await generateText({
model: anthropic("claude-sonnet-4-5"),
output: Output.object({
schema: z.object({
score: z.number().min(1).max(10),
feedback: z.string(),
}),
}),
prompt: `Evaluate this copy: ${draft}`,
});
// Step 3: improve if needed
if (evaluation.score < 7) {
const { text: improved } = await generateText({
model: anthropic("claude-sonnet-4-5"),
prompt: `Improve this copy based on feedback:\n\nCopy: ${draft}\n\nFeedback: ${evaluation.feedback}`,
});
return improved;
}
return draft;
}
async function parallelReview(code: string) {
const [securityReview, performanceReview, maintainabilityReview] =
await Promise.all([
generateText({
model: anthropic("claude-sonnet-4-5"),
prompt: `Review for security issues:\n\n${code}`,
}),
generateText({
model: anthropic("claude-sonnet-4-5"),
prompt: `Review for performance issues:\n\n${code}`,
}),
generateText({
model: anthropic("claude-sonnet-4-5"),
prompt: `Review for maintainability:\n\n${code}`,
}),
]);
return {
security: securityReview.text,
performance: performanceReview.text,
maintainability: maintainabilityReview.text,
};
}
async function routeQuery(query: string) {
// Classify the query
const { output: classification } = await generateText({
model: anthropic("claude-sonnet-4-5"),
output: Output.choice({
choices: ["technical", "billing", "general"] as const,
}),
prompt: `Classify this customer query: ${query}`,
});
// Route to the appropriate handler
switch (classification) {
case "technical":
return handleTechnicalQuery(query);
case "billing":
return handleBillingQuery(query);
default:
return handleGeneralQuery(query);
}
}
async function implementFeature(requirement: string) {
// Orchestrator: break the task down
const { output: plan } = await generateText({
model: anthropic("claude-sonnet-4-5"),
output: Output.object({
schema: z.object({
tasks: z.array(
z.object({
type: z.enum(["frontend", "backend", "database"]),
description: z.string(),
})
),
}),
}),
prompt: `Break down this feature into tasks: ${requirement}`,
});
// Workers: execute tasks in parallel
const results = await Promise.all(
plan.tasks.map((task) =>
generateText({
model: anthropic("claude-sonnet-4-5"),
prompt: `Implement this ${task.type} task: ${task.description}`,
})
)
);
return results.map((r) => r.text);
}
async function optimizeOutput(input: string, maxIterations = 3) {
let output = await generateText({
model: anthropic("claude-sonnet-4-5"),
prompt: input,
});
for (let i = 0; i < maxIterations; i++) {
const { output: evaluation } = await generateText({
model: anthropic("claude-sonnet-4-5"),
output: Output.object({
schema: z.object({
isGood: z.boolean(),
improvements: z.array(z.string()),
}),
}),
prompt: `Evaluate this output: ${output.text}`,
});
if (evaluation.isGood) break;
output = await generateText({
model: anthropic("claude-sonnet-4-5"),
prompt: `Improve based on: ${evaluation.improvements.join(", ")}\n\nOriginal: ${output.text}`,
});
}
return output.text;
}
| Part type | Description | Properties |
|---|---|---|
| text | Text content | text, isStreaming |
| tool-call | Tool invocation | name, args, state ('input-streaming' \| 'invoking' \| 'output' \| 'output-error') |
| reasoning | Model thinking | text, isStreaming |
| file | File attachment | mediaType, url or data |
| source | RAG source reference | url or documentId, title |
| step | Workflow boundary | |
stopWhen: stepCountIs(5), // allow up to 5 tool-call steps (v5's maxSteps; import stepCountIs from "ai")
});
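Conceptually, a stop condition like stepCountIs is just a predicate over the steps accumulated so far. A hedged sketch of the idea in plain TypeScript (illustrative only — the real stepCountIs ships in the "ai" package):

```typescript
// Illustrative stand-in: stop once the loop has run `max` steps.
type StopCondition = (ctx: { steps: unknown[] }) => boolean;

function stepCountIs(max: number): StopCondition {
  return ({ steps }) => steps.length >= max;
}

// The tool loop would check the condition after each step:
const stop = stepCountIs(5);
const steps: unknown[] = [];
while (!stop({ steps })) {
  steps.push({}); // run one model/tool step here
}
```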
Purpose: Convert text into numerical vectors for semantic search, RAG, or similarity.
Signature:
import { embed, embedMany } from "ai";
import { openai } from "@ai-sdk/openai";
// Single embedding
const result = await embed({
model: openai.textEmbeddingModel("text-embedding-3-small"),
value: "Text to embed",
});
// Batch embeddings
const batchResult = await embedMany({
model: openai.textEmbeddingModel("text-embedding-3-small"),
values: ["Text 1", "Text 2", "Text 3"],
});
Return Value:
{
embedding: number[]; // Numerical array representing the text
usage: { tokens: number }; // Token consumption
response: RawResponse; // Raw provider response
}
Example - Embedding API Route:
// app/api/embed/route.ts
import { embed } from "ai";
import { openai } from "@ai-sdk/openai";
export async function POST(req: Request) {
const { text } = await req.json();
const { embedding, usage } = await embed({
model: openai.textEmbeddingModel("text-embedding-3-small"),
value: text,
});
return Response.json({ embedding, usage });
}
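Once you have embeddings, similarity search is a ranking over cosine similarity. A minimal sketch in plain TypeScript (no SDK dependency; the document shape `{ id, embedding }` is an assumption, not an SDK type):

```typescript
// Cosine similarity between two equal-length embedding vectors.
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// Rank stored documents against a query embedding, best match first.
function topK(
  query: number[],
  docs: { id: string; embedding: number[] }[],
  k: number
): { id: string; score: number }[] {
  return docs
    .map((d) => ({ id: d.id, score: cosineSimilarity(query, d.embedding) }))
    .sort((x, y) => y.score - x.score)
    .slice(0, k);
}
```

In production you would delegate this to a vector database, but the same ranking works for small in-memory corpora.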
Purpose: Intercept and modify model behavior for logging, caching, guardrails, RAG, etc.
Built-in Middleware:
import {
extractReasoningMiddleware,
simulateStreamingMiddleware,
defaultSettingsMiddleware,
wrapLanguageModel,
} from "ai";
// Extract reasoning from models like Claude
const modelWithReasoning = wrapLanguageModel({
model: anthropic("claude-sonnet-4-5"),
middleware: extractReasoningMiddleware({ tagName: "thinking" }),
});
// Apply default settings
const modelWithDefaults = wrapLanguageModel({
model: anthropic("claude-sonnet-4-5"),
middleware: defaultSettingsMiddleware({
temperature: 0.7,
maxOutputTokens: 1000,
}),
});
Custom Middleware:
import { LanguageModelMiddleware, wrapLanguageModel } from "ai";
// Logging middleware
const loggingMiddleware: LanguageModelMiddleware = {
transformParams: async ({ params }) => {
console.log("Request params:", params);
return params;
},
wrapGenerate: async ({ doGenerate, params }) => {
const result = await doGenerate();
console.log("Response:", result);
return result;
},
};
// Caching middleware
const cache = new Map<string, string>();
const cachingMiddleware: LanguageModelMiddleware = {
wrapGenerate: async ({ doGenerate, params }) => {
const cacheKey = JSON.stringify(params.prompt);
if (cache.has(cacheKey)) {
return { text: cache.get(cacheKey)! }; // simplified sketch; a real generate result also carries content, usage, finishReason, etc.
}
const result = await doGenerate();
cache.set(cacheKey, result.text);
return result;
},
};
// RAG middleware
const ragMiddleware: LanguageModelMiddleware = {
transformParams: async ({ params }) => {
const relevantDocs = await vectorSearch(params.prompt);
return {
...params,
prompt: `Context: ${relevantDocs}\n\nQuery: ${params.prompt}`,
};
},
};
// Apply multiple middleware
const enhancedModel = wrapLanguageModel({
model: anthropic("claude-sonnet-4-5"),
middleware: [loggingMiddleware, cachingMiddleware, ragMiddleware],
});
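One caveat with the caching middleware above: JSON.stringify is sensitive to object key order, so logically identical params can miss the cache. A hedged sketch of an order-independent key serializer you could substitute (plain TypeScript, not an SDK API):

```typescript
// Serialize a value with object keys sorted, so logically-equal
// params produce identical cache keys regardless of key order.
function stableKey(value: unknown): string {
  if (Array.isArray(value)) {
    return "[" + value.map(stableKey).join(",") + "]";
  }
  if (value !== null && typeof value === "object") {
    const entries = Object.entries(value as Record<string, unknown>)
      .sort(([a], [b]) => a.localeCompare(b))
      .map(([k, v]) => JSON.stringify(k) + ":" + stableKey(v));
    return "{" + entries.join(",") + "}";
  }
  return JSON.stringify(value);
}
```

Usage: replace `JSON.stringify(params.prompt)` with `stableKey(params.prompt)` in the caching middleware.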
Purpose: Connect to external MCP servers for dynamic tool access.
Installation:
bun add @ai-sdk/mcp @modelcontextprotocol/sdk
# or npm install / pnpm add / yarn add — match the project's lockfile
HTTP Transport (Production):
import { createMCPClient } from "@ai-sdk/mcp";
import { StreamableHTTPClientTransport } from "@modelcontextprotocol/sdk/client/streamableHttp.js";
import { streamText } from "ai";
import { anthropic } from "@ai-sdk/anthropic";
export async function POST(req: Request) {
const { prompt } = await req.json();
const httpTransport = new StreamableHTTPClientTransport(
new URL("https://mcp-server.example.com/mcp"),
{ headers: { Authorization: `Bearer ${process.env.MCP_TOKEN}` } }
);
const mcpClient = await createMCPClient({ transport: httpTransport });
try {
const tools = await mcpClient.tools();
const response = streamText({
model: anthropic("claude-sonnet-4-5"),
tools,
prompt,
onFinish: async () => {
await mcpClient.close();
},
onError: async () => {
await mcpClient.close();
},
});
return response.toTextStreamResponse();
} catch (error) {
await mcpClient.close();
return new Response("Internal Server Error", { status: 500 });
}
}
Stdio Transport (Development):
import { createMCPClient, Experimental_StdioMCPTransport } from "@ai-sdk/mcp";
const stdioTransport = new Experimental_StdioMCPTransport({
command: "npx",
args: [
"-y",
"@modelcontextprotocol/server-filesystem",
"/path/to/allowed/dir",
],
});
const mcpClient = await createMCPClient({ transport: stdioTransport });
Key Points:
- Always close the MCP client in both onFinish and onError callbacks
- Load tools dynamically with mcpClient.tools()

convertToModelMessages: Converts UI messages from useChat into ModelMessage objects for AI functions.
import { convertToModelMessages } from "ai";
import type { UIMessage } from "ai";
export async function POST(req: Request) {
const { messages }: { messages: UIMessage[] } = await req.json();
const result = streamText({
model: anthropic("claude-sonnet-4-5"),
messages: convertToModelMessages(messages),
});
return result.toUIMessageStreamResponse();
}
import { generateText, Output } from "ai";
import { anthropic } from "@ai-sdk/anthropic";
import { z } from "zod";
async function sequentialWorkflow(input: string) {
// Step 1: Generate initial content
const { text: draft } = await generateText({
model: anthropic("claude-sonnet-4-5"),
prompt: `Write marketing copy for: ${input}`,
});
// Step 2: Evaluate quality
const { output: evaluation } = await generateText({
model: anthropic("claude-sonnet-4-5"),
output: Output.object({
schema: z.object({
score: z.number().min(1).max(10),
feedback: z.string(),
}),
}),
prompt: `Evaluate this copy: ${draft}`,
});
// Step 3: Improve if needed
if (evaluation.score < 7) {
const { text: improved } = await generateText({
model: anthropic("claude-sonnet-4-5"),
prompt: `Improve this copy based on feedback:\n\nCopy: ${draft}\n\nFeedback: ${evaluation.feedback}`,
});
return improved;
}
return draft;
}
async function parallelReview(code: string) {
const [securityReview, performanceReview, maintainabilityReview] =
await Promise.all([
generateText({
model: anthropic("claude-sonnet-4-5"),
prompt: `Review for security issues:\n\n${code}`,
}),
generateText({
model: anthropic("claude-sonnet-4-5"),
prompt: `Review for performance issues:\n\n${code}`,
}),
generateText({
model: anthropic("claude-sonnet-4-5"),
prompt: `Review for maintainability:\n\n${code}`,
}),
]);
return {
security: securityReview.text,
performance: performanceReview.text,
maintainability: maintainabilityReview.text,
};
}
async function routeQuery(query: string) {
// Classify the query
const { output: classification } = await generateText({
model: anthropic("claude-sonnet-4-5"),
output: Output.choice({
choices: ["technical", "billing", "general"] as const,
}),
prompt: `Classify this customer query: ${query}`,
});
// Route to appropriate handler
switch (classification) {
case "technical":
return handleTechnicalQuery(query);
case "billing":
return handleBillingQuery(query);
default:
return handleGeneralQuery(query);
}
}
async function implementFeature(requirement: string) {
// Orchestrator: Break down the task
const { output: plan } = await generateText({
model: anthropic("claude-sonnet-4-5"),
output: Output.object({
schema: z.object({
tasks: z.array(
z.object({
type: z.enum(["frontend", "backend", "database"]),
description: z.string(),
})
),
}),
}),
prompt: `Break down this feature into tasks: ${requirement}`,
});
// Workers: Execute tasks in parallel
const results = await Promise.all(
plan.tasks.map((task) =>
generateText({
model: anthropic("claude-sonnet-4-5"),
prompt: `Implement this ${task.type} task: ${task.description}`,
})
)
);
return results.map((r) => r.text);
}
async function optimizeOutput(input: string, maxIterations = 3) {
let output = await generateText({
model: anthropic("claude-sonnet-4-5"),
prompt: input,
});
for (let i = 0; i < maxIterations; i++) {
const { output: evaluation } = await generateText({
model: anthropic("claude-sonnet-4-5"),
output: Output.object({
schema: z.object({
isGood: z.boolean(),
improvements: z.array(z.string()),
}),
}),
prompt: `Evaluate this output: ${output.text}`,
});
if (evaluation.isGood) break;
output = await generateText({
model: anthropic("claude-sonnet-4-5"),
prompt: `Improve based on: ${evaluation.improvements.join(", ")}\n\nOriginal: ${output.text}`,
});
}
return output.text;
}
| Part Type | Description | Properties |
| --- | --- | --- |
| text | Text content | text, isStreaming |
| tool-call | Tool invocation | name, args, state ('input-streaming' \| 'invoking' \| 'output' \| 'output-error') |
| reasoning | Model thinking | text, isStreaming |
| file | File attachment | mediaType, url or data |
| source | RAG source reference | url or documentId, title |
| step | Workflow boundary | Marks step boundaries |
| data | Custom data | Any custom payload |
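A small helper for pulling the plain text out of a parts-based message. The Part type below is a narrowed stand-in for the SDK's richer union, kept minimal so the sketch is self-contained:

```typescript
// Narrowed stand-in for the SDK's UIMessage part union.
type Part =
  | { type: "text"; text: string }
  | { type: string; [key: string]: unknown };

// Concatenate the text parts of a message, ignoring tool calls,
// reasoning, files, and other non-text parts.
function textContent(parts: Part[]): string {
  return parts
    .filter((p): p is { type: "text"; text: string } => p.type === "text")
    .map((p) => p.text)
    .join("");
}
```

This is handy for logging or persisting a readable transcript without losing the structured parts.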
import type {
UIMessage, // Message type from useChat
ModelMessage, // Message type for model functions
ToolCall, // Tool call information
TokenUsage, // Token consumption data
} from "ai";
import type { InferAgentUIMessage } from "ai";
// Type-safe messages from agent
type MyAgentMessage = InferAgentUIMessage<typeof myAgent>;
import { tool } from "ai";
import { z } from "zod";
// Tool helper infers execute parameter types
const myTool = tool({
description: "My tool",
inputSchema: z.object({
param1: z.string(),
param2: z.number(),
}),
outputSchema: z.object({
result: z.string(),
}),
execute: async ({ param1, param2 }) => {
// param1 is inferred as string
// param2 is inferred as number
return { result: "success" };
},
});
Client (app/page.tsx):
'use client';
import { useChat } from '@ai-sdk/react';
import { useState } from 'react';
export default function Chat() {
const { messages, sendMessage, status } = useChat();
const [input, setInput] = useState('');
return (
<div>
{messages.map((m) => (
<div key={m.id}>
<strong>{m.role}:</strong>
{m.parts.map((part, i) =>
part.type === 'text' ? <span key={i}>{part.text}</span> : null
)}
</div>
))}
<form onSubmit={(e) => {
e.preventDefault();
sendMessage({ text: input });
setInput('');
}}>
<input value={input} onChange={(e) => setInput(e.target.value)} />
<button disabled={status === 'streaming'}>Send</button>
</form>
</div>
);
}
Server (app/api/chat/route.ts):
import { streamText, convertToModelMessages } from "ai";
import { anthropic } from "@ai-sdk/anthropic";
import type { UIMessage } from "ai";
export async function POST(req: Request) {
const { messages }: { messages: UIMessage[] } = await req.json();
const result = streamText({
model: anthropic("claude-sonnet-4-5"),
system: "You are a helpful assistant.",
messages: convertToModelMessages(messages),
});
return result.toUIMessageStreamResponse();
}
import { streamText, convertToModelMessages, Output } from "ai";
import { anthropic } from "@ai-sdk/anthropic";
import { z } from "zod";
import type { UIMessage } from "ai";
export async function POST(req: Request) {
const { messages }: { messages: UIMessage[] } = await req.json();
const result = streamText({
model: anthropic("claude-sonnet-4-5"),
messages: convertToModelMessages(messages),
output: Output.object({
schema: z.object({
response: z.string(),
sentiment: z.enum(["positive", "neutral", "negative"]),
confidence: z.number().min(0).max(1),
}),
}),
});
return result.toUIMessageStreamResponse();
}
import {
ToolLoopAgent,
tool,
stepCountIs,
createAgentUIStreamResponse,
} from "ai";
import { anthropic } from "@ai-sdk/anthropic";
import { z } from "zod";
const researchAgent = new ToolLoopAgent({
model: anthropic("claude-sonnet-4-5"),
instructions:
"You are a research assistant that can search and analyze information.",
tools: {
webSearch: tool({
description: "Search the web for information",
inputSchema: z.object({
query: z.string().describe("Search query"),
}),
execute: async ({ query }) => {
// Implement web search
return { results: ["..."] };
},
}),
analyze: tool({
description: "Analyze collected information",
inputSchema: z.object({
data: z.string().describe("Data to analyze"),
}),
execute: async ({ data }) => {
return { analysis: "..." };
},
}),
summarize: tool({
description: "Summarize findings",
inputSchema: z.object({
findings: z.array(z.string()),
}),
execute: async ({ findings }) => {
return { summary: "..." };
},
}),
},
stopWhen: stepCountIs(10),
});
// API Route
export async function POST(request: Request) {
const { messages } = await request.json();
return createAgentUIStreamResponse({
agent: researchAgent,
uiMessages: messages,
});
}
// app/api/search/route.ts
import { embed } from "ai";
import { openai } from "@ai-sdk/openai";
export async function POST(req: Request) {
const { query } = await req.json();
// Generate embedding for search query
const { embedding } = await embed({
model: openai.textEmbeddingModel("text-embedding-3-small"),
value: query,
});
// Use embedding for similarity search in vector database
// const results = await vectorDB.search(embedding);
return Response.json({ embedding, results: [] });
}
// ❌ WRONG - Deprecated in v6
import { generateObject } from 'ai';
const result = await generateObject({
schema: z.object({...}),
prompt: '...',
});
// ✅ CORRECT - Use Output with generateText
import { generateText, Output } from 'ai';
const { output } = await generateText({
output: Output.object({ schema: z.object({...}) }),
prompt: '...',
});
// ❌ WRONG - Plain object (WILL CAUSE BUILD FAILURE)
tools: {
myTool: {
description: 'My tool',
parameters: z.object({...}), // ❌ Wrong property name
execute: async ({...}) => {...},
},
}
// ✅ CORRECT - Use tool() helper (REQUIRED)
import { tool } from 'ai';
tools: {
myTool: tool({
description: 'My tool',
inputSchema: z.object({...}), // ⚠️ Use inputSchema
execute: async ({...}) => {...},
}),
}
// ❌ WRONG - v5 pattern
const { input, setInput, append } = useChat();
append({ content: "Hello", role: "user" });
// ✅ CORRECT - v6 pattern
const { sendMessage } = useChat();
const [input, setInput] = useState("");
sendMessage({ text: "Hello" });
// ❌ WRONG - v5 pattern
<div>{message.content}</div>
// ✅ CORRECT - v6 parts-based
<div>
{message.parts.map((part, i) =>
part.type === 'text' ? <span key={i}>{part.text}</span> : null
)}
</div>
// ❌ WRONG - v5 method
return result.toDataStreamResponse();
// ✅ CORRECT - v6 method
return result.toUIMessageStreamResponse();
// ❌ WRONG - no cleanup
const mcpClient = await createMCPClient({ transport });
const tools = await mcpClient.tools();
const response = streamText({ model, tools, prompt });
return response.toTextStreamResponse();
// ✅ CORRECT - cleanup in callbacks
const response = streamText({
model,
tools,
prompt,
onFinish: async () => {
await mcpClient.close();
},
onError: async () => {
await mcpClient.close();
},
});
When migrating from v5 to v6, update:
- generateObject/streamObject → generateText/streamText + Output
- append → sendMessage in useChat
- Remove input, setInput, handleInputChange from useChat destructuring (manage input state yourself)

When implementing AI SDK features, ask:
Is this client-side or server-side?
- Client chat UI → useChat hook
- Server generation → generateText or streamText
- Agent endpoint → ToolLoopAgent with createAgentUIStreamResponse

Do I need streaming or non-streaming?
- Streaming → streamText + toUIMessageStreamResponse()
- Non-streaming → generateText

| Task | Function | Key Parameters |
|---|---|---|
| Generate text | generateText() | model, prompt, system, tools, output |
| Stream text | streamText() | model, messages, system, tools, onFinish |
When in doubt, check the official AI SDK documentation.
Remember: AI SDK v6 specifies models via provider functions (or gateway() for production), uses parts-based messages, sendMessage instead of append, Output helpers instead of generateObject, toUIMessageStreamResponse instead of toDataStreamResponse, and requires convertToModelMessages in API routes.
| Parameter | Purpose | Example |
|---|---|---|
| stopWhen | Termination condition | stepCountIs(20) |
| toolChoice | Tool usage mode | 'auto', 'required', 'none' |
| output | Structured output schema | Output.object({...}); Output.json() for unstructured JSON |
| prepareStep | Dynamic per-step adjustments | Function returning step config |
| prepareCall | Runtime options injection | Async function for RAG, etc. |
Final checklist:
- Manage input state manually: const [input, setInput] = useState('')
- Render message.parts.map(...) instead of message.content
- Send messages with the { text: input } structure
- Replace toDataStreamResponse() with toUIMessageStreamResponse()
- Define tools with the tool() helper and inputSchema
- Use a current model ID (e.g. claude-sonnet-4-5)
- Use ToolLoopAgent for agentic applications
- Import the correct message types (UIMessage, ModelMessage)
- Add addToolOutput handling if using client-side tools
- For text-only streaming, use streamText with toTextStreamResponse()

Do I need structured output?
- Yes → Output.object(), Output.array(), Output.choice(), or Output.json()
- Pass to generateText or streamText via the output parameter

Do I need tool calling?
- Yes → define tools with the tool() helper and inputSchema (zod)
- Use with generateText, streamText, or ToolLoopAgent

Am I building an agent?
- Yes → use the ToolLoopAgent class
- Configure stopWhen, toolChoice, prepareStep as needed
- Use createAgentUIStreamResponse for API routes

Am I using the correct message format?
- Client sends UIMessage[] with a parts property
- Server converts with convertToModelMessages() to ModelMessage[]
- UI renders message.parts.map(...)

Is my model specification correct?
- Direct provider: anthropic('claude-sonnet-4-5')
- Gateway: gateway('anthropic/claude-sonnet-4-5')
- Embeddings: openai.textEmbeddingModel('text-embedding-3-small')

Do I need embeddings?
- embed for single values, embedMany for batches
- Use the provider's textEmbeddingModel() method

Do I need middleware?
- Yes → wrap the model with wrapLanguageModel and transformParams/wrapGenerate middleware

Quick reference:

| Task | Function | Key Parameters |
|---|---|---|
| Chat UI | useChat() | api, onToolCall, onFinish, onError |
| Build agent | ToolLoopAgent | model, instructions, tools, stopWhen |
| Tool calling | tool() | description, inputSchema, outputSchema, execute |
| Structured output | Output.object() | schema (zod) |
| Text embedding | embed() | model, value |
| Batch embedding | embedMany() | model, values |
| Message conversion | convertToModelMessages() | messages (UIMessage[]) |
| MCP integration | createMCPClient() | transport |
| Add middleware | wrapLanguageModel() | model, middleware |
| Gateway model | gateway() | "provider/model-name" (e.g., "anthropic/claude-sonnet-4-5") |