langchain-fundamentals by langchain-ai/langchain-skills
npx skills add https://github.com/langchain-ai/langchain-skills --skill langchain-fundamentals
<create_agent>
create_agent() is the recommended way to build agents. It handles the agent loop, tool execution, and state management.
| Parameter | Purpose | Example |
|---|---|---|
| model | LLM to use | "anthropic:claude-sonnet-4-5" or model instance |
| tools | List of tools | [search, calculator] |
| system_prompt / systemPrompt | Agent instructions | "You are a helpful assistant" |
| checkpointer | State persistence | MemorySaver() |
| middleware | Processing hooks | [HumanInTheLoopMiddleware] (Python) / [humanInTheLoopMiddleware({...})] (TypeScript) |
</create_agent>
<python>
```python
from langchain.agents import create_agent
from langchain_core.tools import tool

@tool
def get_weather(location: str) -> str:
    """Get current weather for a location.

    Args:
        location: City name
    """
    return f"Weather in {location}: Sunny, 72F"

agent = create_agent(
    model="anthropic:claude-sonnet-4-5",
    tools=[get_weather],
    system_prompt="You are a helpful assistant.",
)

result = agent.invoke({
    "messages": [{"role": "user", "content": "What's the weather in Paris?"}]
})
print(result["messages"][-1].content)
```
</python>
<typescript>
```typescript
import { createAgent } from "langchain";
import { tool } from "@langchain/core/tools";
import { z } from "zod";
const getWeather = tool(
async ({ location }) => `Weather in ${location}: Sunny, 72F`,
{
name: "get_weather",
description: "Get current weather for a location.",
schema: z.object({ location: z.string().describe("City name") }),
}
);
const agent = createAgent({
model: "anthropic:claude-sonnet-4-5",
tools: [getWeather],
systemPrompt: "You are a helpful assistant.",
});
const result = await agent.invoke({
messages: [{ role: "user", content: "What's the weather in Paris?" }],
});
console.log(result.messages[result.messages.length - 1].content);
```
</typescript>
<python>
```python
from langchain.agents import create_agent
from langgraph.checkpoint.memory import MemorySaver

checkpointer = MemorySaver()

agent = create_agent(
    model="anthropic:claude-sonnet-4-5",
    tools=[search],  # search tool defined elsewhere
    checkpointer=checkpointer,
)

config = {"configurable": {"thread_id": "user-123"}}
agent.invoke({"messages": [{"role": "user", "content": "My name is Alice"}]}, config=config)
result = agent.invoke({"messages": [{"role": "user", "content": "What's my name?"}]}, config=config)
```
</python>
<typescript>
Add MemorySaver checkpointer to maintain conversation state across invocations.
```typescript
import { createAgent } from "langchain";
import { MemorySaver } from "@langchain/langgraph";
const checkpointer = new MemorySaver();
const agent = createAgent({
model: "anthropic:claude-sonnet-4-5",
tools: [search],
checkpointer,
});
const config = { configurable: { thread_id: "user-123" } };
await agent.invoke({ messages: [{ role: "user", content: "My name is Alice" }] }, config);
const result = await agent.invoke({ messages: [{ role: "user", content: "What's my name?" }] }, config);
// Agent remembers: "Your name is Alice"
```
</typescript>
Tools are functions that agents can call. Use the @tool decorator (Python) or tool() function (TypeScript).
<python>
```python
from langchain_core.tools import tool

@tool
def add(a: float, b: float) -> float:
    """Add two numbers.

    Args:
        a: First number
        b: Second number
    """
    return a + b
```
</python>
<typescript>
```typescript
import { tool } from "@langchain/core/tools";
import { z } from "zod";
const add = tool(
async ({ a, b }) => a + b,
{
name: "add",
description: "Add two numbers.",
schema: z.object({
a: z.number().describe("First number"),
b: z.number().describe("Second number"),
}),
}
);
```
</typescript>
Middleware intercepts the agent loop to add human approval, error handling, logging, and more. A deep understanding of middleware is essential for production agents — use HumanInTheLoopMiddleware (Python) / humanInTheLoopMiddleware (TypeScript) for approval workflows, and @wrap_tool_call (Python) / createMiddleware (TypeScript) for custom hooks.
Key imports:
from langchain.agents.middleware import HumanInTheLoopMiddleware, wrap_tool_call
import { humanInTheLoopMiddleware, createMiddleware } from "langchain";
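Wired together, these imports support an approval flow along the lines of the sketch below. This is an untested sketch, not the library's canonical example: the `delete_records` tool, thread id, and prompts are placeholders, the error-handler signature follows the `@wrap_tool_call` pattern described above (exact request attributes are assumptions), and running it requires an Anthropic API key.

```python
from langchain.agents import create_agent
from langchain.agents.middleware import HumanInTheLoopMiddleware, wrap_tool_call
from langchain_core.messages import ToolMessage
from langchain_core.tools import tool
from langgraph.checkpoint.memory import MemorySaver
from langgraph.types import Command

@tool
def delete_records(table: str) -> str:
    """Delete all records from a table."""  # placeholder "dangerous" tool
    return f"Deleted records from {table}"

@wrap_tool_call
def handle_tool_errors(request, handler):
    # Custom hook: surface tool failures as tool output instead of crashing.
    # request.tool_call["id"] is assumed from the middleware docs.
    try:
        return handler(request)
    except Exception as exc:
        return ToolMessage(content=f"Tool error: {exc}", tool_call_id=request.tool_call["id"])

agent = create_agent(
    model="anthropic:claude-sonnet-4-5",
    tools=[delete_records],
    middleware=[
        HumanInTheLoopMiddleware(interrupt_on={"delete_records": True}),
        handle_tool_errors,
    ],
    checkpointer=MemorySaver(),  # interrupts require a checkpointer...
)
config = {"configurable": {"thread_id": "approval-1"}}  # ...and a thread_id

# The run pauses before delete_records executes...
agent.invoke({"messages": [{"role": "user", "content": "Clear the staging table"}]}, config=config)

# ...and resumes once a human approves the pending tool call.
result = agent.invoke(Command(resume={"decisions": [{"type": "approve"}]}), config=config)
```

On the first invoke, the run interrupts before the flagged tool executes; resuming with a `Command` carrying a decision lets the agent finish.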
Key patterns:
- middleware=[HumanInTheLoopMiddleware(interrupt_on={"dangerous_tool": True})] — requires checkpointer + thread_id
- agent.invoke(Command(resume={"decisions": [{"type": "approve"}]}), config=config) to resume after an interrupt
- @wrap_tool_call decorator (Python) or createMiddleware({ wrapToolCall: ... }) (TypeScript) for custom hooks
<structured_output>
Get typed, validated responses from agents using response_format or with_structured_output().
<python>
```python
from pydantic import BaseModel, Field
from langchain.agents import create_agent

class ContactInfo(BaseModel):
    name: str
    email: str
    phone: str = Field(description="Phone number with area code")

# Agent-level structured output
agent = create_agent(model="gpt-4.1", tools=[search], response_format=ContactInfo)
result = agent.invoke({"messages": [{"role": "user", "content": "Find contact for John"}]})
print(result["structured_response"])  # ContactInfo(name='John', ...)

# Model-level structured output
from langchain_openai import ChatOpenAI
model = ChatOpenAI(model="gpt-4.1")
structured_model = model.with_structured_output(ContactInfo)
response = structured_model.invoke("Extract: John, john@example.com, 555-1234")
```
</python>
<typescript>
```typescript
import { ChatOpenAI } from "@langchain/openai";
import { z } from "zod";
const ContactInfo = z.object({
name: z.string(),
email: z.string().email(),
phone: z.string().describe("Phone number with area code"),
});
// Model-level structured output
const model = new ChatOpenAI({ model: "gpt-4.1" });
const structuredModel = model.withStructuredOutput(ContactInfo);
const response = await structuredModel.invoke("Extract: John, john@example.com, 555-1234");
// { name: 'John', email: 'john@example.com', phone: '555-1234' }
```
</typescript>
</structured_output>
<model_config>
create_agent accepts model strings ("anthropic:claude-sonnet-4-5", "openai:gpt-4.1") or model instances for custom settings:
```python
from langchain_anthropic import ChatAnthropic

agent = create_agent(model=ChatAnthropic(model="claude-sonnet-4-5", temperature=0), tools=[...])
```
</model_config>
<python>
```python
from langchain_core.tools import tool

@tool
def search(query: str) -> str:
    """Search the web for current information about a topic.

    Use this when you need recent data or facts.

    Args:
        query: The search query (2-10 words recommended)
    """
    return web_search(query)  # web_search: your search backend
```
</python>
<typescript>
Clear descriptions help the agent know when to use each tool.
```typescript
// WRONG: Vague description
const badTool = tool(async ({ input }) => "result", {
name: "bad_tool",
description: "Does stuff.", // Too vague!
schema: z.object({ input: z.string() }),
});
// CORRECT: Clear, specific description
const search = tool(async ({ query }) => webSearch(query), {
name: "search",
description: "Search the web for current information about a topic. Use this when you need recent data or facts.",
schema: z.object({
query: z.string().describe("The search query (2-10 words recommended)"),
}),
});
```
</typescript>
<python>
```python
from langchain.agents import create_agent
from langgraph.checkpoint.memory import MemorySaver

agent = create_agent(
    model="anthropic:claude-sonnet-4-5",
    tools=[search],
    checkpointer=MemorySaver(),
)

config = {"configurable": {"thread_id": "session-1"}}
agent.invoke({"messages": [{"role": "user", "content": "I'm Bob"}]}, config=config)
agent.invoke({"messages": [{"role": "user", "content": "What's my name?"}]}, config=config)
```
</python>
<typescript>
Add checkpointer and thread_id for conversation memory across invocations.
```typescript
// WRONG: No persistence
const agent = createAgent({ model: "anthropic:claude-sonnet-4-5", tools: [search] });
await agent.invoke({ messages: [{ role: "user", content: "I'm Bob" }] });
await agent.invoke({ messages: [{ role: "user", content: "What's my name?" }] });
// Agent doesn't remember!
// CORRECT: Add checkpointer and thread_id
import { MemorySaver } from "@langchain/langgraph";
const agent = createAgent({
model: "anthropic:claude-sonnet-4-5",
tools: [search],
checkpointer: new MemorySaver(),
});
const config = { configurable: { thread_id: "session-1" } };
await agent.invoke({ messages: [{ role: "user", content: "I'm Bob" }] }, config);
await agent.invoke({ messages: [{ role: "user", content: "What's my name?" }] }, config);
// Agent remembers: "Your name is Bob"
```
</typescript>
<python>
```python
result = agent.invoke(
    {"messages": [("user", "Do research")]},
    config={"recursion_limit": 10},  # Stop after 10 steps
)
```
</python>
<typescript>
Set recursionLimit in the invoke config to prevent runaway agent loops.
```typescript
// WRONG: No iteration limit
const result = await agent.invoke({ messages: [["user", "Do research"]] });
// CORRECT: Set recursionLimit in config
const result = await agent.invoke(
{ messages: [["user", "Do research"]] },
{ recursionLimit: 10 }, // Stop after 10 steps
);
```
</typescript>
<python>
```python
result = agent.invoke({"messages": [{"role": "user", "content": "Hello"}]})
print(result["messages"][-1].content)  # Last message content
```
</python>
<typescript>
Access the messages array from the result, not result.content directly.
```typescript
// WRONG: Trying to access result.content directly
const result = await agent.invoke({ messages: [{ role: "user", content: "Hello" }] });
console.log(result.content); // undefined!
// CORRECT: Access messages from result object
const result = await agent.invoke({ messages: [{ role: "user", content: "Hello" }] });
console.log(result.messages[result.messages.length - 1].content); // Last message content
```
</typescript>
Weekly Installs: 2.5K
GitHub Stars: 423
First Seen: Feb 21, 2026
Security Audits: Gen Agent Trust Hub: Pass · Socket: Pass · Snyk: Warn
Installed on: claude-code (2.0K), codex (2.0K), cursor (2.0K), github-copilot (1.9K), opencode (1.9K), gemini-cli (1.9K)