langgraph by davila7/claude-code-templates
npx skills add https://github.com/davila7/claude-code-templates --skill langgraph
Role: LangGraph Agent Architect
You are an expert in building production-grade AI agents with LangGraph. You understand that agents need explicit structure: graphs make the flow visible and debuggable. You design state carefully, use reducers appropriately, and always consider persistence for production. You know when cycles are needed and how to prevent infinite loops.
Simple ReAct-style agent with tools
When to use: Single agent with tool calling
from typing import Annotated, TypedDict
from langgraph.graph import StateGraph, START, END
from langgraph.graph.message import add_messages
from langgraph.prebuilt import ToolNode
from langchain_openai import ChatOpenAI
from langchain_core.tools import tool

# 1. Define State
class AgentState(TypedDict):
    messages: Annotated[list, add_messages]
    # add_messages reducer appends, doesn't overwrite

# 2. Define Tools
@tool
def search(query: str) -> str:
    """Search the web for information."""
    # Implementation here
    return f"Results for: {query}"

@tool
def calculator(expression: str) -> str:
    """Evaluate a math expression."""
    # NOTE: eval runs arbitrary code; use a safe math parser in production
    return str(eval(expression))

tools = [search, calculator]

# 3. Create LLM with tools
llm = ChatOpenAI(model="gpt-4o").bind_tools(tools)

# 4. Define Nodes
def agent(state: AgentState) -> dict:
    """The agent node - calls the LLM."""
    response = llm.invoke(state["messages"])
    return {"messages": [response]}

# Tool node handles tool execution
tool_node = ToolNode(tools)

# 5. Define Routing
def should_continue(state: AgentState) -> str:
    """Route based on whether tools were called."""
    last_message = state["messages"][-1]
    if last_message.tool_calls:
        return "tools"
    return END

# 6. Build Graph
graph = StateGraph(AgentState)

# Add nodes
graph.add_node("agent", agent)
graph.add_node("tools", tool_node)

# Add edges
graph.add_edge(START, "agent")
graph.add_conditional_edges("agent", should_continue, ["tools", END])
graph.add_edge("tools", "agent")  # Loop back

# Compile
app = graph.compile()

# 7. Run
result = app.invoke({
    "messages": [("user", "What is 25 * 4?")]
})
Complex state management with custom reducers
When to use: Multiple agents updating shared state
from typing import Annotated, TypedDict
from operator import add
from langgraph.graph import StateGraph
from langgraph.graph.message import add_messages

# Custom reducer for merging dictionaries
def merge_dicts(left: dict, right: dict) -> dict:
    return {**left, **right}

# State with multiple reducers
class ResearchState(TypedDict):
    # Messages append (don't overwrite)
    messages: Annotated[list, add_messages]
    # Research findings merge
    findings: Annotated[dict, merge_dicts]
    # Sources accumulate
    sources: Annotated[list[str], add]
    # Current step (overwrites - no reducer)
    current_step: str
    # Error count (custom reducer)
    errors: Annotated[int, lambda a, b: a + b]

# Nodes return partial state updates
def researcher(state: ResearchState) -> dict:
    # Only return the fields being updated
    return {
        "findings": {"topic_a": "New finding"},
        "sources": ["source1.com"],
        "current_step": "researching"
    }

def writer(state: ResearchState) -> dict:
    # Access accumulated state
    all_findings = state["findings"]
    all_sources = state["sources"]
    return {
        "messages": [("assistant", f"Report based on {len(all_sources)} sources")],
        "current_step": "writing"
    }

# Build graph
graph = StateGraph(ResearchState)
graph.add_node("researcher", researcher)
graph.add_node("writer", writer)
# ... add edges
Route to different paths based on state
When to use: Multiple possible workflows
from typing import TypedDict
from langgraph.graph import StateGraph, START, END

class RouterState(TypedDict):
    query: str
    query_type: str
    result: str

def classifier(state: RouterState) -> dict:
    """Classify the query type."""
    query = state["query"].lower()
    if "code" in query or "program" in query:
        return {"query_type": "coding"}
    elif "search" in query or "find" in query:
        return {"query_type": "search"}
    else:
        return {"query_type": "chat"}

def coding_agent(state: RouterState) -> dict:
    return {"result": "Here's your code..."}

def search_agent(state: RouterState) -> dict:
    return {"result": "Search results..."}

def chat_agent(state: RouterState) -> dict:
    return {"result": "Let me help..."}

# Routing function
def route_query(state: RouterState) -> str:
    """Route to the appropriate agent."""
    query_type = state["query_type"]
    return query_type  # Returns a node name

# Build graph
graph = StateGraph(RouterState)
graph.add_node("classifier", classifier)
graph.add_node("coding", coding_agent)
graph.add_node("search", search_agent)
graph.add_node("chat", chat_agent)

graph.add_edge(START, "classifier")

# Conditional edges from the classifier
graph.add_conditional_edges(
    "classifier",
    route_query,
    {
        "coding": "coding",
        "search": "search",
        "chat": "chat"
    }
)

# All agents lead to END
graph.add_edge("coding", END)
graph.add_edge("search", END)
graph.add_edge("chat", END)

app = graph.compile()
Why bad: The agent loops forever. Burns tokens and money. Eventually errors out.
Instead: Always have exit conditions:
def should_continue(state):
    if state["iterations"] > 10:
        return END
    if state["task_complete"]:
        return END
    return "agent"
Why bad: Loses LangGraph's benefits. State not persisted. Can't resume conversations.
Instead: Always use state for data flow. Return state updates from nodes. Use reducers for accumulation. Let LangGraph manage state.
Why bad: Hard to reason about. Unnecessary data in context. Serialization overhead.
Instead: Use input/output schemas for clean interfaces. Private state for internal data. Clear separation of concerns.
Works well with: crewai, autonomous-agents, langfuse, structured-output
Weekly Installs
208
Repository
GitHub Stars
22.6K
First Seen
Jan 25, 2026
Security Audits
Gen Agent Trust Hub: Warn · Socket: Pass · Snyk: Warn
Installed on
opencode: 173
codex: 162
gemini-cli: 161
claude-code: 157
github-copilot: 152
cursor: 131