agent-tracing by lobehub/lobehub
Install: npx skills add https://github.com/lobehub/lobehub --skill agent-tracing
codex68
@lobechat/agent-tracing is a zero-config local dev tool that records agent execution snapshots to disk and provides a CLI to inspect them.
In NODE_ENV=development, AgentRuntimeService.executeStep() automatically records each step to .agent-tracing/ as partial snapshots. When the operation completes, the partial is finalized into a complete ExecutionSnapshot JSON file.
Data flow: executeStep loop -> build StepPresentationData -> write partial snapshot to disk -> on completion, finalize to .agent-tracing/{timestamp}_{traceId}.json
Context engine capture: In RuntimeExecutors.ts, the call_llm executor emits a context_engine_result event after serverMessagesEngine() processes messages. This event carries the full contextEngineInput (DB messages, systemRole, model, knowledge, tools, userMemory, etc.) and the processed output messages (the final LLM payload).
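As a rough illustration of this recording flow, a minimal file-backed recorder might look like the sketch below. Only the function names (appendStepToPartial, finalizeSnapshot) and the on-disk paths come from this document; the signatures and snapshot fields shown here are assumptions, not the package's actual implementation.

```typescript
// Hypothetical sketch of the dev-mode recording flow: each step is appended
// to .agent-tracing/_partial/{operationId}.json, then finalized into
// .agent-tracing/{ISO-timestamp}_{traceId-short}.json on completion.
import { existsSync, mkdirSync, readFileSync, writeFileSync } from 'node:fs';
import { join } from 'node:path';

interface StepRecord {
  stepIndex: number;
  stepType: 'call_llm' | 'call_tool';
  content?: string;
}

interface PartialSnapshot {
  traceId: string;
  operationId: string;
  startedAt: number;
  steps: StepRecord[];
}

const TRACING_DIR = join(process.cwd(), '.agent-tracing');
const PARTIAL_DIR = join(TRACING_DIR, '_partial');

// Append one step to the in-progress partial snapshot on disk.
function appendStepToPartial(operationId: string, traceId: string, step: StepRecord): void {
  mkdirSync(PARTIAL_DIR, { recursive: true });
  const file = join(PARTIAL_DIR, `${operationId}.json`);
  const partial: PartialSnapshot = existsSync(file)
    ? JSON.parse(readFileSync(file, 'utf8'))
    : { traceId, operationId, startedAt: Date.now(), steps: [] };
  partial.steps.push(step);
  writeFileSync(file, JSON.stringify(partial, null, 2));
}

// Promote the partial into a finalized snapshot file and return its path.
function finalizeSnapshot(operationId: string): string {
  const partial: PartialSnapshot = JSON.parse(
    readFileSync(join(PARTIAL_DIR, `${operationId}.json`), 'utf8'),
  );
  const name = `${new Date(partial.startedAt).toISOString()}_${partial.traceId.slice(0, 8)}.json`;
  const outPath = join(TRACING_DIR, name);
  const finalized = { ...partial, completedAt: Date.now(), totalSteps: partial.steps.length };
  writeFileSync(outPath, JSON.stringify(finalized, null, 2));
  writeFileSync(join(TRACING_DIR, 'latest.json'), JSON.stringify(finalized, null, 2));
  return outPath;
}
```

Because everything is plain JSON under `.agent-tracing/`, a crash mid-operation leaves the partial behind, which is what `partial list` and `partial clean` (below) operate on.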
packages/agent-tracing/
src/
types.ts # ExecutionSnapshot, StepSnapshot, SnapshotSummary
store/
types.ts # ISnapshotStore interface
file-store.ts # FileSnapshotStore (.agent-tracing/*.json)
recorder/
index.ts # appendStepToPartial(), finalizeSnapshot()
viewer/
index.ts # Terminal rendering: renderSnapshot, renderStepDetail, renderMessageDetail, renderSummaryTable, renderPayload, renderPayloadTools, renderMemory
cli/
index.ts # CLI entry point (#!/usr/bin/env bun)
inspect.ts # Inspect command (default)
partial.ts # Partial snapshot commands (list, inspect, clean)
index.ts # Barrel exports
Output files:
.agent-tracing/{ISO-timestamp}_{traceId-short}.json — finalized snapshots
.agent-tracing/latest.json — most recent finalized snapshot
.agent-tracing/_partial/{operationId}.json — in-progress partial snapshots
FileSnapshotStore resolves from process.cwd() — run the CLI from the repo root. All commands run from the repo root:
# View latest trace (tree overview, `inspect` is the default command)
agent-tracing
agent-tracing inspect
agent-tracing inspect <traceId>
agent-tracing inspect latest
# List recent snapshots
agent-tracing list
agent-tracing list -l 20
# Inspect specific step (-s is short for --step)
agent-tracing inspect <traceId> -s 0
# View messages (-m is short for --messages)
agent-tracing inspect <traceId> -s 0 -m
# View full content of a specific message (by index shown in -m output)
agent-tracing inspect <traceId> -s 0 --msg 2
agent-tracing inspect <traceId> -s 0 --msg-input 1
# View tool call/result details (-t is short for --tools)
agent-tracing inspect <traceId> -s 1 -t
# View raw events (-e is short for --events)
agent-tracing inspect <traceId> -s 0 -e
# View runtime context (-c is short for --context)
agent-tracing inspect <traceId> -s 0 -c
# View context engine input overview (-p is short for --payload)
agent-tracing inspect <traceId> -p
agent-tracing inspect <traceId> -s 0 -p
# View available tools in payload (-T is short for --payload-tools)
agent-tracing inspect <traceId> -T
agent-tracing inspect <traceId> -s 0 -T
# View user memory (-M is short for --memory)
agent-tracing inspect <traceId> -M
agent-tracing inspect <traceId> -s 0 -M
# Raw JSON output (-j is short for --json)
agent-tracing inspect <traceId> -j
agent-tracing inspect <traceId> -s 0 -j
# List in-progress partial snapshots
agent-tracing partial list
# Inspect a partial (use `inspect` directly — all flags work with partial IDs)
agent-tracing inspect <partialOperationId>
agent-tracing inspect <partialOperationId> -T
agent-tracing inspect <partialOperationId> -p
# Clean up stale partial snapshots
agent-tracing partial clean
| Flag | Short | Description | Default Step |
|---|---|---|---|
| --step <n> | -s | Target a specific step | — |
| --messages | -m | Messages context (CE input → params → LLM payload) | — |
| --tools | -t | Tool calls & results (what agent invoked) | — |
| --events | -e | Raw events (llm_start, llm_result, etc.) | — |
| --context | -c | Runtime context & payload (raw) | — |
| --system-role | -r | Full system role content | 0 |
| --env | — | Environment context | 0 |
| --payload | -p | Context engine input overview (model, knowledge, tools summary, memory summary, platform context) | 0 |
| --payload-tools | -T | Available tools detail (plugin manifests + LLM function definitions) | 0 |
| --memory | -M | Full user memory (persona, identity, contexts, preferences, experiences) | 0 |
| --diff <n> | -d | Diff against step N (use with -r or --env) | — |
| --msg <n> | — | Full content of message N from Final LLM Payload | — |
| --msg-input <n> | — | Full content of message N from Context Engine Input | — |
| --json | -j | Output as JSON (combinable with any flag above) | — |
Flags marked "Default Step: 0" auto-select step 0 if --step is not provided. All flags support latest or omitted traceId.
# 1. Trigger an agent operation in the dev UI
# 2. See the overview
agent-tracing inspect
# 3. List all traces, get traceId
agent-tracing list
# 4. Quick overview of what was fed into context engine
agent-tracing inspect -p
# 5. Inspect a specific step's messages to see what was sent to the LLM
agent-tracing inspect TRACE_ID -s 0 -m
# 6. Drill into a truncated message for full content
agent-tracing inspect TRACE_ID -s 0 --msg 2
# 7. Check available tools vs actual tool calls
agent-tracing inspect -T # available tools
agent-tracing inspect -s 1 -t # actual tool calls & results
# 8. Inspect user memory injected into the conversation
agent-tracing inspect -M
# 9. Diff system role between steps (multi-step agents)
agent-tracing inspect TRACE_ID -r -d 2
interface ExecutionSnapshot {
traceId: string;
operationId: string;
model?: string;
provider?: string;
startedAt: number;
completedAt?: number;
completionReason?:
| 'done'
| 'error'
| 'interrupted'
| 'max_steps'
| 'cost_limit'
| 'waiting_for_human';
totalSteps: number;
totalTokens: number;
totalCost: number;
error?: { type: string; message: string };
steps: StepSnapshot[];
}
interface StepSnapshot {
stepIndex: number;
stepType: 'call_llm' | 'call_tool';
executionTimeMs: number;
content?: string; // LLM output
reasoning?: string; // Reasoning/thinking
inputTokens?: number;
outputTokens?: number;
toolsCalling?: Array<{ apiName: string; identifier: string; arguments?: string }>;
toolsResult?: Array<{
apiName: string;
identifier: string;
isSuccess?: boolean;
output?: string;
}>;
messages?: any[]; // DB messages before step
context?: { phase: string; payload?: unknown; stepContext?: unknown };
events?: Array<{ type: string; [key: string]: unknown }>;
// context_engine_result event contains:
// input: full contextEngineInput (messages, systemRole, model, knowledge, tools, userMemory, ...)
// output: processed messages array (final LLM payload)
}
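Since snapshots are plain JSON, they can also be consumed directly. The following sketch builds a sample snapshot matching the shapes above and renders a one-line-per-step summary — a rough analogue of the CLI's tree view; the sample values are illustrative only.

```typescript
// Minimal consumer of the snapshot shapes documented above (trimmed to the
// fields this example uses).
interface StepSnapshot {
  stepIndex: number;
  stepType: 'call_llm' | 'call_tool';
  executionTimeMs: number;
  content?: string;
  inputTokens?: number;
  outputTokens?: number;
}

interface ExecutionSnapshot {
  traceId: string;
  totalSteps: number;
  totalTokens: number;
  steps: StepSnapshot[];
}

// One summary line per step: index, type, duration, combined token count.
function summarize(snapshot: ExecutionSnapshot): string[] {
  return snapshot.steps.map(
    (s) =>
      `#${s.stepIndex} ${s.stepType} ${s.executionTimeMs}ms tokens=${(s.inputTokens ?? 0) + (s.outputTokens ?? 0)}`,
  );
}

const sample: ExecutionSnapshot = {
  traceId: 'demo-trace',
  totalSteps: 2,
  totalTokens: 320,
  steps: [
    { stepIndex: 0, stepType: 'call_llm', executionTimeMs: 820, inputTokens: 200, outputTokens: 120 },
    { stepIndex: 1, stepType: 'call_tool', executionTimeMs: 45 },
  ],
};

summarize(sample).forEach((line) => console.log(line));
```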
When using --messages, the output shows three sections (if context engine data is available):
Context Engine Input — messages listed with [0], [1], ... indices. Use --msg-input N to view full content.
Final LLM Payload — messages listed with [0], [1], ... indices. Use --msg N to view full content.
Key source locations:
src/server/services/agentRuntime/AgentRuntimeService.ts — in the executeStep() method, after building stepPresentationData, writes a partial snapshot in dev mode
src/server/modules/AgentRuntime/RuntimeExecutors.ts — in the call_llm executor, after serverMessagesEngine() returns, emits the context_engine_result event
FileSnapshotStore reads/writes .agent-tracing/ relative to process.cwd()
Weekly Installs: 68
GitHub Stars: 74.4K
First Seen: Mar 3, 2026
Security Audits: Gen Agent Trust Hub: Pass · Socket: Pass · Snyk: Fail
Installed on: kimi-cli (68), gemini-cli (68), amp (68), cline (68), github-copilot (68), codex (68)