npx skills add https://github.com/rysweet/amplihack --skill session-replay
This skill analyzes claude-trace JSONL files to provide insights into Claude Code session health, token usage patterns, error frequencies, and agent effectiveness. It complements the /transcripts command by focusing on API-level trace data rather than conversation transcripts.
User: Analyze my latest session health
I'll analyze the most recent trace file:
from pathlib import Path

# Read latest trace file from .claude-trace/
trace_dir = Path(".claude-trace")
trace_files = sorted(trace_dir.glob("*.jsonl"), key=lambda f: f.stat().st_mtime)
latest = trace_files[-1] if trace_files else None
# Parse and analyze
if latest:
analysis = analyze_trace_file(latest)
print(format_session_report(analysis))
User: Compare token usage across my last 5 sessions
I'll aggregate metrics across sessions:
from pathlib import Path

trace_files = sorted(Path(".claude-trace").glob("*.jsonl"))[-5:]
comparison = compare_sessions(trace_files)
print(format_comparison_table(comparison))
health: Analyze session health metrics from a trace file.
What to do:
Metrics to extract:
# From each JSONL line containing a request/response pair:
{
  "timestamp": "...",
  "request": {
    "method": "POST",
    "url": "https://api.anthropic.com/v1/messages",
    "body": {
      "model": "claude-...",
      "messages": [...],
      "tools": [...]
    }
  },
  "response": {
    "usage": {
      "input_tokens": N,
      "output_tokens": N
    },
    "content": [...],
    "stop_reason": "..."
  }
}
Output format:
Session Health Report
=====================
File: log-2025-11-23-19-32-36.jsonl
Duration: 45 minutes
Token Usage:
- Input: 125,432 tokens
- Output: 34,521 tokens
- Total: 159,953 tokens
- Efficiency: 27.5% output ratio
Request Stats:
- Total requests: 23
- Average latency: 2.3s
- Errors: 2 (8.7%)
Tool Usage:
- Read: 45 calls
- Edit: 12 calls
- Bash: 8 calls
- Grep: 15 calls
Health Score: 82/100 (Good)
- Minor issue: 2 errors detected
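A report like the one above can be driven by the extracted metrics. A minimal scoring sketch; the deduction weights and thresholds here are illustrative assumptions, not the skill's actual formula:

```python
def score_session_health(metrics: dict) -> tuple:
    """Compute an illustrative 0-100 health score from session metrics.

    Assumed weights: each percentage point of error rate costs one
    point, and a very low output/input token ratio (under 10%) costs
    a flat 10 points.
    """
    score = 100
    requests = metrics.get("request_count", 0)
    if requests:
        score -= int(metrics.get("error_count", 0) / requests * 100)
    inp = metrics.get("total_input_tokens", 0)
    out = metrics.get("total_output_tokens", 0)
    if inp and out / inp < 0.10:
        score -= 10
    score = max(0, min(100, score))
    label = "Good" if score >= 75 else "Fair" if score >= 50 else "Poor"
    return score, label

print(score_session_health({
    "request_count": 23, "error_count": 2,
    "total_input_tokens": 125_432, "total_output_tokens": 34_521,
}))  # (92, 'Good')
```

With the sample report's numbers this yields 92/100; the report's 82 presumably reflects additional factors not modeled here.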
errors: Identify error patterns across sessions.
What to do:
Error categories to detect:
Output format:
Error Analysis
==============
Sessions analyzed: 5
Total errors: 12
Error Categories:
1. Rate limit (429): 5 occurrences
- Recommendation: Add delays between requests
2. Token limit: 3 occurrences
- Recommendation: Use context management skill
3. Tool failures: 4 occurrences
- Bash timeout: 2
- File not found: 2
- Recommendation: Check paths before operations
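Detection of the first two categories can be sketched as follows; the keyword matching against the error's type and message fields is an assumption about typical API error shapes, not the skill's exact rules:

```python
from collections import Counter

def categorize_error(entry: dict) -> str:
    """Map a trace entry's error object to a coarse category."""
    error = entry.get("response", {}).get("error") or {}
    etype = str(error.get("type", "")).lower()
    message = str(error.get("message", "")).lower()
    if "rate_limit" in etype or "429" in message:
        return "rate_limit"
    if "token" in message or "context" in message:
        return "token_limit"
    return "other"

def error_breakdown(entries: list) -> Counter:
    """Count error categories across parsed trace entries."""
    return Counter(
        categorize_error(e)
        for e in entries
        if e.get("response", {}).get("error")
    )
```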
compare: Compare metrics across multiple sessions.
What to do:
Output format:
Session Comparison
==================
                Session 1   Session 2   Session 3   Trend
Tokens (total)  150K        180K        120K        -17%
Requests        25          30          18          -28%
Errors          2           0           1           stable
Duration (min)  45          60          30          -33%
Efficiency      0.27        0.32        0.35        +7%
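The Trend column can be computed per metric once metrics have been extracted for each file. One simple convention (first session vs. last, with small moves reported as stable; the 5% cutoff is an assumption):

```python
def trend(values: list) -> str:
    """Percent change from the first to the last value in a series."""
    if not values or values[0] == 0:
        return "n/a"
    pct = (values[-1] - values[0]) / values[0] * 100
    if abs(pct) < 5:
        return "stable"
    return f"{pct:+.0f}%"

print(trend([25, 30, 18]))  # -28%
print(trend([45, 60, 30]))  # -33%
```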
tools: Analyze tool usage patterns.
What to do:
Patterns to detect:
Output format:
Tool Usage Analysis
===================
Tool   Calls   Avg Time   Success Rate
Read   45      0.1s       100%
Edit   12      0.3s       92%
Bash   8       1.2s       75%
Grep   15      0.2s       100%
Task   3       45s        100%
Optimization Opportunities:
1. 5 Read calls to same file within 2 minutes
- Consider caching strategy
2. 3 sequential Bash calls could be parallelized
- Use multiple Bash calls in single message
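The first optimization opportunity (repeated reads of one file inside a short window) can be detected with a sliding window over tool calls. A sketch; the (timestamp, tool, target) tuple shape is an assumed output of a tool-call extractor, and the 2-minute/5-call thresholds mirror the example above:

```python
def find_repeated_reads(calls: list, window: float = 120.0,
                        threshold: int = 5) -> list:
    """Flag targets Read `threshold`+ times within `window` seconds.

    `calls` is a list of (timestamp, tool_name, target) tuples.
    """
    flagged = []
    reads = {}  # target -> recent timestamps
    for ts, tool, target in calls:
        if tool != "Read":
            continue
        times = reads.setdefault(target, [])
        times.append(ts)
        # Keep only timestamps inside the sliding window
        times[:] = [t for t in times if ts - t <= window]
        if len(times) >= threshold and target not in flagged:
            flagged.append(target)
    return flagged
```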
Claude-trace files are JSONL format with request/response pairs:
import json
from pathlib import Path
from typing import Any, Dict, List, Tuple

def parse_trace_file(path: Path) -> List[Dict[str, Any]]:
    """Parse a claude-trace JSONL file."""
    entries = []
    with open(path) as f:
        for line in f:
            if line.strip():
                try:
                    entries.append(json.loads(line))
                except json.JSONDecodeError:
                    continue  # Skip malformed lines
    return entries

def extract_metrics(entries: List[Dict]) -> Dict[str, Any]:
    """Extract session metrics from trace entries."""
    metrics = {
        "total_input_tokens": 0,
        "total_output_tokens": 0,
        "request_count": 0,
        "error_count": 0,
        "tool_usage": {},
        "timestamps": [],
    }
    for entry in entries:
        if "request" in entry:
            metrics["request_count"] += 1
            metrics["timestamps"].append(entry.get("timestamp", 0))
        if "response" in entry:
            usage = entry["response"].get("usage", {})
            metrics["total_input_tokens"] += usage.get("input_tokens", 0)
            metrics["total_output_tokens"] += usage.get("output_tokens", 0)
            # Check for errors
            if entry["response"].get("error"):
                metrics["error_count"] += 1
        # Extract tool usage from request body
        if "request" in entry and "body" in entry["request"]:
            body = entry["request"]["body"]
            if isinstance(body, dict) and "tools" in body:
                for tool in body["tools"]:
                    name = tool.get("name", "unknown")
                    metrics["tool_usage"][name] = metrics["tool_usage"].get(name, 0) + 1
    return metrics

def find_trace_files(trace_dir: str = ".claude-trace") -> List[Path]:
    """Find all trace files, sorted by modification time."""
    trace_path = Path(trace_dir)
    if not trace_path.exists():
        return []
    return sorted(
        trace_path.glob("*.jsonl"),
        key=lambda f: f.stat().st_mtime,
        reverse=True,  # Most recent first
    )
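As a self-contained sanity check of the JSONL shape these helpers expect, a sketch that totals token usage from two synthetic trace lines (all values invented):

```python
import io
import json

# Two synthetic trace lines matching the schema above.
sample = (
    '{"timestamp": 1, "request": {"body": {"model": "claude-x"}}, '
    '"response": {"usage": {"input_tokens": 100, "output_tokens": 40}}}\n'
    '{"timestamp": 2, "request": {"body": {"model": "claude-x"}}, '
    '"response": {"usage": {"input_tokens": 200, "output_tokens": 60}}}\n'
)

totals = {"input": 0, "output": 0}
for line in io.StringIO(sample):
    entry = json.loads(line)
    usage = entry.get("response", {}).get("usage", {})
    totals["input"] += usage.get("input_tokens", 0)
    totals["output"] += usage.get("output_tokens", 0)

print(totals)  # {'input': 300, 'output': 100}
```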
Handle common error scenarios gracefully:
def safe_parse_trace_file(path: Path) -> Tuple[List[Dict], List[str]]:
    """Parse trace file with error collection for malformed lines.

    Returns:
        Tuple of (valid_entries, error_messages)
    """
    entries = []
    errors = []
    if not path.exists():
        return [], [f"Trace file not found: {path}"]
    try:
        with open(path) as f:
            for line_num, line in enumerate(f, 1):
                if not line.strip():
                    continue
                try:
                    entries.append(json.loads(line))
                except json.JSONDecodeError as e:
                    errors.append(f"Line {line_num}: Invalid JSON - {e}")
    except PermissionError:
        return [], [f"Permission denied: {path}"]
    except UnicodeDecodeError:
        return [], [f"Encoding error: {path} (expected UTF-8)"]
    return entries, errors

def format_error_report(errors: List[str], path: Path) -> str:
    """Format error report for user display."""
    if not errors:
        return ""
    report = f"""
Trace File Issues
=================
File: {path.name}
Issues found: {len(errors)}
"""
    for error in errors[:10]:  # Limit to first 10
        report += f"- {error}\n"
    if len(errors) > 10:
        report += f"\n... and {len(errors) - 10} more issues"
    return report
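The skip-and-report behavior can be seen on synthetic input; a self-contained sketch with one valid line, one blank line, and one truncated write:

```python
import json

# Mix of valid, blank, and corrupted lines, as might appear after an
# interrupted write; the parser keeps what it can and reports the rest.
raw_lines = [
    '{"timestamp": 1, "response": {"usage": {"input_tokens": 5}}}',
    '',
    '{"timestamp": 2, "resp',  # truncated write
]

entries, errors = [], []
for line_num, line in enumerate(raw_lines, 1):
    if not line.strip():
        continue
    try:
        entries.append(json.loads(line))
    except json.JSONDecodeError as e:
        errors.append(f"Line {line_num}: Invalid JSON - {e}")

print(len(entries), len(errors))  # 1 1
```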
Common error scenarios:
| Scenario | Cause | Handling |
|---|---|---|
| Empty file | Session had no API calls | Report "No data to analyze" |
| Malformed JSON | Corrupted trace or interrupted write | Skip line, count in error report |
| Missing fields | Older trace format | Use .get() with defaults |
| Permission denied | File locked by another process | Clear error message, suggest retry |
| Encoding error | Non-UTF-8 characters | Report encoding issue |
| Need | Use This | Why |
|---|---|---|
| "Why was my session slow?" | session-replay | API latency and token metrics |
| "What did I discuss last session?" | /transcripts | Conversation content |
| "Extract learnings from sessions" | CodexTranscriptsBuilder | Knowledge extraction |
| "Reduce my token usage" | session-replay + context_management | Metrics + optimization |
| "Resume interrupted work" | /transcripts | Context restoration |
/transcripts (conversation management):
session-replay skill (API-level analysis):
CodexTranscriptsBuilder (knowledge extraction):
session-replay skill (metrics analysis):
Workflow 1: Diagnose and Fix Token Issues
1. session-replay: Analyze token usage patterns (health action)
2. Identify high-token operations
3. context_management skill: Apply proactive trimming
4. session-replay: Compare before/after sessions (compare action)
Workflow 2: Post-Incident Analysis
1. session-replay: Identify error patterns (errors action)
2. /transcripts: Review conversation context around errors
3. session-replay: Check tool usage around failures (tools action)
4. Document findings in DISCOVERIES.md
Workflow 3: Performance Baseline
1. session-replay: Analyze 5-10 recent sessions (compare action)
2. Establish baseline metrics (tokens, latency, errors)
3. Track deviations from baseline over time
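Step 3 of the baseline workflow can be sketched as a deviation check; the two-standard-deviation threshold is an illustrative assumption:

```python
from statistics import mean, stdev

def deviates_from_baseline(history: list, latest: float, k: float = 2.0) -> bool:
    """Return True if `latest` is more than k standard deviations from
    the mean of `history` (e.g. total tokens per session)."""
    if len(history) < 2:
        return False  # Not enough sessions to establish a baseline
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return latest != mu
    return abs(latest - mu) > k * sigma

baseline = [150_000, 160_000, 155_000, 148_000, 152_000]
print(deviates_from_baseline(baseline, 300_000))  # True
print(deviates_from_baseline(baseline, 155_000))  # False
```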
Trace data locations:
- .claude-trace/*.jsonl
- ~/.amplihack/.claude/runtime/logs/<session_id>/

This skill CANNOT:
- Create trace files (that requires the claude-trace npm package)
- Restore sessions (use the /transcripts command for session restoration)

Suggested analysis order:
- Run the health action first
- Use errors to find recurring issues
- Use tools to find inefficiencies
- Use compare across sessions
- Use /transcripts for context

User: My last session was really slow, analyze it
1. Run health action on latest trace
2. Check request latencies
3. Identify tool bottlenecks
4. Report findings with recommendations
User: I'm hitting token limits, help me understand usage
1. Compare token usage across sessions
2. Identify high-token operations
3. Suggest context management strategies
4. Recommend workflow optimizations
User: I keep getting errors, find the pattern
1. Run errors action across last 10 sessions
2. Categorize and count error types
3. Identify root causes
4. Provide targeted fixes
See also: .claude-trace/, /transcripts, context-management, ~/.amplihack/.claude/context/PHILOSOPHY.md

Symptom: "No trace files in .claude-trace/"

Causes and fixes:
- Tracing was not enabled: set AMPLIHACK_USE_TRACE=1 before starting the session
- Wrong working directory: run from a project root that contains the .claude-trace/ directory

Symptom: Missing token counts or zero values
Causes and fixes:
Symptom: Score doesn't match session experience
Understanding the score:
Factors in health score:
Symptom: Analysis is slow or memory-intensive
Solutions:
- Use the tools action for targeted analysis
- Archive old traces: mv .claude-trace/old-*.jsonl .claude-trace/archive/

This skill provides session-level debugging and optimization insights. It complements transcript management with API-level visibility. Use it to diagnose issues, optimize workflows, and understand Claude Code behavior patterns.
Key Takeaway: Trace files contain the raw truth about session performance. This skill extracts actionable insights from that data.
Weekly Installs: 87
Repository: https://github.com/rysweet/amplihack
GitHub Stars: 43
First Seen: Jan 23, 2026
Security Audits: Gen Agent Trust Hub: Pass, Socket: Pass, Snyk: Fail
Installed on: opencode (78), claude-code (76), codex (72), cursor (70), gemini-cli (69), github-copilot (68)