# timeline-report by thedotmack/claude-mem
Install: `npx skills add https://github.com/thedotmack/claude-mem --skill timeline-report`
- Weekly installs: 109
- Repository: [thedotmack/claude-mem](https://github.com/thedotmack/claude-mem)
- GitHub stars: 40.4K
- First seen: 8 days ago
- Security audits: Gen Agent Trust Hub (Pass), Socket (Pass), Snyk (Pass)
- Installed on: gemini-cli (108), github-copilot (108), codex (108), amp (108), kimi-cli (108), cursor (108)
Generate a comprehensive narrative analysis of a project's entire development history using claude-mem's persistent memory timeline.
Use when users ask for a project retrospective, a development-history narrative, or a "journey report" for a project.
The claude-mem worker must be running on localhost:37777. The project must have claude-mem observations recorded.
### Step 1: Identify the Project
Ask the user which project to analyze if not obvious from context. The project name is typically the directory name of the project (e.g., "tokyo", "my-app"). If the user says "this project", use the current working directory's basename.
Worktree Detection: Before using the directory basename, check if the current directory is a git worktree. In a worktree, the data source is the parent project, not the worktree directory itself. Run:
```bash
git_dir=$(git rev-parse --git-dir 2>/dev/null)
git_common_dir=$(git rev-parse --git-common-dir 2>/dev/null)
if [ "$git_dir" != "$git_common_dir" ]; then
  # We're in a worktree -- resolve the parent project name
  parent_project=$(basename "$(dirname "$git_common_dir")")
  echo "Worktree detected. Parent project: $parent_project"
else
  parent_project=$(basename "$PWD")
fi
echo "$parent_project"
```
If a worktree is detected, use $parent_project (the basename of the parent repo) as the project name for all API calls. Inform the user: "Detected git worktree. Using parent project '[name]' as the data source."
### Step 2: Fetch the Timeline
Use Bash to fetch the complete timeline from the claude-mem worker API:
```bash
curl -s "http://localhost:37777/api/context/inject?project=PROJECT_NAME&full=true"
```
This returns the entire compressed timeline -- every observation, session boundary, and summary across the project's full history. The response is pre-formatted markdown optimized for LLM consumption.
Token estimates: the size of the full timeline depends on the depth of the project's history; long-running projects can reach hundreds of thousands of tokens.
If the response is empty or returns an error, the worker may not be running or the project name may be wrong. Try `curl -s "http://localhost:37777/api/search?query=*&limit=1"` to verify the worker is healthy.
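For reference, the fetch URL construction can be sketched as a tiny helper. This is a sketch, not part of the claude-mem API: `build_inject_url` is a hypothetical name; the endpoint itself is as documented above.

```bash
# Hypothetical helper that builds the documented inject URL for a project.
# Project names here are simple directory basenames, so no URL-encoding is done.
build_inject_url() {
  printf 'http://localhost:37777/api/context/inject?project=%s&full=true' "$1"
}

# Usage (requires a running worker):
#   timeline=$(curl -s "$(build_inject_url tokyo)")
```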
### Step 3: Estimate Token Cost
Before proceeding, estimate the token count of the fetched timeline (roughly 1 token per 4 characters). Report this to the user:
Timeline fetched: ~X observations, estimated ~Yk tokens.
This analysis will consume approximately Yk input tokens + ~5-10k output tokens.
Proceed? (y/n)
Wait for user confirmation before continuing if the timeline exceeds 100K tokens.
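The estimate above can be sketched as a small helper. The ~4-characters-per-token heuristic comes from this document; `estimate_tokens` is a hypothetical name.

```bash
# Rough token estimate: ~1 token per 4 characters of timeline text.
estimate_tokens() {
  local text="$1"
  echo $(( ${#text} / 4 ))
}

# Usage (assumes $timeline already holds the fetched markdown):
#   tokens=$(estimate_tokens "$timeline")
#   echo "Timeline fetched: estimated ~$(( tokens / 1000 ))k tokens."
```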
### Step 4: Deploy the Analysis Agent
Deploy an agent (using the Task tool) with the full timeline and the following analysis prompt. Pass the ENTIRE timeline as context to the agent. The agent should also be instructed to query the SQLite database at ~/.claude-mem/claude-mem.db for the "Token Economics & Memory ROI" section.
Agent prompt:
You are a technical historian analyzing a software project's complete development timeline from claude-mem's persistent memory system. The timeline below contains every observation, session boundary, and summary recorded across the project's entire history.
You also have access to the claude-mem SQLite database at ~/.claude-mem/claude-mem.db. Use it to run queries for the Token Economics & Memory ROI section. The database has an "observations" table with columns: id, memory_session_id, project, text, type, title, subtitle, facts, narrative, concepts, files_read, files_modified, prompt_number, discovery_tokens, created_at, created_at_epoch, source_tool, source_input_summary.
Write a comprehensive narrative report titled "Journey Into [PROJECT_NAME]" that covers:
## Required Sections
1. **Project Genesis** -- When and how the project started. What were the first commits, the initial vision, the founding technical decisions? What problem was being solved?
2. **Architectural Evolution** -- How did the architecture change over time? What were the major pivots? Why did they happen? Trace the evolution from initial design through each significant restructuring.
3. **Key Breakthroughs** -- Identify the "aha" moments: when a difficult problem was finally solved, when a new approach unlocked progress, when a prototype first worked. These are the observations where the tone shifts from investigation to resolution.
4. **Work Patterns** -- Analyze the rhythm of development. Identify debugging cycles (clusters of bug fixes), feature sprints (rapid observation sequences), refactoring phases (architectural changes without new features), and exploration phases (many discoveries without changes).
5. **Technical Debt** -- Track where shortcuts were taken and when they were paid back. Identify patterns of accumulation (rapid feature work) and resolution (dedicated refactoring sessions).
6. **Challenges and Debugging Sagas** -- The hardest problems encountered. Multi-session debugging efforts, architectural dead-ends that required backtracking, platform-specific issues that took days to resolve.
7. **Memory and Continuity** -- How did persistent memory (claude-mem itself, if applicable) affect the development process? Were there moments where recalled context from prior sessions saved significant time or prevented repeated mistakes?
8. **Token Economics & Memory ROI** -- Quantitative analysis of how memory recall saved work:
- Query the database directly for these metrics using `sqlite3 ~/.claude-mem/claude-mem.db`
- Count total discovery_tokens across all observations (the original cost of all work)
- Count sessions that had context injection available (sessions after the first)
- Calculate the compression ratio: average discovery_tokens vs average read_tokens per observation
- Identify the highest-value observations (highest discovery_tokens -- these are the most expensive decisions, bugs, and discoveries that memory prevents re-doing)
- Identify explicit recall events (observations where source_tool contains "search", "smart_search", "get_observations", "timeline", or where narrative mentions "recalled", "from memory", "previous session")
- Estimate passive recall savings: each session with context injection receives ~50 observations. Use a 30% relevance factor (conservative estimate that 30% of injected context prevents re-work). Savings = sessions_with_context × avg_discovery_value_of_50_obs_window × 0.30
- Estimate explicit recall savings: ~10K tokens per explicit recall query
- Calculate net ROI: total_savings / total_read_tokens_invested
- Present as a table with monthly breakdown
- Highlight the top 5 most expensive observations by discovery_tokens -- these represent the highest-value memories in the system (architecture decisions, hard bugs, implementation plans that cost 100K+ tokens to produce originally)
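The passive-recall estimate above reduces to simple arithmetic; here is a minimal sketch. The 30% relevance factor comes from this document, while the function name and example inputs are illustrative, not real project data.

```bash
# Savings = sessions_with_context x avg_discovery_value_of_50_obs_window x 0.30
# Integer token math; shell arithmetic truncates toward zero.
passive_recall_savings() {
  local sessions_with_context=$1 avg_window_value=$2
  echo $(( sessions_with_context * avg_window_value * 30 / 100 ))
}

# Example: 20 sessions with context, 100000-token average window value
#   passive_recall_savings 20 100000   # -> 600000 tokens saved
```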
Use these SQL queries as a starting point:
```sql
-- Total discovery tokens
SELECT SUM(discovery_tokens) FROM observations WHERE project = 'PROJECT_NAME';
-- Sessions with context available (all sessions minus the first)
SELECT COUNT(DISTINCT memory_session_id) - 1 FROM observations WHERE project = 'PROJECT_NAME';
-- Average tokens per observation
SELECT AVG(discovery_tokens) as avg_discovery, AVG(LENGTH(title || COALESCE(subtitle,'') || COALESCE(narrative,'') || COALESCE(facts,'')) / 4) as avg_read FROM observations WHERE project = 'PROJECT_NAME' AND discovery_tokens > 0;
-- Top 5 most expensive observations (highest-value memories)
SELECT id, title, discovery_tokens FROM observations WHERE project = 'PROJECT_NAME' ORDER BY discovery_tokens DESC LIMIT 5;
-- Monthly breakdown
SELECT strftime('%Y-%m', created_at) as month, COUNT(*) as obs, SUM(discovery_tokens) as total_discovery, COUNT(DISTINCT memory_session_id) as sessions FROM observations WHERE project = 'PROJECT_NAME' GROUP BY month ORDER BY month;
-- Explicit recall events
SELECT COUNT(*) FROM observations WHERE project = 'PROJECT_NAME' AND (source_tool LIKE '%search%' OR source_tool LIKE '%timeline%' OR source_tool LIKE '%get_observations%' OR narrative LIKE '%recalled%' OR narrative LIKE '%from memory%' OR narrative LIKE '%previous session%');
```
9. **Timeline Statistics** -- Quantitative summary:
* Date range (first observation to last)
* Total observations and sessions
* Breakdown by observation type (features, bug fixes, discoveries, decisions, changes)
* Most active days/weeks
* Longest debugging sessions
10. **Lessons and Meta-Observations** -- What patterns emerge from the full history? What would a new developer learn about this codebase from reading the timeline? What recurring themes or principles guided development?
Here is the complete project timeline:
[TIMELINE CONTENT GOES HERE]
### Step 5: Save the Report
Save the agent's output as a markdown file. Default location:
./journey-into-PROJECT_NAME.md
Or if the user specified a different output path, use that instead.
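A minimal sketch of the save step. The path pattern is from this document; `$report` and the project name below are placeholders standing in for the agent's actual output.

```bash
# Sketch: write the agent's report to the default location.
report="# Journey Into tokyo"           # placeholder for the agent's output
project="tokyo"                         # example project name
outfile="./journey-into-${project}.md"  # default location from this document
printf '%s\n' "$report" > "$outfile"
```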
### Step 6: Report Completion
Tell the user:
- Where the report was saved
- The approximate token cost (input timeline + output report)
- The date range covered
- Number of observations analyzed
## Error Handling
- **Empty timeline:** "No observations found for project 'X'. Check the project name with: `curl -s 'http://localhost:37777/api/search?query=*&limit=1'`"
- **Worker not running:** "The claude-mem worker is not responding on port 37777. Start it with your usual method or check `ps aux | grep worker-service`."
- **Timeline too large:** For projects with 50,000+ observations, the timeline may exceed context limits. The `full=true` endpoint currently returns all observations and offers no date filtering, so for extremely large projects suggest analyzing the history in time-windowed segments instead.
## Example
User: "Write a journey report for the tokyo project"
1. Fetch: `curl -s "http://localhost:37777/api/context/inject?project=tokyo&full=true"`
2. Estimate: "Timeline fetched: ~34,722 observations, estimated ~718K tokens. Proceed?"
3. User confirms
4. Deploy analysis agent with full timeline
5. Save to `./journey-into-tokyo.md`
6. Report: "Report saved. Analyzed 34,722 observations spanning Oct 2025 - Mar 2026 (~718K input tokens, ~8K output tokens)."