Important prerequisite
Installing AI Skills from mainland China requires a working network proxy with TUN mode enabled; without it, installation will fail.

npx skills add https://github.com/rysweet/amplihack --skill knowledge-extractor
This skill automatically extracts, synthesizes, and preserves knowledge from conversations, debugging sessions, failed attempts, and solved problems. It converts ephemeral interactions into persistent organizational knowledge that improves future performance.
- Session Context: Ephemeral conversation context that will be lost without active preservation
- Persistent Knowledge: Structured learnings that improve future performance
- Pattern Recognition: Identifying when solutions are repeated and should be automated
- Organizational Growth: Converting individual learning into system-wide improvement
What it captures: Problems encountered, root causes identified, solutions implemented
When to extract:
Format for DISCOVERIES.md:
## [Brief Title] (YYYY-MM-DD)
### Issue
What problem or challenge was encountered?
### Root Cause
Why did this happen? What was the underlying issue?
### Solution
How was it resolved? Include code examples if relevant.
### Key Learnings
What insights were gained? What should be remembered?
### Prevention
How can this be avoided in the future?
Quality Criteria:
What it captures: Proven solutions to recurring problems, architectural approaches, design patterns
When to extract:
Format for PATTERNS.md:
## Pattern: [Name]
### Challenge
What problem does this pattern solve?
### Solution
How does the pattern work? Include code/examples.
### Key Points
- Main insight 1
- Main insight 2
- When to use / when not to use
### When to Use
Specific scenarios where this pattern applies.
### Real Impact
Where has this pattern been used successfully?
### Related Patterns
Links to similar or complementary patterns.
Quality Criteria:
What it captures: Workflows that are repeated frequently, specialized expertise areas, complex multi-step processes
When to extract:
Agent Creation Trigger Checklist:
Example Agent Creation:
## Recommended New Agent: [domain]-[specialty]
### Problem
What repeated workflow would this agent handle?
### Scope
What's in scope | What's explicitly out of scope
### Inputs
What information does the agent need?
### Process
Step-by-step workflow the agent follows
### Outputs
What does the agent produce?
### Value
How much time/effort does this save?
### Integration
Where in the workflow does this fit?
Review entire conversation/session:
1. What was the original problem/request?
2. What approaches were tried?
3. Which attempts failed and why?
4. What succeeded and why?
5. What was learned in the process?
6. What surprised you?
7. What took longer than expected?
8. What would have helped?
Identify patterns in the work:
1. Have I seen this problem before? (→ DISCOVERIES)
2. Is this a generalizable solution? (→ PATTERNS)
3. Would this be worth automating? (→ AGENT)
4. What was the root cause? (Why, not just what)
5. What should others know about this?
6. What should be remembered to avoid repetition?
Extract and structure knowledge:
For DISCOVERIES.md:
For PATTERNS.md:
For New Agent:
Place knowledge in correct locations:
Memory → Store discovery using store_discovery() from amplihack.memory.discoveries
PATTERNS.md → New pattern in appropriate section
Agent → Create in .claude/agents/amplihack/specialized/
Ensure quality:
[ ] Entry is specific, not generic
[ ] Entry follows template format
[ ] Examples/code provided
[ ] Quality sufficient for reuse
[ ] Grammar and clarity checked
[ ] Related items cross-referenced
Scenario: Just spent 45 minutes debugging why pre-commit hooks weren't working
User: Extract what we learned from debugging the pre-commit issue.
Claude:
1. Analyzes conversation showing:
- Initial problem: hooks appeared to run but didn't modify files
- False leads: blamed merge conflicts, tool versions
- Root cause found: Hooks couldn't write to files due to cloud sync lock
- Solution: Added exponential backoff retry logic
- Time to discover: 45 minutes
2. Creates DISCOVERIES.md entry:
- Issue: Pre-commit hooks silently failing
- Root Cause: Cloud sync (OneDrive) file locks
- Solution: Exponential backoff retry (0.1s, 0.2s, 0.4s...)
- Key Learning: Cloud sync issues silent - need defensive coding
- Prevention: Add retry logic for file I/O, test with cloud storage
3. Offers to also:
- Add pattern for "Resilient File I/O" to PATTERNS.md
- Suggest new agent for cloud-sync issue detection
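The exponential-backoff retry mentioned in the entry above could look like this minimal sketch (the function name and delays are illustrative, not amplihack's actual fix):

```python
import time
from pathlib import Path

def write_with_retry(path: Path, data: str,
                     retries: int = 5, base_delay: float = 0.1) -> None:
    """Write a file, retrying on OSError with exponential backoff
    (0.1s, 0.2s, 0.4s, ...) to ride out transient cloud-sync file locks."""
    delay = base_delay
    for attempt in range(retries):
        try:
            path.write_text(data)
            return
        except OSError:
            if attempt == retries - 1:
                raise  # exhausted retries: surface the error loudly
            time.sleep(delay)
            delay *= 2  # exponential backoff

write_with_retry(Path("example.txt"), "hello")
```

Re-raising on the final attempt matters: the original bug was a silent failure, and defensive retry code should never reintroduce one.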
Scenario: Just finished implementing the module regeneration strategy
User: Extract the pattern we discovered for building regeneratable modules.
Claude:
1. Analyzes implementation showing:
- Clear __all__ exports defining public interface
- Self-contained module structure
- Type-hinted functions with docstrings
- Comprehensive test coverage
- Working examples
2. Creates PATTERNS.md entry:
- Pattern: Module Regeneration Structure
- Challenge: Modules become tightly coupled, hard to regenerate
- Solution: Brick & studs design with __all__ public API
- Key Points: Clear contracts, single responsibility, testable behavior
- When to Use: Any utility module, anything that might be regenerated
- Real Impact: Enables AI to rebuild modules without breaking system
3. Cross-references:
- Relates to "Bricks & Studs Module Design"
- Enables "Builder Agent Workflow"
Scenario: Just debugged a CI failure using the same 5-step diagnostic process for the third time
User: We keep using the same CI debugging process. Should we automate it?
Claude:
1. Analyzes past sessions:
- Session 1: 45 minutes CI debugging (version mismatch issue)
- Session 2: 30 minutes CI debugging (pre-commit hook issue)
- Session 3: 25 minutes CI debugging (merge conflict issue)
- Pattern: Same 5-step process each time
2. Recommends new agent:
- Name: ci-diagnostic-workflow
- Problem: Each CI failure requires manual diagnosis, takes 30-45 min
- Scope: Diagnoses CI failures, suggests fixes (doesn't auto-fix)
- Process:
1. Check environment (Python version, tool versions)
2. Analyze CI logs for error patterns
3. Detect common issues (conflicts, missing deps)
4. Suggest solutions
5. Coordinate parallel diagnostic agents
- Value: Reduce diagnosis time from 45 min → 15 min
3. Proposes creating:
- .claude/agents/amplihack/ci-diagnostic-workflow.md
- Can delegate to: fix-agent for implementation
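Step 1 of the proposed agent's process (check environment) could be sketched like this; the function and message format are assumptions, not the agent's actual implementation:

```python
import sys

def check_python_version(expected: tuple[int, int]) -> str:
    """First diagnostic step: flag local-vs-CI interpreter mismatches
    (e.g. local 3.12 vs CI 3.11) before digging into logs."""
    actual = sys.version_info[:2]
    if actual != expected:
        return (f"MISMATCH: running {actual[0]}.{actual[1]}, "
                f"CI expects {expected[0]}.{expected[1]}")
    return "OK"

print(check_python_version((3, 11)))
```

Cheap environment checks like this run first because version mismatches were the culprit in one of the three sessions analyzed above.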
Scenario: Complex architectural decision with multiple learnings
User: Extract everything we learned from deciding on the memory architecture.
Claude:
1. Session involved:
- 2 failed approaches (too complex, wrong threat model)
- 1 successful approach (simple, fits threat model)
- Discovery of meta-pattern for architecture decisions
2. Creates THREE extractions:
DISCOVERIES.md:
- Issue: Pattern Applicability Analysis
- Root Cause: Importing patterns without validating threat match
- Solution: Five-phase applicability analysis framework
- Learning: Threat model match is critical first check
PATTERNS.md:
- New Pattern: Threat Model Precision Principle
- Challenge: Fault tolerance mechanisms mismatch problem space
- Solution: Validate threat model before adopting patterns
- When: Before adopting any "best practice" from different domain
Recommended Agent:
- Name: pattern-applicability-analyzer
- Automate: Quick assessment of pattern applicability
- Value: Prevent adopting wrong patterns early
Before finalizing an extraction, verify:
- store_discovery() persists discoveries to memory during a session; get_recent_discoveries() retrieves them at session start.
- Without knowledge extraction: repeat the same 45-minute debugging process. With extraction: retrieve from memory, fix in 10 minutes.
- Without extraction: rediscover solutions from scratch. With extraction: reference PATTERNS.md, apply the known solution.
- Without extraction: manual CI debugging every time (30-45 min). With new agent: automated diagnosis in 5-10 minutes.
BAD: "Learned that good error handling is important"
GOOD: "Discovered cloud sync issues cause silent file I/O failures - need exponential backoff retry"
BAD: "CI failed, fixed it"
GOOD: "CI failed because version mismatch (local 3.12 vs CI 3.11) - fixed by updating pyproject.toml version constraint"
BAD: "This was complicated"
GOOD: "Multi-layer sanitization at every data transformation prevents credential leakage"
BAD: "Always use caching everywhere"
GOOD: "Use smart caching with lifecycle management for expensive operations where results may become stale"
BAD: "Create agent for task that happens once per quarter"
GOOD: "Create agent for CI debugging workflow that happens 2-3x per week and takes 30-45 minutes"
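The "smart caching with lifecycle management" example above can be made concrete with a minimal TTL cache sketch (class name and API are illustrative):

```python
import time
from typing import Any, Callable

class TTLCache:
    """Cache results of expensive operations, expiring entries after
    ttl seconds so stale results get recomputed instead of served."""

    def __init__(self, ttl: float):
        self.ttl = ttl
        self._store: dict[Any, tuple[float, Any]] = {}

    def get_or_compute(self, key: Any, compute: Callable[[], Any]) -> Any:
        now = time.monotonic()
        hit = self._store.get(key)
        if hit is not None and now - hit[0] < self.ttl:
            return hit[1]  # fresh cache hit
        value = compute()  # miss or expired: recompute
        self._store[key] = (now, value)
        return value

cache = TTLCache(ttl=60.0)
result = cache.get_or_compute("report", lambda: 40 + 2)
```

This matches the GOOD guidance: caching is scoped to expensive operations whose results can go stale, rather than applied "everywhere".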
Use these prompts to trigger knowledge extraction:
Extract what we discovered/learned from this session.
Focus on: root causes, unexpected behaviors, solutions that worked.
Update DISCOVERIES.md appropriately.
What patterns should we capture for future reuse?
These should be proven solutions that apply to multiple situations.
Update PATTERNS.md appropriately.
Should we create a new agent to automate any repeated workflows?
Check if any workflow has been done 2+ times and takes 30+ minutes.
Recommend creation with scope and value calculation.
Perform complete knowledge extraction on this session.
Extract: discoveries, patterns, and agent creation recommendations.
Verify quality and update all three knowledge bases.
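The "2+ times, 30+ minutes" threshold in the agent-creation prompt above can be sketched as a simple heuristic; the function names and the value formula are assumptions for illustration:

```python
def worth_automating(times_done: int, minutes_each: float) -> bool:
    """Recommend a new agent once a workflow has been done 2+ times
    and takes 30+ minutes each time."""
    return times_done >= 2 and minutes_each >= 30

def weekly_value_minutes(times_per_week: float, minutes_each: float,
                         automated_minutes: float) -> float:
    """Rough value calculation: minutes saved per week if automated."""
    return times_per_week * (minutes_each - automated_minutes)

# CI debugging: 3 sessions so far, ~45 min each, ~10 min if automated
recommend = worth_automating(times_done=3, minutes_each=45)
saved = weekly_value_minutes(times_per_week=3, minutes_each=45,
                             automated_minutes=10)
```

A quarterly 60-minute task fails the frequency test, matching the BAD/GOOD contrast earlier in the document.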
Track effectiveness of knowledge extraction:
This skill should grow based on:
Document learnings in ~/.amplihack/.claude/context/DISCOVERIES.md.
Weekly Installs: 145
Repository: github.com/rysweet/amplihack
GitHub Stars: 39
First Seen: Jan 23, 2026
Security Audits: Gen Agent Trust Hub: Pass; Socket: Pass; Snyk: Pass
Installed on: opencode (135), codex (130), claude-code (125), gemini-cli (125), cursor (125), github-copilot (124)