session-learning by rysweet/amplihack
Install with:

```shell
npx skills add https://github.com/rysweet/amplihack --skill session-learning
```
This skill provides cross-session learning by:
- Storing learnings in ~/.amplihack/.claude/data/learnings/
- Managing them through the /amplihack:learnings capability

Ruthlessly Simple Approach:
Learnings are stored in five categories:
| Category | File | Purpose |
|---|---|---|
| errors | errors.yaml | Error patterns and their solutions |
| workflows | workflows.yaml | Workflow insights and shortcuts |
| tools | tools.yaml | Tool usage patterns and gotchas |
| architecture | architecture.yaml | Design decisions and trade-offs |
| debugging | debugging.yaml | Debugging strategies and root causes |
Each learning file follows this structure:
```yaml
# .claude/data/learnings/errors.yaml
category: errors
last_updated: "2025-11-25T12:00:00Z"
learnings:
  - id: "err-001"
    created: "2025-11-25T12:00:00Z"
    keywords:
      - "import"
      - "module not found"
      - "circular dependency"
    summary: "Circular imports cause 'module not found' errors"
    insight: |
      When module A imports from module B and module B imports from module A,
      Python raises ImportError. Solution: Move shared code to a third module
      or use lazy imports.
    example: |
      # Bad: circular import
      # utils.py imports from models.py
      # models.py imports from utils.py

      # Good: extract shared code
      # shared.py has common functions
      # both utils.py and models.py import from shared.py
    confidence: 0.9
    times_used: 3
```
Automatic Usage (via hooks):
Manual Usage:
At session stop, scan for:
For each significant insight:
From session start prompt, extract:
For each learning category:
Rank candidates by `overlap_score * confidence * recency_weight`, then format the relevant learnings as context:
```markdown
## Past Learnings Relevant to This Task

### [Category]: [Summary]

[Insight with example if helpful]
```
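The combined score above can be sketched as follows. The exponential decay and the `half_life_days` default are illustrative assumptions, not part of the skill:

```python
from datetime import datetime, timezone

def recency_weight(created_iso: str, half_life_days: float = 30.0) -> float:
    """Exponential decay: a learning loses half its weight every half_life_days."""
    created = datetime.fromisoformat(created_iso.replace("Z", "+00:00"))
    age_days = (datetime.now(timezone.utc) - created).total_seconds() / 86400
    return 0.5 ** (age_days / half_life_days)

def rank_score(overlap_score: float, confidence: float, created_iso: str) -> float:
    """Combined ranking: overlap_score * confidence * recency_weight."""
    return overlap_score * confidence * recency_weight(created_iso)
```

Recent, high-confidence learnings with strong keyword overlap rise to the top; stale entries fade out gradually instead of being dropped.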
```
Session: Debugging circular import issue in Neo4j module
Duration: 45 minutes
Resolution: Moved shared types to separate file

Extracted Learning:
- Category: errors
- Keywords: [import, circular, neo4j, type]
- Summary: Circular imports in Neo4j types cause ImportError
- Insight: When Neo4jNode imports from connection.py, which imports
  Node types, move types to separate types.py module
- Example: types.py with dataclasses, connection.py imports from types.py
```
```
Session Start Prompt: "Fix the import error in the memory module"

Matched Learnings:
1. errors/err-001: "Circular imports cause 'module not found' errors" (85% match)
2. debugging/dbg-003: "Use `python -c` to isolate import issues" (60% match)

Injected Context:

## Past Learnings Relevant to This Task

### Errors: Circular imports cause 'module not found' errors

When module A imports from module B and B imports from A, Python raises
ImportError. Solution: Move shared code to a third module or use lazy imports.
```
---
```
User: Show me what I've learned about testing

Claude (using this skill):
1. Reads .claude/data/learnings/workflows.yaml
2. Filters learnings with keywords containing "test"
3. Displays formatted list with summaries and examples
```
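Steps 2 and 3 of that manual flow can be sketched over already-loaded learning dicts. Substring matching against keywords and the display format are assumptions here:

```python
def search_learnings(learnings: list, query: str) -> list:
    """Return learnings whose keywords contain the query as a substring."""
    q = query.lower()
    return [l for l in learnings
            if any(q in k.lower() for k in l.get("keywords", []))]

def format_learning(l: dict) -> str:
    """Format one learning as a short display entry."""
    lines = [f"- {l.get('summary', 'Insight')}"]
    if l.get("example"):
        lines.append(f"  Example: {l['example'].strip()}")
    return "\n".join(lines)
```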
Simple but effective matching:
```python
def calculate_relevance(task_keywords: set, learning_keywords: set) -> float:
    """Calculate relevance score between 0 and 1."""
    if not task_keywords or not learning_keywords:
        return 0.0
    # Count overlapping keywords
    overlap = task_keywords & learning_keywords
    # Score: overlap / min(task, learning) to not penalize short queries
    return len(overlap) / min(len(task_keywords), len(learning_keywords))
```
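The task keyword set fed into this scorer can come from a simple prompt tokenizer like the sketch below; the length cutoff and the stopword list are illustrative choices, not part of the skill:

```python
STOPWORDS = frozenset({"this", "that", "with", "from", "have", "what"})

def extract_keywords(prompt: str, min_len: int = 4) -> set:
    """Lowercase, strip punctuation, and keep words long enough to be meaningful."""
    words = (w.strip(".,!?\"'()") for w in prompt.lower().split())
    return {w for w in words if len(w) >= min_len and w not in STOPWORDS}
```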
The stop hook can call this skill to extract learnings:
The session start hook can inject relevant learnings:
Command interface for learning management:
- /amplihack:learnings show [category] - Display learnings
- /amplihack:learnings search <query> - Search across all categories
- /amplihack:learnings add - Manually add a learning
- /amplihack:learnings stats - Show learning statistics

Extract a learning when:
Skip extraction when:
```
.claude/
  data/
    learnings/
      errors.yaml        # Error patterns and solutions
      workflows.yaml     # Workflow insights
      tools.yaml         # Tool usage patterns
      architecture.yaml  # Design decisions
      debugging.yaml     # Debugging strategies
      _stats.yaml        # Usage statistics (auto-generated)
```
| Feature | DISCOVERIES.md | PATTERNS.md | Session Learning |
|---|---|---|---|
| Format | Markdown | Markdown | YAML |
| Audience | Humans | Humans | Agents + Humans |
| Storage | Single file | Single file | Per-category files |
| Matching | Manual read | Manual read | Keyword-based auto |
| Injection | Manual | Manual | Automatic |
| Scope | Major discoveries | Proven patterns | Any useful insight |
Complementary Use:
If a learning file becomes corrupted or invalid:
```python
import yaml
from pathlib import Path

def safe_load_learnings(filepath: Path) -> dict:
    """Load learnings with graceful error handling."""
    try:
        content = filepath.read_text()
        data = yaml.safe_load(content)
        if not isinstance(data, dict) or "learnings" not in data:
            print(f"Warning: Invalid structure in {filepath}, using empty learnings")
            return {"category": filepath.stem, "learnings": []}
        return data
    except yaml.YAMLError as e:
        print(f"Warning: YAML error in {filepath}: {e}")
        # Create backup before recovery
        backup = filepath.with_suffix(".yaml.bak")
        filepath.rename(backup)
        print(f"Backed up corrupted file to {backup}")
        return {"category": filepath.stem, "learnings": []}
    except Exception as e:
        print(f"Warning: Could not read {filepath}: {e}")
        return {"category": filepath.stem, "learnings": []}
```
If the learnings directory doesn't exist, create it:
```python
def ensure_learnings_directory():
    """Create learnings directory and empty files if missing."""
    learnings_dir = Path(".claude/data/learnings")
    learnings_dir.mkdir(parents=True, exist_ok=True)
    categories = ["errors", "workflows", "tools", "architecture", "debugging"]
    for cat in categories:
        filepath = learnings_dir / f"{cat}.yaml"
        if not filepath.exists():
            filepath.write_text(f"category: {cat}\nlearnings: []\n")
```
The learning system follows fail-safe design:
Add learning extraction to your stop hook:
```python
# .claude/tools/amplihack/hooks/stop_hook.py
import asyncio
import threading

async def extract_session_learnings(transcript: str, session_id: str):
    """Extract learnings from session transcript at stop."""
    # Only extract if session was substantive (not just a quick question)
    if len(transcript) < 1000:
        return

    # Use Claude to extract insights (simplified example).
    # The transcript is truncated to stay within token limits.
    extraction_prompt = f"""
Analyze this session transcript and extract any reusable learnings.

Categories:
- errors: Error patterns and solutions
- workflows: Process improvements
- tools: Tool usage insights
- architecture: Design decisions
- debugging: Debug strategies

For each learning, provide:
- category (one of the above)
- keywords (3-5 searchable terms)
- summary (one sentence)
- insight (detailed explanation)
- example (code if applicable)
- confidence (0.5-1.0)

Transcript:
{transcript[:5000]}
"""
    # ... call Claude to extract ...
    # ... parse response and add to appropriate YAML files ...

def on_stop(session_data: dict):
    """Stop hook entry point."""
    # ... other stop hook logic ...

    # Extract learnings without blocking the hook. asyncio.create_task would
    # fail here because no event loop is running in a synchronous hook, so run
    # the coroutine in a daemon thread instead.
    try:
        threading.Thread(
            target=lambda: asyncio.run(
                extract_session_learnings(
                    session_data.get("transcript", ""),
                    session_data.get("session_id", ""),
                )
            ),
            daemon=True,
        ).start()
    except Exception as e:
        print(f"Learning extraction failed (non-blocking): {e}")
```
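The elided "add to appropriate YAML files" step might look like the sketch below. The id scheme (first three letters of the category plus a counter) is an assumption to match the `err-001` example earlier, not a documented convention:

```python
import yaml
from datetime import datetime, timezone
from pathlib import Path

def append_learning(learnings_dir: Path, category: str, learning: dict) -> None:
    """Append one extracted learning to its category file, creating it if missing."""
    filepath = learnings_dir / f"{category}.yaml"
    if filepath.exists():
        data = yaml.safe_load(filepath.read_text()) or {}
    else:
        data = {"category": category, "learnings": []}
    entries = data.setdefault("learnings", [])
    # Hypothetical id scheme: "err-001", "err-002", ...
    learning.setdefault("id", f"{category[:3]}-{len(entries) + 1:03d}")
    learning.setdefault("created", datetime.now(timezone.utc).isoformat())
    entries.append(learning)
    data["last_updated"] = datetime.now(timezone.utc).isoformat()
    filepath.write_text(yaml.safe_dump(data, sort_keys=False))
```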
Add learning injection to your session start hook:
```python
# .claude/tools/amplihack/hooks/session_start_hook.py
import yaml
from pathlib import Path

def inject_relevant_learnings(initial_prompt: str) -> str:
    """Find and format relevant learnings for injection."""
    learnings_dir = Path(".claude/data/learnings")
    if not learnings_dir.exists():
        return ""

    # Extract keywords from prompt
    prompt_lower = initial_prompt.lower()
    task_keywords = set()
    for word in prompt_lower.split():
        if len(word) > 3:  # Skip short words
            task_keywords.add(word.strip(".,!?"))

    # Find matching learnings
    matches = []
    for yaml_file in learnings_dir.glob("*.yaml"):
        if yaml_file.name.startswith("_"):
            continue  # Skip _stats.yaml
        try:
            data = yaml.safe_load(yaml_file.read_text())
            for learning in data.get("learnings", []):
                learning_keywords = set(k.lower() for k in learning.get("keywords", []))
                overlap = task_keywords & learning_keywords
                if overlap:
                    score = len(overlap) * learning.get("confidence", 0.5)
                    matches.append((score, learning))
        except Exception:
            continue

    # Return top 3 matches
    matches.sort(key=lambda x: x[0], reverse=True)
    if not matches:
        return ""

    context = "## Past Learnings Relevant to This Task\n\n"
    for score, learning in matches[:3]:
        context += f"### {learning.get('summary', 'Insight')}\n"
        context += f"{learning.get('insight', '')}\n\n"
    return context

def on_session_start(session_data: dict) -> dict:
    """Session start hook entry point."""
    initial_prompt = session_data.get("prompt", "")

    # Inject relevant learnings
    try:
        learning_context = inject_relevant_learnings(initial_prompt)
        if learning_context:
            session_data["injected_context"] = learning_context
    except Exception as e:
        print(f"Learning injection failed (non-blocking): {e}")

    return session_data
```
If needed, consider:
Track effectiveness:
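Effectiveness tracking could be a simple counter map persisted to `_stats.yaml`. The `category/id` key scheme and the pruning threshold below are assumptions, not part of the skill:

```python
def record_usage(stats: dict, category: str, learning_id: str) -> dict:
    """Increment the injection counter for one learning."""
    entry = stats.setdefault(f"{category}/{learning_id}", {"times_used": 0})
    entry["times_used"] += 1
    return stats

def pruning_candidates(stats: dict, min_uses: int = 1) -> list:
    """Learnings that are never (or rarely) matched are candidates for pruning."""
    return sorted(k for k, v in stats.items() if v.get("times_used", 0) < min_uses)
```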
Weekly Installs: 100
Repository: https://github.com/rysweet/amplihack
GitHub Stars: 45
First Seen: Jan 23, 2026
Security Audits: Gen Agent Trust Hub: Pass | Socket: Pass | Snyk: Fail
Installed on: opencode (89), codex (85), claude-code (82), cursor (82), gemini-cli (81), github-copilot (80)