learning-quality by closedloop-ai/claude-plugins
npx skills add https://github.com/closedloop-ai/claude-plugins --skill learning-quality
This skill defines when and how to capture learnings during ClosedLoop runs.
Before writing a learning, run through this decision tree in order:
1. Did I make a mistake and correct it, or discover something non-obvious?
NO → Don't capture (no learnings event)
YES → Continue
2. Is it a config value? (specific URL, file path, project command, type name)
YES → Write to CLAUDE.md (project scope), not org-patterns
NO → Continue
3. Is it tied to a single feature/bug with no generalizable principle?
YES → SKIP
NO → Continue
4. Will it still be true in 6 months?
NO → SKIP (or generalize the principle)
YES → Continue
5. Does it already exist in org-patterns.toon or CLAUDE.md?
YES → SKIP (or note "Supersedes: [old pattern]" if correcting)
NO → CAPTURE IT
Note: Even "basic" knowledge is worth capturing if you actually made that mistake. These learnings exist because LLM agents struggle with certain patterns that humans might consider obvious. The goal is to help future agent runs avoid the same mistakes.
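The five-step tree above can be sketched as a triage function. This is a minimal illustration, not part of the skill itself; the `Learning` fields and return labels are hypothetical names chosen for clarity:

```python
from dataclasses import dataclass

@dataclass
class Learning:
    """Hypothetical container for a candidate learning (illustration only)."""
    is_mistake_or_nonobvious: bool  # step 1: corrected a mistake / found something non-obvious
    is_config_value: bool           # step 2: specific URL, path, command, type name
    is_single_feature_only: bool    # step 3: tied to one feature/bug, no general principle
    true_in_six_months: bool        # step 4: durable, not temporary
    already_captured: bool          # step 5: exists in org-patterns.toon or CLAUDE.md

def triage(l: Learning) -> str:
    if not l.is_mistake_or_nonobvious:
        return "no-capture"   # step 1: no learnings event
    if l.is_config_value:
        return "claude-md"    # step 2: project config, not a pattern
    if l.is_single_feature_only:
        return "skip"         # step 3: not reusable
    if not l.true_in_six_months:
        return "skip"         # step 4: temporary (or generalize the principle first)
    if l.already_captured:
        return "skip"         # step 5: duplicate (note "Supersedes: ..." if correcting)
    return "capture"
```

For example, a non-obvious, durable, novel pattern (`triage(Learning(True, False, False, True, False))`) reaches `"capture"`; flipping any earlier check short-circuits the walk, matching the "in order" requirement above.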
SKIP if ANY of these apply:
| Criterion | Example | Why |
|---|---|---|
| Specific URL/path/config | "Use https://github.com/org/repo" | Config, not principle → CLAUDE.md |
| Project-specific names | "Use MyProjectType not OtherType" | Belongs in CLAUDE.md |
| One-off bug fix | "Field X was null in row 123" | Not reusable |
| Already captured | (check pending/, CLAUDE.md, org-patterns.toon) | Avoid duplicates |
Note: Even patterns that seem like "basic knowledge" are worth capturing if you actually made that mistake. These learnings exist because LLM agents struggle with certain patterns. The goal is to help future agent runs avoid the same mistakes.
When you have a learning worth capturing:
| Scope | Destination | Heuristic |
|---|---|---|
| Project | CLAUDE.md | Mentions specific file paths, package names, or project-unique features |
| Global | org-patterns.toon | Applies to any project using the same language/framework/tool |
Extract the underlying principle, not the specific instance.
Test: Would this help someone working on a different feature?
Formula: [When/Where] + [specific action] + [context]
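As a hedged illustration of the formula, a takeaway might be assembled from its three parts and sanity-checked against the skill's 20-character minimum for `pattern_to_remember`. The helper and the example content below are invented for demonstration:

```python
def make_pattern(when_where: str, action: str, context: str) -> str:
    """Combine [When/Where] + [specific action] + [context] into one takeaway."""
    pattern = f"{when_where}, {action} ({context})"
    # Enforce the skill's minimum length for pattern_to_remember
    if len(pattern) < 20:
        raise ValueError("pattern_to_remember must be at least 20 characters")
    return pattern

# Invented example: a one-off null-field fix generalized into a reusable rule
p = make_pattern(
    "When reading optional schema fields",
    "use a default-returning accessor instead of direct indexing",
    "older records omit optional keys",
)
```

Note how the example passes the "different feature" test: it says nothing about the specific field or row that triggered the fix.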
Before writing, check:
- $CLOSEDLOOP_WORKDIR/.learnings/pending/ for learnings already captured in this run
- ~/.closedloop-ai/learnings/org-patterns.toon
If a contradiction exists (the existing pattern says "do X", the new one says "don't do X"), note "Supersedes: [old pattern]" in the new entry.
Output location:
$CLOSEDLOOP_WORKDIR/.learnings/pending/{agent-name}-$CLOSEDLOOP_AGENT_ID.json
Format:
{
"what_happened": "Brief description of what occurred",
"why": "Root cause or reason this matters",
"fix_applied": "What you did to resolve it (if applicable)",
"pattern_to_remember": "The actionable takeaway (minimum 20 chars)",
"applies_to": ["agent-name"],
"context": {
"file": "relative/path/to/file.ext",
"line": 42,
"function": "function_name"
}
}
Use ["*"] for applies_to if the pattern applies to all agents.
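Putting the output location and format together, a run might emit its learning file as follows. This is a sketch: the agent name, the env-var fallbacks, and the learning content are assumptions for illustration, not prescribed by the skill:

```python
import json
import os
from pathlib import Path

# Resolve the output location from the env vars named above
# (the "." and "local" fallbacks are assumptions for this sketch)
workdir = os.environ.get("CLOSEDLOOP_WORKDIR", ".")
agent_name = "plan-writer"  # hypothetical agent name
agent_id = os.environ.get("CLOSEDLOOP_AGENT_ID", "local")

# Invented example content following the required format
learning = {
    "what_happened": "Assumed an optional field was always present and hit a KeyError",
    "why": "The schema marks the field optional; older records omit it",
    "fix_applied": "Switched to an accessor with an explicit default",
    "pattern_to_remember": "When reading optional schema fields, use a default-returning accessor instead of direct indexing",
    "applies_to": ["*"],  # pattern applies to all agents
    "context": {"file": "relative/path/to/file.ext", "line": 42, "function": "function_name"},
}

out = Path(workdir) / ".learnings" / "pending" / f"{agent_name}-{agent_id}.json"
out.parent.mkdir(parents=True, exist_ok=True)
out.write_text(json.dumps(learning, indent=2))
```

Writing one JSON object per agent run keeps pending learnings independent, so a later review step can accept or discard each file on its own.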
If you completed work without learnings to capture:
{
"no_learnings": true,
"reason": "Task was straightforward with no new patterns discovered"
}
This is valid—not every task produces learnings.
| Category | Examples |
|---|---|
| Common knowledge | TS strict mode, git basics, debugging 101 |
| Config values | URLs, file paths, project commands |
| Implementation details | Query order, field names, styling choices |
| Temporary | Bug workarounds, feature-specific decisions |
Mentions specific paths/packages/features? → Project (CLAUDE.md)
Applies to any project with same tech? → Global (org-patterns.toon)
Your agent definition may reference a domain-specific learning prompt (e.g., prompts/plan-writer-learning.md). If so, read it before capturing learnings.
Weekly Installs: 1
GitHub Stars: 71
First Seen: Today
Security Audits: Gen Agent Trust Hub: Pass, Socket: Pass, Snyk: Pass
Installed on: windsurf (1), amp (1), cline (1), openclaw (1), opencode (1), cursor (1)