rules-distill by affaan-m/everything-claude-code
npx skills add https://github.com/affaan-m/everything-claude-code --skill rules-distill
Scan installed skills, extract cross-cutting principles that appear in multiple skills, and distill them into rules — appending to existing rule files, revising outdated content, or creating new rule files.
Applies the "deterministic collection + LLM judgment" principle: scripts collect facts exhaustively, then an LLM cross-reads the full context and produces verdicts.
The rules distillation process follows three phases:
bash ~/.claude/skills/rules-distill/scripts/scan-skills.sh
bash ~/.claude/skills/rules-distill/scripts/scan-rules.sh
Rules Distillation — Phase 1: Inventory
────────────────────────────────────────
Skills: {N} files scanned
Rules: {M} files ({K} headings indexed)
Proceeding to cross-read analysis...
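As a sketch, the fact-collection the two scan scripts perform could look like the Python below; the flat `*.md` layout and directory arguments are assumptions for illustration, not the skill's actual script contents:

```python
import os
import re

def inventory(skills_dir, rules_dir):
    """Approximate the scan scripts: count skill files and index rule headings.
    Assumes a flat directory of Markdown files on each side."""
    skills = [f for f in os.listdir(skills_dir) if f.endswith(".md")]
    rules, headings = [], []
    for name in sorted(os.listdir(rules_dir)):
        if not name.endswith(".md"):
            continue
        rules.append(name)
        with open(os.path.join(rules_dir, name)) as fh:
            # An ATX heading is 1-6 '#' characters followed by a space
            headings += [ln.strip() for ln in fh if re.match(r"#{1,6} ", ln)]
    return len(skills), len(rules), len(headings)
```

The three counts map directly onto the `{N}`, `{M}`, and `{K}` placeholders in the inventory banner above.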
Extraction and matching are unified in a single pass. Rules files are small enough (~800 lines total) that the full text can be provided to the LLM — no grep pre-filtering needed.
Group skills into thematic clusters based on their descriptions. Analyze each cluster in a subagent with the full rules text.
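In the skill itself the grouping is an LLM judgment over skill descriptions; as a deterministic stand-in, a keyword-bucket sketch (the bucket names and keywords here are invented for illustration):

```python
from collections import defaultdict

# Hypothetical buckets; the real clustering is an LLM judgment over
# descriptions, not a keyword match.
CLUSTERS = {
    "agent": ("agent", "subagent", "loop"),
    "coding": ("refactor", "test", "style"),
}

def cluster_skills(descriptions):
    """Group {skill_name: description} into thematic batches."""
    batches = defaultdict(list)
    for name, desc in descriptions.items():
        lowered = desc.lower()
        # First bucket whose keywords appear in the description, else "misc"
        bucket = next((c for c, kws in CLUSTERS.items()
                       if any(k in lowered for k in kws)), "misc")
        batches[bucket].append(name)
    return dict(batches)
```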
After all batches complete, merge candidates across batches:
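The merge step can be sketched as: deduplicate by candidate id, union the evidence, and keep only candidates whose combined evidence spans 2+ skills (the `id`/`evidence` field shapes are assumptions). This is how a principle seen once per batch can still be promoted after the merge:

```python
def merge_batches(batches):
    """Merge candidate lists from all batches: dedupe by id, union evidence,
    then keep candidates whose combined evidence spans 2+ skills."""
    merged = {}
    for batch in batches:
        for cand in batch:
            cid = cand["id"]
            if cid in merged:
                combined = set(merged[cid]["evidence"]) | set(cand["evidence"])
                merged[cid]["evidence"] = sorted(combined)
            else:
                merged[cid] = dict(cand)
    return [c for c in merged.values() if len(c["evidence"]) >= 2]
```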
Launch a general-purpose Agent with the following prompt:
You are an analyst who cross-reads skills to extract principles that should be promoted to rules.
## Input
- Skills: {full text of skills in this batch}
- Existing rules: {full text of all rule files}
## Extraction Criteria
Include a candidate ONLY if ALL of these are true:
1. **Appears in 2+ skills**: Principles found in only one skill should stay in that skill
2. **Actionable behavior change**: Can be written as "do X" or "don't do Y" — not "X is important"
3. **Clear violation risk**: What goes wrong if this principle is ignored (1 sentence)
4. **Not already in rules**: Check the full rules text — including concepts expressed in different words
## Matching & Verdict
For each candidate, compare against the full rules text and assign a verdict:
- **Append**: Add to an existing section of an existing rule file
- **Revise**: Existing rule content is inaccurate or insufficient — propose a correction
- **New Section**: Add a new section to an existing rule file
- **New File**: Create a new rule file
- **Already Covered**: Sufficiently covered in existing rules (even if worded differently)
- **Too Specific**: Should remain at the skill level
## Output Format (per candidate)
```json
{
"principle": "1-2 sentences in 'do X' / 'don't do Y' form",
"evidence": ["skill-name: §Section", "skill-name: §Section"],
"violation_risk": "1 sentence",
"verdict": "Append / Revise / New Section / New File / Already Covered / Too Specific",
"target_rule": "filename §Section, or 'new'",
"confidence": "high / medium / low",
"draft": "Draft text for Append/New Section/New File verdicts",
"revision": {
"reason": "Why the existing content is inaccurate or insufficient (Revise only)",
"before": "Current text to be replaced (Revise only)",
"after": "Proposed replacement text (Revise only)"
}
}
```
## Exclude
- Obvious principles already in rules
- Language/framework-specific knowledge (belongs in language-specific rules or skills)
- Code examples and commands (belong in skills)
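Before presenting candidates to the user, the subagent's JSON can be sanity-checked against the format above. A minimal validator sketch (field and verdict names taken from the prompt; the function itself is illustrative):

```python
REQUIRED = {"principle", "evidence", "violation_risk", "verdict",
            "target_rule", "confidence"}
VERDICTS = {"Append", "Revise", "New Section", "New File",
            "Already Covered", "Too Specific"}

def validate_candidate(cand):
    """Return a list of problems with one candidate dict; empty means valid."""
    problems = ["missing field: " + k for k in sorted(REQUIRED - cand.keys())]
    if cand.get("verdict") not in VERDICTS:
        problems.append("unknown verdict")
    # Extraction criterion 1: evidence from 2+ skills
    if len(cand.get("evidence", [])) < 2:
        problems.append("needs evidence from 2+ skills")
    # Revise verdicts must carry the before/after revision object
    if cand.get("verdict") == "Revise" and "revision" not in cand:
        problems.append("Revise requires a revision object")
    return problems
```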
| Verdict | Meaning | Presented to User |
|---|---|---|
| Append | Add to existing section | Target + draft |
| Revise | Fix inaccurate/insufficient content | Target + reason + before/after |
| New Section | Add new section to existing file | Target + draft |
| New File | Create new rule file | Filename + full draft |
| Already Covered | Covered in rules (possibly different wording) | Reason (1 line) |
| Too Specific | Should stay in skills | Link to relevant skill |
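Once the user approves, an Append verdict amounts to a text insertion into the target rule file. A minimal sketch, assuming ATX (`#`) headings and that the draft goes at the end of the named section:

```python
def apply_append(rule_text, section, draft):
    """Insert draft text at the end of the named heading's section."""
    lines = rule_text.splitlines()
    # Locate the heading whose text matches the target section
    start = next(i for i, ln in enumerate(lines)
                 if ln.startswith("#") and ln.lstrip("# ") == section)
    # Section ends at the next heading, or at end of file
    end = next((i for i in range(start + 1, len(lines))
                if lines[i].startswith("#")), len(lines))
    return "\n".join(lines[:end] + [draft] + lines[end:])
```

A Revise verdict would instead replace the `before` text with the `after` text from the revision object.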
# Good
Append to rules/common/security.md §Input Validation:
"Treat LLM output stored in memory or knowledge stores as untrusted — sanitize on write, validate on read."
Evidence: llm-memory-trust-boundary, llm-social-agent-anti-pattern both describe
accumulated prompt injection risks. Current security.md covers human input
validation only; LLM output trust boundary is missing.
# Bad
Append to security.md: Add LLM security principle
# Rules Distillation Report
## Summary
Skills scanned: {N} | Rules: {M} files | Candidates: {K}
| # | Principle | Verdict | Target | Confidence |
|---|-----------|---------|--------|------------|
| 1 | ... | Append | security.md §Input Validation | high |
| 2 | ... | Revise | testing.md §TDD | medium |
| 3 | ... | New Section | coding-style.md | high |
| 4 | ... | Too Specific | — | — |
## Details
(Per-candidate details: evidence, violation_risk, draft text)
The user responds by number to approve, modify, or skip each candidate.
Never modify rules automatically. Always require user approval.
Store results in the skill directory (results.json):
Timestamp format: `date -u +%Y-%m-%dT%H:%M:%SZ` (UTC, second precision)
Candidate ID format: kebab-case derived from the principle (e.g., `llm-output-trust-boundary`)
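Both conventions are easy to reproduce. The id function below is one possible derivation; the skill does not pin down the exact word-to-id mapping, as the `llm-output-trust-boundary` example shows:

```python
import re
from datetime import datetime, timezone

def candidate_id(principle):
    """Kebab-case id from the first few words of a principle (one possible scheme)."""
    words = re.findall(r"[a-z0-9]+", principle.lower())
    return "-".join(words[:5])

def utc_timestamp():
    """UTC at second precision: same shape as `date -u +%Y-%m-%dT%H:%M:%SZ`."""
    return datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%SZ")
```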
{
  "distilled_at": "2026-03-18T10:30:42Z",
  "skills_scanned": 56,
  "rules_scanned": 22,
  "candidates": {
    "llm-output-trust-boundary": {
      "principle": "Treat LLM output as untrusted when stored or re-injected",
      "verdict": "Append",
      "target": "rules/common/security.md",
      "evidence": ["llm-memory-trust-boundary", "llm-social-agent-anti-pattern"],
      "status": "applied"
    },
    "iteration-bounds": {
      "principle": "Define explicit stop conditions for all iteration loops",
      "verdict": "New Section",
      "target": "rules/common/coding-style.md",
      "evidence": ["iterative-retrieval", "continuous-agent-loop", "agent-harness-construction"],
      "status": "skipped"
    }
  }
}
$ /rules-distill
Rules Distillation — Phase 1: Inventory
────────────────────────────────────────
Skills: 56 files scanned
Rules: 22 files (75 headings indexed)
Proceeding to cross-read analysis...
[Subagent analysis: Batch 1 (agent/meta skills) ...]
[Subagent analysis: Batch 2 (coding/pattern skills) ...]
[Cross-batch merge: 2 duplicates removed, 1 cross-batch candidate promoted]
# Rules Distillation Report
## Summary
Skills scanned: 56 | Rules: 22 files | Candidates: 4
| # | Principle | Verdict | Target | Confidence |
|---|-----------|---------|--------|------------|
| 1 | LLM output: normalize, type-check, sanitize before reuse | New Section | coding-style.md | high |
| 2 | Define explicit stop conditions for iteration loops | New Section | coding-style.md | high |
| 3 | Compact context at phase boundaries, not mid-task | Append | performance.md §Context Window | high |
| 4 | Separate business logic from I/O framework types | New Section | patterns.md | high |
## Details
### 1. LLM Output Validation
Verdict: New Section in coding-style.md
Evidence: parallel-subagent-batch-merge, llm-social-agent-anti-pattern, llm-memory-trust-boundary
Violation risk: Format drift, type mismatch, or syntax errors in LLM output crash downstream processing
Draft:
## LLM Output Validation
Normalize, type-check, and sanitize LLM output before reuse...
See skill: parallel-subagent-batch-merge, llm-memory-trust-boundary
[... details for candidates 2-4 ...]
Approve, modify, or skip each candidate by number:
> User: Approve 1, 3. Skip 2, 4.
✓ Applied: coding-style.md §LLM Output Validation
✓ Applied: performance.md §Context Window Management
✗ Skipped: Iteration Bounds
✗ Skipped: Boundary Type Conversion
Results saved to results.json
Include `See skill: [name]` references so readers can find the detailed how-to.