skill-architect by erichowens/some_claude_skills

npx skills add https://github.com/erichowens/some_claude_skills --skill skill-architect

The unified authority for creating expert-level Agent Skills. It encodes the knowledge that separates a skill that merely exists from one that activates precisely, teaches efficiently, and makes users productive immediately.
Great skills are progressive disclosure machines. They encode real domain expertise (shibboleths), not surface instructions. They follow a three-layer architecture: lightweight metadata for discovery, a lean SKILL.md for the core process, and reference files for deep dives, loaded only on demand.
✅ Use for:
❌ NOT for:
For existing skills, apply in priority order:
- the [What] [When] [Keywords]. NOT for [Exclusions] description formula
- moving deep content into /references
Skills use three-layer loading. The runtime scans metadata at startup, loads SKILL.md on activation, and pulls reference files only when the agent decides it needs them.

| Layer | Content | Size | Loading |
|---|---|---|---|
| 1. Metadata | name + description in frontmatter | ~100 tokens | Always in context (catalog scan) |
| 2. SKILL.md | Core process, decision trees, brief anti-patterns | <5k tokens | On skill activation |
| 3. References | Deep dives, examples, templates, specs | Unlimited | On demand, per file, only when relevant |
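As a sketch, the three layers map to three progressively heavier reads. The toy loader below illustrates the loading model only; it is not Claude Code's actual implementation, and the directory layout and minimal frontmatter parsing are simplifying assumptions.

```python
from pathlib import Path

# Toy three-layer loader (illustrative only; not Claude Code's implementation).
# Assumes a layout of <skills_dir>/<skill-name>/SKILL.md + references/.

def scan_metadata(skills_dir: str) -> dict[str, str]:
    """Layer 1: read only the name/description frontmatter of each skill."""
    catalog = {}
    for skill_md in Path(skills_dir).glob("*/SKILL.md"):
        text = skill_md.read_text(encoding="utf-8")
        if not text.startswith("---"):
            continue
        frontmatter = text.split("---", 2)[1]
        meta = {}
        for line in frontmatter.strip().splitlines():
            key, sep, value = line.partition(":")
            if sep:
                meta[key.strip()] = value.strip()
        if "name" in meta and "description" in meta:
            catalog[meta["name"]] = meta["description"]
    return catalog

def activate(skills_dir: str, name: str) -> str:
    """Layer 2: load the full SKILL.md only once the skill is selected."""
    return (Path(skills_dir) / name / "SKILL.md").read_text(encoding="utf-8")

def load_reference(skills_dir: str, name: str, ref: str) -> str:
    """Layer 3: pull one reference file, on demand, only when relevant."""
    return (Path(skills_dir) / name / "references" / ref).read_text(encoding="utf-8")
```

Only `scan_metadata` runs for every skill at startup; the heavier reads happen per activation, which is what keeps the always-in-context cost near the ~100-token metadata layer.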
Critical rules: keep deep-dive content in /references.

Required keys:

| Key | Purpose | Example |
|---|---|---|
| name | Lowercase-hyphenated identifier | react-server-components |
| description | Activation trigger: [What] [When] [Keywords]. NOT for [Exclusions] | See the description formula |
Optional keys:

| Key | Purpose | Example |
|---|---|---|
| allowed-tools | Comma-separated tool names (least privilege) | Read,Write,Grep |
| argument-hint | Hint shown in autocomplete for expected arguments | "[path] [format]" |
| license | License identifier | MIT |
| disable-model-invocation | If true, only user-triggered via /skill-name | true |
| user-invocable | Controls whether the skill appears in UI menus | true |
| context | Execution context; fork runs the skill in an isolated subagent | fork |
| agent | Which subagent type to use when context: fork | code-reviewer |
| model | Model override while the skill is active | sonnet |
| hooks | Hooks scoped to this skill's lifecycle | See hooks reference |
| metadata | Arbitrary key-value map for tooling/dashboards | author: your-org |
Custom keys like category, tags, and version are ignored by Claude Code but safe to include for your own tooling (gallery websites, documentation generators, dashboards). They don't conflict with runtime parsing.
# ❌ These look like valid keys but aren't — use the correct alternatives
tools: Read,Write        # Use 'allowed-tools' instead
integrates_with: [...]   # Use SKILL.md body text instead
triggers: [...]          # Use 'description' keywords instead
outputs: [...]           # Use a SKILL.md Output Format section instead
coordinates_with: [...]  # Use SKILL.md body text instead
python_dependencies: [...] # Use SKILL.md body text instead
Pattern: [What it does] [When to use] [Trigger keywords]. NOT for [Exclusions].
The description is the single most important line for activation. Claude's runtime scans descriptions to decide which skill to load. A weak description means zero activations or constant false positives.
| Problem | Bad | Good |
|---|---|---|
| Too vague | "Helps with images" | "CLIP semantic search for image-text matching and zero-shot classification. NOT for counting, spatial reasoning, or generation." |
| No exclusions | "Reviews code changes" | "Reviews TypeScript/React diffs and PRs for correctness. NOT for writing new features." |
| Mini-manual | "Researches, then outlines, then drafts..." | "Structured research producing 1-3 page synthesis reports. NOT for quick factual questions." |
| Catch-all | "Helps with product management" | "Writes and refines product requirement documents (PRDs). NOT for strategy decks." |
| Name mismatch | name: db-migration / desc: "writes marketing emails" | name: db-migration / desc: "Plans database schema migrations with rollback strategies." |
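The formula is mechanical enough to lint. A hypothetical checker (not the repo's validate_skill.py; the thresholds here are illustrative assumptions) might flag the failure modes from the table:

```python
# Hypothetical description lint, modeled on the formula above.
# The thresholds are illustrative assumptions, not documented rules.

def lint_description(desc: str) -> list[str]:
    problems = []
    if len(desc.split()) < 6:
        problems.append("too vague: add what/when/trigger keywords")
    if "NOT for" not in desc:
        problems.append('missing exclusions: append "NOT for ..."')
    if len(desc) > 300:
        problems.append("mini-manual: move process detail into SKILL.md")
    return problems

lint_description("Helps with images")        # flagged: vague, no exclusions
lint_description(
    "CLIP semantic search for image-text matching and zero-shot "
    "classification. NOT for counting, spatial reasoning, or generation."
)                                            # passes: returns []
```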
Full guide with more examples: see references/description-guide.md
---
name: your-skill-name
description: [What] [When] [Keywords]. NOT for [Exclusions].
allowed-tools: Read,Write
---
# Skill Name
[One sentence purpose]
## When to Use
✅ Use for: [A, B, C with specific trigger keywords]
❌ NOT for: [D, E, F — explicit boundaries]
## Core Process
[Mermaid diagrams — 23 types available. See visual-artifacts.md for full catalog]
## Anti-Patterns
### [Pattern Name]
**Novice**: [Wrong assumption]
**Expert**: [Why it's wrong + correct approach]
**Timeline**: [When this changed, if temporal]
## References
- `references/guide.md` — Consult when [specific situation]
- `references/examples.md` — Consult for [worked examples of X]
flowchart LR
S1[1. Gather Examples] --> S2[2. Plan Contents]
S2 --> S3[3. Initialize]
S3 --> S4[4. Write Skill]
S4 --> S5[5. Validate]
S5 --> S6{Errors?}
S6 -->|Yes| S4
S6 -->|No| S7[6. Ship & Iterate]
Collect 3-5 real queries that should trigger this skill, and 3-5 that should NOT.
For each example, identify which scripts, reference files, or assets would prevent re-work. Also identify shibboleths: domain algorithms, temporal knowledge, framework evolution, common pitfalls.
scripts/init_skill.py <skill-name> --path <output-directory>
For existing skills, skip to Step 4.
Order of implementation:
- Scripts (scripts/) — working code, not templates
- References (references/) — domain knowledge, schemas, guides

Write in imperative form: "To accomplish X, do Y," not "You should do X."
Answer these questions in SKILL.md:
(see references/visual-artifacts.md)

python scripts/validate_skill.py <path>
python scripts/check_self_contained.py <path>
Fix ERRORS → WARNINGS → SUGGESTIONS.
After real-world use: note where users struggle, improve SKILL.md and resources, and update CHANGELOG.md.
When skills will be loaded by subagents (not just via direct user invocation), apply these patterns:
Teach the subagent to treat each skill as a mini-protocol:
The subagent's prompt should have four sections:
Full templates and orchestration patterns: see references/subagent-design.md
Skills that include Mermaid diagrams serve two audiences at once. For humans, diagrams render as visual flowcharts, state machines, and timelines — instantly parseable. For agents, Mermaid is a text-based graph DSL — A -->|Yes| B is an explicit, unambiguous edge that is actually easier to reason about than the equivalent prose. The agent reads the text; the human sees the picture. Both win.
Rule: If a skill describes a process, decision tree, architecture, state machine, timeline, or data relationship, include a Mermaid diagram. Use raw mermaid code blocks directly in SKILL.md — not wrapped in outer markdown fences.
Mermaid supports 23 diagram types. Use the most specific one for your content — a state diagram for lifecycles is better than a flowchart with "go back" arrows.
| Skill Content | Diagram Type | Syntax |
|---|---|---|
| Decision trees / troubleshooting | Flowchart | flowchart TD |
| API/agent communication protocols | Sequence | sequenceDiagram |
| Lifecycle / status transitions | State | stateDiagram-v2 |
| Data models / schemas | ER | erDiagram |
| Type hierarchies / interfaces | Class | classDiagram |
| Temporal knowledge / evolution | Timeline | timeline |
| Domain taxonomy / concept maps | Mindmap | mindmap |
| Priority matrices (2-axis) | Quadrant | quadrantChart |
| Component layout / blocks | Block | block-beta |
| Infrastructure / cloud topology | Architecture | architecture-beta |
| Multi-level system views (C4) | C4 | C4Context / C4Container / C4Component |
| Project phases / rollout plans | Gantt | gantt |
| Git branching / release strategy | Git Graph | gitGraph |
| User experience flows | Journey | journey |
| Quantity flows / budgets | Sankey | sankey-beta |
| Metrics / benchmarks | XY Chart | xychart-beta |
| Proportional breakdowns | Pie | pie |
| Hierarchical size comparison | Treemap | treemap |
| Multi-axis capability comparison | Radar | radar |
| Task/status tracking | Kanban | kanban |
| Requirements traceability | Requirement | requirementDiagram |
| Network protocols / binary formats | Packet | packet-beta |
| Sequence diagrams (code syntax) | ZenUML | zenuml (plugin) |
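For example, a skill's own activation lifecycle reads more naturally as a state diagram than as a flowchart with back-edges (an illustrative sketch, with assumed state names):

```mermaid
stateDiagram-v2
    [*] --> Discovered: metadata scanned at startup
    Discovered --> Activated: description matches the query
    Activated --> ReferenceLoaded: agent pulls a reference file
    ReferenceLoaded --> Activated
    Activated --> [*]: task complete
```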
Mermaid supports an optional --- frontmatter block for rendering customization (themes, colors, spacing). It is not required. Agents ignore it. Renderers apply sensible defaults without it. Only add it when you need specific visual styling for published documentation.
# 可选 —— 仅用于渲染定制
---
title: My Diagram
config:
  theme: neutral
  flowchart:
    curve: basis
---
Themes: default, dark, forest, neutral, base. Full config reference: https://mermaid.ai/open-source/config/configuration.html
Full diagram catalog with examples of all 16+ types: see references/visual-artifacts.md
Expert knowledge that separates novices from experts: the things LLMs get wrong because of outdated training data or cargo-culted patterns.
### Anti-Pattern: [Name]
**Novice**: "[Wrong assumption]"
**Expert**: [Why it's wrong, with evidence]
**Timeline**: [Date]: [Old way] → [Date]: [New way]
**LLM mistake**: [Why LLMs suggest the old pattern]
**Detection**: [How to spot this in code/config]
Full catalog with case studies: see references/antipatterns.md
Skills are one of seven Claude extension types: Skills (domain knowledge), Plugins (packaged bundles for distribution), MCP Servers (external APIs + auth), Scripts (local operations), Slash Commands (user-triggered skills), Hooks (lifecycle automation at 17+ event points), and the Agent SDK (programmatic Claude Code access). Most skills should include scripts. MCPs are only for auth/state boundaries. Plugins are for sharing skills across teams and the community.
| Need | Extension Type | Key Requirement |
|---|---|---|
| Domain expertise / process | Skill (SKILL.md) | Decision trees, anti-patterns, output contracts |
| Packaging & distribution | Plugin (plugin.json) | Bundles skills + hooks + MCP + agents |
| External API + auth | MCP Server | Working server + setup README |
| Repeatable local operation | Script | Actually runs (not a template), minimal deps |
| Multi-step orchestration | Subagent | 4-section prompt, skills, workflow |
| User-triggered action | Slash Command | Skill with user-invocable: true |
| Lifecycle automation | Hook | 17+ events: PreToolUse, PostToolUse, Stop, etc. |
| Programmatic access | Agent SDK | npm/pip package, CI/CD pipelines |
Evolution path: Skill → Skill + Scripts → Skill + MCP Server → Skill + Subagent → Plugin (for distribution). Only promote when complexity justifies it.
Full taxonomy with examples and common mistakes: see references/claude-extension-taxonomy.md. Detailed tool patterns: see references/self-contained-tools.md. Plugin creation and distribution: see references/plugin-architecture.md.
Principle: Least privilege — only grant what's needed.

| Access Level | allowed-tools |
|---|---|
| Read-only | Read,Grep,Glob |
| File modifier | Read,Write,Edit |
| Build integration | Read,Write,Bash(npm:*,git:*) |
| ⚠️ Never for untrusted input | Unrestricted Bash |
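The Bash(npm:*,git:*) scoping above can be mimicked with a small parser. This sketch assumes the semantics described in this table (comma-separated tools, parenthesized command scopes); it is not Claude Code's actual parser.

```python
# Illustrative parser for allowed-tools values (assumed semantics,
# not Claude Code's real implementation).

def split_top_level(value: str) -> list[str]:
    """Split on commas, but not inside Bash(...) scope lists."""
    parts, depth, cur = [], 0, ""
    for ch in value:
        if ch == "," and depth == 0:
            parts.append(cur.strip())
            cur = ""
        else:
            depth += ch == "("
            depth -= ch == ")"
            cur += ch
    if cur.strip():
        parts.append(cur.strip())
    return parts

def parse_allowed_tools(value: str) -> dict[str, list[str]]:
    allowed = {}
    for part in split_top_level(value):
        if "(" in part:
            name, _, scopes = part.partition("(")
            allowed[name] = scopes.rstrip(")").split(",")
        else:
            allowed[part] = ["*"]
    return allowed

def is_allowed(allowed: dict[str, list[str]], tool: str, command: str = "") -> bool:
    scopes = allowed.get(tool)
    if scopes is None:
        return False  # least privilege: unlisted tools are denied
    if scopes == ["*"]:
        return True
    # a scope like "npm:*" permits commands starting with that program
    return any(command.startswith(s.split(":")[0]) for s in scopes)
```

The deny-by-default `is_allowed` is the point of the least-privilege principle: anything not named in allowed-tools simply never runs.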
| # | Anti-Pattern | Fix |
|---|---|---|
| 1 | Documentation Dump | Decision trees in SKILL.md, depth in /references |
| 2 | Missing NOT Clause | Always include "NOT for X, Y, Z" in the description |
| 3 | Phantom Tools | Only reference files that exist and work |
| 4 | Template Soup | Ship working code or nothing |
| 5 | Overly Permissive Tools | Least privilege: a specific tool list, scoped Bash |
| 6 | Stale Temporal Knowledge | Date all advice, update quarterly |
| 7 | Catch-All Skill | Split by expertise type, not domain |
| 8 | Vague Description | Use [What] [When] [Keywords]. NOT for [Exclusions] |
| 9 | Eager Loading | Never "read all files first"; lazy-load references |
| 10 | Prose-Only Processes | Use Mermaid diagrams (23 types) — flowcharts, sequences, states, ER, timelines, etc. |
Full case studies: see references/antipatterns.md
□ SKILL.md exists and is <500 lines
□ Frontmatter has name + description (the minimum required)
□ Description follows the [What][When][Keywords] NOT for [Exclusions] formula
□ Description uses keywords users would actually type
□ Name and description are aligned (not contradictory)
□ At least 1 anti-pattern using the shibboleth template
□ All referenced files actually exist (no phantoms)
□ Scripts work (not templates), have a clear CLI, and handle errors
□ Reference files each have a 1-line purpose note in SKILL.md
□ Processes/decisions/lifecycles use Mermaid diagrams (23 types), not prose
□ CHANGELOG.md tracks version history
□ If subagent-consumed: output contracts are defined
Run the automated checks: python scripts/validate_skill.py <path> and python scripts/validate_mermaid.py <path>
Things that make Claude Code reject or mishandle skills at load time:

| Cause | Symptom | Fix |
|---|---|---|
| Missing name or description | Skill won't load | Add both to frontmatter |
| tools: instead of allowed-tools: | Tools silently ignored | Use allowed-tools: (hyphenated) |
| YAML list in allowed-tools | Parse error | Use comma-separated: Read,Write,Edit |
| Brackets in allowed-tools | Parse error | No [ ] — just Read,Write,Edit |
| Invalid keys (triggers, outputs) | Silently ignored or error | Move to SKILL.md body text |
| Name with spaces/uppercase | May fail matching | Lowercase-hyphenated: my-skill-name |
| Name doesn't match directory | Activation mismatch | Keep name = directory name |
| context: not fork | Ignored | Only valid value is fork |
| disable-model-invocation: not boolean | Ignored | Use true or false |
| Phantom file references | Agent wastes tool calls | Delete references or create files |
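Most rows in this table are mechanically checkable before shipping. A hypothetical pre-flight check over a parsed frontmatter dict (mirroring the table, not the actual validate_skill.py) could look like:

```python
import re

# Hypothetical pre-flight check mirroring the load-failure table
# (assumed rules for illustration; not the real validate_skill.py).

def preflight(frontmatter: dict) -> list[str]:
    errors = []
    for required in ("name", "description"):
        if required not in frontmatter:
            errors.append(f"missing required key: {required}")
    if "tools" in frontmatter:
        errors.append("'tools' is not a valid key; use 'allowed-tools'")
    tools = frontmatter.get("allowed-tools")
    if isinstance(tools, list) or (isinstance(tools, str) and "[" in tools):
        errors.append("allowed-tools must be comma-separated, not a YAML list")
    name = frontmatter.get("name", "")
    if name and not re.fullmatch(r"[a-z0-9]+(-[a-z0-9]+)*", name):
        errors.append("name must be lowercase-hyphenated")
    if frontmatter.get("context") not in (None, "fork"):
        errors.append("the only valid value for context is 'fork'")
    if not isinstance(frontmatter.get("disable-model-invocation", False), bool):
        errors.append("disable-model-invocation must be true or false")
    return errors
```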
Full validation: python scripts/validate_skill.py <path> catches all of these.
| Metric | Target | How to Measure |
|---|---|---|
| Correct activation | >90% | Test queries that should trigger |
| False positive rate | <5% | Test queries that shouldn't trigger |
| Token usage | <5k | SKILL.md size + typical reference loads |
| Time to productive | <5 min | User starts working immediately |
| Anti-pattern prevention | >80% | Users avoid documented mistakes |
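The first two metrics can be measured with a tiny harness over the example queries gathered in Step 1. The keyword matcher below is a deliberately crude stand-in for the real runtime's selection logic; only the bookkeeping is the point.

```python
# Hypothetical measurement harness for activation / false-positive rates.
# keyword_match is a toy stand-in for the runtime's real matching logic.

def keyword_match(description: str, query: str) -> bool:
    words = {w.strip(".,").lower() for w in description.split()}
    return any(w.lower() in words for w in query.split())

def measure(description: str, should_trigger: list[str], should_not: list[str]) -> dict:
    hits = sum(keyword_match(description, q) for q in should_trigger)
    false_pos = sum(keyword_match(description, q) for q in should_not)
    return {
        "activation_rate": hits / len(should_trigger),
        "false_positive_rate": false_pos / len(should_not),
    }
```

Swap in the real selection mechanism and the same harness reports whether a description edit moved the two rates toward their targets.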
Consult these for deep dives — they are NOT loaded by default:

| File | Consult When |
|---|---|
| references/knowledge-engineering.md | Knowledge-engineering methods for extracting expert knowledge into skills; protocol analysis, repertory grids, aha! moments |
| references/description-guide.md | Writing or rewriting a skill description |
| references/antipatterns.md | Looking for shibboleths, case studies, or temporal patterns |
| references/self-contained-tools.md | Adding scripts, MCP servers, or subagents to a skill |
| references/subagent-design.md | Designing skills for subagent consumption or orchestration |
| references/claude-extension-taxonomy.md | Skills vs Plugins vs MCPs vs Hooks vs Agent SDK — the 7-type taxonomy |
| references/plugin-architecture.md | Creating, packaging, and distributing plugins via marketplaces |
| references/visual-artifacts.md | Adding Mermaid diagrams: all 23 types, YAML config, best practices |
| references/mcp-template.md | Building an MCP server for a skill |
| references/subagent-template.md | Defining subagent prompts and multi-agent pipelines |
| scripts/validate_mermaid.py | Validating Mermaid syntax in any file — checks diagram types, balanced blocks, structural correctness |
Weekly Installs: 54
Repository: erichowens/some_claude_skills
GitHub Stars: 79
First Seen: Jan 24, 2026
Security Audits: Gen Agent Trust Hub: Pass, Socket: Pass, Snyk: Warn
Installed on: gemini-cli (46), codex (46), opencode (45), cursor (45), github-copilot (42), claude-code (40)