customaize-agent:apply-anthropic-skill-best-practices by neolabhq/context-engineering-kit
npx skills add https://github.com/neolabhq/context-engineering-kit --skill customaize-agent:apply-anthropic-skill-best-practices

Apply Anthropic's official skill authoring best practices to your skill.
Good Skills are concise, well-structured, and tested with real usage. This guide provides practical authoring decisions to help you write Skills that Claude can discover and use effectively.
Not every token in your Skill has an immediate cost. At startup, only the metadata (name and description) from all Skills is pre-loaded. Claude reads SKILL.md only when the Skill becomes relevant, and reads additional files only as needed. However, being concise in SKILL.md still matters: once Claude loads it, every token competes with conversation history and other context.
Skills act as additions to models, so effectiveness depends on the underlying model. Test your Skill with all the models you plan to use it with.
Testing considerations by model:
What works perfectly for Opus might need more detail for Haiku. If you plan to use your Skill across multiple models, aim for instructions that work well with all of them.
name - Human-readable name of the Skill (64 characters maximum)
description - One-line description of what the Skill does and when to use it (1024 characters maximum)
For complete Skill structure details, see the Skills overview.
Use consistent naming patterns to make Skills easier to reference and discuss. We recommend using gerund form (verb + -ing) for Skill names, as this clearly describes the activity or capability the Skill provides.
Good naming examples (gerund form):
Acceptable alternatives:
Avoid:
Consistent naming makes it easier to:
The description field enables Skill discovery and should include both what the Skill does and when to use it.
Be specific and include key terms. Include both what the Skill does and specific triggers/contexts for when to use it.
Each Skill has exactly one description field. The description is critical for skill selection: Claude uses it to choose the right Skill from potentially 100+ available Skills. Your description must provide enough detail for Claude to know when to select this Skill, while the rest of SKILL.md provides the implementation details.
Effective examples:
PDF Processing skill:
description: Extract text and tables from PDF files, fill forms, merge documents. Use when working with PDF files or when the user mentions PDFs, forms, or document extraction.
Excel Analysis skill:
description: Analyze Excel spreadsheets, create pivot tables, generate charts. Use when analyzing Excel files, spreadsheets, tabular data, or .xlsx files.
Git Commit Helper skill:
description: Generate descriptive commit messages by analyzing git diffs. Use when the user asks for help writing commit messages or reviewing staged changes.
Avoid vague descriptions like these:
description: Helps with documents
description: Processes data
description: Does stuff with files
SKILL.md serves as an overview that points Claude to detailed materials as needed, like a table of contents in an onboarding guide. For an explanation of how progressive disclosure works, see How Skills work in the overview.
Practical guidance:
A basic Skill starts with just a SKILL.md file containing metadata and instructions:
As your Skill grows, you can bundle additional content that Claude loads only when needed:
The complete Skill directory structure might look like this:
pdf/
├── SKILL.md # Main instructions (loaded when triggered)
├── FORMS.md # Form-filling guide (loaded as needed)
├── reference.md # API reference (loaded as needed)
├── examples.md # Usage examples (loaded as needed)
└── scripts/
├── analyze_form.py # Utility script (executed, not loaded)
├── fill_form.py # Form filling script
└── validate.py # Validation script
---
name: PDF Processing
description: Extracts text and tables from PDF files, fills forms, and merges documents. Use when working with PDF files or when the user mentions PDFs, forms, or document extraction.
---
# PDF Processing
## Quick start
Extract text with pdfplumber:
```python
import pdfplumber
with pdfplumber.open("file.pdf") as pdf:
text = pdf.pages[0].extract_text()
```
## Advanced features
**Form filling**: See [FORMS.md](FORMS.md) for complete guide
**API reference**: See [REFERENCE.md](REFERENCE.md) for all methods
**Examples**: See [EXAMPLES.md](EXAMPLES.md) for common patterns
Claude loads FORMS.md, REFERENCE.md, or EXAMPLES.md only when needed.
For Skills with multiple domains, organize content by domain to avoid loading irrelevant context. When a user asks about sales metrics, Claude only needs to read sales-related schemas, not finance or marketing data. This keeps token usage low and context focused.
bigquery-skill/
├── SKILL.md (overview and navigation)
└── reference/
├── finance.md (revenue, billing metrics)
├── sales.md (opportunities, pipeline)
├── product.md (API usage, features)
└── marketing.md (campaigns, attribution)
# BigQuery Data Analysis
## Available datasets
**Finance**: Revenue, ARR, billing → See [reference/finance.md](reference/finance.md)
**Sales**: Opportunities, pipeline, accounts → See [reference/sales.md](reference/sales.md)
**Product**: API usage, features, adoption → See [reference/product.md](reference/product.md)
**Marketing**: Campaigns, attribution, email → See [reference/marketing.md](reference/marketing.md)
## Quick search
Find specific metrics using grep:
```bash
grep -i "revenue" reference/finance.md
grep -i "pipeline" reference/sales.md
grep -i "api usage" reference/product.md
```
Show basic content, link to advanced content:
# DOCX Processing
## Creating documents
Use docx-js for new documents. See [DOCX-JS.md](DOCX-JS.md).
## Editing documents
For simple edits, modify the XML directly.
**For tracked changes**: See [REDLINING.md](REDLINING.md)
**For OOXML details**: See [OOXML.md](OOXML.md)
Claude reads REDLINING.md or OOXML.md only when the user needs those features.
Claude may partially read files when they're referenced from other referenced files. When encountering nested references, Claude might use commands like head -100 to preview content rather than reading entire files, resulting in incomplete information.
Keep references one level deep from SKILL.md. All reference files should link directly from SKILL.md to ensure Claude reads complete files when needed.
Bad example: Too deep:
# SKILL.md
See [advanced.md](advanced.md)...
# advanced.md
See [details.md](details.md)...
# details.md
Here's the actual information...
Good example: One level deep:
# SKILL.md
**Basic usage**: [instructions in SKILL.md]
**Advanced features**: See [advanced.md](advanced.md)
**API reference**: See [reference.md](reference.md)
**Examples**: See [examples.md](examples.md)
For reference files longer than 100 lines, include a table of contents at the top. This ensures Claude can see the full scope of available information even when previewing with partial reads.
Example:
# API Reference
## Contents
- Authentication and setup
- Core methods (create, read, update, delete)
- Advanced features (batch operations, webhooks)
- Error handling patterns
- Code examples
## Authentication and setup
...
## Core methods
...
Claude can then read the complete file or jump to specific sections as needed.
For details on how this filesystem-based architecture enables progressive disclosure, see the Runtime environment section in the Advanced section below.
Break complex operations into clear, sequential steps. For particularly complex workflows, provide a checklist that Claude can copy into its response and check off as it progresses.
Example 1: Research synthesis workflow (for Skills without code):
## Research synthesis workflow
Copy this checklist and track your progress:
```
Research Progress:
- [ ] Step 1: Read all source documents
- [ ] Step 2: Identify key themes
- [ ] Step 3: Cross-reference claims
- [ ] Step 4: Create structured summary
- [ ] Step 5: Verify citations
```
**Step 1: Read all source documents**
Review each document in the `sources/` directory. Note the main arguments and supporting evidence.
**Step 2: Identify key themes**
Look for patterns across sources. What themes appear repeatedly? Where do sources agree or disagree?
**Step 3: Cross-reference claims**
For each major claim, verify it appears in the source material. Note which source supports each point.
**Step 4: Create structured summary**
Organize findings by theme. Include:
- Main claim
- Supporting evidence from sources
- Conflicting viewpoints (if any)
**Step 5: Verify citations**
Check that every claim references the correct source document. If citations are incomplete, return to Step 3.
This example shows how workflows apply to analysis tasks that don't require code. The checklist pattern works for any complex, multi-step process.
Example 2: PDF form filling workflow (for Skills with code):
## PDF form filling workflow
Copy this checklist and check off items as you complete them:
```
Task Progress:
- [ ] Step 1: Analyze the form (run analyze_form.py)
- [ ] Step 2: Create field mapping (edit fields.json)
- [ ] Step 3: Validate mapping (run validate_fields.py)
- [ ] Step 4: Fill the form (run fill_form.py)
- [ ] Step 5: Verify output (run verify_output.py)
```
**Step 1: Analyze the form**
Run: `python scripts/analyze_form.py input.pdf`
This extracts form fields and their locations, saving to `fields.json`.
**Step 2: Create field mapping**
Edit `fields.json` to add values for each field.
**Step 3: Validate mapping**
Run: `python scripts/validate_fields.py fields.json`
Fix any validation errors before continuing.
**Step 4: Fill the form**
Run: `python scripts/fill_form.py input.pdf fields.json output.pdf`
**Step 5: Verify output**
Run: `python scripts/verify_output.py output.pdf`
If verification fails, return to Step 2.
Clear steps prevent Claude from skipping critical validation. The checklist helps both Claude and you track progress through multi-step workflows.
Common pattern: Run validator → fix errors → repeat
This pattern greatly improves output quality.
Example 1: Style guide compliance (for Skills without code):
## Content review process
1. Draft your content following the guidelines in STYLE_GUIDE.md
2. Review against the checklist:
- Check terminology consistency
- Verify examples follow the standard format
- Confirm all required sections are present
3. If issues found:
- Note each issue with specific section reference
- Revise the content
- Review the checklist again
4. Only proceed when all requirements are met
5. Finalize and save the document
This shows the validation loop pattern using reference documents instead of scripts. The "validator" is STYLE_GUIDE.md, and Claude performs the check by reading and comparing.
Example 2: Document editing process (for Skills with code):
## Document editing process
1. Make your edits to `word/document.xml`
2. **Validate immediately**: `python ooxml/scripts/validate.py unpacked_dir/`
3. If validation fails:
- Review the error message carefully
- Fix the issues in the XML
- Run validation again
4. **Only proceed when validation passes**
5. Rebuild: `python ooxml/scripts/pack.py unpacked_dir/ output.docx`
6. Test the output document
The validation loop catches errors early.
Don't include information that will become outdated:
Bad example: Time-sensitive (will become wrong):
If you're doing this before August 2025, use the old API.
After August 2025, use the new API.
Good example (use an "old patterns" section):
## Current method
Use the v2 API endpoint: `api.example.com/v2/messages`
## Old patterns
<details>
<summary>Legacy v1 API (deprecated 2025-08)</summary>
The v1 API used: `api.example.com/v1/messages`
This endpoint is no longer supported.
</details>
"旧模式"部分提供了历史背景,而不会使主要内容变得杂乱。
选择一个术语并在整个技能中始终使用它:
良好 - 一致:
不良 - 不一致:
一致性有助于 Claude 理解和遵循指令。
为输出格式提供模板。根据你的需求匹配严格程度。
对于严格要求(如 API 响应或数据格式):
## Report structure
ALWAYS use this exact template structure:
```markdown
# [Analysis Title]
## Executive summary
[One-paragraph overview of key findings]
## Key findings
- Finding 1 with supporting data
- Finding 2 with supporting data
- Finding 3 with supporting data
## Recommendations
1. Specific actionable recommendation
2. Specific actionable recommendation
```
For flexible guidance (when adaptation is useful):
## Report structure
Here is a sensible default format, but use your best judgment based on the analysis:
```markdown
# [Analysis Title]
## Executive summary
[Overview]
## Key findings
[Adapt sections based on what you discover]
## Recommendations
[Tailor to the specific context]
```
Adjust sections as needed for the specific analysis type.
For Skills where output quality depends on seeing examples, provide input/output pairs just like in regular prompting:
## Commit message format
Generate commit messages following these examples:
**Example 1:**
Input: Added user authentication with JWT tokens
Output:
```
feat(auth): implement JWT-based authentication
Add login endpoint and token validation middleware
```
**Example 2:**
Input: Fixed bug where dates displayed incorrectly in reports
Output:
```
fix(reports): correct date formatting in timezone conversion
Use UTC timestamps consistently across report generation
```
**Example 3:**
Input: Updated dependencies and refactored error handling
Output:
```
chore: update dependencies and refactor error handling
- Upgrade lodash to 4.17.21
- Standardize error response format across endpoints
```
Follow this style: type(scope): brief description, then detailed explanation.
Examples help Claude understand the desired style and level of detail more clearly than descriptions alone.
Guide Claude through decision points:
## Document modification workflow
1. Determine the modification type:
**Creating new content?** → Follow "Creation workflow" below
**Editing existing content?** → Follow "Editing workflow" below
2. Creation workflow:
- Use docx-js library
- Build document from scratch
- Export to .docx format
3. Editing workflow:
- Unpack existing document
- Modify XML directly
- Validate after each change
- Repack when complete
Create evaluations BEFORE writing extensive documentation. This ensures your Skill solves real problems rather than documenting imagined ones.
Evaluation-driven development:
This approach ensures you're solving actual problems rather than anticipating requirements that may never materialize.
Evaluation structure:
```json
{
  "skills": ["pdf-processing"],
  "query": "Extract all text from this PDF file and save it to output.txt",
  "files": ["test-files/document.pdf"],
  "expected_behavior": [
    "Successfully reads the PDF file using an appropriate PDF processing library or command-line tool",
    "Extracts text content from all pages in the document without missing any pages",
    "Saves the extracted text to a file named output.txt in a clear, readable format"
  ]
}
```
The most effective Skill development process involves Claude itself. Work with one instance of Claude ("Claude A") to create a Skill that will be used by other instances ("Claude B"). Claude A helps you design and refine instructions, while Claude B tests them in real tasks. This works because Claude models understand both how to write effective agent instructions and what information agents need.
Creating a new Skill:
Complete a task without a Skill: Work through a problem with Claude A using normal prompting. As you work, you'll naturally provide context, explain preferences, and share procedural knowledge. Notice what information you repeatedly provide.
Identify the reusable pattern: After completing the task, identify what context you provided that would be useful for similar future tasks.
Example: If you worked through a BigQuery analysis, you might have provided table names, field definitions, filtering rules (like "always exclude test accounts"), and common query patterns.
Ask Claude A to create a Skill: "Create a Skill that captures this BigQuery analysis pattern we just used. Include the table schemas, naming conventions, and the rule about filtering test accounts."
Review for conciseness: Check that Claude A hasn't added unnecessary explanations. Ask: "Remove the explanation about what win rate means - Claude already knows that."
Improve information architecture: Ask Claude A to organize the content more effectively. For example: "Organize this so the table schema is in a separate reference file. We might add more tables later."
Test on similar tasks: Use the Skill with Claude B (a fresh instance with the Skill loaded) on related use cases. Observe whether Claude B finds the right information, applies rules correctly, and handles the task successfully.
Iterate based on observation: If Claude B struggles or misses something, return to Claude A with specifics: "When Claude used this Skill, it forgot to filter by date for Q4. Should we add a section about date filtering patterns?"
Iterating on an existing Skill:
When improving a Skill, the same layered pattern continues. You alternate between:
Use the Skill in real workflows: Give Claude B (with the Skill loaded) actual tasks, not test scenarios.
Observe Claude B's behavior: Note where it struggles, succeeds, or makes unexpected choices.
Example observation: "When I asked Claude B for a regional sales report, it wrote the query but forgot to filter out test accounts, even though the Skill mentions that rule."
Return to Claude A for refinement: Share the current SKILL.md and describe what you observed. Ask: "I noticed Claude B forgot to filter test accounts when I asked for regional reports. The Skill mentions filtering, but maybe it's not prominent enough?"
Review Claude A's suggestions: Claude A might suggest restructuring to make the rule more prominent, using stronger language like "MUST filter" instead of "always filter", or reworking the workflow section.
Apply and test the changes: Update the Skill with Claude A's refinements, then test again with Claude B on similar requests.
Repeat based on usage: As you encounter new scenarios, continue this observe-refine-test loop. Each iteration improves the Skill based on real agent behavior rather than assumptions.
Gathering team feedback:
Why this approach works: Claude A understands agent needs, you provide domain expertise, Claude B reveals gaps through real usage, and iterative refinement improves the Skill based on observed behavior rather than assumptions.
As you iterate on Skills, pay attention to how Claude actually uses them in practice. Observe:
Iterate based on these observations rather than assumptions. The 'name' and 'description' in the Skill metadata are especially critical: Claude uses them when deciding whether to trigger the Skill for the current task. Make sure they clearly describe what the Skill does and when it should be used.
Always use forward slashes in file paths, even on Windows:
Good: `scripts/helper.py`, `reference/guide.md`
Bad: `scripts\helper.py`, `reference\guide.md`
Unix-style paths work on all platforms, while Windows-style paths cause errors on Unix systems.
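In Python scripts bundled with a Skill, `pathlib` keeps paths portable without manual separator handling. A minimal sketch using only the standard library:

```python
from pathlib import PurePosixPath, PureWindowsPath

# Building paths with pathlib always yields forward slashes here
script = PurePosixPath("scripts") / "helper.py"
print(script)  # → scripts/helper.py

# If a Windows-style path sneaks in, as_posix() normalizes it
fixed = PureWindowsPath(r"reference\guide.md").as_posix()
print(fixed)  # → reference/guide.md
```

Windows itself accepts forward slashes, so skill files and instructions can use them everywhere.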
Don't provide multiple ways to do things unless necessary:
**Bad example: Too many choices** (confusing):
"You can use pypdf, or pdfplumber, or PyMuPDF, or pdf2image, or..."
**Good example: Provide a default** (with escape hatch):
"Use pdfplumber for text extraction:
```python
import pdfplumber
```
For scanned PDFs requiring OCR, use pdf2image with pytesseract instead."
The following sections focus on Skills that include executable scripts. If your Skill uses only Markdown instructions, skip ahead to the effective Skills checklist.
When writing scripts for Skills, handle error conditions rather than punting to Claude.
Good example: Handle errors explicitly:
```python
def process_file(path):
    """Process a file, creating it if it doesn't exist."""
    try:
        with open(path) as f:
            return f.read()
    except FileNotFoundError:
        # Create file with default content instead of failing
        print(f"File {path} not found, creating default")
        with open(path, 'w') as f:
            f.write('')
        return ''
    except PermissionError:
        # Provide alternative instead of failing
        print(f"Cannot access {path}, using default")
        return ''
```
Bad example: Punt to Claude:
```python
def process_file(path):
    # Just fail and let Claude figure it out
    return open(path).read()
```
Configuration parameters should also be explained and documented to avoid "voodoo constants" (Ousterhout's law). If you don't know the right value, how can Claude determine it?
Good example: Self-documenting:
```python
# HTTP requests typically complete within 30 seconds
# Longer timeout accounts for slow connections
REQUEST_TIMEOUT = 30

# Three retries balances reliability vs speed
# Most intermittent failures resolve by the second retry
MAX_RETRIES = 3
```
Bad example: Magic numbers:
```python
TIMEOUT = 47  # Why 47?
RETRIES = 5   # Why 5?
```
Even though Claude can write scripts, pre-made scripts offer advantages:
Benefits of utility scripts:
Executable scripts work together with instruction files: the instruction file (forms.md) references the script, and Claude can execute it without loading its contents into context.
Important distinction: Be explicit in your instructions about whether Claude should:
"Run analyze_form.py to extract fields" (execute the script)
"Read analyze_form.py to understand the field extraction algorithm" (load as reference)
For most utility scripts, execution is preferred because it's more reliable and efficient. For details on how script execution works, see the Runtime environment section below.
Example:
## Utility scripts
**analyze_form.py**: Extract all form fields from PDF
```bash
python scripts/analyze_form.py input.pdf > fields.json
```
Output format:
```json
{
"field_name": {"type": "text", "x": 100, "y": 200},
"signature": {"type": "sig", "x": 150, "y": 500}
}
```
**validate_boxes.py**: Check for overlapping bounding boxes
```bash
python scripts/validate_boxes.py fields.json
# Returns: "OK" or lists conflicts
```
**fill_form.py**: Apply field values to PDF
```bash
python scripts/fill_form.py input.pdf fields.json output.pdf
```
When inputs can be rendered as images, have Claude analyze them:
## Form layout analysis
1. Convert PDF to images:
```bash
python scripts/pdf_to_images.py form.pdf
```
2. Analyze each page image to identify form fields
3. Claude can see field locations and types visually
Claude's vision capabilities help with understanding layout and structure.
When Claude performs complex, open-ended tasks, it can make mistakes. The plan-validate-execute pattern catches errors early by having Claude first create a plan in a structured format, then validate that plan with a script before executing it.
Example: Imagine asking Claude to update 50 form fields in a PDF based on a spreadsheet. Without validation, Claude might reference non-existent fields, create conflicting values, miss required fields, or apply updates incorrectly.
The solution: Use the workflow pattern shown above (PDF form filling), but add an intermediate changes.json file that gets validated before the changes are applied. The workflow becomes: analyze → create plan file → validate plan → execute → verify.
Why this pattern works:
When to use: Batch operations, destructive changes, complex validation rules, high-stakes operations.
Implementation tip: Make validation scripts verbose, with specific error messages like "Field 'signature_date' not found. Available fields: customer_name, order_total, signature_date_signed" to help Claude fix issues.
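As a sketch of that tip, a validator can compare the plan against the fields actually present in the form and echo the available options in every error. The function and field names below mirror the example message above; they are illustrative, not a real script from this skill:

```python
def validate_plan(plan: dict, available_fields: set) -> list:
    """Return specific, actionable error messages instead of a bare failure."""
    errors = []
    for name in plan:
        if name not in available_fields:
            errors.append(
                f"Field '{name}' not found. "
                f"Available fields: {', '.join(sorted(available_fields))}"
            )
    return errors

available = {"customer_name", "order_total", "signature_date_signed"}
errors = validate_plan({"signature_date": "2025-01-01"}, available)
print(errors[0])
# → Field 'signature_date' not found. Available fields: customer_name, order_total, signature_date_signed
```

Listing the valid alternatives in the error message lets Claude correct the plan file in one pass instead of guessing.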
Skills run in a code execution environment with platform-specific limitations:
List required packages in SKILL.md and verify they're available in the code execution tool documentation.
Skills run in a code execution environment with filesystem access, bash commands, and code execution capabilities. For a conceptual explanation of this architecture, see Skill architecture in the overview.
What this means for your authoring:
How Claude accesses Skills:
- Use forward slashes in paths (reference/guide.md), not backslashes
- Use descriptive file names (form_validation_rules.md, not doc2.md)
- Organize by domain (reference/finance.md, reference/sales.md), not generically (docs/file1.md, docs/file2.md)
- Provide utility scripts (validate_form.py) rather than asking Claude to generate validation code
- Distinguish "Run analyze_form.py to extract fields" (execute) from "Read analyze_form.py to understand the extraction algorithm" (load as reference)
Example:
bigquery-skill/
├── SKILL.md (overview, points to reference files)
└── reference/
├── finance.md (revenue metrics)
├── sales.md (pipeline data)
└── product.md (usage analytics)
When a user asks about revenue, Claude reads SKILL.md, sees the reference to reference/finance.md, and invokes bash to read just that file. The sales.md and product.md files remain on the filesystem, consuming zero context tokens until needed. This filesystem-based model is what enables progressive disclosure: Claude can navigate and selectively load exactly what each task requires.
For complete details on the technical architecture, see How Skills work in the Skills overview.
If your Skill uses MCP (Model Context Protocol) tools, always use fully qualified tool names to avoid "tool not found" errors.
Format: ServerName:tool_name
Examples:
Use the BigQuery:bigquery_schema tool to retrieve table schemas.
Use the GitHub:create_issue tool to create issues.
Where:
BigQuery and GitHub are the MCP server names
bigquery_schema and create_issue are the tool names within those servers
Without the server prefix, Claude may fail to locate tools, especially when multiple MCP servers are available.
Don't assume packages are available:
**Bad example: Assumes installation**:
"Use the pdf library to process the file."
**Good example: Explicit about dependencies**:
"Install required package: `pip install pypdf`
Then use it:
```python
from pypdf import PdfReader
reader = PdfReader("file.pdf")
```"
SKILL.md frontmatter includes only the name (64 characters maximum) and description (1024 characters maximum) fields. For complete structure details, see the Skills overview.
For best performance, keep the SKILL.md body under 500 lines. If your content exceeds this limit, split it into separate files using the progressive disclosure patterns described earlier. For architecture details, see the Skills overview.
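A quick way to check that budget is to count body lines with the frontmatter excluded. This helper is illustrative, not part of any official tooling:

```python
def skill_body_lines(skill_md: str) -> int:
    """Count SKILL.md body lines, excluding the YAML frontmatter block."""
    lines = skill_md.splitlines()
    if lines and lines[0].strip() == "---":
        closing = lines.index("---", 1)  # line that closes the frontmatter
        lines = lines[closing + 1:]
    return len(lines)

sample = "---\nname: PDF Processing\ndescription: ...\n---\n# PDF Processing\n## Quick start\n"
print(skill_body_lines(sample))  # → 2
```

Run it over your SKILL.md before publishing; a count well over 500 is a signal to move content into reference files.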
Before sharing a Skill, verify:
Apply Anthropic's official skill authoring best practices to your skill.
Good Skills are concise, well-structured, and tested with real usage. This guide provides practical authoring decisions to help you write Skills that Claude can discover and use effectively.
Not every token in your Skill has an immediate cost. At startup, only the metadata (name and description) from all Skills is pre-loaded. Claude reads SKILL.md only when the Skill becomes relevant, and reads additional files only as needed. However, being concise in SKILL.md still matters: once Claude loads it, every token competes with conversation history and other context.
Skills act as additions to models, so effectiveness depends on the underlying model. Test your Skill with all the models you plan to use it with.
Testing considerations by model :
What works perfectly for Opus might need more detail for Haiku. If you plan to use your Skill across multiple models, aim for instructions that work well with all of them.
name - Human-readable name of the Skill (64 characters maximum)
description - One-line description of what the Skill does and when to use it (1024 characters maximum)
For complete Skill structure details, see the Skills overview.
Use consistent naming patterns to make Skills easier to reference and discuss. We recommend using gerund form (verb + -ing) for Skill names, as this clearly describes the activity or capability the Skill provides.
Good naming examples (gerund form) :
Acceptable alternatives :
Avoid :
Consistent naming makes it easier to:
The description field enables Skill discovery and should include both what the Skill does and when to use it.
Be specific and include key terms. Include both what the Skill does and specific triggers/contexts for when to use it.
Each Skill has exactly one description field. The description is critical for skill selection: Claude uses it to choose the right Skill from potentially 100+ available Skills. Your description must provide enough detail for Claude to know when to select this Skill, while the rest of SKILL.md provides the implementation details.
Effective examples:
PDF Processing skill:
description: Extract text and tables from PDF files, fill forms, merge documents. Use when working with PDF files or when the user mentions PDFs, forms, or document extraction.
Excel Analysis skill:
description: Analyze Excel spreadsheets, create pivot tables, generate charts. Use when analyzing Excel files, spreadsheets, tabular data, or .xlsx files.
Git Commit Helper skill:
description: Generate descriptive commit messages by analyzing git diffs. Use when the user asks for help writing commit messages or reviewing staged changes.
Avoid vague descriptions like these:
description: Helps with documents
description: Processes data
description: Does stuff with files
SKILL.md serves as an overview that points Claude to detailed materials as needed, like a table of contents in an onboarding guide. For an explanation of how progressive disclosure works, see How Skills work in the overview.
Practical guidance:
A basic Skill starts with just a SKILL.md file containing metadata and instructions:
As your Skill grows, you can bundle additional content that Claude loads only when needed:
The complete Skill directory structure might look like this:
pdf/
├── SKILL.md # Main instructions (loaded when triggered)
├── FORMS.md # Form-filling guide (loaded as needed)
├── reference.md # API reference (loaded as needed)
├── examples.md # Usage examples (loaded as needed)
└── scripts/
├── analyze_form.py # Utility script (executed, not loaded)
├── fill_form.py # Form filling script
└── validate.py # Validation script
---
name: PDF Processing
description: Extracts text and tables from PDF files, fills forms, and merges documents. Use when working with PDF files or when the user mentions PDFs, forms, or document extraction.
---
# PDF Processing
## Quick start
Extract text with pdfplumber:
```python
import pdfplumber
with pdfplumber.open("file.pdf") as pdf:
text = pdf.pages[0].extract_text()
```
## Advanced features
**Form filling**: See [FORMS.md](FORMS.md) for complete guide
**API reference**: See [REFERENCE.md](REFERENCE.md) for all methods
**Examples**: See [EXAMPLES.md](EXAMPLES.md) for common patterns
Claude loads FORMS.md, REFERENCE.md, or EXAMPLES.md only when needed.
For Skills with multiple domains, organize content by domain to avoid loading irrelevant context. When a user asks about sales metrics, Claude only needs to read sales-related schemas, not finance or marketing data. This keeps token usage low and context focused.
bigquery-skill/
├── SKILL.md (overview and navigation)
└── reference/
├── finance.md (revenue, billing metrics)
├── sales.md (opportunities, pipeline)
├── product.md (API usage, features)
└── marketing.md (campaigns, attribution)
# BigQuery Data Analysis
## Available datasets
**Finance**: Revenue, ARR, billing → See [reference/finance.md](reference/finance.md)
**Sales**: Opportunities, pipeline, accounts → See [reference/sales.md](reference/sales.md)
**Product**: API usage, features, adoption → See [reference/product.md](reference/product.md)
**Marketing**: Campaigns, attribution, email → See [reference/marketing.md](reference/marketing.md)
## Quick search
Find specific metrics using grep:
```bash
grep -i "revenue" reference/finance.md
grep -i "pipeline" reference/sales.md
grep -i "api usage" reference/product.md
```
Show basic content, link to advanced content:
# DOCX Processing
## Creating documents
Use docx-js for new documents. See [DOCX-JS.md](DOCX-JS.md).
## Editing documents
For simple edits, modify the XML directly.
**For tracked changes**: See [REDLINING.md](REDLINING.md)
**For OOXML details**: See [OOXML.md](OOXML.md)
Claude reads REDLINING.md or OOXML.md only when the user needs those features.
Claude may partially read files when they're referenced from other referenced files. When encountering nested references, Claude might use commands like head -100 to preview content rather than reading entire files, resulting in incomplete information.
Keep references one level deep from SKILL.md. All reference files should link directly from SKILL.md to ensure Claude reads complete files when needed.
Bad example: Too deep :
# SKILL.md
See [advanced.md](advanced.md)...
# advanced.md
See [details.md](details.md)...
# details.md
Here's the actual information...
Good example: One level deep :
# SKILL.md
**Basic usage**: [instructions in SKILL.md]
**Advanced features**: See [advanced.md](advanced.md)
**API reference**: See [reference.md](reference.md)
**Examples**: See [examples.md](examples.md)
For reference files longer than 100 lines, include a table of contents at the top. This ensures Claude can see the full scope of available information even when previewing with partial reads.
Example :
# API Reference
## Contents
- Authentication and setup
- Core methods (create, read, update, delete)
- Advanced features (batch operations, webhooks)
- Error handling patterns
- Code examples
## Authentication and setup
...
## Core methods
...
Claude can then read the complete file or jump to specific sections as needed.
For details on how this filesystem-based architecture enables progressive disclosure, see the Runtime environment section in the Advanced section below.
Break complex operations into clear, sequential steps. For particularly complex workflows, provide a checklist that Claude can copy into its response and check off as it progresses.
Example 1: Research synthesis workflow (for Skills without code):
## Research synthesis workflow
Copy this checklist and track your progress:
```
Research Progress:
- [ ] Step 1: Read all source documents
- [ ] Step 2: Identify key themes
- [ ] Step 3: Cross-reference claims
- [ ] Step 4: Create structured summary
- [ ] Step 5: Verify citations
```
**Step 1: Read all source documents**
Review each document in the `sources/` directory. Note the main arguments and supporting evidence.
**Step 2: Identify key themes**
Look for patterns across sources. What themes appear repeatedly? Where do sources agree or disagree?
**Step 3: Cross-reference claims**
For each major claim, verify it appears in the source material. Note which source supports each point.
**Step 4: Create structured summary**
Organize findings by theme. Include:
- Main claim
- Supporting evidence from sources
- Conflicting viewpoints (if any)
**Step 5: Verify citations**
Check that every claim references the correct source document. If citations are incomplete, return to Step 3.
This example shows how workflows apply to analysis tasks that don't require code. The checklist pattern works for any complex, multi-step process.
Example 2: PDF form filling workflow (for Skills with code):
## PDF form filling workflow
Copy this checklist and check off items as you complete them:
```
Task Progress:
- [ ] Step 1: Analyze the form (run analyze_form.py)
- [ ] Step 2: Create field mapping (edit fields.json)
- [ ] Step 3: Validate mapping (run validate_fields.py)
- [ ] Step 4: Fill the form (run fill_form.py)
- [ ] Step 5: Verify output (run verify_output.py)
```
**Step 1: Analyze the form**
Run: `python scripts/analyze_form.py input.pdf`
This extracts form fields and their locations, saving to `fields.json`.
**Step 2: Create field mapping**
Edit `fields.json` to add values for each field.
**Step 3: Validate mapping**
Run: `python scripts/validate_fields.py fields.json`
Fix any validation errors before continuing.
**Step 4: Fill the form**
Run: `python scripts/fill_form.py input.pdf fields.json output.pdf`
**Step 5: Verify output**
Run: `python scripts/verify_output.py output.pdf`
If verification fails, return to Step 2.
Clear steps prevent Claude from skipping critical validation. The checklist helps both Claude and you track progress through multi-step workflows.
Common pattern : Run validator → fix errors → repeat
This pattern greatly improves output quality.
Example 1: Style guide compliance (for Skills without code):
## Content review process
1. Draft your content following the guidelines in STYLE_GUIDE.md
2. Review against the checklist:
- Check terminology consistency
- Verify examples follow the standard format
- Confirm all required sections are present
3. If issues found:
- Note each issue with specific section reference
- Revise the content
- Review the checklist again
4. Only proceed when all requirements are met
5. Finalize and save the document
This shows the validation loop pattern using reference documents instead of scripts. The "validator" is STYLE_GUIDE.md, and Claude performs the check by reading and comparing.
Example 2: Document editing process (for Skills with code):
## Document editing process
1. Make your edits to `word/document.xml`
2. **Validate immediately**: `python ooxml/scripts/validate.py unpacked_dir/`
3. If validation fails:
- Review the error message carefully
- Fix the issues in the XML
- Run validation again
4. **Only proceed when validation passes**
5. Rebuild: `python ooxml/scripts/pack.py unpacked_dir/ output.docx`
6. Test the output document
The validation loop catches errors early.
Don't include information that will become outdated:
Bad example: Time-sensitive (will become wrong):
If you're doing this before August 2025, use the old API.
After August 2025, use the new API.
Good example (use "old patterns" section):
## Current method
Use the v2 API endpoint: `api.example.com/v2/messages`
## Old patterns
<details>
<summary>Legacy v1 API (deprecated 2025-08)</summary>
The v1 API used: `api.example.com/v1/messages`
This endpoint is no longer supported.
</details>
The old patterns section provides historical context without cluttering the main content.
Choose one term and use it throughout the Skill:
Good - Consistent :
Bad - Inconsistent :
Consistency helps Claude understand and follow instructions.
Provide templates for output format. Match the level of strictness to your needs.
For strict requirements (like API responses or data formats):
## Report structure
ALWAYS use this exact template structure:
```markdown
# [Analysis Title]
## Executive summary
[One-paragraph overview of key findings]
## Key findings
- Finding 1 with supporting data
- Finding 2 with supporting data
- Finding 3 with supporting data
## Recommendations
1. Specific actionable recommendation
2. Specific actionable recommendation
```
For flexible guidance (when adaptation is useful):
## Report structure
Here is a sensible default format, but use your best judgment based on the analysis:
```markdown
# [Analysis Title]
## Executive summary
[Overview]
## Key findings
[Adapt sections based on what you discover]
## Recommendations
[Tailor to the specific context]
```
Adjust sections as needed for the specific analysis type.
For Skills where output quality depends on seeing examples, provide input/output pairs just like in regular prompting:
## Commit message format
Generate commit messages following these examples:
**Example 1:**
Input: Added user authentication with JWT tokens
Output:
```
feat(auth): implement JWT-based authentication
Add login endpoint and token validation middleware
```
**Example 2:**
Input: Fixed bug where dates displayed incorrectly in reports
Output:
```
fix(reports): correct date formatting in timezone conversion
Use UTC timestamps consistently across report generation
```
**Example 3:**
Input: Updated dependencies and refactored error handling
Output:
```
chore: update dependencies and refactor error handling
- Upgrade lodash to 4.17.21
- Standardize error response format across endpoints
```
Follow this style: type(scope): brief description, then detailed explanation.
Examples help Claude understand the desired style and level of detail more clearly than descriptions alone.
Guide Claude through decision points:
## Document modification workflow
1. Determine the modification type:
**Creating new content?** → Follow "Creation workflow" below
**Editing existing content?** → Follow "Editing workflow" below
2. Creation workflow:
- Use docx-js library
- Build document from scratch
- Export to .docx format
3. Editing workflow:
- Unpack existing document
- Modify XML directly
- Validate after each change
- Repack when complete
Create evaluations BEFORE writing extensive documentation. This ensures your Skill solves real problems rather than documenting imagined ones.
Evaluation-driven development:
This approach ensures you're solving actual problems rather than anticipating requirements that may never materialize.
Evaluation structure:
```json
{
  "skills": ["pdf-processing"],
  "query": "Extract all text from this PDF file and save it to output.txt",
  "files": ["test-files/document.pdf"],
  "expected_behavior": [
    "Successfully reads the PDF file using an appropriate PDF processing library or command-line tool",
    "Extracts text content from all pages in the document without missing any pages",
    "Saves the extracted text to a file named output.txt in a clear, readable format"
  ]
}
```
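A minimal harness for loading and scoring such evaluation cases might look like the sketch below. The file layout and the pass/fail bookkeeping are illustrative assumptions, not part of any official tooling:

```python
import json

def load_evaluation(path):
    """Load an evaluation case and check it has the required keys."""
    with open(path) as f:
        case = json.load(f)
    for key in ("skills", "query", "expected_behavior"):
        if key not in case:
            raise ValueError(f"Evaluation missing required key: {key}")
    return case

def report(case, observed_behaviors):
    """Map each expected behavior to whether it was actually observed."""
    return {
        expectation: expectation in observed_behaviors
        for expectation in case["expected_behavior"]
    }
```

How you populate `observed_behaviors` (manual review, log grepping, or an LLM grader) is up to you; the point is to have the checklist written down before the documentation exists.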
The most effective Skill development process involves Claude itself. Work with one instance of Claude ("Claude A") to create a Skill that will be used by other instances ("Claude B"). Claude A helps you design and refine instructions, while Claude B tests them in real tasks. This works because Claude models understand both how to write effective agent instructions and what information agents need.
Creating a new Skill:
1. **Complete a task without a Skill**: Work through a problem with Claude A using normal prompting. As you work, you'll naturally provide context, explain preferences, and share procedural knowledge. Notice what information you repeatedly provide.
2. **Identify the reusable pattern**: After completing the task, identify what context you provided that would be useful for similar future tasks.
   Example: If you worked through a BigQuery analysis, you might have provided table names, field definitions, filtering rules (like "always exclude test accounts"), and common query patterns.
3. **Ask Claude A to create a Skill**: "Create a Skill that captures this BigQuery analysis pattern we just used. Include the table schemas, naming conventions, and the rule about filtering test accounts."
4. **Review for conciseness**: Check that Claude A hasn't added unnecessary explanations. Ask: "Remove the explanation about what win rate means - Claude already knows that."
5. **Improve information architecture**: Ask Claude A to organize the content more effectively. For example: "Organize this so the table schema is in a separate reference file. We might add more tables later."
6. **Test on similar tasks**: Use the Skill with Claude B (a fresh instance with the Skill loaded) on related use cases. Observe whether Claude B finds the right information, applies rules correctly, and handles the task successfully.
7. **Iterate based on observation**: If Claude B struggles or misses something, return to Claude A with specifics: "When Claude used this Skill, it forgot to filter by date for Q4. Should we add a section about date filtering patterns?"
Iterating on existing Skills:
The same hierarchical pattern continues when improving Skills. You alternate between:
1. **Use the Skill in real workflows**: Give Claude B (with the Skill loaded) actual tasks, not test scenarios.
2. **Observe Claude B's behavior**: Note where it struggles, succeeds, or makes unexpected choices.
   Example observation: "When I asked Claude B for a regional sales report, it wrote the query but forgot to filter out test accounts, even though the Skill mentions this rule."
3. **Return to Claude A for improvements**: Share the current SKILL.md and describe what you observed. Ask: "I noticed Claude B forgot to filter test accounts when I asked for a regional report. The Skill mentions filtering, but maybe it's not prominent enough?"
4. **Review Claude A's suggestions**: Claude A might suggest reorganizing to make rules more prominent, using stronger language like "MUST filter" instead of "always filter", or restructuring the workflow section.
5. **Apply and test changes**: Update the Skill with Claude A's refinements, then test again with Claude B on similar requests.
6. **Repeat based on usage**: Continue this observe-refine-test cycle as you encounter new scenarios. Each iteration improves the Skill based on real agent behavior, not assumptions.
Gathering team feedback:
Why this approach works: Claude A understands agent needs, you provide domain expertise, Claude B reveals gaps through real usage, and iterative refinement improves Skills based on observed behavior rather than assumptions.
As you iterate on Skills, pay attention to how Claude actually uses them in practice. Watch for:
Iterate based on these observations rather than assumptions. The 'name' and 'description' in your Skill's metadata are particularly critical. Claude uses these when deciding whether to trigger the Skill in response to the current task. Make sure they clearly describe what the Skill does and when it should be used.
Always use forward slashes in file paths, even on Windows:
Good example: `scripts/helper.py`, `reference/guide.md`
Bad example: `scripts\helper.py`, `reference\guide.md`
Unix-style paths work across all platforms, while Windows-style paths cause errors on Unix systems.
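In Python utility scripts, the standard library already enforces this habit; a small sketch:

```python
from pathlib import Path

# Write paths with forward slashes; pathlib converts to the host
# platform's separator only when the file is actually opened.
script = Path("scripts") / "helper.py"
print(script.as_posix())  # prints scripts/helper.py on every platform
```

`as_posix()` guarantees the forward-slash form, which is the one to put in SKILL.md instructions.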
Don't present multiple approaches unless necessary:
**Bad example: Too many choices** (confusing):
"You can use pypdf, or pdfplumber, or PyMuPDF, or pdf2image, or..."
**Good example: Provide a default** (with escape hatch):
"Use pdfplumber for text extraction:
```python
import pdfplumber
```
For scanned PDFs requiring OCR, use pdf2image with pytesseract instead."
The sections below focus on Skills that include executable scripts. If your Skill uses only markdown instructions, skip to Checklist for effective Skills.
When writing scripts for Skills, handle error conditions rather than punting to Claude.
Good example: Handle errors explicitly:
```python
def process_file(path):
    """Process a file, creating it if it doesn't exist."""
    try:
        with open(path) as f:
            return f.read()
    except FileNotFoundError:
        # Create file with default content instead of failing
        print(f"File {path} not found, creating default")
        with open(path, 'w') as f:
            f.write('')
        return ''
    except PermissionError:
        # Provide alternative instead of failing
        print(f"Cannot access {path}, using default")
        return ''
```
Bad example: Punt to Claude:
```python
def process_file(path):
    # Just fail and let Claude figure it out
    return open(path).read()
```
Configuration parameters should also be justified and documented to avoid "voodoo constants" (Ousterhout's law). If you don't know the right value, how will Claude determine it?
Good example: Self-documenting:
```python
# HTTP requests typically complete within 30 seconds
# Longer timeout accounts for slow connections
REQUEST_TIMEOUT = 30

# Three retries balances reliability vs speed
# Most intermittent failures resolve by the second retry
MAX_RETRIES = 3
```
Bad example: Magic numbers:
```python
TIMEOUT = 47  # Why 47?
RETRIES = 5   # Why 5?
```
Even if Claude could write a script, pre-made scripts offer advantages:
Benefits of utility scripts :
The diagram above shows how executable scripts work alongside instruction files. The instruction file (forms.md) references the script, and Claude can execute it without loading its contents into context.
Important distinction: Make clear in your instructions whether Claude should:
- **Execute the script**: "Run `analyze_form.py` to extract fields"
- **Read it as a reference**: "See `analyze_form.py` for the field extraction algorithm"

For most utility scripts, execution is preferred because it's more reliable and efficient. See the Runtime environment section below for details on how script execution works.
Example:
## Utility scripts
**analyze_form.py**: Extract all form fields from PDF
```bash
python scripts/analyze_form.py input.pdf > fields.json
```
Output format:
```json
{
"field_name": {"type": "text", "x": 100, "y": 200},
"signature": {"type": "sig", "x": 150, "y": 500}
}
```
**validate_boxes.py**: Check for overlapping bounding boxes
```bash
python scripts/validate_boxes.py fields.json
# Returns: "OK" or lists conflicts
```
**fill_form.py**: Apply field values to PDF
```bash
python scripts/fill_form.py input.pdf fields.json output.pdf
```
When inputs can be rendered as images, have Claude analyze them:
## Form layout analysis
1. Convert PDF to images:
```bash
python scripts/pdf_to_images.py form.pdf
```
2. Analyze each page image to identify form fields
3. Claude can see field locations and types visually
Claude's vision capabilities help understand layouts and structures.
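A hypothetical `pdf_to_images.py` could be quite small. The sketch below assumes the third-party `pdf2image` package (which in turn needs the poppler binaries installed); the script name and output naming convention are illustrative:

```python
def page_image_name(pdf_path, page_number):
    """Derive the output PNG name for a given page of the PDF."""
    stem = pdf_path.rsplit(".", 1)[0]
    return f"{stem}_page{page_number}.png"

def pdf_to_images(pdf_path, dpi=150):
    """Render each PDF page to a PNG next to the source file."""
    from pdf2image import convert_from_path  # requires poppler installed
    paths = []
    for i, page in enumerate(convert_from_path(pdf_path, dpi=dpi), start=1):
        out = page_image_name(pdf_path, i)
        page.save(out, "PNG")
        paths.append(out)
    return paths
```

Remember to list `pdf2image` and the poppler dependency in your SKILL.md, per the dependency guidance below.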
When Claude performs complex, open-ended tasks, it can make mistakes. The "plan-validate-execute" pattern catches errors early by having Claude first create a plan in a structured format, then validate that plan with a script before executing it.
Example: Imagine asking Claude to update 50 form fields in a PDF based on a spreadsheet. Without validation, Claude might reference non-existent fields, create conflicting values, miss required fields, or apply updates incorrectly.
Solution: Use the workflow pattern shown above (PDF form filling), but add an intermediate changes.json file that gets validated before applying changes. The workflow becomes: analyze → create plan file → validate plan → execute → verify.
Why this pattern works:
When to use: Batch operations, destructive changes, complex validation rules, high-stakes operations.
Implementation tip: Make validation scripts verbose with specific error messages like "Field 'signature_date' not found. Available fields: customer_name, order_total, signature_date_signed" to help Claude fix issues.
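A validation step over the intermediate changes.json plan might look like the following sketch. The plan schema (a `"changes"` list of objects with a `"field"` key) is an assumption for illustration:

```python
import json

def validate_plan(plan_path, available_fields):
    """Check a change plan against the form fields that actually exist.

    Returns a list of verbose error messages; an empty list means
    the plan is safe to execute.
    """
    with open(plan_path) as f:
        plan = json.load(f)
    errors = []
    for change in plan.get("changes", []):
        field = change["field"]
        if field not in available_fields:
            # Verbose message so Claude can self-correct the plan
            errors.append(
                f"Field '{field}' not found. Available fields: "
                + ", ".join(sorted(available_fields))
            )
    return errors
```

The verbose error text is the important part: it tells Claude not just that the plan failed, but what a valid plan would look like.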
Skills run in the code execution environment with platform-specific limitations:
List required packages in your SKILL.md and verify they're available in the code execution tool documentation.
Skills run in a code execution environment with filesystem access, bash commands, and code execution capabilities. For the conceptual explanation of this architecture, see The Skills architecture in the overview.
How this affects your authoring:
How Claude accesses Skills:
- Use forward slashes in paths (`reference/guide.md`), not backslashes
- Name files descriptively: `form_validation_rules.md`, not `doc2.md`
- Organize reference files by domain: `reference/finance.md`, `reference/sales.md`, not `docs/file1.md`, `docs/file2.md`
- Provide scripts for deterministic operations: `validate_form.py` rather than asking Claude to generate validation code

Example:
bigquery-skill/
├── SKILL.md (overview, points to reference files)
└── reference/
├── finance.md (revenue metrics)
├── sales.md (pipeline data)
└── product.md (usage analytics)
When the user asks about revenue, Claude reads SKILL.md, sees the reference to reference/finance.md, and invokes bash to read just that file. The sales.md and product.md files remain on the filesystem, consuming zero context tokens until needed. This filesystem-based model is what enables progressive disclosure. Claude can navigate and selectively load exactly what each task requires.
For complete details on the technical architecture, see How Skills work in the Skills overview.
If your Skill uses MCP (Model Context Protocol) tools, always use fully qualified tool names to avoid "tool not found" errors.
Format: `ServerName:tool_name`

Example:
Use the `BigQuery:bigquery_schema` tool to retrieve table schemas.
Use the `GitHub:create_issue` tool to create issues.

Where:
- `BigQuery` and `GitHub` are MCP server names
- `bigquery_schema` and `create_issue` are the tool names within those servers

Without the server prefix, Claude may fail to locate the tool, especially when multiple MCP servers are available.
Don't assume packages are available:
**Bad example: Assumes installation**:
"Use the pdf library to process the file."
**Good example: Explicit about dependencies**:
"Install required package: `pip install pypdf`
Then use it:
```python
from pypdf import PdfReader
reader = PdfReader("file.pdf")
```"
The SKILL.md frontmatter includes only name (64 characters max) and description (1024 characters max) fields. See the Skills overview for complete structure details.
Keep SKILL.md body under 500 lines for optimal performance. If your content exceeds this, split it into separate files using the progressive disclosure patterns described earlier. For architectural details, see the Skills overview.
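If you want to check the line budget mechanically before publishing, a trivial sketch (the 500-line threshold mirrors the guidance above):

```python
def check_skill_length(path, limit=500):
    """Return (line_count, within_limit) for a SKILL.md file."""
    with open(path) as f:
        count = sum(1 for _ in f)
    return count, count <= limit
```

A count over the limit is the signal to split content into reference files rather than to trim instructions Claude actually needs.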
Before sharing a Skill, verify: