skill-creator by daymade/claude-code-skills
npx skills add https://github.com/daymade/claude-code-skills --skill skill-creator

A skill for creating new skills and iteratively improving them.
At a high level, the process of creating a skill goes like this:

- run each test prompt with claude-with-access-to-the-skill
- use the eval-viewer/generate_review.py script to show the user the results for them to look at, and also let them look at the quantitative metrics

Your job when using this skill is to figure out where the user is in this process, then jump in and help them progress through the stages. For instance, maybe they say "I want to make a skill for X". You can help narrow down what they mean, write a draft, write the test cases, figure out how they want to evaluate, run all the prompts, and repeat.
On the other hand, maybe they already have a draft of the skill. In that case you can go straight to the eval/iterate part of the loop.
Of course, you should always stay flexible: if the user says "I don't need to run a bunch of evaluations, just vibe with me", you can do that instead.
Then, after the skill is done (again, the order is flexible), you can also run the skill description improver; we have a separate script for optimizing the skill's triggering.
Cool? Cool.
The skill creator is liable to be used by people with widely varying familiarity with coding jargon. If you haven't heard (and how could you have, it only started recently), there's a trend where the power of Claude is inspiring plumbers to open their terminals, and parents and grandparents to google "how to install npm". That said, the bulk of users are probably fairly computer-literate.
So pay attention to context cues to understand how to phrase your communication! For the default case, just to give you some idea:
It's OK to briefly explain terms if you're in doubt, and feel free to clarify a term with a short definition if you're unsure the user will get it.
Start by understanding the user's intent. The current conversation might already contain the workflow the user wants to capture (e.g., they say "turn this into a skill"). If so, extract answers from the conversation history first: the tools used, the sequence of steps, corrections the user made, and the input/output formats observed. The user may need to fill gaps, and should confirm before you proceed to the next step.
Proactively ask about edge cases, input/output formats, example files, success criteria, and dependencies. Wait until this is ironed out before writing test prompts.
Check the available MCPs. If any are useful for research (searching docs, finding similar skills, looking up best practices), research in parallel via subagents if available, otherwise inline. Come prepared with context to reduce the burden on the user.
Based on the user interview, fill in these components:
skill-name/
├── SKILL.md (required)
│ ├── YAML frontmatter (name, description required)
│ └── Markdown instructions
└── Bundled resources (optional)
├── scripts/ - executable code for deterministic/repetitive tasks
├── references/ - docs loaded into context as needed
└── assets/ - files used in output (templates, icons, fonts)
All frontmatter fields except description are optional. Configure skill behavior using these fields between --- markers:
---
name: my-skill
description: What this skill does and when to use it. Use when...
context: fork
agent: general-purpose
argument-hint: [topic]
---
| Field | Required | Description |
|---|---|---|
| name | No | Display name for the skill. If omitted, the directory name is used. Lowercase letters, numbers, and hyphens only (max 64 characters). |
| description | Recommended | What the skill does and when to use it. Claude uses this to decide when to apply the skill. If omitted, the first paragraph of the markdown content is used. |
| context | No | Set to fork to run in a forked subagent context. See "Inline vs Fork: Critical Decision" below; choosing wrong breaks your skill. |
| agent | No | Subagent type to use when context: fork is set. Options: Explore, Plan, general-purpose, or a custom agent from .claude/agents/. Default: general-purpose. |
| disable-model-invocation | No | Set to true to prevent Claude from loading this skill automatically. Use for workflows you want to trigger manually with /name. Default: false. |
| user-invocable | No | Set to false to hide the skill from the / menu. Use for background knowledge users shouldn't invoke directly. Default: true. |
| allowed-tools | No | List of pre-approved tools. Recommendation: don't set this field. Omitting it gives the skill full tool access governed by the user's permission settings; setting it needlessly restricts what the skill can do. |
| model | No | Model to use when this skill is active. |
| argument-hint | No | Hint shown during autocomplete indicating the expected arguments. For example: [issue-number] or [filename] [format]. |
| hooks | No | Hooks scoped to this skill's lifecycle. For example: hooks: { pre-invoke: [{ command: "echo Starting" }] }. See the Claude Code hooks docs. |
Special placeholder: $ARGUMENTS in skill content is replaced with the text the user provides after the skill name. For example, /deep-research quantum computing replaces $ARGUMENTS with quantum computing.
This is the most important architectural decision when designing a skill. Choosing wrong will silently break your skill's core capabilities.
Critical constraint: subagents cannot spawn other subagents. A skill running with context: fork (as a subagent) cannot use the Task tool to orchestrate parallel agents, and cannot invoke other skills via the Skill tool.
Decision guide:
| Your skill needs to... | Use | Why |
|---|---|---|
| Orchestrate parallel agents (Task tool) | Inline (no context) | Subagents can't spawn subagents |
| Call other skills (Skill tool) | Inline (no context) | Subagents can't invoke skills |
| Run Bash commands for external CLIs | Inline (no context) | Full tool access in the main context |
| Perform a single focused task (research, analysis) | Fork (context: fork) | Isolated context, clean execution |
| Provide reference knowledge (coding conventions) | Inline (no context) | Guidelines enrich the main conversation |
| Be invoked by other skills | Fork (context: fork) | Must be a subagent to be spawnable |
Example: Orchestrator skill (must be inline):
---
name: product-analysis
description: Multi-path parallel product analysis with cross-model synthesis
---
# Orchestrates parallel agents (inline is required)
1. Auto-detect available tools (which codex, etc.)
2. Launch 3-5 Task agents in parallel (Explore subagents)
3. Optionally invoke /competitors-analysis via the Skill tool
4. Synthesize all results
Example: Specialist skill (fork is correct):
---
name: deep-research
description: Research a topic thoroughly using multiple sources
context: fork
agent: Explore
---
Research $ARGUMENTS thoroughly:
1. Find relevant files using Glob and Grep
2. Read and analyze the code
3. Summarize findings with specific file references
Example: Reference skill (inline, no task):
---
name: api-conventions
description: API design patterns for this codebase
---
When writing API endpoints:
- Use RESTful naming conventions
- Return consistent error formats
Skills should be orthogonal: each skill handles one concern, and they combine through composition.
Pattern: Orchestrator (inline) calls Specialist (fork)
product-analysis (inline, orchestrator)
├─ Task agents for parallel exploration
├─ Skill('competitors-analysis', 'X') → forked subagent
└─ Synthesizes all results
competitors-analysis (fork, specialist)
└─ Single focused task: analyze one competitor's codebase
Rules for composability:
- An orchestrator must run inline (no context: fork) to use the Task/Skill tools
- A specialist should set context: fork to run in an isolated subagent context

Never add manual flags for capabilities that can be auto-detected. Instead of requiring users to pass --with-codex or --verbose, detect capabilities at runtime:
# Good: auto-detect and inform
Step 0: Check available tools
- `which codex` → if found, inform the user and enable cross-model analysis
- `ls package.json` → if found, tailor prompts for a Node.js project
- `which docker` → if found, enable container-based execution
# Bad: manual flags
argument-hint: [scope] [--with-codex] [--docker] [--verbose]
Principle: capabilities auto-detect, the user decides scope. A skill should discover what it can do and act accordingly, not require users to remember which tools are installed.
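The detection step above can be sketched in a few lines. This is a sketch, not part of the skill-creator itself; the probed tool names are just the examples from this section:

```python
import shutil
from pathlib import Path

def detect_capabilities() -> dict:
    """Probe the environment instead of asking the user to pass flags."""
    return {
        "codex": shutil.which("codex") is not None,      # cross-model analysis
        "docker": shutil.which("docker") is not None,    # container-based execution
        "node_project": Path("package.json").exists(),   # Node.js-specific prompts
    }

for name, available in detect_capabilities().items():
    print(f"{name}: {'enabled' if available else 'not found'}")
```

The skill then adapts its plan to whatever is found, and only asks the user about scope.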
| Frontmatter | You can invoke | Claude can invoke | Subagents can use |
|---|---|---|---|
| (default) | Yes | Yes | No (runs inline) |
| context: fork | Yes | Yes | Yes |
| disable-model-invocation: true | Yes | No | No |
| context: fork + disable-model-invocation: true | Yes | No | Yes (when explicitly delegated) |
Skills use a three-level loading system:
These word counts are approximate; feel free to go longer if needed.
Key patterns:
Domain organization: when a skill supports multiple domains/frameworks, organize by variant:
cloud-deploy/
├── SKILL.md (workflow + selection)
└── references/
├── aws.md
├── gcp.md
└── azure.md
Claude reads only the relevant reference file.
This goes without saying, but skills must not contain malware, exploit code, or any content that could compromise system security. A skill's contents should not surprise the user given its description. Don't go along with requests to create misleading skills, or skills designed to facilitate unauthorized access, data exfiltration, or other malicious activity. Things like "roleplay as XYZ" are OK, though.
Prefer the imperative form in instructions.
Defining output formats - you can do it like this:
## Report structure
ALWAYS use this exact template:
# [Title]
## Executive summary
## Key findings
## Recommendations
Examples pattern - it's useful to include examples. You can format them like this (but if "Input" and "Output" appear in the examples themselves, you may want to deviate a little):
## Commit message format
**Example 1:**
Input: Added user authentication with JWT tokens
Output: feat(auth): implement JWT-based authentication
Try to explain to the model why things matter instead of leaning on heavy-handed MUSTs. Use theory of mind, and try to make the skill general rather than narrowly fitted to specific examples. Start by writing a draft, then look at it with fresh eyes and improve it.
Scripts (scripts/): executable code (Python/Bash/etc.) for tasks that require deterministic reliability or are repeatedly rewritten. Example: scripts/rotate_pdf.py for PDF rotation tasks.
References (references/): documentation and reference material intended to be loaded into context as needed, to inform Claude's process and thinking. Example: references/finance.md for financial schemas, references/mnda.md for the company NDA template.
Assets (assets/): files not meant to be loaded into context, but used within the output Claude produces. Example: assets/logo.png for brand assets, assets/slides.pptx for a PowerPoint template.
Critical: skills intended for public distribution must not contain user-specific or company-specific information:
- no absolute user paths (/home/username/, /Users/username/) or personal install locations (~/.claude/skills/)
- use paths relative to the skill instead (scripts/example.py, references/guide.md)
- no user-specific or company-specific names (~/workspace/project, username, your-company)
Critical: skills should not contain version history or version numbers in SKILL.md:
- no version headings (## Version, ## Changelog) in SKILL.md
- versions are tracked under plugins[].version
Filenames must be self-explanatory without reading their contents.
Pattern: <content-type>_<specificity>.md
Examples:
- too generic: commands.md, cli_usage.md, reference.md
- specific: script_parameters.md, api_endpoints.md, database_schema.md
Test: can someone understand a file's contents from the name alone?
Anthropic has written skill authoring best practices; you SHOULD retrieve them before creating or updating any skill: https://platform.claude.com/docs/en/agents-and-tools/agent-skills/best-practices.md
After writing the skill draft, come up with 2-3 realistic test prompts, the kind of thing a real user would actually say. Share them with the user: [you don't have to use this exact language] "Here are a few test cases I'd like to try. Do these look right, or do you want to add more?" Then run them.
Save the test cases to evals/evals.json. Don't write assertions yet, just the prompts. You'll draft assertions in the next step while the runs are in progress.
{
"skill_name": "example-skill",
"evals": [
{
"id": 1,
"prompt": "The user's task prompt",
"expected_output": "Description of the expected result",
"files": []
}
]
}
See references/schemas.md for the full schema (including the assertions field, which you'll add later).
This section is one continuous sequence; don't stop partway through. Do NOT use /skill-test or any other testing skill.
Put results in <skill-name>-workspace/ as a sibling of the skill directory. Within the workspace, organize results by iteration (iteration-1/, iteration-2/, etc.), and within each iteration, give each test case its own directory (eval-0/, eval-1/, etc.). Don't create all of this upfront; create directories as you need them.
For each test case, spawn two subagents in the same turn: one with the skill, one without. This is important: don't spawn the with-skill runs first and come back for the baselines later. Launch everything at once so it all finishes around the same time.
With-skill run:
Execute this task:
- Skill path: <path-to-skill>
- Task: <eval prompt>
- Input files: <eval files if any, or "none">
- Save outputs to: <workspace>/iteration-<N>/eval-<ID>/with_skill/outputs/
- Outputs to save: <what the user cares about, e.g., "the .docx file", "the final CSV">
Baseline run (same prompt, but what the baseline is depends on context):
- Creating a new skill: run with no skill at all. Save to without_skill/outputs/.
- Improving an existing skill: snapshot the current version first (cp -r <path-to-skill> <workspace>/skill-snapshot/), then point the baseline subagent at the snapshot. Save to old_skill/outputs/.

Write an eval_metadata.json for each test case (assertions can be empty for now). Give each eval a descriptive name based on what it tests, not just "eval-0", and use that name for the directory too. If this iteration uses new or modified eval prompts, create these files for each new eval directory; don't assume they carry over from previous iterations.
{
"eval_id": 0,
"eval_name": "descriptive-name-here",
"prompt": "The user's task prompt",
"assertions": []
}
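Scaffolding these metadata files from evals/evals.json can be sketched as follows. This is a sketch under assumptions: the directory layout and field names follow the examples in this section, and the optional per-eval "name" key is hypothetical (if absent, the code falls back to eval-<id>):

```python
import json
from pathlib import Path

def scaffold_metadata(evals_path: str, iteration_dir: str) -> list[Path]:
    """Create an eval_metadata.json stub inside each eval's directory."""
    evals = json.loads(Path(evals_path).read_text())["evals"]
    created = []
    for ev in evals:
        # Prefer a descriptive directory name over a bare numeric id
        name = ev.get("name", f"eval-{ev['id']}")
        eval_dir = Path(iteration_dir) / name
        eval_dir.mkdir(parents=True, exist_ok=True)
        meta = {
            "eval_id": ev["id"],
            "eval_name": name,
            "prompt": ev["prompt"],
            "assertions": [],  # drafted later, while the runs are in progress
        }
        path = eval_dir / "eval_metadata.json"
        path.write_text(json.dumps(meta, indent=2))
        created.append(path)
    return created
```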
Don't just wait for the runs to finish; you can use this time productively. Draft quantitative assertions for each test case and explain them to the user. If assertions already exist in evals/evals.json, review them and explain what they check.
Good assertions are objectively verifiable and have descriptive names; they should read clearly in the benchmark viewer, so that someone glancing at the results immediately understands what each one checks. Subjective skills (writing style, design quality) are better evaluated qualitatively; don't force assertions onto things that need human judgment.
Once drafted, update the eval_metadata.json files and evals/evals.json with the assertions. Also explain to the user what they'll see in the viewer: both the qualitative outputs and the quantitative benchmark.
When each subagent task completes, you receive a notification containing total_tokens and duration_ms. Save this data immediately to timing.json in the run directory:
{
"total_tokens": 84852,
"duration_ms": 23332,
"total_duration_seconds": 23.3
}
This is the only opportunity to capture this data; it comes through the task notification and isn't persisted anywhere else. Process each notification as it arrives rather than trying to batch them.
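Persisting a notification the moment it arrives can be sketched like this (a sketch; the field names are the ones shown in the timing.json example above):

```python
import json
from pathlib import Path

def save_timing(run_dir: str, total_tokens: int, duration_ms: int) -> None:
    """Write timing.json immediately; the notification data isn't stored anywhere else."""
    Path(run_dir).mkdir(parents=True, exist_ok=True)
    timing = {
        "total_tokens": total_tokens,
        "duration_ms": duration_ms,
        "total_duration_seconds": round(duration_ms / 1000, 1),
    }
    (Path(run_dir) / "timing.json").write_text(json.dumps(timing, indent=2))
```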
Once all runs are done:
Grade each run - spawn a grader subagent (or grade inline) that reads agents/grader.md and evaluates each assertion against the outputs. Save results to grading.json in each run directory. The grading.json expectations array must use the fields text, passed, and evidence (not name/met/details or other variants); the viewer depends on these exact field names. For assertions that can be checked programmatically, write and run a script rather than eyeballing the outputs: scripts are faster, more reliable, and reusable across iterations.
Aggregate into a benchmark - run the aggregation script from the skill-creator directory:
python -m scripts.aggregate_benchmark <workspace>/iteration-N --skill-name <name>
This produces benchmark.json and benchmark.md with pass_rate, time, and tokens for each configuration, with mean +/- stddev and the delta. If generating benchmark.json by hand, see references/schemas.md for the exact schema the viewer expects. Put each with-skill version before its baseline counterpart.
Do an analyst pass - read the benchmark data and surface patterns the aggregate stats might hide. See agents/analyzer.md (the "Analyzing Benchmark Results" section) for what to look for: assertions that always pass regardless of skill (non-discriminating), high-variance evals (possibly flaky), and time/token tradeoffs.
Launch the viewer with both the qualitative outputs and the quantitative data:
nohup python <skill-creator-path>/eval-viewer/generate_review.py \
<workspace>/iteration-N \
--skill-name "my-skill" \
--benchmark <workspace>/iteration-N/benchmark.json \
> /dev/null 2>&1 &
VIEWER_PID=$!
For iteration 2 and later, also pass --previous-workspace <workspace>/iteration-<N-1>.
Cowork / headless environments: if webbrowser.open() is unavailable or the environment has no display, use --static <output_path> to write a standalone HTML file instead of starting a server. When the user clicks "Submit All Reviews", the feedback downloads as a feedback.json file. After download, copy feedback.json into the workspace directory so the next iteration picks it up.
Note: use generate_review.py to create the viewer; there's no need to write custom HTML.
The "Outputs" tab shows one test case at a time:
The "Benchmark" tab shows the stats summary: pass rates, timing, and token usage for each configuration, with per-eval breakdowns and analyst observations.
Navigation is via the prev/next buttons or the arrow keys. When done, the user clicks "Submit All Reviews", which saves all feedback to feedback.json.
When the user tells you they're done, read feedback.json:
{
"reviews": [
{"run_id": "eval-0-with_skill", "feedback": "the chart is missing axis labels", "timestamp": "..."},
{"run_id": "eval-1-with_skill", "feedback": "", "timestamp": "..."},
{"run_id": "eval-2-with_skill", "feedback": "perfect, love this", "timestamp": "..."}
],
"status": "complete"
}
Empty feedback means the user thought it was fine. Focus your improvements on the test cases where the user had specific complaints.
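Pulling out the actionable reviews can be sketched like this (field names follow the feedback.json example above):

```python
import json
from pathlib import Path

def complaints(feedback_path: str) -> list[dict]:
    """Return only reviews with concrete feedback; empty feedback means the run was fine."""
    data = json.loads(Path(feedback_path).read_text())
    return [r for r in data["reviews"] if r["feedback"].strip()]
```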
Kill the viewer server when you're done with it:
kill $VIEWER_PID 2>/dev/null
This is the heart of the loop. You've run the test cases, the user has reviewed the results, and now you need to make the skill better based on their feedback.
Generalize from the feedback. The big picture here is that we're trying to create skills that can be used across many different prompts (maybe literally millions of times, who knows, maybe more). You and the user are iterating on only a few examples over and over because that moves faster: the user knows these examples inside and out, so they can assess new outputs quickly. But if the skill you're co-developing only works for those examples, it's useless. Rather than adding fiddly, overfit changes or oppressively rigid MUSTs, if some issue proves stubborn, try branching out: use a different metaphor, or recommend a different pattern of working. Trying is relatively cheap, and maybe you'll land on something great.
Keep the prompt lean. Remove things that aren't pulling their weight. Make sure to read the transcripts, not just the final outputs: if the skill seems to be making the model waste time on unproductive work, try removing the parts of the skill that cause it and see what happens.
Explain the why. Try hard to explain the reason behind everything you're asking the model to do. Today's LLMs are smart. They have good theory of mind, and when given a good harness they can go beyond rote instructions and really make things happen. Even if the user's feedback is terse or frustrated, try to genuinely understand the task, what the user actually wrote, and why they wrote it, then transmit that understanding into the instructions. If you find yourself writing ALWAYS or NEVER in all caps, or using super-rigid structures, that's a yellow flag: if possible, reframe and explain the reasoning so the model understands why what you're asking for matters. That's a more humane, powerful, and effective approach.
Look for repeated work across test cases. Read the transcripts from the test runs and notice whether the subagents all independently wrote similar helper scripts or took the same multi-step approach to something. If all 3 test cases resulted in the subagent writing a create_docx.py or a build_chart.py, that's a strong signal the skill should bundle that script. Write it once, put it in scripts/, and tell the skill to use it. This saves every future invocation from reinventing the wheel.
This task is pretty important (we're trying to create billions a year in economic value here!) and your thinking time is not the bottleneck; take your time and really mull things over. I'd suggest writing a draft revision, then looking at it with fresh eyes and making improvements. Do your best to get into the user's head and understand what they want and need.
After improving the skill:
- Re-run the evals in a fresh iteration-<N+1>/ directory, including baseline runs. If you're creating a new skill, the baseline is always without_skill (no skill); that stays the same across iterations. If you're improving an existing skill, use your judgment about what makes sense as the baseline: the original version the user came in with, or the previous iteration.
- Launch the viewer with --previous-workspace pointing at the previous iteration.

Keep going until:
For situations where you want a more rigorous comparison between two versions of a skill (e.g., the user asks "is the new version actually better?"), there's a blind comparison system. Read agents/comparator.md and agents/analyzer.md for the details. The basic idea: give the two outputs to an independent agent without telling it which is which, and let it judge quality. Then analyze why the winner won.
This is optional, requires subagents, and most users won't need it. The human review loop is usually sufficient.
The description field in SKILL.md frontmatter is the primary mechanism that determines whether Claude invokes a skill. After creating or improving a skill, offer to optimize the description for better triggering accuracy.
Create 20 eval queries, a mix of should-trigger and should-not-trigger. Save as JSON:
[
{"query": "the user prompt", "should_trigger": true},
{"query": "another prompt", "should_trigger": false}
]
The queries must be realistic: something a Claude Code or Claude.ai user would actually type. Not abstract requests, but requests that are concrete and specific with a good amount of detail. For instance: file paths, personal context about the user's job or situation, column names and values, company names, URLs. A little backstory. Some might be lowercase, or contain abbreviations, typos, or casual speech. Use a mix of lengths, and focus on edge cases rather than making them clear-cut (the user will get a chance to sign off on them).
Bad: "format this data", "extract the text from a PDF", "create a chart"
Good: "ok my boss just sent me this xlsx file (it's in my Downloads folder, named something like 'Q4 sales final FINAL v2.xlsx') and she wants me to add a column showing profit margin percentage. revenue is in column C and cost is in column D, i think"
For should-trigger queries (8-10 of them), think about coverage. You want different phrasings of the same intent: some formal, some casual. Include cases where the user never names the skill or the file type but obviously needs it. Throw in some less common use cases, and cases where this skill competes with another but should win.
For should-not-trigger queries (8-10 of them), the most valuable are near misses: queries that share keywords or concepts with the skill but actually need something different. Think about adjacent domains, ambiguous phrasings that naive keyword matching would trigger on but shouldn't, and queries that touch something the skill does but in a context where another tool is more appropriate.
Key thing to avoid: don't make the should-not-trigger queries obviously irrelevant. "Write a fibonacci function" as a negative test for a PDF skill is too easy; it tests nothing. The negative cases should be genuinely tricky.
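Before showing the set to the user, a quick balance check helps (a sketch; the JSON shape is the one shown above, and the 8-10 per side target is from this section):

```python
import json
from pathlib import Path

def check_eval_set(path: str) -> None:
    """Verify the trigger eval set has roughly 8-10 queries on each side."""
    queries = json.loads(Path(path).read_text())
    positive = sum(1 for q in queries if q["should_trigger"])
    negative = len(queries) - positive
    assert 8 <= positive <= 10, f"want 8-10 should-trigger queries, got {positive}"
    assert 8 <= negative <= 10, f"want 8-10 should-not-trigger queries, got {negative}"
```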
Use the HTML template to show the user the eval set for review:
- Read the template from assets/eval_review.html
- Replace __EVAL_DATA_PLACEHOLDER__ with the JSON array of eval items (no quotes around it; it's a JS variable assignment)
- Replace __SKILL_NAME_PLACEHOLDER__ with the skill's name
- Replace __SKILL_DESCRIPTION_PLACEHOLDER__ with the skill's current description
- Write the result somewhere temporary (e.g., /tmp/eval_review_<skill-name>.html) and open it: open /tmp/eval_review_<skill-name>.html
- The edited set downloads to ~/Downloads/eval_set.json; check for the newest version in the Downloads folder in case there are several (e.g., eval_set (1).json)

This step matters: bad eval queries lead to a bad description.
Tell the user: "This will take a while. I'll run the optimization loop in the background and check in periodically."
Save the eval set to the workspace, then run in the background:
python -m scripts.run_loop \
--eval-set <path-to-trigger-eval.json> \
--skill-path <path-to-skill> \
--model <model-id-powering-this-session> \
--max-iterations 5 \
--verbose
Use the model ID from your system prompt (the one powering the current session) so the trigger testing matches what the user will actually experience.
While it runs, tail the output periodically and update the user on which iteration it's on and how the scores look.
This handles the full optimization loop automatically. It splits the eval set into a 60% training set and a 40% held-out test set, evaluates the current description (running each query 3 times for a reliable trigger rate), then calls Claude with extended thinking to propose improvements based on the failures. It re-evaluates each new description on both the train and test sets, iterating up to 5 times. When finished, it opens an HTML report in the browser showing each iteration's results and returns JSON containing best_description, chosen by test score rather than train score to avoid overfitting.
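To illustrate the split and the repeated-run trigger rate described above (this is a sketch of the idea, not the actual run_loop implementation):

```python
import random

def split_eval_set(queries: list[dict], train_frac: float = 0.6, seed: int = 0):
    """Shuffle, then split into a train set and a held-out test set."""
    rng = random.Random(seed)
    shuffled = queries[:]
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * train_frac)
    return shuffled[:cut], shuffled[cut:]

def trigger_rate(runs: list[bool]) -> float:
    """Fraction of the repeated runs (3 per query) in which the skill triggered."""
    return sum(runs) / len(runs)
```

Selecting best_description by the held-out test score, not the train score, is what keeps the optimized description from overfitting to the training queries.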
Understanding the trigger mechanism helps you design better eval queries. Skills appear in Claude's available_skills list as their name + description, and Claude decides from that description whether to consult a skill. The important thing to know: Claude only consults skills for tasks it can't easily handle itself. Simple one-step queries like "read this PDF" may not trigger a skill even when the description matches perfectly, because Claude can handle them directly with basic tools. Complex, multi-step, or specialized queries trigger skills reliably when the description matches.
This means your eval queries should have enough substance that Claude would actually benefit from consulting a skill. Trivial queries like "read file X" are bad test cases; they won't trigger the skill regardless of description quality.
Take best_description from the JSON output and update the skill's SKILL.md frontmatter. Show the user a before/after comparison and report the scores.
Never edit skills in ~/.claude/plugins/cache/; that's a read-only cache directory, and any changes made there will be lost.
Always verify you're editing the source repository:
# WRONG - cache location (read-only copy)
~/.claude/plugins/cache/daymade-skills/my-skill/1.0.0/my-skill/SKILL.md
# CORRECT - source repository
/path/to/your/claude-code-skills/my-skill/SKILL.md
Before making any edit, confirm the file path does not contain /cache/ or /plugins/cache/.
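This path check can be made mechanical (a minimal sketch; the example paths are the ones from this section):

```python
def assert_editable(path: str) -> None:
    """Refuse to edit read-only plugin-cache copies of a skill."""
    if "/plugins/cache/" in path or "/cache/" in path:
        raise ValueError(f"refusing to edit cached copy: {path}")

assert_editable("/path/to/your/claude-code-skills/my-skill/SKILL.md")  # source repo: ok
```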
When creating or updating a skill, follow these steps in order. Skip a step only when it clearly doesn't apply.
Skip this step only if the skill's usage patterns are already clearly understood.
To create an effective skill, get a clear picture of concrete examples of how the skill will be used. This understanding can come from direct user examples, or from examples that you generate and validate through user feedback.
For example, when building an image-editor skill, relevant questions include:
To avoid overwhelming the user, don't ask too many questions in a single message.
A skill for creating new skills and iteratively improving them.
At a high level, the process of creating a skill goes like this:
eval-viewer/generate_review.py script to show the user the results for them to look at, and also let them look at the quantitative metricsYour job when using this skill is to figure out where the user is in this process and then jump in and help them progress through these stages. So for instance, maybe they're like "I want to make a skill for X". You can help narrow down what they mean, write a draft, write the test cases, figure out how they want to evaluate, run all the prompts, and repeat.
On the other hand, maybe they already have a draft of the skill. In this case you can go straight to the eval/iterate part of the loop.
Of course, you should always be flexible and if the user is like "I don't need to run a bunch of evaluations, just vibe with me", you can do that instead.
Then after the skill is done (but again, the order is flexible), you can also run the skill description improver, which we have a whole separate script for, to optimize the triggering of the skill.
Cool? Cool.
The skill creator is liable to be used by people across a wide range of familiarity with coding jargon. If you haven't heard (and how could you, it's only very recently that it started), there's a trend now where the power of Claude is inspiring plumbers to open up their terminals, parents and grandparents to google "how to install npm". On the other hand, the bulk of users are probably fairly computer-literate.
So please pay attention to context cues to understand how to phrase your communication! In the default case, just to give you some idea:
It's OK to briefly explain terms if you're in doubt, and feel free to clarify terms with a short definition if you're unsure if the user will get it.
Start by understanding the user's intent. The current conversation might already contain a workflow the user wants to capture (e.g., they say "turn this into a skill"). If so, extract answers from the conversation history first — the tools used, the sequence of steps, corrections the user made, input/output formats observed. The user may need to fill the gaps, and should confirm before proceeding to the next step.
Proactively ask questions about edge cases, input/output formats, example files, success criteria, and dependencies. Wait to write test prompts until you've got this part ironed out.
Check available MCPs - if useful for research (searching docs, finding similar skills, looking up best practices), research in parallel via subagents if available, otherwise inline. Come prepared with context to reduce burden on the user.
Based on the user interview, fill in these components:
skill-name/
├── SKILL.md (required)
│ ├── YAML frontmatter (name, description required)
│ └── Markdown instructions
└── Bundled Resources (optional)
├── scripts/ - Executable code for deterministic/repetitive tasks
├── references/ - Docs loaded into context as needed
└── assets/ - Files used in output (templates, icons, fonts)
All frontmatter fields except description are optional. Configure skill behavior using these fields between --- markers:
---
name: my-skill
description: What this skill does and when to use it. Use when...
context: fork
agent: general-purpose
argument-hint: [topic]
---
| Field | Required | Description |
|---|---|---|
name | No | Display name for the skill. If omitted, uses the directory name. Lowercase letters, numbers, and hyphens only (max 64 characters). |
description | Recommended | What the skill does and when to use it. Claude uses this to decide when to apply the skill. If omitted, uses the first paragraph of markdown content. |
context | No | Set tofork to run in a forked subagent context. See "Inline vs Fork: Critical Decision" below — choosing wrong breaks your skill. |
agent | No |
Special placeholder: $ARGUMENTS in skill content is replaced with text the user provides after the skill name. For example, /deep-research quantum computing replaces $ARGUMENTS with quantum computing.
This is the most important architectural decision when designing a skill. Choosing wrong will silently break your skill's core capabilities.
CRITICAL CONSTRAINT: Subagents cannot spawn other subagents. A skill running with context: fork (as a subagent) CANNOT:
Decision guide:
| Your skill needs to... | Use | Why |
|---|---|---|
| Orchestrate parallel agents (Task tool) | Inline (no context) | Subagents can't spawn subagents |
| Call other skills (Skill tool) | Inline (no context) | Subagents can't invoke skills |
| Run Bash commands for external CLIs | Inline (no context) | Full tool access in main context |
| Perform a single focused task (research, analysis) | Fork (context: fork) | Isolated context, clean execution |
Example: Orchestrator skill (MUST be inline):
---
name: product-analysis
description: Multi-path parallel product analysis with cross-model synthesis
---
# Orchestrates parallel agents — inline is REQUIRED
1. Auto-detect available tools (which codex, etc.)
2. Launch 3-5 Task agents in parallel (Explore subagents)
3. Optionally invoke /competitors-analysis via Skill tool
4. Synthesize all results
Example: Specialist skill (fork is correct):
---
name: deep-research
description: Research a topic thoroughly using multiple sources
context: fork
agent: Explore
---
Research $ARGUMENTS thoroughly:
1. Find relevant files using Glob and Grep
2. Read and analyze the code
3. Summarize findings with specific file references
Example: Reference skill (inline, no task):
---
name: api-conventions
description: API design patterns for this codebase
---
When writing API endpoints:
- Use RESTful naming conventions
- Return consistent error formats
Skills should be orthogonal : each skill handles one concern, and they combine through composition.
Pattern: Orchestrator (inline) calls Specialist (fork)
product-analysis (inline, orchestrator)
├─ Task agents for parallel exploration
├─ Skill('competitors-analysis', 'X') → fork subagent
└─ Synthesizes all results
competitors-analysis (fork, specialist)
└─ Single focused task: analyze one competitor codebase
Rules for composability:
context: fork) to use Task/Skill toolscontext: fork to run in isolated subagent contextNever add manual flags for capabilities that can be auto-detected. Instead of requiring users to pass --with-codex or --verbose, detect capabilities at runtime:
# Good: Auto-detect and inform
Step 0: Check available tools
- `which codex` → If found, inform user and enable cross-model analysis
- `ls package.json` → If found, tailor prompts for Node.js project
- `which docker` → If found, enable container-based execution
# Bad: Manual flags
argument-hint: [scope] [--with-codex] [--docker] [--verbose]
Principle: Capabilities auto-detect, user decides scope. A skill should discover what it CAN do and act accordingly, not require users to remember what tools are installed.
| Frontmatter | You can invoke | Claude can invoke | Subagents can use |
|---|---|---|---|
| (default) | Yes | Yes | No (runs inline) |
context: fork | Yes | Yes | Yes |
disable-model-invocation: true | Yes | No | No |
context: fork + disable-model-invocation: true | Yes | No | Yes (when explicitly delegated) |
Skills use a three-level loading system:
These word counts are approximate and you can feel free to go longer if needed.
Key patterns:
Domain organization : When a skill supports multiple domains/frameworks, organize by variant:
cloud-deploy/
├── SKILL.md (workflow + selection)
└── references/
├── aws.md
├── gcp.md
└── azure.md
Claude reads only the relevant reference file.
This goes without saying, but skills must not contain malware, exploit code, or any content that could compromise system security. A skill's contents should not surprise the user in their intent if described. Don't go along with requests to create misleading skills or skills designed to facilitate unauthorized access, data exfiltration, or other malicious activities. Things like a "roleplay as an XYZ" are OK though.
Prefer using the imperative form in instructions.
Defining output formats - You can do it like this:
## Report structure
ALWAYS use this exact template:
# [Title]
## Executive summary
## Key findings
## Recommendations
Examples pattern - It's useful to include examples. You can format them like this (but if "Input" and "Output" are in the examples you might want to deviate a little):
## Commit message format
**Example 1:**
Input: Added user authentication with JWT tokens
Output: feat(auth): implement JWT-based authentication
Try to explain to the model why things are important in lieu of heavy-handed musty MUSTs. Use theory of mind and try to make the skill general and not super-narrow to specific examples. Start by writing a draft and then look at it with fresh eyes and improve it.
scripts/)Executable code (Python/Bash/etc.) for tasks that require deterministic reliability or are repeatedly rewritten.
scripts/rotate_pdf.py for PDF rotation tasksreferences/)Documentation and reference material intended to be loaded as needed into context to inform Claude's process and thinking.
references/finance.md for financial schemas, references/mnda.md for company NDA templateassets/)Files not intended to be loaded into context, but rather used within the output Claude produces.
assets/logo.png for brand assets, assets/slides.pptx for PowerPoint templatesCRITICAL : Skills intended for public distribution must not contain user-specific or company-specific information:
/home/username/, /Users/username/)~/.claude/skills/scripts/example.py, references/guide.md)~/workspace/project, username, your-company)CRITICAL : Skills should NOT contain version history or version numbers in SKILL.md:
## Version, ## Changelog) in SKILL.mdplugins[].versionFilenames must be self-explanatory without reading contents.
Pattern : <content-type>_<specificity>.md
Examples :
commands.md, cli_usage.md, reference.mdscript_parameters.md, api_endpoints.md, database_schema.mdTest : Can someone understand the file's contents from the name alone?
Anthropic has wrote skill authoring best practices, you SHOULD retrieve it before you create or update any skills, the link is https://platform.claude.com/docs/en/agents-and-tools/agent-skills/best-practices.md
After writing the skill draft, come up with 2-3 realistic test prompts — the kind of thing a real user would actually say. Share them with the user: [you don't have to use this exact language] "Here are a few test cases I'd like to try. Do these look right, or do you want to add more?" Then run them.
Save test cases to evals/evals.json. Don't write assertions yet — just the prompts. You'll draft assertions in the next step while the runs are in progress.
{
"skill_name": "example-skill",
"evals": [
{
"id": 1,
"prompt": "User's task prompt",
"expected_output": "Description of expected result",
"files": []
}
]
}
See references/schemas.md for the full schema (including the assertions field, which you'll add later).
This section is one continuous sequence — don't stop partway through. Do NOT use /skill-test or any other testing skill.
Put results in <skill-name>-workspace/ as a sibling to the skill directory. Within the workspace, organize results by iteration (iteration-1/, iteration-2/, etc.) and within that, each test case gets a directory (eval-0/, eval-1/, etc.). Don't create all of this upfront — just create directories as you go.
For each test case, spawn two subagents in the same turn — one with the skill, one without. This is important: don't spawn the with-skill runs first and then come back for baselines later. Launch everything at once so it all finishes around the same time.
With-skill run:
Execute this task:
- Skill path: <path-to-skill>
- Task: <eval prompt>
- Input files: <eval files if any, or "none">
- Save outputs to: <workspace>/iteration-<N>/eval-<ID>/with_skill/outputs/
- Outputs to save: <what the user cares about — e.g., "the .docx file", "the final CSV">
Baseline run (same prompt, but the baseline depends on context):
without_skill/outputs/.cp -r <skill-path> <workspace>/skill-snapshot/), then point the baseline subagent at the snapshot. Save to old_skill/outputs/.Write an eval_metadata.json for each test case (assertions can be empty for now). Give each eval a descriptive name based on what it's testing — not just "eval-0". Use this name for the directory too. If this iteration uses new or modified eval prompts, create these files for each new eval directory — don't assume they carry over from previous iterations.
{
"eval_id": 0,
"eval_name": "descriptive-name-here",
"prompt": "The user's task prompt",
"assertions": []
}
Don't just wait for the runs to finish — you can use this time productively. Draft quantitative assertions for each test case and explain them to the user. If assertions already exist in evals/evals.json, review them and explain what they check.
Good assertions are objectively verifiable and have descriptive names — they should read clearly in the benchmark viewer so someone glancing at the results immediately understands what each one checks. Subjective skills (writing style, design quality) are better evaluated qualitatively — don't force assertions onto things that need human judgment.
Update the eval_metadata.json files and evals/evals.json with the assertions once drafted. Also explain to the user what they'll see in the viewer — both the qualitative outputs and the quantitative benchmark.
When each subagent task completes, you receive a notification containing total_tokens and duration_ms. Save this data immediately to timing.json in the run directory:
{
"total_tokens": 84852,
"duration_ms": 23332,
"total_duration_seconds": 23.3
}
This is the only opportunity to capture this data — it comes through the task notification and isn't persisted elsewhere. Process each notification as it arrives rather than trying to batch them.
Once all runs are done:
Grade each run — spawn a grader subagent (or grade inline) that reads agents/grader.md and evaluates each assertion against the outputs. Save results to grading.json in each run directory. The grading.json expectations array must use the fields text, passed, and evidence (not name/met/details or other variants) — the viewer depends on these exact field names. For assertions that can be checked programmatically, write and run a script rather than eyeballing it — scripts are faster, more reliable, and can be reused across iterations.
Aggregate into benchmark — run the aggregation script from the skill-creator directory:
python -m scripts.aggregate_benchmark <workspace>/iteration-N --skill-name <name>
This produces benchmark.json and benchmark.md with pass_rate, time, and tokens for each configuration, with mean +/- stddev and the delta. If generating benchmark.json manually, see references/schemas.md for the exact schema the viewer expects. Put each with_skill version before its baseline counterpart.
Do an analyst pass — read the benchmark data and surface patterns the aggregate stats might hide. See agents/analyzer.md (the "Analyzing Benchmark Results" section) for what to look for — things like assertions that always pass regardless of skill (non-discriminating), high-variance evals (possibly flaky), and time/token tradeoffs.
Launch the viewer with both qualitative outputs and quantitative data:
nohup python <skill-creator-path>/eval-viewer/generate_review.py \
<workspace>/iteration-N \
--skill-name "my-skill" \
--benchmark <workspace>/iteration-N/benchmark.json \
> /dev/null 2>&1 &
VIEWER_PID=$!
For iteration 2+, also pass --previous-workspace <workspace>/iteration-<N-1>.
Cowork / headless environments: If webbrowser.open() is not available or the environment has no display, use --static <output_path> to write a standalone HTML file instead of starting a server. Feedback will be downloaded as a feedback.json file when the user clicks "Submit All Reviews". After download, copy feedback.json into the workspace directory for the next iteration to pick up.
Note: please use generate_review.py to create the viewer; there's no need to write custom HTML.
The "Outputs" tab shows one test case at a time:
The "Benchmark" tab shows the stats summary: pass rates, timing, and token usage for each configuration, with per-eval breakdowns and analyst observations.
Navigation is via prev/next buttons or arrow keys. When done, they click "Submit All Reviews" which saves all feedback to feedback.json.
When the user tells you they're done, read feedback.json:
{
"reviews": [
{"run_id": "eval-0-with_skill", "feedback": "the chart is missing axis labels", "timestamp": "..."},
{"run_id": "eval-1-with_skill", "feedback": "", "timestamp": "..."},
{"run_id": "eval-2-with_skill", "feedback": "perfect, love this", "timestamp": "..."}
],
"status": "complete"
}
Empty feedback means the user thought it was fine. Focus your improvements on the test cases where the user had specific complaints.
Kill the viewer server when you're done with it:
kill $VIEWER_PID 2>/dev/null
This is the heart of the loop. You've run the test cases, the user has reviewed the results, and now you need to make the skill better based on their feedback.
Generalize from the feedback. The big picture thing that's happening here is that we're trying to create skills that can be used a million times (maybe literally, maybe even more who knows) across many different prompts. Here you and the user are iterating on only a few examples over and over again because it helps move faster. The user knows these examples in and out and it's quick for them to assess new outputs. But if the skill you and the user are codeveloping works only for those examples, it's useless. Rather than put in fiddly overfitty changes, or oppressively constrictive MUSTs, if there's some stubborn issue, you might try branching out and using different metaphors, or recommending different patterns of working. It's relatively cheap to try and maybe you'll land on something great.
Keep the prompt lean. Remove things that aren't pulling their weight. Make sure to read the transcripts, not just the final outputs — if it looks like the skill is making the model waste a bunch of time doing things that are unproductive, you can try getting rid of the parts of the skill that are making it do that and seeing what happens.
Explain the why. Try hard to explain the why behind everything you're asking the model to do. Today's LLMs are smart. They have good theory of mind and when given a good harness can go beyond rote instructions and really make things happen. Even if the feedback from the user is terse or frustrated, try to actually understand the task and why the user is writing what they wrote, and what they actually wrote, and then transmit this understanding into the instructions. If you find yourself writing ALWAYS or NEVER in all caps, or using super rigid structures, that's a yellow flag — if possible, reframe and explain the reasoning so that the model understands why the thing you're asking for is important. That's a more humane, powerful, and effective approach.
Look for repeated work across test cases. Read the transcripts from the test runs and notice if the subagents all independently wrote similar helper scripts or took the same multi-step approach to something. If all 3 test cases resulted in the subagent writing a create_docx.py or a , that's a strong signal the skill should bundle that script. Write it once, put it in , and tell the skill to use it. This saves every future invocation from reinventing the wheel.
This task is pretty important (we are trying to create billions a year in economic value here!) and your thinking time is not the blocker; take your time and really mull things over. I'd suggest writing a draft revision and then looking at it anew and making improvements. Really do your best to get into the head of the user and understand what they want and need.
After improving the skill:
- Rerun the test cases in a fresh iteration-<N+1>/ directory, including baseline runs. If you're creating a new skill, the baseline is always without_skill (no skill) — that stays the same across iterations. If you're improving an existing skill, use your judgment on what makes sense as the baseline: the original version the user came in with, or the previous iteration.
- Pass --previous-workspace pointing at the previous iteration.

Keep going until:
For situations where you want a more rigorous comparison between two versions of a skill (e.g., the user asks "is the new version actually better?"), there's a blind comparison system. Read agents/comparator.md and agents/analyzer.md for the details. The basic idea is: give two outputs to an independent agent without telling it which is which, and let it judge quality. Then analyze why the winner won.
This is optional, requires subagents, and most users won't need it. The human review loop is usually sufficient.
The description field in SKILL.md frontmatter is the primary mechanism that determines whether Claude invokes a skill. After creating or improving a skill, offer to optimize the description for better triggering accuracy.
Create 20 eval queries — a mix of should-trigger and should-not-trigger. Save as JSON:
[
{"query": "the user prompt", "should_trigger": true},
{"query": "another prompt", "should_trigger": false}
]
The queries must be realistic and something a Claude Code or Claude.ai user would actually type. Not abstract requests, but requests that are concrete and specific and have a good amount of detail. For instance, file paths, personal context about the user's job or situation, column names and values, company names, URLs. A little bit of backstory. Some might be in lowercase or contain abbreviations or typos or casual speech. Use a mix of different lengths, and focus on edge cases rather than making them clear-cut (the user will get a chance to sign off on them).
Bad: "Format this data", "Extract text from PDF", "Create a chart"
Good: "ok so my boss just sent me this xlsx file (its in my downloads, called something like 'Q4 sales final FINAL v2.xlsx') and she wants me to add a column that shows the profit margin as a percentage. The revenue is in column C and costs are in column D i think"
For the should-trigger queries (8-10), think about coverage. You want different phrasings of the same intent — some formal, some casual. Include cases where the user doesn't explicitly name the skill or file type but clearly needs it. Throw in some uncommon use cases and cases where this skill competes with another but should win.
For the should-not-trigger queries (8-10), the most valuable ones are the near-misses — queries that share keywords or concepts with the skill but actually need something different. Think adjacent domains, ambiguous phrasing where a naive keyword match would trigger but shouldn't, and cases where the query touches on something the skill does but in a context where another tool is more appropriate.
The key thing to avoid: don't make should-not-trigger queries obviously irrelevant. "Write a fibonacci function" as a negative test for a PDF skill is too easy — it doesn't test anything. The negative cases should be genuinely tricky.
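Before presenting the set, it's cheap to sanity-check the balance mechanically. A minimal sketch, assuming the JSON shape shown above (the function name and the exact 8-12 tolerance are mine; the 8-10 per side figure is the guideline above):

```python
import json

def check_eval_set(path):
    """Load a trigger-eval JSON file and report the positive/negative balance."""
    with open(path) as f:
        items = json.load(f)
    positives = [it for it in items if it["should_trigger"]]
    negatives = [it for it in items if not it["should_trigger"]]
    # Aim for roughly 8-10 queries on each side of a ~20-query set.
    assert 8 <= len(positives) <= 12, f"only {len(positives)} should-trigger queries"
    assert 8 <= len(negatives) <= 12, f"only {len(negatives)} should-not-trigger queries"
    return len(positives), len(negatives)
```

This catches the common failure of drafting a lopsided set before the user ever sees it.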
Present the eval set to the user for review using the HTML template:
- Start from the template at assets/eval_review.html.
- Replace __EVAL_DATA_PLACEHOLDER__ → the JSON array of eval items (no quotes around it — it's a JS variable assignment).
- Replace __SKILL_NAME_PLACEHOLDER__ → the skill's name.
- Replace __SKILL_DESCRIPTION_PLACEHOLDER__ → the skill's current description.
- Save the result (e.g., /tmp/eval_review_<skill-name>.html) and open it: open /tmp/eval_review_<skill-name>.html
- The user's edited eval set is saved to ~/Downloads/eval_set.json — check the Downloads folder for the most recent version in case there are multiple (e.g., eval_set (1).json).

This step matters — bad eval queries lead to bad descriptions.
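The placeholder substitution is plain string replacement. A sketch under the placeholder names above (the function name is mine, and it assumes the template path is passed in):

```python
import json
from pathlib import Path

def render_eval_review(template_path, skill_name, description, eval_items):
    """Fill the eval_review.html template and write it to /tmp for the user to open."""
    html = Path(template_path).read_text()
    # The JSON array is injected bare: it lands in a JS variable assignment, so no quotes.
    html = html.replace("__EVAL_DATA_PLACEHOLDER__", json.dumps(eval_items))
    html = html.replace("__SKILL_NAME_PLACEHOLDER__", skill_name)
    html = html.replace("__SKILL_DESCRIPTION_PLACEHOLDER__", description)
    out = Path(f"/tmp/eval_review_{skill_name}.html")
    out.write_text(html)
    return out
```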
Tell the user: "This will take some time — I'll run the optimization loop in the background and check on it periodically."
Save the eval set to the workspace, then run in the background:
python -m scripts.run_loop \
--eval-set <path-to-trigger-eval.json> \
--skill-path <path-to-skill> \
--model <model-id-powering-this-session> \
--max-iterations 5 \
--verbose
Use the model ID from your system prompt (the one powering the current session) so the triggering test matches what the user actually experiences.
While it runs, periodically tail the output to give the user updates on which iteration it's on and what the scores look like.
This handles the full optimization loop automatically. It splits the eval set into 60% train and 40% held-out test, evaluates the current description (running each query 3 times to get a reliable trigger rate), then calls Claude with extended thinking to propose improvements based on what failed. It re-evaluates each new description on both train and test, iterating up to 5 times. When it's done, it opens an HTML report in the browser showing the results per iteration and returns JSON with best_description — selected by test score rather than train score to avoid overfitting.
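The split-and-select logic described above can be sketched in a few lines. This is a simplification, not the actual run_loop.py internals (the function names and the seed are mine):

```python
import random

def split_eval_set(items, train_frac=0.6, seed=0):
    """Shuffle and split eval queries into a train set and a held-out test set."""
    items = items[:]
    random.Random(seed).shuffle(items)
    cut = int(len(items) * train_frac)
    return items[:cut], items[cut:]

def pick_best(candidates):
    """candidates: list of (description, train_score, test_score) tuples.
    Select by held-out test score, not train score, to avoid overfitting."""
    return max(candidates, key=lambda c: c[2])[0]
```

The key design choice mirrored here is the last line: a description that aces the train queries but slips on the held-out ones loses to a steadier one.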
Understanding the triggering mechanism helps design better eval queries. Skills appear in Claude's available_skills list with their name + description, and Claude decides whether to consult a skill based on that description. The important thing to know is that Claude only consults skills for tasks it can't easily handle on its own — simple, one-step queries like "read this PDF" may not trigger a skill even if the description matches perfectly, because Claude can handle them directly with basic tools. Complex, multi-step, or specialized queries reliably trigger skills when the description matches.
This means your eval queries should be substantive enough that Claude would actually benefit from consulting a skill. Simple queries like "read file X" are poor test cases — they won't trigger skills regardless of description quality.
Take best_description from the JSON output and update the skill's SKILL.md frontmatter. Show the user before/after and report the scores.
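Applying best_description is a targeted frontmatter edit. A minimal sketch, assuming a single-line description: field (multi-line YAML values would need a real YAML parser; the function name is mine):

```python
import re
from pathlib import Path

def update_description(skill_md_path, best_description):
    """Replace the description: line inside the frontmatter block of SKILL.md."""
    text = Path(skill_md_path).read_text()
    new_text, n = re.subn(
        r"^description:.*$",
        lambda m: f"description: {best_description}",  # lambda avoids backslash escaping
        text,
        count=1,
        flags=re.MULTILINE,
    )
    if n != 1:
        raise ValueError("no description: line found in frontmatter")
    Path(skill_md_path).write_text(new_text)
```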
NEVER edit skills in ~/.claude/plugins/cache/ — that's a read-only cache directory. Any changes made there will be lost when the cache is refreshed.
ALWAYS verify you're editing the source repository:
# WRONG - cache location (read-only copy)
~/.claude/plugins/cache/daymade-skills/my-skill/1.0.0/my-skill/SKILL.md
# RIGHT - source repository
/path/to/your/claude-code-skills/my-skill/SKILL.md
Before any edit, confirm the file path does NOT contain /cache/ or /plugins/cache/.
When creating or updating a skill, follow these steps in order. Skip steps only when clearly not applicable.
Skip this step only when the skill's usage patterns are already clearly understood.
To create an effective skill, clearly understand concrete examples of how the skill will be used. This understanding can come from either direct user examples or generated examples that are validated with user feedback.
For example, when building an image-editor skill, relevant questions include:
To avoid overwhelming users, avoid asking too many questions in a single message.
Analyze each example by:
Match specificity to task risk:
Skip this step if the skill already exists.
When creating a new skill from scratch, always run the init_skill.py script:
scripts/init_skill.py <skill-name> --path <output-directory>
The script creates a template skill directory with proper frontmatter, resource directories, and example files.
When editing, remember that the skill is being created for another instance of Claude to use. Focus on information that would be beneficial and non-obvious to Claude.
When updating an existing skill: Scan all existing reference files to check if they need corresponding updates.
Ask the user before executing this step: "This skill appears to be extracted from a business project. Would you like me to perform a sanitization review to remove business-specific content before public distribution?"
Skip if: skill was created from scratch for public use, user declines, or skill is for internal use.
Sanitization process:
Before packaging or distributing a skill, run the security scanner to detect hardcoded secrets and personal information:
# Required before packaging
python scripts/security_scan.py <path/to/skill-folder>
# Verbose mode includes additional checks for paths, emails, and code patterns
python scripts/security_scan.py <path/to/skill-folder> --verbose
Detection coverage:
First-time setup: Install gitleaks if not present:
# macOS
brew install gitleaks
# Linux/Windows - see script output for installation instructions
Exit codes:
- 0 - Clean (safe to package)
- 1 - High severity issues
- 2 - Critical issues (MUST fix before distribution)
- 3 - gitleaks not installed
- 4 - Scan error

Once the skill is ready, package it into a distributable file:
scripts/package_skill.py <path/to/skill-folder>
Optional output directory:
scripts/package_skill.py <path/to/skill-folder> ./dist
The packaging script will:
If validation fails, the script reports errors and exits without creating a package.
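The security scan's exit-code contract can be used to gate packaging. A sketch of one possible policy (the exit-code meanings are from the table above; treating exit code 1 as a warning rather than a hard block is my assumption):

```python
import subprocess
import sys

def scan_then_package(skill_dir, out_dir="./dist"):
    """Run the security scanner, then only package when the scan is acceptable."""
    scan = subprocess.run([sys.executable, "scripts/security_scan.py", skill_dir])
    if scan.returncode in (2, 3, 4):
        # 2 = critical findings, 3 = gitleaks missing, 4 = scan error: never package.
        raise RuntimeError(f"security scan blocked packaging (exit {scan.returncode})")
    if scan.returncode == 1:
        print("High severity issues found - review before distributing.")
    subprocess.run([sys.executable, "scripts/package_skill.py", skill_dir, out_dir], check=True)
```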
After packaging, update the marketplace registry to include the new or updated skill.
For new skills, add an entry to .claude-plugin/marketplace.json:
{
"name": "skill-name",
"description": "Copy from SKILL.md frontmatter description",
"source": "./",
"strict": false,
"version": "1.0.0",
"category": "developer-tools",
"keywords": ["relevant", "keywords"],
"skills": ["./skill-name"]
}
For updated skills, bump the version in plugins[].version following semver.
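A semver bump is a small operation worth getting right (resetting the lower components is the part people forget). A sketch, with a hypothetical helper name:

```python
def bump_version(version, part="patch"):
    """Bump a MAJOR.MINOR.PATCH version string following semver rules."""
    major, minor, patch = (int(x) for x in version.split("."))
    if part == "major":
        return f"{major + 1}.0.0"  # minor and patch reset to zero
    if part == "minor":
        return f"{major}.{minor + 1}.0"  # patch resets to zero
    return f"{major}.{minor}.{patch + 1}"
```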
After testing the skill, users may request improvements. Often this happens right after using the skill, with fresh context of how the skill performed.
Refinement filter: Only add what solves observed problems. If best practices already cover it, don't duplicate.
Check whether you have access to the present_files tool. If you don't, skip this step. If you do, package the skill and present the .skill file to the user:
python -m scripts.package_skill <path/to/skill-folder>
After packaging, direct the user to the resulting .skill file path so they can install it.
In Claude.ai, the core workflow is the same (draft -> test -> review -> improve -> repeat), but because Claude.ai doesn't have subagents, some mechanics change. Here's what to adapt:
Running test cases: No subagents means no parallel execution. For each test case, read the skill's SKILL.md, then follow its instructions to accomplish the test prompt yourself. Do them one at a time. This is less rigorous than independent subagents (you wrote the skill and you're also running it, so you have full context), but it's a useful sanity check — and the human review step compensates. Skip the baseline runs — just use the skill to complete the task as requested.
Reviewing results: If you can't open a browser (e.g., Claude.ai's VM has no display, or you're on a remote server), skip the browser reviewer entirely. Instead, present results directly in the conversation. For each test case, show the prompt and the output. If the output is a file the user needs to see (like a .docx or .xlsx), save it to the filesystem and tell them where it is so they can download and inspect it. Ask for feedback inline: "How does this look? Anything you'd change?"
Benchmarking: Skip the quantitative benchmarking — it relies on baseline comparisons which aren't meaningful without subagents. Focus on qualitative feedback from the user.
The iteration loop: Same as before — improve the skill, rerun the test cases, ask for feedback — just without the browser reviewer in the middle. You can still organize results into iteration directories on the filesystem if you have one.
Description optimization: This section requires the claude CLI tool (specifically claude -p) which is only available in Claude Code. Skip it if you're on Claude.ai.
Blind comparison: Requires subagents. Skip it.
Packaging: The package_skill.py script works anywhere with Python and a filesystem. On Claude.ai, you can run it and the user can download the resulting .skill file.
If you're in Cowork, the main things to know are:
- Pass --static <output_path> to write a standalone HTML file instead of starting a server. Then proffer a link that the user can click to open the HTML in their browser.
- Use generate_review.py (not writing your own boutique html code). Sorry in advance but I'm gonna go all caps here: GENERATE THE EVAL VIEWER BEFORE evaluating inputs yourself. You want to get them in front of the human ASAP!
- The user's feedback comes back as a feedback.json file. You can then read it from there (you may have to request access first).
- Packaging works: package_skill.py just needs Python and a filesystem.
- Description optimization (run_loop.py / run_eval.py) should work in Cowork just fine since it uses claude -p via subprocess, not a browser, but please save it until you've fully finished making the skill and the user agrees it's in good shape.

The agents/ directory contains instructions for specialized subagents. Read them when you need to spawn the relevant subagent.
- agents/grader.md — How to evaluate assertions against outputs
- agents/comparator.md — How to do blind A/B comparison between two outputs
- agents/analyzer.md — How to analyze why one version beat another

The references/ directory has additional documentation:
- references/schemas.md — JSON structures for evals.json, grading.json, benchmark.json, etc.
- references/sanitization_checklist.md — Checklist for sanitizing business-specific content before public distribution

Repeating one more time the core loop here for emphasis:
- Create the evals JSON, run the test cases, and run eval-viewer/generate_review.py to help the user review them

Please add steps to your TodoList, if you have such a thing, to make sure you don't forget. If you're in Cowork, please specifically put "Create evals JSON and run eval-viewer/generate_review.py so human can review test cases" in your TodoList to make sure it happens.
Good luck!
| Field | Required | Description |
|---|---|---|
| agent | No | Which subagent type to use when context: fork is set. Options: Explore, Plan, general-purpose, or custom agents from .claude/agents/. Default: general-purpose. |
| disable-model-invocation | No | Set to true to prevent Claude from automatically loading this skill. Use for workflows you want to trigger manually with /name. Default: false. |
| user-invocable | No | Set to false to hide from the / menu. Use for background knowledge users shouldn't invoke directly. Default: true. |
| allowed-tools | No | Pre-approved tools list. Recommendation: Do NOT set this field. Omitting it gives the skill full tool access governed by the user's permission settings. Setting it restricts the skill's capabilities unnecessarily. |
| model | No | Model to use when this skill is active. |
| argument-hint | No | Hint shown during autocomplete to indicate expected arguments. Example: [issue-number] or [filename] [format]. |
| hooks | No | Hooks scoped to this skill's lifecycle. Example: hooks: { pre-invoke: [{ command: "echo Starting" }] }. See Claude Code Hooks documentation. |
| Use case | Choice | Why |
|---|---|---|
| Provide reference knowledge (coding conventions) | Inline (no context) | Guidelines enrich main conversation |
| Be callable BY other skills | Fork (context: fork) | Must be a subagent to be spawned |