npx skills add https://github.com/tldraw/tldraw --skill review-docs
This skill runs an evaluation and improvement loop on a documentation file.
Target: $ARGUMENTS
Relevant skills: write-docs
┌──────────────────────────────────────────────────────────────┐
│ INITIALIZE: Create state file to track issues │
└──────────────────────────────────────────────────────────────┘
↓
┌──────────────────────────────────────────────────────────────┐
│ EVALUATE (parallel) │
│ ┌─────────────────────┐ ┌─────────────────────────────┐ │
│ │ Style Agent │ │ Content Agent │ │
│ │ (readability+voice) │ │ (completeness+accuracy) │ │
│ └─────────────────────┘ └─────────────────────────────┘ │
└──────────────────────────────────────────────────────────────┘
↓
┌──────────────────────────────────────────────────────────────┐
│ UPDATE STATE: Add new issues, verify fixed issues │
└──────────────────────────────────────────────────────────────┘
↓
┌──────────────────────────────────────────────────────────────┐
│ SUMMARIZE: Present findings, ask user for next step │
└──────────────────────────────────────────────────────────────┘
↓
┌──────────────────┼──────────────────┐
↓ ↓ ↓
[User: improve] [User: complete] [User: done]
↓ ↓ ↓
┌──────────────────┐ ┌──────────────────┐ EXIT
│ IMPROVE │ │ COMPLETE │
│ (fix issues) │ │ (fix all, exit) │
└──────────────────┘ └──────────────────┘
↓ ↓
LOOP → EVALUATE EXIT
Create a state file in the scratchpad directory to track all issues across rounds. This prevents re-discovering the same issues and allows verification of fixes.
Path: <scratchpad>/review-<filename>.md
Format:
# Review tracker: [filename]
## Issue tracker
Status values: `pending` | `fixed` | `verified-fixed` | `not-fixed` | `wont-fix`
| ID | Issue | Type | Status | Round | Notes |
| --- | ------------- | --------------------------- | -------------- | ----- | ---------------- |
| 1 | [description] | Style/Accuracy/Completeness | pending | 1 | [details] |
| 2 | [description] | Accuracy | verified-fixed | 1 | Fixed in round 1 |
| 3 | [description] | Completeness | wont-fix | 2 | Out of scope |
## Round history
### Round 1
- Style: X/10, Voice: X/10, Completeness: X/10, Accuracy: X/10
- **Total: X/40**
Status definitions:
- pending: Issue discovered, not yet addressed
- fixed: Improvement agent claims to have fixed it, needs verification
- verified-fixed: Evaluation confirmed the fix was applied correctly
- not-fixed: Evaluation found the fix wasn't applied correctly
- wont-fix: False alarm, out of scope, or intentional (e.g., completeness issues that require documentation expansion)
For the first round, launch two subagents in parallel using the Task tool:
// Single message with two Task tool calls:
Task(subagent_type="general-purpose", model="opus", prompt="Style evaluation...")
Task(subagent_type="general-purpose", model="opus", prompt="Content evaluation...")
Evaluate documentation style for: $ARGUMENTS
Read these files:
1. .claude/skills/shared/writing-guide.md
2. .claude/skills/shared/docs-guide.md
3. $ARGUMENTS
Score these dimensions (0-10):
READABILITY - How clear and easy to understand is the writing?
- Clear, direct sentences
- Logical flow between sections
- Appropriate use of code snippets and links
- No unnecessary jargon
VOICE - How well does it follow the writing guide?
- Confident assertions (no hedging)
- Active voice, present tense
- No AI writing tells (hollow importance, trailing gerunds, formulaic transitions)
- Appropriate tone (expert-to-developer)
- Sentence case headings
Important! Include as many high-priority fixes as needed.
Return in this exact format:
STYLE REPORT: [filename]
READABILITY: [score]/10
- [specific issue or strength]
- [specific issue or strength]
VOICE: [score]/10
- [specific issue or strength]
- [specific issue or strength]
PRIORITY FIXES:
1. [Most important style issue]
2. [Second most important]
3. [Third most important]
4. ...
Evaluate documentation content for: $ARGUMENTS
Read $ARGUMENTS, then verify claims against the source code in packages/editor/ and packages/tldraw/.
Score these dimensions (0-10):
COMPLETENESS - How thorough is the coverage?
- Overview establishes purpose before mechanism
- Key concepts explained with enough depth
- Illustrative code snippets where needed
- Links to relevant examples in apps/examples (if applicable)
ACCURACY - Is the technical content correct?
- Code snippets are syntactically correct and use valid APIs
- API references match actual implementation
- Described behavior matches the code
- No outdated information
For accuracy issues, include file:line references to the source code.
Important! Include as many high-priority fixes as needed. Make sure that all accuracy issues are flagged.
Return in this exact format:
CONTENT REPORT: [filename]
COMPLETENESS: [score]/10
- [specific issue or strength]
- [specific issue or strength]
ACCURACY: [score]/10
- [specific issue with file:line reference if inaccurate]
- [specific issue or strength]
PRIORITY FIXES:
1. [Most important content issue]
2. [Second most important]
3. [Third most important]
4. ...
After round 1, create the state file with all discovered issues.
After both agents return, synthesize their reports into a summary:
## Evaluation: [filename]
| Dimension | Score | Key issue |
| ------------ | ----- | ----------- |
| Readability | X/10 | [one-liner] |
| Voice | X/10 | [one-liner] |
| Completeness | X/10 | [one-liner] |
| Accuracy | X/10 | [one-liner] |
| **Total** | X/40 | |
### Priority fixes
1. [Combined priority 1 from both reports]
2. [Combined priority 2]
3. [Combined priority 3]
4. [Combined priority 4]
5. [Combined priority 5]
6. ...
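The synthesis step amounts to merging the two reports' scores and interleaving their priority fixes. A minimal sketch, assuming the agent reports have already been parsed into dicts (the field names here are illustrative assumptions):

```python
from itertools import zip_longest


def synthesize(style: dict, content: dict) -> dict:
    """Merge style and content evaluation reports into one summary.

    Each report dict maps a dimension name to a score out of 10 and
    carries a "fixes" list ordered by priority.
    """
    scores = {
        "Readability": style["readability"],
        "Voice": style["voice"],
        "Completeness": content["completeness"],
        "Accuracy": content["accuracy"],
    }
    # Interleave priority fixes so the top item from each report
    # appears before either report's second item.
    fixes = [
        f
        for pair in zip_longest(style["fixes"], content["fixes"])
        for f in pair
        if f is not None
    ]
    return {"scores": scores, "total": sum(scores.values()), "fixes": fixes}
```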
Then ask the user for the next step using AskUserQuestion.
Before running the improvement agent, review the pending issues with the user. Mark completeness issues that require adding new sections as wont-fix - these are documentation expansion, not review fixes.
Per CLAUDE.md guidance:
"Do what has been asked; nothing more, nothing less." "Don't add features, refactor code, or make 'improvements' beyond what was asked."
The review skill improves existing content. Adding new sections is a separate task.
Launch a single improvement agent targeting only pending issues:
Task(subagent_type="general-purpose", model="opus", prompt="Improve documentation...")
Improve documentation based on specific tracked issues: $ARGUMENTS
Fix ONLY these pending issues:
| ID | Issue | Type | Notes |
|----|-------|------|-------|
[paste pending issues from state file]
Instructions:
1. Read .claude/skills/shared/writing-guide.md
2. Read .claude/skills/shared/docs-guide.md
3. Read $ARGUMENTS
4. For each accuracy fix:
- Read the source file referenced in the notes
- Verify the correct API/behavior from the source
- Apply the fix based on what the source code actually shows
5. Apply style fixes
6. Run prettier: yarn prettier --write $ARGUMENTS
DO NOT:
- Add new sections
- Expand the document
- Fix issues not in the list above
Return a summary:
CHANGES MADE:
| ID | Fix applied | Verification |
|----|-------------|--------------|
| X | [description] | [source file:line checked] |
| Y | [description] | n/a |
After improvement, update the state file to mark issues as fixed.
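That state update can be sketched as a small transition over the tracker rows. The list-of-dicts representation here is an illustrative assumption; the real tracker is the markdown table above:

```python
def mark_fixed(issues: list[dict], fixed_ids: set[int]) -> list[dict]:
    """Mark pending issues as fixed after the improvement agent runs.

    Only `pending` issues move to `fixed`; `wont-fix` and already
    verified entries are left untouched, so the next verification
    round targets exactly the fixes claimed this round.
    """
    for issue in issues:
        if issue["id"] in fixed_ids and issue["status"] == "pending":
            issue["status"] = "fixed"
    return issues
```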
If the user selects "Complete and finish", fix all remaining pending issues without re-evaluating. This is useful when the evaluation is satisfactory and the user wants to apply fixes and move on.
Workflow:
1. Mark remaining out-of-scope issues as wont-fix
2. Fix all pending issues and mark them fixed
This path trusts the improvement agent to apply fixes correctly and skips the verification cycle.
For subsequent rounds, evaluation agents verify fixes AND find new issues:
Verify fixes and evaluate documentation: $ARGUMENTS
Read the state file first: [path to state file]
Then read:
1. .claude/skills/shared/writing-guide.md
2. .claude/skills/shared/docs-guide.md
3. $ARGUMENTS
Your job:
1. VERIFY fixes marked as "fixed" in the state file - confirm they were actually applied
2. Score style dimensions (do NOT re-flag wont-fix issues)
3. Flag only NEW issues not already in the state file
VERIFY THESE FIXES:
[paste fixed style issues from state file]
Return in this format:
VERIFICATION REPORT:
| ID | Status | Notes |
|----|--------|-------|
| X | verified-fixed / not-fixed | [what you found] |
STYLE SCORES:
READABILITY: [score]/10
VOICE: [score]/10
NEW ISSUES (not already in state file):
- [issue] or "None found"
Verify fixes and evaluate documentation content: $ARGUMENTS
Read the state file first: [path to state file]
Then read $ARGUMENTS and verify claims against source code in packages/tldraw/.
Your job:
1. VERIFY accuracy fixes marked as "fixed" in the state file
2. Score content dimensions (do NOT re-flag wont-fix issues)
3. Flag only NEW accuracy issues not already in the state file
VERIFY THESE FIXES:
[paste fixed accuracy issues from state file]
Return in this format:
VERIFICATION REPORT:
| ID | Status | Notes |
|----|--------|-------|
| X | verified-fixed / not-fixed | [what you found in doc AND source] |
CONTENT SCORES:
COMPLETENESS: [score]/10 (score existing content only, ignore wont-fix items)
ACCURACY: [score]/10
NEW ACCURACY ISSUES (not already in state file):
- [issue with source file:line] or "None found"
After verification, update the state file with new statuses and any new issues.
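Applying a verification report, and the loop-exit check it feeds, can be sketched in the same illustrative list-of-dicts form as above:

```python
def apply_verification(issues: list[dict], results: dict[int, str]) -> list[dict]:
    """Apply a verification report to the tracker.

    `results` maps issue ID to "verified-fixed" or "not-fixed", as
    returned by the verification agents. A not-fixed issue goes back
    into the pool the next improvement round draws from.
    """
    for issue in issues:
        verdict = results.get(issue["id"])
        if verdict and issue["status"] == "fixed":
            issue["status"] = verdict
    return issues


def review_done(issues: list[dict]) -> bool:
    """The loop exits when every issue is verified-fixed or wont-fix."""
    return all(i["status"] in ("verified-fixed", "wont-fix") for i in issues)
```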
Continue the loop until:
- All issues have verified-fixed or wont-fix status

wont-fix is appropriate for completeness issues requiring new sections.

Weekly installs: 123
GitHub stars: 46.0K
First seen: Jan 31, 2026
Security audits: Gen Agent Trust Hub: Pass; Socket: Pass; Snyk: Pass
Installed on: opencode (117), codex (115), gemini-cli (112), github-copilot (112), kimi-cli (109), amp (109)