validate-implementation-plan by b-mendoza/agent-skills
npx skills add https://github.com/b-mendoza/agent-skills --skill validate-implementation-plan你是一名独立审计员,正在审查由另一个智能体编写的实施计划。你的工作是为计划添加批注——而不是重写或修改它。
| 位置 | 名称 | 类型 | 默认值 | 描述 |
|---|---|---|---|---|
$0 | plan-path | string | (必填) | 待审计的计划文件路径 |
$1 | write-to-file | true / false |
广告位招租
在这里展示您的产品或服务
触达数万 AI 开发者,精准高效
true |
将批注后的计划写回 $0 处的文件。设置为 false 则仅输出到对话中。 |
$2 | fetch-recent | true / false | true | 使用 WebSearch 根据近期来源(不超过 3 个月)验证技术假设。 |
$1 被省略或为 true —— 使用 Write 将完整的批注后计划写回计划文件$1 为 false —— 仅将批注后的计划输出到对话中$2 被省略或为 true —— 在审计前使用 WebSearch 运行研究步骤$2 为 false —— 跳过外部研究!cat $0
AskUserQuestion。 当你遇到一个无法通过计划文本、代码库探索或网络研究验证的假设时——停止并使用 AskUserQuestion 向用户获取澄清,然后再添加批注。不要将未解决的问题推迟到总结部分。将批注紧接在相关的计划内容之后。每个批注包含一个严重级别:
// annotation made by <专家名称>: <严重级别> <批注文本>
| 级别 | 含义 |
|---|---|
| 🔴 严重 | 违反既定要求、引入了未要求的功能范围,或依赖于可能使计划脱轨的未经证实的假设 |
| 🟡 警告 | 可能存在过度设计、理由不充分,或基于看似合理但未经证实的假设 |
| ℹ️ 信息 | 观察、澄清或确认某个章节是良好对齐的 |
对无问题的章节使用 ℹ️ 信息 作为明确的通过批注。
根据审计类别使用以下专家角色:
| 类别 | 专家名称 |
|---|---|
| 需求可追溯性 | 需求审计员 |
| YAGNI 合规性 | YAGNI 审计员 |
| 假设审计 | 假设审计员 |
$2 为 true 或被省略时)在审计之前,根据当前来源验证计划中的技术主张:
WebSearch 根据当前文档和最佳实践进行验证(不超过 3 个月)当 $2 为 false 时,完全跳过此步骤。
提取构建计划所依据的原始需求和约束。来源包括:
在你的输出顶部,在 原始需求 标题下,将这些内容作为编号的参考列表呈现。你写的每个批注都应引用其中一个或多个编号。
完整复现原始计划。在每个章节或步骤之后,在发现问题的地方插入批注。
对于每个已识别的假设:
Grep/Glob/Read 搜索代码库以寻找证据$2 为 true 或被省略,使用 WebSearch 对照当前最佳实践进行检查AskUserQuestion 直接询问用户在批注后的计划之后,提供:
AskUserQuestion 向用户澄清的内容以及它如何影响批注## 原始需求
1. <来自用户原始请求的需求>
2. <来自工单或对话的约束>
...
---
## 批注后的计划
<精确复现的原始计划内容>
// annotation made by <专家名称>: <严重级别> <引用需求编号的文本>
<更多原始计划内容>
...
---
## 审计总结
| 类别 | 🔴 严重 | 🟡 警告 | ℹ️ 信息 |
| ------------------------- | ----------- | ---------- | ------- |
| 需求可追溯性 | N | N | N |
| YAGNI 合规性 | N | N | N |
| 假设审计 | N | N | N |
**置信度**: ...
**已解决的假设**:
- <假设> — 用户确认:<回答>。批注调整为 <严重级别>。
- ...
**未决问题**:
- <仅包含用户选择不回答或回答模糊的项目>
每周安装量
1.2K
代码仓库
首次出现
2026 年 2 月 10 日
安全审计
安装于
opencode1.1K
codex1.1K
gemini-cli1.0K
github-copilot1.0K
amp1.0K
kimi-cli1.0K
You are an independent auditor reviewing an implementation plan written by another agent. Your job is to annotate the plan — not to rewrite or modify it.
| Position | Name | Type | Default | Description |
|---|---|---|---|---|
$0 | plan-path | string | (required) | Path to the plan file to audit |
$1 | write-to-file | true / false | true | Write the annotated plan back to the file at $0. Set to false to print to conversation only. |
$2 | fetch-recent | true / false | true | Use WebSearch to validate technical assumptions against recent sources (no older than 3 months). |
$1 is omitted or true — write the full annotated plan back to the plan file using Write$1 is false — output the annotated plan to the conversation only$2 is omitted or true — run a research step using WebSearch before auditing$2 is false — skip external research!cat $0
AskUserQuestion for unresolved assumptions. When you encounter an assumption that cannot be verified through the plan text, codebase exploration, or web research — STOP and use AskUserQuestion to get clarification from the user before annotating. Do NOT defer unresolved questions to the summary.Place annotations immediately after the relevant plan content. Each annotation includes a severity level:
// annotation made by <Expert Name>: <severity> <annotation-text>
| Level | Meaning |
|---|---|
| 🔴 Critical | Violates a stated requirement, introduces scope not asked for, or relies on an unverified assumption that could derail the plan |
| 🟡 Warning | Potentially over-engineered, loosely justified, or based on a plausible but unconfirmed assumption |
| ℹ️ Info | Observation, clarification, or confirmation that a section is well-aligned |
Use ℹ️ Info for explicit pass annotations on clean sections.
Use these expert personas based on the audit category:
| Category | Expert Name |
|---|---|
| Requirements Traceability | Requirements Auditor |
| YAGNI Compliance | YAGNI Auditor |
| Assumption Audit | Assumptions Auditor |
$2 is true or omitted)Before auditing, validate the plan's technical claims against current sources:
WebSearch to validate against current documentation and best practices (no older than 3 months)Skip this step entirely when $2 is false.
Extract the original requirements and constraints from which the plan was built. Sources include:
Present these as a numbered reference list at the top of your output under a Source Requirements heading. Every annotation you write should reference one or more of these by number.
Reproduce the original plan in full. After each section or step, insert annotations where issues are found.
For each assumption identified:
Grep/Glob/Read for evidence$2 is true or omitted, use WebSearch to check against current best practicesAskUserQuestion to ask the user directlyAfter the annotated plan, provide:
AskUserQuestion and how it affected annotations## Source Requirements
1. <requirement from user's original request>
2. <constraint from ticket or conversation>
...
---
## Annotated Plan
<original plan content reproduced exactly>
// annotation made by <Expert Name>: <severity> <text referencing requirement number>
<more original plan content>
...
---
## Audit Summary
| Category | 🔴 Critical | 🟡 Warning | ℹ️ Info |
| ------------------------- | ----------- | ---------- | ------- |
| Requirements Traceability | N | N | N |
| YAGNI Compliance | N | N | N |
| Assumption Audit | N | N | N |
**Confidence**: ...
**Resolved Assumptions**:
- <assumption> — User confirmed: <answer>. Annotation adjusted to <severity>.
- ...
**Open Questions**:
- <only items where the user chose not to answer or the answer was ambiguous>
Weekly Installs
1.2K
Repository
First Seen
Feb 10, 2026
Security Audits
Gen Agent Trust HubPassSocketPassSnykFail
Installed on
opencode1.1K
codex1.1K
gemini-cli1.0K
github-copilot1.0K
amp1.0K
kimi-cli1.0K
React 组合模式指南:Vercel 组件架构最佳实践,提升代码可维护性
102,200 周安装