sadd:subagent-driven-development by neolabhq/context-engineering-kit
npx skills add https://github.com/neolabhq/context-engineering-kit --skill sadd:subagent-driven-development
Create and execute a plan by dispatching a fresh subagent per task or issue, with code and output review after each task or batch of tasks.
Core principle: Fresh subagent per task + review between or after tasks = high quality, fast iteration.
Executing Plans through agents:
When you have tasks or issues that are related to each other and need to be executed in order, investigating or modifying them sequentially is the best approach.
Dispatch one agent per task or issue. Let it work sequentially. Review the output and code after each task or issue.
When to use:
When you have multiple unrelated tasks or issues (different files, different subsystems, different bugs), investigating or modifying them sequentially wastes time. Each task or investigation is independent and can happen in parallel.
Dispatch one agent per independent problem domain. Let them work concurrently.
When to use:
Read plan file, create TodoWrite with all tasks.
For each task:
Dispatch fresh subagent:
Task tool (general-purpose):
description: "Implement Task N: [task name]"
prompt: |
You are implementing Task N from [plan-file].
Read that task carefully. Your job is to:
1. Implement exactly what the task specifies
2. Write tests (following TDD if task says to)
3. Verify implementation works
4. Commit your work
5. Report back
Work from: [directory]
Report: What you implemented, what you tested, test results, files changed, any issues
Subagent reports back with summary of work.
Dispatch code-reviewer subagent:
Task tool (superpowers:code-reviewer):
Use template at requesting-code-review/code-reviewer.md
WHAT_WAS_IMPLEMENTED: [from subagent's report]
PLAN_OR_REQUIREMENTS: Task N from [plan-file]
BASE_SHA: [commit before task]
HEAD_SHA: [current commit]
DESCRIPTION: [task summary]
Code reviewer returns: Strengths, Issues (Critical/Important/Minor), Assessment
If issues found:
Dispatch follow-up subagent if needed:
"Fix issues from code review: [list issues]"
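The per-task loop above can be sketched in code. This is a minimal, hypothetical illustration of the control flow, not the actual Task tool API: `dispatch_subagent` is a stand-in stub for whatever dispatch mechanism your environment provides, and the canned responses exist only to make the sketch runnable.

```python
# Sketch of the per-task dispatch -> review -> fix loop.
# dispatch_subagent is a hypothetical stub standing in for the real
# Task tool; here it returns canned reports so the flow is runnable.

def dispatch_subagent(prompt: str) -> dict:
    # A real implementation would dispatch a fresh subagent here.
    if "code review" in prompt.lower():
        return {"issues": []}  # reviewer found no issues in this sketch
    return {"report": f"done: {prompt[:40]}"}

def execute_plan(tasks: list[str]) -> list[str]:
    completed = []
    for n, task in enumerate(tasks, start=1):
        # Fresh implementation subagent per task
        impl = dispatch_subagent(f"Implement Task {n}: {task}")
        # Fresh code-review subagent, given the implementer's report
        review = dispatch_subagent(
            f"Code review of Task {n}. Implemented: {impl['report']}"
        )
        if review["issues"]:
            # Follow-up fix subagent only when the review raised issues
            dispatch_subagent(f"Fix issues from code review: {review['issues']}")
        completed.append(task)  # mark the TodoWrite item complete
    return completed

print(execute_plan(["Hook installation script", "Recovery modes"]))
```

The key property the sketch preserves: each dispatch starts from a clean prompt, and the review gate runs before the task is marked complete.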
After all tasks complete, dispatch final code-reviewer:
After final review passes:
You: I'm using Subagent-Driven Development to execute this plan.
[Load plan, create TodoWrite]
Task 1: Hook installation script
[Dispatch implementation subagent]
Subagent: Implemented install-hook with tests, 5/5 passing
[Get git SHAs, dispatch code-reviewer]
Reviewer: Strengths: Good test coverage. Issues: None. Ready.
[Mark Task 1 complete]
Task 2: Recovery modes
[Dispatch implementation subagent]
Subagent: Added verify/repair, 8/8 tests passing
[Dispatch code-reviewer]
Reviewer: Strengths: Solid. Issues (Important): Missing progress reporting
[Dispatch fix subagent]
Fix subagent: Added progress every 100 conversations
[Verify fix, mark Task 2 complete]
...
[After all tasks]
[Dispatch final code-reviewer]
Final reviewer: All requirements met, ready to merge
Done!
Never:
If subagent fails task:
Load plan, review critically, execute tasks in batches, report for review between batches.
Core principle: Batch execution with checkpoints for architect review.
Announce at start: "I'm using the executing-plans skill to implement this plan."
Default: First 3 tasks
For each task:
When batch complete:
Based on feedback:
After all tasks complete and verified:
STOP executing immediately when:
Ask for clarification rather than guessing.
Return to Review (Step 1) when:
Don't force through blockers - stop and ask.
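The batching itself is mechanical. A minimal sketch of chunking a plan's task list into checkpoint-sized batches (three per batch, matching the default above); the task names are placeholders:

```python
def batch_tasks(tasks: list[str], batch_size: int = 3) -> list[list[str]]:
    # Split the plan's task list into checkpoint-sized batches;
    # pause for architect review after each batch.
    return [tasks[i:i + batch_size] for i in range(0, len(tasks), batch_size)]

plan = ["task1", "task2", "task3", "task4", "task5", "task6", "task7"]
for n, batch in enumerate(batch_tasks(plan), start=1):
    print(f"Batch {n}: {batch}")  # execute, then report for review
```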
A special case of parallel execution: use it when you have multiple unrelated failures that can be investigated without shared state or dependencies.
Group failures by what's broken:
Each domain is independent - fixing tool approval doesn't affect abort tests.
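Grouping can be as simple as bucketing failures by file. A sketch using hypothetical failure records (file, test name pairs drawn from the example below):

```python
from collections import defaultdict

# Hypothetical failure records: (test_file, test_name) pairs.
failures = [
    ("agent-tool-abort.test.ts", "should abort tool with partial output capture"),
    ("agent-tool-abort.test.ts", "should properly track pendingToolCount"),
    ("batch-completion-behavior.test.ts", "completes batch after last tool"),
    ("tool-approval-race-conditions.test.ts", "approval race"),
]

by_file: dict[str, list[str]] = defaultdict(list)
for test_file, test_name in failures:
    by_file[test_file].append(test_name)

# One agent per independent domain:
for test_file, tests in by_file.items():
    print(f"Agent for {test_file}: {len(tests)} failing test(s)")
```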
Each agent gets:
// In Claude Code / AI environment
Task("Fix agent-tool-abort.test.ts failures")
Task("Fix batch-completion-behavior.test.ts failures")
Task("Fix tool-approval-race-conditions.test.ts failures")
// All three run concurrently
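The concurrent dispatch above can be sketched with Python threads standing in for the Task tool. `fix_failures` is a hypothetical stub worker; a real run would dispatch a subagent per file instead:

```python
from concurrent.futures import ThreadPoolExecutor

def fix_failures(test_file: str) -> str:
    # Hypothetical stub: a real worker would dispatch a fix subagent.
    return f"fixed failures in {test_file}"

test_files = [
    "agent-tool-abort.test.ts",
    "batch-completion-behavior.test.ts",
    "tool-approval-race-conditions.test.ts",
]

# One worker per independent domain, all running concurrently.
with ThreadPoolExecutor(max_workers=3) as pool:
    results = list(pool.map(fix_failures, test_files))

for summary in results:
    print(summary)
```

`pool.map` preserves input order, so each summary lines up with the file that produced it when the agents return.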
When agents return:
Good agent prompts are:
Fix the 3 failing tests in src/agents/agent-tool-abort.test.ts:
1. "should abort tool with partial output capture" - expects 'interrupted at' in message
2. "should handle mixed completed and aborted tools" - fast tool aborted instead of completed
3. "should properly track pendingToolCount" - expects 3 results but gets 0
These are timing/race condition issues. Your task:
1. Read the test file and understand what each test verifies
2. Identify root cause - timing issues or actual bugs?
3. Fix by:
- Replacing arbitrary timeouts with event-based waiting
- Fixing bugs in abort implementation if found
- Adjusting test expectations if testing changed behavior
Do NOT just increase timeouts - find the real issue.
Return: Summary of what you found and what you fixed.
❌ Too broad: "Fix all the tests" - agent gets lost
✅ Specific: "Fix agent-tool-abort.test.ts" - focused scope
❌ No context: "Fix the race condition" - agent doesn't know where
✅ Context: Paste the error messages and test names
❌ No constraints: Agent might refactor everything
✅ Constraints: "Do NOT change production code" or "Fix tests only"
❌ Vague output: "Fix it" - you don't know what changed
✅ Specific: "Return summary of root cause and changes"
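One way to stay on the ✅ side is to assemble every prompt from the four ingredients above: focused scope, pasted context, explicit constraints, and a concrete output request. A hypothetical helper (the field names and template are illustrative, not part of any tool's API):

```python
def build_agent_prompt(scope: str, context: str, constraints: str, output: str) -> str:
    # Force every dispatched prompt to carry a focused scope, pasted
    # context, explicit constraints, and a concrete output request.
    return (
        f"Fix the failures in {scope}.\n\n"
        f"Context:\n{context}\n\n"
        f"Constraints: {constraints}\n"
        f"Return: {output}\n"
    )

prompt = build_agent_prompt(
    scope="agent-tool-abort.test.ts",
    context="\"should abort tool with partial output capture\" - "
            "expects 'interrupted at' in message",
    constraints="Do NOT change production code; fix tests only",
    output="summary of root cause and changes",
)
print(prompt)
```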
Related failures: Fixing one might fix others - investigate together first
Need full context: Understanding requires seeing entire system
Exploratory debugging: You don't know what's broken yet
Shared state: Agents would interfere (editing same files, using same resources)
Scenario: 6 test failures across 3 files after major refactoring
Failures:
Decision: Independent domains - abort logic separate from batch completion separate from race conditions
Dispatch:
Agent 1 → Fix agent-tool-abort.test.ts
Agent 2 → Fix batch-completion-behavior.test.ts
Agent 3 → Fix tool-approval-race-conditions.test.ts
Results:
Integration: All fixes independent, no conflicts, full suite green
Time saved: 3 problems solved in parallel vs sequentially
After agents return:
Weekly Installs: 248
Repository
GitHub Stars: 708
First Seen: Feb 19, 2026
Installed on:
codex: 238
opencode: 238
github-copilot: 236
gemini-cli: 235
cursor: 234
kimi-cli: 233