Dispatching Parallel Agents by aaaaqwq/agi-super-skills
npx skills add https://github.com/aaaaqwq/agi-super-skills --skill 'Dispatching Parallel Agents'
When you have multiple unrelated failures (different test files, different subsystems, different bugs), investigating them sequentially wastes time. Each investigation is independent and can happen in parallel.
Core principle: Dispatch one agent per independent problem domain. Let them work concurrently.
digraph when_to_use {
"Multiple failures?" [shape=diamond];
"Are they independent?" [shape=diamond];
"Single agent investigates all" [shape=box];
"One agent per problem domain" [shape=box];
"Can they work in parallel?" [shape=diamond];
"Sequential agents" [shape=box];
"Parallel dispatch" [shape=box];
"Multiple failures?" -> "Are they independent?" [label="yes"];
"Are they independent?" -> "Single agent investigates all" [label="no - related"];
"Are they independent?" -> "Can they work in parallel?" [label="yes"];
"Can they work in parallel?" -> "Parallel dispatch" [label="yes"];
"Can they work in parallel?" -> "Sequential agents" [label="no - shared state"];
}
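The same decision can be encoded as a small routing function. A minimal TypeScript sketch, assuming you have already triaged each failure and judged its independence and shared-state flags yourself:

// Hypothetical shape - you supply these judgments after triaging the failures.
interface Failure { id: string; independent: boolean; sharesState: boolean; }

type Strategy = "single-agent" | "sequential-agents" | "parallel-dispatch";

function chooseStrategy(failures: Failure[]): Strategy {
  if (failures.length < 2) return "single-agent";
  if (!failures.every(f => f.independent)) return "single-agent"; // related - investigate together
  if (failures.some(f => f.sharesState)) return "sequential-agents"; // agents would interfere
  return "parallel-dispatch"; // one agent per problem domain
}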
Use when:
- Multiple failures are unrelated (different test files, subsystems, bugs)
- Each investigation can proceed without the others' findings
- Agents won't contend for shared state (same files, same resources)
Don't use when:
- Failures are related, full context is needed, debugging is still exploratory, or agents would share state (see the detailed list below)
Group failures by what's broken:
- Abort handling → agent-tool-abort.test.ts
- Batch completion → batch-completion-behavior.test.ts
- Tool approval → tool-approval-race-conditions.test.ts
Each domain is independent - fixing tool approval doesn't affect abort tests.
Each agent gets:
- A focused scope - one test file or problem domain
- Context - the exact error messages and failing test names
- Constraints - what it may and may not change
- An expected output - a summary of root cause and changes
// In Claude Code / AI environment
Task("Fix agent-tool-abort.test.ts failures")
Task("Fix batch-completion-behavior.test.ts failures")
Task("Fix tool-approval-race-conditions.test.ts failures")
// All three run concurrently
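Task here is Claude Code's built-in tool; issuing all three calls in one message is what makes them run concurrently. Outside that environment the same pattern is plain fan-out/fan-in - a sketch with a hypothetical runAgent helper:

// runAgent is hypothetical - a stand-in for however your environment invokes a subagent.
declare function runAgent(prompt: string): Promise<string>;

async function dispatchParallel(prompts: string[]): Promise<string[]> {
  // Fan out: start every investigation before awaiting any of them.
  const running = prompts.map(p => runAgent(p));
  // Fan in: collect every agent's summary once all of them return.
  return Promise.all(running);
}

const summaries = await dispatchParallel([
  "Fix agent-tool-abort.test.ts failures",
  "Fix batch-completion-behavior.test.ts failures",
  "Fix tool-approval-race-conditions.test.ts failures",
]);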
When agents return: each reports a summary of what it found and what it fixed; integrating those results is covered under "After agents return" below.
Good agent prompts are specific, contextual, constrained, and explicit about the expected output. For example:
Fix the 3 failing tests in src/agents/agent-tool-abort.test.ts:
1. "should abort tool with partial output capture" - expects 'interrupted at' in message
2. "should handle mixed completed and aborted tools" - fast tool aborted instead of completed
3. "should properly track pendingToolCount" - expects 3 results but gets 0
These are timing/race condition issues. Your task:
1. Read the test file and understand what each test verifies
2. Identify root cause - timing issues or actual bugs?
3. Fix by:
- Replacing arbitrary timeouts with event-based waiting
- Fixing bugs in abort implementation if found
- Adjusting test expectations if testing changed behavior
Do NOT just increase timeouts - find the real issue.
Return: Summary of what you found and what you fixed.
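What "event-based waiting" means here: instead of sleeping a guessed number of milliseconds and hoping the abort has landed, subscribe to the signal that marks completion. A minimal TypeScript sketch using Node's events module - the event name "tool:aborted" is illustrative, not taken from the codebase above:

import { EventEmitter, once } from "node:events";

// Brittle: passes or fails depending on machine speed.
async function waitBadly(ms: number): Promise<void> {
  await new Promise(resolve => setTimeout(resolve, ms));
}

// Robust: resolves the moment the event fires; rejects fast if it never does.
async function waitForEvent(
  emitter: EventEmitter,
  event: string,
  timeoutMs = 5000,
): Promise<unknown[]> {
  const controller = new AbortController();
  const timer = setTimeout(() => controller.abort(), timeoutMs);
  try {
    return await once(emitter, event, { signal: controller.signal });
  } finally {
    clearTimeout(timer);
  }
}

// In a test, assuming the runner emits "tool:aborted" when an abort completes:
// await waitForEvent(toolRunner, "tool:aborted");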
❌ Too broad: "Fix all the tests" - agent gets lost
✅ Specific: "Fix agent-tool-abort.test.ts" - focused scope
❌ No context: "Fix the race condition" - agent doesn't know where
✅ Context: paste the error messages and test names
❌ No constraints: agent might refactor everything
✅ Constraints: "Do NOT change production code" or "Fix tests only"
❌ Vague output: "Fix it" - you don't know what changed
✅ Specific: "Return summary of root cause and changes"
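One way to keep all four properties honest is to build prompts from a checklist-shaped structure. A minimal sketch - the field names are illustrative, not part of any tool:

interface AgentPrompt {
  scope: string;         // one file or problem domain, never "all the tests"
  failures: string[];    // pasted test names and error messages
  constraints: string[]; // e.g. "Do NOT just increase timeouts"
  output: string;        // what the agent must report back
}

function renderPrompt(p: AgentPrompt): string {
  return [
    `Fix the failing tests in ${p.scope}:`,
    ...p.failures.map((f, i) => `${i + 1}. ${f}`),
    "Constraints:",
    ...p.constraints.map(c => `- ${c}`),
    `Return: ${p.output}`,
  ].join("\n");
}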
When NOT to dispatch parallel agents:
- Related failures: fixing one might fix others - investigate together first
- Need full context: understanding requires seeing the entire system
- Exploratory debugging: you don't know what's broken yet
- Shared state: agents would interfere (editing same files, using same resources)
Scenario: 6 test failures across 3 files after major refactoring
Failures:
- agent-tool-abort.test.ts - 3 failures in abort/timing behavior
- batch-completion-behavior.test.ts - batch completion failures
- tool-approval-race-conditions.test.ts - tool approval race conditions
Decision: independent domains - abort logic, batch completion, and race conditions don't overlap
Dispatch:
Agent 1 → Fix agent-tool-abort.test.ts
Agent 2 → Fix batch-completion-behavior.test.ts
Agent 3 → Fix tool-approval-race-conditions.test.ts
Results:
- Integration: all fixes independent, no conflicts, full suite green
- Time saved: 3 problems solved in parallel vs. sequentially
After agents return:
1. Read each summary - confirm a root cause was found, not papered over
2. Check for conflicts - did any agents touch the same files? (a sketch of this check follows)
3. Run the full test suite to confirm the fixes integrate
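The conflict check can be mechanical if each agent's summary lists the files it touched. A small self-contained sketch - it assumes nothing beyond those file lists:

// Given each agent's list of edited files, report any file touched by two agents.
function findConflicts(editsByAgent: string[][]): string[] {
  const seen = new Set<string>();
  const conflicts = new Set<string>();
  for (const files of editsByAgent) {
    for (const file of new Set(files)) { // dedupe within one agent
      if (seen.has(file)) conflicts.add(file);
      seen.add(file);
    }
  }
  return [...conflicts];
}

// The three agents above each stayed in their own test file -> no conflicts.
findConflicts([
  ["src/agents/agent-tool-abort.test.ts"],
  ["src/agents/batch-completion-behavior.test.ts"],
  ["src/agents/tool-approval-race-conditions.test.ts"],
]); // => []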
The scenario above is drawn from a debugging session (2025-10-03).