npx skills add https://github.com/hyperb1iss/hyperskills --skill orchestrate

Meta-orchestration patterns mined from 597+ real agent dispatches across production codebases. This skill tells you WHICH strategy to use, HOW to structure prompts, and WHEN to use background vs foreground.
Core principle: Choose the right orchestration strategy for the work, partition agents by independence, inject context to enable parallelism, and adapt review overhead to trust level.
digraph strategy_selection {
rankdir=TB;
"What type of work?" [shape=diamond];
"Research / knowledge gathering" [shape=box];
"Independent feature builds" [shape=box];
"Sequential dependent tasks" [shape=box];
"Same transformation across partitions" [shape=box];
"Codebase audit / assessment" [shape=box];
"Greenfield project kickoff" [shape=box];
"Research Swarm" [shape=box style=filled fillcolor=lightyellow];
"Epic Parallel Build" [shape=box style=filled fillcolor=lightyellow];
"Sequential Pipeline" [shape=box style=filled fillcolor=lightyellow];
"Parallel Sweep" [shape=box style=filled fillcolor=lightyellow];
"Multi-Dimensional Audit" [shape=box style=filled fillcolor=lightyellow];
"Full Lifecycle" [shape=box style=filled fillcolor=lightyellow];
"What type of work?" -> "Research / knowledge gathering";
"What type of work?" -> "Independent feature builds";
"What type of work?" -> "Sequential dependent tasks";
"What type of work?" -> "Same transformation across partitions";
"What type of work?" -> "Codebase audit / assessment";
"What type of work?" -> "Greenfield project kickoff";
"Research / knowledge gathering" -> "Research Swarm";
"Independent feature builds" -> "Epic Parallel Build";
"Sequential dependent tasks" -> "Sequential Pipeline";
"Same transformation across partitions" -> "Parallel Sweep";
"Codebase audit / assessment" -> "Multi-Dimensional Audit";
"Greenfield project kickoff" -> "Full Lifecycle";
}
| Strategy | When | Agents | Background | Key Pattern |
|---|---|---|---|---|
| Research Swarm | Knowledge gathering, docs, SOTA research | 10-60+ | Yes (100%) | Fan-out, each agent writes its own doc |
| Epic Parallel Build | Plan with independent epics/features | 20-60+ | Yes (90%+) | Wave dispatch by subsystem |
| Sequential Pipeline | Dependent tasks, shared files | 3-15 | No (0%) | Implement -> Review -> Fix chain |
| Parallel Sweep | Same fix/transform across modules | 4-10 | No (0%) | Partition by directory, fan-out |
| Multi-Dimensional Audit | Quality gates, deep assessment | 6-9 | No (0%) | Same code, different review lenses |
| Full Lifecycle | New project from scratch | All above | Mixed | Research -> Plan -> Build -> Review -> Harden |
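The table above is effectively a lookup from work type to strategy; a minimal sketch (the labels come from the table, the function and dict names are illustrative, not part of the skill's API):

```python
# Map work type to orchestration strategy, mirroring the strategy table above.
STRATEGY_BY_WORK = {
    "research / knowledge gathering": "Research Swarm",
    "independent feature builds": "Epic Parallel Build",
    "sequential dependent tasks": "Sequential Pipeline",
    "same transformation across partitions": "Parallel Sweep",
    "codebase audit / assessment": "Multi-Dimensional Audit",
    "greenfield project kickoff": "Full Lifecycle",
}

def choose_strategy(work_type: str) -> str:
    """Return the orchestration strategy for a work type (case-insensitive)."""
    return STRATEGY_BY_WORK[work_type.strip().lower()]
```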
Mass-deploy background agents to build a knowledge corpus. Each agent researches one topic and writes one markdown document. Zero dependencies between agents.
Phase 1: Deploy research army (ALL BACKGROUND)
Wave 1 (10-20 agents): Core technology research
Wave 2 (10-20 agents): Specialized topics, integrations
Wave 3 (5-10 agents): Gap-filling based on early results
Phase 2: Monitor and supplement
- Check completed docs as they arrive
- Identify gaps, deploy targeted follow-up agents
- Read completed research to inform remaining dispatches
Phase 3: Synthesize
- Read all research docs (foreground)
- Create architecture plans, design docs
- Use Plan agent to synthesize findings
Research [TECHNOLOGY] for [PROJECT]'s [USE CASE].
Create a comprehensive research doc at [OUTPUT_PATH]/[filename].md covering:
1. Latest [TECH] version and features (search "[TECH] 2026" or "[TECH] latest")
2. [Specific feature relevant to project]
3. [Another relevant feature]
4. [Integration patterns with other stack components]
5. [Performance characteristics]
6. [Known gotchas and limitations]
7. [Best practices for production use]
8. [Code examples for key patterns]
Include code examples where possible. Use WebSearch and WebFetch to get current docs.
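The bracketed slots in the template above can be filled programmatically when dispatching many agents; a sketch, where the substitution helper and slot names are hypothetical:

```python
import re

def fill_template(template: str, slots: dict[str, str]) -> str:
    """Replace each [NAME] token with slots['NAME']; leave unknown tokens as-is."""
    return re.sub(
        r"\[([A-Z_]+)\]",
        lambda m: slots.get(m.group(1), m.group(0)),  # m.group(0) keeps brackets
        template,
    )

# Illustrative values, not from the skill itself.
prompt = fill_template(
    "Research [TECHNOLOGY] for [PROJECT]'s [USE_CASE].",
    {"TECHNOLOGY": "Redis Streams", "PROJECT": "acme-api", "USE_CASE": "job queue"},
)
```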
Key rules:
Deploy background agents to implement independent features/epics simultaneously. Each agent builds one feature in its own directory/module. No two agents touch the same files.
Phase 1: Scout (FOREGROUND)
- Deploy one Explore agent to map the codebase
- Identify dependency chains and independent workstreams
- Group tasks by subsystem to prevent file conflicts
Phase 2: Deploy build army (ALL BACKGROUND)
Wave 1: Infrastructure/foundation (Redis, DB, auth)
Wave 2: Backend APIs (each in own module directory)
Wave 3: Frontend pages (each in own route directory)
Wave 4: Integrations (MCP servers, external services)
Wave 5: DevOps (CI, Docker, deployment)
Wave 6: Bug fixes from review findings
Phase 3: Monitor and coordinate
- Check git status for completed commits
- Handle git index.lock contention (expected with 30+ agents)
- Deploy remaining tasks as agents complete
- Track via Sibyl tasks or TodoWrite
Phase 4: Review and harden (FOREGROUND)
- Run Codex/code-reviewer on completed work
- Dispatch fix agents for critical findings
- Integration testing
**Task: [DESCRIPTIVE TITLE]** (task_[ID])
Work in /path/to/project/[SPECIFIC_DIRECTORY]
## Context
[What already exists. Reference specific files, patterns, infrastructure.]
[e.g., "Redis is available at `app.state.redis`", "Follow pattern from `src/auth/`"]
## Your Job
1. Create `src/path/to/module/` with:
- `file.py` -- [Description]
- `routes.py` -- [Description]
- `models.py` -- [Schema definitions]
2. Implementation requirements:
[Detailed spec with code snippets, Pydantic models, API contracts]
3. Tests:
- Create `tests/test_module.py`
- Cover: [specific test scenarios]
4. Integration:
- Wire into [main app entry point]
- Register routes at [path]
## Git
Commit with message: "feat([module]): [description]"
Only stage files YOU created. Check `git status` before committing.
Do NOT stage files from other agents.
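The "only stage YOUR files" rule can be enforced mechanically by filtering `git status --porcelain` output down to paths under the agent's own directory. A sketch, assuming porcelain v1 format (two status characters, a space, then the path); the paths are hypothetical:

```python
def files_safe_to_stage(porcelain: str, own_dir: str) -> list[str]:
    """From `git status --porcelain` output, keep only paths under this
    agent's directory; everything else belongs to other agents."""
    prefix = own_dir.rstrip("/") + "/"
    safe = []
    for line in porcelain.splitlines():
        if len(line) < 4:
            continue
        path = line[3:]  # porcelain v1: XY<space>path
        if path.startswith(prefix):
            safe.append(path)
    return safe

# Illustrative status output: two files from this agent, one from a peer.
status = "?? src/payments/routes.py\n M src/auth/jwt.py\n?? src/payments/models.py"
```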
Key rules:
When running 10+ agents concurrently:
- Never `git add .` -- stage only specific files
- Monitor via `git log --oneline -20` periodically

Execute dependent tasks one at a time through review gates. Each task builds on the previous task's output. Use superpowers:subagent-driven-development for the full pipeline.
For each task:
1. Dispatch implementer (FOREGROUND)
2. Dispatch spec reviewer (FOREGROUND)
3. Dispatch code quality reviewer (FOREGROUND)
4. Fix any issues found
5. Move to next task
Trust Gradient (adapt over time):
Early tasks: Implement -> Spec Review -> Code Review (full ceremony)
Middle tasks: Implement -> Spec Review (lighter)
Late tasks: Implement only (pattern proven, high confidence)
As the session progresses and patterns prove reliable, progressively lighten review overhead:
| Phase | Review Overhead | When |
|---|---|---|
| Full ceremony | Implement + Spec Review + Code Review | First 3-4 tasks |
| Standard | Implement + Spec Review | Tasks 5-8, or after patterns stabilize |
| Light | Implement + quick spot-check | Late tasks with established patterns |
| Cost-optimized | Use model: "haiku" for reviews | Formulaic review passes |
This is NOT cutting corners -- it's earned confidence. If a late task deviates from the pattern, escalate back to full ceremony.
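The trust gradient can be made explicit in orchestration code. A minimal sketch whose thresholds follow the table above; the function name and the boolean flag are hypothetical:

```python
def review_level(task_index: int, deviated: bool) -> str:
    """Pick review overhead for a 1-based task index within a session.
    Any deviation from the established pattern escalates back to full ceremony."""
    if deviated or task_index <= 4:
        return "full ceremony"   # implement + spec review + code review
    if task_index <= 8:
        return "standard"        # implement + spec review
    return "light"               # implement + quick spot-check
```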
Apply the same transformation across partitioned areas of the codebase. Every agent does the same TYPE of work but on different FILES.
Phase 1: Analyze the scope
- Run the tool (ruff, ty, etc.) to get full issue list
- Auto-fix what you can
- Group remaining issues by module/directory
Phase 2: Fan-out fix agents (4-10 agents)
- One agent per module/directory
- Each gets: issue count by category, domain-specific guidance
- All foreground (need to verify each completes)
Phase 3: Verify and repeat
- Run the tool again to check remaining issues
- If issues remain, dispatch another wave
- Repeat until clean
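Phase 1's grouping step can be sketched as partitioning a linter's issue list by top-level directory, so each fix agent receives one disjoint slice of the codebase. The issue tuples and paths below are illustrative:

```python
from collections import defaultdict

def partition_issues(issues: list[tuple[str, str]]) -> dict[str, list[tuple[str, str]]]:
    """Group (path, rule_code) issues by top-level directory, one partition
    per fix agent, so no two agents touch the same files."""
    by_module: dict[str, list[tuple[str, str]]] = defaultdict(list)
    for path, rule in issues:
        module = path.split("/")[0]
        by_module[module].append((path, rule))
    return dict(by_module)

issues = [
    ("ui/window.py", "E501"),
    ("core/engine.py", "F401"),
    ("ui/dialogs.py", "E501"),
]
```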
Fix all [TOOL] issues in the [MODULE_NAME] directory ([PATH]).
Current issues ([COUNT] total):
- [RULE_CODE]: [description] ([count]) -- [domain-specific fix guidance]
- [RULE_CODE]: [description] ([count]) -- [domain-specific fix guidance]
Run `[TOOL_COMMAND] [PATH]` to see exact issues.
IMPORTANT for [DOMAIN] code:
[Domain-specific guidance, e.g., "GTK imports need GI.require_version() before gi.repository imports"]
After fixing, run `[TOOL_COMMAND] [PATH]` to verify zero issues remain.
Key rules:
Deploy multiple reviewers to examine the same code from different angles simultaneously. Each reviewer has a different focus lens.
Dispatch 6 parallel reviewers (ALL FOREGROUND):
1. Code quality & safety reviewer
2. Integration correctness reviewer
3. Spec completeness reviewer
4. Test coverage reviewer
5. Performance analyst
6. Security auditor
Wait for all to complete, then:
- Synthesize findings into prioritized action list
- Dispatch targeted fix agents for critical issues
- Re-review only the dimensions that had findings
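The synthesis step amounts to a severity-ordered merge of every reviewer's findings. A sketch; the record shape and severity labels follow the report format below, but the helper itself is hypothetical:

```python
# Severity labels match the report format: Critical / Important / Minor.
SEVERITY_RANK = {"Critical": 0, "Important": 1, "Minor": 2}

def prioritize(findings: list[dict]) -> list[dict]:
    """Merge findings from all reviewers into one list, most severe first."""
    return sorted(findings, key=lambda f: SEVERITY_RANK[f["severity"]])

# Illustrative findings from three of the six reviewers.
findings = [
    {"reviewer": "security", "severity": "Minor", "note": "verbose error text"},
    {"reviewer": "quality", "severity": "Critical", "note": "unchecked None deref"},
    {"reviewer": "tests", "severity": "Important", "note": "no failure-path test"},
]
```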
[DIMENSION] review of [COMPONENT] implementation.
**Files to review:**
- [file1.ext]
- [file2.ext]
- [file3.ext]
**Analyze:**
1. [Specific question for this dimension]
2. [Specific question for this dimension]
3. [Specific question for this dimension]
**Report format:**
- Findings: numbered list with severity (Critical/Important/Minor)
- Assessment: Approved / Needs Changes
- Recommendations: prioritized action items
For greenfield projects, combine all strategies in sequence:
Session 1: RESEARCH (Research Swarm)
-> 30-60 background agents build knowledge corpus
-> Architecture planning agents synthesize findings
-> Output: docs/research/*.md + docs/plans/*.md
Session 2: BUILD (Epic Parallel Build)
-> Scout agent maps what exists
-> 30-60 background agents build features by epic
-> Monitor, handle git contention, track completions
-> Output: working codebase with commits
Session 3: ITERATE (Build-Review-Fix Pipeline)
-> Code review agents assess work
-> Fix agents address findings
-> Deep audit agents (foreground) assess each subsystem
-> Output: quality-assessed codebase
Session 4: HARDEN (Sequential Pipeline)
-> Integration boundary reviews (foreground, sequential)
-> Security fixes, race condition fixes
-> Test infrastructure setup
-> Output: production-ready codebase
Each session shifts orchestration strategy to match the work's nature. Parallel when possible, sequential when required.
digraph bg_fg {
"What is the agent producing?" [shape=diamond];
"Information (research, docs)" [shape=box];
"Code modifications" [shape=box];
"Does orchestrator need it NOW?" [shape=diamond];
"BACKGROUND" [shape=box style=filled fillcolor=lightgreen];
"FOREGROUND" [shape=box style=filled fillcolor=lightyellow];
"Does next task depend on this task's files?" [shape=diamond];
"FOREGROUND (sequential)" [shape=box style=filled fillcolor=lightyellow];
"FOREGROUND (parallel)" [shape=box style=filled fillcolor=lightyellow];
"What is the agent producing?" -> "Information (research, docs)";
"What is the agent producing?" -> "Code modifications";
"Information (research, docs)" -> "Does orchestrator need it NOW?";
"Does orchestrator need it NOW?" -> "FOREGROUND" [label="yes"];
"Does orchestrator need it NOW?" -> "BACKGROUND" [label="no - synthesize later"];
"Code modifications" -> "Does next task depend on this task's files?";
"Does next task depend on this task's files?" -> "FOREGROUND (sequential)" [label="yes"];
"Does next task depend on this task's files?" -> "FOREGROUND (parallel)" [label="no - different modules"];
}
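The decision tree above translates directly into a dispatch-mode helper; a minimal sketch (the function name and flags are hypothetical):

```python
def dispatch_mode(produces_code: bool, needed_now: bool = False,
                  next_task_depends: bool = False) -> str:
    """Mirror the background-vs-foreground decision tree above."""
    if not produces_code:                 # information: research, docs
        return "FOREGROUND" if needed_now else "BACKGROUND"
    if next_task_depends:                 # code a later task builds on
        return "FOREGROUND (sequential)"
    return "FOREGROUND (parallel)"        # code in independent modules
```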
Rules observed from 597+ dispatches:
You are researching [DOMAIN] to create comprehensive documentation for [PROJECT].
Your mission: Create an exhaustive reference document covering ALL [TOPIC] capabilities.
Cover these areas in depth:
1. **[Category]** -- specific items
2. **[Category]** -- specific items
...
Use WebSearch and WebFetch to find blog posts, GitHub repos, and official docs.
**Task: [TITLE]** (task_[ID])
Work in /absolute/path/to/[directory]
## Context
[What exists, what to read, what infrastructure is available]
## Your Job
1. Create `path/to/file` with [description]
2. [Detailed implementation spec]
3. [Test requirements]
4. [Integration requirements]
## Git
Commit with: "feat([scope]): [message]"
Only stage YOUR files.
Comprehensive audit of [SCOPE] for [DIMENSION].
Look for:
1. [Specific thing #1]
2. [Specific thing #2]
...
3. [Specific thing #10]
[Scope boundaries -- which directories/files]
Report format:
- Findings: numbered with severity
- Assessment: Pass / Needs Work
- Action items: prioritized
**Task:** Fix [ISSUE] -- [SEVERITY]
**Problem:** [Description with file:line references]
**Location:** [Exact file path]
**Fix Required:**
1. [Specific change]
2. [Specific change]
**Verify:**
1. Run [command] to confirm fix
2. Run tests: [test command]
Agents can work independently BECAUSE the orchestrator pre-loads them with all the context they need. Without this, agents would need to explore first, serializing the work.
Always inject:
- Patterns to follow (e.g., "Follow pattern from `src/auth/jwt.py`")
- Available infrastructure (e.g., "Redis is available at `app.state.redis`")

For parallel agents, duplicate shared context:
When running 10+ background agents:
- Check `git log --oneline -20` for commits
- `tail` the agent output files for progress

## Agent Swarm Status
**[N] agents deployed** | **[M] completed** | **[P] in progress**
### Completed:
- [Agent description] -- [Key result]
- [Agent description] -- [Key result]
### In Progress:
- [Agent description] -- [Status]
### Gaps Identified:
- [Missing area] -- deploying follow-up agent
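The status block above can be rendered from in-memory agent records; a sketch under an assumed record shape (the `desc`/`state`/`result` keys are illustrative):

```python
def swarm_status(agents: list[dict]) -> str:
    """Render the Agent Swarm Status template from records shaped like
    {'desc': str, 'state': 'completed' | 'in_progress', 'result': str}."""
    done = [a for a in agents if a["state"] == "completed"]
    running = [a for a in agents if a["state"] == "in_progress"]
    lines = [
        "## Agent Swarm Status",
        f"**{len(agents)} agents deployed** | **{len(done)} completed** | "
        f"**{len(running)} in progress**",
        "### Completed:",
        *[f"- {a['desc']} -- {a['result']}" for a in done],
        "### In Progress:",
        *[f"- {a['desc']} -- working" for a in running],
    ]
    return "\n".join(lines)
```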
DON'T: Dispatch agents that touch the same files -> merge conflicts
DO: Partition by directory/module -- one agent per scope
DON'T: Run all agents foreground when they're independent -> sequential bottleneck
DO: Use background for research, foreground for code that needs coordination
DON'T: Send 50 agents with vague "fix everything" prompts
DO: Give each agent a specific scope, issue list, and domain guidance
DON'T: Skip the scout phase for build sprints
DO: Always Explore first to map what exists and identify dependencies
DON'T: Keep full review ceremony for every task in a long session
DO: Apply the trust gradient -- earn lighter reviews through consistency
DON'T: Let agents run git add . or git push
DO: Include explicit git hygiene instructions in every build prompt
DON'T: Dispatch background agents for code that needs integration
DO: Background is for research only. Code agents run foreground.
| Skill | Use With | When |
|---|---|---|
| superpowers:subagent-driven-development | Sequential Pipeline | Single-task implement-review cycles |
| superpowers:dispatching-parallel-agents | Parallel Sweep | Independent bug fixes |
| superpowers:writing-plans | Full Lifecycle | Create the plan before Phase 2 |
| superpowers:executing-plans | Sequential Pipeline | Batch execution in separate session |
| superpowers:brainstorming | Full Lifecycle | Before research phase |
| superpowers:requesting-code-review | All strategies | Quality gates between phases |
| superpowers:verification-before-completion | All strategies | Final validation |