npx skills add https://github.com/boshu2/agentops --skill swarm
Spawn isolated agents to execute tasks in parallel. Fresh context per agent (Ralph Wiggum pattern).
Integration modes:
- /swarm invoked directly on the current TaskList
- /crank creates tasks from beads, invokes /swarm for each wave

Requires a multi-agent runtime. Swarm needs a runtime that can spawn parallel subagents. If unavailable, work must be done sequentially in the current session.
Mayor (this session)
|
+-> Plan: TaskCreate with dependencies
|
+-> Identify wave: tasks with no blockers
|
+-> Select spawn backend (runtime-native first: Claude teams in Claude runtime, Codex sub-agents in Codex runtime; fallback tasks if unavailable)
|
+-> Assign: TaskUpdate(taskId, owner="worker-<id>", status="in_progress")
|
+-> Spawn workers via selected backend
| Workers receive pre-assigned task, execute atomically
|
+-> Wait for completion (wait() | SendMessage | TaskOutput)
|
+-> Validate: Review changes when complete
|
+-> Cleanup backend resources (close_agent | TeamDelete | none)
|
+-> Repeat: New team + new plan if more work needed
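The mayor loop above can be sketched in Python; the in-memory task dicts and the spawn/wait steps are simplified stand-ins for the runtime's TaskUpdate and backend spawn calls, not the real APIs:

```python
# Sketch of the mayor loop: identify unblocked tasks, "spawn" workers
# for them, mark complete, repeat until no pending work remains.

def run_waves(tasks):
    """tasks: {id: {"blocked_by": [ids], "status": "pending"}}.
    Returns the list of waves (each wave = task ids run in parallel)."""
    waves = []
    while any(t["status"] == "pending" for t in tasks.values()):
        # A wave = pending tasks whose blockers are all completed
        wave = [tid for tid, t in tasks.items()
                if t["status"] == "pending"
                and all(tasks[b]["status"] == "completed"
                        for b in t["blocked_by"])]
        if not wave:
            raise RuntimeError("dependency cycle: no spawnable tasks")
        for tid in wave:
            tasks[tid]["status"] = "in_progress"  # assign + spawn worker
        for tid in wave:
            tasks[tid]["status"] = "completed"    # wait + validate
        waves.append(wave)
    return waves

tasks = {
    "1": {"blocked_by": [], "status": "pending"},
    "2": {"blocked_by": ["1"], "status": "pending"},
    "3": {"blocked_by": ["1"], "status": "pending"},
}
print(run_waves(tasks))  # [['1'], ['2', '3']]
```

The second wave runs tasks 2 and 3 in parallel because both unblock once task 1 completes, mirroring the auth-system walkthrough later in this document.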
Given /swarm:
Use runtime capability detection, not hardcoded tool names. Swarm requires:
See skills/shared/SKILL.md for the capability contract.
After detecting your backend, read the matching reference for concrete spawn/wait/message/cleanup examples:
- ../shared/references/claude-code-latest-features.md
- ../shared/references/backend-claude-teams.md
- ../shared/references/backend-codex-subagents.md
- ../shared/references/backend-background-tasks.md
- ../shared/references/backend-inline.md

See also references/local-mode.md for swarm-specific execution details (worktrees, validation, git commit policy, wave repeat).
Use TaskList to see current tasks. If none, create them:
TaskCreate(subject="Implement feature X", description="Full details...",
metadata={"files": ["src/feature_x.py", "tests/test_feature_x.py"], "validation": {...}})
TaskUpdate(taskId="2", addBlockedBy=["1"]) # Add dependencies after creation
Every TaskCreate must include a metadata.files array listing the files that worker is expected to modify. This enables mechanical conflict detection before spawning a wave.
Pull file lists from the plan, issue description, or codebase exploration during planning.
If you cannot enumerate files yet, add a planning step to identify them before spawning workers. An empty or missing manifest signals the need for more planning, not unconstrained workers.
Workers receive the manifest in their prompt and are instructed to stay within it (see references/local-mode.md worker prompt template).
{
  "files": ["cli/cmd/ao/goals.go", "cli/cmd/ao/goals_test.go"],
  "validation": {
    "tests": "go test ./cli/cmd/ao/...",
    "files_exist": ["cli/cmd/ao/goals.go"]
  }
}
if command -v ao &>/dev/null; then
ao context assemble --task='<swarm objective or wave description>'
fi
This produces a 5-section briefing (GOALS, HISTORY, INTEL, TASK, PROTOCOL) at .agents/rpi/briefing-current.md with secrets redacted. Include the briefing path in each worker's TaskCreate description so workers start with full project context.
Worker prompt signpost:
Knowledge artifacts are in .agents/. See .agents/AGENTS.md for navigation. Use ao lookup --query "topic" for learnings.
Workers may not have .agents/ file access in the sandbox. The lead should search .agents/learnings/ for relevant material and inline the top 3 results directly in the worker prompt body.
Skip this step if all tasks already have populated metadata.files arrays.
If any task is missing its file manifest, auto-generate it before Step 2:
Spawn haiku Explore agents (one per task missing manifests) to identify files:
Agent(subagent_type="Explore", model="haiku",
prompt="Given this task: '<task subject + description>', identify all files
that will need to be created or modified. Return a JSON array of file paths.")
Inject manifests back into tasks:
TaskUpdate(taskId=task.id, metadata={"files": [explored_files]})
Once all tasks have manifests, proceed to Step 2 where the Pre-Spawn Conflict Check enforces file ownership.
Find tasks that are:
- pending
- not blocked by incomplete tasks

These can run in parallel.
Before spawning a wave, scan all worker file manifests for overlapping files:
wave_tasks = [tasks with status=pending and no blockers]
all_files = {}
for task in wave_tasks:
for f in task.metadata.files:
if f in all_files:
CONFLICT: f is claimed by both all_files[f] and task.id
all_files[f] = task.id
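A runnable version of the conflict scan above (a sketch; task dicts are assumed to carry the metadata.files manifests described earlier):

```python
def find_conflicts(wave_tasks):
    """Map each manifest file to its owning task; collect files
    claimed by more than one task in the same wave."""
    owners, conflicts = {}, {}
    for task in wave_tasks:
        for f in task["metadata"]["files"]:
            if f in owners:
                # File already claimed: record every claimant
                conflicts.setdefault(f, [owners[f]]).append(task["id"])
            else:
                owners[f] = task["id"]
    return conflicts

wave = [
    {"id": "task-1", "metadata": {"files": ["src/auth/middleware.go",
                                            "src/config/settings.go"]}},
    {"id": "task-3", "metadata": {"files": ["src/config/settings.go"]}},
]
print(find_conflicts(wave))
# {'src/config/settings.go': ['task-1', 'task-3']}
```

An empty return value means the wave is safe to spawn into a shared worktree; any non-empty result feeds the conflict-handling choices that follow.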
On conflict detection:
- serialize the conflicting tasks into separate sub-waves, or
- isolate them with worktrees (--worktrees) so each operates on a separate branch.

Do not spawn workers with overlapping file manifests into the same shared-worktree wave. This is the primary cause of build breaks and merge conflicts in parallel execution.
Display ownership table before spawning:
File Ownership Map (Wave N):
┌─────────────────────────────┬──────────┬──────────┐
│ File │ Owner │ Conflict │
├─────────────────────────────┼──────────┼──────────┤
│ src/auth/middleware.go │ task-1 │ │
│ src/auth/middleware_test.go │ task-1 │ │
│ src/api/routes.go │ task-2 │ │
│ src/config/settings.go │ task-1,3 │ YES │
└─────────────────────────────┴──────────┴──────────┘
Conflicts: 1 (resolved: serialized task-3 into sub-wave 2)
When executing wave 2+ (not the first wave), verify workers branch from the latest commit — not a stale SHA from before the prior wave's changes were committed.
# PSEUDO-CODE
# Capture current HEAD after prior wave's commit
CURRENT_SHA=$(git rev-parse HEAD)
# If using worktrees, verify they're up to date
if [[ -n "$WORKTREE_PATH" ]]; then
(cd "$WORKTREE_PATH" && git pull --rebase origin "$(git branch --show-current)" 2>/dev/null || true)
fi
Cross-reference prior wave diff against current wave file manifests:
# PSEUDO-CODE
# Files changed in prior wave
PRIOR_WAVE_FILES=$(git diff --name-only "${WAVE_START_SHA}..HEAD")
# Check for overlap with current wave manifests
for task in $WAVE_TASKS; do
TASK_FILES=$(echo "$task" | jq -r '.metadata.files[]')
OVERLAP=$(comm -12 <(echo "$PRIOR_WAVE_FILES" | sort) <(echo "$TASK_FILES" | sort))
if [[ -n "$OVERLAP" ]]; then
echo "WARNING: Task $task touches files modified in prior wave: $OVERLAP"
echo "Workers MUST read the latest version (post-prior-wave commit)"
fi
done
Why: Without base-SHA refresh, wave 2+ workers may read stale file versions from before wave 1 changes were committed. This causes workers to overwrite prior wave edits or implement against outdated code. See crank Step 5.7 (wave checkpoint) for the SHA tracking pattern.
For detailed local mode execution (team creation, worker spawning, race condition prevention, git commit policy, validation contract, cleanup, and repeat logic), read skills/swarm/references/local-mode.md.
Platform pitfalls: Include relevant pitfalls from references/worker-pitfalls.md in worker prompts for the target language/platform. For example, inject the Bash section for shell script tasks, the Go section for Go tasks, etc. This prevents common worker failures from known platform gotchas.
Mayor: "Let's build a user auth system"
1. /plan -> Creates tasks:
#1 [pending] Create User model
#2 [pending] Add password hashing (blockedBy: #1)
#3 [pending] Create login endpoint (blockedBy: #1)
#4 [pending] Add JWT tokens (blockedBy: #3)
#5 [pending] Write tests (blockedBy: #2, #3, #4)
2. /swarm -> Spawns agent for #1 (only unblocked task)
3. Agent #1 completes -> #1 now completed
-> #2 and #3 become unblocked
4. /swarm -> Spawns agents for #2 and #3 in parallel
5. Continue until #5 completes
6. /vibe -> Validate everything
When a worker discovers work outside their assigned scope, they MUST NOT modify files outside their file manifest. Instead, append to .agents/swarm/scope-escapes.jsonl:
{"worker": "<worker-id>", "finding": "<description>", "suggested_files": ["path/to/file"], "timestamp": "<ISO8601>"}
The lead reviews scope escapes after each wave and creates follow-up tasks as needed.
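A worker-side helper for appending a scope-escape record might look like this (a sketch; the field names follow the JSONL schema above):

```python
import datetime
import json
import pathlib

def record_scope_escape(worker_id, finding, suggested_files,
                        path=".agents/swarm/scope-escapes.jsonl"):
    """Append one scope-escape record; never modify out-of-manifest files."""
    entry = {
        "worker": worker_id,
        "finding": finding,
        "suggested_files": suggested_files,
        "timestamp": datetime.datetime.now(
            datetime.timezone.utc).isoformat(),
    }
    p = pathlib.Path(path)
    p.parent.mkdir(parents=True, exist_ok=True)
    with p.open("a") as f:          # append-only: one JSON object per line
        f.write(json.dumps(entry) + "\n")
    return entry
```

Append-only JSONL keeps concurrent workers from clobbering each other's records, and the lead can replay the file line by line after the wave.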
- Workers write results to .agents/swarm/results/<id>.json; the orchestrator reads files (NOT Task returns or SendMessage content)
- Use send_input (Codex) or SendMessage (Claude) for coordination only

This ties into the full workflow:
/research -> Understand the problem
/plan -> Decompose into beads issues
/crank -> Autonomous epic loop
+-- /swarm -> Execute each wave in parallel
/vibe -> Validate results
/post-mortem -> Extract learnings
Direct use (no beads):
TaskCreate -> Define tasks
/swarm -> Execute in parallel
The knowledge flywheel captures learnings from each agent.
# List all tasks
TaskList()
# Mark task complete after notification
TaskUpdate(taskId="1", status="completed")
# Add dependency between tasks
TaskUpdate(taskId="2", addBlockedBy=["1"])
| Parameter | Description | Default |
|---|---|---|
| --max-workers=N | Max concurrent workers | 5 |
| --from-wave <json-file> | Load wave from OL hero hunt output (see OL Wave Integration) | - |
| --per-task-commits | Commit per task instead of per wave (for attribution/audit) | Off (per-wave) |
| Scenario | Use |
|---|---|
| Multiple independent tasks | /swarm (parallel) |
| Sequential dependencies | /swarm with blockedBy |
| Mix of both | /swarm spawns waves, each wave parallel |
Follows the Ralph Wiggum Pattern: fresh context per execution unit.
Ralph alignment source: ../shared/references/ralph-loop-contract.md.
When /crank invokes /swarm: Crank bridges beads to TaskList, swarm executes with fresh-context agents, crank syncs results back.
| You Want | Use | Why |
|---|---|---|
| Fresh-context parallel execution | /swarm | Each spawned agent is a clean slate |
| Autonomous epic loop | /crank | Loops waves via swarm until epic closes |
| Just swarm, no beads | /swarm directly | TaskList only, skip beads |
| RPI progress gates | /ratchet | Tracks progress; does not execute work |
When /swarm --from-wave <json-file> is invoked, the swarm reads wave data from an OL hero hunt output file and executes it with completion backflow to OL.
# --from-wave requires ol CLI on PATH
which ol >/dev/null 2>&1 || {
echo "Error: ol CLI required for --from-wave. Install ol or use swarm without wave integration."
exit 1
}
If ol is not on PATH, exit immediately with the error above. Do not fall back to normal swarm mode.
The --from-wave JSON file contains ol hero hunt output:
{
"wave": [
{"id": "ol-527.1", "title": "Add auth middleware", "spec_path": "quests/ol-527/specs/ol-527.1.md", "priority": 1},
{"id": "ol-527.2", "title": "Fix rate limiting", "spec_path": "quests/ol-527/specs/ol-527.2.md", "priority": 2}
],
"blocked": [
{"id": "ol-527.3", "title": "Integration tests", "blocked_by": ["ol-527.1", "ol-527.2"]}
],
"completed": [
{"id": "ol-527.0", "title": "Project setup"}
]
}
1. Parse the JSON file and extract the wave array.
2. Create TaskList tasks from wave entries (one TaskCreate per entry):
for each entry in wave:
TaskCreate(
subject="[{entry.id}] {entry.title}",
description="OL bead {entry.id}\nSpec: {entry.spec_path}\nPriority: {entry.priority}\n\nRead the spec file at {entry.spec_path} for full requirements.",
metadata={
"ol_bead_id": entry.id,
"ol_spec_path": entry.spec_path,
"ol_priority": entry.priority
}
)
3. Execute swarm normally on those tasks (Step 2 onward from main execution flow). Tasks are ordered by priority (lower number = higher priority).
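Steps 1 through 3 can be sketched in Python; TaskCreate is replaced here with plain dicts, so this illustrates the parsing and priority ordering rather than the real runtime call:

```python
import json

def wave_to_tasks(wave_json):
    """Parse hero-hunt output and build priority-ordered task dicts
    (lower priority number = higher priority, spawned first)."""
    data = json.loads(wave_json)
    entries = sorted(data["wave"], key=lambda e: e["priority"])
    return [{
        "subject": f"[{e['id']}] {e['title']}",
        "metadata": {
            "ol_bead_id": e["id"],
            "ol_spec_path": e["spec_path"],
            "ol_priority": e["priority"],
        },
    } for e in entries]

doc = json.dumps({"wave": [
    {"id": "ol-527.2", "title": "Fix rate limiting",
     "spec_path": "quests/ol-527/specs/ol-527.2.md", "priority": 2},
    {"id": "ol-527.1", "title": "Add auth middleware",
     "spec_path": "quests/ol-527/specs/ol-527.1.md", "priority": 1},
]})
print([t["subject"] for t in wave_to_tasks(doc)])
# ['[ol-527.1] Add auth middleware', '[ol-527.2] Fix rate limiting']
```

Entries in the blocked array are intentionally skipped; they become eligible only after a later hero hunt reports their blockers complete.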
# Extract quest ID from bead ID (e.g., ol-527.1 -> ol-527)
QUEST_ID=$(echo "$BEAD_ID" | sed 's/\.[^.]*$//')
ol hero ratchet "$BEAD_ID" --quest "$QUEST_ID"
Ratchet result handling:
| Exit Code | Meaning | Action |
|---|---|---|
| 0 | Bead complete in OL | Mark task completed, log success |
| 1 | Ratchet validation failed | Mark task as failed, log the validation error from stderr |
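A sketch of the ratchet call and the exit-code handling above; the process runner is injectable so the example does not require the ol CLI to be installed:

```python
import re
import subprocess

def ratchet_bead(bead_id, run=subprocess.run):
    """Call `ol hero ratchet` and map its exit code to a task outcome.
    Pass a stub `run` in tests; the real call needs ol on PATH."""
    quest_id = re.sub(r"\.[^.]*$", "", bead_id)  # ol-527.1 -> ol-527
    proc = run(["ol", "hero", "ratchet", bead_id, "--quest", quest_id],
               capture_output=True, text=True)
    if proc.returncode == 0:
        return ("completed", None)               # bead complete in OL
    return ("failed", proc.stderr.strip())       # log validation error
```

Keeping the runner injectable also lets the orchestrator record stderr from failed ratchets without re-running the command.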
/swarm --from-wave /tmp/wave-ol-527.json
# Reads wave JSON -> creates 2 tasks from wave entries
# Spawns workers for ol-527.1 and ol-527.2
# On completion of ol-527.1:
# ol hero ratchet ol-527.1 --quest ol-527 -> exit 0 -> bead complete
# On completion of ol-527.2:
# ol hero ratchet ol-527.2 --quest ol-527 -> exit 0 -> bead complete
# Wave done: 2/2 beads ratcheted in OL
See also:
- skills/swarm/references/local-mode.md
- skills/swarm/references/validation-contract.md

User says: /swarm
What happens:
Result: Multi-wave execution with fresh-context workers per wave, zero race conditions.
User says: Create three tasks for API refactor, then /swarm
What happens:
- /swarm without beads integration

Result: Parallel execution of independent tasks using TaskList only.
User says: /swarm --from-wave /tmp/wave-ol-527.json
What happens:
- Verify the ol CLI is on PATH (pre-flight check)
- Run ol hero ratchet <bead-id> --quest <quest-id> for each bead

Result: OL beads executed with completion reporting back to Olympus.
Default behavior: Auto-detect and prefer runtime-native isolation first.
In the Claude runtime, first verify teammate profiles with claude agents and use agent definitions with isolation: worktree for write-heavy parallel waves. If native isolation is unavailable, use the manual git worktree fallback below.
| Backend | Isolation Mechanism | How It Works |
|---|---|---|
| Claude teams (Task with team_name) | isolation: worktree in agent definition | Runtime creates an isolated git worktree per teammate; changes are invisible to other agents and the main tree until merged |
| Background tasks (Task with run_in_background) | isolation: worktree in agent definition | Same worktree isolation as teams; each background agent gets its own worktree |
| Inline (no spawn) | None | Operates directly on the main working tree; no isolation possible |
Key diagnostic: When isolation: worktree is specified but worker changes appear in the main working tree (no separate worktree path in the Task result), isolation did NOT engage. This is a silent failure — the runtime accepted the parameter but did not create a worktree.
After spawning workers with isolation: worktree, the lead MUST verify isolation engaged:
- Check each Task result for a worktreePath field. If present, isolation is active.
- If worktreePath is absent but isolation: worktree was specified, fall back to manual git worktree creation (see below) or switch to serial inline execution.

When to use worktrees: Activate worktree isolation when:
- tasks span multiple epics, or
- worker file manifests overlap (detect with git diff --name-only).

Evidence: 4 parallel agents in a shared worktree produced 1 build break and 1 algorithm duplication (see .agents/evolve/dispatch-comparison.md). Worktree isolation prevents collisions by construction.
# Heuristic: multi-epic = worktrees needed
# Single epic with independent files = shared worktree OK
# Check if tasks span multiple epics
# e.g., task subjects contain different epic IDs (ol-527, ol-531, ...)
# If yes: use worktrees
# If no: proceed with default shared worktree
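The multi-epic heuristic in the comments above, as a sketch (epic IDs are assumed to match the ol-NNN pattern used in this document's examples):

```python
import re

def needs_worktrees(task_subjects):
    """Multi-epic wave -> worktrees needed; single epic with
    independent files -> the shared worktree is OK."""
    epics = set()
    for subject in task_subjects:
        # ol-527.1 and ol-527.2 both match epic ol-527
        epics.update(re.findall(r"\bol-\d+", subject))
    return len(epics) > 1

print(needs_worktrees(["[ol-527.1] auth", "[ol-531.2] docs"]))   # True
print(needs_worktrees(["[ol-527.1] auth", "[ol-527.2] login"]))  # False
```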
Before spawning workers, create an isolated worktree per epic:
# For each epic ID in the wave:
git worktree add /tmp/swarm-<epic-id> -b swarm/<epic-id>
Example for 3 epics:
git worktree add /tmp/swarm-ol-527 -b swarm/ol-527
git worktree add /tmp/swarm-ol-531 -b swarm/ol-531
git worktree add /tmp/swarm-ol-535 -b swarm/ol-535
Each worktree starts at HEAD of current branch. The worker branch (swarm/<epic-id>) is ephemeral — deleted after merge.
Pass the worktree path as the working directory in each worker prompt:
WORKING DIRECTORY: /tmp/swarm-<epic-id>
All file reads, writes, and edits MUST use paths rooted at /tmp/swarm-<epic-id>.
Do NOT operate on /path/to/main/repo directly.
Workers run in isolation — changes in one worktree cannot conflict with another.
Result file path: Workers still write results to the main repo's .agents/swarm/results/:
# Worker writes to main repo result path (not the worktree)
RESULT_DIR=/path/to/main/repo/.agents/swarm/results
The orchestrator path for .agents/swarm/results/ is always the main repo, not the worktree.
After a worker's task passes validation, merge the worktree branch back to main:
# From the main repo (not worktree)
git merge --no-ff swarm/<epic-id> -m "chore: merge swarm/<epic-id> (epic <epic-id>)"
Merge order: respect task dependencies. If epic B blocked by epic A, merge A before B.
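Dependency-respecting merge order is a topological sort over the blockedBy graph; a sketch:

```python
def merge_order(blocked_by):
    """blocked_by: {epic: [epics it depends on]}.
    Returns epics in safe merge order (dependencies merged first)."""
    order, done = [], set()

    def visit(epic, stack=()):
        if epic in done:
            return
        if epic in stack:
            raise ValueError(f"dependency cycle at {epic}")
        for dep in sorted(blocked_by.get(epic, [])):
            visit(dep, stack + (epic,))
        done.add(epic)
        order.append(epic)

    for epic in sorted(blocked_by):  # sorted keys break ties deterministically
        visit(epic)
    return order

deps = {"B": ["A"], "C": ["A", "B"], "A": []}
print(merge_order(deps))  # ['A', 'B', 'C']
```

A cycle raises instead of merging in an arbitrary order, surfacing a planning error before any branch lands on main.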
Merge Arbiter Protocol:
Replace manual conflict resolution with a structured sequential rebase:
Merge order: Dependency-sorted (leaves first), then by task ID for ties
Sequential rebase (one branch at a time):
# For each branch in merge order:
git rebase main swarm/<epic-id>
On rebase conflict:
If tests fail after conflict resolution:
Display merge status table after all merges complete:
Merge Status:
┌────────────────────┬──────────┬────────────┬───────────┐
│ Branch │ Status │ Conflicts │ Fix-ups │
├────────────────────┼──────────┼────────────┼───────────┤
│ swarm/task-1 │ MERGED │ 0 │ 0 │
│ swarm/task-2 │ MERGED │ 1 (auto) │ 0 │
│ swarm/task-3 │ MERGED │ 1 (fixup) │ 1 │
└────────────────────┴──────────┴────────────┴───────────┘
Workers must not merge — lead-only commit policy still applies.
# After successful merge:
git worktree remove /tmp/swarm-<epic-id>
git branch -d swarm/<epic-id>
Run cleanup even on partial failures (same reaper pattern as team cleanup).
1. Detect: does this wave need worktrees? (multi-epic or file overlap)
2. For each epic:
a. git worktree add /tmp/swarm-<epic-id> -b swarm/<epic-id>
3. Spawn workers with worktree path injected into prompt
4. Wait for completion (same as shared mode)
5. Validate each worker's changes (run tests inside worktree)
6. For each passing epic:
a. git merge --no-ff swarm/<epic-id>
b. git worktree remove /tmp/swarm-<epic-id>
c. git branch -d swarm/<epic-id>
7. Commit all merged changes (team lead, sole committer)
| Parameter | Description | Default |
|---|---|---|
| --worktrees | Force worktree isolation for this wave | Off (auto-detect) |
| --no-worktrees | Force shared worktree even for multi-epic | Off |
Cause: isolation: worktree was specified but the Task result has no worktreePath — worker changes land in the main tree. Solution: Verify agent definitions include isolation: worktree. If the runtime does not support declarative isolation, fall back to manual git worktree add (see Worktree Isolation section). For overlapping-file waves, abort and switch to serial execution.
Cause: Multiple workers editing the same file in parallel. Solution: Use worktree isolation (--worktrees) for multi-epic dispatch. For single-epic waves, use wave decomposition to group workers by file scope. Homogeneous waves (all Go, all docs) prevent conflicts.
Cause: Stale team from prior session not cleaned up. Solution: Run rm -rf ~/.claude/teams/<team-name> then retry.
Cause: codex CLI not installed or API key not configured. Solution: Run which codex to verify installation. Check ~/.codex/config.toml for API credentials.
Cause: Worker task too large or blocked on external dependency. Solution: Break tasks into smaller units. Add timeout metadata to worker tasks.
Cause: --from-wave used but ol CLI not on PATH. Solution: Install Olympus CLI or run swarm without --from-wave flag.
Cause: Backend selection failed or spawning API unavailable. Solution: Check which spawn backend was selected (look for "Using: " message). Verify Codex CLI (which codex) or native team API availability.
Weekly Installs: 255
GitHub Stars: 197
First Seen: Feb 2, 2026
Security Audits: Gen Agent Trust Hub (Pass), Socket (Pass), Snyk (Pass)
Installed on: opencode (248), gemini-cli (245), codex (243), github-copilot (243), cursor (236), kimi-cli (231)