npx skills add https://github.com/boshu2/agentops --skill crank
Quick Ref: Autonomous epic execution. Runs /swarm for each wave with runtime-native spawning. Output: closed issues + final vibe.
YOU MUST EXECUTE THIS WORKFLOW. Do not just describe it.
Autonomous execution: implement all issues until the epic is DONE.
CLI dependencies: bd (issue tracking), ao (knowledge flywheel). Both optional — see skills/shared/SKILL.md for fallback table. If bd is unavailable, use TaskList for issue tracking and skip beads sync. If ao is unavailable, skip knowledge injection/extraction.
For Claude runtime feature coverage (agents/hooks/worktree/settings), the shared source of truth is skills/shared/references/claude-code-latest-features.md, mirrored locally at references/claude-code-latest-features.md.
Beads mode (bd available):
Crank (orchestrator) Swarm (executor)
| |
+-> bd ready (wave issues) |
| |
+-> TaskCreate from beads --->+-> Select spawn backend (codex sub-agents | claude teams | fallback)
| |
+-> /swarm --->+-> Spawn workers per backend
| | (fresh context per wave)
+-> Verify + bd update <---+-> Workers report via backend channel
| |
+-> Loop until epic DONE <---+-> Cleanup backend resources after wave
TaskList mode (bd unavailable):
Crank (orchestrator, TaskList mode) Swarm (executor)
| |
+-> TaskList() (wave tasks) |
| |
+-> /swarm --->+-> Select spawn backend per wave
| |
+-> Verify via TaskList() <---+-> Workers report via backend channel
| |
+-> Loop until all completed <---+-> Cleanup backend resources after wave
Separation of concerns:
Ralph alignment source: ../shared/references/ralph-loop-contract.md (fresh context, scheduler/worker split, disk-backed state, backpressure).
| Flag | Default | Description |
|---|---|---|
| --test-first | off | Enable spec-first TDD: SPEC WAVE generates contracts, TEST WAVE generates failing tests, IMPL WAVES make tests pass |
| --per-task-commits | off | Opt-in per-task commit strategy. Falls back to wave-batch when file boundaries overlap. See references/commit-strategies.md. |
MAX_EPIC_WAVES = 50 (hard limit across entire epic)
This prevents infinite loops on circular dependencies or cascading failures. Typical epics use 5–10 waves max.
THE SISYPHUS RULE: Not done until explicitly DONE.
After each wave, output completion marker:
- <promise>DONE</promise> - Epic truly complete, all issues closed
- <promise>BLOCKED</promise> - Cannot proceed (with reason)
- <promise>PARTIAL</promise> - Incomplete (with remaining items)

Never claim completion without the marker.
Given /crank [epic-id | plan-file.md | "description"]:
Search for relevant learnings before starting the epic:
# If ao CLI available, pull relevant knowledge for this epic
if command -v ao &>/dev/null; then
# Pull knowledge scoped to the epic
ao lookup --query "<epic-title>" --limit 5 2>/dev/null || \
ao search "epic execution implementation patterns" 2>/dev/null | head -20
# Check flywheel status
ao metrics flywheel status 2>/dev/null
# Get current ratchet state
ao ratchet status 2>/dev/null
fi
If ao not available, skip this step and proceed. The knowledge flywheel enhances but is not required.
if command -v bd &>/dev/null; then
TRACKING_MODE="beads"
else
TRACKING_MODE="tasklist"
echo "Note: bd CLI not found. Using TaskList for issue tracking."
fi
Tracking mode determines the source of truth for the rest of the workflow:
| | Beads Mode | TaskList Mode |
|---|---|---|
| Source of truth | bd (beads issues) | TaskList (Claude-native) |
| Find work | bd ready | TaskList() → pending, unblocked |
| Get details | bd show <id> | TaskGet(taskId) |
| Mark complete | bd update <id> --status closed | TaskUpdate(taskId, status="completed") |
| Track retries | bd comments add | Task description update |
| Epic tracking | bd update <epic-id> --append-notes | In-memory wave counter |
Beads mode:
If epic ID provided: Use it directly. Do NOT ask for confirmation.
If no epic ID: Discover it:
bd list --type epic --status open 2>/dev/null | head -5
Single-Epic Scope Check (WARN): If bd list --type epic --status open returns more than one epic, log a warning:
WARN: Multiple open epics detected. /crank operates on a single epic.
Use --allow-multi-epic to suppress this warning.
If multiple epics found, ask user which one (WARN, not FAIL).
TaskList mode:
If input is an epic ID → Error: "bd CLI required for beads epic tracking. Install bd or provide a plan file / task list."
If input is a plan file path (.md): parse the plan into work items (one TaskCreate per distinct work item) and set dependencies with TaskUpdate(addBlockedBy).
If no input: check TaskList() for existing pending tasks.
If input is a description string: decompose it into discrete tasks (one TaskCreate for each).
Beads mode:
# Initialize crank tracking in epic notes
bd update <epic-id> --append-notes "CRANK_START: wave=0 at $(date -Iseconds)" 2>/dev/null
TaskList mode: Track wave counter in memory only. No external state needed.
Track in memory: wave=0
# Check for --test-first flag
if [[ "$TEST_FIRST" == "true" ]]; then
# Classify issues by type
# spec-eligible: feature, bug, task → SPEC + TEST waves apply
# skip: docs, chore, ci, epic → standard implementation waves only
SPEC_ELIGIBLE=()
SPEC_SKIP=()
if [[ "$TRACKING_MODE" == "beads" ]]; then
for issue in $READY_ISSUES; do
ISSUE_TYPE=$(bd show "$issue" 2>/dev/null | grep "Type:" | head -1 | awk '{print tolower($NF)}')
case "$ISSUE_TYPE" in
feature|bug|task) SPEC_ELIGIBLE+=("$issue") ;;
docs|chore|ci|epic) SPEC_SKIP+=("$issue") ;;
*)
echo "WARNING: Issue $issue has unknown type '$ISSUE_TYPE'. Defaulting to spec-eligible."
SPEC_ELIGIBLE+=("$issue")
;;
esac
done
else
# TaskList mode: no bd available, default all to spec-eligible
SPEC_ELIGIBLE=($READY_ISSUES)
echo "TaskList mode: all ${#SPEC_ELIGIBLE[@]} issues defaulted to spec-eligible (no bd type info)"
fi
echo "Test-first mode: ${#SPEC_ELIGIBLE[@]} spec-eligible, ${#SPEC_SKIP[@]} skipped (docs/chore/ci/epic)"
fi
If --test-first is NOT set, skip Steps 3b and 3c entirely — behavior is unchanged.
Beads mode:
bd show <epic-id> 2>/dev/null
TaskList mode: TaskList() to see all tasks and their status/dependencies.
Beads mode:
Find issues that can be worked on (no blockers):
bd ready 2>/dev/null
bd ready returns the current wave - all unblocked issues. These can be executed in parallel because they have no dependencies on each other.
TaskList mode:
TaskList() → filter for status=pending, no blockedBy (or all blockers completed). These are the current wave.
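The wave idea (ready = every blocker closed) can be sketched with a toy dependency map. The issue IDs and dependencies below are hypothetical; neither bd nor TaskList is required for the illustration:

```shell
# Toy sketch of wave computation: an issue is ready when every blocker
# is closed and it is not itself closed. IDs and map are hypothetical.
declare -A BLOCKERS=( [ag-1]="" [ag-2]="ag-1" [ag-3]="ag-1" [ag-4]="ag-2 ag-3" )
CLOSED="ag-1"
READY=""
for issue in "${!BLOCKERS[@]}"; do
  ok=1
  for dep in ${BLOCKERS[$issue]}; do
    [[ " $CLOSED " == *" $dep "* ]] || ok=0
  done
  if [[ $ok -eq 1 && " $CLOSED " != *" $issue "* ]]; then
    READY="$READY $issue"
  fi
done
# READY now holds the current wave: ag-2 and ag-3 (ag-4 stays blocked)
echo "Ready wave:$READY"
```

ag-2 and ag-3 can run in parallel; ag-4 joins a later wave once both close.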
Verify there are issues to work on:
If 0 ready issues found (beads mode) or 0 pending unblocked tasks (TaskList mode):
STOP and return error:
"No ready issues found for this epic. Either:
- All issues are blocked (check dependencies)
- Epic has no child issues (run /plan first)
- All issues already completed"
Also verify: epic has at least 1 child issue total. An epic with 0 children means /plan was not run.
Do NOT proceed with empty issue list - this produces false "epic complete" status.
If the epic has 3 or more child issues, require pre-mortem evidence before proceeding.
# Count child issues (beads mode)
if [[ "$TRACKING_MODE" == "beads" ]]; then
CHILD_COUNT=$(bd show "$EPIC_ID" 2>/dev/null | grep -c "↳")
else
CHILD_COUNT=$(TaskList | grep -c "pending\|in_progress")
fi
if [[ "$CHILD_COUNT" -ge 3 ]]; then
# Look for pre-mortem report in .agents/council/
PRE_MORTEM=$(ls -t .agents/council/*pre-mortem* 2>/dev/null | head -1)
if [[ -z "$PRE_MORTEM" ]]; then
echo "STOP: Epic has $CHILD_COUNT issues but no pre-mortem evidence found."
echo "Run '/pre-mortem' first to validate the plan before cranking."
echo "<promise>BLOCKED</promise>"
echo "Reason: pre-mortem required for epics with 3+ issues"
# STOP - do not continue
exit 1
fi
echo "Pre-mortem evidence found: $PRE_MORTEM"
fi
Why: Pre-mortems have positive ROI for 3+ issue epics; cost (~2 min) is negligible.
Before spawning workers, grep for every string being changed by the plan.
This catches stale cross-references that the plan missed. Grep for each key term being modified across the codebase. Matches outside the planned file set indicate scope gaps — add those files to the epic or document as tech debt.
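A minimal sketch of that sweep, using a throwaway directory; the planned file set and the search term are hypothetical stand-ins for what the plan would supply:

```shell
# Sketch of the pre-spawn stale-reference sweep. The planned file set
# and the term "estimateTokens" are hypothetical examples.
tmp=$(mktemp -d); cd "$tmp"
echo 'def estimateTokens(): pass' > planned.py
echo 'see estimateTokens docs'    > stale.md      # not in the plan
PLANNED="planned.py"
HITS=$(grep -rl "estimateTokens" . | sed 's|^\./||' | sort)
# Matches outside the planned file set are scope gaps
GAPS=$(comm -23 <(echo "$HITS") <(echo "$PLANNED" | sort))
echo "Scope gaps: $GAPS"
```

Each gap file gets added to the epic or logged as tech debt.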
Skip if --test-first is NOT set or if no spec-eligible issues exist.
For each spec-eligible issue (feature/bug/task), spawn a SPEC worker:
- Prompt subject: SPEC: <issue-title>
- Inputs: the contract template (skills/crank/references/contract-template.md) and codebase access (read-only)
- Output: .agents/specs/contract-<issue-id>.md, which must contain ## Invariants AND ## Test Cases
- After the wave, run skills/crank/references/wave1-spec-consistency-checklist.md across all contracts in this wave. If any item fails, re-run SPEC workers for affected issues and do NOT proceed to TEST WAVE.

For BLOCKED recovery and the full worker prompt, read skills/crank/references/test-first-mode.md.
Skip if --test-first is NOT set or if no spec-eligible issues exist.
For each spec-eligible issue, spawn a TEST worker with prompt subject TEST: <issue-title>.
For RED Gate enforcement and retry logic, read skills/crank/references/test-first-mode.md.
Summary: SPEC WAVE generates contracts from issues → TEST WAVE generates failing tests from contracts → RED Gate verifies all new tests fail before proceeding. Docs/chore/ci issues bypass both waves.
if command -v ao &>/dev/null; then
ao context assemble --task="<epic title>: wave $wave"
fi
This produces a 5-section briefing (GOALS, HISTORY, INTEL, TASK, PROTOCOL) at .agents/rpi/briefing-current.md with secrets redacted. Include the briefing path in each worker's TaskCreate description so workers start with full project context.
Worker prompt signpost:
Include in every worker prompt: "Knowledge artifacts are in .agents/. See .agents/AGENTS.md for navigation. Use ao lookup --query "topic" for learnings." Workers may lack .agents/ file access in the sandbox, so the lead should search .agents/learnings/ for relevant material and inline the top 3 results directly in the worker prompt body.

GREEN mode (--test-first only): If --test-first is set and SPEC/TEST waves have completed, modify worker prompts for spec-eligible issues: "Failing tests exist at <test-file-paths>. Make them pass. Do NOT modify test files. See GREEN Mode rules in /implement SKILL.md."

Issue typing + file manifests (REQUIRED): Include metadata.issue_type plus a metadata.files array in every TaskCreate. issue_type feeds active constraint applicability and validation policy; files feed swarm's pre-spawn conflict detection. Two workers claiming the same file in the same wave get serialized or worktree-isolated automatically. Derive both from the issue description, plan, or codebase exploration during planning. This is the shift-left edge of the prevention ratchet: compiled findings target issue type plus changed files, so missing metadata.issue_type weakens enforcement back into guesswork.
Grep-for-existing-functions (REQUIRED for new function issues): When an issue description says "create", "add", or "implement" a new function/utility, include metadata.grep_check with the function name pattern. Workers MUST grep the codebase for existing implementations before writing new code. This prevents utility duplication (e.g., estimateTokens was duplicated in context-orchestration-leverage because no grep check was specified).
Validation metadata policy (REQUIRED): For implementation tasks typed feature|bug|task, include metadata.validation.tests plus at least one structural check (files_exist or content_check). docs|chore|ci use an explicit test-exempt path and should still include applicable structural and/or command/lint checks. Do not omit metadata.issue_type and hope task-validation can infer it later.
Language Standards Injection (REQUIRED for code tasks):
Before spawning workers, detect project language and load applicable standards:
Detection: Check repo root for language markers:
- go.mod → Load Go standards from the standards skill (go.md, Testing section)
- pyproject.toml or setup.py → Load Python standards (python.md, Testing section)
- Cargo.toml → Load Rust standards (rust.md)
- package.json → Load TypeScript standards (typescript.md)

Injection: For issues typed feature|bug|task, the lead (not the worker) reads the standards file and includes the Testing section verbatim in each worker's task description. This is a prompt instruction the lead follows, not runtime detection logic.

Test-specific rules: For issues that create or modify test files, also inject the test-specific standards.

Note: This is advisory — the lead agent follows the instruction. Enforcement comes from the standards content being in the worker's context.
Validation block extraction (beads mode): Before building TaskCreate calls, extract validation metadata from each issue's description. /plan embeds conformance checks as fenced validation blocks in issue bodies:
# For each issue in the current wave, extract validation JSON from bd show output
ISSUE_BODY=$(bd show "$ISSUE_ID" 2>/dev/null)
VALIDATION_JSON=$(echo "$ISSUE_BODY" | sed -n '/^```validation$/,/^```$/{ /^```/d; p }')
if [[ -n "$VALIDATION_JSON" ]]; then
# Use extracted validation as metadata.validation in TaskCreate
echo "Extracted validation block for $ISSUE_ID"
else
# Fallback: generate default validation from files mentioned in description
# Use files_exist check for any file paths found in the issue body
MENTIONED_FILES=$(echo "$ISSUE_BODY" | grep -oE '[a-zA-Z0-9_/.-]+\.(go|py|ts|sh|md|yaml|json)' | sort -u)
VALIDATION_JSON="{\"files_exist\": [$(echo "$MENTIONED_FILES" | sed 's/.*/"&"/' | paste -sd,)]}"
echo "WARNING: No validation block in $ISSUE_ID — using fallback files_exist check"
fi
Inject the extracted or fallback VALIDATION_JSON into the metadata.validation field of each worker's TaskCreate. This closes the plan-to-crank validation pipeline: /plan writes conformance checks → bd stores them → /crank extracts and enforces them.
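The extraction path can be exercised without bd by feeding the sed expression an inline issue body shaped the way /plan writes it:

```shell
# Minimal check of the fenced validation-block extraction, using an
# inline issue body in place of `bd show` output.
ISSUE_BODY=$(printf 'Fix auth bug.\n\n```validation\n{"tests": "pytest -q"}\n```\n')
VALIDATION_JSON=$(echo "$ISSUE_BODY" | sed -n '/^```validation$/,/^```$/{ /^```/d; p }')
echo "$VALIDATION_JSON"
```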
TaskCreate(
subject="ag-1234: Add auth middleware",
description="...",
activeForm="Implementing ag-1234",
metadata={
"issue_type": "feature",
"files": ["src/middleware/auth.py", "tests/test_auth.py"],
"validation": {
"tests": "pytest tests/test_auth.py -v",
"files_exist": ["src/middleware/auth.py", "tests/test_auth.py"]
}
}
)
Display file-ownership table (from swarm Step 1.5):
Before spawning, verify the ownership map has zero unresolved conflicts:
File Ownership Map (Wave $wave):
┌─────────────────────────────┬──────────┬──────────┐
│ File │ Owner │ Conflict │
├─────────────────────────────┼──────────┼──────────┤
│ (populated by swarm) │ │ │
└─────────────────────────────┴──────────┴──────────┘
Conflicts: 0
If conflicts > 0: Do NOT invoke /swarm. Resolve by serializing conflicting tasks into sub-waves or merging task scope before proceeding.
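Conflict detection from metadata.files reduces to finding files claimed by more than one task. A sketch with hypothetical manifests:

```shell
# Sketch of pre-spawn conflict detection from metadata.files.
# Task manifests are hypothetical; any file owned by more than one
# task in the same wave is a conflict.
TASK1_FILES="src/auth.py tests/test_auth.py"
TASK2_FILES="src/auth.py docs/auth.md"
CONFLICTS=$(printf '%s\n' $TASK1_FILES $TASK2_FILES | sort | uniq -d)
COUNT=$(echo "$CONFLICTS" | grep -c .)
echo "Conflicts: $COUNT"
```

Here src/auth.py is claimed twice, so the two tasks would be serialized into sub-waves or merged.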
BEFORE each wave:
wave=$((wave + 1))
WAVE_START_SHA=$(git rev-parse HEAD)
if [[ "$TRACKING_MODE" == "beads" ]]; then
bd update <epic-id> --append-notes "CRANK_WAVE: $wave at $(date -Iseconds)" 2>/dev/null
fi
# CHECK GLOBAL LIMIT
if [[ $wave -ge 50 ]]; then
echo "<promise>BLOCKED</promise>"
echo "Global wave limit (50) reached."
# STOP - do not continue
fi
Pre-Spawn: Spec Consistency Gate
Prevents workers from implementing inconsistent or incomplete specs. Hard failures (missing frontmatter, bad structure, scope conflicts) block spawn; WARN-level issues (terminology, implementability) do not.
if [ -d .agents/specs ] && ls .agents/specs/contract-*.md &>/dev/null; then
bash scripts/spec-consistency-gate.sh .agents/specs/ || {
echo "⚠️ Spec consistency check failed — fix contract files before spawning workers"
exit 1
}
fi
Cross-cutting constraint injection (SDD):
Before spawning workers, check for cross-cutting constraints:
# PSEUDO-CODE
# Guard clause: skip if plan has no boundaries (backward compat)
PLAN_FILE=$(ls -t .agents/plans/*.md 2>/dev/null | head -1)
if [[ -n "$PLAN_FILE" ]] && grep -q "## Boundaries" "$PLAN_FILE"; then
# Extract "Always" boundaries and convert to cross_cutting checks
# Read the plan's ## Cross-Cutting Constraints section or derive from ## Boundaries
# Inject into every TaskCreate's metadata.validation.cross_cutting
fi
# "Ask First" boundaries: in auto mode, log as annotation only (no blocking)
# In --interactive mode, prompt before proceeding
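One concrete shape for the "Always" boundary extraction described in the pseudo-code, with hypothetical plan content:

```shell
# Pull "Always" boundary bullets out of the plan's ## Boundaries
# section. The plan text here is a hypothetical stand-in.
PLAN=$(printf '## Boundaries\n- Always: run gofmt before commit\n- Ask First: deleting files\n\n## Waves\n')
ALWAYS=$(awk '/^## Boundaries/{f=1; next} /^## /{f=0} f && /^- Always:/' <<<"$PLAN")
echo "$ALWAYS"
```

Each extracted bullet would then be converted into a cross_cutting check and injected into every TaskCreate's metadata.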
When creating TaskCreate for each wave issue, include cross-cutting constraints in metadata:
{
"validation": {
"files_exist": [...],
"content_check": {...},
"cross_cutting": [
{"name": "...", "type": "content_check", "file": "...", "pattern": "..."}
]
}
}
For wave execution details (beads sync, TaskList bridging, swarm invocation), read skills/crank/references/team-coordination.md.
Cross-cutting validation (SDD):
After per-task validation passes, run cross-cutting checks across all files modified in the wave:
# Only if cross_cutting constraints were injected
if [[ -n "$CROSS_CUTTING_CHECKS" ]]; then
WAVE_FILES=$(git diff --name-only "${WAVE_START_SHA}..HEAD")
for check in $CROSS_CUTTING_CHECKS; do
run_validation_check "$check" "$WAVE_FILES"
done
fi
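run_validation_check is not defined in this file; one possible shape, assuming the check JSON format from the metadata example above (the real contract lives in skills/shared/validation-contract.md):

```shell
# Hypothetical sketch of run_validation_check: $1 is one cross_cutting
# check as JSON, $2 the wave's changed files.
run_validation_check() {
  local check="$1" files="$2"
  local type name pattern
  type=$(jq -r '.type' <<<"$check")
  name=$(jq -r '.name' <<<"$check")
  pattern=$(jq -r '.pattern' <<<"$check")
  if [[ "$type" == "content_check" ]]; then
    for f in $files; do
      grep -q "$pattern" "$f" || { echo "FAIL $name: $f"; return 1; }
    done
  fi
  echo "PASS $name"
}
tmp=$(mktemp); echo "Copyright 2024 Example" > "$tmp"
run_validation_check '{"name":"copyright","type":"content_check","pattern":"Copyright"}' "$tmp"
```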
Swarm executes per-task validation (see
skills/shared/validation-contract.md). Crank trusts swarm validation and focuses on beads sync.
For verification details, retry logic, and failure escalation, read skills/crank/references/team-coordination.md and skills/crank/references/failure-recovery.md.
Principle: Verify each wave meets acceptance criteria using lightweight inline judges. No skill invocations — prevents context explosion in the orchestrator loop.
For acceptance check details (diff computation, inline judges, verdict gating), read skills/crank/references/wave-patterns.md.
After each wave completes (post-vibe-gate, pre-next-wave), write a checkpoint file:
mkdir -p .agents/crank
cat > ".agents/crank/wave-${wave}-checkpoint.json" <<EOF
{
"schema_version": 1,
"wave": ${wave},
"timestamp": "$(date -Iseconds)",
"tasks_completed": $(echo "$COMPLETED_IDS" | jq -R 'split(" ")'),
"tasks_failed": $(echo "$FAILED_IDS" | jq -R 'split(" ")'),
"files_changed": $(git diff --name-only "${WAVE_START_SHA}..HEAD" | jq -R . | jq -s .),
"git_sha": "$(git rev-parse HEAD)",
"acceptance_verdict": "<PASS|WARN|FAIL>",
"commit_strategy": "<per-task|wave-batch|wave-batch-fallback>"
}
EOF
- COMPLETED_IDS / FAILED_IDS: space-separated issue IDs from the wave results.
- acceptance_verdict: verdict from the Wave Acceptance Check (Step 5.5). Used by final validation to skip redundant /vibe on clean epics.

After writing the wave checkpoint, copy it for downstream /vibe consumption:
mkdir -p .agents/vibe-context
cp ".agents/crank/wave-${wave}-checkpoint.json" .agents/vibe-context/latest-crank-wave.json
This provides /vibe a stable path to read the latest crank state without scanning wave checkpoint files. Uses file copy (not symlink) per repo conventions.
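On the consumer side, /vibe can then pull the verdict with a single jq read; a sketch assuming the checkpoint schema shown above (the sample file and temp dir are fabricated for illustration):

```shell
# Fabricate a sample checkpoint in a temp dir for the demo.
dir=$(mktemp -d)
cp_file="$dir/latest-crank-wave.json"
printf '{"acceptance_verdict":"PASS","wave":3}' > "$cp_file"

# Read the verdict, defaulting to UNKNOWN if the file or field is absent.
VERDICT=$(jq -r '.acceptance_verdict // "UNKNOWN"' "$cp_file" 2>/dev/null || echo UNKNOWN)
if [[ "$VERDICT" == "PASS" ]]; then
  echo "Last crank wave passed acceptance; a redundant /vibe can be skipped."
fi
```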
After each wave checkpoint, display a consolidated status table:
Wave $wave Status:
┌────────┬──────────────────────────────┬───────────┬────────────┬──────────┐
│ Task │ Subject │ Status │ Validation │ Duration │
├────────┼──────────────────────────────┼───────────┼────────────┼──────────┤
│ #1 │ Add auth middleware │ completed │ PASS │ 2m 14s │
│ #2 │ Fix rate limiting │ completed │ PASS │ 1m 47s │
│ #3 │ Update config schema │ failed │ FAIL │ 3m 02s │
└────────┴──────────────────────────────┴───────────┴────────────┴──────────┘
Epic Progress:
Issues closed: 5/12 (wave 3 of est. 5)
Blocked: 1 (#8, waiting on #7)
Next wave: #6, #7 (2 tasks, 0 conflicts)
This table is informational — it does not gate progression. Step 6 handles the loop decision.
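The "Issues closed" figure in the Epic Progress block can be derived directly from the wave checkpoints; a sketch assuming the checkpoint schema from the earlier step (the two sample checkpoints are fabricated here for illustration):

```shell
# Fabricate two sample wave checkpoints in a temp dir.
dir=$(mktemp -d)
printf '{"tasks_completed":["a-1","a-2"]}' > "$dir/wave-1-checkpoint.json"
printf '{"tasks_completed":["a-3"]}'       > "$dir/wave-2-checkpoint.json"

# Sum completed-task counts across all checkpoints.
total=0
for cp in "$dir"/wave-*-checkpoint.json; do
  total=$(( total + $(jq '.tasks_completed | length' "$cp") ))
done
echo "Issues closed so far: $total"
```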
After committing a wave, refresh the base SHA before spawning the next wave. This prevents cross-wave file collisions where later-wave worktrees overwrite earlier-wave fixes.
# After wave commit completes:
WAVE_COMMIT_SHA=$(git rev-parse HEAD)
# Verify the commit landed
if [[ "$WAVE_COMMIT_SHA" == "$WAVE_START_SHA" ]]; then
echo "WARNING: Wave commit did not advance HEAD. Check for commit failures."
fi
# Next wave's worktrees MUST branch from this SHA, not the original base.
# The swarm pre-spawn step uses HEAD as the worktree base, so this is
# automatic IF the wave commit happens BEFORE the next /swarm invocation.
echo "Wave $wave committed at $WAVE_COMMIT_SHA. Next wave branches from here."
Cross-wave shared file check:
Before spawning the next wave, cross-reference the next wave's file manifests against files changed in the current wave:
# Files modified by the just-completed wave
WAVE_CHANGED=$(git diff --name-only "${WAVE_START_SHA}..HEAD")
# Files planned for next wave (from TaskCreate metadata.files)
NEXT_WAVE_FILES=(<next wave file manifests>)
# Check for overlap
OVERLAP=$(comm -12 <(echo "$WAVE_CHANGED" | sort) <(printf '%s\n' "${NEXT_WAVE_FILES[@]}" | sort))
if [[ -n "$OVERLAP" ]]; then
echo "Cross-wave file overlap detected:"
echo "$OVERLAP"
echo "These files were modified in Wave $wave and are planned for Wave $((wave+1))."
echo "Worktrees will include Wave $wave changes (branched from $WAVE_COMMIT_SHA)."
fi
Why: In na-vs9, Wave 2 worktrees were created from pre-Wave-1 SHA. A Wave 2 agent overwrote Wave 1's .md→.json fix in rpi_phased_test.go because its worktree predated the fix. Refreshing the base SHA between waves eliminates this class of collision.
After completing a wave, check for newly unblocked issues (beads: bd ready, TaskList: TaskList()). Loop back to Step 4 if work remains, or proceed to Step 7 when done.
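The loop decision above can be sketched as a bounded while loop; `remaining_ready` and `run_wave` are hypothetical stand-ins for `bd ready | wc -l` (or a TaskList count) and the Steps 4-5 swarm/verify/commit cycle:

```shell
# Stubs: pretend three issues start ready and each wave closes one.
REMAINING=3
remaining_ready() { echo "$REMAINING"; }
run_wave() { REMAINING=$((REMAINING - 1)); echo "wave $1 done"; }

MAX_EPIC_WAVES=50
wave=0
while (( wave < MAX_EPIC_WAVES )); do
  if (( $(remaining_ready) == 0 )); then
    echo "All issues closed after $wave wave(s); proceed to Step 7."
    break
  fi
  wave=$((wave + 1))
  run_wave "$wave"
done
if (( wave >= MAX_EPIC_WAVES )); then
  echo "Global wave limit reached: emit the BLOCKED marker."
fi
```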
For detailed check/retry logic, read skills/crank/references/team-coordination.md.
When all issues complete, run ONE comprehensive vibe on recent changes. Fix CRITICAL issues before completion.
If hooks or lib/hook-helpers.sh were modified, verify embedded copies are in sync: cd cli && make sync-hooks.
For detailed validation steps, readskills/crank/references/failure-recovery.md.
Before extracting learnings, write a phase-2 summary for downstream /post-mortem consumption:
mkdir -p .agents/rpi
cat > ".agents/rpi/phase-2-summary-$(date +%Y-%m-%d)-crank.md" <<PHASE2
# Phase 2 Summary: Implementation
- **Epic:** <epic-id>
- **Waves completed:** ${wave}
- **Issues completed:** <completed-count>/<total-count>
- **Files modified:** $(git diff --name-only "${WAVE_START_SHA}..HEAD" | wc -l | tr -d ' ')
- **Status:** <DONE|PARTIAL|BLOCKED>
- **Completion marker:** <promise marker from Step 9>
- **Timestamp:** $(date -Iseconds)
PHASE2
This summary is consumed by /post-mortem Step 2.2 for scope reconciliation.
If ao CLI available: run ao forge transcript, ao flywheel close-loop --quiet, ao metrics flywheel status, and ao pool list --status=pending to extract and review learnings. If ao unavailable, skip and recommend /post-mortem manually.
Tell the user:
/post-mortem to review and promote learnings

Output completion marker:
<promise>DONE</promise>
Epic: <epic-id>
Issues completed: N
Iterations: M/50
Flywheel: <status from ao metrics flywheel status>
If stopped early:
<promise>BLOCKED</promise>
Reason: <global limit reached | unresolvable blockers>
Issues remaining: N
Iterations: M/50
Crank follows FIRE (Find → Ignite → Reap → Vibe → Escalate) for each wave. Loop until all issues are CLOSED (beads) or all tasks are completed (TaskList).
For FIRE loop details, parallel wave models, and wave acceptance check, read skills/crank/references/wave-patterns.md.
- bd is checked at start; TaskList is used if absent.
- /crank plan.md decomposes the plan into tasks automatically.

Ambiguous verbs cause workers to implement the wrong operation. Use explicit instructions:
| Verb | Clarified Instruction |
|---|---|
| "Extract" | "Remove from source AND write to new file. Source line count must decrease." |
| "Remove" | "Delete the content. Verify it no longer appears in the file." |
| "Update" | "Change [specific field] from [old] to [new]." |
| "Consolidate" | "Merge from [A, B] into [C]. Delete [A, B] after merge." |
Include wc -l assertions in task metadata when content moves between files.
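Such an assertion boils down to comparing line counts before and after the move; a minimal sketch (the sample file and the before/after values are illustrative, not real task metadata):

```shell
# Simulate an "Extract" task and assert the source actually shrank.
SRC=$(mktemp)
printf 'line1\nline2\nline3\nline4\n' > "$SRC"
BEFORE=$(wc -l < "$SRC")

# Pretend two lines were extracted to a new file: delete them from source.
sed -i.bak '3,4d' "$SRC"
AFTER=$(wc -l < "$SRC")

if (( AFTER >= BEFORE )); then
  echo "FAIL: source line count did not decrease ($BEFORE -> $AFTER)"
else
  echo "OK: source shrank ($BEFORE -> $AFTER)"
fi
```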
User says: /crank ag-m0r
Loads learnings (ao lookup --query "<epic-title>"), gets epic details (bd show), finds unblocked issues (bd ready), creates TaskList, invokes /swarm per wave with runtime-native spawning. Workers execute in parallel; lead verifies, commits per wave. Loops until all issues closed, then batched vibe + ao forge transcript.
User says: /crank .agents/plans/auth-refactor.md
Reads plan file, decomposes into TaskList tasks with dependencies. Invokes /swarm per wave, lead verifies and commits. Loops until all tasks completed, then final vibe.
User says: /crank --test-first ag-xj9
Runs: classify issues → SPEC WAVE (contracts) → TEST WAVE (failing tests, no impl access) → RED Gate (tests must fail) → GREEN IMPL WAVES (make tests pass) → final vibe. See skills/crank/references/test-first-mode.md.
If all remaining issues are blocked (e.g., circular dependencies), crank outputs <promise>BLOCKED</promise> with the blocking chains and exits cleanly. See skills/crank/references/failure-recovery.md.
| Problem | Cause | Solution |
|---|---|---|
| "No ready issues found" | Epic has no children or all blocked | Run /plan first or check deps with bd show <id> |
| "Global wave limit (50) reached" | Excessive retries or circular deps | Review .agents/crank/wave-N-checkpoint.json, fix blockers manually |
| Wave vibe gate fails repeatedly | Workers producing non-conforming code | Check .agents/council/ vibe reports, refine constraints |
| Workers complete but files missing | Permission errors or wrong paths | Check swarm output files, verify write permissions |
| RED Gate passes (tests don't fail) | Test wave workers wrote implementation | Re-run TEST WAVE with no-implementation-access prompt |
| TaskList mode can't find epic | bd CLI required for beads tracking | Provide plan file (.md) instead, or install bd |
See skills/crank/references/troubleshooting.md for extended troubleshooting.
- skills/crank/references/wave-patterns.md
- skills/crank/references/team-coordination.md
- skills/crank/references/failure-recovery.md
- references/failure-taxonomy.md
- references/fire.md
Injection: For issues typed feature|bug|task, the lead (not the worker) Reads the standards file and includes the Testing section verbatim in each worker's task description. This is a prompt instruction the lead follows, not runtime detection logic.
Test-specific rules: For issues that create or modify test files, also inject: