npx skills add https://github.com/boshu2/agentops --skill post-mortem
Purpose: Wrap up completed work — validate it shipped correctly, extract learnings, process the knowledge backlog, activate high-value insights, and retire stale knowledge.
Six phases:
/post-mortem # wraps up recent work
/post-mortem epic-123 # wraps up specific epic
/post-mortem --quick "insight" # quick-capture single learning (no council)
/post-mortem --process-only # skip council+extraction, run Phase 3-5 on backlog
/post-mortem --skip-activate # extract + process but don't write MEMORY.md
/post-mortem --deep recent # thorough council review
/post-mortem --mixed epic-123 # cross-vendor (Claude + Codex)
/post-mortem --explorers=2 epic-123 # deep investigation before judging
/post-mortem --debate epic-123 # two-round adversarial review
/post-mortem --skip-checkpoint-policy epic-123 # skip ratchet chain validation
| Flag | Default | Description |
|---|---|---|
| --quick "text" | off | Quick-capture a single learning directly to .agents/learnings/ without running a full post-mortem. Formerly handled by /retro --quick. |
| --process-only | off | Skip council and extraction (Phase 1-2). Run Phase 3-5 on the existing backlog only. |
| --skip-activate | off | Extract and process learnings but do not write to MEMORY.md (skip Phase 4 promotions). |
| --deep | off | 3 judges (the post-mortem default) |
| --mixed | off | Cross-vendor (Claude + Codex) judges |
| --explorers=N | off | Each judge spawns N explorers before judging |
| --debate | off | Two-round adversarial review |
| --skip-checkpoint-policy | off | Skip ratchet chain validation |
| --skip-sweep | off | Skip the deep audit sweep before council |
Given /post-mortem --quick "insight text":
Create a slug from the content: first meaningful words, lowercase, hyphens, max 50 chars.
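As a sketch of that normalization (the exact tokenization is implementation-defined; this assumes POSIX tr/sed/cut, and the insight text is just an example):

```shell
# Derive a slug: lowercase, collapse non-alphanumerics to single hyphens,
# truncate to 50 chars, trim edge hyphens.
insight="Cache invalidation must happen before the write, not after"
slug=$(printf '%s' "$insight" \
  | tr '[:upper:]' '[:lower:]' \
  | tr -cs 'a-z0-9' '-' \
  | cut -c1-50 \
  | sed 's/^-//; s/-*$//')
echo ".agents/learnings/$(date +%Y-%m-%d)-quick-${slug}.md"
```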
Write to: .agents/learnings/YYYY-MM-DD-quick-<slug>.md
---
type: learning
source: post-mortem-quick
date: YYYY-MM-DD
---
# Learning: <Short Title>
**Category**: <auto-classify: debugging|architecture|process|testing|security>
**Confidence**: medium
## What We Learned
<user's insight text>
## Source
Quick capture via `/post-mortem --quick`
This skips the full pipeline — writes directly to learnings, no council or backlog processing.
Learned: <one-line summary>
Saved to: .agents/learnings/YYYY-MM-DD-quick-<slug>.md
For deeper reflection, use `/post-mortem` without --quick.
Done. Return immediately after confirmation.
Before proceeding, verify:
git rev-parse --git-dir 2>/dev/null — if not, error: "Not in a git repository"
git log --oneline -1 2>/dev/null — if empty, error: "No commits found. Run /implement first."
If --process-only is set: skip Pre-Flight Checks through Step 3. Jump directly to Phase 3: Process Backlog.
Before Step 0.5 and Step 2.5, load required reference docs into context using the Read tool:
REQUIRED_REFS=(
"skills/post-mortem/references/checkpoint-policy.md"
"skills/post-mortem/references/metadata-verification.md"
"skills/post-mortem/references/closure-integrity-audit.md"
)
For each reference file, use the Read tool to load its content and hold it in context for use in later steps. Do NOT just test file existence with [ -f ] -- actually read the content so it is available when Steps 0.5 and 2.5 need it.
If a reference file does not exist (Read returns an error), log a warning and add it as a checkpoint warning in the council context. Proceed only if the missing reference is intentionally deferred.
Read references/checkpoint-policy.md for the full checkpoint-policy preflight procedure. It validates the ratchet chain, checks artifact availability, and runs idempotency checks. BLOCK on prior FAIL verdicts; WARN on everything else.
Record the post-mortem start time for cycle-time tracking:
PM_START=$(date +%s)
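At report time, the Duration field can be computed from PM_START; a minimal sketch:

```shell
PM_START=$(date +%s)          # captured at post-mortem start
# ... phases 1-5 run here ...
ELAPSED=$(( $(date +%s) - PM_START ))
printf 'Duration: %dh %dm %ds\n' \
  $((ELAPSED / 3600)) $((ELAPSED % 3600 / 60)) $((ELAPSED % 60))
```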
If epic/issue ID provided: Use it directly.
If no ID: Find recently completed work:
# Check for closed beads
bd list --status closed --since "7 days ago" 2>/dev/null | head -5
# Or check recent git activity
git log --oneline --since="7 days ago" | head -10
Before invoking council, load the original plan for comparison:
bd show <id> to get the spec/description
ls .agents/plans/ | grep <target-keyword>
git log --oneline | head -10 to find the relevant bead reference
If a plan is found, include it in the council packet's context.spec field:
{
"spec": {
"source": "bead na-0042",
"content": "<the original plan/spec text>"
}
}
Before council and retro synthesis, load compiled prevention outputs when they exist:
.agents/planning-rules/*.md
.agents/pre-mortem-checks/*.md
Use these compiled artifacts first, then fall back to .agents/findings/registry.jsonl only when compiled outputs are missing or incomplete. Carry matched finding IDs into the retro as Applied findings / Known risks applied context so post-mortem can judge whether the flywheel actually prevented rediscovery.
Check for a crank-generated phase-2 summary:
PHASE2_SUMMARY=$(ls -t .agents/rpi/phase-2-summary-*-crank.md 2>/dev/null | head -1)
if [ -n "$PHASE2_SUMMARY" ]; then
echo "Phase-2 summary found: $PHASE2_SUMMARY"
# Read the summary with the Read tool for implementation context
fi
If available, use the phase-2 summary to understand what was implemented, how many waves ran, and which files were modified.
Compare the original plan scope against what was actually delivered:
Read the plan from .agents/plans/ (most recent)
List epic children (bd children <epic-id>)
Read references/closure-integrity-audit.md for the full procedure. It mechanically verifies:
Evidence is checked in order: commit, then staged, then worktree
Beads appearing in bd list but not linked to a parent in bd show
Include results in the council packet as context.closure_integrity. WARN on 1-2 findings, FAIL on 3+.
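The WARN/FAIL thresholds reduce to a simple mapping; as a sketch:

```shell
FINDINGS=3   # example count of closure-integrity findings
if   [ "$FINDINGS" -ge 3 ]; then VERDICT=FAIL
elif [ "$FINDINGS" -ge 1 ]; then VERDICT=WARN
else VERDICT=PASS; fi
echo "closure_integrity: $VERDICT"
```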
If a closure is evidence-only or closes before its proving commit exists, emit a proof artifact with bash skills/post-mortem/scripts/write-evidence-only-closure.sh and cite the durable tracked copy at .agents/releases/evidence-only-closures/<target-id>.json in the council packet. The writer also emits a local council copy at .agents/council/evidence-only-closures/<target-id>.json. The packet must record the selected evidence_mode plus repo-state detail that distinguishes staged files from broader worktree state so active-session audits stay mechanically replayable.
Read references/metadata-verification.md for the full verification procedure. Mechanically checks: plan vs actual files, file existence in commits, cross-references in docs, and ASCII diagram integrity. Failures are included in the council packet as context.metadata_failures.
Skip if --quick or --skip-sweep is set.
Before council runs, dispatch a deep audit sweep to systematically discover issues across all changed files. This uses the same protocol as /vibe --deep — see the deep audit protocol in the vibe skill (skills/vibe/) for the full specification.
In summary:
Record the sweep results in the sweep manifest at .agents/council/sweep-manifest.md
Why: Post-mortem council judges exhibit satisfaction bias when reviewing monolithic file sets — they stop at ~10 findings regardless of actual issue count. Per-file explorers with category checklists find 3x more issues, and the sweep manifest gives judges structured input to adjudicate rather than discover from scratch.
Skip conditions:
--quick flag -> skip (fast inline path)
--skip-sweep flag -> skip (old behavior: judges do pure discovery)
Run /council with the retrospective preset and always 3 judges:
/council --deep --preset=retrospective validate <epic-or-recent>
Default (3 judges with retrospective perspectives):
plan-compliance: What was planned vs what was delivered? What's missing? What was added?
tech-debt: What shortcuts were taken? What will bite us later? What needs cleanup?
learnings: What patterns emerged? What should be extracted as reusable knowledge?
Post-mortem always uses 3 judges (--deep) because completed work deserves thorough review.
Timeout: Post-mortem inherits council timeout settings. If judges time out, the council report will note partial results. Post-mortem treats a partial council report the same as a full report — the verdict stands with available judges.
The plan/spec content is injected into the council packet context so the plan-compliance judge can compare planned vs delivered.
With --quick (inline, no spawning):
/council --quick validate <epic-or-recent>
Single-agent structured review. Fast wrap-up without spawning.
With debate mode:
/post-mortem --debate epic-123
Enables adversarial two-round review for post-implementation validation. Use for high-stakes shipped work where missed findings have production consequences. See /council docs for full --debate details.
Advanced options (passed through to council):
--mixed — Cross-vendor (Claude + Codex) with retrospective perspectives
--preset=<name> — Override with different personas (e.g., --preset=ops for production readiness)
--explorers=N — Each judge spawns N explorers to investigate the implementation deeply before judging
--debate — Two-round adversarial review (judges critique each other's findings before final verdict)
Inline extraction of learnings from the completed work (formerly delegated to the retro skill).
# Recent commits
git log --oneline -20 --since="7 days ago"
# Epic children (if epic ID provided)
bd children <epic-id> 2>/dev/null | head -20
# Recent plans and research
ls -lt .agents/plans/ .agents/research/ 2>/dev/null | head -10
Read relevant artifacts: research documents, plan documents, commit messages, code changes. Use the Read tool and git commands to understand what was done.
If retrospecting an epic: Run the closure integrity quick-check from references/context-gathering.md (Phantom Bead Detection + Multi-Wave Regression Scan). Include any warnings in findings.
Ask these questions:
What went well?
What went wrong?
What did we discover?
For each learning, capture:
Write to: .agents/learnings/YYYY-MM-DD-<topic>.md
---
id: learning-YYYY-MM-DD-<slug>
type: learning
date: YYYY-MM-DD
category: <category>
confidence: <high|medium|low>
---
# Learning: <Short Title>
## What We Learned
<1-2 sentences describing the insight>
## Why It Matters
<1 sentence on impact/value>
## Source
<What work this came from>
---
# Learning: <Next Title>
**ID**: L2
...
For each learning extracted in Step EX.3, classify:
Question: "Does this learning reference specific files, packages, or architecture in THIS repo? Or is it a transferable pattern that helps any project?"
Repo-specific: write to .agents/learnings/ (existing behavior from Step EX.3). Use git rev-parse --show-toplevel to resolve the repo root — never write relative to cwd.
Transferable: write the abstracted version to ~/.agents/learnings/YYYY-MM-DD-<slug>.md (NOT local — one copy only)
Run abstraction lint check:
file="<path-to-written-global-file>"
grep -iEn '(internal/|cmd/|\.go:|/pkg/|/src/|AGENTS\.md|CLAUDE\.md)' "$file" 2>/dev/null
grep -En '[A-Z][a-z]+[A-Z][a-z]+\.(go|py|ts|rs)' "$file" 2>/dev/null
grep -En '\./[a-z]+/' "$file" 2>/dev/null
If matches: WARN user with matched lines, ask to proceed or revise. Never block the write.
Note: Each learning goes to ONE location (local or global). No promoted_to needed — there's no local copy to mark when writing directly to global.
Example abstraction:
Before backlog processing, normalize reusable council findings into .agents/findings/registry.jsonl.
Use the tracked contract in docs/contracts/finding-registry.md:
Each row carries dedup_key, provenance, pattern, detection_question, checklist_item, applicable_when, and confidence
applicable_when must use the controlled vocabulary from the contract
Append or merge rows by dedup_key
This registry is the v1 advisory prevention surface. It complements learnings and next-work; it does not replace them.
After the registry mutation, refresh compiled outputs immediately so the same session can benefit from the updated prevention set.
If hooks/finding-compiler.sh exists, run:
bash hooks/finding-compiler.sh --quiet 2>/dev/null || true
This promotes registry rows into .agents/findings/*.md, refreshes .agents/planning-rules/*.md and .agents/pre-mortem-checks/*.md, and rewrites draft constraint metadata under .agents/constraints/. Active enforcement still depends on the constraint index lifecycle and runtime hook support, but compilation itself is no longer deferred.
Score, deduplicate, and flag stale learnings across the full backlog. This phase runs on ALL learnings, not just those extracted in Phase 2.
Read references/backlog-processing.md for detailed scoring formulas, deduplication logic, and staleness criteria.
MARKER=".agents/ao/last-processed"
mkdir -p .agents/ao
if [ ! -f "$MARKER" ]; then
date -v-30d +%Y-%m-%dT%H:%M:%S 2>/dev/null || date -d "30 days ago" --iso-8601=seconds > "$MARKER"
fi
LAST_PROCESSED=$(cat "$MARKER")
find .agents/learnings/ -name "*.md" -newer "$MARKER" -not -path "*/archive/*" -type f | sort
If zero files found: report "Backlog empty — no unprocessed learnings" and skip to Phase 4.
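The empty-backlog check can be sketched as:

```shell
# Find learnings newer than the last-processed marker; if none, skip ahead.
MARKER=".agents/ao/last-processed"
UNPROCESSED=$(find .agents/learnings/ -name "*.md" -newer "$MARKER" \
  -not -path "*/archive/*" -type f 2>/dev/null | sort)
if [ -z "$UNPROCESSED" ]; then
  echo "Backlog empty — no unprocessed learnings"   # then skip to Phase 4
fi
```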
For each pair of unprocessed learnings:
Compare # Learning: titles
Archive duplicates with a merged_into: pointer
Compute composite score for each learning:
| Factor | Values | Points |
|---|---|---|
| Confidence | high=3, medium=2, low=1 | 1-3 |
| Citations | default=1, +1 per cite in .agents/ao/citations.jsonl | 1+ |
| Recency | <7d=3, <30d=2, else=1 | 1-3 |
Score = confidence + citations + recency
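A per-file scoring helper might look like this (a sketch; the citations.jsonl lookup-by-filename and the frontmatter field name are assumptions):

```shell
# Composite score = confidence (1-3) + citations (1+) + recency (1-3).
score_learning() {
  file="$1"
  conf=$(grep -m1 '^confidence:' "$file" | awk '{print $2}')
  case "$conf" in high) c=3 ;; medium) c=2 ;; *) c=1 ;; esac

  # Citations: default 1, +1 per reference (assumed: one line per cite)
  extra=$(grep -c "$(basename "$file")" .agents/ao/citations.jsonl 2>/dev/null)
  cites=$(( 1 + ${extra:-0} ))

  # Recency from file age in days (GNU stat, falling back to BSD stat)
  mtime=$(stat -c %Y "$file" 2>/dev/null || stat -f %m "$file")
  age=$(( ( $(date +%s) - mtime ) / 86400 ))
  if [ "$age" -lt 7 ]; then r=3; elif [ "$age" -lt 30 ]; then r=2; else r=1; fi

  echo $(( c + cites + r ))
}
```

For a fresh high-confidence learning with no citations this yields 3 + 1 + 3 = 7, above the Phase 4 promotion threshold of 6.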
Learnings that are >30 days old AND have zero citations are flagged for retirement in Phase 5.
# Flag but do not archive yet — Phase 5 handles retirement
if [ "$DAYS_OLD" -gt 30 ] && [ "$CITE_COUNT" -eq 0 ]; then
echo "STALE: $LEARNING_FILE (${DAYS_OLD}d old, 0 citations)"
fi
Phase 3 (Process Backlog) Summary:
- N learnings scanned
- N duplicates merged
- N scored (range: X-Y)
- N flagged stale
Promote high-value learnings and feed downstream systems. Read references/activation-policy.md for detailed promotion thresholds and procedures.
If --skip-activate is set: Skip this phase entirely. Report "Phase 4 skipped (--skip-activate)."
Learnings with score >= 6 are promoted:
Appended under ## Key Lessons in MEMORY.md, one bullet per learning:
- **<Title>** — <one-line insight> (source: `.agents/learnings/<filename>`)
Important: Append only. Never overwrite MEMORY.md.
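An append-only promotion can be sketched as follows (the section-bootstrap step and all values are illustrative assumptions):

```shell
title="Validate metadata before closing"
insight="Closures without proving commits hide regressions"
src=".agents/learnings/2025-01-15-metadata.md"

# Append-only: create the section once if missing, then add one bullet.
grep -q '^## Key Lessons' MEMORY.md 2>/dev/null || printf '\n## Key Lessons\n' >> MEMORY.md
printf -- '- **%s** — %s (source: `%s`)\n' "$title" "$insight" "$src" >> MEMORY.md
```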
If registry rows changed during this post-mortem, rerun the compiler before feeding next-work so downstream sessions read the freshest compiled prevention outputs:
bash hooks/finding-compiler.sh --quiet 2>/dev/null || true
Actionable improvements identified during processing -> append one schema v1.3 batch entry to .agents/rpi/next-work.jsonl using the tracked contract in ../../.agents/rpi/next-work.schema.md and the write procedure in references/harvest-next-work.md:
mkdir -p .agents/rpi
# Build VALID_ITEMS via the schema-validation flow in references/harvest-next-work.md
# Then append one entry per post-mortem / epic.
ENTRY_TIMESTAMP="$(date -Iseconds)"
SOURCE_EPIC="${EPIC_ID:-recent}"
VALID_ITEMS_JSON="${VALID_ITEMS_JSON:-[]}"
printf '%s\n' "$(jq -cn \
--arg source_epic "$SOURCE_EPIC" \
--arg timestamp "$ENTRY_TIMESTAMP" \
--argjson items "$VALID_ITEMS_JSON" \
'{
source_epic: $source_epic,
timestamp: $timestamp,
items: $items,
consumed: false,
claim_status: "available",
claimed_by: null,
claimed_at: null,
consumed_by: null,
consumed_at: null
}'
)" >> .agents/rpi/next-work.jsonl
date -Iseconds > .agents/ao/last-processed
This must be the LAST action in Phase 4.
Phase 4 (Activate) Summary:
- N promoted to MEMORY.md
- N duplicates merged
- N flagged for retirement
- N constraints compiled
- N improvements fed to next-work.jsonl
Archive learnings that are no longer earning their keep.
Learnings flagged in Phase 3 (>30d old, zero citations):
mkdir -p .agents/learnings/archive
for f in <stale-files>; do
mv "$f" .agents/learnings/archive/
echo "Archived: $f (stale: >30d, 0 citations)"
done
Learnings merged during Phase 3 deduplication were already archived with merged_into: pointers. Verify the pointers are valid:
for f in .agents/learnings/archive/*.md; do
[ -f "$f" ] || continue
MERGED_INTO=$(grep "^merged_into:" "$f" 2>/dev/null | awk '{print $2}')
if [ -n "$MERGED_INTO" ] && [ ! -f "$MERGED_INTO" ]; then
echo "WARN: $f points to missing file: $MERGED_INTO"
fi
done
If any archived learning was previously promoted to MEMORY.md, remove those entries:
for f in <archived-files>; do
BASENAME=$(basename "$f")
# Check if MEMORY.md references this file
if grep -q "$BASENAME" MEMORY.md 2>/dev/null; then
echo "WARN: MEMORY.md references archived learning: $BASENAME — consider removing"
fi
done
Note: Do not auto-delete MEMORY.md entries. WARN the user and let them decide.
Phase 5 (Retire) Summary:
- N stale learnings archived
- N superseded learnings archived
- N MEMORY.md references to review
Write to: .agents/council/YYYY-MM-DD-post-mortem-<topic>.md
---
id: post-mortem-YYYY-MM-DD-<topic-slug>
type: post-mortem
date: YYYY-MM-DD
source: "[[.agents/plans/YYYY-MM-DD-<plan-slug>]]"
---
# Post-Mortem: <Epic/Topic>
**Epic:** <epic-id or "recent">
**Duration:** <elapsed time from PM_START to now>
**Cycle-Time Trend:** <compare against prior post-mortems — is this faster or slower? Check .agents/council/ for prior post-mortem Duration values>
## Council Verdict: PASS / WARN / FAIL
| Judge | Verdict | Key Finding |
|-------|---------|-------------|
| Plan-Compliance | ... | ... |
| Tech-Debt | ... | ... |
| Learnings | ... | ... |
### Implementation Assessment
<council summary>
### Concerns
<any issues found>
## Learnings (from Phase 2)
### What Went Well
- ...
### What Was Hard
- ...
### Do Differently Next Time
- ...
### Patterns to Reuse
- ...
### Anti-Patterns to Avoid
- ...
### Footgun Entries (Required)
List discovered footguns — common mistakes or surprising behaviors that cost time:
| Footgun | Impact | Prevention |
|---------|--------|-----------|
| description | how it wasted time | how to prevent |
These entries are promoted to `.agents/learnings/` and injected into future worker prompts to prevent recurrence. Zero-cycle lag between discovery and prevention.
## Knowledge Lifecycle
### Backlog Processing (Phase 3)
- Scanned: N learnings
- Merged: N duplicates
- Flagged stale: N
### Activation (Phase 4)
- Promoted to MEMORY.md: N
- Constraints compiled: N
- Next-work items fed: N
### Retirement (Phase 5)
- Archived: N learnings
## Proactive Improvement Agenda
| # | Area | Improvement | Priority | Horizon | Effort | Evidence |
|---|------|-------------|----------|---------|--------|----------|
| 1 | repo / execution / ci-automation | ... | P0/P1/P2 | now/next-cycle/later | S/M/L | ... |
## Prior Findings Resolution Tracking
| Metric | Value |
|---|---|
| Backlog entries analyzed | ... |
| Prior findings total | ... |
| Resolved findings | ... |
| Unresolved findings | ... |
| Resolution rate | ...% |
| Source Epic | Findings | Resolved | Unresolved | Resolution Rate |
|---|---:|---:|---:|---:|
| ... | ... | ... | ... | ...% |
## Command-Surface Parity Checklist
| Command File | Run-path Covered by Test? | Evidence (file:line or test name) | Intentionally Uncovered? | Reason |
|---|---|---|---|---|
| cli/cmd/ao/<command>.go | yes/no | ... | yes/no | ... |
## Next Work
| # | Title | Type | Severity | Source | Target Repo |
|---|-------|------|----------|--------|-------------|
| 1 | <title> | tech-debt / improvement / pattern-fix / process-improvement | high / medium / low | council-finding / retro-learning / retro-pattern | <repo-name or *> |
### Recommended Next /rpi
/rpi "<highest-value improvement>"
## Status
[ ] CLOSED - Work complete, learnings captured
[ ] FOLLOW-UP - Issues need addressing (create new beads)
After writing the post-mortem report, analyze extraction + council context and proactively propose improvements to repo quality and execution quality.
Read the extraction output (from Phase 2) and the council report (from Step 3). For each learning, ask:
Coverage requirements:
- repo (code/contracts/docs quality)
- execution (planning/implementation/review workflow)
- ci-automation (validation/tooling reliability)

Write process improvement items with type `process-improvement` (distinct from `tech-debt` or `improvement`). Each item must have:
- `title`: imperative form, e.g. "Add pre-commit lint check"
- `area`: which part of the development process to improve
- `description`: 2-3 sentences describing the change and why retro evidence supports it
- `evidence`: which retro finding or council finding motivates this
- `priority`: P0 / P1 / P2
- `horizon`: now / next-cycle / later
- `effort`: S / M / L

These items feed directly into Step 5 (Harvest Next Work) alongside council findings. They are the flywheel's growth vector — each cycle makes the system smarter.
Write this into the post-mortem report under ## Proactive Improvement Agenda.
Example output:
## Proactive Improvement Agenda
| # | Area | Improvement | Priority | Horizon | Effort | Evidence |
|---|------|-------------|----------|---------|--------|----------|
| 1 | ci-automation | Add validation metadata requirement for Go tasks | P0 | now | S | Workers shipped untested code when metadata didn't require `go test` |
| 2 | execution | Add consistency-check finding category in review | P1 | next-cycle | M | Partial refactoring left stale references undetected |
After Step 4.5, compute and include prior-findings resolution tracking from .agents/rpi/next-work.jsonl. Read references/harvest-next-work.md for the jq queries that compute totals and per-source resolution rates. Write results into ## Prior Findings Resolution Tracking in the post-mortem report.
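As an illustration of what those queries compute, here is a minimal ao-free sketch of the overall resolution rate. The `"status": "resolved"` field is an assumption, not the tracked schema — the authoritative jq queries live in references/harvest-next-work.md:

```shell
# Hedged sketch: overall resolution rate from next-work.jsonl.
# The "status" field and its "resolved" value are assumptions here;
# references/harvest-next-work.md holds the authoritative jq queries.
resolution_rate() {
  # $1 = path to next-work.jsonl; prints resolved percentage (integer)
  local total resolved
  total=$(wc -l < "$1")
  resolved=$(grep -c '"status": *"resolved"' "$1" || true)
  if [ "$total" -gt 0 ]; then
    echo $(( 100 * resolved / total ))
  else
    echo 0
  fi
}
```

For example, a backlog of three prior findings with two resolved reports 66 (integer percent), matching the Resolution rate row in the table above.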
Before marking post-mortem complete, enforce command-surface parity for modified CLI commands:
- Enumerate modified command files under `cli/cmd/ao/` from the reviewed scope.
- Record each in the `## Command-Surface Parity Checklist`.

If any modified command file is missing both coverage evidence and an intentional-uncovered rationale, post-mortem cannot be marked complete.
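The parity scan can be sketched as follows. The `origin/main...HEAD` range and the sibling `_test.go` naming convention are assumptions for illustration — actual coverage evidence may be a named test elsewhere, which is what the checklist's Evidence column records:

```shell
# Hedged sketch of the parity scan. The <cmd>_test.go convention and the
# diff range are assumptions, not the skill's actual evidence rules.
parity_status() {
  # $1 = path to a command file, e.g. cli/cmd/ao/forge.go
  local test_file="${1%.go}_test.go"
  if [ -f "$test_file" ]; then
    echo "covered"
  else
    echo "uncovered"
  fi
}

# Illustrative driver over the reviewed scope:
#   git diff --name-only origin/main...HEAD -- 'cli/cmd/ao/*.go' |
#     while read -r f; do echo "| $f | $(parity_status "$f") |"; done
```

An "uncovered" result is not automatically a failure — it means the row needs either coverage evidence or an intentional-uncovered rationale before post-mortem can close.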
Scan the council report and extracted learnings for actionable follow-up items:
- Process improvement items from Step 4.5 (type `process-improvement`). These are the flywheel's growth vector — each cycle makes development more effective.
- Footgun fixes (type `pattern-fix` with source `retro-learning`). If a footgun was discovered this cycle, it must appear in this harvest — do not defer.
- Append a `## Next Work` section to the post-mortem report:
| # | Title | Type | Severity | Source | Target Repo |
|---|-------|------|----------|--------|-------------|
| 1 | <title> | tech-debt / improvement / pattern-fix / process-improvement | high / medium / low | council-finding / retro-learning / retro-pattern | <repo-name or *> |
6. SCHEMA VALIDATION (MANDATORY): Before writing, validate each harvested item against the tracked contract in .agents/rpi/next-work.schema.md. Read references/harvest-next-work.md for the validation function and write procedure. Drop invalid items; do NOT block the entire harvest.
Write to next-work.jsonl (canonical path: .agents/rpi/next-work.jsonl). Read references/harvest-next-work.md for the write procedure (target_repo assignment, claim/finalize lifecycle, JSONL format, required fields).
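A minimal shape of that gatekeeping step, with illustrative required fields and enum values mirroring the Next Work table above (these field names are assumptions — the tracked contract in `.agents/rpi/next-work.schema.md` is authoritative):

```shell
# Hedged sketch: validate a harvested JSONL line before appending.
# Field names and enum values are assumptions; validate against
# .agents/rpi/next-work.schema.md in practice.
valid_item() {
  # $1 = one candidate JSON line
  echo "$1" | grep -q '"title"' || return 1
  echo "$1" | grep -Eq '"type": *"(tech-debt|improvement|pattern-fix|process-improvement)"' || return 1
  echo "$1" | grep -Eq '"severity": *"(high|medium|low)"' || return 1
}

# Drop invalid items; do NOT block the entire harvest:
#   valid_item "$line" && echo "$line" >> .agents/rpi/next-work.jsonl \
#     || echo "dropped invalid harvested item" >&2
```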
Do NOT auto-create bd issues. Report the items and suggest: "Run /rpi --spawn-next to create an epic from these items."
If no actionable items found, write: "No follow-up items identified. Flywheel stable."
Post-mortem automatically feeds learnings into the flywheel:
if command -v ao &>/dev/null; then
ao forge markdown .agents/learnings/*.md 2>/dev/null
echo "Learnings indexed in knowledge flywheel"
# Validate and lock artifacts that passed council review
ao temper validate --min-feedback 0 .agents/learnings/YYYY-MM-DD-*.md 2>/dev/null || true
echo "Artifacts validated for tempering"
# Close session and trigger full flywheel close-loop (includes adaptive feedback)
ao session close 2>/dev/null || true
ao flywheel close-loop --quiet 2>/dev/null || true
echo "Session closed, flywheel loop triggered"
else
# Learnings are already in .agents/learnings/ from Phase 2.
# Without ao CLI, grep-based search in /research and /inject
# will find them directly — no copy to pending needed.
# Feedback-loop fallback: update confidence for cited learnings
mkdir -p .agents/ao
if [ -f .agents/ao/citations.jsonl ]; then
echo "Processing citation feedback (ao-free fallback)..."
# Read cited learning files and boost confidence notation
while IFS= read -r line; do
CITED_FILE=$(echo "$line" | grep -o '"learning_file":"[^"]*"' | cut -d'"' -f4)
if [ -f "$CITED_FILE" ]; then
# Note: confidence boost tracked via citation count, not file modification
echo "Cited: $CITED_FILE"
fi
done < .agents/ao/citations.jsonl
fi
# Session-outcome fallback: record this session's outcome
EPIC_ID="<epic-id>"
echo "{\"epic\": \"$EPIC_ID\", \"verdict\": \"<council-verdict>\", \"cycle_time_minutes\": 0, \"timestamp\": \"$(date -Iseconds)\"}" >> .agents/ao/outcomes.jsonl
# Skip ao temper validate (no fallback needed — tempering is an optimization)
echo "Flywheel fed locally (ao CLI not available — learnings searchable via grep)"
fi
Tell the user:
- The next `/rpi` command from the harvested `## Next Work` section (ALWAYS — this is how the flywheel spins itself)

The next-`/rpi` suggestion is MANDATORY, not opt-in. After every post-mortem, present the highest-severity harvested item as a ready-to-copy command:
## Flywheel: Next Cycle
Based on this post-mortem, the highest-priority follow-up is:
> **<title>** (<type>, <severity>)
> <1-line description>
Ready to run:
/rpi "<title>"
Or see all N harvested items in `.agents/rpi/next-work.jsonl`.
If no items were harvested, write: "Flywheel stable — no follow-up items identified."
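The empty-harvest case (also noted in Troubleshooting) can still be recorded in the ledger so the cycle is traceable. The exact JSONL shape here is an assumption — follow references/harvest-next-work.md for the real write procedure:

```shell
# Hedged sketch: record a stable-flywheel cycle as an entry with an empty
# items array. Field names are illustrative assumptions; the tracked
# contract in .agents/rpi/next-work.schema.md is authoritative.
mkdir -p .agents/rpi
printf '{"date":"%s","source":"post-mortem","items":[]}\n' "$(date +%Y-%m-%d)" \
  >> .agents/rpi/next-work.jsonl
```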
/plan epic-123
|
v
/pre-mortem (council on plan)
|
v
/implement
|
v
/vibe (council on code)
|
v
Ship it
|
v
/post-mortem <-- You are here
|
|-- Phase 1: Council validates implementation
|-- Phase 2: Extract learnings (inline)
|-- Phase 3: Process backlog (score, dedup, flag stale)
|-- Phase 4: Activate (promote to MEMORY.md, compile constraints)
|-- Phase 5: Retire stale learnings
|-- Phase 6: Harvest next work
|-- Suggest next /rpi --------------------+
|
+----------------------------------------+
| (flywheel: learnings become next work)
v
/rpi "<highest-priority enhancement>"
User says: /post-mortem
What happens:
- Runs `/council --deep --preset=retrospective validate recent`
- Harvests follow-up items to `.agents/rpi/next-work.jsonl`
- Feeds learnings to the flywheel via `ao forge`

Result: Post-mortem report with learnings, tech debt identified, knowledge lifecycle stats, and suggested next `/rpi` command.
User says: /post-mortem ag-5k2
What happens:
- Loads epic context via `bd show ag-5k2`

Result: Epic-specific post-mortem with 3 harvested follow-up items, 2 promoted learnings, 1 new constraint.
User says: /post-mortem --quick "always use O_CREATE|O_EXCL for atomic file creation when racing"
What happens:
- Creates slug `atomic-file-creation-racing`
- Writes `.agents/learnings/2026-03-03-quick-atomic-file-creation-racing.md`

Result: Learning captured in 5 seconds, no council or backlog processing.
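The slug step in the quick path can be sketched roughly. This version lowercases, hyphenates, and truncates to 50 characters but omits the stopword filtering implied by the real "first meaningful words" rule — it is an approximation, not the skill's implementation:

```shell
# Hedged sketch of --quick slug derivation: lowercase, runs of
# non-alphanumerics collapsed to single hyphens, max 50 chars.
# Stopword filtering ("meaningful words") is intentionally omitted.
slugify() {
  echo "$1" \
    | tr '[:upper:]' '[:lower:]' \
    | tr -cs 'a-z0-9' '-' \
    | sed 's/^-//; s/-$//' \
    | cut -c1-50
}
```

For instance, `slugify "Always use O_CREATE"` yields `always-use-o-create`.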
User says: /post-mortem --process-only
What happens:
- Skips council and extraction (Phases 1-2); runs Phases 3-5 on the existing backlog

Result: Knowledge backlog cleaned up without running a new post-mortem.
User says: /post-mortem --mixed ag-3b7
What happens:
- Runs the council with cross-vendor judges (Claude + Codex)

Result: Higher-confidence validation with cross-vendor review before closing the epic.
| Problem | Cause | Solution |
|---|---|---|
| Council times out | Epic too large or too many files changed | Split post-mortem into smaller reviews or increase timeout |
| No next-work items harvested | Council found no tech debt or improvements | Flywheel stable — write entry with empty items array to next-work.jsonl |
| Schema validation failed | Harvested item missing required field or has invalid enum value | Drop invalid item, log error, proceed with valid items only |
| Checkpoint-policy preflight blocks | Prior FAIL verdict in ratchet chain without fix | Resolve prior failure (fix + re-vibe) or skip checkpoint-policy via --skip-checkpoint-policy |
| Metadata verification fails | Plan vs actual files mismatch or missing cross-references | Include failures in council packet as context.metadata_failures — judges assess severity |
| Phase 3 finds zero learnings | last-processed marker is very recent or no learnings exist | Reset marker: `date -v-30d +%Y-%m-%dT%H:%M:%S > .agents/ao/last-processed` |
| Phase 4 promotion duplicates | MEMORY.md already has the insight | Grep-based dedup should catch this; if not, manually deduplicate MEMORY.md |
| Phase 5 archives too aggressively | 30-day window too short for slow-cadence projects | Adjust the staleness threshold in references/backlog-processing.md |

Related skills:
- skills/council/SKILL.md — Multi-model validation council
- skills/vibe/SKILL.md — Council validates code (/vibe after coding)
- skills/pre-mortem/SKILL.md — Council validates plans (before implementation)