premortem by parcadei/continuous-claude-v3
npx skills add https://github.com/parcadei/continuous-claude-v3 --skill premortem
Identify failure modes before they occur by systematically questioning plans, designs, and implementations. Based on Gary Klein's technique, popularized by Shreyas Doshi (Stripe).
/premortem # Auto-detect context, choose depth
/premortem quick # Force quick analysis (plans, PRs)
/premortem deep # Force deep analysis (before implementation)
/premortem <file> # Analyze specific plan or code
"Imagine it's 3 months from now and this project has failed spectacularly. Why did it fail?"
| Category | Symbol | Meaning |
|---|---|---|
| Tiger | [TIGER] | Clear threat that will hurt us if not addressed |
| Paper Tiger | [PAPER] | Looks threatening but probably fine |
| Elephant | [ELEPHANT] | Thing nobody wants to talk about |
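The taxonomy above can be modeled as a small data structure. A minimal Python sketch (the enum and dataclass names are illustrative, not part of the skill itself):

```python
from dataclasses import dataclass
from enum import Enum

class RiskKind(Enum):
    TIGER = "tiger"              # clear threat; must be addressed or accepted
    PAPER_TIGER = "paper_tiger"  # looks threatening, but a mitigation exists
    ELEPHANT = "elephant"        # the thing nobody wants to talk about

@dataclass
class Finding:
    risk: str
    kind: RiskKind
    location: str = ""
    severity: str = "medium"     # high | medium

# A verified tiger carries a location and a severity:
f = Finding(risk="No circuit breaker for external API",
            kind=RiskKind.TIGER,
            location="api/client.py:42",
            severity="high")
```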
Do NOT flag risks based on pattern-matching alone. Every potential tiger MUST go through verification.
Common mistakes that create false tigers:
- Flagging a pattern that already has an `if exists():` fallback

Before flagging ANY tiger, verify:
```yaml
potential_finding:
  what: "Hardcoded path at line 42"
  verification:
    context_read: true     # Did I read ±20 lines around the finding?
    fallback_check: true   # Is there try/except, if exists(), or else branch?
    scope_check: true      # Is this even in scope for this code?
    dev_only_check: true   # Is this in __main__, tests/, or dev-only code?
  result: tiger | paper_tiger | false_alarm
```
If ANY verification check is "no" or "unknown", DO NOT flag as tiger.
Every tiger MUST include:
```yaml
tiger:
  risk: "<description>"
  location: "file.py:42"
  severity: high|medium
  # REQUIRED - what mitigation was checked and NOT found:
  mitigation_checked: "No exists() check, no try/except, no fallback branch"
```
If you cannot fill in mitigation_checked with specific evidence, it's not a verified tiger.
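The verification rule reduces to a simple gate: a finding only earns the tiger label when every check passed and no mitigation was found. A hypothetical sketch (the function and field names are mine, not the skill's):

```python
REQUIRED_CHECKS = ("context_read", "fallback_check", "scope_check", "dev_only_check")

def classify(checks: dict, mitigation_found: bool) -> str:
    """Label a potential finding per the verification rules above."""
    # Any "no" or unknown check blocks the tiger label outright.
    if not all(checks.get(k) is True for k in REQUIRED_CHECKS):
        return "unverified"      # do NOT flag as tiger
    if mitigation_found:
        return "paper_tiger"     # scary pattern, but a mitigation exists
    return "tiger"               # verified: no mitigation found

ok = dict.fromkeys(REQUIRED_CHECKS, True)
```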
```python
# Auto-detect based on context
if in_plan_creation:
    depth = "quick"   # Localized scope
elif before_implementation:
    depth = "deep"    # Global scope
elif pr_review:
    depth = "quick"   # Localized scope
else:
    # Ask user
    AskUserQuestion(
        question="What depth of pre-mortem analysis?",
        header="Depth",
        options=[
            {"label": "Quick (2-3 min)", "description": "Plans, PRs, localized changes"},
            {"label": "Deep (5-10 min)", "description": "Before implementation, global scope"}
        ]
    )
```
Run through these mentally, note any that apply:
Core Questions:
Output Format:
```yaml
premortem:
  mode: quick
  context: "<plan/PR being analyzed>"

  # Two-pass process: first gather potential risks, then verify each one
  potential_risks:              # Pass 1: Pattern-matching findings
    - "hardcoded path at line 42"
    - "missing error handling for X"

  # Pass 2: After verification
  tigers:
    - risk: "<description>"
      location: "file.py:42"
      severity: high|medium
      category: dependency|integration|requirements|testing
      mitigation_checked: "<what was NOT found>"   # REQUIRED
  elephants:
    - risk: "<unspoken concern>"
      severity: medium
  paper_tigers:
    - risk: "<looks scary but ok>"
      reason: "<why it's fine - what mitigation EXISTS>"
      location: "file.py:42-48"   # Show the mitigation location
  false_alarms:                   # Findings that turned out to be nothing
    - finding: "<what was initially flagged>"
      reason: "<why it's not a risk>"
```
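The two-pass shape of that report can be sketched as a pipeline: pass 1 collects raw pattern matches, pass 2 runs each through verification and sorts it into a bucket. Here `verify` is a hypothetical callback standing in for the verification checks:

```python
def premortem_report(potential_risks, verify):
    """Build the quick-mode report skeleton from raw findings."""
    report = {"potential_risks": list(potential_risks),
              "tigers": [], "paper_tigers": [], "elephants": [], "false_alarms": []}
    for risk in potential_risks:
        label = verify(risk)   # "tiger" | "paper_tiger" | "elephant" | "false_alarm"
        report[label + "s"].append(risk)
    return report
```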
Work through each category systematically:
Technical Risks:
Integration Risks:
Process Risks:
Testing Risks:
Output Format:
```yaml
premortem:
  mode: deep
  context: "<implementation being analyzed>"

  # Two-pass process
  potential_risks:              # Pass 1: Initial scan findings
    - "no circuit breaker for external API"
    - "hardcoded timeout value"

  # Pass 2: After verification (read context, check for mitigations)
  tigers:
    - risk: "<description>"
      location: "file.py:42"
      severity: high|medium
      category: scalability|dependency|data|security|integration|testing
      mitigation_checked: "<what mitigations were looked for and NOT found>"
      suggested_fix: "<how to address>"
  elephants:
    - risk: "<unspoken concern>"
      severity: medium|high
      suggested_fix: "<suggested approach>"
  paper_tigers:
    - risk: "<looks scary>"
      reason: "<why it's actually ok - cite the mitigation code>"
      location: "file.py:45-52"
  false_alarms:
    - finding: "<initial concern>"
      reason: "<why verification showed it's not a risk>"
  checklist_gaps:
    - category: "<which checklist section>"
      items_failed: ["<item1>", "<item2>"]
```
BLOCKING: Present findings and require user decision.
```python
# Build risk summary
risk_summary = format_risks(tigers, elephants)

AskUserQuestion(
    question=f"""Pre-Mortem identified {len(tigers)} tigers, {len(elephants)} elephants:

{risk_summary}

How would you like to proceed?""",
    header="Risks",
    options=[
        {
            "label": "Accept risks and proceed",
            "description": "Acknowledged but not blocking"
        },
        {
            "label": "Add mitigations to plan (Recommended)",
            "description": "Update plan with risk mitigations before proceeding"
        },
        {
            "label": "Research mitigation options",
            "description": "I don't know how to mitigate - help me find solutions"
        },
        {
            "label": "Discuss specific risks",
            "description": "Talk through particular concerns"
        }
    ]
)
```
# Log acceptance for audit trail
print("Risks acknowledged. Proceeding with implementation.")
# Continue to next workflow step
# User provides mitigation approach
# Update plan file with mitigations section
# Re-run quick premortem to verify mitigations address risks
```python
# Spawn parallel research for each HIGH severity tiger
for tiger in high_severity_tigers:
    # Internal: How has the codebase handled this before?
    Task(
        subagent_type="scout",
        prompt=f"""
        Find how this codebase has previously handled: {tiger.category}
        Specifically looking for patterns related to: {tiger.risk}
        Return:
        - File:line references to similar solutions
        - Patterns used
        - Libraries/utilities available
        """
    )
    # External: What are best practices?
    Task(
        subagent_type="oracle",
        prompt=f"""
        Research best practices for: {tiger.risk}
        Context: {tiger.category} in a {tech_stack} codebase
        Return:
        - Recommended approaches (ranked)
        - Library options
        - Common pitfalls to avoid
        """
    )
```
# Wait for research to complete
# Synthesize options
# Present via AskUserQuestion with 2-4 mitigation options
```python
# Ask which risk to discuss
AskUserQuestion(
    question="Which risk would you like to discuss?",
    header="Risk",
    options=[format_risk_option(r) for r in all_risks[:4]]
)
```
# Then have conversation about that specific risk
If user added mitigations, append to the plan:
```markdown
## Risk Mitigations (Pre-Mortem)

### Tigers Addressed:
1. **{risk}** (severity: {severity})
   - Mitigation: {user_or_researched_mitigation}
   - Added to phase: {phase_number}

### Accepted Risks:
1. **{risk}** - Accepted because: {reason}

### Pre-Mortem Run:
- Date: {timestamp}
- Mode: {quick|deep}
- Tigers: {count}
- Elephants: {count}
```
After plan structure is approved, before ExitPlanMode:
# Run quick premortem
/premortem quick
# If HIGH risks found, block until addressed
# If only MEDIUM/LOW, inform and proceed
# Run deep premortem on full plan
/premortem deep thoughts/shared/plans/YYYY-MM-DD-feature.md
# Block until all HIGH tigers addressed
# Run quick premortem on diff scope
/premortem quick
# Inform of any risks found
| Severity | Blocking? | Action Required |
|---|---|---|
| HIGH | Yes | Must address or explicitly accept |
| MEDIUM | No | Inform user, recommend addressing |
| LOW | No | Note for awareness |
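The severity table reduces to a small gating function: only HIGH findings block, everything else is reported. An illustrative sketch, using dicts shaped like the YAML output above:

```python
def gate(findings: list[dict]) -> tuple[bool, list[str]]:
    """Return (blocking?, messages) per the severity table."""
    blocking, messages = False, []
    for f in findings:
        sev = f["severity"].upper()
        if sev == "HIGH":
            blocking = True
            messages.append(f"BLOCK: {f['risk']} - address or explicitly accept")
        elif sev == "MEDIUM":
            messages.append(f"INFORM: {f['risk']} - recommend addressing")
        else:
            messages.append(f"NOTE: {f['risk']}")
    return blocking, messages
```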
User: /premortem deep thoughts/shared/plans/2026-01-08-api-rate-limiting.md
Claude: Running deep pre-mortem on API rate limiting plan...
[Reviews plan against deep checklist]
Pre-mortem complete. Found 2 tigers, 1 elephant:
**TIGERS:**
1. [HIGH] No circuit breaker for external payment API
- Category: dependency
- If payment API is slow/down, requests will pile up
2. [HIGH] No rollback strategy defined
- Category: integration
- If rate limiting breaks auth flow, no quick fix path
**ELEPHANTS:**
1. [MEDIUM] Team hasn't used Redis before
- We're introducing Redis for rate limit counters
- Learning curve could slow implementation
**PAPER TIGERS:**
1. Database migration size - Only adds one index, <1s migration
[AskUserQuestion presented]
User: Research mitigation options
Claude: Researching mitigations...
[Spawns scout + oracle in parallel]
Found 3 options for circuit breaker:
1. **Use existing retry util** (Recommended)
- Found at src/utils/retry.ts:45
- Already handles exponential backoff
- Just need to add circuit breaker state
2. **Add resilience4j-lite**
- Popular library for this pattern
- More features but new dependency
3. **Custom implementation**
- Full control but more code to maintain
Which approach for the circuit breaker?
User: Option 1 - use existing retry util
Claude: Added to plan:
## Risk Mitigations (Pre-Mortem)
### Tigers Addressed:
1. **No circuit breaker** (HIGH)
- Mitigation: Extend src/utils/retry.ts with circuit breaker state
- Added to Phase 1
Shall I also address the rollback strategy tiger?
- Weekly Installs: 210
- GitHub Stars: 3.6K
- First Seen: Jan 22, 2026
- Security Audits: Gen Agent Trust Hub (Pass), Socket (Pass), Snyk (Pass)
- Installed on: opencode (204), codex (201), gemini-cli (200), cursor (197), github-copilot (197), amp (192)