stress-test by alirezarezvani/claude-skills
npx skills add https://github.com/alirezarezvani/claude-skills --skill stress-test
Command: /em:stress-test <assumption>
Take any business assumption and break it before the market does. Revenue projections. Market size. Competitive moat. Hiring velocity. Customer retention.
Founders are optimists by nature. That's a feature — you need optimism to start something from nothing. But it becomes a liability when assumptions in business models get inflated by the same optimism that got you started.
The most dangerous assumptions are the ones everyone agrees on.
When the whole team believes the $50M market is real, when every investor call goes well so you assume the round will close, when your model shows $2M ARR by December and nobody questions it — that's when you're most exposed.
Stress testing isn't pessimism. It's calibration.
State it explicitly. Not "our market is large" but "the total addressable market for B2B spend management software in German SMEs is €2.3B."
The more specific the assumption, the more testable it is. Vague assumptions are unfalsifiable — and therefore useless.
Common assumption types:
For every assumption, actively search for evidence that it's wrong.
Ask:
Sources of counter-evidence:
The goal isn't to find a reason to stop — it's to surface what you don't know.
Most plans model the base case and the upside. Stress testing means modeling the downside explicitly.
For quantitative assumptions (revenue, growth, conversion):
| Scenario | Assumption Value | Probability | Impact |
|---|---|---|---|
| Base case | [Original value] | ? | |
| Bear case | -30% | ? | |
| Stress case | -50% | ? | |
| Catastrophic | -80% | ? | |
Key questions at each level: Does the business survive? Does the plan still make sense?
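As a minimal sketch of what this looks like in practice, the snippet below applies the three shocks to a single quantitative assumption (ARR by year end) and checks whether the resulting runway clears a survival threshold. The cash, burn, and ARR figures and the 18-month threshold are hypothetical placeholders, not values from the skill.

```python
# Minimal downside model: apply each shock to one quantitative assumption
# (ARR by year end) and check runway. All figures are hypothetical placeholders.

CASH_ON_HAND = 1_500_000   # current cash
MONTHLY_BURN = 160_000     # gross monthly burn before revenue
BASE_ARR = 2_000_000       # the assumption under test

SCENARIOS = {"base": 0.00, "bear": -0.30, "stress": -0.50, "catastrophic": -0.80}

for name, shock in SCENARIOS.items():
    arr = BASE_ARR * (1 + shock)
    net_burn = MONTHLY_BURN - arr / 12                     # burn after revenue
    runway = float("inf") if net_burn <= 0 else CASH_ON_HAND / net_burn
    survives = runway >= 18                                # crude 18-month threshold
    print(f"{name:<12} ARR={arr:>12,.0f}  runway={runway:>6.1f} mo  survives={survives}")
```

The exact threshold matters less than forcing an explicit answer to "does the business survive?" at each level.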
For qualitative assumptions (moat, product-market fit, team capability):
Some assumptions matter more than others. Sensitivity analysis answers: if this one assumption changes, how much does the outcome change?
Example:
High sensitivity = the assumption is a key lever. Wrong = big problem.
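A rough way to quantify this is one-at-a-time perturbation: bump each input by 10% and see how far the outcome moves. The toy model and numbers below are illustrative assumptions, not part of the skill.

```python
# One-at-a-time sensitivity: bump each assumption by +10% and measure how much
# the outcome moves. The model and numbers are illustrative placeholders.

def year_end_arr(existing_arr, annual_churn, new_deals, acv):
    return existing_arr * (1 - annual_churn) + new_deals * acv

inputs = {"existing_arr": 1_000_000, "annual_churn": 0.15, "new_deals": 100, "acv": 12_000}
baseline = year_end_arr(**inputs)

for name, value in inputs.items():
    bumped = dict(inputs, **{name: value * 1.10})
    change = year_end_arr(**bumped) / baseline - 1
    print(f"{name:<13} +10% -> outcome {change:+.2%}")
```

The inputs with the largest swings are the assumptions worth stress testing first.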
For every high-risk assumption, there should be a hedge: a way to validate it before you bet on it, a contingency if it turns out to be wrong, and an early-warning indicator with a threshold for action (see the HEDGE section of the output template below).
Common failures:
Stress questions:
Test: Build the revenue model from historical win rates, not hoped-for ones.
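A sketch of what that test might look like, with hypothetical stage names, pipeline values, and win rates:

```python
# Rebuild expected new revenue from the live pipeline using historical win
# rates per stage. Stage names, open values, and rates are hypothetical.

pipeline = [
    # (stage, open pipeline value, historical win rate from that stage)
    ("qualified",   900_000, 0.18),
    ("proposal",    400_000, 0.35),
    ("negotiation", 150_000, 0.60),
]

expected = sum(value * win_rate for _, value, win_rate in pipeline)
print(f"Expected new revenue from current pipeline: {expected:,.0f}")
# The gap between this number and the plan is what the plan assumes you will
# win above your historical performance.
```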
Common failures:
Stress questions:
Test: Build a list of target accounts. Count them. Multiply by ACV. That's your SAM.
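For example, with hypothetical numbers: 4,000 named target accounts × €15k ACV gives a €60M SAM, which can be a long way from the top-down TAM on the slide.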
Common failures:
Stress questions:
Test: Ask churned customers why they left and whether a competitor could have kept them.
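Retention assumptions are also worth running through the downside model above, because monthly churn compounds. A small illustration with hypothetical numbers:

```python
# Monthly revenue churn compounds over a year. Starting ARR and churn rates
# are hypothetical placeholders.
START_ARR = 1_000_000
for monthly_churn in (0.01, 0.02, 0.04):
    retained = START_ARR * (1 - monthly_churn) ** 12
    print(f"{monthly_churn:.0%} monthly churn -> {retained:,.0f} ARR retained after 12 months")
```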
Common failures:
Stress questions:
Test: Model the plan with 0 net new hires. What still works?
Common failures:
Stress questions:
ASSUMPTION: [Exact statement]
SOURCE: [Where this came from — model, investor pitch, team gut feel]
COUNTER-EVIDENCE
• [Specific evidence that challenges this assumption]
• [Comparable failure case]
• [Data point that contradicts the assumption]
DOWNSIDE MODEL
• Bear case (-30%): [Impact on plan]
• Stress case (-50%): [Impact on plan]
• Catastrophic (-80%): [Impact on plan — does the business survive?]
SENSITIVITY
This assumption has [HIGH / MEDIUM / LOW] sensitivity.
A 10% change → [X] change in outcome.
HEDGE
• Validation: [How to test this before betting on it]
• Contingency: [Plan B if it's wrong]
• Early warning: [Leading indicator to watch — and at what threshold to act]
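A run might start from one of the assumptions above. The exact argument format here is an assumption; the command only specifies a free-text `<assumption>`:

```
/em:stress-test "We reach $2M ARR by December on current pipeline and win rates"
```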