opportunity-solution-tree by deanpeters/product-manager-skills
npx skills add https://github.com/deanpeters/product-manager-skills --skill opportunity-solution-tree
Guide product managers through creating an Opportunity Solution Tree (OST) by extracting target outcomes from stakeholder requests, generating opportunity options (problems to solve), mapping potential solutions, and selecting the best proof-of-concept (POC) based on feasibility, impact, and market fit. Use this to move from vague product requests to structured discovery, ensuring teams solve the right problems before jumping to solutions—avoiding "feature factory" syndrome and premature convergence on ideas.
This is not a roadmap generator—it's a structured discovery process that outputs validated opportunities with testable solution hypotheses.
An OST is a visual framework (Teresa Torres, Continuous Discovery Habits) that connects:
Structure:
Desired Outcome (1)
|
+-----------+-----------+
| | |
Opportunity Opportunity Opportunity (3)
| | |
+-+-+ +-+-+ +-+-+
| | | | | | | | |
S1 S2 S3 S1 S2 S3 S1 S2 S3 (9 total solutions)
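The 1-3-9 shape in the diagram above can be represented as plain data. A minimal sketch (the class and field names are illustrative assumptions, not part of the skill):

```python
from dataclasses import dataclass, field

@dataclass
class Solution:
    name: str

@dataclass
class Opportunity:
    problem: str
    solutions: list[Solution] = field(default_factory=list)

@dataclass
class OST:
    desired_outcome: str
    opportunities: list[Opportunity] = field(default_factory=list)

    def total_solutions(self) -> int:
        return sum(len(o.solutions) for o in self.opportunities)

# Build the 1 outcome -> 3 opportunities -> 9 solutions shape
tree = OST(
    desired_outcome="Increase trial-to-paid conversion from 15% to 25%",
    opportunities=[
        Opportunity(
            problem=f"Opportunity {i}",
            solutions=[Solution(f"S{j}") for j in range(1, 4)],
        )
        for i in range(1, 4)
    ],
)
assert len(tree.opportunities) == 3
assert tree.total_solutions() == 9
```

Keeping the tree as structured data makes the "3 opportunities, 3 solutions each" constraint trivially checkable.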
Use workshop-facilitation as the default interaction protocol for this skill.
It defines the interaction flow, including enumerated response options and an "Other (specify)" choice when useful. This file defines the domain-specific assessment content; if there is a conflict, follow this file's domain logic.
Use template.md for the full fill-in structure.
This interactive skill follows a two-phase process:
Phase 1: Generate OST (extract outcome, identify opportunities, map solutions)
Phase 2: Select POC (evaluate solutions, recommend best starting point)
Agent suggests:
Before we create your Opportunity Solution Tree, let's gather context:
Stakeholder Request or Product Initiative:
Product Context (if available):
You can paste this content directly, or describe the request briefly.
Agent asks: "What's the desired outcome for this initiative? (What business or product metric are you trying to move?)"
Offer 4 enumerated options:
Or describe your specific desired outcome (be measurable: e.g., "Increase trial-to-paid conversion from 15% to 25%").
User response: [Selection or custom]
Agent extracts and confirms:
Agent generates 3 opportunities based on the desired outcome and context provided.
Agent says: "Based on your desired outcome ([from Q1]) and the context you provided, here are 3 opportunities (customer problems or needs) that could drive this outcome:"
Example (if Outcome = Increase trial-to-paid conversion):
Opportunity 1: Users don't experience value during trial — "New users sign up but don't complete onboarding, never reach 'aha moment,' abandon before seeing core value"
Opportunity 2: Pricing is unclear or misaligned — "Users unsure if paid plan is worth it; don't understand what they get for the price; pricing page confusing"
Opportunity 3: Free plan is 'good enough' — "Users stay on free tier indefinitely because it meets their needs; no compelling reason to upgrade"
Agent asks: "Which opportunity feels most critical to explore first, or would you like to modify/add opportunities?"
User response: [Selection or custom]
Agent generates 3 solution ideas for the selected opportunity.
Agent says: "For Opportunity [X]: [Name], here are 3 potential solutions to test:"
Example (if Opportunity 1 selected: Users don't experience value during trial):
Solution 1: Guided onboarding checklist — "Interactive checklist that walks users through core workflows step-by-step (e.g., 'Create your first project,' 'Invite a teammate,' 'Complete a task')"
Solution 2: Time-to-value triggers — "Automatically prompt users when they're stuck (e.g., 'You haven't created a project yet—here's a template to get started')"
Solution 3: Human-assisted onboarding — "Offer 15-min onboarding call with CSM for high-value trial users (enterprise, team plans)"
Agent asks: "Would you like to explore solutions for another opportunity, or move to POC selection?"
User response: [Selection or custom]
After generating solutions, the agent helps select which solution to test first.
Agent says: "Let's evaluate your solutions to pick the best starting point. I'll score each solution on Feasibility (how hard to build), Impact (how much it could move the outcome), and Market Fit (how well it aligns with customer needs)."
Agent generates evaluation table:
| Solution | Feasibility (1-5) | Impact (1-5) | Market Fit (1-5) | Total Score | Rationale |
|---|---|---|---|---|---|
| Solution 1: Guided onboarding checklist | 4 | 4 | 5 | 13 | High feasibility (UI pattern), proven impact (industry standard), strong market fit (users expect onboarding) |
| Solution 2: Time-to-value triggers | 3 | 3 | 4 | 10 | Medium feasibility (requires analytics integration), moderate impact (depends on trigger quality), good market fit |
| Solution 3: Human-assisted onboarding | 5 | 5 | 3 | 13 | High feasibility (no dev work), high impact (personal touch), lower market fit (doesn't scale, high-touch only) |
Scoring criteria:
Agent recommends:
Recommended POC: Solution 1 (Guided onboarding checklist)
Why this POC:
Alternative POC: Solution 3 (Human-assisted onboarding)
Not recommended for POC: Solution 2 (Time-to-value triggers)
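The recommendation above follows mechanically from the table: totals are the sum of the three criteria, and when totals tie (Solutions 1 and 3 both score 13), Market Fit is a reasonable tie-breaker because it is the only criterion where they differ. A sketch of that logic (the tie-break rule is my assumption, not stated by the skill):

```python
# Scores copied from the evaluation table above
solutions = {
    "Guided onboarding checklist": {"feasibility": 4, "impact": 4, "market_fit": 5},
    "Time-to-value triggers":      {"feasibility": 3, "impact": 3, "market_fit": 4},
    "Human-assisted onboarding":   {"feasibility": 5, "impact": 5, "market_fit": 3},
}

def total(scores: dict) -> int:
    return scores["feasibility"] + scores["impact"] + scores["market_fit"]

# Rank by total score, breaking ties on market fit (assumed tie-break rule)
recommended = max(
    solutions,
    key=lambda name: (total(solutions[name]), solutions[name]["market_fit"]),
)
print(recommended)  # → Guided onboarding checklist
```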
Agent asks: "How will you test this solution? What's the experiment?"
Offer 3 enumerated options:
Or describe your experiment approach.
User response: [Selection or custom]
After completing the flow, the agent outputs:
# Opportunity Solution Tree + POC Plan
## Desired Outcome
**Outcome:** [From Q1]
**Target Metric:** [Specific, measurable goal]
**Why it matters:** [Rationale]
---
## Opportunity Map
### Opportunity 1: [Name]
**Problem:** [Description]
**Evidence:** [From context]
**Solutions:**
1. [Solution A]
2. [Solution B]
3. [Solution C]
---
### Opportunity 2: [Name]
**Problem:** [Description]
**Evidence:** [From context]
**Solutions:**
1. [Solution A]
2. [Solution B]
3. [Solution C]
---
### Opportunity 3: [Name]
**Problem:** [Description]
**Evidence:** [From context]
**Solutions:**
1. [Solution A]
2. [Solution B]
3. [Solution C]
---
## Selected POC
**Opportunity:** [Selected opportunity]
**Solution:** [Selected solution]
**Hypothesis:**
- "If we [implement solution], then [outcome metric] will [increase/decrease] from [X] to [Y] because [rationale]."
**Experiment:**
- **Type:** [A/B test / Prototype test / Concierge test]
- **Participants:** [Number of users, segment]
- **Duration:** [Timeline]
- **Success criteria:** [What validates the hypothesis]
**Feasibility Score:** [1-5]
**Impact Score:** [1-5]
**Market Fit Score:** [1-5]
**Total:** [Sum]
**Why this POC:**
- [Rationale 1]
- [Rationale 2]
- [Rationale 3]
---
## Next Steps
1. **Build experiment:** [Specific action, e.g., "Create onboarding checklist wireframes"]
2. **Run experiment:** [Specific action, e.g., "Deploy to 50% of trial users for 2 weeks"]
3. **Measure results:** [Specific metric, e.g., "Compare activation rate: checklist vs. control"]
4. **Decide:** [If successful → scale; if failed → try next solution]
---
**Ready to build the experiment? Let me know if you'd like to refine the hypothesis or explore alternative solutions.**
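The "Measure results" and "Decide" steps in the plan above reduce to comparing conversion rates between the experiment group and the control. A minimal two-proportion z-test sketch (all numbers below are hypothetical; stdlib only, no stats library assumed):

```python
from math import sqrt, erf

def two_proportion_z(conv_a: int, n_a: int, conv_b: int, n_b: int) -> float:
    """One-sided p-value that group B converts better than group A."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)           # pooled rate under H0
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    return 1 - 0.5 * (1 + erf(z / sqrt(2)))            # 1 - Phi(z)

# Hypothetical result: control converts 75/500 (15%), checklist group 105/500 (21%)
p = two_proportion_z(75, 500, 105, 500)
print(f"p = {p:.4f}")  # a small p supports "scale"; a large p means "try next solution"
```

The success criterion from the template ("what validates the hypothesis") then becomes an explicit threshold, e.g. p < 0.05 with a pre-registered minimum lift.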
See examples/sample.md for full OST examples.
Mini example excerpt:
**Desired Outcome:** Increase trial-to-paid conversion from 15% to 25%
**Opportunity:** Users don’t reach "aha" moment during trial
**Solution:** Guided onboarding checklist
Symptom: "Opportunity: We need a mobile app"
Consequence: You've already converged on a solution without exploring the problem.
Fix: Reframe opportunities as customer problems: "Mobile-first users can't access product on the go."
Symptom: "We know the solution is [X], just need to build it"
Consequence: Miss better alternatives, no learning.
Fix: Generate at least 3 solutions per opportunity. Force divergence before convergence.
Symptom: "Desired Outcome: Improve user experience"
Consequence: Can't measure success, can't prioritize opportunities.
Fix: Make outcomes measurable: "Increase NPS from 30 to 50" or "Reduce onboarding drop-off from 60% to 40%."
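One way to enforce this fix mechanically is a quick guard that an outcome statement names both a baseline and a target. A hypothetical sketch:

```python
import re

def is_measurable(outcome: str) -> bool:
    """Heuristic check (assumption): a measurable outcome contains at
    least two numbers, e.g. a baseline and a target."""
    numbers = re.findall(r"\d+(?:\.\d+)?%?", outcome)
    return len(numbers) >= 2

assert is_measurable("Increase NPS from 30 to 50")
assert is_measurable("Reduce onboarding drop-off from 60% to 40%")
assert not is_measurable("Improve user experience")
```

This is only a smell test; a number-free outcome is definitely vague, but two numbers alone do not guarantee a well-chosen metric.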
Symptom: Picking a solution and moving straight to roadmap
Consequence: No validation, high risk of building the wrong thing.
Fix: Every solution must map to an experiment. No experiments = no OST.
Symptom: Generating 20 opportunities, 50 solutions, never picking one
Consequence: Team stuck in discovery, no progress.
Fix: Limit to 3 opportunities, 3 solutions each (9 total). Pick POC, run experiment, learn, iterate.
skills/problem-statement/SKILL.md — Frames opportunities as customer problems
skills/jobs-to-be-done/SKILL.md — Helps identify opportunities from JTBD research
skills/epic-hypothesis/SKILL.md — Turns validated solutions into testable epics
skills/user-story/SKILL.md — Breaks experiments into deliverable stories
skills/discovery-interview-prep/SKILL.md — Validates opportunities through customer interviews
Skill type: Interactive
Suggested filename: opportunity-solution-tree.md
Suggested placement: /skills/interactive/
Dependencies: Uses skills/problem-statement/SKILL.md, skills/jobs-to-be-done/SKILL.md, skills/epic-hypothesis/SKILL.md, skills/user-story/SKILL.md
Weekly Installs: 250
GitHub Stars: 1.5K
First Seen: Feb 12, 2026
Security Audits: Gen Agent Trust Hub: Pass · Socket: Fail · Snyk: Warn
Installed on: codex (222), opencode (219), gemini-cli (215), github-copilot (214), cursor (212), kimi-cli (211)