prioritization-advisor by deanpeters/product-manager-skills
npx skills add https://github.com/deanpeters/product-manager-skills --skill prioritization-advisor
Guide product managers in choosing the right prioritization framework by asking adaptive questions about product stage, team context, decision-making needs, and stakeholder dynamics. Use this to avoid "framework whiplash" (switching frameworks constantly) or applying the wrong framework (e.g., using RICE for strategic bets or ICE for data-driven decisions). Outputs a recommended framework with implementation guidance tailored to your context.
This is not a scoring calculator—it's a decision guide that matches prioritization frameworks to your specific situation.
Common frameworks and when to use them:
Scoring frameworks:
Strategic frameworks:
Contextual frameworks:
Use workshop-facilitation as the default interaction protocol for this skill.
It defines:
Include an "Other (specify)" option when useful. This file defines the domain-specific assessment content. If there is a conflict, follow this file's domain logic.
This interactive skill asks up to 4 adaptive questions, offering 3-4 enumerated options at each step.
Agent asks: "What stage is your product in?"
Offer 4 enumerated options:
Or describe your product stage (new idea, growth mode, established, etc.).
User response: [Selection or custom]
Agent asks: "What's your team and stakeholder environment like?"
Offer 4 enumerated options:
Or describe your team/stakeholder context.
User response: [Selection or custom]
Agent asks: "What's the primary challenge you're trying to solve with prioritization?"
Offer 4 enumerated options:
Or describe your specific challenge.
User response: [Selection or custom]
Agent asks: "How much data do you have to inform prioritization?"
Offer 3 enumerated options:
Or describe your data situation.
User response: [Selection or custom]
After collecting responses, the agent recommends a framework:
# Prioritization Framework Recommendation
**Based on your context:**
- **Product Stage:** [From Q1]
- **Team Context:** [From Q2]
- **Decision-Making Need:** [From Q3]
- **Data Availability:** [From Q4]
---
## Recommended Framework: [Framework Name]
**Why this framework fits:**
- [Rationale 1 based on Q1-Q4]
- [Rationale 2]
- [Rationale 3]
**When to use it:**
- [Context where this framework excels]
**When NOT to use it:**
- [Limitations or contexts where it fails]
---
## How to Implement
### Step 1: [First implementation step]
- [Detailed guidance]
- [Example: "Define scoring criteria: Reach, Impact, Confidence, Effort"]
### Step 2: [Second step]
- [Detailed guidance]
- [Example: "Score each feature on 1-10 scale"]
### Step 3: [Third step]
- [Detailed guidance]
- [Example: "Calculate RICE score: (Reach × Impact × Confidence) / Effort"]
### Step 4: [Fourth step]
- [Detailed guidance]
- [Example: "Rank by score; review top 10 with stakeholders"]
---
## Example Scoring Template
[Provide a concrete example of how to use the framework]
**Example (if RICE):**
| Feature | Reach (users/month) | Impact (1-3) | Confidence (%) | Effort (person-months) | RICE Score |
|---------|---------------------|--------------|----------------|------------------------|------------|
| Feature A | 10,000 | 3 (massive) | 80% | 2 | 12,000 |
| Feature B | 5,000 | 2 (high) | 70% | 1 | 7,000 |
| Feature C | 2,000 | 1 (medium) | 50% | 0.5 | 2,000 |
**Priority:** Feature A > Feature B > Feature C
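The RICE calculation above can be sketched in a few lines. This is a minimal illustration, not part of the skill itself; the feature names and numbers mirror the example table, and the scale choices (Impact 1-3, Effort in person-months) are carried over from it.

```python
# Minimal RICE scoring sketch using the example table's values.

def rice_score(reach: float, impact: float, confidence: float, effort: float) -> float:
    """RICE = (Reach x Impact x Confidence) / Effort."""
    return (reach * impact * confidence) / effort

features = [
    ("Feature A", 10_000, 3, 0.80, 2),
    ("Feature B", 5_000, 2, 0.70, 1),
    ("Feature C", 2_000, 1, 0.50, 0.5),
]

# Rank features by descending RICE score.
ranked = sorted(
    ((name, rice_score(r, i, c, e)) for name, r, i, c, e in features),
    key=lambda pair: pair[1],
    reverse=True,
)

for name, score in ranked:
    print(f"{name}: {score:,.0f}")
```

Running this reproduces the table's ranking: Feature A > Feature B > Feature C.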
---
## Alternative Framework (Second Choice)
**If the recommended framework doesn't fit, consider:** [Alternative framework name]
**Why this might work:**
- [Rationale]
**Tradeoffs:**
- [What you gain vs. what you lose]
---
## Common Pitfalls with This Framework
1. **[Pitfall 1]** — [Description and how to avoid]
2. **[Pitfall 2]** — [Description and how to avoid]
3. **[Pitfall 3]** — [Description and how to avoid]
---
## Reassess When
- Product stage changes (e.g., PMF → scaling)
- Team grows or reorganizes
- Stakeholder dynamics shift
- Current framework feels broken (e.g., too slow, ignoring important factors)
---
**Would you like implementation templates or examples for this framework?**
Q1 Response: "Early PMF, scaling — Found initial PMF; growing fast; adding features to retain/expand"
Q2 Response: "Cross-functional team, aligned — Product, design, engineering aligned; clear goals"
Q3 Response: "Lack of data-driven decisions — Prioritizing by gut feel; want metrics-based process"
Q4 Response: "Some data — Basic analytics, customer feedback, but no rigorous data collection"
Recommended Framework: RICE (Reach, Impact, Confidence, Effort)
Why this fits:
When to use it:
When NOT to use it:
Implementation:
Formula: RICE = (Reach × Impact × Confidence) / Effort

Example Scoring:
| Feature | Reach | Impact | Confidence | Effort | RICE Score |
|---|---|---|---|---|---|
| Email reminders | 5,000 | 2 | 70% | 1 | 7,000 |
| Mobile app | 10,000 | 3 | 60% | 6 | 3,000 |
| Dark mode | 8,000 | 1 | 90% | 0.5 | 14,400 |
Priority: Dark mode > Email reminders > Mobile app (despite mobile app having high Reach/Impact, Effort is too high)
Alternative Framework: ICE (Impact, Confidence, Ease)
Why this might work:
Tradeoffs:
Common Pitfalls:
Q1 Response: "Pre-product/market fit — Searching for PMF; experimenting rapidly"
Q2 Response: "Small team, limited resources — 3 engineers, 1 PM"
Q3 Response: "Too many ideas, unclear which to pursue"
Q4 Response: "Minimal data — New product, no usage metrics"
Recommended Framework: ICE (Impact, Confidence, Ease) or Value/Effort Matrix
Why NOT RICE:
Why ICE instead:
Or Value/Effort Matrix:
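A Value/Effort matrix can be sketched as a simple 2x2 classifier. The quadrant names (quick win, big bet, fill-in, time sink), the 1-5 scales, and the midpoint cutoff of 3 are illustrative assumptions, not prescribed by this skill; teams pick their own labels and cutoffs.

```python
# Minimal Value/Effort matrix sketch. Scales and quadrant names are
# illustrative assumptions; adjust the midpoint to your own cutoff.

def quadrant(value: int, effort: int, midpoint: int = 3) -> str:
    """Classify a feature on a 2x2 Value/Effort matrix (1-5 scales)."""
    high_value = value >= midpoint
    high_effort = effort >= midpoint
    if high_value and not high_effort:
        return "quick win"   # high value, low effort: do first
    if high_value and high_effort:
        return "big bet"     # high value, high effort: plan deliberately
    if not high_value and not high_effort:
        return "fill-in"     # low value, low effort: do when there is slack
    return "time sink"       # low value, high effort: avoid

# Hypothetical ideas, scored collaboratively on 1-5 scales.
ideas = {"onboarding tweak": (4, 1), "platform rewrite": (5, 5), "icon polish": (2, 1)}
for name, (v, e) in ideas.items():
    print(f"{name}: {quadrant(v, e)}")
```

For a pre-PMF team with minimal data, this kind of coarse bucketing is usually enough to decide what to try next without scoring overhead.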
Symptom: Pre-PMF startup using weighted scoring with 10 criteria
Consequence: Overhead kills speed. You need experiments, not rigorous scoring.
Fix: Match framework to stage. Pre-PMF = ICE or Value/Effort. Scaling = RICE. Mature = Opportunity Scoring or Kano.
Symptom: Switching frameworks every quarter
Consequence: Team confusion, lost time, no consistency.
Fix: Stick with one framework for 6-12 months. Reassess only when stage/context changes.
Symptom: "Feature A scored 8,000, Feature B scored 7,999, so A wins"
Consequence: Ignores strategic context, judgment, and vision.
Fix: Use frameworks as input, not automation. PM judgment overrides scores when needed.
Symptom: PM scores features alone, presents to team
Consequence: Lack of buy-in, engineering/design don't trust scores.
Fix: Collaborative scoring sessions. PM, design, engineering score together.
Symptom: "We prioritize by who shouts loudest"
Consequence: HiPPO (Highest Paid Person's Opinion) wins, not data or strategy.
Fix: Pick any framework. Even imperfect structure beats chaos.
- user-story.md — Prioritized features become user stories
- epic-hypothesis.md — Prioritized epics validated with experiments
- recommendation-canvas.md — Business outcomes inform prioritization

Skill type: Interactive
Suggested filename: prioritization-advisor.md
Suggested placement: /skills/interactive/
Dependencies: None (standalone, but informs roadmap and backlog decisions)
Weekly Installs: 237
GitHub Stars: 1.5K
First Seen: Feb 12, 2026
Security Audits: Gen Agent Trust Hub: Pass; Socket: Fail; Snyk: Pass
Installed on: codex (209), opencode (207), gemini-cli (203), github-copilot (203), cursor (200), kimi-cli (199)