discovery-interview-prep by deanpeters/product-manager-skills
npx skills add https://github.com/deanpeters/product-manager-skills --skill discovery-interview-prep
Guide product managers through preparing for customer discovery interviews by asking adaptive questions about research goals, customer segments, constraints, and methodologies. Use this to design effective interview plans, craft targeted questions, avoid common biases, and maximize learning from limited customer access—ensuring discovery interviews yield actionable insights rather than confirmation bias or surface-level feedback.
This is not a script generator—it's a strategic prep process that outputs a tailored interview plan with methodology, question framework, and success criteria.
An interactive process.
Use workshop-facilitation as the default interaction protocol for this skill.
It defines the shared interaction mechanics (enumerated options, with "Other (specify)" when useful). This file defines the domain-specific assessment content. If there is a conflict, follow this file's domain logic.
This interactive skill asks up to 4 adaptive questions, offering 3-4 enumerated options at each step.
Agent suggests:
Before we design your interview plan, let's gather context:
For Your Own Product (Existing or Planned):
For Investigating an Existing Problem:
If Exploring a New Problem Space:
You can paste this content directly, or we can proceed with a brief description.
Agent asks: "What's the primary goal of these discovery interviews? (What do you need to learn?)"
Offer 4 enumerated options:
Or describe your own research goal (be specific: what question are you trying to answer?).
User response: [Selection or custom]
Agent asks: "Who are you interviewing? (Be as specific as possible.)"
Offer 4 enumerated options (adapted based on Q1):
Example (if Q1 = Problem validation):
Or describe your specific target segment (role, company size, behaviors, demographics).
Adaptation tip: Use personas or customer segments from provided materials.
User response: [Selection or custom]
Agent asks: "What constraints are you working with for these interviews?"
Offer 4 enumerated options:
Or describe your specific constraints (budget, time, access, team capacity).
User response: [Selection or custom]
Agent asks: "Based on your goal ([Q1]), target segment ([Q2]), and constraints ([Q3]), here are recommended interview methodologies:"
Offer 3-4 enumerated options (context-aware based on Q1-Q3):
Example (if Q1 = Problem validation, Q2 = People who experience problem regularly, Q3 = Limited access):
Problem validation interviews (Mom Test style) — Ask about past behavior, not hypotheticals. Focus on: "Tell me about the last time you [experienced the problem]. What did you try? What happened?" (Best for: Validating if problem is real and painful)
Jobs-to-be-Done (JTBD) interviews — Focus on what customers are trying to accomplish, not what they want. Ask: "What were you trying to get done? What alternatives did you consider? What made you choose X?" (Best for: Understanding motivations and switching behavior)
Switch interviews — Interview customers who recently switched from a competitor or alternative. Ask: "What prompted you to look for a new solution? What was the 'push' away from the old tool? What 'pulled' you to try ours?" (Best for: Understanding competitive positioning and unmet needs)
Timeline/journey mapping interviews — Walk through their entire experience chronologically. Ask: "Walk me through the first time you encountered this problem. What happened next? How did you try to solve it?" (Best for: Uncovering full context and pain points)
Choose a number, combine approaches (e.g., '1 & 2'), or describe your own methodology.
Adaptation examples:
User response: [Selection or custom]
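The four-step flow above can be sketched as a small adaptation function: options offered at a later step depend on earlier answers. This is an illustrative sketch only; the function name and any option strings not quoted elsewhere in this guide are hypothetical, not part of the skill's actual implementation.

```python
# Illustrative sketch of the adaptive question flow: options for a later
# step depend on earlier answers. Names and extra options are hypothetical.

def next_options(step, answers):
    """Return enumerated options for one step, adapted to earlier answers."""
    if step == "segment" and answers.get("goal") == "Problem validation":
        return [
            "People who experience the problem regularly",
            "People relying on a competitor or workaround",
            "People who recently stopped trying to solve it",
            "Other (specify)",
        ]
    if step == "methodology":
        options = []
        if answers.get("goal") == "Problem validation":
            options.append("Problem validation interviews (Mom Test style)")
        if answers.get("constraints") == "Limited access":
            options.append("Timeline/journey mapping interviews")
        return options + ["Other (specify)"]
    return ["Other (specify)"]  # fallback when no adaptation rule applies

answers = {"goal": "Problem validation", "constraints": "Limited access"}
print(next_options("segment", answers)[0])
print(next_options("methodology", answers))
```

The point of the structure is that Q2 and Q4 are never asked from a fixed list; they are recomputed from whatever the user chose (or typed) at the earlier steps.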
After collecting responses, the agent generates a tailored interview plan:
# Discovery Interview Plan
**Research Goal:** [From Q1]
**Target Segment:** [From Q2]
**Constraints:** [From Q3]
**Methodology:** [From Q4]
---
## Interview Framework
### Opening (5 minutes)
- **Build rapport:** "Thanks for taking the time. I'm [name], and I'm researching [problem space]. This isn't a sales call—I'm here to learn from your experience."
- **Set expectations:** "I'll ask about your experiences with [topic]. There are no right answers. Feel free to be honest—critical feedback is most helpful."
- **Get consent:** "Is it okay if I take notes / record this conversation?"
---
### Core Questions (30-40 minutes)
**Based on your methodology ([Q4]), here are suggested questions:**
#### [Methodology Name] Questions:
1. **[Question 1]** — [Rationale for asking this]
- **Follow-up:** [Dig deeper with...]
- **Avoid:** [Don't ask leading version like...]
2. **[Question 2]** — [Rationale]
- **Follow-up:** [...]
- **Avoid:** [...]
3. **[Question 3]** — [Rationale]
- **Follow-up:** [...]
- **Avoid:** [...]
4. **[Question 4]** — [Rationale]
- **Follow-up:** [...]
- **Avoid:** [...]
5. **[Question 5]** — [Rationale]
- **Follow-up:** [...]
- **Avoid:** [...]
**Example (if Methodology = Problem validation - Mom Test style):**
1. **"Tell me about the last time you [experienced this problem]."** — Gets specific, recent behavior (not hypothetical)
- **Follow-up:** "What were you trying to accomplish? What made it hard? What did you try?"
- **Avoid:** "Would you use a tool that solves this?" (leading, hypothetical)
2. **"How do you currently handle [this problem]?"** — Reveals workarounds, alternatives, pain intensity
- **Follow-up:** "How much time/money does that take? What's frustrating about it?"
- **Avoid:** "Don't you think that's inefficient?" (leading)
3. **"Can you walk me through what you did step-by-step?"** — Uncovers details, edge cases, context
- **Follow-up:** "What happened next? Where did you get stuck?"
- **Avoid:** "Was it hard?" (yes/no question, not useful)
4. **"Have you tried other solutions for this?"** — Reveals competitive landscape, unmet needs
- **Follow-up:** "What did you like/dislike? Why did you stop using it?"
- **Avoid:** "Would you pay for a better solution?" (hypothetical)
5. **"If you had a magic wand, what would change?"** — Opens space for ideal outcomes (but treat with skepticism—focus on past behavior, not wishes)
- **Follow-up:** "Why does that matter to you? What would that enable?"
- **Avoid:** Taking feature requests literally
---
### Closing (5 minutes)
- **Summarize:** "Just to recap, I heard that [key insights]. Did I get that right?"
- **Ask for referrals:** "Do you know anyone else who experiences this problem? Could you introduce me?"
- **Thank them:** "This was incredibly helpful. I really appreciate your time."
---
## Biases to Avoid
1. **Confirmation bias:** Don't ask "Don't you think X is a problem?" → Ask "Tell me about your experience with X."
2. **Leading questions:** Don't ask "Would you use this?" → Ask "What have you tried? Why did it work/fail?"
3. **Hypothetical questions:** Don't ask "If we built Y, would you pay?" → Ask "What do you currently pay for? Why?"
4. **Pitching disguised as research:** Don't say "We're building Z to solve X" → Say "I'm researching X. Tell me about your experience."
5. **Yes/no questions:** Don't ask "Is invoicing hard?" → Ask "Walk me through your invoicing process."
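The first, second, and fifth rules are mechanical enough to lint for. Below is a rough heuristic sketch (the regexes and function name are illustrative, and real question screening needs human judgment):

```python
import re

# Heuristic patterns for the bias rules above; intentionally incomplete.
LEADING = re.compile(r"^(would you|don't you|do you think|if we built)", re.I)
YES_NO = re.compile(r"^(is|was|are|do|does|did|have|has|can|could|will)\b", re.I)

def flag_question(q):
    """Return bias flags for a draft interview question (heuristic only)."""
    flags = []
    if LEADING.match(q.strip()):
        flags.append("leading/hypothetical")
    elif YES_NO.match(q.strip()):
        flags.append("yes/no")
    return flags

print(flag_question("Would you use this?"))                 # leading
print(flag_question("Is invoicing hard?"))                  # yes/no
print(flag_question("Walk me through your invoicing process."))  # clean
```

A flagged question is usually rescued the same way each time: convert it into a request for a story about past behavior.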
---
## Success Criteria
You'll know these interviews are successful if:
✅ **You hear specific stories, not generic complaints** — "Last Tuesday, I spent 3 hours..." vs. "Invoicing is annoying"
✅ **You uncover past behavior, not hypothetical wishes** — "I tried Zapier but quit after 2 weeks" vs. "I'd probably use automation"
✅ **You identify patterns across 3+ interviews** — Same pain points emerge independently
✅ **You're surprised by something** — If everything confirms your assumptions, you're asking leading questions
✅ **You can quote customers verbatim** — Actual language = authentic insights
---
## Interview Logistics
**Recruiting:**
- [Based on Q3 constraints, suggest recruitment channels]
- **Example (if Q3 = Limited access):** "Reach out to 20-30 people to get 5-10 interviews (33% response rate is typical)"
- **Example (if Q3 = Existing customers):** "Email 50 customers with $50 Amazon gift card incentive"
**Scheduling:**
- 45-60 minutes per interview (30-40 min conversation + buffer)
- Record if possible (with consent), or take detailed notes
- Schedule 2-3 per day max (you need time to synthesize)
**Synthesis:**
- After each interview, write key insights immediately (memory fades fast)
- After 5 interviews, look for patterns (common pains, jobs, workarounds)
- Use `problem-statement.md` to frame findings
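The "look for patterns" step can be made concrete by tagging each interview's pain points and counting how many interviews a tag appears in. A sketch under assumed data (the tags and the 3-interview bar mirror the success criteria above; everything else is illustrative):

```python
from collections import Counter

# Hypothetical tagged notes: one set of pain-point tags per interview.
interviews = [
    {"late-payments", "awkward-reminders"},
    {"late-payments", "invoicing-time"},
    {"late-payments", "cashflow"},
    {"invoicing-time"},
    {"late-payments"},
]

# Count interviews (not mentions) per tag, then apply the 3+ interviews bar.
counts = Counter(tag for tags in interviews for tag in tags)
patterns = [tag for tag, n in counts.items() if n >= 3]
print(sorted(patterns))
```

Using sets per interview matters: a pain mentioned five times in one conversation is still one data point, while the same pain surfacing independently in three conversations is a pattern.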
---
**Ready to start recruiting and interviewing? Let me know if you'd like to refine any part of this plan.**
Step 0 - Context: User shares hypothesis: "Freelancers waste time chasing late payments manually."
Q1 Response: "Problem validation — Confirm that late payment follow-ups are painful enough to solve"
Q2 Response: "People who experience the problem regularly — Freelancers who invoice 5+ clients monthly"
Q3 Response: "Cold outreach required — No existing customers; need to recruit via LinkedIn, Reddit, freelancer communities"
Q4 Response: "Problem validation interviews (Mom Test style) — Focus on past behavior, not hypotheticals"
Generated Plan: Includes 5 Mom Test-style questions (last time you chased a late payment, how do you currently handle it, what have you tried, etc.), biases to avoid (leading questions, hypotheticals), and success criteria (specific stories, past behavior, patterns across 3+ interviews).
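Mechanically, the plan header is a straight fill-in from the four responses. A sketch echoing the worked example (the dict keys and abbreviated response strings are illustrative):

```python
# Fill the plan header template from the four collected responses.
responses = {
    "Q1": "Problem validation",
    "Q2": "Freelancers who invoice 5+ clients monthly",
    "Q3": "Cold outreach via LinkedIn, Reddit, freelancer communities",
    "Q4": "Problem validation interviews (Mom Test style)",
}

header = "\n".join([
    "# Discovery Interview Plan",
    f"**Research Goal:** {responses['Q1']}",
    f"**Target Segment:** {responses['Q2']}",
    f"**Constraints:** {responses['Q3']}",
    f"**Methodology:** {responses['Q4']}",
])
print(header)
```

The substance of the plan (the five questions, biases, and success criteria) is then selected to match the methodology in Q4, not generated from a fixed script.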
Why this works: each answer narrows the next step. The segment options adapt to the goal, the methodology is chosen to fit the goal, segment, and constraints, and the resulting plan probes past behavior instead of hypotheticals.
Common pitfalls (symptom, consequence, fix):
Symptom: "What features do you want us to build?"
Consequence: You get feature requests, not problems. Customers don't know solutions.
Fix: Ask about past behavior: "Tell me about the last time you struggled with X."
Symptom: Spending 20 minutes explaining your product idea
Consequence: Customer feels obligated to be nice. No honest feedback.
Fix: Don't mention your solution until the last 5 minutes (if at all). Focus on their problems.
Symptom: Interviewing friends, family, or people who don't experience the problem
Consequence: Polite feedback, not real insights.
Fix: Interview people who experience the problem regularly and recently.
Symptom: "We talked to 2 people, they liked it, let's build!"
Consequence: Small sample = confirmation bias.
Fix: Interview 5-10 people minimum. Look for patterns, not one-off feedback.
Symptom: Relying on memory after interviews
Consequence: Lose details, misremember quotes, can't spot patterns.
Fix: Record (with consent) or take detailed notes. Synthesize immediately after each interview.
problem-statement.md — Use interview insights to frame problem statement
proto-persona.md — Define interview target segment
jobs-to-be-done.md — JTBD methodology for interviews
Skill type: Interactive
Suggested filename: discovery-interview-prep.md
Suggested placement: /skills/interactive/
Dependencies: Uses problem-statement.md, proto-persona.md, jobs-to-be-done.md
Weekly Installs
248
Repository
GitHub Stars
1.5K
First Seen
Feb 12, 2026
Security Audits
Gen Agent Trust Hub: Pass
Socket: Fail
Snyk: Warn
Installed on
codex: 219
opencode: 217
gemini-cli: 214
github-copilot: 213
cursor: 213
kimi-cli: 209