continuous-discovery by wondelai/skills
npx skills add https://github.com/wondelai/skills --skill continuous-discovery
Framework for building a sustainable, weekly practice of customer discovery that keeps product teams making progress toward desired outcomes. Rather than treating discovery as a phase that happens before development, this framework embeds customer learning into the ongoing rhythm of product work so that every decision is informed by fresh evidence.
Good product discovery requires a continuous cadence, not a one-time event. Teams that talk to customers every week, map opportunities visually, and test assumptions before building consistently outperform teams that rely on intuition, stakeholder opinions, or quarterly research cycles. The goal is at least one customer touchpoint per week, every week, by the product trio (product manager, designer, engineer).
Goal: 10/10. When reviewing or creating a product discovery practice, rate it 0-10 based on adherence to the principles below. A 10/10 means the team has a weekly interview cadence, maintains a living Opportunity Solution Tree, systematically tests assumptions, and uses evidence to decide what to build. Lower scores indicate gaps in cadence, structure, or rigor. Always provide the current score and specific improvements needed to reach 10/10.
Core concept: An Opportunity Solution Tree (OST) is a visual map that connects a desired outcome at the top to customer opportunities in the middle and potential solutions at the bottom. It makes implicit product thinking explicit and shared.
Why it works: Most teams jump from a business outcome straight to solutions, skipping the customer need entirely. The OST forces teams to first understand the opportunity space -- the unmet needs, pain points, and desires customers have -- before generating solutions. This prevents building features nobody wants.
Product applications:
| Context | Application | Example |
|---|---|---|
| Quarterly planning | Define the outcome, then map the opportunity space before committing to features | "Increase trial-to-paid conversion" as outcome, then discover why users don't convert |
| Feature prioritization | Compare solutions across different opportunities to find highest-leverage bets | Three solutions for "users can't find relevant content" vs. two for "onboarding is confusing" |
| Stakeholder alignment | Use the tree as a shared visual to align on strategy and tradeoffs | Walk leadership through the tree to show why you chose opportunity X over Y |
Ethical boundary: Never cherry-pick opportunities to justify a predetermined solution. The tree must reflect genuine customer needs discovered through research.
See: references/opportunity-trees.md
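The three layers described above map naturally onto a small tree structure. The sketch below is an illustrative Python rendering (class names, fields, and the sample data are assumptions for demonstration, not part of Torres's methodology):

```python
from dataclasses import dataclass, field

@dataclass
class Solution:
    idea: str

@dataclass
class Opportunity:
    """A customer need, pain point, or desire surfaced in interviews."""
    need: str
    evidence: list[str] = field(default_factory=list)   # interview snapshots supporting it
    solutions: list[Solution] = field(default_factory=list)

@dataclass
class OpportunitySolutionTree:
    outcome: str                                        # desired outcome at the top
    opportunities: list[Opportunity] = field(default_factory=list)

    def unsupported(self) -> list[Opportunity]:
        """Opportunities with no interview evidence -- candidates for cherry-picking."""
        return [o for o in self.opportunities if not o.evidence]

# Hypothetical example data:
ost = OpportunitySolutionTree(outcome="Increase trial-to-paid conversion")
ost.opportunities.append(Opportunity(
    need="Users can't find relevant content",
    evidence=["snapshot-2024-03-07"],
    solutions=[Solution("Improved search"), Solution("Personalized home feed")],
))
ost.opportunities.append(Opportunity(need="Onboarding is confusing"))  # no evidence yet

print([o.need for o in ost.unsupported()])  # flags the evidence-free node
```

Tracking evidence per opportunity node is one way to operationalize the ethical boundary: a node with no interview evidence behind it is a candidate for cherry-picking and should be challenged.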
Core concept: Current-state experience maps capture how customers accomplish a goal today, step by step, revealing pain points and unmet needs that become opportunities on the tree.
Why it works: Teams often assume they understand the customer's current experience, but mapping it collaboratively from interview data reveals gaps, workarounds, and emotions that are invisible from the inside. The map generates opportunities you would never brainstorm from a conference room.
Product applications:
| Context | Application | Example |
|---|---|---|
| New problem space | Map the end-to-end experience before designing anything | Map how a small business owner handles invoicing today, from creating to chasing payment |
| Churn analysis | Map the experience of users who churned to find failure points | Discover that users abandon onboarding at step 4 because they need data they don't have handy |
| Cross-functional alignment | Build the map together so engineering, design, and product share one view | Three-hour collaborative session produces a shared reference artifact |
Ethical boundary: Experience maps must reflect real customer experiences from interviews, not the team's projection of what they imagine customers feel.
See: references/experience-mapping.md
Core concept: Story-based interviews capture specific past experiences (not opinions or predictions), and each interview is synthesized into a one-page snapshot that the whole team can quickly absorb and reference.
Why it works: Traditional interview methods ask customers what they want -- but customers are poor predictors of their own future behavior. Story-based interviewing grounds insights in real past events, revealing what customers actually did and felt. The snapshot format makes synthesis fast and creates a growing library of customer evidence.
Product applications:
| Context | Application | Example |
|---|---|---|
| Weekly cadence | Schedule three 30-minute interviews every Thursday | Recruit from existing users via in-app prompt; rotate who leads the conversation |
| Opportunity discovery | Extract customer needs from interview stories and add to the OST | User describes workaround for exporting data -- becomes an opportunity node |
| Team alignment | Share snapshots in a visible location so everyone absorbs the same evidence | Physical wall or digital board where snapshots accumulate and patterns emerge |
Ethical boundary: Never lead interview participants toward conclusions. Use open-ended questions about past behavior and let the story reveal what matters.
See: references/interview-snapshots.md
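A one-page snapshot can be sketched as a simple record that accumulates on a shared board. The field names below are assumptions loosely derived from the description above, not a canonical template:

```python
from dataclasses import dataclass, field

@dataclass
class InterviewSnapshot:
    """One-page synthesis of a single story-based interview."""
    participant: str          # anonymized label, never PII
    date: str
    story: str                # the specific past experience, in the customer's words
    memorable_quote: str
    opportunities: list[str] = field(default_factory=list)  # needs/pains to add to the OST

# Hypothetical snapshot from a Thursday interview slot:
snap = InterviewSnapshot(
    participant="P-014",
    date="2024-03-07",
    story="Last week I exported our data to a spreadsheet so my manager could see it.",
    memorable_quote="I wish the report lived where my manager already looks.",
    opportunities=["Users need a low-friction way to share reports with managers"],
)

# Snapshots accumulate on a shared board; patterns emerge as opportunities repeat.
board: list[InterviewSnapshot] = [snap]
all_opportunities = [o for s in board for o in s.opportunities]
```

Flattening the board into a single list of opportunities makes recurring needs visible: an opportunity that appears across many snapshots is a stronger OST node than one mentioned once.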
Core concept: Before building a solution, identify the underlying assumptions that must be true for it to succeed, map them by type and risk, then design small, fast tests to validate or invalidate the riskiest ones first.
Why it works: Every solution is built on a stack of assumptions about desirability, viability, feasibility, and usability. Most teams test none of them before building, or they test the easy ones instead of the risky ones. Systematic assumption mapping and testing prevents investing months in solutions built on false premises.
Product applications:
| Context | Application | Example |
|---|---|---|
| Before building | Map assumptions for the top solution candidates and test the riskiest | "Users will share reports with their manager" -- test with a painted-door button before building sharing infrastructure |
| Comparing solutions | Test the riskiest assumption for each candidate to quickly eliminate weak options | Solution A's riskiest assumption fails; Solution B's passes -- pursue B |
| De-risking a roadmap | Work backward from the roadmap to identify untested assumptions hiding in committed features | Q3 feature assumes users want real-time notifications -- no evidence yet |
Ethical boundary: Never design assumption tests that deceive participants. Painted-door tests should explain that the feature is coming soon, not simulate functionality that doesn't exist without disclosure.
See: references/assumption-mapping.md
Core concept: Use structured methods to compare opportunities against each other rather than evaluating them in isolation. Assess opportunity size, market factors, company factors, and customer factors to find the highest-leverage bets.
Why it works: Teams default to prioritizing by loudest stakeholder voice, recency bias (whatever the last customer said), or gut feel. Structured comparison forces explicit tradeoff discussions and surfaces disagreements that would otherwise go unspoken until implementation is underway.
Product applications:
| Context | Application | Example |
|---|---|---|
| Quarterly planning | Rank the top 5-7 opportunities from the OST to decide team focus | Compare "users struggle to find content" vs. "users can't collaborate in real time" using structured criteria |
| Sprint planning | Choose which opportunity to tackle this iteration based on current evidence | Pick the opportunity where you have the most interview evidence and a testable solution |
| Portfolio decisions | Distribute team effort across opportunities by risk and potential impact | 60% on high-confidence opportunity, 30% on medium, 10% on exploratory |
Ethical boundary: Prioritization frameworks should surface real customer needs, not be gamed to justify features that serve business metrics at the expense of user value.
See: references/prioritization-methods.md
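A structured head-to-head comparison can be sketched as a weighted scorecard over the four factor groups named above. The weights and scores below are illustrative assumptions; the value of the exercise is the tradeoff discussion the numbers provoke, not the numbers themselves:

```python
# The four factor groups named above; weights are illustrative assumptions.
FACTORS = {"opportunity_size": 0.4, "market": 0.2, "company": 0.2, "customer": 0.2}

def compare(opportunities: dict[str, dict[str, int]]) -> list[tuple[str, float]]:
    """Rank opportunities by weighted factor scores (1-5 per factor)."""
    ranked = [
        (name, sum(FACTORS[f] * scores[f] for f in FACTORS))
        for name, scores in opportunities.items()
    ]
    return sorted(ranked, key=lambda pair: pair[1], reverse=True)

# Hypothetical quarterly-planning comparison:
ranking = compare({
    "Users struggle to find content":    {"opportunity_size": 4, "market": 3, "company": 4, "customer": 5},
    "Users can't collaborate real-time": {"opportunity_size": 5, "market": 4, "company": 2, "customer": 3},
})
```

Scoring the opportunities in the same function call, rather than one at a time, is the point: it forces the head-to-head comparison and surfaces disagreements about individual factor scores before implementation begins.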
Core concept: Continuous discovery only works if it becomes a sustainable weekly habit for the product trio. This requires automating recruitment, creating lightweight rituals, and embedding discovery into the existing workflow rather than treating it as extra work.
Why it works: Most teams do a burst of research at the start of a project and then stop. Continuous discovery requires structural support: automated participant recruitment, standing interview slots, shared synthesis artifacts, and team norms that make discovery non-negotiable. The habit compounds -- teams that maintain it for months develop deep customer intuition that transforms every decision.
Product applications:
| Context | Application | Example |
|---|---|---|
| Team kickoff | Establish the weekly cadence in the first week of a new team or initiative | Set up automated recruitment, block Thursday afternoons, create snapshot template |
| Scaling discovery | Expand from one interview per week to three as the habit solidifies | Add a second slot on Tuesday for churned-user interviews and a Friday slot for prospect interviews |
| Manager support | Leaders protect discovery time and ask for evidence in planning discussions | "What did you learn from interviews this week?" becomes a standing question in 1:1s |
Ethical boundary: Respect participant time. Keep interviews to 30 minutes, compensate fairly, and never use discovery interviews as a disguised sales pitch.
See: references/case-studies.md
| Mistake | Why It Fails | Fix |
|---|---|---|
| Treating discovery as a phase before development | Insights go stale; team builds on outdated assumptions | Embed discovery into every week alongside delivery |
| Only the PM talks to customers | Designer and engineer miss context; insights lost in translation | The full product trio interviews together |
| Jumping from outcome to solutions | Skips the opportunity space; team builds features nobody needs | Build an Opportunity Solution Tree to make the opportunity space explicit |
| Asking customers what they want | Customers predict poorly; you get feature requests, not needs | Use story-based interviewing: "Tell me about the last time..." |
| Testing easy assumptions instead of risky ones | False confidence; the fatal assumption goes untested | Map assumptions by importance and evidence; test high-risk first |
| Scoring opportunities in isolation | No tradeoff discussion; everything looks important | Compare opportunities head-to-head with structured criteria |
| Doing a burst of interviews then stopping | No compounding learning; team reverts to guessing | Automate recruitment and block recurring calendar time |
| Question | If No | Action |
|---|---|---|
| Does the team talk to at least one customer per week? | You're making decisions without fresh evidence | Automate recruitment and block a weekly interview slot |
| Do you have a living Opportunity Solution Tree? | Strategy is implicit and unshared | Build an OST from your current outcome and interview data |
| Does the full trio participate in interviews? | Insights are filtered through one person | Invite designer and engineer to the next interview |
| Are you testing assumptions before building? | You're betting on untested premises | Map assumptions for your next feature and test the riskiest one |
| Can you trace a shipped feature back to a customer opportunity? | Delivery is disconnected from discovery | Connect your backlog items to opportunities on the OST |
| Do you have interview snapshots the whole team can see? | Knowledge is trapped in one person's head | Create a shared snapshot board and fill it after each interview |
| Are you comparing opportunities, not just listing them? | Prioritization is driven by opinion, not evidence | Run a structured comparison exercise on your top 5 opportunities |
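One naive way to connect the health check above to the 0-10 rating described at the top is to score the proportion of questions answered yes. The mapping below is an illustrative assumption, not a calibrated rubric:

```python
# The seven health-check questions, condensed to short labels.
HEALTH_CHECK = [
    "talks to at least one customer per week",
    "maintains a living Opportunity Solution Tree",
    "full trio participates in interviews",
    "tests assumptions before building",
    "can trace shipped features to opportunities",
    "shares interview snapshots team-wide",
    "compares opportunities head-to-head",
]

def discovery_score(answers: dict[str, bool]) -> int:
    """Naive 0-10 rating: the proportion of health-check questions answered yes."""
    yes = sum(answers.get(q, False) for q in HEALTH_CHECK)
    return round(10 * yes / len(HEALTH_CHECK))

# Hypothetical team with one gap:
team = {q: True for q in HEALTH_CHECK} | {"tests assumptions before building": False}
print(discovery_score(team))  # 9 -- the "Action" column names the path back to 10/10
```

A score below 10 is only useful alongside the matching "Action" cell: the point of the rubric is to name the specific improvement, not to grade the team.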
This skill is based on the continuous discovery framework developed by Teresa Torres. For the complete methodology, templates, and case studies, see her book, Continuous Discovery Habits, and her blog, Product Talk.
Teresa Torres is an internationally acclaimed author, speaker, and coach who helps product teams adopt continuous discovery practices. She has coached hundreds of product teams at companies ranging from early-stage startups to global enterprises including Capital One, Calendly, and Reforge. Torres created the Opportunity Solution Tree as a visual tool for connecting business outcomes to customer opportunities and potential solutions. Her blog, Product Talk, is one of the most widely read resources for product managers, and her coaching programs have trained thousands of product trios worldwide. Before becoming a coach, Torres spent over a decade as a product leader and has been active in the product management community since 2006. Continuous Discovery Habits distills her years of coaching into a practical, repeatable framework that any product team can adopt.
Weekly Installs: 212
Repository: github.com/wondelai/skills
GitHub Stars: 260
First Seen: Feb 23, 2026
Security Audits: Gen Agent Trust Hub: Pass; Socket: Pass; Snyk: Pass
Installed on: codex (203), opencode (202), gemini-cli (201), kimi-cli (201), cursor (201), github-copilot (201)