# product-discovery by majiayu000/claude-arsenal
npx skills add https://github.com/majiayu000/claude-arsenal --skill product-discovery
These rules are mandatory. Violating them means the skill is not working correctly.
Never start with a solution. Always define the problem and outcome first.
❌ FORBIDDEN:
"We should build a search bar for the product page"
"Let's add AI recommendations"
"Users need a mobile app"
✅ REQUIRED:
"Problem: Users can't find products (40% exit rate on catalog)
Outcome: Reduce exit rate to 20%
Possible solutions:
1. Search bar with filters
2. AI-powered recommendations
3. Better category navigation
4. Visual product browsing"
Never assume user needs without evidence from real user research.
❌ FORBIDDEN:
- "Users probably want X" (assumption without data)
- "Our competitor has X, so we need it too" (copycat without validation)
- "The CEO thinks we should build X" (HiPPO, the highest-paid person's opinion, without evidence)
- "It's obvious users need X" (intuition without validation)
✅ REQUIRED:
- "5 out of 8 interviewed users mentioned X as a pain point"
- "Analytics show 60% of users abandon at step 3"
- "Prototype test: 7/10 users completed task successfully"
- "Survey (n=500): 45% rated feature as 'must have'"
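Evidence claims like "45% of n=500" are only as strong as their sampling error, so it is worth attaching a margin of error before treating them as validation. A minimal sketch using the normal approximation, reusing the survey numbers from the example above:

```python
import math

def proportion_ci(p: float, n: int, z: float = 1.96) -> tuple[float, float]:
    """95% confidence interval for a sample proportion (normal approximation)."""
    margin = z * math.sqrt(p * (1 - p) / n)
    return (p - margin, p + margin)

# Survey example from above: 45% of 500 respondents rated the feature "must have"
low, high = proportion_ci(0.45, 500)
print(f"45% with n=500 -> 95% CI [{low:.3f}, {high:.3f}]")  # roughly [0.406, 0.494]
```

With n=500 the estimate is tight (about ±4.4 percentage points); the same 45% from n=20 would span nearly ±22 points, which is why small convenience samples don't count as evidence.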
Never validate a problem with fewer than 5 user interviews per segment.
❌ FORBIDDEN:
- "We talked to 2 users and they loved the idea"
- "One customer requested this feature"
- "Based on a quick chat with sales..."
✅ REQUIRED:
| Segment | Interviews | Key Finding |
|---------|------------|-------------|
| Power Users | 6 | 5/6 struggle with X |
| New Users | 5 | 4/5 drop off at onboarding |
| Churned | 5 | 3/5 cited missing feature Y |
Minimum per segment: 5 interviews
Confidence increases with more interviews
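The 5-interview floor has a simple probabilistic rationale: if a problem affects a fraction p of a segment, the chance that at least one of n interviews surfaces it is 1 − (1 − p)^n. A quick sketch (the 40% prevalence is illustrative):

```python
def detection_probability(p: float, n: int) -> float:
    """Chance that at least one of n interviews hits a problem affecting fraction p."""
    return 1 - (1 - p) ** n

# A pain point shared by 40% of a segment:
for n in (2, 5, 10):
    print(f"n={n}: {detection_probability(0.4, n):.2f}")
# n=2: 0.64, n=5: 0.92, n=10: 0.99
```

Two interviews miss a widespread problem a third of the time; five interviews catch it over 90% of the time, which is where the minimum comes from.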
Every assumption must be testable and falsifiable with clear success criteria.
❌ FORBIDDEN:
- "Users will like the new design" (not falsifiable)
- "This will improve engagement" (no success criteria)
- "The feature will be useful" (vague)
✅ REQUIRED:
| Assumption | Test | Success Criteria | Result |
|------------|------|------------------|--------|
| Users will complete onboarding in new flow | Prototype test with 10 users | >70% completion | TBD |
| Users prefer visual search | A/B test | >10% lift in conversions | TBD |
| Price point is acceptable | Landing page test | >3% conversion | TBD |
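Success criteria like ">70% completion with 10 users" involve very small samples, so it helps to know how surprising a result would be by chance. A sketch using the exact binomial tail (the 50% base rate is an illustrative pessimistic null, not from the source):

```python
from math import comb

def binom_tail(k: int, n: int, p: float) -> float:
    """P(X >= k) for X ~ Binomial(n, p): chance of k or more successes by luck."""
    return sum(comb(n, i) * p**i * (1 - p) ** (n - i) for i in range(k, n + 1))

# Prototype test from the table: 10 users, bar is >70% completion.
# If 8/10 complete, how likely is that under a pessimistic 50% base rate?
print(f"P(>=8 of 10 | p=0.5) = {binom_tail(8, 10, 0.5):.3f}")  # 0.055
```

A result clearing the bar this decisively would occur by chance only about 5% of the time, which is reasonable evidence to proceed; 7/10 under the same null is far weaker.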
| Scenario | Framework/Tool | Output |
|---|---|---|
| Validate product idea | Product Opportunity Assessment | Go/no-go decision |
| Size market opportunity | TAM/SAM/SOM | Market size estimates |
| Understand user needs | User Research (interviews, surveys) | User insights, pain points |
| Analyze competition | Competitive Analysis | Competitive landscape map |
| Discover user motivations | Jobs-to-be-Done (JTBD) | Job stories, outcomes |
| Prioritize features | Kano Model | Feature categorization |
| Define value proposition | Value Proposition Canvas | Value prop statement |
| Test product concept | Lean Startup / MVP | Validated learnings |
| Map opportunities | Opportunity Solution Tree | Prioritized opportunities |
Discovery is led by three roles working together weekly:
Product Manager → Defines outcomes, owns roadmap
Designer → Explores solutions, tests usability
Engineer → Assesses feasibility, proposes technical solutions
## 1. Customer Interviews (Weekly)
- Schedule 3-5 interviews per week minimum
- Mix of current users, churned users, prospects
- Focus on understanding problems, not pitching solutions
- Record and share insights with team
## 2. Assumption Testing (Weekly)
- Identify riskiest assumptions about solutions
- Design quick tests (prototypes, landing pages, fake doors)
- Run experiments with real users
- Measure results against success criteria
## 3. Opportunity Mapping (Ongoing)
- Build opportunity solution tree
- Map customer needs to potential solutions
- Prioritize based on impact and feasibility
- Update as you learn
Discovery (What to Build) Delivery (How to Build It)
├─ Customer interviews ├─ Sprint planning
├─ Prototype testing ├─ Development
├─ Assumption validation ├─ QA testing
├─ Market research ├─ Deployment
└─ Opportunity assessment └─ Post-launch monitoring
Key difference: Discovery reduces risk BEFORE committing to build
Before starting any product initiative, answer these questions:
## 1. Problem Definition
**What problem are we solving?**
- Be specific and measurable
- Validate it's a real problem (not assumed)
## 2. Target Market
**For whom are we solving this problem?**
- Define specific user segments
- Size the addressable market (TAM/SAM/SOM)
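TAM/SAM/SOM sizing is staged multiplication: total market spend, narrowed to the segment you can serve, narrowed again to the share you can realistically win. A minimal top-down sketch (all numbers hypothetical):

```python
def market_sizing(total_buyers: int, avg_annual_spend: float,
                  serviceable_share: float, obtainable_share: float) -> dict:
    """Top-down TAM -> SAM -> SOM estimate. Each share is a fraction of the tier above."""
    tam = total_buyers * avg_annual_spend   # Total Addressable Market
    sam = tam * serviceable_share           # Serviceable Addressable Market
    som = sam * obtainable_share            # Serviceable Obtainable Market
    return {"TAM": tam, "SAM": sam, "SOM": som}

# Hypothetical: 2M potential buyers spending $120/yr; we can serve 30%, expect to win 5%
print(market_sizing(2_000_000, 120, 0.30, 0.05))
# {'TAM': 240000000, 'SAM': 72000000.0, 'SOM': 3600000.0}
```

The useful output is usually the SOM, since that is the number the business case in questions 3 and 10 should rest on.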
## 3. Opportunity Size
**How big is the opportunity?**
- Revenue potential
- User growth potential
- Strategic value
## 4. Success Metrics
**How will we measure success?**
- Leading indicators (usage, engagement)
- Lagging indicators (revenue, retention)
- Define targets upfront
## 5. Alternative Solutions
**What alternatives exist today?**
- Direct competitors
- Indirect solutions
- Current user workarounds
## 6. Our Advantage
**Why are we best suited to solve this?**
- Unique capabilities
- Market position
- Technical advantages
## 7. Strategic Fit
**Why now? Why us?**
- Market timing
- Strategic alignment
- Resource availability
## 8. Dependencies
**What do we need to succeed?**
- Technical dependencies
- Partnership requirements
- Regulatory considerations
## 9. Risks
**What could go wrong?**
- Market risk (will anyone want it?)
- Execution risk (can we build it?)
- Monetization risk (will they pay?)
## 10. Cost of Delay
**What happens if we don't build this?**
- Competitive disadvantage
- Lost revenue
- Market opportunity window
Quick prioritization of opportunities:
High Value, Low Effort → Do First (Quick Wins)
High Value, High Effort → Plan Strategically (Big Bets)
Low Value, Low Effort → Do Later (Fill Gaps)
Low Value, High Effort → Don't Do (Money Pit)
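The 2×2 above can be mechanized for a scored opportunity backlog; a sketch with hypothetical items and normalized 0–1 scores:

```python
def quadrant(value: float, effort: float, threshold: float = 0.5) -> str:
    """Map normalized value/effort scores (0-1) to a 2x2 prioritization quadrant."""
    high_value = value >= threshold
    high_effort = effort >= threshold
    if high_value and not high_effort:
        return "Do First (Quick Win)"
    if high_value and high_effort:
        return "Plan Strategically (Big Bet)"
    if not high_value and not high_effort:
        return "Do Later (Fill Gap)"
    return "Don't Do (Money Pit)"

# Hypothetical backlog items as (name, value, effort):
backlog = [("search filters", 0.8, 0.3), ("AI recs", 0.9, 0.9), ("footer tweak", 0.2, 0.1)]
for name, value, effort in backlog:
    print(f"{name} -> {quadrant(value, effort)}")
```

The scores themselves should come from the discovery evidence above (interview counts, analytics, test results), not from gut feel, or the matrix just launders assumptions.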
## Generative Research (What problems exist?)
Use when: Starting new product area, exploring unknown space
Methods:
- Ethnographic field studies
- Contextual inquiry
- Diary studies
- Open-ended interviews
## Evaluative Research (Does our solution work?)
Use when: Testing specific solutions, validating designs
Methods:
- Usability testing
- Prototype testing
- A/B testing
- Concept testing
## Quantitative Research (How much? How many?)
Use when: Need statistical validation, measuring impact
Methods:
- Surveys
- Product analytics review
- A/B experiments
- Market sizing
## Qualitative Research (Why? How?)
Use when: Understanding motivations, uncovering insights
Methods:
- User interviews
- Focus groups
- Customer advisory boards
- User observation
## Preparation
- Define research goals and hypotheses
- Create interview guide (but stay flexible)
- Recruit right participants (6-8 per segment)
- Schedule 45-60 min sessions
## During Interview
✓ Ask open-ended questions ("Tell me about...")
✓ Ask "Why?" up to five times (the Five Whys) to reach the root cause
✓ Listen more than talk (80/20 rule)
✓ Ask about past behavior, not future hypotheticals
✓ Look for workarounds and pain points
✓ Record and take notes
✗ Don't ask leading questions
✗ Don't pitch your solution
✗ Don't ask "Would you use X?" (people lie)
✗ Don't multi-task while interviewing
## Example Questions
- "Walk me through the last time you [did task]"
- "What's most frustrating about [current solution]?"
- "How are you solving this problem today?"
- "What would make [task] easier for you?"
- "Tell me more about that..."
## When to Survey
✓ Validate findings from qualitative research
✓ Measure satisfaction or sentiment at scale
✓ Prioritize features (Kano surveys)
✓ Segment users by behavior/needs
## Survey Design
- Keep it short (<10 min to complete)
- One question per screen on mobile
- Mix question types (multiple choice, scale, open-ended)
- Avoid leading or biased questions
- Test survey with 5 people before sending
## Question Types
- Multiple choice → Segmentation, categorization
- Likert scale (1-5) → Satisfaction, importance
- Open-ended → Qualitative insights
- Ranking → Prioritization
- NPS (0-10) → Loyalty measurement
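NPS from the 0–10 question is computed as the percentage of promoters (scores 9–10) minus the percentage of detractors (scores 0–6); passives (7–8) count toward the denominator only. A sketch with illustrative scores:

```python
def nps(scores: list[int]) -> float:
    """Net Promoter Score: % promoters (9-10) minus % detractors (0-6), in points."""
    promoters = sum(s >= 9 for s in scores)
    detractors = sum(s <= 6 for s in scores)
    return 100 * (promoters - detractors) / len(scores)

print(nps([10, 9, 8, 7, 6, 10, 3, 9]))  # 4 promoters, 2 detractors, n=8 -> 25.0
```

The score ranges from −100 to +100; like any survey metric it needs a large enough sample (see the margin-of-error caveat for proportions) before small shifts mean anything.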
## Distribution
- In-app surveys (high response, biased to engaged users)
- Email surveys (broader reach, lower response)
- Incentivize thoughtful responses ($10 gift card, early access)
- Follow up with interviews for interesting responses
## AI Tools for Discovery
- **Insight synthesis** — AI analyzes interview transcripts, identifies patterns
- **Synthetic personas** — AI-generated user proxies for rapid testing
- **Market intelligence** — AI tracks competitor moves, pricing changes
- **Survey analysis** — Automated sentiment analysis, theme extraction
- **Trend detection** — AI identifies emerging market trends early
## Examples
- Crayon → Competitive intelligence automation
- Glimpse → Trend detection from web data
- Delve AI → Automated persona creation
- Attest → AI-powered survey insights
- Quantilope → Machine learning research automation
## Best Practices
✓ Use AI to scale research, not replace human insight
✓ Validate AI findings with real user conversations
✓ Combine AI analysis with qualitative depth
✗ Don't rely solely on synthetic users
✗ Don't skip talking to real customers
## Modern Approach
- Discovery is embedded in every sprint, not a phase
- Weekly user touchpoints (interviews, tests, feedback)
- Rapid experimentation (dozens of tests running)
- Fast pivots based on evidence (days, not months)
## Team Structure
- Product trios own discovery for their area
- Centralized research team supports (tools, methods)
- Customer success shares feedback loop
- Data analysts provide quantitative insights
## Cadence
- Weekly: Customer interviews, prototype tests
- Bi-weekly: Opportunity review, assumption validation
- Monthly: Market analysis, competitive review
- Quarterly: Strategic discovery (new markets, big bets)
Visual framework for mapping the path from outcome to solution:
OUTCOME (Business goal)
|
┌────────┴────────┐
│ │
OPPORTUNITY 1 OPPORTUNITY 2
│ │
├─ Solution A ├─ Solution C
├─ Solution B └─ Solution D
└─ Solution C
## Step 1: Define Outcome
Start with measurable business outcome
Example: "Increase Day 30 retention from 20% to 30%"
## Step 2: Map Opportunities
Discover customer needs/pain points through research
Example: "Users don't understand core features"
## Step 3: Generate Solutions
For each opportunity, brainstorm multiple solutions
Example:
- Better onboarding tutorial
- In-app tooltips
- Interactive product tour
## Step 4: Test Assumptions
For each solution, identify riskiest assumption and test
Example: "Users will complete a 5-step tutorial"
Test: Build simple prototype, test with 10 users
## Step 5: Compare Solutions
Use evidence to choose best path forward
Build what tests validate, discard what fails
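The tree built in steps 1–5 is a plain nested structure and can live in code or a doc; a minimal sketch using the retention example from step 1 (the node type and layout are hypothetical, not a prescribed format):

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    """One node in an opportunity solution tree: an outcome, opportunity, or solution."""
    label: str
    children: list["Node"] = field(default_factory=list)

    def show(self, depth: int = 0) -> None:
        """Print the tree with simple indentation."""
        prefix = "  " * depth + ("└─ " if depth else "")
        print(prefix + self.label)
        for child in self.children:
            child.show(depth + 1)

tree = Node("Increase Day 30 retention from 20% to 30%", [
    Node("Users don't understand core features", [
        Node("Better onboarding tutorial"),
        Node("In-app tooltips"),
        Node("Interactive product tour"),
    ]),
])
tree.show()
```

Keeping the tree as data makes it easy to attach evidence and experiment results to each solution node as the assumptions in step 4 are tested.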
✓ Visualizes multiple paths to outcome
✓ Prevents jumping to first solution
✓ Encourages broad exploration before narrowing
✓ Documents why decisions were made
✓ Keeps team aligned on priorities
## Discovery Board Columns
| Opportunities | Assumptions | Experiments | Validated |
|---|---|---|---|
| Customer needs we've identified | Riskiest assumptions to validate | Tests currently running | Solutions ready to build |
## Flow
1. Opportunities flow from research
2. Solutions generate assumptions to test
3. Experiments validate/invalidate assumptions
4. Validated solutions enter delivery backlog
Before moving from discovery to delivery:
## Discovery Checklist
- [ ] Customer problem validated (5+ interviews)
- [ ] Solution tested with prototype (10+ users)
- [ ] Success metrics defined and measurable
- [ ] Technical feasibility confirmed by engineering
- [ ] Business case approved (revenue/retention impact)
- [ ] Design mocks completed and tested
- [ ] Open questions resolved or explicitly acknowledged
- [ ] Story broken into shippable increments
## ✗ Solution-First Discovery
Starting with "We should build X" then finding evidence to support it
→ Instead: Start with outcome and problem, explore multiple solutions
## ✗ Episodic Research
Doing discovery as a phase, then stopping when development starts
→ Instead: Continuous weekly discovery throughout product lifecycle
## ✗ Confirmation Bias
Only talking to users who will validate your ideas
→ Instead: Seek disconfirming evidence, talk to churned users
## ✗ Fake Validation
Asking "Would you use this?" and trusting the answer
→ Instead: Test with realistic prototypes, measure actual behavior
## ✗ Analysis Paralysis
Endless research without ever shipping
→ Instead: Define upfront what evidence is "enough" to move forward
## ✗ Building for Everyone
Trying to solve for all users at once
→ Instead: Focus on specific segment, nail it, then expand
## ✗ Ignoring Weak Signals
Dismissing early negative feedback as "just a few users"
→ Instead: Treat complaints as early warning signs, investigate
Weekly Installs: 163
GitHub Stars: 12
First Seen: Jan 24, 2026
Security Audits: Gen Agent Trust Hub (Pass), Socket (Pass), Snyk (Pass)
Installed on: opencode (152), gemini-cli (148), codex (148), cursor (147), github-copilot (136), kimi-cli (124)