avoid-feature-creep by waynesutton/convexskills
npx skills add https://github.com/waynesutton/convexskills --skill avoid-feature-creep
Stop building features nobody needs. This skill helps you ship products that solve real problems without drowning in unnecessary complexity.
Feature creep kills products. It delays launches, burns budgets, exhausts teams, and creates software nobody wants to use. The most successful products do fewer things well.
Feature creep is the gradual accumulation of features beyond what your product needs to deliver value. It happens slowly, then all at once.
Warning signs you're in trouble:
What it costs:
Before adding ANY feature, run through this checklist:
1. VALIDATE THE PROBLEM
□ Does this solve a real, validated user pain point?
□ Have we talked to actual users about this need?
□ What evidence supports building this?
2. CHECK ALIGNMENT
□ Does this support the core product vision?
□ Would this delay our current release?
□ What are we NOT building if we build this?
3. MEASURE IMPACT
□ How will we know if this feature succeeds?
□ What KPIs will change?
□ Can we quantify the value (time saved, revenue, retention)?
4. ASSESS COMPLEXITY
□ What's the true cost (build + test + maintain + document)?
□ Does this add dependencies or technical debt?
□ Can we ship a simpler version first?
5. FINAL GUT CHECK
□ Would we delay launch by a month for this feature?
□ Is this a differentiator or just table stakes?
□ Would removing this harm the core experience?
If you can't answer YES to questions 1-3 with evidence, do not build the feature.
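The gate above can be sketched as code. This is a minimal illustration, not part of the skill itself; the `FeatureRequest` fields and `passes_gate` name are hypothetical stand-ins for questions 1-3.

```python
# Hypothetical sketch of the pre-feature gate: build only when
# questions 1-3 are a YES backed by evidence. Names are illustrative.
from dataclasses import dataclass, field


@dataclass
class FeatureRequest:
    name: str
    solves_validated_pain: bool   # 1. Validate the problem
    supports_core_vision: bool    # 2. Check alignment
    has_success_metric: bool      # 3. Measure impact
    evidence: list[str] = field(default_factory=list)  # interviews, data


def passes_gate(req: FeatureRequest) -> bool:
    """True only if all three questions are YES and evidence exists."""
    return (
        req.solves_validated_pain
        and req.supports_core_vision
        and req.has_success_metric
        and len(req.evidence) > 0
    )
```

A "yes" with an empty `evidence` list still fails the gate, which is the point: opinions alone don't justify a feature.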
Rule 1: Define and Defend Your MVP
Write down exactly what "done" means before you start. Document what you're NOT building. Reference this constantly.
## MVP Scope Document Template
### Core Problem
[One sentence describing the user problem]
### Success Criteria
[How we know we've solved it]
### In Scope (v1)
- Feature A: [brief description]
- Feature B: [brief description]
### Explicitly Out of Scope
- Feature X: Deferred to v2
- Feature Y: Will not build unless [condition]
- Feature Z: Not our problem to solve
### Non-Negotiables
- Ship by [date]
- Budget: [hours/dollars]
- Core user: [specific persona]
Rule 2: Use Version Control for Scope
Treat scope like code. Track changes. Require approval for additions.
```shell
# Create a scope document and track it
git add SCOPE.md
git commit -m "Initial MVP scope definition"

# Any scope changes require explicit commits
git commit -m "SCOPE CHANGE: Added feature X - approved by [stakeholder] - impact: +2 weeks"
```
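If you want to enforce that commit-message convention mechanically, a `commit-msg` git hook could check it. The sketch below is a hypothetical check function (the `SCOPE CHANGE` prefix and `approved by` / `impact:` markers mirror the messages above); wiring it into `.git/hooks/commit-msg` is left as an exercise.

```python
# Hypothetical commit-msg check: a "SCOPE CHANGE" commit must record
# who approved it and what it costs. Ordinary commits pass untouched.
def scope_change_ok(message: str) -> bool:
    """Return True if the commit message satisfies the scope convention."""
    if not message.startswith("SCOPE CHANGE"):
        return True  # not a scope change; no extra requirements
    return "approved by" in message and "impact:" in message
```

A hook script would read the message file git passes as its first argument and exit nonzero when `scope_change_ok` returns `False`, blocking the commit.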
Rule 3: The 48-Hour Rule
When someone requests a new feature, wait 48 hours before adding it to the backlog. Most "urgent" requests feel less urgent after reflection.
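The 48-hour rule reduces to a single timestamp comparison. A minimal sketch, with a hypothetical `eligible_for_backlog` helper:

```python
# Hypothetical sketch of the 48-hour rule: a feature request only
# becomes eligible for the backlog after the cooldown has elapsed.
from datetime import datetime, timedelta

COOLDOWN = timedelta(hours=48)


def eligible_for_backlog(requested_at: datetime, now: datetime) -> bool:
    """True once 48 hours have passed since the request was made."""
    return now - requested_at >= COOLDOWN
```

The value isn't the code, it's the forced delay: by the time the request is eligible, many "urgent" items no longer feel worth adding.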
Rule 4: Budget-Based Scoping
Every feature has a cost. When something new comes in, something else must go out.
"Yes, we can add that. Which of these three features should we cut to make room?"
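Budget-based scoping is zero-sum by construction: nothing enters the scope list unless something leaves. A hypothetical sketch (the `add_feature` helper is illustrative):

```python
# Hypothetical sketch of budget-based scoping: the scope list has a
# fixed size, so adding a feature requires naming one to cut.
def add_feature(scope: list[str], new: str, cut: str) -> list[str]:
    """Return a new scope with `cut` replaced by `new`.

    Raises ValueError if `cut` is not currently in scope, which forces
    the "which feature should we remove?" conversation to happen.
    """
    if cut not in scope:
        raise ValueError(f"must cut an existing feature to add {new!r}")
    return [f for f in scope if f != cut] + [new]
```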
Saying no to features is a skill. Here are templates:
To stakeholders:
"That's an interesting idea. Based on our user research, it doesn't solve our core user's top three problems. Let's add it to the v2 consideration list and revisit after we validate the MVP."
To executives:
"I understand the value this could bring. If we add this, we'll delay launch by [X weeks] and deprioritize [Y feature]. Here are the trade-offs; which path should we take?"
To users:
"Thanks for the feedback. We're focused on [core problem] right now. I've logged this for future consideration. Can you tell me more about why this would be valuable?"
To yourself:
"Is this scratching my own itch or solving a real user problem? Would I bet the release date on this?"
To AI agents (Claude, Opus, Codex, Ralph, Cursor):
"Stop. Before we add this feature, answer: Does this solve the core user problem we defined at the start of this session? If not, add it to a DEFERRED.md file and stay focused on the current scope."
When working with AI coding agents:
When building AI-powered products, feature creep has extra risks:
AI Feature Creep Red Flags:
AI Feature Discipline:
Before adding any AI feature, answer:
A messy backlog enables feature creep. Clean it ruthlessly.
Monthly Backlog Audit:
For each item older than 30 days:
1. Has anyone asked about this since it was added?
2. Does it still align with current product vision?
3. If we never built this, would anyone notice?
If the answer to all three is "no" → Delete it.
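The three-question audit above is easy to run mechanically if your backlog carries a little metadata. A hypothetical sketch, assuming each item records when it was added and a yes/no for each question:

```python
# Hypothetical sketch of the monthly backlog audit: drop items older
# than 30 days that fail all three questions. Field names are invented.
from datetime import date, timedelta

STALE_AFTER = timedelta(days=30)


def audit(backlog: list[dict], today: date) -> list[dict]:
    """Return the backlog with stale, unwanted items deleted."""
    kept = []
    for item in backlog:
        stale = today - item["added"] > STALE_AFTER
        dead = not (item["asked_about"]
                    or item["aligns_with_vision"]
                    or item["would_be_missed"])
        if not (stale and dead):
            kept.append(item)
    return kept
```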
Priority Framework (MoSCoW):
Be honest: Most "Should Haves" are actually "Could Haves" in disguise.
Session Start Check: Before coding with any AI assistant (Claude, Cursor, OpenCode), state:
Mid-Session Check: Every 30-60 minutes, ask your AI: "Are we building the right thing today, or are we adding scope?"
If the answer is "adding scope," stop. Commit what you have. Start fresh.
Session End Check: Before closing an AI coding session:
Daily AI Check: At the end of each day working with AI assistants:
1. Features completed today: [list]
2. Scope additions today: [list]
3. Was each addition validated? [yes/no]
4. Tomorrow's focus: [single item]
Sprint Planning Guard Rails:
Stakeholder Management: Create a single source of truth for scope decisions:
## Scope Decision Log
| Date | Request | Source | Decision | Rationale | Trade-off |
|------|---------|--------|----------|-----------|-----------|
| 2025-01-15 | Add export to PDF | PM | Deferred v2 | Not core to MVP | Would delay launch 2 weeks |
| 2025-01-16 | Dark mode | User feedback | Approved | User research shows demand | Removed social sharing |
| 2025-01-17 | Add caching layer | Claude | Deferred | Premature optimization | Stay focused on core feature |
| 2025-01-17 | Refactor to hooks | Cursor | Rejected | Works fine as is | Technical scope creep |
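Appending to a log like the one above can be a one-liner, which lowers the friction of actually recording decisions. A hypothetical sketch (the file name and `log_decision` helper are illustrative):

```python
# Hypothetical sketch: append one row to a markdown scope-decision log
# so every scope call lands in a single source of truth.
def log_decision(path: str, date: str, request: str, source: str,
                 decision: str, rationale: str, tradeoff: str) -> None:
    """Append one markdown table row to the decision log at `path`."""
    row = f"| {date} | {request} | {source} | {decision} | {rationale} | {tradeoff} |\n"
    with open(path, "a") as f:  # "a" creates the file if it is missing
        f.write(row)
```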
Agents as Stakeholders: AI coding agents are now stakeholders in your project. They have opinions. They make suggestions. Treat them like any other stakeholder:
Common agent-driven scope creep patterns:
Each of these might be a good idea. None of them is in your current scope unless you decide it is.
If feature creep has already happened:
Step 1: Audit Current Features
Step 2: Categorize
Step 3: Remove or Hide
Step 4: Prevent Recurrence
When reviewing any feature request, ask:
1. "What user problem does this solve?"
2. "What's the smallest version we could ship?"
3. "What are we NOT building to make room for this?"
4. "How will we measure success?"
5. "What happens if we never build this?"
If you can't answer these clearly → Do not proceed.
Ship something small that works. Then iterate based on real usage data.
Users don't remember features. They remember whether your product solved their problem.
Every feature you don't build is:
The best products aren't the ones with the most features. They're the ones that do the right things exceptionally well.
"Perfection is achieved not when there is nothing more to add, but when there is nothing left to take away." - Antoine de Saint-Exupéry
Weekly Installs: 893
Repository: waynesutton/convexskills
GitHub Stars: 384
First Seen: Jan 19, 2026
Security Audits: Gen Agent Trust Hub: Pass · Socket: Pass · Snyk: Pass
Installed on: opencode (655), claude-code (641), codex (639), cursor (604), gemini-cli (564), github-copilot (538)