gtm-product-led-growth by github/awesome-copilot
npx skills add https://github.com/github/awesome-copilot --skill gtm-product-led-growth
Build self-serve acquisition and expansion motions. But first, figure out if product-led growth (PLG) is even the right motion for your product.
Triggers:
Context:
What I Learned Running Both Motions in Parallel:
Classic startup debate. PLG camp: "Developers want self-serve." Sales camp: "Enterprises need hand-holding." Instead of arguing, we tested both for 6 months. Same product, two GTM motions, tracked everything.
The Results:
PLG: High volume, low ACV (~$5K), fast time-to-revenue, higher churn. Sales-led: Lower volume, high ACV (~$50K), slower time-to-revenue, lower churn. Sales-led won 10x on revenue despite one-tenth the volume.
Why: Product complexity + buyer seniority = sales-led wins. The product required integration with existing infrastructure, change management across teams, and multi-stakeholder alignment. Developers loved self-serve. But they weren't the economic buyer.
PLG works when:
Sales-led works when:
Before building PLG, test your motion. Don't assume PLG is better because it's trendy. PLG is efficient at volume, but sales-led can be more profitable with complexity.
The Pattern:
Growth compounds when you systematize the relationship between activities and user acquisition. Not "do more marketing" — map specific inputs to measurable outputs.
How to Build Your Growth Equation:
For each channel, define: Activity (input) → Traffic (output) → Conversions.
Why This Matters:
Once you validate the equation, scaling becomes math. "I need 200 more users next month" → "I need 10 more blog posts" or "I need $5K more ad spend." Without the equation, you're guessing.
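As a concrete sketch, the equation can be written as a tiny per-channel model. All channel names, traffic yields, and conversion rates below are hypothetical placeholders, not benchmarks:

```python
# Hypothetical growth equation: Activity (input) -> Traffic (output) -> Conversions.
# The channels, yields, and rates here are illustrative assumptions.

channels = {
    # channel: (traffic per unit of activity, visitor -> signup conversion rate)
    "blog_post": (500, 0.04),  # ~500 visits per post, 4% convert
    "paid_ads": (100, 0.02),   # ~100 visits per $100 spent, 2% convert
}

def users_from(channel: str, activity_units: float) -> float:
    """Expected new users from a given amount of activity on one channel."""
    traffic_per_unit, conversion = channels[channel]
    return activity_units * traffic_per_unit * conversion

def activity_needed(channel: str, target_users: float) -> float:
    """Invert the equation: how much activity is needed to hit a user target?"""
    traffic_per_unit, conversion = channels[channel]
    return target_users / (traffic_per_unit * conversion)

# "I need 200 more users next month" -> how many blog posts is that?
posts = activity_needed("blog_post", 200)  # 200 / (500 * 0.04) = 10 posts
```

Once measured rates replace the placeholders, the inversion in `activity_needed` turns a user target directly into a work plan.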
Testing the Equation:
Common Mistake:
Guessing at conversion rates without testing. Assuming all users from the same channel are equal quality. Scaling before validating the equation.
The Pattern:
Every channel has economics. Without tracking them, you over-invest in losers and under-invest in winners.
Track Per Channel:
The Decision Framework:
Monthly channel review: Which channels are profitable? Which are drains? Quarterly reallocation: 3x budget to winners, kill losers.
Critical Insight: Channel Quality Varies
Cheap CAC doesn't mean good CAC. Organic search might deliver users at $0 CAC with 85% 30-day retention. Paid search might deliver users at $12 CAC with 45% 30-day retention. The "free" channel is 10x more valuable when you factor in retention and LTV.
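The comparison above can be made mechanical. This sketch folds 30-day retention into LTV and applies the kill criterion from the decision framework; the margin and the example figures are assumptions carried over from the text, not real data:

```python
# Sketch of per-channel economics. Channel quality = retained LTV per dollar
# of CAC, not raw CAC. All numbers are hypothetical.

def channel_value(cac: float, ltv: float, retention_30d: float, margin: float = 0.8):
    """Return (profitable, retained_ltv_per_cac_dollar) for one channel."""
    retained_ltv = ltv * retention_30d        # crude retention adjustment
    profitable = cac < retained_ltv * margin  # the kill criterion
    # "Free" channels like organic search have zero CAC; avoid dividing by it.
    value = retained_ltv / cac if cac > 0 else float("inf")
    return profitable, value

organic = channel_value(cac=0.0, ltv=300, retention_30d=0.85)
paid = channel_value(cac=12.0, ltv=300, retention_30d=0.45)
# Both clear the kill bar, but organic dominates once retention and LTV
# are factored in -- cheap CAC alone told us nothing.
```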
Systematic Testing:
Test 2 new channels monthly. Give each 4 weeks of data. Kill decisively if economics don't work. Document learnings regardless of outcome — what didn't work is as valuable as what did.
Common Mistake:
Tracking CAC without retention. A cheap channel that churns users costs more than an expensive channel that retains them.
The Pattern:
Users decide product value in the first 5-10 minutes. If they don't reach the aha moment fast, they abandon.
The Activation Audit:
If TTFV (time-to-first-value) > 10 minutes, you have an activation problem.
Before: Sign up → confirm email → fill profile → configure settings → read docs → first action
After: Sign up → pre-loaded sample data → first action (immediate aha moment)
Specific Fixes:
Common Mistake:
Assuming users will read documentation. They won't. They'll click around for 5 minutes, and if nothing works, they leave.
The Pattern:
PLG works for $1K-$10K ARR. Between $20K-$50K, the motion breaks because organizational friction kicks in: procurement, legal, security, multi-stakeholder buy-in.
The Hybrid Approach:
PLG ($0-$10K): Self-serve sign-up → free tier → paid tier → credit card checkout → automated onboarding
Sales-Assisted ($10K-$50K): Self-serve discovery → sales engages on usage signals → human-negotiated contract → dedicated onboarding
Enterprise ($50K+): Outbound or inbound lead → demo → POC → proposal → legal/security review → executive sponsor
Product-Qualified Lead (PQL) Signals (When to Trigger Sales):
The Handoff:
Bad: "Hey, I saw you signed up." (Cold, generic, kills trust)
Good: "Your team is using [specific feature] across 12 repos. We can help you [specific value]. Want 15 minutes?" (Warm, specific, offers value)
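The tier boundaries and PQL trigger above can be expressed as a simple routing rule. The function name, the two-signal threshold, and the signal count are illustrative assumptions, not a real API:

```python
# Routing sketch using the tier boundaries from the text ($10K / $50K ACV).

def route_motion(estimated_acv: float, pql_signals: int = 0) -> str:
    """Pick a GTM motion from estimated ACV; strong product-qualified-lead
    (PQL) signals can upgrade a self-serve account to sales-assist early."""
    if estimated_acv >= 50_000:
        return "enterprise"       # full sales: demo -> POC -> proposal
    if estimated_acv >= 10_000 or pql_signals >= 2:
        return "sales-assist"     # human engages on usage signals
    return "plg-self-serve"       # leave them alone until they need help

motion = route_motion(5_000)                   # small deal: stay self-serve
upgraded = route_motion(5_000, pql_signals=3)  # usage signals trigger sales
```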
Common Mistake:
Sales engaging too early on <$5K deals. Kills PLG motion, scares users. Let them self-serve until they need help.
The Pattern:
Forecasts are always wrong. Plans are still valuable because they force thinking and create accountability.
Model Three Scenarios:
Baseline (current trajectory continues):
Upside (if all growth initiatives execute):
Downside (if key channels fail):
Use This For:
Monthly Update: Compare forecast to actual. Adjust model. Don't forecast-and-forget.
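A minimal version of the three-scenario model makes the monthly forecast-vs-actual comparison cheap to maintain. The growth rates and starting MRR here are placeholder assumptions, not recommendations:

```python
# Minimal three-scenario forecast sketch; all figures are hypothetical.

def project(start_mrr: float, monthly_growth: float, months: int = 12) -> float:
    """Compound MRR forward at a constant monthly growth rate."""
    return start_mrr * (1 + monthly_growth) ** months

scenarios = {
    "baseline": 0.05,  # current trajectory continues
    "upside":   0.10,  # all growth initiatives execute
    "downside": 0.00,  # key channels fail, growth goes flat
}

forecast = {name: round(project(100_000, g)) for name, g in scenarios.items()}
```

Reporting the spread (downside to upside) rather than a single number keeps the forecast a range instead of a target.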
Common Mistake:
Overly optimistic forecasts that assume everything works. Not updating monthly. Treating forecast as target (it's a range, not a number).
The Pattern:
Knowledge dies with people. The goal isn't one-off wins — it's systematizing what works.
After every successful campaign or experiment, write a 1-page playbook:
PLAYBOOK: [Channel/Tactic Name]
Goal: [What outcome]
Steps: [Numbered, specific enough for someone unfamiliar]
Expected Output: [Specific metrics]
Metrics to Track: [How to measure]
Risks & Mitigations: [What could go wrong]
Owner: [Name]
Last Updated: [Date]
The Test: Could someone who wasn't involved execute this playbook? If not, it's too vague.
Review quarterly. Remove playbooks that no longer work. Update ones that have evolved. This becomes your growth operating system.
Common Mistake:
Running experiments without documenting learnings. Scaling before you understand the mechanism. Having growth knowledge trapped in one person's head.
Can users get value in <10 min without docs?
├─ No → Sales-led required
└─ Yes → Can they self-serve implementation?
    ├─ No → Sales-led required
    └─ Yes → Is buyer = user?
        ├─ No → Hybrid (PLG + sales-assist)
        └─ Yes → Pure PLG viable
CAC < (LTV × margin)?
├─ No → Kill within 4 weeks
└─ Yes → 90-day retention > 60%?
    ├─ No → Optimize (improve activation/onboarding)
    └─ Yes → Scale aggressively (3x budget)
1. Assuming PLG always works: Product complexity + buyer seniority = sales-led wins. Test before committing.
2. No channel economics: Every channel has CAC, retention, and LTV. Track them or you're flying blind.
3. Free tier too generous or too limited: Too generous: no conversion. Too limited: no activation. Allow 10-20 aha moments.
4. No growth equation: "Do more marketing" isn't a strategy. Map inputs → outputs → conversions per channel.
5. Scaling before validating: Collect 4 weeks of data before scaling any channel. Kill decisively if economics don't work.
6. Growth knowledge in one person's head: Document every successful experiment as a playbook.
PLG readiness: Value in <10 min + self-serve implementation + buyer = user
Growth equation: Activity (input) → Traffic (output) → Conversions, per channel
Channel economics: CAC, conversion, 30/90-day retention, LTV, payback — per channel, monthly review
Kill criteria: CAC > (LTV × margin) → 4 weeks to improve, then kill
PQL signals: Usage depth + expansion (multi-user) + buying (SSO/compliance requests)
Sales handoff: <$10K: PLG → $10K-$50K: Sales-assist → >$50K: Full sales
Forecast: Baseline + Upside + Downside, updated monthly
Based on experience across multiple platform companies — leading a growth team building PLG and sales-led motions from scratch, and operating inside successful PLG + sales-led machines at hypergrowth companies. The combination taught both sides: what it takes to establish these motions early (when resources are thin and every bet matters) and what the mature version looks like at scale (growth equations, channel economics systems, freemium pricing gates, and systematic A/B testing that documents every win and loss into executable playbooks). Not theory — lessons from building the machine and operating inside ones that worked.
Weekly Installs: 191
GitHub Stars: 26.9K
First Seen: 6 days ago
Security Audits: Gen Agent Trust Hub (Pass), Socket (Pass), Snyk (Pass)
Installed on: codex (173), gemini-cli (172), opencode (172), cursor (170), github-copilot (168), amp (168)