gtm-ai-gtm by github/awesome-copilot
npx skills add https://github.com/github/awesome-copilot --skill gtm-ai-gtm
Go-to-market strategy for AI products. These aren't generic AI principles — they're patterns from selling autonomous AI agents into enterprises where "autonomous" scared buyers and "teammate" converted them.
Triggers:
Context:
What I Learned Selling Autonomous AI Agents:
Three months in, enterprise security reviews were passing fast. Good sign, right? Then the pattern emerged: security approved, but operations rejected us.
The objection wasn't "will the AI break production?" — they assumed it would break production eventually. The real question was:
"Who's responsible when the agent does something wrong?"
Not "do we trust the agent?" — "do we trust our team to handle this?"
Why This Matters:
Autonomous agents create a new operational burden. You're not selling AI capability, you're selling organizational readiness. When your agent halts production at 2am, who gets paged? Who fixes it? Who explains it to the VP?
Framework: The Accountability Cascade
Before deploying AI agents, enterprises need clear answers:
If you can't answer all three, they won't buy. Doesn't matter how good your AI is.
How This Changes Your Sales Process:
Old approach:
New approach:
The Qualification Question:
"Walk me through what happens when the agent takes an action that breaks a workflow. Who gets alerted? Who investigates? Who decides whether to roll back or fix forward?"
If they can't answer, they're not ready. Pause the deal and help them build the process first.
Common Mistake:
Treating this as a product objection ("we'll make the AI more accurate"). It's an organizational objection. More accuracy doesn't solve "who owns this at 2am?"
Pattern I've Seen Work:
Companies that succeed with AI agents already have:
Companies that struggle:
Decision Criteria:
Before demoing autonomous AI to enterprises, ask yourself: "If this breaks their production, who on their team owns the fix?" If you can't answer, they can't buy.
The Positioning Trap:
In early enterprise conversations, we positioned ourselves as an "autonomous AI agent." Buyers flinched. One word change — "autonomous" → "AI teammate" — and deal progression improved measurably.
Why? Word choice shapes buyer psychology.
The Three Framings:
1. Copilot (Safest, Lowest Value)
2. Agent (Scariest, Highest Value)
3. Teammate (Sweet Spot)
The Positioning Shift:
Before: "Autonomous AI agent that handles complex workflows end-to-end"
After: "AI teammate that pairs with your engineers on complex tasks"
Specific Language Choices That Mattered:
❌ Don't say:
✅ Do say:
How to Choose Your Framing:
Does your AI make decisions without human approval?
├─ Yes → Are you selling to developers or enterprises?
│ ├─ Developers → "Agent" framing (they want autonomous)
│ └─ Enterprises → "Teammate" framing (they want control)
└─ No → "Copilot" framing (augmentation, not automation)
The Hard Truth:
You can build an agent but position it as a copilot. You can't build a copilot and position it as an agent. Product capabilities set a ceiling; positioning chooses where you land below it.
Common Mistake:
Using "autonomous" because it sounds impressive. Impressive ≠ trusted. If buyers flinch at your positioning, you've lost them.
The Pattern:
Every AI company I've worked with faces this: Customer A uses 1,000 API calls/month. Customer B uses 10,000. Do you charge Customer B 10x more? If yes, they churn. If no, your margins collapse.
The Three Models:
1. Seat-Based ($X per user/month)
2. Usage-Based ($X per API call / prediction / hour)
3. Outcome-Based ($X per outcome achieved)
What Actually Works (Hybrid):
Base fee (covers fixed costs) + variable fee (scales with value).
Example structure:
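As a minimal sketch of how such a hybrid bill might compute — the base fee, included-usage allowance, and overage rate below are hypothetical illustrations, not the author's actual price points:

```python
def monthly_bill(base_fee: float, included_calls: int,
                 overage_price: float, calls_used: int) -> float:
    """Hybrid pricing: the base fee covers fixed costs plus an included
    usage allowance; the overage charge scales with usage (and, ideally,
    with value delivered). All numbers here are assumptions."""
    overage = max(0, calls_used - included_calls)
    return base_fee + overage * overage_price

# Hypothetical: $500/month base, 5,000 included calls, $0.05 per extra call.
# A customer using 12,000 calls owes 500 + 7,000 * 0.05 = $850.
```

The point of the structure: light users still cover your fixed costs via the base fee, while heavy users pay in proportion to use, so neither churn (from a 10x bill) nor margin collapse (from flat pricing) is forced.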
The Pricing Conversation I Wish I'd Had Earlier:
When pricing usage-based AI:
Ask the customer: "How much would it cost you to do this manually?"
If you charge $0.10 per API call but save them $2 in labor, you're underpriced. If you charge $0.50 per call but save them only $0.40, they won't use it enough to matter.
Pricing Rule:
Your variable cost should be 20-30% of customer's alternative cost. High enough to capture value, low enough that they'll use it liberally.
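The 20–30% rule above can be sketched as a one-liner — the function name and defaults are mine, not the author's:

```python
def price_band(alternative_cost: float,
               low: float = 0.20, high: float = 0.30) -> tuple:
    """Suggested per-unit price range: 20-30% of what the same unit of
    work would cost the customer to do manually (the article's rule of
    thumb; this helper is an illustrative assumption)."""
    return (alternative_cost * low, alternative_cost * high)

# A task costing $2.00 in manual labor suggests a $0.40-$0.60 price,
# consistent with the article's "$0.10 per call is underpriced" example.
lo, hi = price_band(2.00)
```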
Common Mistake:
Copying OpenAI's pricing ($0.01 per 1K tokens) because "that's what everyone does." Your cost structure isn't OpenAI's cost structure. Your value isn't OpenAI's value. Price for your business.
The Pattern:
You can't sell AI by saying "trust us, it works." You build trust in stages.
First: Transparency (Before First Demo)
Send these three docs before they ask:
Why this works: Buyers expect to do diligence. If you send docs before they ask, you look confident and credible.
Second: Control (In the Demo)
Show them the safety mechanisms:
Why this works: Fear of "runaway AI" is real. Showing control mechanisms proves you thought about failure modes.
Third: Performance (Week 4-8)
Prove it works:
Why this works: Proof beats promises. One customer saying "we saved X hours/week" is worth 100 marketing claims.
Fourth: Scale (When They're Serious)
Show enterprise readiness:
Why this works: Enterprises don't deploy MVPs. They need proof you won't fall over at 1000 users.
The Mistake I Made:
Trying to prove performance before explaining how the AI worked. Buyers didn't trust the benchmarks because they didn't understand the system. Order matters.
Decision Criteria:
If buyers ask "how does this work?" before you've demoed, you skipped transparency. Back up and send the docs.
What Doesn't Work:
Canned demo where AI magically solves everything. Buyers think "this won't work on our messy data."
What Works:
Show the AI making a mistake and recovering. Seriously.
Demo Structure That Works:
1. The Problem (30 seconds) "Your engineers spend hours on [specific task]. Here's what that looks like."
2. The AI Attempt (60 seconds) "Here's the AI handling the same task."
3. The Human Review (30 seconds) "Here's where the engineer reviews and approves."
4. The Outcome (30 seconds) "[X hours] → [Y minutes]. Engineer still owns the outcome, AI accelerates execution."
Why This Works:
The Pattern I've Seen:
Demos with perfect AI → Buyers skeptical
Demos with imperfect AI that recovers → Buyers engaged
Common Mistake:
Cherry-picking examples where AI is 100% accurate. Buyers know real-world data is messy. If you don't show messiness, they assume you're hiding it.
The Objection:
"This looks great, but what happens when the AI does something wrong?"
Bad Answer: "Our AI is 95% accurate, and we're improving it every week." (Translation: "It will break production 5% of the time, good luck with that")
Good Answer: "Great question. Let's walk through a failure scenario together."
Then Ask:
What This Does:
The Follow-Up:
"Here's what we recommend: Start with low-risk environments. Let the AI handle non-critical workflows for 2-4 weeks. See how your team handles its mistakes. Then expand scope when you're confident in the process."
Why This Works:
You're not selling perfection. You're selling a tool that requires operational maturity. Filtering for mature buyers is better than convincing immature ones.
The Pattern:
Mature buyers say: "We already have runbooks for tool failures, we'll add AI to them." Immature buyers say: "Can you make it never fail?"
Decision Criteria:
If a buyer demands 100% accuracy, walk away. They're not ready. Come back when they have incident response processes.
The Pattern:
You're competing in the AI agent space. Every competitor's homepage says the same thing: "Automate [workflow] with AI." Your differentiation requires explaining complex technical benchmarks that buyers don't understand.
This is the positioning trap: competing on features against better-funded companies on their battlefield.
How to Diagnose It:
Structural advantages that work for AI positioning:
Feature advantages that don't last:
The Test:
For every positioning claim, ask: Can a competitor copy this with a single product sprint? If yes, it's not defensible. Don't build your GTM on it.
Common Mistake:
Claiming you're "better" at what everyone does. In AI, benchmarks change monthly. Position on what's structurally different about your approach, not what's temporarily better about your model.
The Pattern:
The highest-intent enterprise buyers for AI agents are people who've already adopted a comparable tool and hit its limits. They've invested in learning, they understand the problem space, and they have a clear business case for the upgrade.
How to Identify Ceiling Moments:
The prospect has:
How to Target Them:
Why This Converts Better:
Ceiling-moment conversations convert 3-5x better than cold outreach because:
The Qualification Question:
"What's the most complex task you've tried to automate with your current tool, and where did it break down?"
If they have a specific answer with specific pain, they're a ceiling-moment buyer. If they say "it works fine," they're not ready.
Common Mistake:
Trying to convince tool-naive prospects to adopt AI agents. Bad conversion rates, long education cycles, and they'll compare you to "doing nothing" instead of "doing it better." Target buyers who already believe in the category.
Does your AI act autonomously (no approval per action)?
├─ Yes → Who are you selling to?
│ ├─ Developers → "Agent" framing
│ └─ Enterprises → "Teammate" framing
└─ No → "Copilot" framing
Can you measure customer outcomes reliably?
├─ Yes → Outcome-based (or hybrid with outcome component)
└─ No → Continue...
│
Does usage vary 5x+ by customer?
├─ Yes → Hybrid (base + usage)
└─ No → Seat-based
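The pricing decision tree above can be sketched as a single function — the return labels are my paraphrase of the tree's leaves:

```python
def choose_pricing_model(outcomes_measurable: bool,
                         usage_varies_5x: bool) -> str:
    """Encodes the pricing tree: reliable outcome measurement wins
    outright; otherwise high usage variance pushes toward hybrid."""
    if outcomes_measurable:
        return "outcome-based (or hybrid with outcome component)"
    if usage_varies_5x:
        return "hybrid (base + usage)"
    return "seat-based"
```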
Do they have incident response processes for tool failures?
├─ Yes → Continue...
│ │
│ Do they have on-call rotations for production systems?
│ ├─ Yes → Qualified buyer
│ └─ No → Help them build it first
└─ No → Not ready (come back in 6 months)
1. Using "autonomous" because it sounds impressive
2. Hiding AI failure modes
3. Treating "will it break production?" as the primary objection
4. Pricing usage-based AI like OpenAI
5. Skipping transparency docs before demo
6. Demoing perfect AI
7. Selling to buyers who demand 100% accuracy
Enterprise objection checklist:
Positioning word choices:
Demo structure:
Trust ladder:
Pricing hybrid formula:
Based on enterprise AI agent GTM across developer tools and infrastructure. Patterns drawn from working enterprise deal cycles selling autonomous AI products — some carried directly, others supported alongside sales leadership — including the positioning trap diagnosis that shifted from feature competition to structural differentiation, the ceiling-moment qualification that improved outbound conversion significantly, and frameworks tested across security, operations, and engineering buyer personas. Not theory — lessons from deals where "autonomous" killed conversations and "teammate" converted.
Weekly Installs: 179
Repository
GitHub Stars: 26.9K
First Seen: 6 days ago
Security Audits: Gen Agent Trust Hub: Pass · Socket: Pass · Snyk: Pass
Installed on:
codex: 164
opencode: 164
gemini-cli: 163
cursor: 162
github-copilot: 160
kimi-cli: 160