Important Prerequisite
Installing AI Skills requires a working network proxy with TUN mode enabled; this directly determines whether the installation completes successfully. View the full installation guide →
startup-competitors by ferdinandobons/startup-skill
npx skills add https://github.com/ferdinandobons/startup-skill --skill startup-competitors
Deep competitive intelligence that goes beyond surface-level profiles. Produces actionable battle cards, pricing landscape analysis, and strategic vulnerability mapping using real web data.
INTAKE → RESEARCH (3 parallel waves) → SYNTHESIS → BATTLE CARDS
The process is focused: understand the product, research competitors deeply across 3 dimensions, synthesize findings, and produce actionable output. Typical runtime: 15-25 minutes in Claude Code (parallel agents), 30-45 minutes in Claude.ai (sequential).
Default output language is English. If the user writes in another language or explicitly requests one, use that language for all outputs instead.
Short and focused — 1-2 rounds of questions, not an extended interview. The goal is just enough context to run targeted research.
Before asking questions, check if a startup-design session has already been completed for this project. Look for these files in the working directory or subdirectories:
01-discovery/competitor-landscape.md — competitor profiles and analysis
01-discovery/market-analysis.md — market size, trends, regulatory
01-discovery/target-audience.md — customer personas, pain points
00-intake/brief.md — product description and context
If these files exist, read them and use the data as a head start:
competitor-landscape.md as the starting point for deeper analysis (startup-design profiles 5-8 competitors at surface level — this skill goes much deeper on each)
market-analysis.md to contextualize the competitive landscape
target-audience.md to focus the sentiment mining on what matters most
Tell the user: "I found data from a previous startup-design session. I'll use it as a starting point and go deeper on the competitive analysis."
Skip the intake interview entirely if the startup-design files provide enough context. Go straight to research.
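A minimal sketch of this prior-session check, assuming a Python helper; the file paths come from the list above, while the function name and return shape are illustrative:

```python
from pathlib import Path

# Files a previous startup-design session would have left behind (paths from the list above).
STARTUP_DESIGN_FILES = [
    "01-discovery/competitor-landscape.md",
    "01-discovery/market-analysis.md",
    "01-discovery/target-audience.md",
    "00-intake/brief.md",
]

def find_startup_design_data(root: str = ".") -> dict[str, Path]:
    """Return the startup-design files that exist in the working directory or any subdirectory."""
    found = {}
    for name in STARTUP_DESIGN_FILES:
        # "**/" matches the file at the root or at any depth below it.
        for candidate in Path(root).glob(f"**/{name}"):
            found[name] = candidate
            break
    return found

if __name__ == "__main__":
    if find_startup_design_data():
        print("I found data from a previous startup-design session. "
              "I'll use it as a starting point and go deeper on the competitive analysis.")
```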
Round 1 — The basics:
Round 2 — Sharpening (only if needed):
Don't over-interview. If the user gives a clear description upfront, skip straight to research. The competitive analysis itself will surface what matters.
Save to {project-name}/intake.md — a brief summary of the product, market, and known competitors. If built on startup-design data, note the source files used. The project name should be derived from the product/market (kebab-case, e.g., ai-email-assistant).
Create {project-name}/PROGRESS.md with: project name, skill name (startup-competitors), start date, language, research mode (Live / Knowledge-Based), and a phase checklist. Update it after each phase completes. If PROGRESS.md already exists from a previous session, resume from the last incomplete phase.
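A sketch of the PROGRESS.md bookkeeping, assuming Python; the fields mirror the description above, but the phase names and exact file layout are assumptions:

```python
from datetime import date
from pathlib import Path

# Illustrative phase checklist — the skill defines the real phases.
PHASES = ["Intake", "Research Wave 1", "Research Wave 2", "Research Wave 3",
          "Checkpoint", "Synthesis", "Verification"]

def init_or_resume_progress(project: str, language: str, mode: str) -> Path:
    """Create {project}/PROGRESS.md if it is missing; if it exists, leave it so the run can resume."""
    path = Path(project) / "PROGRESS.md"
    if path.exists():
        return path  # resume from the last incomplete phase recorded in this file
    path.parent.mkdir(parents=True, exist_ok=True)
    checklist = "\n".join(f"- [ ] {phase}" for phase in PHASES)
    path.write_text(
        f"# Progress: {project}\n"
        f"Skill: startup-competitors\n"
        f"Start date: {date.today().isoformat()}\n"
        f"Language: {language}\n"
        f"Research mode: {mode}\n\n"
        f"{checklist}\n",
        encoding="utf-8",
    )
    return path

# Example: init_or_resume_progress("ai-email-assistant", "English", "Live")
```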
After intake, assess market complexity and present the Research Depth recommendation to the user.
Reference: Read references/research-scaling.md for the complexity scoring matrix, tier definitions, wave configurations, and the user communication template.
Present the recommendation using the user communication template (see research-scaling.md for the exact template). The selected tier determines the number of agents per wave and the search rounds per agent in Phase 2. See research-scaling.md for the exact wave configurations per tier.
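How the tier could translate into wave settings, as a hedged sketch; the tier names and numbers below are placeholders, not the real values from references/research-scaling.md:

```python
# Placeholder tiers — the authoritative scoring matrix and values live in references/research-scaling.md.
TIER_CONFIG = {
    "light":    {"agents_per_wave": 1, "search_rounds_per_agent": 2},
    "standard": {"agents_per_wave": 2, "search_rounds_per_agent": 3},
    "deep":     {"agents_per_wave": 3, "search_rounds_per_agent": 4},
}

def wave_plan(tier: str) -> dict:
    """Look up how many agents each research wave runs and how many search rounds each agent gets."""
    return TIER_CONFIG[tier]
```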
Three parallel research waves, each attacking the competitive landscape from a different angle. Together they produce a 360-degree view.
Check if the Agent tool is available:
This skill requires WebSearch for real data. If WebSearch is unavailable or denied, fall back to Knowledge-Based Mode: use training data, mark all findings with [Knowledge-Based — verify independently], and reduce confidence ratings by one level.
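A sketch of that fallback, assuming findings are plain dicts; the marker string and the one-level confidence downgrade come from the rule above, while the confidence scale itself is illustrative:

```python
CONFIDENCE_LEVELS = ["high", "medium", "low"]  # illustrative scale

def apply_knowledge_based_mode(finding: dict) -> dict:
    """Tag a finding produced without live web data and downgrade its confidence by one level."""
    finding["flags"] = finding.get("flags", []) + ["[Knowledge-Based — verify independently]"]
    level = finding.get("confidence", "medium")
    if level not in CONFIDENCE_LEVELS:
        level = "medium"
    idx = min(CONFIDENCE_LEVELS.index(level) + 1, len(CONFIDENCE_LEVELS) - 1)
    finding["confidence"] = CONFIDENCE_LEVELS[idx]
    return finding
```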
Reference: Read references/research-principles.md before starting any wave. It defines source quality tiers, cross-referencing rules, and how to handle data gaps.
Reference: Read references/research-wave-1-profiles-pricing.md for agent templates.
Two agents (or two sequential blocks):
A1: Competitor Deep-Dives — Identify and profile 5-8 direct competitors plus 2-3 adjacent solutions (broader platforms, manual alternatives, tools from neighboring categories that compete for the same budget). For each: product, features, team size, funding, traction signals, strengths, weaknesses. Go beyond their marketing page — check reviews, job postings, and funding data.
A2: Pricing Intelligence — For each competitor: reverse-engineer the pricing model. Not just "it costs $49/mo" but: what's the value metric (per seat? per usage? flat?), how do tiers differentiate, what pricing psychology do they use (anchoring, decoy, charm pricing), what's the switching cost (technical, contractual, emotional). Build a tier-by-tier comparison.
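The fields the two Wave 1 agents are asked to fill, sketched as Python dataclasses; the class and field names are my own shorthand for the items listed above, not a schema defined by the skill:

```python
from dataclasses import dataclass, field

@dataclass
class PricingTier:
    name: str
    price: str                 # as published, e.g. "$49/mo"
    value_metric: str          # per seat, per usage, flat fee, ...
    differentiators: list[str] = field(default_factory=list)

@dataclass
class CompetitorProfile:
    name: str
    category: str              # "direct" or "adjacent"
    product: str = ""
    features: list[str] = field(default_factory=list)
    team_size: str = ""
    funding: str = ""
    traction_signals: list[str] = field(default_factory=list)
    strengths: list[str] = field(default_factory=list)
    weaknesses: list[str] = field(default_factory=list)
    pricing_tiers: list[PricingTier] = field(default_factory=list)
    pricing_psychology: list[str] = field(default_factory=list)   # anchoring, decoy, charm pricing
    switching_costs: list[str] = field(default_factory=list)      # technical, contractual, emotional
```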
Reference: Read references/research-wave-2-sentiment-mining.md for agent templates.
Two agents (or two sequential blocks):
B1: Review Mining — Mine G2, Capterra, TrustRadius, Product Hunt, and App Store reviews for each competitor. Extract patterns: what do people praise? What do they complain about? What features do they request? Organize by competitor and by pain theme. Include verbatim quotes.
B2: Forum & Community Mining — Mine Reddit, Indie Hackers, Hacker News, Quora, and niche communities. Find: complaints about existing tools, "what do you use for X?" threads, migration stories, workaround discussions. Build a language map — the exact words customers use to describe their problems and desires. Identify churn signals — why people leave each competitor.
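A sketch of how Wave 2 findings could be organized; the sources and fields follow the B1/B2 descriptions above, while the structure names are illustrative:

```python
from dataclasses import dataclass, field

@dataclass
class PainTheme:
    theme: str                                                  # e.g. "slow support response"
    competitors: list[str] = field(default_factory=list)
    verbatim_quotes: list[str] = field(default_factory=list)    # exact reviewer wording
    sources: list[str] = field(default_factory=list)            # G2, Capterra, Reddit thread URL, ...

@dataclass
class SentimentFindings:
    praises: dict[str, list[str]] = field(default_factory=dict)        # competitor -> praised aspects
    complaints: list[PainTheme] = field(default_factory=list)
    feature_requests: dict[str, list[str]] = field(default_factory=dict)
    language_map: list[str] = field(default_factory=list)              # customers' own words for the problem
    churn_signals: dict[str, list[str]] = field(default_factory=dict)  # competitor -> reasons people leave
```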
Reference: Read references/research-wave-3-gtm-signals.md for agent templates.
Two agents (or two sequential blocks):
C1: Go-to-Market Analysis — For each competitor: primary acquisition channel, sales motion (self-serve vs. sales-led), content strategy (blog frequency, topics, quality), social presence, paid advertising signals, partnership plays. Build a channel opportunity map showing competitor saturation vs. opportunity per channel.
C2: Strategic & Growth Signals — Funding trajectory (rounds, investors, timing), hiring patterns (engineering-heavy = building, sales-heavy = scaling, support-heavy = struggling), content/SEO footprint (what keywords they rank for, where the gaps are), product roadmap signals from changelogs and public statements. Identify content pillars each competitor owns and which topics nobody covers well.
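The hiring-pattern heuristic above, expressed as a small sketch; the thresholds and department names are assumptions added for illustration:

```python
def hiring_signal(open_roles: dict[str, int]) -> str:
    """Map a competitor's open-role mix to the rough signal described above."""
    total = sum(open_roles.values()) or 1
    share = {dept: n / total for dept, n in open_roles.items()}
    if share.get("engineering", 0) > 0.5:
        return "building product"
    if share.get("sales", 0) > 0.4:
        return "scaling go-to-market"
    if share.get("support", 0) > 0.3:
        return "struggling with retention or product gaps"
    return "mixed / unclear"

# Example: hiring_signal({"engineering": 12, "sales": 3, "support": 2}) -> "building product"
```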
After all three waves complete, before synthesis, briefly present what the research found to the user: how many competitors were profiled, the top customer pain themes, the most notable strategic signals (funding, hiring, GTM patterns). Ask: "Does this align with your expectations? Any competitors to add or remove before I synthesize?"
Keep it to one message — this is a quick alignment check, not a full report.
Reference: Read references/research-synthesis.md for the synthesis protocol and battle card template.
After the checkpoint, synthesize raw findings into strategic deliverables. This step creates the real value — it's not reporting, it's pattern-matching across data sources.
Every deliverable file must start with a standardized header: # {Title}: {product} followed by *Skill: startup-competitors | Generated: {date}*. Every deliverable must end with Red Flags, Yellow Flags, and Sources sections.
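A sketch of that standardized boilerplate generated in Python; the header line and the closing section names come directly from the rule above, while the heading levels and function name are assumptions:

```python
from datetime import date

def deliverable_skeleton(title: str, product: str) -> tuple[str, str]:
    """Return the header every deliverable starts with and the flag/source sections it must end with."""
    header = (f"# {title}: {product}\n"
              f"*Skill: startup-competitors | Generated: {date.today().isoformat()}*\n")
    footer = "\n## Red Flags\n\n## Yellow Flags\n\n## Sources\n"  # section heading level is an assumption
    return header, footer
```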
{project-name}/competitors-report.md — The main deliverable:
{project-name}/competitive-matrix.md — Feature comparison table:
{project-name}/pricing-landscape.md — Dedicated pricing analysis:
{project-name}/battle-cards/{competitor-name}.md — One per competitor:
Keep raw research files in {project-name}/raw/ for reference:
competitor-profiles.md
pricing-intelligence.md
review-mining.md
forum-mining.md
gtm-analysis.md
strategic-signals.md
After synthesis completes and all deliverable files are written, run a verification pass.
Reference: Read references/verification-agent.md for the full verification protocol, universal checks, and skill-specific checks.
Save the result to {project-name}/verification-report.md. In Claude.ai, or when the Agent tool is unavailable, run the verification checks yourself in the main conversation, following the same protocol.
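One basic check a self-run verification pass could include, sketched under the assumption that confirming the deliverables exist is part of it; the full protocol lives in references/verification-agent.md, and the file names below come from the sections above:

```python
from pathlib import Path

def check_deliverables_exist(project: str, competitors: list[str]) -> list[str]:
    """Return the expected deliverable paths that are missing."""
    expected = [
        Path(project) / "competitors-report.md",
        Path(project) / "competitive-matrix.md",
        Path(project) / "pricing-landscape.md",
        Path(project) / "verification-report.md",
    ]
    expected += [Path(project) / "battle-cards" / f"{name}.md" for name in competitors]
    return [str(p) for p in expected if not p.exists()]
```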
Reference: Read references/honesty-protocol.md for the full protocol and anti-pattern details.
Competitive intelligence is only useful if it's honest. Core rules apply (label claims, quantify, declare gaps), plus competitive-intelligence-specific additions:
See references/honesty-protocol.md for the full anti-pattern table (6 entries) and detailed protocol.
Read only what you need for the current phase.
| File | When to Read | ~Lines | Purpose |
|---|---|---|---|
honesty-protocol.md | Start of session | ~72 | Full honesty protocol with anti-patterns |
research-principles.md | Before starting Phase 2 | ~54 | Source quality, cross-referencing, data gaps |
research-wave-1-profiles-pricing.md | When running Wave 1 | ~186 | Agent templates for profiles + pricing |
research-wave-2-sentiment-mining.md | When running Wave 2 | ~189 | Agent templates for review + forum mining |
research-wave-3-gtm-signals.md | When running Wave 3 | ~192 | Agent templates for GTM + strategic signals |
research-synthesis.md | After all waves complete | ~231 | How to synthesize + battle card template |
research-scaling.md | After intake, before Phase 2 | ~80 | Complexity scoring, tier definitions, wave configurations |
verification-agent.md | After synthesis | ~85 | Verification protocol, universal + skill-specific checks |
Weekly Installs: 47
Repository: https://github.com/ferdinandobons/startup-skill
GitHub Stars: 149
First Seen: Mar 10, 2026
Security Audits: Gen Agent Trust Hub: Pass, Socket: Pass, Snyk: Warn
Installed on: gemini-cli (46), github-copilot (46), codex (46), amp (46), cline (46), kimi-cli (46)