ai-shaped-readiness-advisor by deanpeters/product-manager-skills
npx skills add https://github.com/deanpeters/product-manager-skills --skill ai-shaped-readiness-advisor
Assess whether your product work is "AI-first" (using AI to automate existing tasks faster) or "AI-shaped" (fundamentally redesigning how product teams operate around AI capabilities). Use this to evaluate your readiness across the 5 essential PM competencies for 2026, identify gaps, and get concrete recommendations on which capability to build first.
Key Distinction: AI-first is cute (using Copilot to write PRDs faster). AI-shaped is survival (building a durable "reality layer" that both humans and AI trust, orchestrating AI workflows, compressing learning cycles).
This is not about AI tools—it's about organizational redesign around AI as co-intelligence. The interactive skill guides you through a maturity assessment, then recommends your next move.
| Dimension | AI-First (Cute) | AI-Shaped (Survival) |
|---|---|---|
| Mindset | Automate existing tasks | Redesign how work gets done |
| Goal | Speed up artifact creation | Compress learning cycles |
| AI Role | Task assistant | Strategic co-intelligence |
| Advantage | Temporary efficiency gains | Defensible competitive moat |
| Example | "Copilot writes PRDs 2x faster" | "AI agent validates hypotheses in 48 hours instead of 3 weeks" |
Critical Insight: If a competitor can replicate your AI usage by throwing bodies at it, it's not differentiation—it's just efficiency (which becomes table stakes within months).
These competencies define AI-shaped product work. You'll assess your maturity on each.
Building a durable "reality layer" that both humans and AI can trust—treating AI attention as a scarce resource and allocating it deliberately.
What it includes:
Key Principle: "If you can't point to evidence, constraints, and definitions, you don't have context. You have vibes."
Critical Distinction: Context Stuffing vs. Context Engineering
The 5 Diagnostic Questions:
AI-first version: Pasting PRDs into ChatGPT; no context boundaries; "more is better" mentality
AI-shaped version: CLAUDE.md files, evidence databases, constraint registries that AI agents reference; two-layer memory architecture; Research→Plan→Reset→Implement cycle to prevent context rot
Deep Dive: See context-engineering-advisor for detailed guidance on diagnosing context stuffing and implementing memory architecture.
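As a concrete sketch of what a minimal "reality layer" could look like, the registry below models constraints with evidence pointers and owners. All entries, field names, and file paths are hypothetical illustrations, not part of this skill:

```python
from dataclasses import dataclass

@dataclass
class Constraint:
    """One entry in a hypothetical constraints registry."""
    name: str      # short handle, e.g. "latency-budget"
    rule: str      # the constraint itself
    evidence: str  # pointer to the source document or decision
    owner: str     # who can change it

# A tiny reality layer: constraints AI agents can reference instead of vibes.
REGISTRY = [
    Constraint("latency-budget", "p95 API latency must stay under 300 ms",
               "evidence/perf-review-q3.md", "platform team"),
    Constraint("data-residency", "EU customer data never leaves EU regions",
               "evidence/legal-memo-17.md", "legal"),
]

def context_for(topic: str) -> list[Constraint]:
    """Retrieve only the constraints relevant to a topic
    (shaped attention, not context stuffing)."""
    return [c for c in REGISTRY if topic in c.rule or topic in c.name]

print([c.name for c in context_for("latency")])  # ['latency-budget']
```

The point of the retrieval function is the boundary: an agent asks for context on a topic and gets a scoped, evidence-backed subset, never the whole document pile.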
Creating repeatable, traceable AI workflows (not one-off prompts).
What it includes:
Key Principle: One-off prompts are tactical. Orchestrated workflows are strategic.
AI-first version: "Ask ChatGPT to analyze this user feedback"
AI-shaped version: Automated workflow that ingests feedback, tags themes, generates hypotheses, flags contradictions, logs decisions
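The AI-shaped version described above can be sketched as a small orchestrated pipeline in which each stage is a named, replayable step rather than a one-off prompt. Function names and the keyword-based tagger are stand-in assumptions (a real pipeline would call an LLM at the tagging and hypothesis stages):

```python
def ingest(feedback: list[str]) -> list[dict]:
    # Normalize raw feedback into records (in practice: pull from your support tool).
    return [{"text": f} for f in feedback]

def tag_themes(records: list[dict]) -> list[dict]:
    # Stand-in for an LLM call that labels each item with a theme.
    for r in records:
        r["theme"] = "onboarding" if "sign up" in r["text"] else "performance"
    return records

def generate_hypotheses(records: list[dict]) -> list[str]:
    themes = {r["theme"] for r in records}
    return [f"Users struggle with {t}" for t in sorted(themes)]

def log_decisions(hypotheses: list[str]) -> None:
    for h in hypotheses:
        print(f"LOGGED: {h}")  # in practice: append to a versioned decision log

# Orchestration: each stage is traceable and can be re-run on new input.
records = tag_themes(ingest(["Can't sign up on mobile", "Search is slow"]))
log_decisions(generate_hypotheses(records))
```

Because every stage is a named function, the workflow can be versioned, tested, and handed to a teammate, which is exactly what a pasted prompt cannot be.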
Using AI to compress learning cycles (not just speed up tasks).
What it includes:
Key Principle: Do less, purposefully. AI should remove bottlenecks, not generate more work.
AI-first version: "AI writes user stories faster"
AI-shaped version: "AI runs feasibility checks overnight, eliminating 2 weeks of technical discovery"
Redesigning team systems so AI operates as co-intelligence, not an accountability shield.
What it includes:
Key Principle: AI amplifies judgment, doesn't replace accountability.
AI-first version: "I used AI" as an excuse for bad outputs
AI-shaped version: Clear review protocols; AI outputs treated as drafts requiring human validation
Moving beyond efficiency to create defensible competitive advantages.
What it includes:
Key Principle: "If a competitor can copy it by throwing bodies at it, it's not differentiation."
AI-first version: "We use AI to write better docs"
AI-shaped version: "We validate product hypotheses in 2 days vs. the industry standard of 3 weeks—shipping 6x more validated features per quarter"
✅ Use this when:
❌ Don't use this when:
Use workshop-facilitation as the default interaction protocol for this skill.
It defines:
- Other (specify) when useful

This file defines the domain-specific assessment content. If there is a conflict, follow this file's domain logic.
This interactive skill uses adaptive questioning to assess your maturity across 5 competencies, then recommends which to prioritize.
- Context Qx/8 during context gathering
- Scoring Qx/5 during maturity scoring
- Other (specify) for open-ended answers. Accept multi-select replies like 1,3 or 1 and 3.

Agent opening prompt (use this first):
"Quick heads-up before we start: this usually takes about 7-10 minutes and up to 13 questions total (8 context + 5 scoring).
How do you want to do this?
Accept selections as #1, 1, 1 and 3, 1,3, or custom text.
Mode behavior:
- Label inferred answers as Assumption.
- Give a confidence (High, Medium, Low) for each assumption.
- At the final summary, include an Assumptions to Validate section when context dump or best guess mode was used.
Agent asks:
Collect context using this exact sequence, one question at a time:
After question 8, summarize back in 4 lines:
Agent asks:
Let's assess your Context Design capability—how well you've built a "reality layer" that both humans and AI can trust, and whether you're doing context stuffing (volume without intent) or context engineering (structure for attention).
Which statement best describes your current state?
Level 1 (AI-First / Context Stuffing): "I paste entire documents into ChatGPT every time I need something. No shared knowledge base. No context boundaries."
Level 2 (Emerging / Early Structure): "We have some docs (PRDs, strategy memos), but they're scattered. No consistent format. Starting to notice context stuffing issues (vague responses, normalized retries)."
Level 3 (Transitioning / Context Engineering Emerging): "We've started using CLAUDE.md files and project instructions. Constraints registry exists. We're identifying what to persist vs. retrieve. Experimenting with Research→Plan→Reset→Implement cycle."
Level 4 (AI-Shaped / Context Engineering Mastery): "We maintain a durable reality layer: constraints registry (20+ entries), evidence database, operational glossary (30+ terms). Two-layer memory architecture (short-term conversational + long-term persistent via vector DB). Context boundaries defined and owned. AI agents reference these automatically. We use Research→Plan→Reset→Implement to prevent context rot."
Select your level: [1, 2, 3, or 4]
Note: If you selected Level 1-2 and struggle with context stuffing, consider using context-engineering-advisor to diagnose and fix Context Hoarding Disorder before proceeding.
User response: [Selection]
Agent records: Context Design maturity = [Level X]
Agent asks:
Now let's assess Agent Orchestration—whether you have repeatable AI workflows or just one-off prompts.
Which statement best describes your current state?
Level 1 (AI-First): "I type prompts into ChatGPT as needed. No saved workflows or templates."
Level 2 (Emerging): "I have a few saved prompts I reuse. Maybe some custom GPTs or Claude Projects."
Level 3 (Transitioning): "We've built some multi-step workflows (research → synthesis → critique). Tracked in tools like Notion or Linear."
Level 4 (AI-Shaped): "We have orchestrated AI workflows that run autonomously: research → synthesis → critique → decision → log rationale. Each step is traceable and version-controlled."
Select your level: [1, 2, 3, or 4]
User response: [Selection]
Agent records: Agent Orchestration maturity = [Level X]
Agent asks:
Next, Outcome Acceleration—are you using AI to compress learning cycles, or just to speed up tasks?
Which statement best describes your current state?
Level 1 (AI-First): "AI helps me write docs faster (PRDs, user stories). Saves me a few hours per week."
Level 2 (Emerging): "AI helps with research and synthesis (summarize user feedback, analyze competitors). Saves research time."
Level 3 (Transitioning): "We use AI to run experiments faster (PoL probes, feasibility checks). Cut validation time from weeks to days."
Level 4 (AI-Shaped): "AI systematically removes bottlenecks: overnight feasibility checks, async synthesis replaces meetings, automated validation against constraints. Learning cycles 5-10x faster."
Select your level: [1, 2, 3, or 4]
User response: [Selection]
Agent records: Outcome Acceleration maturity = [Level X]
Agent asks:
Now assess Team-AI Facilitation—how well you've redesigned team systems for AI as co-intelligence.
Which statement best describes your current state?
Level 1 (AI-First): "I use AI privately. Team doesn't know or doesn't use it. No shared norms."
Level 2 (Emerging): "Team uses AI, but no formal review process. 'I used AI' mentioned casually."
Level 3 (Transitioning): "We have review norms emerging (AI outputs are drafts, not finals). Evidence standards discussed but not codified."
Level 4 (AI-Shaped): "Clear protocols: AI outputs require human validation, evidence standards codified, decision authority explicit (AI recommends, humans decide). Team treats AI as co-intelligence."
Select your level: [1, 2, 3, or 4]
User response: [Selection]
Agent records: Team-AI Facilitation maturity = [Level X]
Agent asks:
Finally, Strategic Differentiation—are you creating defensible competitive advantages, or just efficiency gains?
Which statement best describes your current state?
Level 1 (AI-First): "We use AI to work faster (write better docs, respond to customers quicker). Efficiency gains only."
Level 2 (Emerging): "AI enables us to do things we couldn't before (analyze 10x more data, test more hypotheses). New capabilities, but competitors could replicate."
Level 3 (Transitioning): "We've redesigned some workflows around AI (e.g., validate hypotheses in 2 days vs. 3 weeks). Starting to create separation."
Level 4 (AI-Shaped): "We've fundamentally rewired how we operate: customers get capabilities they can't get elsewhere, our learning cycles are 10x faster than industry standard, our economics are 5x better. Competitors can't replicate without full org redesign."
Select your level: [1, 2, 3, or 4]
User response: [Selection]
Agent records: Strategic Differentiation maturity = [Level X]
Agent synthesizes:
Here's your AI-Shaped Readiness Profile:
┌─────────────────────────────┬───────┬──────────┐
│ Competency │ Level │ Maturity │
├─────────────────────────────┼───────┼──────────┤
│ 1. Context Design │ X │ [Label] │
│ 2. Agent Orchestration │ X │ [Label] │
│ 3. Outcome Acceleration │ X │ [Label] │
│ 4. Team-AI Facilitation │ X │ [Label] │
│ 5. Strategic Differentiation│ X │ [Label] │
└─────────────────────────────┴───────┴──────────┘
Overall Assessment: [AI-First / Emerging / Transitioning / AI-Shaped]
Maturity Labels:
Overall Assessment Logic:
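The skill's exact assessment logic is not reproduced here. One plausible rule, bucketing the average of the five levels into the four labels, could be sketched as follows (the averaging rule and thresholds are illustrative assumptions, not the skill's canonical logic):

```python
def overall_assessment(levels: dict[str, int]) -> str:
    """Map five 1-4 competency levels to an overall label.
    Averaging is an illustrative assumption; the skill may weight
    competencies differently."""
    avg = sum(levels.values()) / len(levels)
    if avg < 1.5:
        return "AI-First"
    if avg < 2.5:
        return "Emerging"
    if avg < 3.5:
        return "Transitioning"
    return "AI-Shaped"

profile = {"Context Design": 2, "Agent Orchestration": 1,
           "Outcome Acceleration": 2, "Team-AI Facilitation": 1,
           "Strategic Differentiation": 1}
print(overall_assessment(profile))  # AI-First (average 1.4)
```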
Agent asks:
Based on your maturity profile, which competency should you prioritize first?
Agent analyzes dependencies:
Dependency Logic:
Agent recommends:
Based on your profile, I recommend focusing on [Competency Name] first because:
Option 1: Context Design (if Level 1-2)
Option 2: Agent Orchestration (if Context is 3+, but Orchestration is 1-2)
Option 3: Outcome Acceleration (if Context + Orchestration are 3+)
Option 4: Team-AI Facilitation (if usage is individual, not team-wide)
Option 5: Strategic Differentiation (if all others are 3+)
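The option rules above form a simple decision tree. A sketch of that logic follows; modeling team-wide usage as a boolean, and placing the team-wide check before Option 3, are interpretations rather than the skill's stated ordering:

```python
def recommend_priority(levels: dict[str, int], team_wide: bool) -> str:
    """Decision tree mirroring Options 1-5 above."""
    if levels["Context Design"] <= 2:
        return "Context Design"            # Option 1: context is foundational
    if levels["Agent Orchestration"] <= 2:
        return "Agent Orchestration"       # Option 2: Context is 3+
    if not team_wide:
        return "Team-AI Facilitation"      # Option 4: usage is individual
    if levels["Outcome Acceleration"] <= 2:
        return "Outcome Acceleration"      # Option 3: Context + Orchestration 3+
    return "Strategic Differentiation"     # Option 5: all others 3+

profile = {"Context Design": 3, "Agent Orchestration": 2,
           "Outcome Acceleration": 2}
print(recommend_priority(profile, team_wide=True))  # Agent Orchestration
```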
Which would you like to focus on?
Options:
User response: [Selection]
Agent provides tailored action plan based on selected priority:
Goal: Build a durable "reality layer" that both humans and AI trust—move from context stuffing to context engineering.
Pre-Phase: Diagnose Context Stuffing (If Needed)

If you're at Level 1-2, first diagnose context stuffing symptoms:
(see context-engineering-advisor)

Phase 1: Document Constraints (Week 1)
Phase 2: Build Operational Glossary (Week 2)
Phase 3: Establish Evidence Standards + Context Boundaries (Week 3)
Phase 4: Implement Memory Architecture + Workflows (Week 4)
Success Criteria:
Related Skills:
- context-engineering-advisor (Interactive) — Deep dive on diagnosing context stuffing and implementing memory architecture
- problem-statement.md — Define constraints before framing problems
- epic-hypothesis.md — Evidence-based hypothesis writing

Goal: Turn one-off prompts into repeatable, traceable AI workflows.
Phase 1: Map Current Workflows (Week 1)
Phase 2: Design Orchestrated Workflow (Week 2)
Phase 3: Build and Test (Week 3)
Phase 4: Document and Scale (Week 4)
Success Criteria:
Related Skills:
- pol-probe-advisor.md — Use orchestrated workflows for validation experiments

Goal: Use AI to compress learning cycles, not just speed up tasks.
Phase 1: Identify Bottleneck (Week 1)
Phase 2: Design AI Intervention (Week 2)
Phase 3: Run Pilot (Week 3)
Phase 4: Scale (Week 4)
Success Criteria:
Related Skills:
- pol-probe.md — Use AI to run PoL probes faster
- discovery-process.md — Compress discovery cycles with AI

Goal: Redesign team systems so AI operates as co-intelligence, not an accountability shield.
Phase 1: Establish Review Norms (Week 1)
Phase 2: Set Evidence Standards (Week 2)
Phase 3: Define Decision Authority (Week 3)
Phase 4: Build Psychological Safety (Week 4)
Success Criteria:
Related Skills:
- problem-statement.md — Evidence-based problem framing
- epic-hypothesis.md — Testable, evidence-backed hypotheses

Goal: Create defensible competitive advantages, not just efficiency gains.
Phase 1: Identify Moat Opportunities (Week 1)
Phase 2: Design AI-Enabled Capability (Week 2)
Phase 3: Build and Test (Weeks 3-4)
Phase 4: Validate Moat (Week 5)
Success Criteria:
Related Skills:
- positioning-statement.md — Articulate your AI-driven differentiation
- jobs-to-be-done.md — Understand what customers hire your AI capabilities to do

Agent offers:
Would you like me to create a progress tracker for your AI-shaped transformation?
Tracker includes:
Options:
Context:
Assessment Results:
Recommendation: Focus on Context Design first.
Action Plan (Week 1-4):
Outcome: After 4 weeks, Context Design → Level 3. Unlocks Agent Orchestration next quarter.
Context:
Assessment Results:
Recommendation: Focus on Outcome Acceleration (foundation is solid; now compress learning cycles).
Action Plan (Week 1-4):
Outcome: Learning cycles 5x faster → strategic separation from competitors → Level 4 Outcome Acceleration + Level 3 Strategic Differentiation.
Context:
Assessment Results:
Recommendation: Focus on Team-AI Facilitation first (distributed team needs shared norms before building infrastructure).
Action Plan (Week 1-4):
Outcome: Team-AI Facilitation → Level 3. Creates foundation for Context Design and Agent Orchestration next.
Failure Mode: "We use AI to write PRDs 2x faster—we're AI-shaped!"
Consequence: Competitors copy within 3 months; no lasting advantage.
Fix: Ask: "If a competitor threw 2x more people at this, could they match us?" If yes, it's efficiency (table stakes), not differentiation.
Failure Mode: Building Agent Orchestration workflows without durable context.
Consequence: AI workflows are fragile (context changes break everything).
Fix: Context Design is foundational. Don't skip it. Build constraints registry, glossary, evidence standards first.
Failure Mode: "I'm AI-shaped, but my team isn't."
Consequence: Can't scale; workflows die when you're on vacation.
Fix: Prioritize Team-AI Facilitation. Shared norms > individual productivity.
Failure Mode: "Should we use Claude or ChatGPT?"
Consequence: Tool debates distract from organizational redesign.
Fix: Tools don't matter. Workflows matter. Focus on redesigning how work gets done, not which AI you use.
Failure Mode: "AI helps us ship faster!"
Consequence: Ship the wrong thing faster (if you're not compressing learning cycles).
Fix: Outcome Acceleration is about learning faster, not building faster. Validate hypotheses in days, not weeks.