product-analysis by daymade/claude-code-skills
```shell
npx skills add https://github.com/daymade/claude-code-skills --skill product-analysis
```
Multi-path parallel product analysis that combines Claude Code agent teams and Codex CLI for cross-model test-time compute scaling.
Core principle: Same analysis task, multiple AI perspectives, deep synthesis.
```
/product-analysis full
        │
        ├─ Step 0: Auto-detect available tools (codex? competitors?)
        │
   ┌────┴───────────────────┐
   │                        │
Claude Code              Codex CLI (auto-detected)
Task Agents              (background Bash)
(Explore ×3-5)           (×2-3 parallel)
   │                        │
   └────────────┬───────────┘
                │
     Synthesis (main context)
                │
       Structured Report
```
Before launching any agents, detect what tools are available:
```shell
# Check if Codex CLI is installed
which codex 2>/dev/null && codex --version
```
Decision logic:

- If `codex` is found, inform the user: "Codex CLI detected (version X). Will run cross-model analysis for richer perspectives."
- If `codex` is not found, silently proceed with Claude Code agents only. Do NOT ask the user to install anything.

Also detect the project type to tailor agent prompts:
```shell
# Detect project type
ls package.json 2>/dev/null    # Node.js/React
ls pyproject.toml 2>/dev/null  # Python
ls Cargo.toml 2>/dev/null      # Rust
ls go.mod 2>/dev/null          # Go
```
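Taken together, Step 0 can be sketched as a small pre-flight check. This is an illustrative bash sketch, not the skill's actual implementation; the helper names `detect_project_type` and `detect_codex` are hypothetical:

```shell
# Step 0 pre-flight sketch: detect Codex CLI and the project type.
# detect_project_type / detect_codex are hypothetical helper names.

detect_project_type() {
  # Map well-known marker files in a directory to a project type.
  local dir="${1:-.}" type="unknown"
  [ -f "$dir/package.json" ]   && type="node"
  [ -f "$dir/pyproject.toml" ] && type="python"
  [ -f "$dir/Cargo.toml" ]     && type="rust"
  [ -f "$dir/go.mod" ]         && type="go"
  echo "$type"
}

detect_codex() {
  # Exit 0 only when the codex binary is on PATH.
  command -v codex >/dev/null 2>&1
}

if detect_codex; then
  echo "Codex CLI detected; enabling cross-model analysis."
else
  echo "Codex CLI not found; proceeding with Claude Code agents only."
fi
echo "project_type=$(detect_project_type .)"
```

Note that the not-found branch only prints locally and never prompts the user to install anything, matching the decision logic above.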
Parse $ARGUMENTS to determine analysis scope:
| Scope | What it covers | Typical agents |
|---|---|---|
| full | UX + API + Architecture + Docs (default) | 5 Claude + Codex (if available) |
| ux | Frontend navigation, information density, user journey, empty state, onboarding | 3 Claude + Codex (if available) |
| api | Backend API coverage, endpoint health, error handling, consistency | 2 Claude + Codex (if available) |
| arch | Module structure, dependency graph, code duplication, separation of concerns | 2 Claude + Codex (if available) |
| compare X Y | Self-audit + competitive benchmarking (invokes /competitors-analysis) | 3 Claude + competitors-analysis |
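Scope dispatch can be sketched as a case statement over `$ARGUMENTS`. The function name `plan_for_scope` and its output format are made up for illustration; the agent counts follow the scope table:

```shell
# Map an analysis scope (from $ARGUMENTS) to the typical agent plan from
# the scope table. plan_for_scope and its output format are illustrative.
plan_for_scope() {
  case "$1" in
    ""|full)  echo "claude_agents=5 codex=if-available" ;;
    ux)       echo "claude_agents=3 codex=if-available" ;;
    api|arch) echo "claude_agents=2 codex=if-available" ;;
    compare)  echo "claude_agents=3 extra=competitors-analysis" ;;
    *)        echo "error: unknown scope '$1'" >&2; return 1 ;;
  esac
}
```

For example, `plan_for_scope full` prints `claude_agents=5 codex=if-available`, and an empty argument falls through to the `full` default.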
Launch all exploration agents simultaneously using the Task tool (background mode).
For each dimension, spawn a Task agent with subagent_type: Explore and run_in_background: true:
Agent A — Frontend Navigation & Information Density
Explore the frontend navigation structure and entry points:
1. App.tsx: How many top-level components are mounted simultaneously?
2. Left sidebar: How many buttons/entries? What does each link to?
3. Right sidebar: How many tabs? How many sections per tab?
4. Floating panels: How many drawers/modals? Which overlap in functionality?
5. Count total first-screen interactive elements for a new user.
6. Identify duplicate entry points (same feature accessible from 2+ places).
Give specific file paths, line numbers, and element counts.
Agent B — User Journey & Empty State
Explore the new user experience:
1. Empty state page: What does a user with no sessions see? Count clickable elements.
2. Onboarding flow: How many steps? What information is presented?
3. Prompt input area: How many buttons/controls surround the input box? Which are high-frequency vs low-frequency?
4. Mobile adaptation: How many nav items? How does it differ from desktop?
5. Estimate: Can a new user complete their first conversation in 3 minutes?
Give specific file paths, line numbers, and UX assessment.
Agent C — Backend API & Health
Explore the backend API surface:
1. List ALL API endpoints (method + path + purpose).
2. Identify endpoints that are unused or have no frontend consumer.
3. Check error handling consistency (do all endpoints return structured errors?).
4. Check authentication/authorization patterns (which endpoints require auth?).
5. Identify any endpoints that duplicate functionality.
Give specific file paths and line numbers.
Agent D — Architecture & Module Structure (full/arch scope only)
Explore the module structure and dependencies:
1. Map the module dependency graph (which modules import which).
2. Identify circular dependencies or tight coupling.
3. Find code duplication across modules (same pattern in 3+ places).
4. Check separation of concerns (does each module have a single responsibility?).
5. Identify dead code or unused exports.
Give specific file paths and line numbers.
Agent E — Documentation & Config Consistency (full scope only)
Explore documentation and configuration:
1. Compare README claims vs actual implemented features.
2. Check config file consistency (base.yaml vs .env.example vs code defaults).
3. Find outdated documentation (references to removed features/files).
4. Check test coverage gaps (which modules have no tests?).
Give specific file paths and line numbers.
If Codex CLI was detected in Step 0, launch parallel Codex analyses via background Bash.
Each Codex invocation gets the same dimensional prompt but from a different model's perspective:
```shell
codex -m o4-mini \
  -c model_reasoning_effort="high" \
  --full-auto \
  "Analyze the frontend navigation structure of this project. Count all interactive elements visible to a new user on first screen. Identify duplicate entry points where the same feature is accessible from 2+ places. Give specific file paths and counts."
```
Run 2-3 Codex commands in parallel (background Bash), one per major dimension.
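The fan-out is plain background jobs plus `wait`. The sketch below parameterizes the command so the pattern is testable without Codex installed; `fan_out` and the abbreviated prompts are hypothetical:

```shell
# Launch one analysis per dimension as a background job, then wait for all.
# fan_out is a hypothetical helper; the command is parameterized so the
# pattern works with any CLI (in the skill it would be the codex call above).
fan_out() {
  local cmd="$1"; shift
  local dim pids=()
  for dim in "$@"; do
    "$cmd" "Analyze this project's $dim. Give specific file paths and counts." \
      > "codex-${dim// /-}.log" 2>&1 &
    pids+=($!)
  done
  wait "${pids[@]}"   # block until every background analysis finishes
}

# Real usage once Codex CLI was detected in Step 0 (flags as shown above):
#   run_codex() { codex -m o4-mini -c model_reasoning_effort="high" --full-auto "$1"; }
#   fan_out run_codex "frontend navigation" "backend API surface" "module architecture"
```

Writing each job's output to its own log keeps the per-dimension findings separable for the synthesis step.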
Important: Codex runs in the project's working directory. It has full filesystem access. The `--full-auto` flag (or `--dangerously-bypass-approvals-and-sandbox` for older versions) enables autonomous execution.
When scope is compare, invoke the competitors-analysis skill for each competitor:
Use the Skill tool to invoke: /competitors-analysis {competitor-name} {competitor-url}
This delegates to the orthogonal competitors-analysis skill which handles:
After all agents complete, synthesize findings in the main conversation context.
Compare findings across agents (Claude vs Claude, Claude vs Codex):
Extract hard numbers from agent reports:
| Metric | What to measure |
|---|---|
| First-screen interactive elements | Total count of buttons/links/inputs visible to new user |
| Feature entry point duplication | Number of features with 2+ entry points |
| API endpoints without frontend consumer | Count of unused backend routes |
| Onboarding steps to first value | Steps from launch to first successful action |
| Module coupling score | Number of circular or bi-directional dependencies |
Produce a layered optimization report:
## Product Analysis Report
### Executive Summary
[1-2 sentences: key finding]
### Quantified Findings
| Metric | Value | Assessment |
|--------|-------|------------|
| ... | ... | ... |
### P0: Critical (block launch)
[Issues that prevent basic usability]
### P1: High Priority (launch week)
[Issues that significantly degrade experience]
### P2: Medium Priority (next sprint)
[Issues worth addressing but not blocking]
### Cross-Model Insights
[Findings that only one model identified — worth investigating]
### Competitive Position (if compare scope)
[How we compare on key dimensions]
- Parses `$ARGUMENTS` for scope
- Detects Codex CLI (`which codex`)
- Invokes `/competitors-analysis` if compare scope

Weekly Installs: 65
Repository
GitHub Stars: 708
First Seen: Feb 25, 2026
Security Audits:
- Gen Agent Trust Hub: Fail
- Socket: Warn
- Snyk: Fail
Installed on:
- codex: 62
- github-copilot: 61
- kimi-cli: 61
- amp: 61
- gemini-cli: 61
- cursor: 61