Important prerequisite
Installing AI Skills requires a working network proxy with TUN mode enabled; this directly determines whether the installation completes successfully.
writing-plans by guanyang/antigravity-skills
npx skills add https://github.com/guanyang/antigravity-skills --skill writing-plans
Write comprehensive implementation plans assuming the engineer has zero context for our codebase and questionable taste. Document everything they need to know: which files to touch for each task, code, testing, docs they might need to check, how to test it. Give them the whole plan as bite-sized tasks. DRY. YAGNI. TDD. Frequent commits.
Assume they are a skilled developer, but know almost nothing about our toolset or problem domain. Assume they don't know good test design very well.
Announce at start: "I'm using the writing-plans skill to create the implementation plan."
Context: This should be run in a dedicated worktree (created by brainstorming skill).
Save plans to: docs/superpowers/plans/YYYY-MM-DD-<feature-name>.md
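The date-stamped slug path above can be derived mechanically; a minimal sketch (the `plan_path` helper and its slug rules are illustrative, not part of the skill):

```python
from datetime import date
import re

def plan_path(feature_name: str) -> str:
    """Build the plan path: docs/superpowers/plans/YYYY-MM-DD-<feature-name>.md."""
    # Lowercase the feature name and collapse non-alphanumeric runs into hyphens.
    slug = re.sub(r"[^a-z0-9]+", "-", feature_name.lower()).strip("-")
    return f"docs/superpowers/plans/{date.today().isoformat()}-{slug}.md"

print(plan_path("User Login Flow"))
# e.g. docs/superpowers/plans/2026-01-26-user-login-flow.md (date varies)
```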
If the spec covers multiple independent subsystems, it should have been broken into sub-project specs during brainstorming. If it wasn't, suggest breaking this into separate plans — one per subsystem. Each plan should produce working, testable software on its own.
Before defining tasks, map out which files will be created or modified and what each one is responsible for. This is where decomposition decisions get locked in.
This structure informs the task decomposition. Each task should produce self-contained changes that make sense independently.
Each step is one action (2-5 minutes):
Every plan MUST start with this header:
# [Feature Name] Implementation Plan
> **For agentic workers:** REQUIRED SUB-SKILL: Use superpowers:subagent-driven-development (recommended) or superpowers:executing-plans to implement this plan task-by-task. Steps use checkbox (`- [ ]`) syntax for tracking.
**Goal:** [One sentence describing what this builds]
**Architecture:** [2-3 sentences about approach]
**Tech Stack:** [Key technologies/libraries]
---
### Task N: [Component Name]
**Files:**
- Create: `exact/path/to/file.py`
- Modify: `exact/path/to/existing.py:123-145`
- Test: `tests/exact/path/to/test.py`
- [ ] **Step 1: Write the failing test**
```python
def test_specific_behavior():
result = function(input)
assert result == expected
```
- [ ] **Step 2: Run test to verify it fails**
Run: `pytest tests/path/test.py::test_name -v`
Expected: FAIL with "function not defined"
- [ ] **Step 3: Write minimal implementation**
```python
def function(input):
return expected
```
- [ ] **Step 4: Run test to verify it passes**
Run: `pytest tests/path/test.py::test_name -v`
Expected: PASS
- [ ] **Step 5: Commit**
```bash
git add tests/path/test.py src/path/file.py
git commit -m "feat: add specific feature"
```
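As a concrete illustration of steps 1-4, here is what the cycle might look like for a hypothetical `slugify` helper (the function, its behavior, and file locations are invented for this example):

```python
import re

# Step 1: the failing test (would live in tests/path/test_slugify.py).
def test_slugify_lowercases_and_hyphenates():
    assert slugify("Hello World") == "hello-world"

# Step 2: running pytest now fails with NameError: slugify is not defined.

# Step 3: the minimal implementation (would live in src/path/slugify.py).
def slugify(text: str) -> str:
    # Lowercase, then collapse non-alphanumeric runs into single hyphens.
    return re.sub(r"[^a-z0-9]+", "-", text.lower()).strip("-")

# Step 4: the test now passes.
test_slugify_lowercases_and_hyphenates()
```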
Every step must contain the actual content an engineer needs. Placeholder steps are plan failures; never write them.
After writing the complete plan, look at the spec with fresh eyes and check the plan against it. This is a checklist you run yourself — not a subagent dispatch.
1. Spec coverage: Skim each section/requirement in the spec. Can you point to a task that implements it? List any gaps.
2. Placeholder scan: Search your plan for red flags — any of the patterns from the "No Placeholders" section above. Fix them.
3. Type consistency: Do the types, method signatures, and property names you used in later tasks match what you defined in earlier tasks? A function called clearLayers() in Task 3 but clearFullLayers() in Task 7 is a bug.
If you find issues, fix them inline. No need to re-review — just fix and move on. If you find a spec requirement with no task, add the task.
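The placeholder scan can be as simple as grepping the plan for red-flag patterns; a minimal sketch, where the pattern list is illustrative rather than exhaustive:

```python
import re

# Hypothetical red-flag patterns; extend with whatever your plans tend to leak.
RED_FLAGS = [r"\bTBD\b", r"\bTODO\b", r"\.\.\.", r"(?i)implement later"]

def scan_plan(plan_text: str) -> list[tuple[int, str]]:
    """Return (line_number, line) pairs matching any red-flag pattern."""
    hits = []
    for lineno, line in enumerate(plan_text.splitlines(), start=1):
        if any(re.search(pattern, line) for pattern in RED_FLAGS):
            hits.append((lineno, line.strip()))
    return hits

print(scan_plan("Step 2: TODO write the test\nStep 3: run pytest"))
# → [(1, 'Step 2: TODO write the test')]
```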
After saving the plan, offer execution choice:
"Plan complete and saved to docs/superpowers/plans/<filename>.md. Two execution options:
1. Subagent-Driven (recommended) - I dispatch a fresh subagent per task, review between tasks, fast iteration
2. Inline Execution - Execute tasks in this session using executing-plans, batch execution with checkpoints
Which approach?"
If Subagent-Driven chosen:
If Inline Execution chosen:
Weekly Installs: 57
GitHub Stars: 589
First Seen: Jan 26, 2026
Security Audits: Gen Agent Trust Hub: Pass; Socket: Pass; Snyk: Pass
Installed on: opencode (53), codex (52), github-copilot (51), cursor (50), gemini-cli (50), antigravity (49)