writing-plans by obra/superpowers
npx skills add https://github.com/obra/superpowers --skill writing-plans
Write comprehensive implementation plans assuming the engineer has zero context for our codebase and questionable taste. Document everything they need to know: which files to touch for each task, code, testing, docs they might need to check, how to test it. Give them the whole plan as bite-sized tasks. DRY. YAGNI. TDD. Frequent commits.
Assume they are a skilled developer, but know almost nothing about our toolset or problem domain. Assume they don't know good test design very well.
Announce at start: "I'm using the writing-plans skill to create the implementation plan."
Context: This should be run in a dedicated worktree (created by brainstorming skill).
Save plans to: docs/superpowers/plans/YYYY-MM-DD-<feature-name>.md
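The naming convention above can be sketched as a small helper. This is illustrative only: `plan_path` and its `feature_name` argument are hypothetical names, shown just to make the `YYYY-MM-DD-<feature-name>.md` pattern concrete.

```python
from datetime import date

def plan_path(feature_name: str) -> str:
    # Hypothetical helper illustrating the convention above:
    # docs/superpowers/plans/YYYY-MM-DD-<feature-name>.md
    return f"docs/superpowers/plans/{date.today():%Y-%m-%d}-{feature_name}.md"
```

For example, `plan_path("user-auth")` yields a path like `docs/superpowers/plans/<today's date>-user-auth.md`.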
If the spec covers multiple independent subsystems, it should have been broken into sub-project specs during brainstorming. If it wasn't, suggest breaking this into separate plans — one per subsystem. Each plan should produce working, testable software on its own.
Before defining tasks, map out which files will be created or modified and what each one is responsible for. This is where decomposition decisions get locked in.
This structure informs the task decomposition. Each task should produce self-contained changes that make sense independently.
Each step is one action (2-5 minutes).
Every plan MUST start with this header:
# [Feature Name] Implementation Plan
> **For agentic workers:** REQUIRED SUB-SKILL: Use superpowers:subagent-driven-development (recommended) or superpowers:executing-plans to implement this plan task-by-task. Steps use checkbox (`- [ ]`) syntax for tracking.
**Goal:** [One sentence describing what this builds]
**Architecture:** [2-3 sentences about approach]
**Tech Stack:** [Key technologies/libraries]
---
### Task N: [Component Name]
**Files:**
- Create: `exact/path/to/file.py`
- Modify: `exact/path/to/existing.py:123-145`
- Test: `tests/exact/path/to/test.py`
- [ ] **Step 1: Write the failing test**
```python
def test_specific_behavior():
    result = function(input)
    assert result == expected
```
- [ ] **Step 2: Run test to verify it fails**
Run: `pytest tests/path/test.py::test_name -v`
Expected: FAIL with "function not defined"
- [ ] **Step 3: Write minimal implementation**
```python
def function(input):
    return expected
```
- [ ] **Step 4: Run test to verify it passes**
Run: `pytest tests/path/test.py::test_name -v`
Expected: PASS
- [ ] **Step 5: Commit**
```bash
git add tests/path/test.py src/path/file.py
git commit -m "feat: add specific feature"
```
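To make the five-step template concrete, here is one hedged worked instance. The `slugify` helper, its file, its behavior, and the test name are all invented for illustration; they are not part of the skill itself.

```python
# Illustrative only: a test/implementation pair for a hypothetical
# slugify() helper, following the failing-test-first cycle above.
import re

def slugify(title: str) -> str:
    # Step 3's minimal implementation: lowercase the title, collapse
    # runs of non-alphanumeric characters into single hyphens, and
    # trim hyphens from both ends.
    return re.sub(r"[^a-z0-9]+", "-", title.lower()).strip("-")

def test_slugify_replaces_spaces_and_punctuation():
    # Step 1's failing test: written before slugify() existed, it pins
    # down the exact behavior the implementation must satisfy.
    assert slugify("Hello, World!") == "hello-world"
```

Running the test first (step 2) fails with a `NameError` until the implementation lands; after step 3 it passes, and both files are committed together in step 5.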
After writing the complete plan:
Review loop guidance:
After saving the plan, offer execution choice:
"Plan complete and saved to docs/superpowers/plans/<filename>.md. Two execution options:
1. Subagent-Driven (recommended) - I dispatch a fresh subagent per task, review between tasks, fast iteration
2. Inline Execution - Execute tasks in this session using executing-plans, batch execution with checkpoints
Which approach?"
If Subagent-Driven chosen:
If Inline Execution chosen:
Weekly Installs: 37.1K
GitHub Stars: 107.7K
First Seen: Jan 19, 2026
Security Audits: Gen Agent Trust Hub (Pass), Socket (Pass), Snyk (Pass)
Installed on: opencode (32.3K), codex (31.1K), gemini-cli (31.1K), github-copilot (29.7K), cursor (28.1K), kimi-cli (27.6K)
97,600 weekly installs