npx skills add https://github.com/davidkiss/smart-ai-skills --skill task-breakdown
Write comprehensive task breakdowns assuming the expert who is going to implement the specs has zero context for our project and questionable taste. Document everything they need to know: which existing files to check, which files to touch for each task and what changes to make to them. Give them the whole plan as bite-sized tasks. DRY. YAGNI. TDD.
Assume they are a skilled worker, but know almost nothing about our toolset or problem domain. Assume they don't know how to verify they are doing the right thing.
Analyze available skills and propose creating new skills if needed. If you propose creating new skills, you MUST create them before creating the task breakdown.
Announce at start: "I'm using the task-breakdown skill to create a plan."
Presenting the tasks:
- Save tasks to: `docs/YYYY-MM-DD-<feature-name>-tasks.md`
- Each step is one action (2-5 minutes)
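The date-stamped file path above can be generated mechanically. A minimal sketch, assuming a hypothetical `feature_name` argument supplies the `<feature-name>` slug:

```python
from datetime import date

def task_file_path(feature_name: str) -> str:
    """Build the docs/YYYY-MM-DD-<feature-name>-tasks.md path."""
    return f"docs/{date.today().isoformat()}-{feature_name}-tasks.md"

# e.g. task_file_path("user-auth") -> "docs/<today>-user-auth-tasks.md"
```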
Every task breakdown MUST start with this header:
# [Task Name] Task Breakdown
**Goal:** [One sentence describing what this achieves]
**Approach:** [2-3 sentences about approach]
**Skills:** [List of skills to use]
**Tech Details:** [Key tools, services, technologies/libraries to use]
---
### Task N: [Component Name]
**Files:**
- Create: `exact/path/to/file.py`
- Modify: `exact/path/to/existing.py:123-145`
- Test: `tests/exact/path/to/test.py`
**Step 1: Write the failing test**
```python
def test_specific_behavior():
    result = function(input)
    assert result == expected
```
**Step 2: Run test to verify it fails**
Run: `pytest tests/path/test.py::test_name -v`
Expected: FAIL with "function not defined"
**Step 3: Write minimal implementation**
```python
def function(input):
    return expected
```
**Step 4: Clean up code changes**
Use skill(s), if available, to clean up the code changes.
**Step 5: Review code changes**
Use skill(s), if available, to review the code changes. Make sure the code follows the project's coding standards and aligns with the specs and the task breakdown.
**Step 6: Run test to verify it passes**
Run: `pytest tests/path/test.py::test_name -v`
Expected: PASS
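To make the cycle concrete, here is a worked example under an assumed task — a hypothetical `slugify` helper that is not part of any real breakdown. Step 1 writes the failing test; Step 3 is the minimal implementation that makes exactly that test pass:

```python
# Step 1: the failing test (would live in e.g. tests/test_slugify.py)
def test_slugify_lowercases_and_hyphenates():
    assert slugify("Hello World") == "hello-world"

# Step 3: the minimal implementation that makes only this test pass
def slugify(text: str) -> str:
    return text.lower().replace(" ", "-")
```

Running `pytest` before Step 3 fails with a `NameError` for `slugify` (the "function not defined" failure above); after Step 3 it passes, completing one red-green loop.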
## Remember
- Always use exact file paths
- For coding tasks, put complete code in the task breakdown (not "add validation")
- Give exact commands with expected output
- Reference relevant skills with @ syntax
- DRY, YAGNI, TDD
## Execution Handoff
After saving the task breakdown, offer task execution:
**"Task breakdown complete and saved to `docs/YYYY-MM-DD-<feature-name>-tasks.md`."**
**Subagent-based task execution (this session)** - I dispatch a fresh subagent per task and review between tasks for fast iteration
- **REQUIRED SUB-SKILL:** Use subagent-task-execution
- Stay in this session
- Fresh subagent per task + code review
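A rough sketch of how a driver could split the saved breakdown into one chunk per task for fresh-subagent dispatch. The sample markdown and the dispatch step are hypothetical, assuming only the `### Task N: [Component Name]` header format defined above:

```python
import re

SAMPLE_BREAKDOWN = """\
### Task 1: Slug helper
**Files:** ...

### Task 2: Wire helper into router
**Files:** ...
"""

def split_tasks(markdown: str) -> list[str]:
    """Split a task-breakdown file into one chunk per '### Task N:' section."""
    chunks = re.split(r"(?m)^### ", markdown)
    return ["### " + chunk.strip() for chunk in chunks if chunk.strip()]

for task in split_tasks(SAMPLE_BREAKDOWN):
    # dispatch_subagent(task)  # hypothetical: fresh subagent per task
    print(task.splitlines()[0])
# prints:
# ### Task 1: Slug helper
# ### Task 2: Wire helper into router
```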
Weekly Installs: 107
First Seen: Feb 15, 2026
Installed on: opencode (106), github-copilot (75), codex (74), kimi-cli (74), gemini-cli (74), amp (74)