npx skills add https://github.com/build000r/skills --skill describe
Define what "done" looks like before writing code. Produces a structured test case spec that makes implementation and review unambiguous.
Lifecycle position:
Infer bug fix, small feature, or refactor from the user's request and the code context whenever the answer is obvious. Do not stop to ask the user to classify the work when the phrasing already makes the scope clear.
Examples:
Ask only when multiple classifications are plausibly correct and the choice would materially change the test plan. If you must ask, make the question specific to the ambiguity rather than forcing the user to do meta-labeling.
Fallback question:
"Should I treat this as a bug fix, a small feature, or a refactor for test coverage?"
This determines test case shape:
Find affected code. Read it. Understand current behavior before asking more questions.
- Grep/Glob for the relevant code
- Read the files, note function signatures, existing tests, error handling
- Identify the boundary: where does input enter, where does output leave?
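The steps above can be sketched programmatically. This is a minimal illustration, not part of the skill itself: `find_affected` and the symbol name `apply_discount` are hypothetical, standing in for whatever Grep/Glob would locate in a real repository.

```python
# Sketch of the "find affected code" step: walk a source tree and
# record every (path, line number) where a symbol appears, so you
# can read those files before asking the user more questions.
from pathlib import Path


def find_affected(root: str, symbol: str) -> list[tuple[str, int]]:
    """Return (path, line_number) pairs where `symbol` appears in *.py files."""
    hits = []
    for path in Path(root).rglob("*.py"):
        for lineno, line in enumerate(path.read_text().splitlines(), start=1):
            if symbol in line:
                hits.append((str(path), lineno))
    return hits


# Example: find_affected("src", "apply_discount") might return
# [("src/billing.py", 42)], which becomes the "Affected" line in the spec.
```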
Now that you've read the code, ask targeted questions. Batch only independent questions. Apply ask-cascade rules:
Typical rounds:
R2 — Core behavior (1-2 questions, depends on inferred or confirmed scope):
R3 — Boundaries (batch independent questions, depends on R2):
R4+ — Only if R3 reveals complexity. Re-evaluate after each answer. A strategic answer may eliminate questions or spawn new ones.
Produce the test case spec using the output template below.
Rules for test cases:
Test case types:
| Type | When to use |
|---|---|
| happy | Core success path — the thing that should work |
| error | Invalid input, missing data, permission denied |
| regression | Existing behavior that must NOT break |
| edge | Boundary values, empty collections, concurrent access |
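As a hedged illustration of the four types, here is one test of each for a hypothetical `parse_quantity()` helper that turns a string like `"3x"` into the integer `3` (the helper and its behavior are invented for this example, not taken from the skill):

```python
# One test case per type for a hypothetical parser.
def parse_quantity(raw: str) -> int:
    if not raw.endswith("x") or not raw[:-1].isdigit():
        raise ValueError(f"bad quantity: {raw!r}")
    return int(raw[:-1])


def test_happy():        # happy: the core success path
    assert parse_quantity("3x") == 3


def test_error():        # error: invalid input is rejected
    try:
        parse_quantity("3")
    except ValueError:
        return
    raise AssertionError("expected ValueError")


def test_regression():   # regression: existing behavior that must not break
    assert parse_quantity("1x") == 1


def test_edge():         # edge: boundary value of zero
    assert parse_quantity("0x") == 0
```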
Minimum coverage by scope:
Present the spec. Ask one final question (not generic approval):
"I have N test cases covering [summary]. Is there a scenario I'm missing, or should we adjust any expected values?"
If the user adds cases or corrections, update the spec and re-present.
Use this structure. Adjust sections as needed but keep the Given/When/Then format strict.
# Describe: [Change Title]
## Context
- **Type:** bug-fix | feature | refactor
- **Affected:** path/to/file.py:L42, path/to/other.py:L15
- **Current behavior:** [what happens now — be specific]
- **Desired behavior:** [what should happen after the change]
- **Root cause:** [bug fixes only — why the current behavior is wrong]
## Test Cases
### TC-1: [Short descriptive name]
- **Given:** [preconditions — concrete values]
- **When:** [action — exact call, request, or user action]
- **Then:** [expected result — status code, return value, state change]
- **Type:** happy
### TC-2: [Short descriptive name]
- **Given:** [preconditions]
- **When:** [action]
- **Then:** [expected error/rejection]
- **Type:** error
### TC-3: [Short descriptive name]
- **Given:** [preconditions — existing working scenario]
- **When:** [same action that worked before the change]
- **Then:** [unchanged behavior — same result as before]
- **Type:** regression
## Coverage
- Happy paths: N
- Error cases: N
- Regression guards: N
- Edge cases: N
- Total: N
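A case written in the Given/When/Then shape above translates almost line for line into a test body. This sketch uses a hypothetical `Cart` class (not from the skill) to show the mapping for an error-type case like TC-2:

```python
# Hypothetical system under test for the Given/When/Then mapping.
class Cart:
    def __init__(self):
        self.items: dict[str, int] = {}

    def add(self, sku: str, qty: int) -> None:
        if qty <= 0:
            raise ValueError("qty must be positive")
        self.items[sku] = self.items.get(sku, 0) + qty


def test_tc2_rejects_non_positive_quantity():
    # Given: an empty cart
    cart = Cart()
    # When: adding an item with qty=0
    try:
        cart.add("SKU-1", 0)
    except ValueError:
        # Then: the call is rejected and the cart is unchanged
        assert cart.items == {}
        return
    raise AssertionError("expected ValueError")
```

Keeping the Given/When/Then comments in the test body preserves the traceability back to the spec during review.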
See references/examples.md for worked examples at different granularities (API endpoint, utility function, UI behavior).