npx skills add https://github.com/parcadei/continuous-claude-v3 --skill tdd
Strict TDD workflow: tests first, then implementation.
Write the test first. Watch it fail. Write minimal code to pass.
Core principle: If you didn't watch the test fail, you don't know if it tests the right thing.
Violating the letter of the rules is violating the spirit of the rules.
NO PRODUCTION CODE WITHOUT A FAILING TEST FIRST
Write code before the test? Delete it. Start over.
No exceptions:
Implement fresh from tests. Period.
Write one minimal test showing what should happen.
Good:
test('retries failed operations 3 times', async () => {
let attempts = 0;
const operation = () => {
attempts++;
if (attempts < 3) throw new Error('fail');
return 'success';
};
const result = await retryOperation(operation);
expect(result).toBe('success');
expect(attempts).toBe(3);
});
Clear name, tests real behavior, one thing.
Bad:
test('retry works', async () => {
const mock = jest.fn()
.mockRejectedValueOnce(new Error())
.mockResolvedValueOnce('success');
await retryOperation(mock);
expect(mock).toHaveBeenCalledTimes(3);
});
Vague name, tests mock not code.
Requirements:
MANDATORY. Never skip.
npm test path/to/test.test.ts
# or
pytest path/to/test_file.py
Confirm:
Test passes? You're testing existing behavior. Fix the test. Test errors out? Fix the error and re-run until it fails correctly: because the feature is missing, not because of a syntax mistake.
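A minimal sketch of "fails for the right reason", using a hypothetical `validateEmail` feature and a tiny stand-in for a test runner (both names are illustrative, not part of the skill):

```typescript
// Hypothetical feature: validateEmail does not exist yet. A stub that
// throws keeps the suite compiling while the test fails for the RIGHT
// reason (missing behavior), not a syntax or import error.
function validateEmail(input: string): boolean {
  throw new Error('not implemented');
}

// Tiny stand-in for a test runner: run the test body, report the outcome.
function runTest(name: string, body: () => void): string {
  try {
    body();
    return `${name}: PASS (suspicious in the RED phase!)`;
  } catch (e) {
    return `${name}: FAIL (${(e as Error).message})`;
  }
}

const result = runTest('rejects empty email', () => {
  if (validateEmail('') !== false) throw new Error('expected false');
});
console.log(result); // → rejects empty email: FAIL (not implemented)
```

The failure message names the missing feature, which is the confirmation the RED phase asks for.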
Write simplest code to pass the test.
Good:
async function retryOperation<T>(fn: () => Promise<T>): Promise<T> {
for (let i = 0; i < 3; i++) {
try {
return await fn();
} catch (e) {
if (i === 2) throw e;
}
}
throw new Error('unreachable');
}
Just enough to pass.
Bad:
async function retryOperation<T>(
fn: () => Promise<T>,
options?: {
maxRetries?: number;
backoff?: 'linear' | 'exponential';
onRetry?: (attempt: number) => void;
}
): Promise<T> {
// YAGNI - over-engineered
}
Don't add features, refactor other code, or "improve" beyond the test.
MANDATORY.
npm test path/to/test.test.ts
Confirm:
Test fails? Fix code, not test. Other tests fail? Fix now.
After green only:
Keep tests green. Don't add behavior.
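A sketch of a behavior-preserving refactor, shown on a synchronous variant of `retryOperation` for brevity (the constant name is an assumption): extract the inline retry count into a named constant. Nothing observable changes, so the test from the GREEN phase still passes.

```typescript
// Refactor: the retry count was an inline literal (3); name it instead.
// Behavior is unchanged, so existing tests stay green.
const MAX_ATTEMPTS = 3;

function retryOperationSync<T>(fn: () => T): T {
  let lastError: unknown;
  for (let i = 0; i < MAX_ATTEMPTS; i++) {
    try {
      return fn();
    } catch (e) {
      lastError = e;
    }
  }
  throw lastError;
}

// The original test scenario still passes against the refactored code:
let attempts = 0;
const result = retryOperationSync(() => {
  attempts++;
  if (attempts < 3) throw new Error('fail');
  return 'success';
});
console.log(result, attempts); // → success 3
```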
| Excuse | Reality |
|---|---|
| "Too simple to test" | Simple code breaks. Test takes 30 seconds. |
| "I'll test after" | Tests passing immediately prove nothing. |
| "Tests after achieve same goals" | Tests-after = "what does this do?" Tests-first = "what should this do?" |
| "Already manually tested" | Ad-hoc ≠ systematic. No record, can't re-run. |
| "Deleting X hours is wasteful" | Sunk cost fallacy. Keeping unverified code is technical debt. |
| "Keep as reference, write tests first" | You'll adapt it. That's testing after. Delete means delete. |
| "Need to explore first" | Fine. Throw away exploration, start with TDD. |
| "Test hard = design unclear" | Listen to test. Hard to test = hard to use. |
| "TDD will slow me down" | TDD faster than debugging. Pragmatic = test-first. |
| "Manual test faster" | Manual doesn't prove edge cases. You'll re-test every change. |
All of these mean: Delete code. Start over with TDD.
Before marking work complete:
Can't check all boxes? You skipped TDD. Start over.
| Problem | Solution |
|---|---|
| Don't know how to test | Write wished-for API. Write assertion first. Ask your human partner. |
| Test too complicated | Design too complicated. Simplify interface. |
| Must mock everything | Code too coupled. Use dependency injection. |
| Test setup huge | Extract helpers. Still complex? Simplify design. |
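A sketch of the dependency-injection fix for "must mock everything", using a hypothetical `sendWelcome` feature: instead of importing a concrete client, accept it as a parameter, so the test passes a tiny hand-written fake and needs no mocking library.

```typescript
// Depend on an interface, not a concrete email client.
interface Mailer {
  send(to: string, body: string): void;
}

function sendWelcome(mailer: Mailer, email: string): void {
  mailer.send(email, 'Welcome aboard!');
}

// In a test, a fake that records calls replaces jest.fn():
const sent: Array<{ to: string; body: string }> = [];
const fakeMailer: Mailer = {
  send: (to, body) => { sent.push({ to, body }); },
};

sendWelcome(fakeMailer, 'a@example.com');
console.log(sent.length, sent[0].to); // → 1 a@example.com
```

The test now asserts on real calls the code made, not on mock plumbing.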
┌────────────┐ ┌──────────┐ ┌──────────┐ ┌───────────┐
│ plan- │───▶│ arbiter │───▶│ kraken │───▶│ arbiter │
│ agent │ │ │ │ │ │ │
└────────────┘ └──────────┘ └──────────┘ └───────────┘
Design Write Implement Verify
approach failing minimal all tests
tests code pass
| Step | Agent | Role | Output |
|---|---|---|---|
| 1 | plan-agent | Design test cases and implementation approach | Test plan |
| 2 | arbiter | Write failing tests (RED phase) | Test files |
| 3 | kraken | Implement minimal code to pass (GREEN phase) | Implementation |
| 4 | arbiter | Run all tests, verify nothing broken | Test report |
NO PRODUCTION CODE WITHOUT A FAILING TEST FIRST
Each agent follows the TDD contract:
Task(
subagent_type="plan-agent",
prompt="""
Design TDD approach for: [FEATURE_NAME]
Define:
1. What behaviors need to be tested
2. Edge cases to cover
3. Expected test structure
DO NOT write any implementation code.
Output: Test plan document
"""
)
Task(
subagent_type="arbiter",
prompt="""
Write failing tests for: [FEATURE_NAME]
Test plan: [from phase 1]
Requirements:
- Write tests FIRST
- Run tests to confirm they FAIL
- Tests must fail because feature is missing (not syntax errors)
- Create clear test names describing expected behavior
DO NOT write any implementation code.
"""
)
Task(
subagent_type="kraken",
prompt="""
Implement MINIMAL code to pass tests: [FEATURE_NAME]
Tests location: [test file path]
Requirements:
- Write ONLY enough code to make tests pass
- No additional features beyond what tests require
- No "improvements" or "enhancements"
- Run tests after each change
Follow Red-Green-Refactor strictly.
"""
)
Task(
subagent_type="arbiter",
prompt="""
Validate TDD implementation: [FEATURE_NAME]
- Run full test suite
- Verify all new tests pass
- Verify no existing tests broke
- Check test coverage if available
"""
)
User: /tdd Add email validation to the signup form
Claude: Starting /tdd workflow for email validation...
Phase 1: Planning test cases...
[Spawns plan-agent]
Test plan:
- Valid email formats
- Invalid email formats
- Empty email rejection
- Edge cases (unicode, long emails)
Phase 2: Writing failing tests (RED)...
[Spawns arbiter]
✅ 8 tests written, all failing as expected
Phase 3: Implementing minimal code (GREEN)...
[Spawns kraken]
✅ All 8 tests now passing
Phase 4: Validating...
[Spawns arbiter]
✅ 247 tests passing (8 new), 0 failing
TDD workflow complete!
After GREEN, you can add a refactor phase:
Task(
subagent_type="kraken",
prompt="""
Refactor: [FEATURE_NAME]
- Clean up code while keeping tests green
- Remove duplication
- Improve naming
- Extract helpers if needed
DO NOT add new behavior. Keep all tests passing.
"""
)
Weekly Installs
200
Repository
GitHub Stars
3.6K
First Seen
Jan 22, 2026
Security Audits
Gen Agent Trust Hub: Fail
Socket: Pass
Snyk: Pass
Installed on
codex: 190
opencode: 190
gemini-cli: 188
cursor: 186
github-copilot: 185
amp: 182