typescript-e2e-testing by bmad-labs/skills
npx skills add https://github.com/bmad-labs/skills --skill typescript-e2e-testing
E2E testing validates complete workflows from the user's perspective, using real infrastructure via Docker.
For comprehensive step-by-step guidance, use the appropriate workflow:
| Workflow | When to Use |
|---|---|
| Setup E2E Test | Setting up E2E infrastructure for a new or existing project |
| Writing E2E Test | Creating new E2E test cases with proper GWT pattern |
| Review E2E Test | Reviewing existing tests for quality and correctness |
| Running E2E Test | Executing tests with proper verification |
| Debugging E2E Test | Systematically fixing failing tests |
| Optimize E2E Test | Improving test suite performance |
IMPORTANT: Before starting any E2E testing task, identify the user's intent and load the appropriate workflow.
| User Says / Wants | Workflow to Load | File |
|---|---|---|
| "Set up E2E tests", "configure docker-compose", "add E2E to project", "create test helpers" | Setup | workflows/setup/workflow.md |
| "Write E2E tests", "add integration tests", "test this endpoint", "create e2e-spec" | Writing | workflows/writing/workflow.md |
| "Review E2E tests", "check test quality", "audit tests", "is this test correct?" | Reviewing | workflows/review/workflow.md |
| "Run E2E tests", "execute tests", "start docker and test", "check if tests pass" | Running | workflows/running/workflow.md |
| "Fix E2E tests", "debug tests", "tests are failing", "flaky test", "connection error" | Debugging | workflows/debugging/workflow.md |
| "Speed up E2E tests", "optimize tests", "tests are slow", "reduce test time" | Optimizing | workflows/optimize/workflow.md |
references/ files to read

Important: Each workflow includes instructions to load relevant knowledge from the references/ folder before and after completing tasks.
references/
├── common/ # Shared testing fundamentals
│ ├── knowledge.md # Core E2E concepts and test pyramid
│ ├── rules.md # Mandatory testing rules (GWT, timeouts, logging)
│ ├── best-practices.md # Test design and cleanup patterns
│ ├── test-case-creation-guide.md # GWT templates for all scenarios
│ ├── nestjs-setup.md # NestJS app bootstrap and Jest config
│ ├── debugging.md # VS Code config and log analysis
│ └── examples.md # Comprehensive examples by category
│
├── kafka/ # Kafka-specific testing
│ ├── knowledge.md # Why common approaches fail, architecture
│ ├── rules.md # Kafka-specific testing rules
│ ├── test-helper.md # KafkaTestHelper implementation
│ ├── docker-setup.md # Redpanda/Kafka Docker configs
│ ├── performance.md # Optimization techniques
│ ├── isolation.md # Pre-subscription pattern details
│ └── examples.md # Kafka test examples
│
├── postgres/ # PostgreSQL-specific testing
│ ├── knowledge.md # PostgreSQL testing concepts
│ ├── rules.md # Cleanup, transaction, assertion rules
│ ├── test-helper.md # PostgresTestHelper implementation
│ └── examples.md # CRUD, transaction, constraint examples
│
├── mongodb/ # MongoDB-specific testing
│ ├── knowledge.md # MongoDB testing concepts
│ ├── rules.md # Document cleanup and assertion rules
│ ├── test-helper.md # MongoDbTestHelper implementation
│ ├── docker-setup.md # Docker and Memory Server setup
│ └── examples.md # Document and aggregation examples
│
├── redis/ # Redis-specific testing
│ ├── knowledge.md # Redis testing concepts
│ ├── rules.md # TTL and pub/sub rules
│ ├── test-helper.md # RedisTestHelper implementation
│ ├── docker-setup.md # Docker configuration
│ └── examples.md # Cache, session, rate limit examples
│
└── api/ # API testing (REST, GraphQL, gRPC)
├── knowledge.md # API testing concepts
├── rules.md # Request/response assertion rules
├── test-helper.md # Auth and Supertest helpers
├── examples.md # REST, GraphQL, validation examples
└── mocking.md # MSW and Nock external API mocking
Tip: For detailed step-by-step guidance, use the Workflows section above.

Workflow: Setup E2E Test
- references/common/knowledge.md - Understand E2E fundamentals
- references/common/nestjs-setup.md - Project setup
- docker-setup.md files as needed

Workflow: Writing E2E Test
- references/common/rules.md - GWT pattern, timeouts
- references/common/test-case-creation-guide.md - Templates
- references/kafka/knowledge.md → test-helper.md → isolation.md
- references/postgres/rules.md → test-helper.md
- references/mongodb/rules.md → test-helper.md
- references/redis/rules.md → test-helper.md
- references/api/rules.md → test-helper.md

Workflow: Review E2E Test
- references/common/rules.md - Check against mandatory patterns
- references/common/best-practices.md - Quality standards
- rules.md files

Workflow: Running E2E Test
- npm run test:e2e > /tmp/e2e-${E2E_SESSION}-output.log 2>&1 (run tests sequentially)

Workflow: Debugging E2E Test
- references/common/debugging.md
- /tmp/e2e-${E2E_SESSION}-failures.md tracking file

Workflow: Optimize E2E Test
- references/common/best-practices.md - Performance patterns
- references/kafka/performance.md for Kafka tests
- references/common/examples.md for general patterns
- examples.md for detailed scenarios

ALWAYS redirect E2E test output to temp files, NOT console. E2E output is verbose and bloats agent context.
IMPORTANT: Redirect output to temp files only (NO console output). Use a unique session ID to prevent conflicts.
# Generate unique session ID at start of debugging session
export E2E_SESSION=$(date +%s)-$$
# Standard pattern - redirect to file only (no console output)
npm run test:e2e > /tmp/e2e-${E2E_SESSION}-output.log 2>&1
# Read summary only (last 50 lines)
tail -50 /tmp/e2e-${E2E_SESSION}-output.log
# Get failure details
grep -B 2 -A 15 "FAIL\|✕" /tmp/e2e-${E2E_SESSION}-output.log
# Cleanup when done
rm -f /tmp/e2e-${E2E_SESSION}-*.log /tmp/e2e-${E2E_SESSION}-*.md
Temp files (with ${E2E_SESSION} unique per agent):
- /tmp/e2e-${E2E_SESSION}-output.log - Full test output
- /tmp/e2e-${E2E_SESSION}-failures.log - Filtered failure output
- /tmp/e2e-${E2E_SESSION}-failures.md - Tracking file for one-by-one fixing
- /tmp/e2e-${E2E_SESSION}-debug.log - Debug runs
- /tmp/e2e-${E2E_SESSION}-verify.log - Verification runs

Test against actual services via Docker. Never mock databases or message brokers in E2E tests.
ALL E2E tests MUST follow Given-When-Then:
it('should create user and return 201', async () => {
// GIVEN: Valid user data
const userData = { email: 'test@example.com', name: 'Test' };
// WHEN: Creating user
const response = await request(httpServer)
.post('/users')
.send(userData)
.expect(201);
// THEN: User created with correct data
expect(response.body.data.email).toBe('test@example.com');
});
Each test MUST be independent:
- Clean up database state in beforeEach

Assert exact values, not just existence:
// WRONG
expect(response.body.data).toBeDefined();
// CORRECT
expect(response.body).toMatchObject({
code: 'SUCCESS',
data: { email: 'test@example.com', name: 'Test' }
});
project-root/
├── src/
├── test/
│ ├── e2e/
│ │ ├── feature.e2e-spec.ts
│ │ ├── setup.ts
│ │ └── helpers/
│ │ ├── test-app.helper.ts
│ │ ├── postgres.helper.ts
│ │ ├── mongodb.helper.ts
│ │ ├── redis.helper.ts
│ │ └── kafka.helper.ts
│ └── jest-e2e.config.ts
├── docker-compose.e2e.yml
├── .env.e2e
└── package.json
// test/jest-e2e.config.ts
import type { Config } from 'jest';

const config: Config = {
  preset: 'ts-jest',
  testEnvironment: 'node',
  testMatch: ['**/*.e2e-spec.ts'],
  testTimeout: 25000,
  maxWorkers: 1, // CRITICAL: sequential execution
  clearMocks: true,
  forceExit: true,
  detectOpenHandles: true,
};

export default config;
| Technology | Wait Time | Strategy |
|---|---|---|
| Kafka | 10-20s max (polling) | Smart polling with 50ms intervals |
| PostgreSQL | <1s | Direct queries |
| MongoDB | <1s | Direct queries |
| Redis | <100ms | In-memory operations |
| External API | 1-5s | Network latency |
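The "smart polling" strategy in the table can be sketched as a small generic helper. This is a hypothetical `pollUntil` utility for illustration, not part of the skill's own test helpers:

```typescript
// Hypothetical generic polling helper: re-checks a condition every 50ms
// until it yields a value or the timeout elapses, so tests finish as soon
// as the expected state appears instead of sleeping for the worst case.
async function pollUntil<T>(
  check: () => Promise<T | undefined>,
  timeoutMs: number,
  intervalMs = 50,
): Promise<T> {
  const deadline = Date.now() + timeoutMs;
  while (Date.now() < deadline) {
    const result = await check();
    if (result !== undefined) return result; // condition met: stop early
    await new Promise((r) => setTimeout(r, intervalMs));
  }
  throw new Error(`Condition not met within ${timeoutMs}ms`);
}

// Example: resolves as soon as the buffer has a message (~120ms here),
// rather than waiting the full 5s budget.
async function demo(): Promise<number> {
  const buffer: string[] = [];
  setTimeout(() => buffer.push('msg-1'), 120); // message "arrives" later
  return pollUntil(
    async () => (buffer.length >= 1 ? buffer.length : undefined),
    5000,
  );
}
```

The same shape works for any of the slow technologies above: only the `check` callback (a Kafka buffer read, a SQL count, an HTTP probe) changes.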
CRITICAL: Fix ONE test at a time. NEVER run full suite repeatedly while fixing.
When E2E tests fail:
1. Initialize session (once at start):
   export E2E_SESSION=$(date +%s)-$$
2. Create tracking file: /tmp/e2e-${E2E_SESSION}-failures.md with all failing tests
3. Select ONE failing test - work on only this test
4. Run ONLY that test (never the full suite):
   npm run test:e2e -- -t "test name" > /tmp/e2e-${E2E_SESSION}-debug.log 2>&1
   tail -50 /tmp/e2e-${E2E_SESSION}-debug.log
5. Fix the issue - analyze the error, make a targeted fix
6. Verify the fix - run the same test 3-5 times:
   for i in {1..5}; do npm run test:e2e -- -t "test name" > /tmp/e2e-${E2E_SESSION}-run$i.log 2>&1 && echo "Run $i: PASS" || echo "Run $i: FAIL"; done
7. Mark as FIXED in the tracking file
8. Move to the next failing test - repeat steps 3-7
9. Run the full suite ONLY ONCE after ALL individual tests pass
10. Cleanup: rm -f /tmp/e2e-${E2E_SESSION}-*.log /tmp/e2e-${E2E_SESSION}-*.md

WHY: Running the full suite wastes time and context. Each failing test pollutes the output, making debugging harder.
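The workflow does not prescribe a layout for the failures tracking file; a simple checklist is enough. A hypothetical example (test names and session ID are illustrative):

```markdown
<!-- /tmp/e2e-${E2E_SESSION}-failures.md - hypothetical layout -->
# E2E failures - session 1738000000-4242
- [x] user.e2e-spec.ts > should create user and return 201  (FIXED: missing beforeEach cleanup)
- [ ] order.e2e-spec.ts > should publish order-created event
- [ ] order.e2e-spec.ts > should reject invalid payload
```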
beforeEach(async () => {
await new Promise(r => setTimeout(r, 500)); // Wait for in-flight
await repository.clear(); // PostgreSQL
// OR
await model.deleteMany({}); // MongoDB
});
// Use pre-subscription + buffer clearing (NOT fromBeginning: true)
const kafkaHelper = new KafkaTestHelper();
await kafkaHelper.subscribeToTopic(outputTopic, false);
// In beforeEach: kafkaHelper.clearMessages(outputTopic);
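A minimal sketch of the buffering side of this pattern (an assumed design; the real KafkaTestHelper in references/kafka/test-helper.md may differ): the consumer subscribes once before the tests, every received message lands in a per-topic buffer, and beforeEach clears the buffer instead of tearing down and re-creating the subscription.

```typescript
// Sketch of per-topic message buffering behind clearMessages/waitForMessages.
// Names and shapes here are assumptions for illustration only.
type BufferedMessage = { key: string; value: unknown };

class TopicBuffer {
  private buffers = new Map<string, BufferedMessage[]>();

  // Called from the long-lived consumer's message handler.
  record(topic: string, msg: BufferedMessage): void {
    const buf = this.buffers.get(topic) ?? [];
    buf.push(msg);
    this.buffers.set(topic, buf);
  }

  // Called in beforeEach: drop leftovers from earlier tests without
  // re-subscribing (avoids slow consumer-group rebalances).
  clearMessages(topic: string): void {
    this.buffers.set(topic, []);
  }

  // Poll the buffer until `count` messages arrive or the timeout elapses.
  async waitForMessages(
    topic: string,
    count: number,
    timeoutMs: number,
  ): Promise<BufferedMessage[]> {
    const deadline = Date.now() + timeoutMs;
    while (Date.now() < deadline) {
      const buf = this.buffers.get(topic) ?? [];
      if (buf.length >= count) return buf.slice(0, count);
      await new Promise((r) => setTimeout(r, 50)); // 50ms polling interval
    }
    throw new Error(`Expected ${count} message(s) on ${topic} within ${timeoutMs}ms`);
  }
}
```

Because the subscription outlives individual tests, clearing the buffer is all the isolation a test needs, which is why the skill prefers this over `fromBeginning: true`.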
beforeEach(async () => {
await redis.flushdb();
});
import { http, HttpResponse } from 'msw';

mockServer.use(
  http.post('https://api.external.com/endpoint', () => {
    return HttpResponse.json({ status: 'success' });
  })
);
// Use smart polling instead of fixed waits
await kafkaHelper.publishEvent(inputTopic, event, event.id);
const messages = await kafkaHelper.waitForMessages(outputTopic, 1, 20000);
expect(messages[0].value).toMatchObject({ id: event.id });
All commands redirect output to temp files only (no console output).
# Initialize session (once at start)
export E2E_SESSION=$(date +%s)-$$
# Run specific test (no console output; `;` so the summary prints even if tests fail)
npm run test:e2e -- -t "should create user" > /tmp/e2e-${E2E_SESSION}-output.log 2>&1; tail -50 /tmp/e2e-${E2E_SESSION}-output.log
# Run specific file
npm run test:e2e -- test/e2e/user.e2e-spec.ts > /tmp/e2e-${E2E_SESSION}-output.log 2>&1; tail -50 /tmp/e2e-${E2E_SESSION}-output.log
# Run full suite
npm run test:e2e > /tmp/e2e-${E2E_SESSION}-output.log 2>&1; tail -50 /tmp/e2e-${E2E_SESSION}-output.log
# Get failure details from last run
grep -B 2 -A 15 "FAIL\|✕" /tmp/e2e-${E2E_SESSION}-output.log
# Debug with breakpoints (requires console for interactive debugging)
node --inspect-brk node_modules/.bin/jest --config test/jest-e2e.config.ts --runInBand
# View application logs (limited)
tail -100 logs/e2e-test.log
grep -i error logs/e2e-test.log | tail -50
# Cleanup session files
rm -f /tmp/e2e-${E2E_SESSION}-*.log /tmp/e2e-${E2E_SESSION}-*.md
Weekly Installs: 1.5K
Repository: github.com/bmad-labs/skills
GitHub Stars: 3
First Seen: Jan 26, 2026
Security Audits: Gen Agent Trust Hub: Pass · Socket: Pass · Snyk: Warn
Installed on: opencode 1.2K · codex 1.2K · gemini-cli 1.2K · github-copilot 1.2K · kimi-cli 1.1K · amp 1.1K