Install the skill with:

```shell
npx skills add https://github.com/rysweet/amplihack --skill test-gap-analyzer
```
This skill automatically analyzes codebases to identify untested functions, low coverage areas, and missing edge case tests. It generates actionable test suggestions organized by priority and risk impact, following amplihack's testing pyramid (60% unit, 30% integration, 10% E2E).
Tests should follow the pyramid distribution: 60% unit, 30% integration, and 10% end-to-end. This skill helps balance tests across these layers while prioritizing coverage.
The skill identifies gaps in three areas: untested functions, low-coverage code (below the 85% threshold), and missing edge-case tests. For each function or method, the analyzer checks existing coverage, classifies any gap by priority and risk impact, generates suggested tests with an effort estimate, and organizes the suggestions into critical, medium, and low priority groups.
As input, the skill reads .coverage or coverage.json files (Python) or falls back to ast analysis when no coverage data exists. It also provides ready-to-use test templates for unit, integration, and E2E tests.
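The coverage-based side of the analysis can be sketched as a small helper over coverage.py's JSON report. The helper below is illustrative, not the skill's actual implementation; the field names (`files`, `summary.percent_covered`) follow the format `coverage json` emits:

```python
COVERAGE_THRESHOLD = 85.0  # matches the skill's 85% target


def files_below_threshold(report: dict, threshold: float = COVERAGE_THRESHOLD):
    """Return (path, percent_covered) pairs for files under the threshold.

    `report` is a parsed coverage.json produced by `coverage json`.
    """
    gaps = []
    for path, data in report.get("files", {}).items():
        pct = data["summary"]["percent_covered"]
        if pct < threshold:
            gaps.append((path, pct))
    # Worst coverage first, so the most critical gaps surface at the top
    return sorted(gaps, key=lambda item: item[1])
```

Feeding the sorted list into the gap report is then a matter of attaching risk labels and effort estimates per file.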
User: Analyze test coverage gaps in my src/ directory
Claude:
1. Scans src/ for all Python files
2. Reads .coverage or uses ast analysis
3. Identifies untested functions
4. Generates gap report with suggestions
Output:
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Test Gap Analysis Report
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Summary:
- Total functions: 145
- Untested: 23 (16%)
- Low coverage (< 85%): 34 (23%)
CRITICAL GAPS (High Risk):
1. payment_processor.py::process_payment()
- Untested | Handles money | 15 min effort
- Suggested tests:
- Valid payment processing
- Insufficient funds error
- Payment timeout
- Currency conversion
MEDIUM GAPS:
2. user_service.py::validate_email()
- 40% coverage | Missing edge cases
- Suggested tests:
- Unicode characters
- Long email addresses
- Special characters
LOW GAPS:
3. utils.py::format_date()
- 60% coverage
- Suggested tests:
- Timezone handling
- Daylight saving transitions
User: Generate test templates for my untested auth module
Claude:
Creates templates organized by testing pyramid:
Unit Tests (60%):
- test_token_validation_valid()
- test_token_validation_expired()
- test_token_validation_invalid_signature()
Integration Tests (30%):
- test_auth_flow_with_database()
- test_multi_user_concurrent_auth()
E2E Tests (10%):
- test_user_login_to_protected_resource()
- test_session_persistence_across_requests()
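The pyramid balancing above can be sketched as a comparison of per-layer test counts against the 60/30/10 targets. The layer names are an assumed convention (e.g. tests collected from tests/unit, tests/integration, tests/e2e):

```python
TARGETS = {"unit": 60.0, "integration": 30.0, "e2e": 10.0}


def pyramid_drift(counts: dict[str, int]) -> dict[str, float]:
    """Percentage-point drift of each layer from its pyramid target.

    Positive drift means the layer is over-represented, negative
    means it is under-represented.
    """
    total = sum(counts.values())
    if total == 0:
        return {layer: -target for layer, target in TARGETS.items()}
    return {
        layer: round(100.0 * counts.get(layer, 0) / total - target, 1)
        for layer, target in TARGETS.items()
    }
```

A suite with 80 unit, 15 integration, and 5 E2E tests, for instance, is unit-heavy and short on integration coverage.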
User: Help me improve test coverage from 65% to 85%
Claude:
1. Analyzes current coverage
2. Identifies gaps blocking 85% threshold
3. Prioritizes by impact
4. Estimates effort
Output:
To reach 85% coverage:
- 12 quick wins (< 2 hours each)
- 3 medium tasks (2-4 hours each)
- 2 complex tasks (4+ hours each)
Recommended order:
1. Add error case tests (5 tests, 3 hours) -> +8%
2. Cover auth edge cases (8 tests, 4 hours) -> +6%
3. Add integration tests (12 tests, 6 hours) -> +7%
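The recommended ordering above can be sketched as a greedy pick by coverage gain per hour of effort; the task names and numbers below simply mirror the example and are illustrative:

```python
def plan_to_target(current: float, target: float,
                   tasks: list[tuple[str, float, float]]):
    """Order tasks by coverage gain per hour, stopping once target is reached.

    Each task is (name, coverage_gain_pct, effort_hours).
    Returns the chosen tasks and the projected coverage.
    """
    ordered = sorted(tasks, key=lambda t: t[1] / t[2], reverse=True)
    chosen, projected = [], current
    for task in ordered:
        if projected >= target:
            break
        chosen.append(task)
        projected += task[1]
    return chosen, projected
```

Note that coverage gains do not combine perfectly additively in practice, so the projection is an estimate to be re-measured after each phase.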
# Test Gap Analysis Report
## Summary
- Total functions: N
- Untested functions: N (X%)
- Functions < 85%: N (X%)
- Average coverage: X%
## Critical Gaps (Must Test)
1. Function name | Type | Priority | Effort
Suggested tests: [list]
## Medium Priority Gaps
[Similar structure]
## Low Priority Gaps
[Similar structure]
## Testing Pyramid Distribution
Current:
- Unit: X% | Target: 60%
- Integration: X% | Target: 30%
- E2E: X% | Target: 10%
## Test Templates
[Ready-to-use test code]
## Effort Estimate
- Quick wins: N hours
- Medium tasks: N hours
- Complex work: N hours
- Total: N hours
```python
import pytest


def test_function_name_happy_path():
    """Test function with valid inputs."""
    # Arrange
    input_data = {...}
    expected = {...}
    # Act
    result = function_name(input_data)
    # Assert
    assert result == expected


def test_function_name_invalid_input():
    """Test function raises ValueError on invalid input."""
    with pytest.raises(ValueError, match="Expected error message"):
        function_name(invalid_input)


def test_function_name_edge_case():
    """Test function handles edge case correctly."""
    # Test boundary conditions
    result = function_name(boundary_value)
    assert result is not None


def test_user_service_with_database(test_db):
    """Test user service with a real database (via the test_db fixture)."""
    user_service = UserService(test_db)
    user = user_service.create_user("test@example.com")
    assert user.id is not None
```
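The edge-case suggestions for validate_email (Unicode, long addresses, special characters) translate naturally into a single parametrized test. The toy validate_email below is only a stand-in so the example is self-contained, not the real user_service implementation:

```python
import pytest


def validate_email(address: str) -> bool:
    """Toy validator standing in for user_service.validate_email."""
    local, _, domain = address.partition("@")
    return bool(local) and "." in domain and len(address) <= 254


@pytest.mark.parametrize(
    "address, valid",
    [
        ("user@example.com", True),         # happy path
        ("üser@exämple.com", True),         # Unicode characters
        ("a" * 250 + "@b.co", False),       # overlong address (> 254 chars)
        ("no-at-sign.example.com", False),  # missing @
    ],
)
def test_validate_email_edge_cases(address, valid):
    assert validate_email(address) == valid
```

Parametrizing keeps each edge case visible as its own test ID in the report, which maps cleanly onto the gap report's "suggested tests" lists.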
User: Quick test coverage review of api/
Claude:
- Scans directory
- Identifies top 5 gaps
- Provides quick recommendations
- Total time: < 2 minutes
User: Plan test coverage improvement from 60% to 85%
Claude:
- Analyzes gaps
- Creates phased improvement plan
- Prioritizes by risk
- Provides effort estimates
- Generates all test templates
User: Generate complete test plan for new auth module
Claude:
- Analyzes module structure
- Maps public functions
- Creates test suggestions
- Balances testing pyramid
- Provides all templates
After analysis, verify that the suggested tests follow these principles:
- Test behavior, not how the code works; implementation details can change.
- Be specific: "test empty string", not "test edge cases".
- Don't only test functions in isolation; test how they work together.
- Every error condition should have at least one test.
- Mock external systems, but test the integration points.
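The last principle (mock external systems, but test the integration points) can be sketched with unittest.mock. The gateway and process_payment names echo the earlier payment example and are hypothetical:

```python
from unittest.mock import Mock


class InsufficientFunds(Exception):
    pass


def process_payment(gateway, amount: float) -> str:
    """Charge via an external gateway; translate gateway failures."""
    response = gateway.charge(amount)
    if response["status"] == "declined":
        raise InsufficientFunds(response["reason"])
    return response["transaction_id"]


def test_process_payment_insufficient_funds():
    gateway = Mock()
    gateway.charge.return_value = {"status": "declined",
                                   "reason": "insufficient funds"}
    try:
        process_payment(gateway, 100.0)
        raise AssertionError("expected InsufficientFunds")
    except InsufficientFunds:
        pass
    # The integration point is still exercised: charge() was called once
    gateway.charge.assert_called_once_with(100.0)
```

The external system is mocked, yet the test still pins down the contract at the boundary: what is sent to the gateway and how its responses are interpreted.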
A good test gap analysis starts from a real coverage report.

Python:

```shell
# Generate coverage report
coverage run -m pytest
coverage json  # Creates coverage.json
```

Then analyze the gaps with this skill:

Claude: Analyze test gaps in my project using coverage.json

TypeScript:

```shell
# Generate coverage report
jest --coverage
```

Claude: Analyze test gaps in my TypeScript project
Project Stats:
- 156 functions
- 52% test coverage
- Unknown untested functions
- Scattered test files
Gap Report:
- 24 untested functions (15%)
- 34 functions < 85% (22%)
- 12 critical gaps (high impact)
- 45 medium gaps (medium impact)
- 40 low gaps (low priority)
Recommendations:
1. Focus on critical gaps (payment, auth, data)
2. Add error case tests (20 tests, 8 hours)
3. Cover edge cases (15 tests, 6 hours)
4. Integration tests (12 tests, 10 hours)
Results after implementing the recommendations:
- 89% test coverage
- All critical functions tested
- Balanced testing pyramid
- Improved confidence in refactoring
After gap analysis, effective follow-through results in higher coverage, tested critical functions, a balanced testing pyramid, and greater confidence when refactoring.
Weekly Installs: 93
Repository: https://github.com/rysweet/amplihack
GitHub Stars: 45
First Seen: Jan 23, 2026
Security Audits: Gen Agent Trust Hub: Pass | Socket: Pass | Snyk: Pass
Installed on: opencode (82), claude-code (78), codex (76), gemini-cli (76), cursor (75), antigravity (73)