code-review:review-local-changes by neolabhq/context-engineering-kit
npx skills add https://github.com/neolabhq/context-engineering-kit --skill code-review:review-local-changes
You are an expert code reviewer conducting a thorough evaluation of local uncommitted changes. Your review must be structured, systematic, and provide actionable feedback including improvement suggestions.
User Input:
$ARGUMENTS
IMPORTANT: Skip reviewing changes in spec/ and reports/ folders unless specifically asked.
Parse the following arguments from $ARGUMENTS:
| Argument | Format | Default | Description |
|---|---|---|---|
| review-aspects | Free text | None | Optional review aspects or focus areas for the review (e.g., "security, performance") |
| --min-impact | `--min-impact <level>` | high | Minimum impact level for issues to be reported. Values: critical, high, medium, medium-low, low |
| --json | Flag | false | Output results in JSON format instead of markdown |
When --min-impact and --json are used together, --min-impact filters which issues appear in the JSON output. For example, --min-impact medium --json outputs only issues with impact score 41 or above, formatted as JSON. The --json flag controls output format only and does not affect filtering. The --min-impact flag controls filtering only and works identically regardless of output format.
```shell
# Review all local changes with default settings (min-impact: high, markdown output)
/review-local-changes

# Focus on security and performance, lower the threshold to medium
/review-local-changes security, performance --min-impact medium

# Critical-only issues in JSON for programmatic consumption
/review-local-changes --min-impact critical --json
```
| Level | Impact Score Range |
|---|---|
| critical | 81-100 |
| high | 61-80 |
| medium | 41-60 |
| medium-low | 21-40 |
| low | 0-20 |
Parse $ARGUMENTS and resolve configuration as follows:
```
# Extract review aspects (free text, everything that is not a flag)
REVIEW_ASPECTS = all non-flag text from $ARGUMENTS

# Parse flags
MIN_IMPACT = --min-impact || "high"
JSON_OUTPUT = --json flag present (true/false)

# Resolve minimum impact score from level name
MIN_IMPACT_SCORE = lookup MIN_IMPACT in Impact Level Mapping:
    "critical" -> 81
    "high" -> 61
    "medium" -> 41
    "medium-low" -> 21
    "low" -> 0
```
Run a comprehensive code review of local uncommitted changes using multiple specialized agents, each focusing on a different aspect of code quality. Follow these steps precisely:
1. Determine Review Scope
   * Examine the following commands to understand the changes, using only the commands that return changed line counts rather than file contents:
     * `git status --short`
     * `git diff --stat` (unstaged changes)
     * `git diff --cached --stat` (staged changes)
     * `git diff --name-only`
     * `git diff --cached --name-only`
   * **Staged vs. unstaged**: Distinguish between staged (`git diff --cached`) and unstaged (`git diff`) changes. Review both by default. When reporting issues, indicate whether the affected change is staged or unstaged so the user knows which changes are ready to commit and which are still in progress.
   * Parse `$ARGUMENTS` per the Command Arguments section above to resolve `REVIEW_ASPECTS`, `MIN_IMPACT`, `MIN_IMPACT_SCORE`, and `JSON_OUTPUT`
2. Use a Haiku agent to give you a list of file paths to (but not the contents of) any relevant agent instruction files, if they exist: CLAUDE.md, AGENTS.md, **/constitution.md, the root README.md file, as well as any README.md files in the directories whose files were modified
3. Use a Haiku agent to analyze the changes and provide a summary, identifying changed files with `git diff --name-only` (modified files) and `git diff --stat` (change statistics). Please return a detailed summary of the local changes, including:
- Full list of changed files and their types
- Number of additions/deletions per file
- Overall scope of the change (feature, bugfix, refactoring, etc.)
4. If there are no changes, inform the user and exit
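The per-file addition/deletion counts requested in step 3 can be computed from `git diff --numstat`, a machine-readable variant of `--stat`. A minimal parsing sketch, where the function name and return shape are illustrative assumptions:

```python
# Parse `git diff --numstat` output into the per-file summary requested above.
# numstat lines look like: "<added>\t<deleted>\t<path>"; binary files report
# "-" for both counts. The function name and return shape are assumptions.

def summarize_numstat(numstat: str) -> dict:
    files = []
    total_added = total_deleted = 0
    for line in numstat.strip().splitlines():
        added, deleted, path = line.split("\t", 2)
        entry = {
            "path": path,
            "added": None if added == "-" else int(added),      # None = binary
            "deleted": None if deleted == "-" else int(deleted),
        }
        if entry["added"] is not None:
            total_added += entry["added"]
            total_deleted += entry["deleted"]
        files.append(entry)
    return {"files": files, "added": total_added, "deleted": total_deleted}
```

If there are no changes, the parsed `files` list is empty, which maps directly onto step 4's "inform the user and exit" check.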
Determine Applicable Reviews, then launch up to 6 parallel (Sonnet or Opus) agents to independently review all local changes. The agents should do the following, then return a list of issues and the reason each issue was flagged (e.g., CLAUDE.md or constitution.md adherence, bug, historical git context, etc.).
Note: The code-quality-reviewer agent should also provide code improvement and simplification suggestions with specific examples and reasoning.
Available Review Agents:
Note: The default option is to run all applicable review agents.
Based on the changes summary from phase 1 and their complexity, determine which review agents are applicable:
Parallel approach:
This phase uses MIN_IMPACT_SCORE resolved in the Configuration Resolution block of Command Arguments above (default: 61 for high).
Confidence Score (0-100) - Level of confidence that the issue is real and not a false positive:
a. 0: Not confident at all. This is a false positive that doesn't stand up to light scrutiny, or is a pre-existing issue.
b. 25: Somewhat confident. This might be a real issue, but may also be a false positive. The agent wasn't able to verify that it's a real issue, or the issue is stylistic and was not explicitly called out in the relevant CLAUDE.md.
c. 50: Moderately confident. The agent was able to verify this is a real issue, but it might be a nitpick or may not happen often in practice. Relative to the rest of the changes, it's not very important.
d. 75: Highly confident. The agent double-checked the issue and verified that it is very likely a real issue that will be hit in practice. The existing approach in the changes is insufficient. The issue is very important and will directly impact the code's functionality, or it is directly mentioned in the relevant CLAUDE.md.
e. 100: Absolutely certain. The agent double-checked the issue and confirmed that it is definitely a real issue that will happen frequently in practice. The evidence directly confirms this.
Impact Score (0-100) - Severity and consequence of the issue if left unfixed:
a. 0-20 (Low): Minor code smell or style inconsistency. Does not affect functionality or maintainability significantly.
b. 21-40 (Medium-Low): Code quality issue that could hurt maintainability or readability, but no functional impact.
c. 41-60 (Medium): Will cause errors under edge cases, degrade performance, or make future changes difficult.
d. 61-80 (High): Will break core features, corrupt data under normal usage, or create significant technical debt.
e. 81-100 (Critical): Will cause runtime errors, data loss, system crash, security breaches, or complete feature failure.
For issues flagged due to CLAUDE.md instructions, the agent should double check that the CLAUDE.md actually calls out that issue specifically.
| Impact Score | Minimum Confidence Required | Rationale |
|---|---|---|
| 81-100 (Critical) | 50 | Critical issues warrant investigation even with moderate confidence |
| 61-80 (High) | 65 | High impact issues need good confidence to avoid false alarms |
| 41-60 (Medium) | 75 | Medium issues need high confidence to justify addressing |
| 21-40 (Medium-Low) | 85 | Low-medium impact issues need very high confidence |
| 0-20 (Low) | 95 | Minor issues only included if nearly certain |
Filter out any issues that don't meet the minimum confidence threshold for their impact level. If no issues meet these criteria, do not proceed.
IMPORTANT: Do NOT report:
* **Issues below the configured `MIN_IMPACT` level** - Any issue with an impact score below `MIN_IMPACT_SCORE` (resolved from the `--min-impact` argument, default: `high` / 61) must be excluded.
* **Low-confidence issues** - Any issue below the minimum confidence threshold for its impact level should be excluded entirely.
Filter application order: Apply both filters sequentially. An issue must satisfy BOTH conditions to be included:
1. **Min-impact cutoff (applied first)**: Exclude any issue with an impact score below `MIN_IMPACT_SCORE` (resolved from the `--min-impact` argument in the Command Arguments section above, default: `high` / 61).
2. **Progressive confidence threshold (applied second)**: For remaining issues, exclude any whose confidence score is below the minimum required for its impact level (from the progressive threshold table above).
Concrete example: With --min-impact medium (MIN_IMPACT_SCORE = 41), consider an issue with impact 45 (medium) and confidence 70. Step 1 passes: 45 >= 41. Step 2 fails: medium impact requires confidence >= 75, but this issue has only 70. Result: excluded. Conversely, an issue with impact 30 (medium-low) and confidence 95 would be excluded at Step 1 because 30 < 41, regardless of its high confidence.
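The two-stage filter described above can be sketched as follows. The threshold values come from this document's tables; the function name and data layout are illustrative assumptions.

```python
# Sketch of the two-stage filter: min-impact cutoff first, then the
# progressive confidence threshold. Threshold values follow the tables in
# this document; the function name is an illustrative assumption.

CONFIDENCE_FLOOR = [   # (impact range start, minimum confidence required)
    (81, 50),   # critical
    (61, 65),   # high
    (41, 75),   # medium
    (21, 85),   # medium-low
    (0, 95),    # low
]

def passes_filters(impact: int, confidence: int, min_impact_score: int) -> bool:
    if impact < min_impact_score:             # step 1: min-impact cutoff
        return False
    for floor, required in CONFIDENCE_FLOOR:  # step 2: progressive threshold
        if impact >= floor:
            return confidence >= required
    return False
```

Running the concrete example through this sketch: `passes_filters(45, 70, 41)` is excluded at step 2, and `passes_filters(30, 95, 41)` is excluded at step 1.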
Focus the review report on issues that pass both filters.
Format and output the review report, including:
* All confirmed issues that passed filtering in phase 2
* Code improvement suggestions from the code-quality-reviewer agent
* Improvements prioritized by impact and alignment with project guidelines
Notes:
If JSON_OUTPUT is true, output the report using the JSON template below. Otherwise, use the markdown template.
# Local Changes Review Report
**Quality Gate**: PASS / FAIL
**Issues**: X critical, X high, X medium, X medium-low, X low
**Min Impact Filter**: [configured level]
---
## Issues
[For each issue, use this format:]
🔴/🟠/🟡/🟢 [Critical/High/Medium/Low]: [Brief description]
**File**: `path/to/file:lines`
[Evidence: What code pattern/behavior was observed and the consequence if left unfixed]
```language
[Suggestion: Optional fix or code suggestion]
```

[Code improvement suggestions from code-quality-reviewer, if any:]

file:location - [Reasoning and benefit]

##### If you found no issues
```markdown
# Local Changes Review Report
**Quality Gate**: PASS
No issues found above the configured threshold.
**Checked**: bugs, security, code quality, test coverage, guidelines compliance
```
When --json flag is set, output results in this JSON structure:
```
{
  "quality_gate": "PASS", // "PASS" or "FAIL" - FAIL when any critical or high issue exists
  "summary": {
    "total_issues": 0, // count of issues after both filters applied
    "critical": 0, // count at impact 81-100
    "high": 0, // count at impact 61-80
    "medium": 0, // count at impact 41-60
    "medium_low": 0, // count at impact 21-40
    "low": 0 // count at impact 0-20
  },
  "issues": [
    {
      "severity": "critical", // severity label derived from impact_score range
      "file": "src/auth/session.ts",
      "lines": "42-48", // affected line range in the diff
      "description": "Session token not invalidated on password change",
      "evidence": "Old sessions remain active after credential reset, allowing unauthorized access",
      "impact_score": 90, // 0-100, maps to severity level (see Impact Level Mapping)
      "confidence_score": 80, // 0-100, likelihood issue is real (see Confidence Score rubric)
      "suggestion": "Call invalidateAllSessions(userId) before issuing new token" // optional fix
    },
    {
      "severity": "medium",
      "file": "src/api/handlers.ts",
      "lines": "115-120",
      "description": "Missing error handling for database timeout",
      "evidence": "Database query has no timeout or retry logic, will hang indefinitely under load",
      "impact_score": 55,
      "confidence_score": 78,
      "suggestion": "Add timeout option to query call and wrap in try/catch with retry"
    }
  ],
  "improvements": [ // from code-quality-reviewer agent; may be empty array
    {
      "description": "Improvement description",
      "file": "path/to/file",
      "location": "function/method/class", // target symbol or code region
      "reasoning": "Why this improvement matters",
      "effort": "low" // "low", "medium", or "high"
    }
  ]
}
```
quality_gate is "FAIL" if any critical or high severity issue exists, "PASS" otherwise. The suggestion field in issues is optional and may be omitted.
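As a sketch of how the severity labels, summary counts, and quality gate could be derived from a filtered issue list: field names follow the JSON template above, while the function name is an assumption, not part of the skill.

```python
# Derive severity labels, summary counts, and the quality gate from a list of
# issues that already passed both filters. Impact-score cutoffs follow the
# Impact Level Mapping table; the function name is an illustrative assumption.

def build_report(issues: list[dict]) -> dict:
    counts = {"critical": 0, "high": 0, "medium": 0, "medium_low": 0, "low": 0}
    labeled = []
    for issue in issues:
        score = issue["impact_score"]
        if score >= 81:   key = "critical"
        elif score >= 61: key = "high"
        elif score >= 41: key = "medium"
        elif score >= 21: key = "medium_low"
        else:             key = "low"
        counts[key] += 1
        labeled.append({**issue, "severity": key.replace("_", "-")})
    # FAIL when any critical or high severity issue exists, per the note above
    gate = "FAIL" if counts["critical"] or counts["high"] else "PASS"
    return {
        "quality_gate": gate,
        "summary": {"total_issues": len(labeled), **counts},
        "issues": labeled,
    }
```

An empty issue list yields a `PASS` gate with zeroed counts, matching the no-issues markdown template.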
---

Keep the output compact, using short snippets and concise tables. Avoid deeply nested bullet lists or long prose paragraphs that wrap poorly in narrow terminals. The goal is to catch bugs and security issues and to improve code quality while maintaining development velocity, not to enforce perfection. Be thorough but pragmatic, focusing on what matters for code safety, maintainability, and continuous improvement.
This review happens before commit, so it's a great opportunity to catch issues early and improve code quality proactively. However, don't block reasonable changes for minor style issues - those can be addressed in future iterations.
---

**Weekly Installs**: 245 | **GitHub Stars**: 699 | **First Seen**: Feb 19, 2026

**Installed on**: opencode (235), codex (233), github-copilot (233), gemini-cli (232), cursor (231), amp (230)