code-review-and-quality by addyosmani/agent-skills
npx skills add https://github.com/addyosmani/agent-skills --skill code-review-and-quality
Multi-dimensional code review with quality gates. Every change gets reviewed before merge — no exceptions. Review covers five axes: correctness, readability, architecture, security, and performance. The standard is: "Would a staff engineer approve this diff and the verification story?"
Every review evaluates code across these dimensions:
- **Correctness:** Does the code do what it claims to do?
- **Readability:** Can another engineer (or agent) understand this code without the author explaining it? Are names descriptive (not `temp`, `data`, `result` without context)?
- **Architecture:** Does the change leave behind dead code paths, renamed-but-kept identifiers (`_unused`), backwards-compat shims, or `// removed` comments? Does the change fit the system's design?
- **Security:** Does the change introduce vulnerabilities?
- **Performance:** Does the change introduce performance problems?
Before looking at code, understand the intent:
- What is this change trying to accomplish?
- What spec or task does it implement?
- What is the expected behavior change?
Tests reveal intent and coverage:
- Do tests exist for the change?
- Do they test behavior (not implementation details)?
- Are edge cases covered?
- Do tests have descriptive names?
- Would the tests catch a regression if the code changed?
Walk through the code with the five axes in mind:
For each file changed:
1. Correctness: Does this code do what the test says it should?
2. Readability: Can I understand this without help?
3. Architecture: Does this fit the system?
4. Security: Any vulnerabilities?
5. Performance: Any bottlenecks?
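The per-file walk above can be scripted as a simple reminder loop. A minimal sketch, with a hardcoded file list standing in for real changed files (in a git repository you might instead use `git diff --name-only`, an assumption about your setup):

```shell
#!/bin/sh
# Walk a list of changed files and prompt for each of the five axes.
# The file names here are hypothetical placeholders for the example.
axes="Correctness Readability Architecture Security Performance"
reviewed=0
for file in src/utils/date.ts src/components/TaskCard.tsx; do
  echo "reviewing $file against: $axes"
  reviewed=$((reviewed + 1))
done
echo "files reviewed: $reviewed"
```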
| Category | Action | Example |
|---|---|---|
| Critical | Must fix before merge | Security vulnerability, data loss risk, broken functionality |
| Important | Should fix before merge | Missing test, wrong abstraction, poor error handling |
| Suggestion | Consider for improvement | Naming improvement, code style preference, optional optimization |
| Nitpick | Take it or leave it | Formatting, comment wording (skip these in AI reviews) |
Check the author's verification story:
- What tests were run?
- Did the build pass?
- Was the change tested manually?
- Are there screenshots for UI changes?
- Is there a before/after comparison?
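The verification story above can also be gated mechanically. A minimal sketch of such a gate, using stand-in commands that always succeed; in an npm project the real checks might be `npm test` and `npm run build` (both assumptions about your setup):

```shell
#!/bin/sh
# Run each verification check; any failure flips the overall verdict.
set -u

verdict=pass
for check in "true" "echo build-ok"; do   # stand-in checks for the sketch
  if ! sh -c "$check" > /dev/null; then
    verdict=fail
  fi
done
echo "verification: $verdict"
```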
Use different models for different review perspectives:
Model A writes the code
│
▼
Model B reviews for correctness and architecture
│
▼
Model A addresses the feedback
│
▼
Human makes the final call
This catches issues that a single model might miss — different models have different blind spots.
Example prompt for a review agent:
Review this code change for correctness, security, and adherence to
our project conventions. The spec says [X]. The change should [Y].
Flag any issues as Critical, Important, or Suggestion.
After any refactoring or implementation change, check for orphaned code:
Don't leave dead code lying around — it confuses future readers and agents. But don't silently delete things you're not sure about. When in doubt, ask.
DEAD CODE IDENTIFIED:
- formatLegacyDate() in src/utils/date.ts — replaced by formatDate()
- OldTaskCard component in src/components/ — replaced by TaskCard
- LEGACY_API_URL constant in src/config.ts — no remaining references
→ Safe to remove these?
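The "no remaining references" claim can be checked mechanically before asking. A minimal sketch, using a throwaway directory so it is self-contained; the file contents and the `formatLegacyDate` symbol mirror the example above, and in a real repository you would grep your actual source tree:

```shell
#!/bin/sh
# Confirm a candidate dead symbol has no remaining references before
# proposing its removal. Builds a tiny fake source tree for the demo.
tmp=$(mktemp -d)
printf 'export function formatDate() {}\n' > "$tmp/date.ts"
printf 'import { formatDate } from "./date";\n' > "$tmp/app.ts"

symbol=formatLegacyDate   # the candidate dead function from the review above
if grep -rn "$symbol" "$tmp" > /dev/null; then
  status="still referenced"
else
  status="no remaining references"
fi
echo "$symbol: $status"
rm -rf "$tmp"
```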
When reviewing code, whether written by you, another agent, or a human, apply the same standard.
Part of code review is dependency review:
Before adding any dependency, check it for known vulnerabilities (e.g. `npm audit`).

**Rule:** Prefer standard library and existing utilities over new dependencies. Every dependency is a liability.
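As a self-contained illustration of dependency review, here is a sketch that counts the dependencies a change would declare; the `package.json` contents are invented for the example, and `npm audit` remains the real-world security step:

```shell
#!/bin/sh
# Count declared dependencies in a throwaway package.json so a reviewer
# can see at a glance how much liability a change adds.
tmp=$(mktemp -d)
cat > "$tmp/package.json" <<'EOF'
{
  "dependencies": {
    "left-pad": "^1.3.0",
    "lodash": "^4.17.21"
  }
}
EOF
count=$(grep -c '": "' "$tmp/package.json")   # one match per dependency line
echo "dependencies declared: $count"
rm -rf "$tmp"
```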
## Review: [PR/Change title]
### Context
- [ ] I understand what this change does and why
### Correctness
- [ ] Change matches spec/task requirements
- [ ] Edge cases handled
- [ ] Error paths handled
- [ ] Tests cover the change adequately
### Readability
- [ ] Names are clear and consistent
- [ ] Logic is straightforward
- [ ] No unnecessary complexity
### Architecture
- [ ] Follows existing patterns
- [ ] No unnecessary coupling or dependencies
- [ ] Appropriate abstraction level
### Security
- [ ] No secrets in code
- [ ] Input validated at boundaries
- [ ] No injection vulnerabilities
- [ ] Auth checks in place
### Performance
- [ ] No N+1 patterns
- [ ] No unbounded operations
- [ ] Pagination on list endpoints
### Verification
- [ ] Tests pass
- [ ] Build succeeds
- [ ] Manual verification done (if applicable)
### Verdict
- [ ] **Approve** — Ready to merge
- [ ] **Request changes** — Issues must be addressed
| Rationalization | Reality |
|---|---|
| "It works, that's good enough" | Working code that's unreadable, insecure, or architecturally wrong creates debt that compounds. |
| "I wrote it, so I know it's correct" | Authors are blind to their own assumptions. Every change benefits from another set of eyes. |
| "We'll clean it up later" | Later never comes. The review is the quality gate — use it. |
| "AI-generated code is probably fine" | AI code needs more scrutiny, not less. It's confident and plausible, even when wrong. |
| "The tests pass, so it's good" | Tests are necessary but not sufficient. They don't catch architecture problems, security issues, or readability concerns. |
After review is complete:
Weekly Installs: 24
Repository: addyosmani/agent-skills
GitHub Stars: 74
First Seen: Feb 16, 2026
Security Audits: Gen Agent Trust Hub: Pass, Socket: Pass, Snyk: Pass
Installed on: codex (24), gemini-cli (23), github-copilot (23), amp (23), kimi-cli (23), opencode (23)