# ln-650-persistence-performance-auditor

by levnikolaevich/claude-code-skills
npx skills add https://github.com/levnikolaevich/claude-code-skills --skill ln-650-persistence-performance-auditor

Paths: File paths (shared/, references/, ../ln-*) are relative to the skills repo root. If not found at CWD, locate this SKILL.md directory and go up one level for the repo root. If shared/ is missing, fetch files via WebFetch from https://raw.githubusercontent.com/levnikolaevich/claude-code-skills/master/skills/{path}.
Coordinates 4 specialized audit workers to perform database efficiency, transaction correctness, runtime performance, and resource lifecycle analysis.
Output: docs/project/persistence_audit.md (file-based, no task creation)

MANDATORY READ: Load shared/references/two_layer_detection.md for detection methodology.

Load project metadata:
- docs/project/tech_stack.md - detect DB, ORM, async framework
- dependency manifests: requirements.txt, pyproject.toml, package.json, go.mod
- docs/tasks/kanban_board.md - auto-discover team IDs

Extract DB-specific metadata:
| Metadata | Source | Example |
|---|---|---|
| Database type | tech_stack.md, docker-compose.yml | PostgreSQL 16 |
| ORM | imports, requirements.txt | SQLAlchemy 2.0 |
| Async framework | imports, requirements.txt | asyncio, FastAPI |
| Session config | grep create_async_engine, sessionmaker | expire_on_commit=False |
| Triggers/NOTIFY | migration files | pg_notify('job_events', ...) |
| Connection pooling | engine config | pool_size=10, max_overflow=20 |
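The session-config and pool rows above could be collected with a grep-style pass like this sketch; the source snippet and the key list are illustrative, not the skill's actual extraction logic:

```python
import re

# Illustrative engine setup as it might appear in a file such as src/db.py (assumption).
SOURCE = """
engine = create_async_engine(DB_URL, pool_size=10, max_overflow=20)
Session = async_sessionmaker(engine, expire_on_commit=False)
"""

def extract_db_metadata(source: str) -> dict:
    """Pull session/pool settings out of engine-configuration code."""
    meta = {}
    for key in ("expire_on_commit", "pool_size", "max_overflow"):
        m = re.search(rf"{key}\s*=\s*(\w+)", source)
        if m:
            raw = m.group(1)
            # Normalize booleans and integers; keep anything else as a string.
            meta[key] = {"True": True, "False": False}.get(raw, int(raw) if raw.isdigit() else raw)
    return meta

print(extract_db_metadata(SOURCE))
# → {'expire_on_commit': False, 'pool_size': 10, 'max_overflow': 20}
```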
Scan for triggers and event channels:
Grep("pg_notify|NOTIFY|CREATE TRIGGER", path="alembic/versions/")
OR path="migrations/"
→ Store: db_config.triggers = [{table, event, function, channel_name}]
Grep("LISTEN\s+\w+|\.subscribe\(|\.on\(.*channel|redis.*subscribe", path="src/")
OR path="app/"
→ Store: db_config.event_subscribers = [{channel_name, file, line, technology}]
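The trigger scan can be sketched in plain Python; the migration fragment and regex are illustrative, and extracting channel_name from the trigger function body is left out for brevity:

```python
import re

# Hypothetical migration fragment; real input would be read from alembic/versions/.
MIGRATION_SQL = """
CREATE TRIGGER jobs_notify AFTER UPDATE ON jobs
FOR EACH ROW EXECUTE FUNCTION notify_job_events();
"""

TRIGGER_RE = re.compile(
    r"CREATE TRIGGER\s+\w+\s+(?:BEFORE|AFTER)\s+(?P<event>\w+)\s+ON\s+(?P<table>\w+)"
    r".*?EXECUTE (?:FUNCTION|PROCEDURE)\s+(?P<function>\w+)",
    re.IGNORECASE | re.DOTALL,
)

def scan_triggers(sql: str) -> list[dict]:
    """Collect {event, table, function} records for db_config.triggers."""
    return [m.groupdict() for m in TRIGGER_RE.finditer(sql)]

print(scan_triggers(MIGRATION_SQL))
# → [{'event': 'UPDATE', 'table': 'jobs', 'function': 'notify_job_events'}]
```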
For each detected technology:
| Technology | Research Focus |
|---|---|
| SQLAlchemy | Session lifecycle, expire_on_commit, bulk operations, eager/lazy loading |
| PostgreSQL | NOTIFY/LISTEN semantics, transaction isolation, batch operations |
| asyncio | to_thread, blocking detection, event loop best practices |
| FastAPI | Dependency injection scopes, background tasks, async endpoints |
Build contextStore:
{
"tech_stack": {"db": "postgresql", "orm": "sqlalchemy", "async": "asyncio"},
"best_practices": {"sqlalchemy": {...}, "postgresql": {...}, "asyncio": {...}},
"db_config": {
"expire_on_commit": false,
"triggers": [{"table": "jobs", "event": "UPDATE", "function": "notify_job_events", "channel_name": "job_events"}],
"event_subscribers": [{"channel_name": "job_events", "file": "src/listeners/job_listener.py", "line": 12, "technology": "postgresql"}],
"pool_size": 10
},
"codebase_root": "/project",
"output_dir": "docs/project/.audit/ln-650/{YYYY-MM-DD}"
}
mkdir -p {output_dir} # Worker files cleaned up after consolidation (Phase 8)
MANDATORY READ: Load shared/references/task_delegation_pattern.md and shared/references/audit_worker_core_contract.md.
Workers (ALL 4 in PARALLEL):

| # | Worker | Priority | Focus |
|---|---|---|---|
| 1 | ln-651-query-efficiency-auditor | HIGH | Redundant queries, N-UPDATE loops, over-fetching, caching scope |
| 2 | ln-652-transaction-correctness-auditor | HIGH | Commit patterns, trigger interaction, transaction scope, rollback |
| 3 | ln-653-runtime-performance-auditor | MEDIUM | Blocking IO in async, allocations, sync sleep, string concat |
| 4 | ln-654-resource-lifecycle-auditor | HIGH | Session scope mismatch, streaming resource holding, pool config, cleanup |
Invocation (4 workers in PARALLEL):
FOR EACH worker IN [ln-651, ln-652, ln-653, ln-654]:
Agent(description: "Audit via " + worker,
prompt: "Execute audit worker.
Step 1: Invoke worker:
Skill(skill: \"" + worker + "\")
CONTEXT:
" + JSON.stringify(contextStore),
subagent_type: "general-purpose")
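The fan-out/fan-in shape of this invocation can be illustrated with a plain-Python sketch; the real call goes through the Agent/Skill tools, so run_worker here is only a hypothetical stand-in:

```python
from concurrent.futures import ThreadPoolExecutor

WORKERS = ["ln-651", "ln-652", "ln-653", "ln-654"]

def run_worker(worker: str, context: dict) -> str:
    """Stand-in for the Agent/Skill invocation; returns the worker's compact summary."""
    return f"Report written: {context['output_dir']}/{worker}.md"

def dispatch(context: dict) -> list[str]:
    # Fan out all four workers at once, then collect summaries in worker order.
    with ThreadPoolExecutor(max_workers=len(WORKERS)) as pool:
        return list(pool.map(lambda w: run_worker(w, context), WORKERS))

summaries = dispatch({"output_dir": "docs/project/.audit/ln-650/2026-03-15"})
print(summaries[0])
```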
Worker Output Contract (File-Based):
Workers follow the shared file-based audit contract, write reports to {output_dir}/, and return compact score/severity summaries for aggregation.
Expected summary format:
Report written: docs/project/.audit/ln-650/{YYYY-MM-DD}/651-query-efficiency.md
Score: 6.0/10 | Issues: 8 (C:0 H:3 M:4 L:1)
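A minimal parser for this summary line, assuming workers emit exactly the format shown above:

```python
import re

# Matches e.g. "Score: 6.0/10 | Issues: 8 (C:0 H:3 M:4 L:1)".
SUMMARY_RE = re.compile(
    r"Score:\s*(?P<score>[\d.]+)/10\s*\|\s*Issues:\s*(?P<total>\d+)\s*"
    r"\(C:(?P<C>\d+) H:(?P<H>\d+) M:(?P<M>\d+) L:(?P<L>\d+)\)"
)

def parse_summary(line: str) -> dict:
    """Turn a worker's compact summary line into numbers for aggregation."""
    m = SUMMARY_RE.search(line)
    if not m:
        raise ValueError(f"unparseable summary: {line!r}")
    d = m.groupdict()
    return {"score": float(d["score"]),
            "issues": {k: int(d[k]) for k in ("C", "H", "M", "L")}}

print(parse_summary("Score: 6.0/10 | Issues: 8 (C:0 H:3 M:4 L:1)"))
# → {'score': 6.0, 'issues': {'C': 0, 'H': 3, 'M': 4, 'L': 1}}
```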
MANDATORY READ: Load shared/references/audit_coordinator_aggregation.md and shared/references/context_validation.md.
Use the shared aggregation pattern for parsing worker summaries, rolling up severity totals, reading worker files, and assembling the final report.
Local rules for this coordinator:
Skill=ln-650, Metric=overall_score, Scale=0-10.

Context Validation:
Apply Rules 1, 6 to merged findings:
FOR EACH finding WHERE severity IN (HIGH, MEDIUM):
# Rule 1: ADR/Planned Override
IF finding matches ADR → advisory "[Planned: ADR-XXX]"
# Rule 6: Execution Context
IF finding.check IN (blocking_io, redundant_fetch, transaction_wide, cpu_bound):
context = 0
- Function in __init__/setup/bootstrap/migrate → context += 1
- File in tasks/jobs/cron/ → context += 1
- Has timeout/safeguard nearby → context += 1
- Small data (<100KB file, <100 items dataset) → context += 1
IF context >= 3 → advisory
IF context >= 1 → severity -= 1
Downgraded findings → "Advisory Findings" section in report.
Recalculate overall score excluding advisory findings from penalty.
Exempt: Missing rollback CRITICAL, N-UPDATE loops in hot paths.
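Rule 6 could be implemented roughly as follows; the finding fields and the elif ordering (advisory takes precedence over the one-step downgrade) are assumptions, not part of the shared contract:

```python
SEVERITIES = ["LOW", "MEDIUM", "HIGH", "CRITICAL"]
CONTEXT_CHECKS = {"blocking_io", "redundant_fetch", "transaction_wide", "cpu_bound"}

def apply_rule_6(finding: dict) -> dict:
    """Downgrade findings whose execution context makes them low-impact."""
    if finding["severity"] not in ("HIGH", "MEDIUM") or finding["check"] not in CONTEXT_CHECKS:
        return finding
    context = sum([
        finding.get("in_setup_path", False),      # __init__/setup/bootstrap/migrate
        finding.get("in_background_job", False),  # tasks/jobs/cron/
        finding.get("has_safeguard", False),      # timeout or safeguard nearby
        finding.get("small_data", False),         # <100KB file / <100 items dataset
    ])
    if context >= 3:
        finding["advisory"] = True
    elif context >= 1:
        finding["severity"] = SEVERITIES[SEVERITIES.index(finding["severity"]) - 1]
    return finding

print(apply_rule_6({"check": "blocking_io", "severity": "HIGH",
                    "in_setup_path": True, "small_data": True}))
```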
## Persistence & Performance Audit Report - [DATE]
### Executive Summary
[2-3 sentences on overall persistence/performance health]
### Compliance Score
| Category | Score | Notes |
|----------|-------|-------|
| Query Efficiency | X/10 | ... |
| Transaction Correctness | X/10 | ... |
| Runtime Performance | X/10 | ... |
| Resource Lifecycle | X/10 | ... |
| **Overall** | **X/10** | |
### Severity Summary
| Severity | Count |
|----------|-------|
| Critical | X |
| High | X |
| Medium | X |
| Low | X |
### Findings by Category
#### 1. Query Efficiency
| Severity | Location | Issue | Recommendation | Effort |
|----------|----------|-------|----------------|--------|
| HIGH | job_processor.py:434 | Redundant entity fetch | Pass object not ID | S |
#### 2. Transaction Correctness
| Severity | Location | Issue | Recommendation | Effort |
|----------|----------|-------|----------------|--------|
| CRITICAL | job_processor.py:412 | Missing intermediate commits | Add commit at milestones | S |
#### 3. Runtime Performance
| Severity | Location | Issue | Recommendation | Effort |
|----------|----------|-------|----------------|--------|
| HIGH | job_processor.py:444 | Blocking read_bytes() in async | Use aiofiles/to_thread | S |
#### 4. Resource Lifecycle
| Severity | Location | Issue | Recommendation | Effort |
|----------|----------|-------|----------------|--------|
| CRITICAL | sse_stream.py:112 | DbSession held for entire SSE stream | Scope session to auth check only | M |
### Recommended Actions (Priority-Sorted)
| Priority | Category | Location | Issue | Recommendation | Effort |
|----------|----------|----------|-------|----------------|--------|
| CRITICAL | Transaction | ... | Missing commits | Add strategic commits | S |
| HIGH | Query | ... | Redundant fetch | Pass object not ID | S |
### Sources Consulted
- SQLAlchemy best practices: [URL]
- PostgreSQL NOTIFY docs: [URL]
- Python asyncio-dev: [URL]
Write consolidated report to docs/project/persistence_audit.md with the Output Format above.
MANDATORY READ: Load shared/references/results_log_pattern.md.
Append one row to docs/project/.audit/results_log.md with: Skill=ln-650, Metric=overall_score, Scale=0-10, Score from Phase 6 report. Calculate Delta vs previous ln-650 row. Create file with header if missing. Rolling window: max 50 entries.
rm -rf {output_dir}
Delete the dated output directory (docs/project/.audit/ln-650/{YYYY-MM-DD}/). The consolidated report and results log already preserve all audit data.
Deleted: docs/project/.audit/ln-650/{YYYY-MM-DD}/ ({output_dir}/). Kept: docs/project/persistence_audit.md.

MANDATORY READ: Load shared/references/meta_analysis_protocol.md.
Skill type: review-coordinator (workers only). Run after all phases complete. Output to chat using the review-coordinator — workers only format.
Related files:
- docs/project/tech_stack.md
- docs/tasks/kanban_board.md
- shared/references/task_delegation_pattern.md
- shared/references/audit_coordinator_aggregation.md
- shared/references/research_tool_fallback.md

Version: 1.1.0 | Last Updated: 2026-03-15