ln-640-pattern-evolution-auditor by levnikolaevich/claude-code-skills
npx skills add https://github.com/levnikolaevich/claude-code-skills --skill ln-640-pattern-evolution-auditor

Paths: File paths (shared/, references/, ../ln-*) are relative to the skills repo root. If not found at CWD, locate this SKILL.md directory and go up one level for the repo root. If shared/ is missing, fetch files via WebFetch from https://raw.githubusercontent.com/levnikolaevich/claude-code-skills/master/skills/{path}.
L2 Coordinator that analyzes implemented architectural patterns against current best practices and tracks evolution over time.
Input: docs/project/patterns_catalog.md with implemented patterns
Output: docs/project/patterns_catalog.md (file-based)

| Score | What it measures | Threshold |
|---|---|---|
| Compliance | Industry standards, naming, tech stack conventions, layer boundaries | 70% |
| Completeness | All components, error handling, observability, tests | 70% |
| Quality | Readability, maintainability, no smells, SOLID, no duplication | 70% |
| Implementation | Code exists, production use, integrated, monitored | 70% |
MANDATORY READ: Load shared/references/task_delegation_pattern.md.
| Worker | Purpose | Phase |
|---|---|---|
| ln-641-pattern-analyzer | Calculate 4 scores per pattern | Phase 5 |
| ln-642-layer-boundary-auditor | Detect layer violations | Phase 4 |
| ln-643-api-contract-auditor | Audit API contracts, DTOs, layer leakage | Phase 4 |
| ln-644-dependency-graph-auditor | Build dependency graph, detect cycles, validate boundaries, calculate metrics | Phase 4 |
| ln-645-open-source-replacer | Search OSS replacements for custom modules via MCP Research | Phase 4 |
| ln-646-project-structure-auditor | Audit physical structure, hygiene, naming conventions | Phase 4 |
| ln-647-env-config-auditor | Audit env var config, sync, naming, startup validation | Phase 4 |
All delegations use Agent with subagent_type: "general-purpose". Keep Phase 4 workers parallel where inputs are independent; keep ln-641 in Phase 5 because pattern scoring depends on earlier boundary and graph evidence.
MANDATORY READ: Load shared/references/two_layer_detection.md for detection methodology.
1. Load docs/project/patterns_catalog.md
IF missing → create from shared/templates/patterns_template.md
IF exists → verify template conformance:
required_sections = ["Score Legend", "Pattern Inventory", "Discovered Patterns",
"Layer Boundary Status", "API Contract Status", "Quick Wins",
"Patterns Requiring Attention", "Pattern Recommendations",
"Excluded Patterns", "Summary"]
FOR EACH section IN required_sections:
IF section NOT found in catalog:
→ Append section from shared/templates/patterns_template.md
Verify table columns match template (e.g., Recommendation in Quick Wins)
IF columns mismatch → update table headers, preserve existing data rows
2. Load docs/reference/adrs/*.md → link patterns to ADRs
Load docs/reference/guides/*.md → link patterns to Guides
3. Auto-detect baseline patterns
FOR EACH pattern IN pattern_library.md "Pattern Detection" table:
Grep(detection_keywords) on codebase
IF found but not in catalog → add as "Undocumented (Baseline)"
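The template-conformance step above can be sketched as a small helper. This is a minimal sketch, assuming required sections are level-2 markdown headers and that `template_sections` (a hypothetical name) maps each section name to its template body:

```python
def ensure_required_sections(catalog_md, template_sections):
    """Append any required section missing from the catalog.

    Assumes sections are level-2 markdown headers ("## Name") and
    `template_sections` maps section name -> template body text.
    """
    missing = [name for name in template_sections
               if f"## {name}" not in catalog_md]
    for name in missing:
        # Preserve existing content; only append what is absent
        catalog_md += f"\n## {name}\n\n{template_sections[name]}\n"
    return catalog_md, missing
```

Existing data rows are never touched; only absent sections are appended, matching the "preserve existing data rows" rule above.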
MANDATORY READ: Load references/pattern_library.md — use "Discovery Heuristics" section.
Predefined patterns are a seed, not a ceiling. Discover project-specific patterns beyond the baseline.
# Structural heuristics (from pattern_library.md)
1. Class naming: Grep GoF suffixes (Factory|Builder|Strategy|Adapter|Observer|...)
2. Abstract hierarchy: ABC/Protocol with 2+ implementations → Template Method/Strategy
3. Fluent interface: return self chains → Builder
4. Registration dict: _registry + register() → Registry
5. Middleware chain: app.use/add_middleware → Chain of Responsibility
6. Event listeners: @on_event/@receiver/signal → Observer
7. Decorator wrappers: @wraps/functools.wraps → Decorator
# Document-based heuristics
8. ADR/Guide filenames + H1 headers → extract pattern names not in library
9. Architecture.md → grep pattern terminology
10. Code comments → "pattern:|@pattern|design pattern"
# Output per discovered pattern:
{name, evidence: [files], confidence: HIGH|MEDIUM|LOW, status: "Discovered"}
→ Add to catalog "Discovered Patterns (Adaptive)" section
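Heuristic 1 (GoF class-name suffixes) can be sketched as below. The confidence thresholds here are an illustrative assumption (3+ evidence files → HIGH), not the library's actual rule:

```python
import re

GOF_SUFFIXES = r"(Factory|Builder|Strategy|Adapter|Observer|Decorator|Proxy|Facade)"
CLASS_RE = re.compile(r"class\s+(\w+?" + GOF_SUFFIXES + r")\b")

def discover_by_class_suffix(sources):
    """Map GoF-suffixed class names to candidate patterns.

    `sources` is {path: file_text}. Confidence is a naive illustration:
    HIGH with 3+ evidence files, MEDIUM with 2, else LOW.
    """
    evidence = {}
    for path, text in sources.items():
        for _full_name, suffix in CLASS_RE.findall(text):
            evidence.setdefault(suffix, set()).add(path)
    confidence = lambda n: "HIGH" if n >= 3 else ("MEDIUM" if n == 2 else "LOW")
    return [{"name": suffix, "evidence": sorted(files),
             "confidence": confidence(len(files)), "status": "Discovered"}
            for suffix, files in evidence.items()]
```

The output shape matches the per-pattern record above, so results can be appended directly to "Discovered Patterns (Adaptive)".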
Suggest patterns that COULD improve architecture (advisory, NOT scored).
# Check conditions from pattern_library.md "Pattern Recommendations" table
# E.g., external API calls without retry → recommend Resilience
# E.g., 5+ constructor params → recommend Builder/Parameter Object
# E.g., direct DB access from API layer → recommend Repository
→ Add to catalog "Pattern Recommendations" section
Verify each detected pattern is actually implemented, not just a keyword false positive.
MANDATORY READ: Load references/scoring_rules.md — use "Required components by pattern" table.
FOR EACH detected_pattern IN (baseline_detected + adaptive_discovered):
IF pattern.source == "adaptive":
# Adaptive patterns: check confidence + evidence volume
IF pattern.confidence == "LOW" AND len(pattern.evidence.files) < 3:
pattern.status = "EXCLUDED"
pattern.exclusion_reason = "Low confidence, insufficient evidence"
→ Add to catalog "Excluded Patterns" section
CONTINUE
ELSE:
# Baseline patterns: check minimum 2 structural components
components = get_required_components(pattern, scoring_rules.md)
found_count = 0
FOR EACH component IN components:
IF Grep(component.detection_grep, codebase) has matches:
found_count += 1
IF found_count < 2:
pattern.status = "EXCLUDED"
pattern.exclusion_reason = "Found {found_count}/{len(components)} components"
→ Add to catalog "Excluded Patterns" section
CONTINUE
pattern.status = "VERIFIED"
# Step 2: Semantic applicability via MCP Ref (after structural check passes)
FOR EACH pattern WHERE pattern.status == "VERIFIED":
ref_search_documentation("{pattern.name} {tech_stack.language} idiom vs architectural pattern")
WebSearch("{pattern.name} {tech_stack.language} — language feature or design pattern?")
IF evidence shows pattern is language idiom / stdlib feature / framework built-in:
pattern.status = "EXCLUDED"
pattern.exclusion_reason = "Language idiom / built-in feature, not architectural pattern"
→ Add to catalog "Excluded Patterns" section
# Cleanup: remove stale patterns from previous audits
FOR EACH pattern IN existing_catalog WHERE NOT detected in current scan:
→ REMOVE from Pattern Inventory
→ Add to "Excluded Patterns" with reason "No longer detected in codebase"
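The baseline branch of the structural gate above can be sketched as follows. `grep` and `rules` are hypothetical stand-ins: `grep(regex)` returns True when the codebase has matches, and `rules` maps pattern name to the component detection regexes from scoring_rules.md:

```python
def verify_baseline_pattern(pattern, grep, rules):
    """Keep a baseline pattern in scope only if at least 2 of its
    required structural components match in the codebase."""
    components = rules.get(pattern["name"], [])
    found = sum(1 for rx in components if grep(rx))
    if found < 2:
        pattern["status"] = "EXCLUDED"
        pattern["exclusion_reason"] = f"Found {found}/{len(components)} components"
    else:
        pattern["status"] = "VERIFIED"
    return pattern
```

Only VERIFIED patterns then proceed to the semantic applicability check in Step 2.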
FOR EACH pattern WHERE last_audit > 30 days OR never:
# MCP Ref + Context7 + WebSearch
ref_search_documentation("{pattern} best practices {tech_stack}")
IF pattern.library: query-docs(library_id, "{pattern}")
WebSearch("{pattern} implementation best practices 2026")
→ Store: contextStore.bestPractices[pattern]
MANDATORY READ: Load shared/references/audit_coordinator_domain_mode.md.
Use the shared domain discovery pattern to set domain_mode and all_domains. Then create docs/project/.audit/ln-640/{YYYY-MM-DD}/. Worker files are cleaned up after consolidation (see Phase 11).
IF domain_mode == "domain-aware":
FOR EACH domain IN all_domains:
domain_context = {
...contextStore,
domain_mode: "domain-aware",
current_domain: { name: domain.name, path: domain.path }
}
FOR EACH worker IN [ln-642, ln-643, ln-644, ln-645, ln-646, ln-647]:
Agent(description: "Audit " + domain.name + " via " + worker,
prompt: "Execute audit worker.
Step 1: Invoke worker:
Skill(skill: \"" + worker + "\")
CONTEXT:
" + JSON.stringify(domain_context),
subagent_type: "general-purpose")
ELSE:
FOR EACH worker IN [ln-642, ln-643, ln-644, ln-645, ln-646, ln-647]:
Agent(description: "Pattern evolution audit via " + worker,
prompt: "Execute audit worker.
Step 1: Invoke worker:
Skill(skill: \"" + worker + "\")
CONTEXT:
" + JSON.stringify(contextStore),
subagent_type: "general-purpose")
# Apply layer deductions from ln-642 return values (score + issue counts)
# Detailed violations read from files in Phase 6
# ln-641 stays GLOBAL (patterns are cross-cutting, not per-domain)
# Only VERIFIED patterns from Phase 1d (skip EXCLUDED)
FOR EACH pattern IN catalog WHERE pattern.status == "VERIFIED":
Agent(description: "Analyze " + pattern.name + " via ln-641",
prompt: "Execute audit worker.
Step 1: Invoke worker:
Skill(skill: \"ln-641-pattern-analyzer\")
CONTEXT:
" + JSON.stringify({...contextStore, pattern: pattern}),
subagent_type: "general-purpose")
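The fan-out above can be sketched as prompt assembly. This is a minimal sketch of how the coordinator might build one delegation per worker (and per domain in domain-aware mode); the task tuple shape is an assumption for illustration:

```python
import json

WORKERS = ["ln-642-layer-boundary-auditor", "ln-643-api-contract-auditor",
           "ln-644-dependency-graph-auditor", "ln-645-open-source-replacer",
           "ln-646-project-structure-auditor", "ln-647-env-config-auditor"]

def build_delegations(context_store, domains=None):
    """Build one (description, prompt) task per worker, and per domain
    when `domains` is given (domain-aware mode)."""
    tasks = []
    targets = domains or [None]
    for domain in targets:
        ctx = dict(context_store)
        if domain is not None:
            ctx.update(domain_mode="domain-aware",
                       current_domain={"name": domain["name"], "path": domain["path"]})
        for worker in WORKERS:
            desc = (f"Audit {domain['name']} via {worker}" if domain
                    else f"Pattern evolution audit via {worker}")
            prompt = ("Execute audit worker.\n"
                      f'Step 1: Invoke worker:\n  Skill(skill: "{worker}")\n'
                      f"CONTEXT:\n{json.dumps(ctx)}")
            tasks.append((desc, prompt))
    return tasks
```

With independent inputs, all tasks in the returned list can be dispatched in parallel.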
Worker Output Contract (file-based):
MANDATORY READ: Load shared/references/audit_worker_core_contract.md and shared/templates/audit_worker_report_template.md.
All workers write reports to {output_dir}/ and return minimal summary:
| Worker | Return Format | File |
|---|---|---|
| ln-641 | `Score: X.X/10 (C:N K:N Q:N I:N) \| Issues: N` | |
| ln-642 | `Score: X.X/10 \| Issues: N (C:N H:N M:N L:N)` | |
| ln-643 | `Score: X.X/10 (C:N K:N Q:N I:N) \| Issues: N` | |
| ln-644 | `Score: X.X/10 \| Issues: N (C:N H:N M:N L:N)` | |
| ln-645 | `Score: X.X/10 \| Issues: N (C:N H:N M:N L:N)` | |
| ln-646 | `Score: X.X/10 \| Issues: N (C:N H:N M:N L:N)` | |
| ln-647 | `Score: X.X/10 \| Issues: N (C:N H:N M:N L:N)` | |
Coordinator parses scores/counts directly from return values (no file reads needed for the aggregation tables). Files are read only for cross-domain aggregation (Phase 6) and report assembly (Phase 8).
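Parsing these one-line summaries can be sketched with a regex. The breakdown letters are taken verbatim from the formats above; note that "C" means compliance in the ln-641/643 format but critical in the others, so the caller must interpret keys per worker:

```python
import re

def parse_worker_return(summary: str) -> dict:
    """Parse a worker's return summary into score, issue total, and
    per-letter breakdowns, handling both observed formats, e.g.:
      "Score: 7.9/10 (C:72 K:85 Q:68 I:90) | Issues: 3 (H:1 M:2 L:0)"
      "Score: 4.5/10 | Issues: 8 (C:1 H:3 M:4 L:0)"
    """
    score = float(re.search(r"Score:\s*([\d.]+)/10", summary).group(1))
    issues = int(re.search(r"Issues:\s*(\d+)", summary).group(1))
    # Collect all letter:number pairs from any parenthesized breakdown
    breakdown = {k: int(v) for k, v in re.findall(r"\b([CKQIHML]):(\d+)", summary)}
    return {"score": score, "issues": issues, "breakdown": breakdown}
```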
IF domain_mode == "domain-aware":
# Step 1: Read DATA-EXTENDED from ln-642 files
FOR EACH file IN Glob("{output_dir}/642-layer-boundary-*.md"):
Read file → extract <!-- DATA-EXTENDED ... --> JSON + Findings table
# Group findings by issue type across domains
FOR EACH issue_type IN unique(ln642_findings.issue):
domains_with_issue = ln642_findings.filter(f => f.issue == issue_type).map(f => f.domain)
IF len(domains_with_issue) >= 2:
systemic_findings.append({
severity: "CRITICAL",
issue: f"Systemic layer violation: {issue_type} in {len(domains_with_issue)} domains",
domains: domains_with_issue,
recommendation: "Address at architecture level, not per-domain"
})
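The "Read file → extract" step recurs in every stage below. A minimal sketch of pulling the embedded JSON out of a worker report, assuming each file carries one `<!-- DATA-EXTENDED ... -->` comment whose payload is a single JSON object:

```python
import json
import re

def extract_data_extended(report_md: str) -> dict:
    """Extract the machine-readable JSON from a worker report's
    <!-- DATA-EXTENDED ... --> HTML comment."""
    m = re.search(r"<!--\s*DATA-EXTENDED\s*(\{.*?\})\s*-->", report_md, re.DOTALL)
    if m is None:
        raise ValueError("No DATA-EXTENDED block found in report")
    return json.loads(m.group(1))
```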
# Step 2: Read DATA-EXTENDED from ln-643 files
FOR EACH file IN Glob("{output_dir}/643-api-contract-*.md"):
Read file → extract <!-- DATA-EXTENDED ... --> JSON (issues with principle + domain)
# Group findings by rule across domains
FOR EACH rule IN unique(ln643_issues.principle):
domains_with_issue = ln643_issues.filter(i => i.principle == rule).map(i => i.domain)
IF len(domains_with_issue) >= 2:
systemic_findings.append({
severity: "HIGH",
issue: f"Systemic API contract issue: {rule} in {len(domains_with_issue)} domains",
domains: domains_with_issue,
recommendation: "Create cross-cutting architectural fix"
})
# Step 3: Read DATA-EXTENDED from ln-644 files
FOR EACH file IN Glob("{output_dir}/644-dep-graph-*.md"):
Read file → extract <!-- DATA-EXTENDED ... --> JSON (cycles, sdp_violations)
# Cross-domain cycles
FOR EACH cycle IN ln644_cycles:
domains_in_cycle = unique(cycle.path.map(m => m.domain))
IF len(domains_in_cycle) >= 2:
systemic_findings.append({
severity: "CRITICAL",
issue: f"Cross-domain dependency cycle: {cycle.path} spans {len(domains_in_cycle)} domains",
domains: domains_in_cycle,
recommendation: "Decouple via domain events or extract shared module"
})
# Step 4: Read DATA-EXTENDED from ln-645 files
FOR EACH file IN Glob("{output_dir}/645-open-source-replacer-*.md"):
Read file → extract <!-- DATA-EXTENDED ... --> JSON (replacements array)
# Group findings by goal/alternative across domains
FOR EACH goal IN unique(ln645_replacements.goal):
domains_with_same = ln645_replacements.filter(r => r.goal == goal).map(r => r.domain)
IF len(domains_with_same) >= 2:
systemic_findings.append({
severity: "HIGH",
issue: f"Systemic custom implementation: {goal} duplicated in {len(domains_with_same)} domains",
domains: domains_with_same,
recommendation: "Single migration across all domains using recommended OSS package"
})
# Step 5: Read DATA-EXTENDED from ln-646 files
FOR EACH file IN Glob("{output_dir}/646-structure-*.md"):
Read file → extract <!-- DATA-EXTENDED ... --> JSON
# Group findings by dimension across domains
FOR EACH dimension IN ["junk_drawers", "naming_violations"]:
domains_with_issue = ln646_data.filter(d => d.dimensions[dimension].issues > 0).map(d => d.domain)
IF len(domains_with_issue) >= 2:
systemic_findings.append({
severity: "MEDIUM",
issue: f"Systemic structure issue: {dimension} in {len(domains_with_issue)} domains",
domains: domains_with_issue,
recommendation: "Standardize project structure conventions across domains"
})
# Step 6: Read DATA-EXTENDED from ln-647 files
FOR EACH file IN Glob("{output_dir}/647-env-config-*.md"):
Read file → extract <!-- DATA-EXTENDED ... --> JSON
# Group env sync issues across domains
FOR EACH issue_type IN ["missing_from_example", "dead_in_example", "default_desync"]:
domains_with_issue = ln647_data.filter(d => d.sync_stats[issue_type] > 0).map(d => d.domain)
IF len(domains_with_issue) >= 2:
systemic_findings.append({
severity: "HIGH",
issue: f"Systemic env config issue: {issue_type} in {len(domains_with_issue)} domains",
domains: domains_with_issue,
recommendation: "Centralize env configuration management"
})
# Cross-domain SDP violations
FOR EACH sdp IN ln644_sdp_violations:
IF sdp.from.domain != sdp.to.domain:
systemic_findings.append({
severity: "HIGH",
issue: f"Cross-domain stability violation: {sdp.from} (I={sdp.I_from}) depends on {sdp.to} (I={sdp.I_to})",
domains: [sdp.from.domain, sdp.to.domain],
recommendation: "Apply DIP: extract interface at domain boundary"
})
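Steps 1–6 above share one shape: group per-domain findings by some attribute, then escalate any value seen in two or more domains. A generic sketch of that grouping, with the severity and message template supplied per step (`findings` items are assumed to carry the grouping key and a "domain" field, as in the DATA-EXTENDED payloads):

```python
from collections import defaultdict

def systemic_findings(findings, key, severity, template, recommendation):
    """Escalate any `key` value that recurs across >= 2 domains."""
    by_key = defaultdict(set)
    for f in findings:
        by_key[f[key]].add(f["domain"])
    out = []
    for value, domains in by_key.items():
        if len(domains) >= 2:
            out.append({
                "severity": severity,
                "issue": template.format(value=value, n=len(domains)),
                "domains": sorted(domains),
                "recommendation": recommendation,
            })
    return out
```

Step 1, for example, would pass `key="issue"`, `severity="CRITICAL"`, and the layer-violation message template; single-domain findings stay in the per-domain reports and are not escalated.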
gaps = {
undocumentedPatterns: found in code but not in catalog,
missingComponents: required components not found per scoring_rules.md,
layerViolations: code in wrong architectural layers,
consistencyIssues: conflicting patterns,
systemicIssues: systemic_findings from Phase 6
}
MANDATORY READ: Load shared/references/audit_coordinator_aggregation.md and shared/references/context_validation.md.
# Step 1: Parse scores from worker return values (already in-context)
# ln-641: "Score: 7.9/10 (C:72 K:85 Q:68 I:90) | Issues: 3 (H:1 M:2 L:0)"
# ln-642: "Score: 4.5/10 | Issues: 8 (C:1 H:3 M:4 L:0)"
# ln-643: "Score: 6.75/10 (C:65 K:70 Q:55 I:80) | Issues: 4 (H:2 M:1 L:1)"
# ln-644: "Score: 6.5/10 | Issues: 8 (C:1 H:3 M:3 L:1)"
pattern_scores = [parse_score(r) for r in ln641_returns] # Each 0-10
layer_score = parse_score(ln642_return) # 0-10
api_score = parse_score(ln643_return) # 0-10
graph_score = parse_score(ln644_return) # 0-10
structure_score = parse_score(ln646_return) # 0-10
env_config_score = parse_score(ln647_return) # 0-10
# Step 2: Calculate architecture_health_score (ln-645 NOT included — separate metric)
all_scores = pattern_scores + [layer_score, api_score, graph_score, structure_score, env_config_score]
architecture_health_score = round(average(all_scores) * 10) # 0-100 scale
# Step 2b: Separate reuse opportunity score (informational, no SLA enforcement)
reuse_opportunity_score = parse_score(ln645_return) # 0-10, NOT in architecture_health_score
# Status mapping:
# >= 80: "healthy"
# 70-79: "warning"
# < 70: "critical"
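Steps 2 and the status mapping can be sketched together. All inputs are the 0–10 worker scores parsed above; ln-645's reuse score is deliberately left out:

```python
def architecture_health(pattern_scores, layer, api, graph, structure, env_config):
    """Aggregate 0-10 worker scores into the 0-100
    architecture_health_score plus its status band."""
    all_scores = list(pattern_scores) + [layer, api, graph, structure, env_config]
    health = round(sum(all_scores) / len(all_scores) * 10)  # 0-100 scale
    if health >= 80:
        status = "healthy"
    elif health >= 70:
        status = "warning"
    else:
        status = "critical"
    return health, status
```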
# Step 3: Context Validation (Post-Filter)
# Apply Rules 1, 3 to ln-641 anti-pattern findings:
# Rule 1: Match god_class/large_file findings against ADR list
# Rule 3: For god_class deductions (-5 compliance):
# Read flagged file ONCE, check 4 cohesion indicators
# (public_func_count <= 2, subdirs, shared_state > 60%, CC=1)
# IF cohesion >= 3: restore deducted -5 points
# Note in patterns_catalog.md: "[Advisory: high cohesion module]"
# Recalculate architecture_health_score with restored points
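The Rule 3 cohesion check can be sketched as below. The metric key names are illustrative assumptions standing in for the four indicators listed above:

```python
def god_class_advisory(file_metrics):
    """Return True when a flagged god_class file shows >= 3 of the 4
    cohesion indicators, i.e. the -5 compliance deduction should be
    restored and the catalog annotated "[Advisory: high cohesion module]".
    Key names in `file_metrics` are hypothetical illustrations."""
    indicators = [
        file_metrics.get("public_func_count", 99) <= 2,
        file_metrics.get("organized_in_subdirs", False),
        file_metrics.get("shared_state_pct", 0) > 60,
        file_metrics.get("cyclomatic_complexity", 99) == 1,
    ]
    return sum(indicators) >= 3
```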
1. Update patterns_catalog.md:
- Pattern scores, dates
- Layer Boundary Status section
- Quick Wins section
- Patterns Requiring Attention section
2. Calculate trend: compare current vs previous scores
3. Output summary (see Return Result below)
{
"audit_date": "2026-02-04",
"architecture_health_score": 78,
"trend": "improving",
"patterns_analyzed": 5,
"layer_audit": {
"architecture_type": "Layered",
"violations_total": 5,
"violations_by_severity": {"high": 2, "medium": 3, "low": 0},
"coverage": {"http_abstraction": 85, "error_centralization": true}
},
"patterns": [
{
"name": "Job Processing",
"scores": {"compliance": 72, "completeness": 85, "quality": 68, "implementation": 90},
"avg_score": 79,
"status": "warning",
"issues_count": 3
}
],
"quick_wins": [
{"pattern": "Caching", "issue": "Add TTL config", "effort": "2h", "impact": "+10 completeness"}
],
"requires_attention": [
{"pattern": "Event-Driven", "avg_score": 58, "critical_issues": ["No DLQ", "No schema versioning"]}
],
"project_structure": {
"structure_score": 7.5,
"tech_stack_detected": "react",
"dimensions": {
"file_hygiene": {"checks": 6, "issues": 1},
"ignore_files": {"checks": 4, "issues": 0},
"framework_conventions": {"checks": 3, "issues": 1},
"domain_organization": {"checks": 3, "issues": 1},
"naming_conventions": {"checks": 3, "issues": 0}
},
"junk_drawers": 1,
"naming_violations_pct": 3
},
"env_config": {
"env_config_score": 8.0,
"code_vars_count": 25,
"example_vars_count": 22,
"sync_stats": {"missing_from_example": 3, "dead_in_example": 1, "default_desync": 0},
"validation_framework": "pydantic-settings"
},
"reuse_opportunities": {
"reuse_opportunity_score": 6.5,
"modules_scanned": 15,
"high_confidence_replacements": 3,
"medium_confidence_replacements": 5,
"systemic_custom_implementations": 1
},
"dependency_graph": {
"architecture_detected": "hybrid",
"architecture_confidence": "MEDIUM",
"modules_analyzed": 12,
"cycles_detected": 2,
"boundary_violations": 3,
"sdp_violations": 1,
"nccd": 1.3,
"score": 6.5
},
"cross_domain_issues": [
{
"severity": "CRITICAL",
"issue": "Systemic layer violation: HTTP client in domain layer in 3 domains",
"domains": ["users", "billing", "orders"],
"recommendation": "Address at architecture level"
}
]
}
MANDATORY READ: Load shared/references/results_log_pattern.md
Append one row to docs/project/.audit/results_log.md with: Skill=ln-640, Metric=architecture_health_score, Scale=0-100, Score from Phase 9 output. Calculate Delta vs previous ln-640 row. Create file with header if missing. Rolling window: max 50 entries.
rm -rf {output_dir}
Delete the dated output directory (docs/project/.audit/ln-640/{YYYY-MM-DD}/). The consolidated report and results log already preserve all audit data.
MANDATORY READ: Load shared/references/meta_analysis_protocol.md.
Skill type: review-coordinator (workers only). Run after all phases complete. Output to chat using the review-coordinator — workers only format.
- shared/references/task_delegation_pattern.md
- shared/references/audit_coordinator_domain_mode.md
- shared/references/audit_coordinator_aggregation.md
- shared/templates/patterns_template.md
- references/pattern_library.md
- references/layer_rules.md
- references/scoring_rules.md
- ../ln-641-pattern-analyzer/SKILL.md
- ../ln-642-layer-boundary-auditor/SKILL.md
- ../ln-643-api-contract-auditor/SKILL.md
- ../ln-644-dependency-graph-auditor/SKILL.md
- ../ln-645-open-source-replacer/SKILL.md
- ../ln-646-project-structure-auditor/SKILL.md
- ../ln-647-env-config-auditor/SKILL.md
- shared/references/research_tool_fallback.md
- shared/references/tools_config_guide.md
- shared/references/storage_mode_detection.md

Version: 2.0.0 Last Updated: 2026-02-08