Install: npx skills add https://github.com/workersio/spec --skill solana-audit
Activate this skill when the user asks to:
- Perform security analysis on code using solana_program, anchor_lang, pinocchio, #[program], or #[derive(Accounts)]

Before scanning, establish the audit scope and check for prior data.
Step 1 — Ask the user 3 configuration questions (skip if user says "just audit it" or equivalent):
Step 2 — Store configuration:
If ${CLAUDE_PLUGIN_DATA} is available:
- Read ${CLAUDE_SKILL_DIR}/references/templates/config-template.json for the schema
- Store the configuration in ${CLAUDE_PLUGIN_DATA}/config.json
- Check ${CLAUDE_PLUGIN_DATA}/audit-log.jsonl — if it exists and contains entries for the same program, inform the user: "Found prior audit from [date]. [N] findings were reported."
- Check ${CLAUDE_PLUGIN_DATA}/accepted-risks.json — if it exists, load accepted risks to cross-reference during Phase 3

If ${CLAUDE_PLUGIN_DATA} is not available, proceed without persistence. Do not error or warn — just skip silently.
Defaults (if user skips configuration): scope=all programs, depth=standard, no known constraints.
Read references/agents/explorer.md and spawn the explorer agent using the Agent tool.
How to construct the agent call:
Read the file ${CLAUDE_SKILL_DIR}/references/agents/explorer.md. The agent prompt is the text between the triple-backtick fences in the ## Agent Prompt section.
In the prompt, replace [Insert: repository path or "full codebase scan"] with the actual target path (from Phase 0 scope, or the repo root).
Spawn the agent:
Agent(subagent_type="Explore", prompt="<the filled-in prompt>")
It returns: program map, instruction list, account structures, PDA map, CPI graph, protocol type classification, LOC count, and threat model.
You MUST spawn this agent and wait for its output before proceeding. The explorer output is passed to every scanning agent as shared context.
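The prompt-construction step above (extract the text between the fences, then fill in the target) can be sketched as follows. This is an illustrative sketch only: it assumes plain triple-backtick fences with no language tag, and the helper names are made up; the placeholder string is quoted verbatim from this document.

```rust
// Sketch of building the explorer prompt from the agent prompt file.
// Assumes plain ``` fences under the "## Agent Prompt" heading.

fn extract_agent_prompt(file_contents: &str) -> Option<String> {
    // Take everything after the "## Agent Prompt" heading,
    // then the text between the first pair of triple-backtick fences.
    let section = file_contents.split("## Agent Prompt").nth(1)?;
    let body = section.split("```").nth(1)?;
    Some(body.trim().to_string())
}

fn fill_target(prompt: &str, target: &str) -> String {
    // Substitute the literal placeholder from explorer.md with the real scope.
    prompt.replace("[Insert: repository path or \"full codebase scan\"]", target)
}

fn main() {
    let file = "# Explorer\n## Agent Prompt\n```\nScan [Insert: repository path or \"full codebase scan\"] now.\n```\n";
    let prompt = extract_agent_prompt(file).unwrap();
    let filled = fill_target(&prompt, "programs/vault");
    assert_eq!(filled, "Scan programs/vault now.");
    println!("{}", filled);
}
```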
After the explorer agent returns, determine the program size and select the scan strategy:
| Size | Criteria | Scan Strategy |
|---|---|---|
| Small | <500 LOC AND <5 instructions | Single combined scan agent covering all categories (A through R). Construct one prompt inline that includes all vulnerability checks from the CHEATSHEET. No separate agent files needed. |
| Medium | 500–2000 LOC | Standard 4-agent parallel scan (Phase 2 below) |
| Large | >2000 LOC | Standard 4-agent parallel scan. Inform the user: "This is a large program. Consider running with depth: deep for adversarial cross-validation." |
For **small programs**, construct a single agent prompt that:
- Reads ${CLAUDE_SKILL_DIR}/references/scoring.md for scoring rules
- Reads ${CLAUDE_SKILL_DIR}/references/CHEATSHEET.md for all 30 vulnerability types

For **medium and large programs**, proceed to Phase 2.
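The size-selection rule can be sketched as plain code. The LOC and instruction thresholds come from the table above; the enum names and the choice to treat short-but-wide programs (under 500 LOC with 5 or more instructions) as medium are illustrative assumptions:

```rust
// Illustrative encoding of the scan-strategy table.

#[derive(Debug, PartialEq)]
enum ScanStrategy {
    SingleCombinedAgent,       // Small: one inline prompt covering categories A through R
    FourAgentParallel,         // Medium: standard Phase 2 parallel scan
    FourAgentParallelDeepHint, // Large: same scan, plus the depth:deep suggestion
}

fn select_strategy(loc: usize, instructions: usize) -> ScanStrategy {
    if loc < 500 && instructions < 5 {
        // Small requires BOTH criteria, per the table.
        ScanStrategy::SingleCombinedAgent
    } else if loc <= 2000 {
        ScanStrategy::FourAgentParallel
    } else {
        ScanStrategy::FourAgentParallelDeepHint
    }
}

fn main() {
    assert_eq!(select_strategy(300, 3), ScanStrategy::SingleCombinedAgent);
    assert_eq!(select_strategy(1200, 12), ScanStrategy::FourAgentParallel);
    assert_eq!(select_strategy(5000, 30), ScanStrategy::FourAgentParallelDeepHint);
    println!("ok");
}
```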
Before spawning the scanner agents, run the consolidated syntactic scan:
- Run the commands in ${CLAUDE_SKILL_DIR}/references/scripts/scan-commands.md

Pass the syntactic scan results to ALL scanner agents with this note prepended to their context:
Pass 1 (syntactic scan) has been completed by the orchestrator. Results below. Proceed directly to Pass 2 — Semantic Review.
[paste syntactic scan results here, organized by category]
This eliminates redundant grep work across 4 agents.
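As an illustration of the kind of output Pass 1 hands to the scanners, here is a toy syntactic pass. The two patterns below are simplified assumptions, not the real command list; the authoritative commands live in scan-commands.md:

```rust
// Toy Pass-1 syntactic scan: tag source lines with candidate category hints
// for the semantic Pass 2 to confirm or falsify.

fn syntactic_scan(source: &str) -> Vec<(usize, &'static str, String)> {
    let mut hits = Vec::new();
    for (i, line) in source.lines().enumerate() {
        let n = i + 1;
        // M-1 hint: raw lamport arithmetic without checked_* helpers.
        if line.contains("lamports")
            && (line.contains("+=") || line.contains("-="))
            && !line.contains("checked_")
        {
            hits.push((n, "M-1 candidate: unchecked lamport arithmetic", line.trim().to_string()));
        }
        // A-1 hint: an authority held as raw AccountInfo (no type-level signer guarantee).
        if line.contains("AccountInfo") && line.contains("authority") {
            hits.push((n, "A-1 candidate: authority as raw AccountInfo", line.trim().to_string()));
        }
    }
    hits
}

fn main() {
    let src = "let authority: AccountInfo = next_account_info(iter)?;\n\
               **vault.lamports.borrow_mut() -= amount;";
    let hits = syntactic_scan(src);
    assert_eq!(hits.len(), 2);
    assert_eq!(hits[0].0, 1);
    println!("{:?}", hits);
}
```

Centralizing this pass in the orchestrator, as the text says, means each scanner agent starts from the same hit list instead of re-running its own greps.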
Read references/scoring.md for the confidence scoring rules and False Positive Gate. Then read all 4 agent prompt files and spawn them IN PARALLEL using 4 simultaneous Agent tool calls.
How to construct each agent call:
1. Read the agent prompt file (e.g., ${CLAUDE_SKILL_DIR}/references/agents/auth-state-scanner.md). The prompt is everything between the triple-backtick fences in the ## Agent Prompt section.
2. Replace [INSERT EXPLORER OUTPUT HERE — the full codebase analysis from the explorer agent] with the literal full text output from the explorer agent.
3. Replace reference placeholders (e.g., [references/scoring.md]) with absolute file paths using ${CLAUDE_SKILL_DIR} as the base (e.g., ${CLAUDE_SKILL_DIR}/references/scoring.md) so the agent can Read them directly — do NOT paste file contents inline.

The 4 scanner agents:
- Auth Scanner (references/agents/auth-state-scanner.md)
- CPI Scanner (references/agents/cpi-math-scanner.md)
- Logic Scanner (references/agents/logic-economic-scanner.md)
- Framework Scanner (references/agents/framework-scanner.md)
Spawn all 4 in a single response:
Agent(prompt="<auth-state-scanner prompt with explorer output + syntactic scan results inserted>")
Agent(prompt="<cpi-math-scanner prompt with explorer output + syntactic scan results inserted>")
Agent(prompt="<logic-economic-scanner prompt with explorer output + syntactic scan results inserted>")
Agent(prompt="<framework-scanner prompt with explorer output + syntactic scan results inserted>")
Each agent returns candidates with taxonomy ID, file:line, evidence, attack path, confidence score, and FP gate result.
DEEP mode (when user requests thorough/deep audit or depth=deep in config): After the 4 scanners complete, also spawn a 5th adversarial agent per references/agents/adversarial-scanner.md. Pass it the explorer output AND the merged scanner findings for cross-validation.
If accepted-risks.json was loaded in Phase 0, cross-reference each finding by taxonomy_id + file. Mark matching findings as "Previously Accepted" with the stored reason. Still include them in the report but do not count them as new findings.

For Anchor programs, also consult references/anchor-specific.md for framework-specific gotchas.
Produce the final audit report. Every finding MUST include its taxonomy ID from references/vulnerability-taxonomy.md and its confidence score.
If there are **one or more findings**, use the standard report template:
# Security Audit Report: [Program Name]
## Executive Summary
- Audit date, scope (files, instructions, LOC)
- Framework: Native / Anchor / Pinocchio
- Protocol type: [from explorer classification]
- Methods: Parallel agent scan (4 agents + adversarial), confidence-scored validation
- Finding counts by severity: X Critical, Y High, Z Medium, W Low, V Informational
- Confidence threshold: 75/100
## Methodology
- Phase 0: Scope configuration and prior audit check
- Phase 1: Codebase exploration (program map, CPI graph, threat model)
- Phase 2: Pre-scan syntactic analysis + parallel scan — 4 agents across 30 vulnerability types across 7 categories
- Phase 3: Merge, deduplicate by root cause, accepted risk check, devil's advocate falsification
- Phase 4: Confidence-scored report
- Reference: vulnerability taxonomy based on Wormhole, Cashio, Mango, Neodyme, Crema exploits
## Findings
### [CRITICAL] VULN-001: Title (Confidence: 95/100)
**File:** path/to/file.rs:line
**Category:** A-1 (Missing Signer Check)
**Description:** ...
**Attack Path:** caller → instruction → state change → impact
**Impact:** ...
**Recommendation:** ...
**Fix:**
```rust
// Remediation code (framework-specific)
```
### [HIGH] VULN-002: Title (Confidence: 80/100)
**File:** path/to/file.rs:line
**Category:** S-7 (Reinitialization)
...
---
### Below Confidence Threshold
---
### [MEDIUM] VULN-003: Title (Confidence: 60/100)
**File:** path/to/file.rs:line
**Category:** M-2 (Division Precision Loss)
**Description:** ...
**Impact:** ...
*(No fix recommendation — below confidence threshold)*
## Summary Table
| ID | Title | Severity | Category | Confidence | File | Status |
|---|---|---|---|---|---|---|
| VULN-001 | Missing Signer Check | Critical | A-1 | 95 | lib.rs:16 | Open |
| VULN-002 | Reinitialization | High | S-7 | 80 | lib.rs:11 | Open |
| --- | Below Confidence Threshold | --- | --- | <75 | --- | --- |
| VULN-003 | Division Precision Loss | Medium | M-2 | 60 | math.rs:45 | Open |
## Appendix
- Complete file listing reviewed
- Vulnerability taxonomy reference
- Explorer output (program map, CPI graph, threat model)
If there are zero findings (clean audit), use this alternative template:
# Security Audit Report: [Program Name]
## Executive Summary
- **Result: No vulnerabilities identified**
- Audit date, scope (files, instructions, LOC)
- Framework: Native / Anchor / Pinocchio
- Protocol type: [from explorer classification]
- Methods: Parallel agent scan (4 agents), confidence-scored validation
- Finding counts: 0 Critical, 0 High, 0 Medium, 0 Low, 0 Informational
## Methodology
- Phase 0: Scope configuration and prior audit check
- Phase 1: Codebase exploration (program map, CPI graph, threat model)
- Phase 2: Pre-scan syntactic analysis + parallel scan — 4 agents across 30 vulnerability types across 7 categories
- Phase 3: Merge, deduplicate, devil's advocate falsification
- Phase 4: Confidence-scored report
## Categories Reviewed
All 7 categories (30 vulnerability types) were scanned:
| Category | IDs | Types Checked | Findings |
|----------|-----|---------------|----------|
| A: Authentication & Authorization | A-1..A-5 | 5 | 0 |
| S: Account & State Management | S-1..S-8 | 8 | 0 |
| C: Cross-Program Invocation | C-1..C-3 | 3 | 0 |
| M: Arithmetic & Math | M-1..M-4 | 4 | 0 |
| L: Logic & Economic | L-1..L-4 | 4 | 0 |
| T: Token-Specific | T-1..T-3 | 3 | 0 |
| R: Runtime & Deployment | R-1..R-3 | 3 | 0 |
## Scope
- Files reviewed: [list]
- Instructions analyzed: [count]
- Lines of code: [LOC]
## Disclaimer
A clean audit report does not guarantee the absence of vulnerabilities. This audit covers the 30 vulnerability types in the solana-audit taxonomy and is limited to static analysis of the on-chain program source code. It does not cover:
- Off-chain components (frontends, keepers, bots)
- Economic modeling or game-theoretic analysis beyond basic checks
- Deployment configuration (actual on-chain upgrade authority, program data account state)
- Vulnerabilities outside the taxonomy scope
- Bugs introduced after the audit date
## Appendix
- Complete file listing reviewed
- Vulnerability taxonomy reference
- Explorer output (program map, CPI graph, threat model)
Report rules:
- Every finding gets a **Category:** line with the taxonomy ID (e.g., A-1, S-7, C-1)
- Every finding gets a **Confidence:** score
- Fix code is framework-specific (e.g., Anchor constructs such as Signer<'info>, Account<'info, T>, close = destination)

After generating the report, persist the results if ${CLAUDE_PLUGIN_DATA} is available:
- Read ${CLAUDE_SKILL_DIR}/references/templates/audit-log-schema.md for the schema
- Append one JSONL line to ${CLAUDE_PLUGIN_DATA}/audit-log.jsonl with: timestamp, program name, path, framework, protocol type, LOC, instruction count, depth, finding counts by severity, finding IDs, and taxonomy IDs

If ${CLAUDE_PLUGIN_DATA} is not available, skip silently.
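A minimal sketch of what one appended record could look like, built with std only. The field names here are assumptions for illustration; the authoritative field list is audit-log-schema.md:

```rust
// Hypothetical shape of one audit-log.jsonl record.

fn json_str_array(xs: &[&str]) -> String {
    xs.iter().map(|x| format!("\"{}\"", x)).collect::<Vec<_>>().join(",")
}

fn audit_log_line(
    timestamp: &str, program: &str, path: &str, framework: &str,
    protocol_type: &str, loc: u32, instruction_count: u32, depth: &str,
    finding_counts: &[(&str, u32)], finding_ids: &[&str], taxonomy_ids: &[&str],
) -> String {
    let counts = finding_counts.iter()
        .map(|(sev, n)| format!("\"{}\":{}", sev, n))
        .collect::<Vec<_>>().join(",");
    format!(
        "{{\"timestamp\":\"{}\",\"program\":\"{}\",\"path\":\"{}\",\
         \"framework\":\"{}\",\"protocol_type\":\"{}\",\"loc\":{},\
         \"instruction_count\":{},\"depth\":\"{}\",\"finding_counts\":{{{}}},\
         \"finding_ids\":[{}],\"taxonomy_ids\":[{}]}}",
        timestamp, program, path, framework, protocol_type, loc,
        instruction_count, depth, counts,
        json_str_array(finding_ids), json_str_array(taxonomy_ids),
    )
}

fn main() {
    // One such line per audit run would be appended to ${CLAUDE_PLUGIN_DATA}/audit-log.jsonl.
    let line = audit_log_line(
        "2026-03-05T12:00:00Z", "my_vault", "programs/vault", "anchor", "vault",
        812, 7, "standard",
        &[("critical", 1), ("high", 1)],
        &["VULN-001", "VULN-002"], &["A-1", "S-7"],
    );
    assert!(line.contains("\"loc\":812"));
    println!("{}", line);
}
```

In practice a JSON library (e.g., serde_json) would be more robust than hand-built strings; this version only shows the record shape.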
For CRITICAL or HIGH findings involving arithmetic safety (M-1, M-2, M-3, M-4), state invariants (S-1 through S-8), or authorization logic (A-1 through A-5), suggest formal verification using /kani-proof:
Formal verification available: Finding VULN-NNN ([taxonomy_id]: [title]) in [function_name()] could be formally verified using /kani-proof to prove the fix is correct and the vulnerability cannot recur.
Example: "Finding VULN-001 (M-1: Integer Overflow) in calculate_reward() could be formally verified using /kani-proof to prove all arithmetic operations are safe under bounded inputs."
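A hedged sketch of the kind of fix-plus-harness such a suggestion points at. calculate_reward and the 10,000-basis-point cap are made-up stand-ins, not code from any audited program; the #[cfg(kani)] harness only compiles under cargo kani:

```rust
// M-1 remediation sketch: checked arithmetic returns None instead of wrapping.
pub fn calculate_reward(stake: u64, rate_bps: u64) -> Option<u64> {
    stake.checked_mul(rate_bps)?.checked_div(10_000)
}

// Proof harness, active only under `cargo kani`. kani::any() produces
// arbitrary inputs and the verifier proves the assertion for all of them.
#[cfg(kani)]
#[kani::proof]
fn reward_never_exceeds_principal() {
    let stake: u64 = kani::any();
    let rate: u64 = kani::any();
    kani::assume(rate <= 10_000); // rate capped at 100% in basis points
    if let Some(r) = calculate_reward(stake, rate) {
        assert!(r <= stake); // reward never exceeds principal
    }
}

fn main() {
    assert_eq!(calculate_reward(1_000, 500), Some(50)); // 5% of 1000
    assert_eq!(calculate_reward(u64::MAX, 2), None);    // overflow caught
    println!("ok");
}
```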
This is a lightweight recommendation only — do not block the report on it.
The references/ directory contains:
Core references:
Scan automation:
Persistence templates:
20 individual vulnerability files — Each with preconditions, vulnerable patterns, detection heuristics, false positives, and remediation
Agent prompts (references/agents/):
Protocol-specific references (references/protocols/) — loaded on-demand based on explorer classification:
- Weekly Installs: 57
- Repository: https://github.com/workersio/spec
- GitHub Stars: 9
- First Seen: Mar 5, 2026
- Category: Security Audits
- Scans: Gen Agent Trust Hub: Pass, Socket: Warn, Snyk: Pass
- Installed on: claude-code (54), opencode (53), gemini-cli (53), github-copilot (53), codex (53), amp (53)