npx skills add https://github.com/semgrep/skills --skill llm-security基于 OWASP LLM 应用十大安全风险 2025,构建安全 LLM 应用的安全规则。
主动模式 — 在构建或审查 LLM 应用时,根据应用模式自动检查相关的安全风险。您无需等待用户询问 LLM 安全问题。
响应模式 — 当用户询问 LLM 安全问题时,使用下面的映射关系查找包含详细漏洞/安全代码示例的相关规则文件。
rules/ 目录读取特定规则文件以获取代码示例使用此表快速识别哪些规则对用户的任务最为重要:
| 正在构建... | 优先级规则 |
|---|---|
| 聊天机器人 / 对话式 AI | 提示词注入 (LLM01), 系统提示词泄露 (LLM07), 输出处理 (LLM05), 无限制消耗 (LLM10) |
| RAG 系统 | 向量/嵌入弱点 (LLM08), 提示词注入 (LLM01), 敏感信息泄露 (LLM02), 错误信息 (LLM09) |
| 具备工具的 AI 智能体 | 过度代理 (LLM06), 提示词注入 (LLM01), 输出处理 (LLM05), 敏感信息泄露 (LLM02) |
| 微调 / 训练 |
广告位招租
在这里展示您的产品或服务
触达数万 AI 开发者,精准高效
| 数据投毒 (LLM04), 供应链 (LLM03), 敏感信息泄露 (LLM02) |
| LLM 驱动的 API | 无限制消耗 (LLM10), 提示词注入 (LLM01), 输出处理 (LLM05), 敏感信息泄露 (LLM02) |
| 内容生成 | 错误信息 (LLM09), 输出处理 (LLM05), 提示词注入 (LLM01) |
rules/prompt-injection.md) - 防止直接和间接的提示词操纵rules/sensitive-disclosure.md) - 保护个人身份信息、凭证和专有数据rules/supply-chain.md) - 保护模型来源、训练数据和依赖项的安全rules/data-poisoning.md) - 防止训练数据操纵和后门rules/output-handling.md) - 在下游使用前对 LLM 输出进行净化处理rules/excessive-agency.md) - 限制 LLM 的权限、功能和自主性rules/system-prompt-leakage.md) - 防止系统提示词泄露rules/vector-embedding.md) - 保护 RAG 系统和嵌入向量的安全rules/misinformation.md) - 缓解幻觉和错误输出rules/unbounded-consumption.md) - 防止拒绝服务攻击、成本攻击和模型窃取完整索引及 OWASP/MITRE 参考请参见 rules/_sections.md。
| 漏洞 | 关键预防措施 |
|---|---|
| 提示词注入 | 输入验证,输出过滤,权限分离 |
| 敏感信息泄露 | 数据净化,访问控制,加密 |
| 供应链 | 验证模型,软件物料清单,仅使用可信来源 |
| 数据投毒 | 数据验证,异常检测,沙箱隔离 |
| 输出处理 | 将 LLM 视为不可信来源,对输出进行编码,参数化查询 |
| 过度代理 | 最小权限原则,人在回路,最小化扩展功能 |
| 系统提示词泄露 | 提示词中不包含秘密信息,外部防护措施 |
| 向量/嵌入 | 访问控制,数据验证,监控 |
| 错误信息 | RAG,微调,人工监督,交叉验证 |
| 无限制消耗 | 速率限制,输入验证,资源监控 |
每周安装量
313
代码仓库
GitHub 星标数
163
首次出现
2026 年 1 月 20 日
安全审计
安装于
gemini-cli279
codex273
opencode268
github-copilot265
amp242
cursor242
Security rules for building secure LLM applications, based on the OWASP Top 10 for LLM Applications 2025.
Proactive mode — When building or reviewing LLM applications, automatically check for relevant security risks based on the application pattern. You don't need to wait for the user to ask about LLM security.
Reactive mode — When the user asks about LLM security, use the mapping below to find relevant rule files with detailed vulnerable/secure code examples.
rules/ for code examplesUse this to quickly identify which rules matter most for the user's task:
| Building... | Priority Rules |
|---|---|
| Chatbot / conversational AI | Prompt Injection (LLM01), System Prompt Leakage (LLM07), Output Handling (LLM05), Unbounded Consumption (LLM10) |
| RAG system | Vector/Embedding Weaknesses (LLM08), Prompt Injection (LLM01), Sensitive Disclosure (LLM02), Misinformation (LLM09) |
| AI agent with tools | Excessive Agency (LLM06), Prompt Injection (LLM01), Output Handling (LLM05), Sensitive Disclosure (LLM02) |
| Fine-tuning / training | Data Poisoning (LLM04), Supply Chain (LLM03), Sensitive Disclosure (LLM02) |
| LLM-powered API | Unbounded Consumption (LLM10), Prompt Injection (LLM01), Output Handling (LLM05), Sensitive Disclosure (LLM02) |
| Content generation | Misinformation (LLM09), Output Handling (LLM05), Prompt Injection (LLM01) |
rules/prompt-injection.md) - Prevent direct and indirect prompt manipulationrules/sensitive-disclosure.md) - Protect PII, credentials, and proprietary datarules/supply-chain.md) - Secure model sources, training data, and dependenciesrules/data-poisoning.md) - Prevent training data manipulation and backdoorsrules/output-handling.md) - Sanitize LLM outputs before downstream userules/excessive-agency.md) - Limit LLM permissions, functionality, and autonomyrules/system-prompt-leakage.md) - Protect system prompts from disclosurerules/vector-embedding.md) - Secure RAG systems and embeddingsrules/misinformation.md) - Mitigate hallucinations and false outputsrules/unbounded-consumption.md) - Prevent DoS, cost attacks, and model theftSee rules/_sections.md for the full index with OWASP/MITRE references.
| Vulnerability | Key Prevention |
|---|---|
| Prompt Injection | Input validation, output filtering, privilege separation |
| Sensitive Disclosure | Data sanitization, access controls, encryption |
| Supply Chain | Verify models, SBOM, trusted sources only |
| Data Poisoning | Data validation, anomaly detection, sandboxing |
| Output Handling | Treat LLM as untrusted, encode outputs, parameterize queries |
| Excessive Agency | Least privilege, human-in-the-loop, minimize extensions |
| System Prompt Leakage | No secrets in prompts, external guardrails |
| Vector/Embedding | Access controls, data validation, monitoring |
| Misinformation | RAG, fine-tuning, human oversight, cross-verification |
| Unbounded Consumption | Rate limiting, input validation, resource monitoring |
Weekly Installs
313
Repository
GitHub Stars
163
First Seen
Jan 20, 2026
Security Audits
Gen Agent Trust HubPassSocketPassSnykWarn
Installed on
gemini-cli279
codex273
opencode268
github-copilot265
amp242
cursor242
AI Elements:基于shadcn/ui的AI原生应用组件库,快速构建对话界面
54,900 周安装