nist-ai-rmf by mastepanoski/claude-skills
`npx skills add https://github.com/mastepanoski/claude-skills --skill nist-ai-rmf`
This skill enables AI agents to perform a comprehensive AI risk assessment using the NIST AI Risk Management Framework (AI RMF 1.0), published in January 2023 by the National Institute of Standards and Technology.
The AI RMF is a voluntary, technology- and sector-agnostic framework designed to help organizations manage risks associated with AI systems throughout their lifecycle. It promotes trustworthy AI development by addressing risks that affect individuals, organizations, and society.
Use this skill to identify, assess, and manage AI risks; establish governance structures; ensure trustworthy AI characteristics; and align with international AI risk management best practices.
Combine with "ISO 42001 AI Governance" for comprehensive compliance coverage or "OWASP LLM Top 10" for security-focused assessment.
Invoke this skill when:
When executing this assessment, gather:
The AI RMF identifies seven characteristics of trustworthy AI that serve as evaluation criteria across all functions:
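These seven characteristics appear as the rows of the Trustworthiness Summary in the report template, each rated 1-5. As a sketch, an agent could collect and validate those ratings like this (`trustworthiness_rows` and its defaulting behavior are illustrative, not part of the skill):

```python
# The seven trustworthy-AI characteristics defined by NIST AI RMF 1.0,
# scored on the 1-5 scale described later in this document.
CHARACTERISTICS = (
    "Valid & Reliable",
    "Safe",
    "Secure & Resilient",
    "Accountable & Transparent",
    "Explainable & Interpretable",
    "Privacy-Enhanced",
    "Fair (Bias Managed)",
)

def trustworthiness_rows(scores: dict) -> list:
    """Return (characteristic, score) rows; unscored items default to 1."""
    rows = []
    for name in CHARACTERISTICS:
        score = scores.get(name, 1)  # 1 = "Not addressed"
        if not 1 <= score <= 5:
            raise ValueError(f"{name}: score must be 1-5, got {score}")
        rows.append((name, score))
    return rows
```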
The AI RMF Core is composed of four functions, each broken into categories and subcategories:
**GOVERN** establishes organizational policies, processes, and accountability for AI risk management. GOVERN is cross-cutting and applies across all other functions. Categories:
- **GOVERN 1**: Policies, processes, procedures, and practices across the organization related to the mapping, measuring, and managing of AI risks are in place, transparent, and implemented effectively.
- **GOVERN 2**: Accountability structures ensure appropriate teams and individuals are empowered, responsible, and trained for AI risk management.
- **GOVERN 3**: Workforce diversity, equity, inclusion, and accessibility processes are prioritized in AI risk management.
- **GOVERN 4**: Organizational teams are committed to a culture that considers and communicates AI risk.
- **GOVERN 5**: Processes are in place for robust engagement with relevant AI actors.
- **GOVERN 6**: Policies and procedures address AI risks from third-party software, data, and supply chain.
**MAP** identifies and contextualizes AI system risks within the operational environment. Categories:
- **MAP 1**: Context is established and understood.
- **MAP 2**: Categorization of the AI system is performed.
- **MAP 3**: AI capabilities, targeted usage, goals, expected benefits, and costs are understood.
- **MAP 4**: Risks and benefits are mapped for all components, including third-party.
- **MAP 5**: Impacts to individuals, groups, communities, organizations, and society are characterized.
**MEASURE** employs tools, techniques, and methodologies to assess, benchmark, and monitor AI risk. Categories:
- **MEASURE 1**: Appropriate methods and metrics are identified and applied.
- **MEASURE 2**: AI systems are evaluated for trustworthy characteristics.
- **MEASURE 3**: Mechanisms for tracking identified AI risks over time are in place.
- **MEASURE 4**: Feedback about the efficacy of measurement is gathered and assessed.
**MANAGE** allocates resources to mapped and measured risks on a regular basis. Categories:
- **MANAGE 1**: AI risks based on assessments are prioritized, responded to, and managed.
- **MANAGE 2**: Strategies to maximize AI benefits and minimize negative impacts are planned and documented.
- **MANAGE 3**: AI risks and benefits from third-party entities are managed.
- **MANAGE 4**: Risk treatments and communication plans are documented and monitored.
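For driving an assessment programmatically, the four functions and their categories above can be captured in a plain data structure. A minimal sketch (category text abbreviated, subcategories omitted; the structure is a convenience, not something the framework prescribes):

```python
# AI RMF Core: four functions, each with the categories summarized above.
AI_RMF_CORE = {
    "GOVERN": [
        "Policies and processes",
        "Accountability structures",
        "Workforce diversity, equity, inclusion, accessibility",
        "Risk-aware organizational culture",
        "Engagement with relevant AI actors",
        "Third-party and supply-chain risk",
    ],
    "MAP": [
        "Context established and understood",
        "AI system categorization",
        "Capabilities, usage, goals, benefits, and costs",
        "Risks and benefits mapped for all components",
        "Impacts characterized",
    ],
    "MEASURE": [
        "Methods and metrics identified and applied",
        "Trustworthy characteristics evaluated",
        "Risk-tracking mechanisms",
        "Measurement-efficacy feedback",
    ],
    "MANAGE": [
        "Risks prioritized, responded to, and managed",
        "Benefit-maximizing strategies documented",
        "Third-party risks and benefits managed",
        "Treatments and communication plans monitored",
    ],
}
```

Iterating this structure yields one assessment section per category, matching the report template's GOVERN/MAP/MEASURE/MANAGE sections.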
Follow these steps systematically:
1. **Review AI system**: `ai_system_description` and `system_lifecycle_stage`
2. **Understand context**: `organization_context` and the regulatory environment
3. **Define scope**
4. **Evaluate organizational governance** (GOVERN)
5. **Evaluate risk identification and context** (MAP)
6. **Evaluate risk measurement and monitoring** (MEASURE)
7. **Evaluate risk response and treatment** (MANAGE)
8. **Compile assessment findings** with ratings and recommendations.
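The step sequence above can be sketched as a small driver that checks each step's required inputs before proceeding (names such as `regulatory_environment` and `run_assessment` are illustrative, not part of the skill's interface):

```python
# Hypothetical driver for the assessment steps; the input names mirror
# the parameters this skill gathers before an assessment.
ASSESSMENT_STEPS = [
    ("Review AI system", ["ai_system_description", "system_lifecycle_stage"]),
    ("Understand context", ["organization_context", "regulatory_environment"]),
    ("Define scope", []),
    ("Evaluate organizational governance", []),        # GOVERN
    ("Evaluate risk identification and context", []),  # MAP
    ("Evaluate risk measurement and monitoring", []),  # MEASURE
    ("Evaluate risk response and treatment", []),      # MANAGE
    ("Compile assessment findings", []),
]

def run_assessment(inputs: dict) -> list:
    """Walk the steps in order, reporting any missing required inputs."""
    log = []
    for step, required in ASSESSMENT_STEPS:
        missing = [k for k in required if not inputs.get(k)]
        status = "OK" if not missing else "missing: " + ", ".join(missing)
        log.append(f"{step}: {status}")
    return log
```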
Generate a comprehensive NIST AI RMF assessment report:
# NIST AI RMF Assessment Report
**AI System**: [Name/Description]
**Organization**: [Name]
**Date**: [Date]
**Lifecycle Stage**: [Design/Development/Deployment/Monitoring]
**Evaluator**: [AI Agent or Human]
**AI RMF Version**: 1.0 (January 2023)
---
## Executive Summary
### Overall Risk Profile: [Low / Medium / High / Critical]
**System Type**: [Classifier / Generative / Recommender / Autonomous / Other]
**Deployment Context**: [Internal / Customer-facing / Public / Critical infrastructure]
**Regulatory Applicability**: [EU AI Act risk level, state laws, sector regulations]
### Key Findings
- **Total Issues**: [X]
- Critical: [X] (immediate action required)
- High: [X] (action required within 30 days)
- Medium: [X] (action required within 90 days)
- Low: [X] (improvements recommended)
### Trustworthiness Summary
| Characteristic | Status | Rating |
|---|---|---|
| Valid & Reliable | [Status] | [1-5] |
| Safe | [Status] | [1-5] |
| Secure & Resilient | [Status] | [1-5] |
| Accountable & Transparent | [Status] | [1-5] |
| Explainable & Interpretable | [Status] | [1-5] |
| Privacy-Enhanced | [Status] | [1-5] |
| Fair (Bias Managed) | [Status] | [1-5] |
---
## GOVERN Function Assessment
### GOVERN 1: Policies and Processes
**Rating**: [Not Implemented / Partial / Substantial / Full]
**Findings:**
- [Finding 1 with evidence]
- [Finding 2 with evidence]
**Gaps:**
- [ ] [Gap description]
**Recommendations:**
- [Recommendation with priority]
### GOVERN 2: Accountability Structures
**Rating**: [Not Implemented / Partial / Substantial / Full]
[Continue for all GOVERN categories...]
---
## MAP Function Assessment
### MAP 1: Context Established
**Rating**: [Not Implemented / Partial / Substantial / Full]
**Findings:**
- [Findings with evidence]
**Gaps:**
- [ ] [Gap description]
**Recommendations:**
- [Recommendation with priority]
[Continue for all MAP categories...]
---
## MEASURE Function Assessment
### MEASURE 1: Methods and Metrics
**Rating**: [Not Implemented / Partial / Substantial / Full]
**Findings:**
- [Findings with evidence]
**Gaps:**
- [ ] [Gap description]
**Recommendations:**
- [Recommendation with priority]
[Continue for all MEASURE categories...]
---
## MANAGE Function Assessment
### MANAGE 1: Risk Prioritization
**Rating**: [Not Implemented / Partial / Substantial / Full]
**Findings:**
- [Findings with evidence]
**Gaps:**
- [ ] [Gap description]
**Recommendations:**
- [Recommendation with priority]
[Continue for all MANAGE categories...]
---
## Risk Register
| ID | Risk Description | Function | Likelihood | Impact | Priority | Mitigation |
|---|---|---|---|---|---|---|
| R1 | [Description] | [G/M/ME/MA] | [L/M/H] | [L/M/H] | [P0-P3] | [Strategy] |
| R2 | [Description] | [G/M/ME/MA] | [L/M/H] | [L/M/H] | [P0-P3] | [Strategy] |
---
## Remediation Roadmap
### Phase 1: Critical (0-30 days)
1. [Action item with owner and deadline]
2. [Action item with owner and deadline]
### Phase 2: High Priority (30-90 days)
1. [Action item with owner and deadline]
### Phase 3: Medium Priority (90-180 days)
1. [Action item with owner and deadline]
### Phase 4: Continuous Improvement
1. [Ongoing practices]
---
## Compliance Alignment
### Regulatory Mapping
| Regulation | Relevant AI RMF Functions | Status |
|---|---|---|
| EU AI Act | GOVERN, MAP, MEASURE | [Status] |
| NIST CSF 2.0 | GOVERN, MANAGE | [Status] |
| State AI Laws | GOVERN, MAP | [Status] |
| Sector Regulations | [Relevant functions] | [Status] |
---
## Next Steps
### Immediate Actions
1. [ ] Address critical findings
2. [ ] Assign risk owners
3. [ ] Establish monitoring cadence
### Short-term (1-3 months)
1. [ ] Implement Phase 1 remediation
2. [ ] Establish governance structure
3. [ ] Train personnel on AI RMF
### Long-term (3-12 months)
1. [ ] Complete all remediation phases
2. [ ] Conduct follow-up assessment
3. [ ] Integrate into organizational risk management
---
## Resources
- [NIST AI RMF 1.0](https://www.nist.gov/itl/ai-risk-management-framework)
- [NIST AI RMF Playbook](https://www.nist.gov/itl/ai-risk-management-framework/nist-ai-rmf-playbook)
- [NIST AI RMF Generative AI Profile](https://airc.nist.gov/Docs/1)
- [NIST Trustworthy AI Resource Center](https://airc.nist.gov/)
---
**Assessment Version**: 1.0
**Date**: [Date]
Use this scale for subcategory ratings:
| Rating | Description |
|---|---|
| Not Implemented | No evidence of activity or documentation |
| Partial | Some activity but inconsistent or incomplete |
| Substantial | Mostly implemented with minor gaps |
| Full | Fully implemented and regularly maintained |
Use this scale for trustworthiness characteristics:
| Score | Description |
|---|---|
| 1 | Not addressed |
| 2 | Minimally addressed |
| 3 | Partially addressed |
| 4 | Substantially addressed |
| 5 | Fully addressed and monitored |
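The framework does not prescribe a formula for rolling the seven 1-5 characteristic scores up into the Executive Summary's overall risk profile. As an illustration only, one could average the scores and bucket the result; both the averaging and the thresholds below are assumptions:

```python
# Illustrative only: NIST AI RMF defines no aggregation formula.
# Higher trustworthiness scores imply lower residual risk, so the
# mapping is inverted: a high average yields a "Low" risk profile.
def overall_risk_profile(scores: list) -> str:
    """Bucket the mean of 1-5 characteristic scores into a risk profile."""
    if not scores or any(not 1 <= s <= 5 for s in scores):
        raise ValueError("scores must be a non-empty list of values 1-5")
    avg = sum(scores) / len(scores)
    if avg >= 4.5:
        return "Low"
    if avg >= 3.5:
        return "Medium"
    if avg >= 2.5:
        return "High"
    return "Critical"
```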
For generative AI systems, additionally evaluate (per NIST AI 600-1 GenAI Profile, July 2024):
**Version history**: 1.0 - Initial release (NIST AI RMF 1.0 compliant)
Remember: The NIST AI RMF is voluntary and risk-based. Not all subcategories apply to every system. Tailor the assessment depth to the system's risk profile and organizational context.
**Weekly Installs**: 49
**Repository**: https://github.com/mastepanoski/claude-skills
**GitHub Stars**: 14
**First Seen**: Feb 5, 2026
**Security Audits**: Gen Agent Trust Hub: Pass · Socket: Pass · Snyk: Pass
**Installed on**: github-copilot (47), opencode (47), gemini-cli (46), codex (46), kimi-cli (44), amp (44)