npx skills add https://github.com/daffy0208/ai-dev-standards --skill 'PRP Generator'
The PRP Generator helps you create comprehensive Product Requirements Prompts - structured documents that capture everything needed to build a product feature or system. PRPs are optimized for AI-assisted development, providing clear requirements that both humans and AI can understand.
Core Purpose: Transform vague ideas into actionable, complete requirements.
Use PRP Generator when:
A complete PRP contains these 12 sections:

1. Project Overview
2. Problem Statement
3. Success Criteria
4. User Stories
5. Functional Requirements
6. Non-Functional Requirements
7. Technical Constraints
8. Data Requirements
9. User Experience
10. Risks, Assumptions & Dependencies
11. Out of Scope
12. Open Questions

Let's dive into each:
## 1. Project Overview

Purpose: High-level context and pattern classification
What to Include:
Example:
Project: Customer Support Chatbot
Pattern: C (AI-Native System)
Timeline: 10-12 weeks
Users: Customer support agents + end customers
Context: Reduce support ticket volume by 40% while maintaining customer satisfaction
## 2. Problem Statement

Purpose: Clearly define the problem being solved
Template:
[User type] faces [problem] when [situation].
This causes [negative outcome].
We know this because [evidence].
Example:
Customer support agents face long response times when customers ask common questions about billing, account setup, and feature usage. This causes customer frustration and agent burnout handling repetitive inquiries.
We know this because:
- 60% of tickets are "How do I..." questions
- Average response time is 4 hours
- Agent surveys show 70% of time spent on repetitive questions
- NPS dropped from 45 to 38 in past 6 months
## 3. Success Criteria

Purpose: Define measurable outcomes
Structure:
Example:
Primary Metric:
- Reduce support ticket volume by 40% within 3 months of launch
Secondary Metrics:
- 80% of common questions answered by AI without escalation
- <2 second response time for AI answers
- >4.0/5.0 user satisfaction rating with AI responses
- 50% reduction in agent time spent on common questions
Minimum Success:
- 30% ticket reduction + 4.0/5.0 satisfaction
## 4. User Stories

Purpose: Capture user needs in job-to-be-done format
Template:
When [situation], I want to [action], so I can [outcome].
Example:
Customer Stories:
1. When I have a billing question, I want instant answers, so I can resolve issues without waiting.
2. When I'm setting up my account, I want step-by-step guidance, so I don't get stuck.
3. When I need to reset my password, I want a simple self-service flow, so I don't need to contact support.
Agent Stories:
1. When a complex issue arrives, I want context from the AI conversation, so I can help efficiently.
2. When training new agents, I want the AI to handle basics, so I can focus on teaching advanced topics.
3. When customers escalate, I want conversation history, so I don't ask redundant questions.
## 5. Functional Requirements

Purpose: What the system must do
Categories:
Example:
P0 (Core - MVP):
- FR-001: System answers common questions from knowledge base
- FR-002: System escalates to human when confidence is low (<70%)
- FR-003: Agents can see full conversation history
- FR-004: System tracks conversation satisfaction ratings
P1 (Important - Post-MVP):
- FR-005: System learns from agent corrections
- FR-006: System handles multi-turn conversations with context
- FR-007: Agents can override AI suggestions
P2 (Nice-to-have - Future):
- FR-008: System proactively suggests help articles
- FR-009: System detects frustrated customers
- FR-010: Multi-language support
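As an illustration, FR-002's escalation rule can be prototyped in a few lines. This is a minimal sketch, not the skill's implementation; the `route_reply` helper and the reply shape are assumptions, and only the 0.70 cutoff comes from the requirement:

```python
# Sketch of FR-002: escalate to a human agent when AI confidence is low.
from dataclasses import dataclass

CONFIDENCE_THRESHOLD = 0.70  # the "<70%" cutoff from FR-002


@dataclass
class BotReply:
    text: str
    confidence: float
    escalated: bool


def route_reply(text: str, confidence: float) -> BotReply:
    """Return the AI answer, or flag the conversation for a human agent."""
    if confidence < CONFIDENCE_THRESHOLD:
        # Hand off instead of risking a low-confidence (possibly wrong) answer.
        return BotReply(
            text="Let me connect you with a support agent.",
            confidence=confidence,
            escalated=True,
        )
    return BotReply(text=text, confidence=confidence, escalated=False)
```

Writing the threshold as a named constant makes the FR-002 acceptance test a one-line assertion.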
## 6. Non-Functional Requirements

Purpose: How the system should perform
Categories:
Example:
Performance:
- NFR-001: Response time <2 seconds for 95th percentile
- NFR-002: Handle 100 concurrent conversations
- NFR-003: Knowledge base search <500ms
Security:
- NFR-004: Customer data encrypted at rest and in transit
- NFR-005: SOC2 Type II compliance
- NFR-006: Role-based access control (RBAC)
- NFR-007: Audit logs for all AI responses
Scalability:
- NFR-008: Support 10,000 conversations/day at launch
- NFR-009: Scale to 100,000 conversations/day within 6 months
Reliability:
- NFR-010: 99.9% uptime SLA
- NFR-011: Graceful degradation if AI service unavailable
Usability:
- NFR-012: Agents can use with <10 minutes training
- NFR-013: WCAG 2.1 AA accessibility compliance
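NFR-011's graceful degradation can start as nothing more than a fallback path around the AI call. In this sketch `call_llm` is a hypothetical stand-in for the real client (here it simulates an outage); the canned message is an illustrative choice:

```python
# Sketch of NFR-011: degrade gracefully when the AI service is unavailable.
def call_llm(question: str) -> str:
    # Stand-in for the real LLM client; simulates an outage for the sketch.
    raise ConnectionError("AI service unavailable")


FALLBACK = "Our assistant is temporarily unavailable; an agent will reply shortly."


def answer_with_fallback(question: str) -> str:
    """Return the AI answer, or a canned handoff message on outage."""
    try:
        return call_llm(question)
    except ConnectionError:
        return FALLBACK
```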
## 7. Technical Constraints

Purpose: Technology limitations and requirements
What to Include:
Example:
Integrations:
- Must integrate with existing Zendesk system
- Must use company SSO (Okta)
- Must log to existing Datadog monitoring
Technology Stack:
- Backend: Python (existing team expertise)
- LLM: OpenAI GPT-4 (approved vendor)
- Vector DB: Pinecone or Weaviate (to be decided)
- Frontend: React (existing stack)
Infrastructure:
- Deploy on existing AWS infrastructure
- Use existing CI/CD pipelines (GitHub Actions)
Budget:
- OpenAI API budget: $5,000/month maximum
- Infrastructure: $2,000/month maximum
Timeline:
- MVP must launch within 10 weeks
- Full feature set within 16 weeks
## 8. Data Requirements

Purpose: What data is needed and how it's managed
Structure:
Example:
Data Sources:
- Knowledge base articles (500+ articles in Notion)
- Historical support tickets (Zendesk, 2 years)
- Product documentation (GitHub docs)
- FAQ pages (company website)
Data Models:
- Conversations: id, customer_id, agent_id, messages[], status, satisfaction_rating
- Messages: id, sender, text, timestamp, ai_confidence
- Knowledge: id, title, content, embeddings, category, last_updated
Data Privacy:
- PII must be redacted before AI processing
- Conversation data retained for 90 days
- Analytics data aggregated and anonymized
- GDPR right-to-delete compliance
Data Security:
- Encrypt customer data at rest (AES-256)
- Encrypt in transit (TLS 1.3)
- Role-based access to conversation data
## 9. User Experience

Purpose: How users interact with the system
What to Include:
Example:
Customer Interface:
- Chat widget in bottom-right corner
- Typing indicators and response time estimates
- Clear "Talk to a human" button always visible
- Conversation history accessible for 30 days
Agent Interface:
- Side panel showing AI suggestions
- One-click "Accept AI answer" button
- Edit AI answer before sending
- Flag incorrect responses for retraining
- Dashboard showing AI performance metrics
User Flows:
1. Customer asks question → AI retrieves answer → Displays with confidence
2. Low confidence → Auto-escalate to agent → Agent sees full context
3. Agent corrects AI → System logs for improvement
Design Constraints:
- Match existing brand colors
- Mobile-responsive design
- WCAG 2.1 AA compliant
- Support keyboard navigation
## 10. Risks, Assumptions & Dependencies

Purpose: Identify potential blockers and dependencies
Structure:
Example:
Risks:
1. AI hallucination risk
- Mitigation: Confidence thresholds, human review for low confidence
2. Knowledge base quality risk
- Mitigation: Content audit before launch, SME review of top 100 articles
3. User adoption risk (agents don't trust AI)
- Mitigation: Gradual rollout, agent training, show accuracy metrics
4. API cost overruns
- Mitigation: Aggressive caching, token limits, usage monitoring
Assumptions:
1. Customers will accept AI responses (validate with beta test)
2. Knowledge base is accurate and up-to-date (audit required)
3. 80% of questions can be answered with existing knowledge
4. OpenAI API latency is acceptable (<2s)
Dependencies:
1. Access to Zendesk API (need approval from IT)
2. Knowledge base export from Notion
3. OpenAI API quota increase (currently limited)
4. Agent availability for training and feedback
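The mitigations for risk 4 (aggressive caching plus token limits) can start very small. In this sketch the whitespace/case normalization and the `MAX_COMPLETION_TOKENS` cap are illustrative choices, and the LLM call is stubbed:

```python
# Sketch of the API-cost mitigations: cache repeated questions, cap tokens.
from functools import lru_cache

MAX_COMPLETION_TOKENS = 300  # hypothetical per-request cap


def _normalize(question: str) -> str:
    """Collapse case and whitespace so near-identical questions share a cache key."""
    return " ".join(question.lower().split())


@lru_cache(maxsize=10_000)
def cached_answer(normalized_question: str) -> str:
    # The real system would call the LLM here with
    # max_tokens=MAX_COMPLETION_TOKENS; stubbed for the sketch.
    return f"stub answer for: {normalized_question}"


def answer(question: str) -> str:
    return cached_answer(_normalize(question))
```

With 60% of tickets being repeated "How do I..." questions (per the problem statement), even this naive cache removes a large share of API calls.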
## 11. Out of Scope

Purpose: Explicitly state what WON'T be built
Why Important: Prevents scope creep and sets expectations
Example:
Out of Scope for MVP:
- Voice/phone support integration (post-MVP)
- Multi-language support (Phase 2)
- Integration with CRM system (future)
- Custom AI model training (using OpenAI)
- Mobile app (web only initially)
- Proactive outreach (reactive support only)
- Sentiment analysis dashboard (future analytics)
- Agent performance scoring (v2 feature)
Explicitly NOT Building:
- Custom LLM training (too expensive)
- Real-time translation (complexity vs. value)
- Video call integration (different project)
## 12. Open Questions

Purpose: Document unknowns that need answers
Structure:
Example:
Open Questions:
Q1: What's acceptable AI error rate for customers?
- Who: Product Manager + Customer Success lead
- Deadline: Week 2 (before architecture finalized)
- Impact: Determines confidence thresholds and escalation flow
Q2: Can we access historical conversation sentiment data?
- Who: Data team
- Deadline: Week 3 (before training data collection)
- Impact: Improves AI tone matching
Q3: What's our Zendesk API rate limit?
- Who: IT/Infrastructure
- Deadline: Week 1 (critical for architecture)
- Impact: May need caching strategy
Q4: Do we have budget for Pinecone or need open-source vector DB?
- Who: Engineering Manager
- Deadline: Week 2 (affects tech stack)
- Impact: Pinecone is easier, open-source is cheaper but more work
Q5: Are agents allowed to edit AI-generated responses?
- Who: Legal/Compliance
- Deadline: Week 4 (before feature development)
- Impact: Affects agent interface design
## Example PRPs

Project: Add CSV export to user dashboard
# PRP: CSV Export Feature
## 1. Project Overview
Pattern A (Simple Feature)
Timeline: 2-3 days
Add CSV export button to user dashboard
## 2. Problem Statement
Users need to export data for offline analysis. Currently they must manually copy-paste.
## 3. Success Criteria
- 80% of users who click export get successful download
- <5s export time for typical dataset (1000 rows)
## 4. User Stories
When viewing my data, I want to click "Export CSV", so I can analyze it in Excel.
## 5. Functional Requirements
FR-001: Export button in dashboard toolbar
FR-002: Exports all visible columns
FR-003: Respects current filters
FR-004: Filename includes timestamp
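FR-001 through FR-004 are small enough to prototype directly. A sketch, assuming rows arrive as dicts and a filter as a predicate (both assumptions, not part of the PRP):

```python
# Sketch of FR-002..FR-004: export visible, filtered rows to a timestamped CSV.
import csv
import io
from datetime import datetime, timezone


def export_csv(rows, visible_columns, row_filter=None) -> tuple[str, str]:
    """Return (filename, csv_text) for the rows that pass the current filter."""
    if row_filter is not None:
        rows = [r for r in rows if row_filter(r)]           # FR-003: respect filters
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=visible_columns,
                            extrasaction="ignore")          # FR-002: visible columns only
    writer.writeheader()
    writer.writerows(rows)
    stamp = datetime.now(timezone.utc).strftime("%Y%m%d-%H%M%S")
    return f"export-{stamp}.csv", buf.getvalue()            # FR-004: timestamped name
```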
## 6-12. [Abbreviated for Pattern A]
See the customer support chatbot example used throughout the sections above.
Project: Multi-Agent Research Assistant
# PRP: Multi-Agent Research Assistant
## 1. Project Overview
Pattern C (AI-Native System)
Timeline: 14-16 weeks
Multi-agent system with specialized agents for research, synthesis, and fact-checking
## 2. Problem Statement
Researchers spend 60% of their time finding and synthesizing papers instead of analysis. Current tools return overwhelming results without quality filtering.
## 3. Success Criteria
Primary: Reduce research time from 8 hours to 2 hours per topic
Secondary:
- 90% accuracy in paper relevance
- 85% user satisfaction
- <30s for initial results
- 5+ papers synthesized per query
## 4. User Stories
- When I enter a research topic, I want a synthesized summary with sources, so I can quickly understand the landscape
- When results are vague, I want follow-up questions suggested, so I can refine my search
- When I find a relevant paper, I want related papers suggested, so I can explore deeper
## 5. Functional Requirements (AI-specific)
FR-001: Search agent queries multiple academic databases
FR-002: Filter agent scores paper relevance (0-100)
FR-003: Synthesis agent creates 500-word summary
FR-004: Fact-check agent validates key claims
FR-005: Orchestrator coordinates agents and resolves conflicts
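The orchestrator in FR-005 can be sketched as a plain pipeline before committing to LangChain or CrewAI. Every agent below is a stub, and the relevance cutoff is an assumed value (FR-002 specifies only a 0-100 score):

```python
# Sketch of FR-001..FR-005 as a search -> filter -> synthesis pipeline.
RELEVANCE_CUTOFF = 70  # assumed: keep papers scoring 70+/100 (FR-002)


def search_agent(topic: str) -> list[dict]:
    # Would query Semantic Scholar / arXiv (FR-001); stubbed here.
    return [{"title": f"{topic} survey", "score": 0},
            {"title": f"Intro to {topic}", "score": 0}]


def filter_agent(papers: list[dict]) -> list[dict]:
    # Would score relevance with the filtering model (FR-002); stub scorer.
    for p in papers:
        p["score"] = 90 if "survey" in p["title"] else 40
    return [p for p in papers if p["score"] >= RELEVANCE_CUTOFF]


def synthesis_agent(papers: list[dict]) -> str:
    # Would produce the 500-word summary (FR-003); stub.
    titles = "; ".join(p["title"] for p in papers)
    return f"Summary of {len(papers)} papers: {titles}"


def orchestrate(topic: str) -> str:
    """FR-005: coordinate the agents in sequence."""
    return synthesis_agent(filter_agent(search_agent(topic)))
```

Starting from a plain-function pipeline like this keeps the agent boundaries testable before any framework is introduced.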
## 6. Non-Functional Requirements (AI-specific)
NFR-001: Agent coordination latency <5s
NFR-002: RAG retrieval accuracy >90%
NFR-003: Handle 50 concurrent research sessions
NFR-004: LLM token budget: $10/research session maximum
## 7. Technical Constraints (AI-specific)
- LLM: GPT-4 for synthesis, GPT-3.5 for filtering
- Vector DB: Pinecone (1M vectors)
- Agent framework: LangChain or CrewAI
- Academic APIs: Semantic Scholar, arXiv
## 8. Data Requirements (AI-specific)
- Paper embeddings: title + abstract + keywords
- Citation network graph for related papers
- User research history for personalization
- Fact-check database with verified claims
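The embedding input named in the first bullet (title + abstract + keywords) could be assembled as follows; the dict field names are assumptions:

```python
# Sketch: build the "title + abstract + keywords" embedding input for a paper.
def embedding_text(paper: dict) -> str:
    """Concatenate title, abstract, and keywords into one embedding input."""
    parts = [paper.get("title", ""), paper.get("abstract", "")]
    parts += paper.get("keywords", [])
    return " ".join(p for p in parts if p)  # skip missing fields
```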
## 9-12. [Complete as per structure]
## Best Practices

- Write the problem statement before the functional requirements. Understand "why" before "what".
- Make success criteria measurable. Bad: "Improve user experience". Good: "Increase task completion rate from 60% to 85%".
- User stories capture intent, not just feature requests.
- An explicit out-of-scope section prevents future arguments about "I thought we were building X".
- Don't hide unknowns. Make them visible and track their resolution.
- PRPs evolve. Update them when you discover new information.

Validate the PRP with:
## Antipatterns

- Antipattern: "Use React with Redux for state management". Better: "System must handle real-time updates across multiple views".
- Antipattern: "Users should be happy". Better: "NPS score >50, task completion rate >80%".
- Antipattern: only listing features. Better: include performance, security, and scalability requirements.
- Antipattern: "Obviously we need authentication". Better: state all requirements explicitly.
- Antipattern: a Pattern A feature with Pattern C orchestration. Better: match PRP depth to pattern complexity.
A complete PRP document containing:
You've created a good PRP when:
Remember: A good PRP is the difference between organized development and chaotic scope creep. Invest time upfront to save weeks later.
Weekly Installs: 0 · GitHub Stars: 18 · First Seen: Jan 1, 1970