deep-research by 199-biotechnologies/claude-deep-research-skill
npx skills add https://github.com/199-biotechnologies/claude-deep-research-skill --skill deep-research
Purpose: Deliver citation-backed, verified research reports through an 8-phase pipeline (Scope → Plan → Retrieve → Triangulate → Synthesize → Critique → Refine → Package), with source credibility scoring and progressive context management.
Context Strategy: This skill uses 2025 context engineering best practices:
Request Analysis
├─ Simple lookup? → STOP: Use WebSearch, not this skill
├─ Debugging? → STOP: Use standard tools, not this skill
└─ Complex analysis needed? → CONTINUE
Mode Selection
├─ Initial exploration? → quick (3 phases, 2-5 min)
├─ Standard research? → standard (6 phases, 5-10 min) [DEFAULT]
├─ Critical decision? → deep (8 phases, 10-20 min)
└─ Comprehensive review? → ultradeep (8+ phases, 20-45 min)
Execution Loop (per phase)
├─ Load phase instructions from [methodology](./reference/methodology.md#phase-N)
├─ Execute phase tasks
├─ Spawn parallel agents if applicable
└─ Update progress
Validation Gate
├─ Run `python scripts/validate_report.py --report [path]`
├─ Pass? → Deliver
└─ Fail? → Fix (max 2 attempts) → Still fails? → Escalate
AUTONOMY PRINCIPLE: This skill operates independently. Infer assumptions from query context. Only stop for critical errors or incomprehensible queries.
DEFAULT: Proceed autonomously. Derive assumptions from query signals.
ONLY ask if CRITICALLY ambiguous:
When in doubt: PROCEED with standard mode. User will redirect if incorrect.
Default assumptions:
Mode selection criteria:
Announce plan and execute:
All modes execute:
Standard/Deep/UltraDeep execute:
Deep/UltraDeep execute:
Critical: Avoid "Lost in the Middle"
Progressive Context Loading:
Anti-Hallucination Protocol (CRITICAL):
Parallel Execution Requirements (CRITICAL for Speed):
Phase 3 RETRIEVE - Mandatory Parallel Search:
Example correct execution:
[Single message with 8+ parallel tool calls]
WebSearch #1: Core topic semantic
WebSearch #2: Technical keywords
WebSearch #3: Recent 2024-2025 filtered
WebSearch #4: Academic domains
WebSearch #5: Critical analysis
WebSearch #6: Industry trends
Task agent #1: Academic paper analysis
Task agent #2: Technical documentation deep dive
❌ WRONG (sequential execution):
WebSearch #1 → wait for results → WebSearch #2 → wait → WebSearch #3...
✅ RIGHT (parallel execution):
All searches + agents launched simultaneously in one message
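Conceptually, the contrast above can be sketched in Python with asyncio; the `web_search` stub here is a stand-in for real WebSearch tool calls, not an actual API:

```python
import asyncio

async def web_search(query: str) -> str:
    """Stand-in for a WebSearch tool call; sleeps to simulate network latency."""
    await asyncio.sleep(0.01)
    return f"results for: {query}"

async def retrieve_parallel(queries: list[str]) -> list[str]:
    # WRONG: [await web_search(q) for q in queries] - each search waits for the last
    # RIGHT: launch every search at once and gather the results together
    return await asyncio.gather(*(web_search(q) for q in queries))

queries = [
    "core topic semantic",
    "technical keywords",
    "recent 2024-2025 filtered",
    "academic domains",
]
results = asyncio.run(retrieve_parallel(queries))
print(len(results))  # one result per query
```

With real network latency the parallel form finishes in roughly the time of the slowest single search, rather than the sum of all of them.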
Step 1: Citation Verification (Catches Fabricated Sources)
python scripts/verify_citations.py --report [path]
Checks:
If suspicious citations found:
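One plausible check of this kind — cross-referencing inline [N] markers against numbered bibliography entries — might look like the following sketch (illustrative only; the real verify_citations.py may work differently):

```python
import re

def check_citations(report_text: str) -> list[int]:
    """Return inline citation numbers with no matching bibliography entry."""
    # Inline citations look like [3]; bibliography entries start a line with [3]
    body, _, bibliography = report_text.partition("## Bibliography")
    inline = {int(n) for n in re.findall(r"\[(\d+)\]", body)}
    listed = {int(n) for n in re.findall(r"^\[(\d+)\]", bibliography, re.MULTILINE)}
    return sorted(inline - listed)

report = """Finding one [1]. Finding two [2].
## Bibliography
[1] Example source entry.
"""
print(check_citations(report))  # [2] - cited inline but never listed
```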
Step 2: Structure & Quality Validation
python scripts/validate_report.py --report [path]
8 automated checks:
If fails:
CRITICAL: Generate COMPREHENSIVE, DETAILED markdown reports
File Organization (CRITICAL - Clean Accessibility):
1. Create Organized Folder in Documents:
~/Documents/[TopicName]_Research_[YYYYMMDD]/
Examples:
~/Documents/Psilocybin_Research_20251104/
~/Documents/React_vs_Vue_Research_20251104/
~/Documents/AI_Safety_Trends_Research_20251104/
2. Save All Formats to Same Folder:
Markdown (Primary Source):
[Documents folder]/research_report_[YYYYMMDD]_[topic_slug].md
~/.claude/research_output/ (internal tracking)
HTML (McKinsey Style - ALWAYS GENERATE):
[Documents folder]/research_report_[YYYYMMDD]_[topic_slug].html
<span class="citation"> with nested tooltip div showing source details
PDF (Professional Print - ALWAYS GENERATE):
[Documents folder]/research_report_[YYYYMMDD]_[topic_slug].pdf
3. File Naming Convention: All files use same base name for easy matching:
research_report_20251104_psilocybin_2025.md
research_report_20251104_psilocybin_2025.html
research_report_20251104_psilocybin_2025.pdf
Length Requirements (UNLIMITED with Progressive Assembly):
How Unlimited Length Works: Progressive file assembly allows ANY report length by generating section-by-section. Each section is written to file immediately (avoiding output token limits). Complex topics with many findings? Generate 20, 30, 50+ findings - no constraint!
Content Requirements:
Writing Standards:
Bullet Point Policy (Anti-Fatigue Enforcement):
Anti-Fatigue Quality Check (Apply to EVERY Section): Before considering a section complete, verify:
If ANY check fails: Regenerate the section before moving to next.
Source Attribution Standards (Critical for Preventing Fabrication):
Deliver to user:
Generation Workflow: Progressive File Assembly (Unlimited Length)
Phase 8.1: Setup
# Extract topic slug from research question
# Create folder: ~/Documents/[TopicName]_Research_[YYYYMMDD]/
mkdir -p ~/Documents/[folder_name]
# Create initial markdown file with frontmatter
# File path: [folder]/research_report_[YYYYMMDD]_[slug].md
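The setup steps in the comments above could be sketched as follows; the slug-normalization rules and the `make_folder` helper are assumptions for illustration, not part of the skill:

```python
import re
from datetime import date
from pathlib import Path

def make_folder(research_question: str, base: Path = Path.home() / "Documents") -> Path:
    """Derive a topic name and create [base]/[TopicName]_Research_[YYYYMMDD]/."""
    # Keep the first few words, strip non-alphanumerics (assumed normalization)
    words = re.findall(r"[A-Za-z0-9]+", research_question)[:4]
    topic = "_".join(w.capitalize() for w in words) or "Topic"
    folder = base / f"{topic}_Research_{date.today():%Y%m%d}"
    folder.mkdir(parents=True, exist_ok=True)
    return folder

folder = make_folder("Effects of psilocybin on depression", base=Path("/tmp/demo_docs"))
print(folder.name)  # e.g. Effects_Of_Psilocybin_On_Research_20251104
```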
Phase 8.2: Progressive Section Generation
CRITICAL STRATEGY: Generate and write each section individually to file using Write/Edit tools. This allows unlimited report length while keeping each generation manageable.
OUTPUT TOKEN LIMIT SAFEGUARD (CRITICAL - Claude Code Default: 32K):
Claude Code default limit: 32,000 output tokens (≈24,000 words total per skill execution). This is a HARD LIMIT and cannot be changed within the skill.
What this means:
Realistic report sizes per mode:
For reports >20,000 words: User must run skill multiple times:
Auto-Continuation Strategy (TRUE Unlimited Length):
When a report exceeds 18,000 words in a single run:
This achieves UNLIMITED length while respecting 32K limit per agent
Initialize Citation Tracking:
citations_used = [] # Maintain this list in working memory throughout
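A minimal sketch of what maintaining that list might look like; the `CitationTracker` helper is illustrative, not part of the skill:

```python
class CitationTracker:
    """Assign stable [N] numbers to sources as they are first cited."""
    def __init__(self) -> None:
        self.entries: list[str] = []       # bibliography entries in citation order
        self.numbers: dict[str, int] = {}  # source -> citation number

    def cite(self, source: str) -> str:
        if source not in self.numbers:
            self.entries.append(source)
            self.numbers[source] = len(self.entries)
        return f"[{self.numbers[source]}]"

    def bibliography(self) -> str:
        return "\n".join(f"[{i}] {e}" for i, e in enumerate(self.entries, 1))

tracker = CitationTracker()
print(tracker.cite("Smith 2024, Nature"))  # [1]
print(tracker.cite("Doe 2025, arXiv"))     # [2]
print(tracker.cite("Smith 2024, Nature"))  # [1] again - reused, not duplicated
```

Keeping one tracker alive across all sections is what guarantees citation numbers never collide or restart mid-report.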
Section Generation Loop:
Pattern: Generate section content → Use Write/Edit tool with that content → Move to next section.
Each Write/Edit call contains ONE section (≤2,000 words per call).
Executive Summary (200-400 words)
Introduction (400-800 words)
Finding 1 (600-2,000 words)
Finding 2 (600-2,000 words)
... Continue for ALL findings (each finding = one Edit tool call, ≤2,000 words)
CRITICAL: If you have 10 findings × 1,500 words each, that is 15,000 words of findings. This is OKAY, because each Edit call is only 1,500 words (under the 2,000-word limit per tool call). The FILE grows to 15,000 words, but no single tool call exceeds the limit.
Synthesis & Insights
Limitations & Caveats
Recommendations
Bibliography (CRITICAL - ALL Citations)
Methodology Appendix
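The section loop above — generate one section, write it, move on — can be sketched like this; the `append_section` helper and file path are illustrative:

```python
from pathlib import Path

def append_section(report: Path, heading: str, body: str) -> int:
    """Write ONE section per call so no single write exceeds the per-call budget."""
    with report.open("a", encoding="utf-8") as f:
        f.write(f"\n## {heading}\n\n{body}\n")
    # Return the running word count so the caller can check the 18,000-word threshold
    return len(report.read_text(encoding="utf-8").split())

report = Path("/tmp/demo_report.md")
report.write_text("# Research Report\n", encoding="utf-8")
for heading, body in [
    ("Executive Summary", "Short summary text."),
    ("Finding 1", "Evidence and analysis for the first finding."),
]:
    total = append_section(report, heading, body)
print(total)
```

Because each call appends to disk immediately, the file can grow without bound while every individual generation stays small.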
Phase 8.3: Auto-Continuation Decision Point
After generating sections, check word count:
If total output ≤18,000 words: Complete normally
If total output will exceed 18,000 words: Auto-Continuation Protocol
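The decision point reduces to a threshold check; 18,000 words leaves headroom under the ≈24,000-word ceiling. A sketch:

```python
WORD_BUDGET = 18_000  # soft ceiling per run, below the ~24,000-word hard limit

def needs_continuation(words_so_far: int, next_section_target: int) -> bool:
    """True if writing the next section would push this run past the budget."""
    return words_so_far + next_section_target > WORD_BUDGET

print(needs_continuation(17_200, 500))    # False - fits in this run
print(needs_continuation(17_200, 1_500))  # True - hand off to a continuation agent
```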
Step 1: Save Continuation State
Create file: ~/.claude/research_output/continuation_state_[report_id].json
{
"version": "2.1.1",
"report_id": "[unique_id]",
"file_path": "[absolute_path_to_report.md]",
"mode": "[quick|standard|deep|ultradeep]",
"progress": {
"sections_completed": [list of section IDs done],
"total_planned_sections": [total count],
"word_count_so_far": [current word count],
"continuation_count": [which continuation this is, starts at 1]
},
"citations": {
"used": [1, 2, 3, ..., N],
"next_number": [N+1],
"bibliography_entries": [
"[1] Full citation entry",
"[2] Full citation entry",
...
]
},
"research_context": {
"research_question": "[original question]",
"key_themes": ["theme1", "theme2", "theme3"],
"main_findings_summary": [
"Finding 1: [100-word summary]",
"Finding 2: [100-word summary]",
...
],
"narrative_arc": "[Current position in story: beginning/middle/conclusion]"
},
"quality_metrics": {
"avg_words_per_finding": [calculated average],
"citation_density": [citations per 1000 words],
"prose_vs_bullets_ratio": [e.g., "85% prose"],
"writing_style": "technical-precise-data-driven"
},
"next_sections": [
{"id": N, "type": "finding", "title": "Finding X", "target_words": 1500},
{"id": N+1, "type": "synthesis", "title": "Synthesis", "target_words": 1000},
...
]
}
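Because the bracketed values are placeholders, the block above is a template rather than literal JSON. Persisting and resuming concrete state might look like this sketch:

```python
import json
from pathlib import Path

state = {
    "version": "2.1.1",
    "report_id": "r001",
    "progress": {"sections_completed": [1, 2, 3], "word_count_so_far": 17500,
                 "continuation_count": 1},
    "citations": {"used": [1, 2, 3], "next_number": 4,
                  "bibliography_entries": ["[1] Example entry"]},
}

state_file = Path("/tmp/continuation_state_r001.json")
state_file.write_text(json.dumps(state, indent=2), encoding="utf-8")

# A continuation agent would start by re-reading this file:
resumed = json.loads(state_file.read_text(encoding="utf-8"))
print(resumed["citations"]["next_number"])  # 4 - numbering continues, not restarts
```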
Step 2: Spawn Continuation Agent
Use Task tool with general-purpose agent:
Task(
subagent_type="general-purpose",
description="Continue deep-research report generation",
prompt="""
CONTINUATION TASK: You are continuing an existing deep-research report.
CRITICAL INSTRUCTIONS:
1. Read continuation state file: ~/.claude/research_output/continuation_state_[report_id].json
2. Read existing report to understand context: [file_path from state]
3. Read LAST 3 completed sections to understand flow and style
4. Load research context: themes, narrative arc, writing style from state
5. Continue citation numbering from state.citations.next_number
6. Maintain quality metrics from state (avg words, citation density, prose ratio)
CONTEXT PRESERVATION:
- Research question: [from state]
- Key themes established: [from state]
- Findings so far: [summaries from state]
- Narrative position: [from state]
- Writing style: [from state]
YOUR TASK:
Generate next batch of sections (stay under 18,000 words):
[List next_sections from state]
Use Write/Edit tools to append to existing file: [file_path]
QUALITY GATES (verify before each section):
- Words per section: Within ±20% of [avg_words_per_finding]
- Citation density: Match [citation_density] ±0.5 per 1K words
- Prose ratio: Maintain ≥80% prose (not bullets)
- Theme alignment: Section ties to key_themes
- Style consistency: Match [writing_style]
After generating sections:
- If more sections remain: Update state, spawn next continuation agent
- If final sections: Generate complete bibliography, verify report, cleanup state file
HANDOFF PROTOCOL (if spawning next agent):
1. Update continuation_state.json with new progress
2. Add new citations to state
3. Add summaries of new findings to state
4. Update quality metrics
5. Spawn next agent with same instructions
"""
)
Step 3: Report Continuation Status
Tell user:
📊 Report Generation: Part 1 Complete (N sections, X words)
🔄 Auto-continuing via spawned agent...
Next batch: [section list]
Progress: [X%] complete
Phase 8.4: Continuation Agent Quality Protocol
When continuation agent starts:
Context Loading (CRITICAL):
Pre-Generation Checklist:
Per-Section Generation:
Handoff Decision:
Final Agent Responsibilities:
Anti-Fatigue Built-In: Each agent generates manageable chunks (≤18K words), maintaining quality. Context preservation ensures coherence across continuation boundaries.
Generate HTML (McKinsey Style)
Read McKinsey template from ./templates/mckinsey_report_template.html
Extract 3-4 key quantitative metrics from findings for dashboard
Use Python script for MD to HTML conversion:
cd ~/.claude/skills/deep-research
python scripts/md_to_html.py [markdown_report_path]
The script returns two parts:
* **Part A ({{CONTENT}}):** All sections except Bibliography, properly converted to HTML
* **Part B ({{BIBLIOGRAPHY}}):** Bibliography section only, formatted as HTML
CRITICAL: The script handles ALL conversion automatically:
* Headers: ## → `<div class="section"><h2 class="section-title">`, ### → `<h3 class="subsection-title">`
* Lists: Markdown bullets → `<ul><li>` with proper nesting
* Tables: Markdown tables → `<table>` with thead/tbody
* Paragraphs: Text wrapped in `<p>` tags
* Bold/italic: **text** → `<strong>`, _text_ → `<em>`
* Citations: [N] preserved for tooltip conversion in step 4
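The Part A / Part B split can be sketched with a simple heading match (assuming the report marks its bibliography with a "## Bibliography" heading; the real md_to_html.py may differ):

```python
import re

def split_report(markdown: str) -> tuple[str, str]:
    """Split a report into (content, bibliography) at the Bibliography heading."""
    match = re.search(r"^## Bibliography\s*$", markdown, re.MULTILINE)
    if match is None:
        return markdown, ""  # no bibliography section found
    return markdown[:match.start()], markdown[match.start():]

md = "## Findings\nBody text [1].\n## Bibliography\n[1] Example source.\n"
content, bibliography = split_report(md)
print("[1] Example source." in bibliography)  # True
```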
4. Add Citation Tooltips (Attribution Gradients): For each [N] citation in {{CONTENT}} (not bibliography), optionally add interactive tooltips:
<span class="citation">[N]
<span class="citation-tooltip">
<div class="tooltip-title">[Source Title]</div>
<div class="tooltip-source">[Author/Publisher]</div>
<div class="tooltip-claim">
<div class="tooltip-claim-label">Supports Claim:</div>
[Extract sentence with this citation]
</div>
</span>
</span>
NOTE: This step is optional for speed. Basic [N] citations are sufficient.
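If tooltips are added, the wrapping could be done with a regex substitution along these lines; the `sources` lookup table and its keys are illustrative:

```python
import re
from html import escape

def add_tooltips(html_content: str, sources: dict[int, dict]) -> str:
    """Wrap each [N] citation in the tooltip markup from the template."""
    def wrap(match: re.Match) -> str:
        n = int(match.group(1))
        src = sources.get(n)
        if src is None:
            return match.group(0)  # unknown citation number: leave as plain [N]
        return (
            f'<span class="citation">[{n}]'
            f'<span class="citation-tooltip">'
            f'<div class="tooltip-title">{escape(src["title"])}</div>'
            f'<div class="tooltip-source">{escape(src["publisher"])}</div>'
            f'</span></span>'
        )
    return re.sub(r"\[(\d+)\]", wrap, html_content)

out = add_tooltips("<p>Key result [1].</p>",
                   {1: {"title": "Example Study", "publisher": "Example Press"}})
print('class="citation-tooltip"' in out)  # True
```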
Replace placeholders in template:
CRITICAL: NO EMOJIS - Remove any emoji characters from final HTML
Save to: [folder]/research_report_[YYYYMMDD]_[slug].html
Verify HTML (MANDATORY):
python scripts/verify_html.py --html [html_path] --md [md_path]
Open in browser: open [html_path]
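One simple cross-check such a verifier might run — confirming every markdown citation survives into the HTML — is sketched here (not the actual verify_html.py logic):

```python
import re

def citations_preserved(md_text: str, html_text: str) -> bool:
    """Every citation number in the markdown should appear in the HTML."""
    md_cites = set(re.findall(r"\[(\d+)\]", md_text))
    html_cites = set(re.findall(r"\[(\d+)\]", html_text))
    return md_cites <= html_cites

print(citations_preserved("Claim [1] and [2].", "<p>Claim [1] and [2].</p>"))  # True
print(citations_preserved("Claim [1] and [2].", "<p>Claim [1].</p>"))  # False - [2] dropped
```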
Generate PDF
[folder]/research_report_[YYYYMMDD]_[slug].pdf
Format: Comprehensive markdown report following template EXACTLY
Required sections (all must be detailed):
Bibliography Requirements (ZERO TOLERANCE - Report is UNUSABLE without complete bibliography):
Strictly Prohibited:
Writing Standards (Critical):
Quality gates (enforced by validator):
Stop immediately if:
Graceful degradation:
Error format:
⚠️ Issue: [Description]
📊 Context: [What was attempted]
🔍 Tried: [Resolution attempts]
💡 Options:
1. [Option 1]
2. [Option 2]
3. [Option 3]
Every report must:
Priority: Thoroughness over speed. Quality > speed.
Required:
Optional:
Assumptions:
Use when:
Do NOT use:
Location: ./scripts/
No external dependencies required.
Do not inline these - reference only:
Context Management: Load files on-demand for current phase only. Do not preload all content.
User Query Processing: [User research question will be inserted here during execution]
Retrieved Information: [Search results and sources will be accumulated here]
Generated Analysis: [Findings, synthesis, and report content generated here]
Note: This section remains empty in the skill definition. Content populated during runtime only.