websearch-deep by thomasholknielsen/claude-code-config
npx skills add https://github.com/thomasholknielsen/claude-code-config --skill websearch-deep
⚠️ VERIFICATION INSTRUCTION (temporary for debugging):
When you use this Skill, you MUST create a verification file to prove execution:
File: deep-research-skill-was-executed.md (in repository root)
Content:
# Deep Research Skill Execution Verification
**Skill**: websearch-deep
**Executed**: {timestamp}
**Research Question**: {the question you researched}
**Mode**: Deep (6-phase methodology)
This file was created by the deep-researcher agent to verify that the websearch-deep Skill was successfully loaded and executed.
**Phases Applied**:
1. ✓ Problem Decomposition (3-5 sub-questions)
2. ✓ Multi-Query Generation (3-5 variations per sub-question)
3. ✓ Evidence Synthesis with Source Ranking (credibility/freshness/relevance)
4. ✓ Citation Transparency (descriptive [Org: Topic] links)
5. ✓ Structured Output (token-efficient template)
6. ✓ Iterative Refinement (max 5 iterations)
**Sub-Questions Generated**: {count}
**Queries Executed**: {count}
**Sources Consulted**: {count} ({authoritative_count} authoritative, {recent_count} recent)
**Iterations Performed**: {count}
**Output Format**: Token-efficient template (Executive Summary + Research Overview + Findings + Synthesis + Recommendations + Sources with URLs)
Create this file using the Write tool immediately after loading this Skill.
Provides comprehensive deep research methodology for complex, multi-faceted questions requiring synthesis across multiple domains. Implements ChatGPT-style deep investigation with problem decomposition, multi-query strategies, evidence synthesis, citation transparency, and iterative refinement.
Universal Applicability: Use this Skill for ANY question requiring comprehensive multi-source analysis with evidence synthesis, whether technical, business, educational, strategic, or investigative.
Question Types Supported:
Example Questions:
Triggers: Keywords like "architecture", "integration", "best", "strategy", "recommendations", "compare", "evaluate", "migrate", "benefits", "how", "why", "should we"
Key Signal: If the question requires comprehensive multi-source analysis with evidence synthesis → use this Skill
Objective: Break complex questions into 3-5 clear, focused sub-questions.
🔴 CRITICAL - Research Scope:
Deep research finds external knowledge ONLY - you have no codebase access.
What you CAN research:
What you CANNOT research:
If the question asks about "my project" or "this codebase": Return the error from the agent file (scope validation section) and stop.
Process:
Sub-Question Criteria:
Example Decomposition:
Primary: "What's the best architecture for integrating Salesforce with SQL Server in 2025?"
Sub-Questions:
1. What are Salesforce's current integration capabilities and APIs (2025)?
2. What are SQL Server's integration patterns and best practices?
3. What middleware or integration platforms are commonly used?
4. What security and compliance considerations matter?
5. What scalability and performance factors should influence choice?
Objective: Generate 3-5 query variations per sub-question to maximize coverage (15-25 total searches).
Query Variation Strategy:
Advanced Search Operators:
- site:domain.com - Search specific domains
- filetype:pdf - Find PDF documents
- intitle:"keyword" - Search page titles
- inurl:keyword - Search URLs
- after:2024 - Recent content only
- "exact phrase" - Exact matching
Example Multi-Query Set:
Sub-Q1: Salesforce Integration Capabilities
- site:salesforce.com "API" "integration" "2025"
- "Salesforce REST API" "rate limits" after:2024
- "Salesforce Bulk API 2.0" "best practices"
- filetype:pdf "Salesforce integration guide" 2025
- "Salesforce API" "breaking changes" after:2024
Use these templates to formulate high-quality queries for different research types:
1. Technical Architecture Research
Official Docs: site:docs.{vendor}.com "{topic}" "architecture patterns" OR "design patterns"
Best Practices: "{topic}" "best practices" "production" after:2024
Comparisons: "{topic}" vs "{alternative}" "comparison" "pros cons"
Limitations: "{topic}" "limitations" OR "drawbacks" OR "challenges"
Recent Updates: site:{vendor}.com "{topic}" "updates" OR "changes" after:2024
2. Framework/Library Research
Official Docs: site:docs.{framework}.com "{feature}" "guide" OR "documentation"
Community: site:stackoverflow.com "{framework}" "{feature}" "how to"
Real-World: "{framework}" "{feature}" "production" OR "case study" after:2024
Performance: "{framework}" "performance" OR "benchmarks" OR "optimization"
Ecosystem: "{framework}" "ecosystem" OR "plugins" OR "extensions" 2025
3. Business/Strategy Research
Industry Analysis: "{topic}" "market analysis" OR "industry trends" 2024 2025
Vendor Comparison: "{vendor A}" vs "{vendor B}" "comparison" "review"
ROI/Benefits: "{solution}" "ROI" OR "benefits" OR "value proposition"
Implementation: "{solution}" "implementation guide" OR "getting started"
Case Studies: "{solution}" "case study" OR "customer success" after:2024
4. Educational/Learning Research
Fundamentals: "{topic}" "introduction" OR "beginner guide" OR "explained"
Advanced: "{topic}" "advanced" OR "deep dive" OR "internals"
Tutorials: "{topic}" "tutorial" OR "step by step" after:2024
Common Mistakes: "{topic}" "common mistakes" OR "anti-patterns" OR "pitfalls"
Resources: "{topic}" "learning resources" OR "courses" OR "books" 2025
5. Compliance/Security Research
Standards: "{topic}" "{standard}" "compliance" (e.g., "GDPR", "SOC2", "HIPAA")
Security: "{topic}" "security" "best practices" OR "vulnerabilities" after:2024
Official Guidance: site:{regulator}.gov "{topic}" "guidance" OR "requirements"
Audit: "{topic}" "audit" OR "checklist" OR "certification"
Tools: "{topic}" "{compliance}" "tools" OR "automation" 2025
6. Performance/Optimization Research
Benchmarks: "{topic}" "benchmark" OR "performance" "comparison" after:2024
Bottlenecks: "{topic}" "bottleneck" OR "slow" OR "performance issues"
Optimization: "{topic}" "optimization" OR "tuning" OR "best practices"
Monitoring: "{topic}" "monitoring" OR "observability" OR "metrics"
Scaling: "{topic}" "scalability" OR "high traffic" OR "production scale"
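The placeholder syntax in these templates can be filled mechanically. The sketch below is a minimal illustration of that substitution; the `fill_template` helper is an assumption for illustration only, since in practice the queries are drafted directly while researching:

```python
def fill_template(template: str, **values: str) -> str:
    """Substitute {placeholder} slots in a query template with concrete values."""
    for key, value in values.items():
        template = template.replace("{" + key + "}", value)
    return template

# Example: the "Official Docs" template from Technical Architecture Research
docs_template = 'site:docs.{vendor}.com "{topic}" "architecture patterns" OR "design patterns"'
print(fill_template(docs_template, vendor="salesforce", topic="REST API"))
# -> site:docs.salesforce.com "REST API" "architecture patterns" OR "design patterns"
```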
Priority: Official Sources First
Run site:anthropic.com or site:docs.{vendor}.com queries first (a small ordering sketch follows the implementation pattern below).
Execution Pattern (MANDATORY for performance):
DO NOT execute queries sequentially (one at a time). Instead, batch queries into groups of 5-10 and execute each batch in parallel within a single message.
Batching Strategy:
Implementation Pattern:
# Step 1: Generate all queries first
all_queries = []
for sub_question in sub_questions:
    queries = generate_query_variations(sub_question)  # 3-5 queries per sub-Q
    all_queries.extend(queries)
# Total: 15-25 queries across all sub-questions

# Step 2: Execute in parallel batches
batch_size = 5  # Adjust 5-10 based on query complexity
for i in range(0, len(all_queries), batch_size):
    batch = all_queries[i:i+batch_size]
    # Step 3: Execute ALL queries in the batch SIMULTANEOUSLY in a single message
    # Example: If batch = [q1, q2, q3, q4, q5], call:
    #   WebSearch(q1)
    #   WebSearch(q2)
    #   WebSearch(q3)
    #   WebSearch(q4)
    #   WebSearch(q5)
    # ALL FIVE in the SAME message as parallel tool uses
    results = execute_parallel_batch(batch)
    process_batch_results(results)  # Collect sources immediately
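To honor the official-sources-first priority above, the generated queries can be ordered before batching so that `site:` queries against official domains land in the earliest batches. A minimal sketch reusing `all_queries` from the pattern above; the `official_markers` tuple is an illustrative assumption, not a fixed list:

```python
official_markers = ("site:docs.", "site:anthropic.com")

def is_official(query: str) -> bool:
    """True if the query targets an official documentation domain."""
    return any(marker in query for marker in official_markers)

# Stable sort: official queries move to the front; relative order is otherwise preserved.
all_queries.sort(key=lambda q: not is_official(q))
```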
Why This Matters:
Batch Size Guidance:
Example Batched Execution:
Generated 25 queries across 5 sub-questions
Batch 1 (5 queries - executed in parallel):
WebSearch("site:salesforce.com 'API' 'integration' '2025'")
WebSearch("'Salesforce REST API' 'rate limits' after:2024")
WebSearch("'Salesforce Bulk API 2.0' 'best practices'")
WebSearch("filetype:pdf 'Salesforce integration guide' 2025")
WebSearch("'Salesforce API' 'breaking changes' after:2024")
→ Batch completes in ~1s, 5 results returned
Batch 2 (5 queries - executed in parallel):
WebSearch("'SQL Server ETL' 'best practices' 'real-time'")
WebSearch("site:docs.microsoft.com 'SQL Server' 'integration'")
...
→ Batch completes in ~1s, 5 results returned
Total: 5 batches × 1s each = ~5s (vs 25s sequential)
Objective: Collect, rank, deduplicate, and synthesize evidence from multiple sources.
Processing Batched Results:
Since Phase 2 executed queries in parallel batches, you'll receive results grouped by batch. Process all results from all batches together:
Example:
# Collect results from all batches
all_results = []
all_results.extend(batch1_results) # 5 results from batch 1
all_results.extend(batch2_results) # 5 results from batch 2
all_results.extend(batch3_results) # 5 results from batch 3
all_results.extend(batch4_results) # 5 results from batch 4
all_results.extend(batch5_results) # 5 results from batch 5
# Total: ~25 results (before deduplication)
# Deduplicate by URL
unique_sources = deduplicate_by_url(all_results)
# After dedup: ~15-20 unique sources (duplicates removed)
# Rank all unique sources
ranked_sources = rank_sources(unique_sources) # Apply scoring below
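The `deduplicate_by_url` helper above is assumed rather than defined. A minimal sketch that keeps the first result seen per normalized URL; the `url` field name on each result dict is an assumption for illustration:

```python
from urllib.parse import urldefrag

def deduplicate_by_url(results: list[dict]) -> list[dict]:
    """Keep the first result per URL, ignoring #fragments and trailing slashes."""
    seen = set()
    unique = []
    for result in results:
        url, _ = urldefrag(result["url"])  # drop any #fragment
        url = url.rstrip("/")
        if url not in seen:
            seen.add(url)
            unique.append(result)  # assumed dict with a "url" field
    return unique
```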
Rank every source on three dimensions:
Credibility Score (0-10):
Freshness Score (0-10):
Relevance Score (0-10):
Overall Source Quality = (Credibility × 0.5) + (Freshness × 0.2) + (Relevance × 0.3)
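Expressed as code, the weighting is a one-liner. The sketch below also fills in the `rank_sources` helper used earlier, assuming each source dict carries the three 0-10 scores under these (assumed) field names:

```python
def source_quality(source: dict) -> float:
    """Overall quality per the formula above (weights: 0.5 / 0.2 / 0.3)."""
    return (source["credibility"] * 0.5
            + source["freshness"] * 0.2
            + source["relevance"] * 0.3)

def rank_sources(sources: list[dict]) -> list[dict]:
    """Sort sources by overall quality, best first."""
    return sorted(sources, key=source_quality, reverse=True)

# Example: credibility 9, freshness 8, relevance 9 -> 8.8
assert round(source_quality({"credibility": 9, "freshness": 8, "relevance": 9}), 1) == 8.8
```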
When sources contradict:
Objective: Provide descriptive, clickable citations for every factual claim.
🔴 CRITICAL - Use Descriptive Inline Links:
Inline Citation Format (Descriptive Names - Natural Language):
Text with claim from [OpenAI: GPT-4](https://url "GPT-4 Technical Report (OpenAI, 2023-03-14)") and [Anthropic: Claude](https://url2 "Introducing Claude (Anthropic, 2023-03-14)"). Multiple sources: [Google DeepMind: Gemini](https://url3 "Gemini Model (Google DeepMind, 2023-12-06)"), [Meta: LLaMA](https://url4 "LLaMA Paper (Meta AI, 2023-02-24)").
Why descriptive names?
Format:
[Organization: Topic](full-URL "Full Title (Publisher, YYYY-MM-DD)")
Creating Descriptive Names (from URL analysis):
- Pattern: [Org: Topic]
- Examples: [OpenAI: GPT-4], [OpenAI: DALL-E], [OpenAI: Whisper]
- Community examples: [Stack Overflow: OAuth Implementation], [Medium: React Patterns]
References Section Format (at end of research - grouped by category):
## References
### Official Documentation
- **OpenAI: GPT-4** (2023-03-14). "GPT-4 Technical Report". https://openai.com/research/gpt-4
- **Anthropic: Claude** (2023-03-14). "Introducing Claude". https://www.anthropic.com/claude
### Blog Posts & Articles
- **Google DeepMind: Gemini** (2023-12-06). "Gemini: A Family of Highly Capable Models". https://deepmind.google/technologies/gemini
- **Meta: LLaMA** (2023-02-24). "Introducing LLaMA". https://ai.meta.com/blog/llama
### Academic Papers
- **Attention Is All You Need** (2017-06-12). Vaswani et al. https://arxiv.org/abs/1706.03762
### Community Resources
- **Stack Overflow: OAuth Implementation** (2024-08-15). https://stackoverflow.com/questions/12345
Why grouped References section?
Category Guidance:
Title Format in Quotes:
"Full Title (Publisher, YYYY-MM-DD)"
Hover Behavior: Most markdown viewers (GitHub, VS Code, Obsidian, GitLab) show the title as a tooltip when hovering over the citation.
Click Behavior: Clicking the descriptive name opens the URL directly in the browser.
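A small helper can assemble the inline citation string from source metadata. A minimal sketch following the format above; the field names on the source dict are assumptions for illustration:

```python
def inline_citation(source: dict) -> str:
    """Build [Org: Topic](URL "Title (Publisher, YYYY-MM-DD)") markdown."""
    return (
        f"[{source['org']}: {source['topic']}]"
        f"({source['url']} "
        f"\"{source['title']} ({source['publisher']}, {source['date']})\")"
    )

print(inline_citation({
    "org": "OpenAI", "topic": "GPT-4",
    "url": "https://openai.com/research/gpt-4",
    "title": "GPT-4 Technical Report",
    "publisher": "OpenAI", "date": "2023-03-14",
}))
# -> [OpenAI: GPT-4](https://openai.com/research/gpt-4 "GPT-4 Technical Report (OpenAI, 2023-03-14)")
```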
Example Inline Usage:
Salesforce provides three primary API types according to [Salesforce: API Docs](https://developer.salesforce.com/docs/apis "Salesforce API Documentation (Salesforce, 2025-01-15)"): REST API for standard operations, [Salesforce: Bulk API 2.0](https://developer.salesforce.com/docs/atlas.en-us.api_asynch.meta/api_asynch/ "Bulk API 2.0 Guide (Salesforce, 2024-11-20)") for large data volumes (>10k records), and [Salesforce: Streaming API](https://developer.salesforce.com/docs/atlas.en-us.api_streaming.meta/api_streaming/ "Streaming API Guide (Salesforce, 2024-10-05)") for real-time updates. Recent 2025 updates introduced enhanced rate limiting (100k requests/24hrs for Enterprise) and improved error handling as noted in [Salesforce Blog: API Updates](https://developer.salesforce.com/blogs/2025/01/api-updates "API Error Handling Improvements (Salesforce Blog, 2025-01-10)").
## References
### Official Documentation
- **Salesforce: API Docs** (2025-01-15). "Salesforce API Documentation". https://developer.salesforce.com/docs/apis
- **Salesforce: Bulk API 2.0** (2024-11-20). "Bulk API 2.0 Developer Guide". https://developer.salesforce.com/docs/atlas.en-us.api_asynch.meta/api_asynch/
- **Salesforce: Streaming API** (2024-10-05). "Streaming API Developer Guide". https://developer.salesforce.com/docs/atlas.en-us.api_streaming.meta/api_streaming/
### Blog Posts & Articles
- **Salesforce Blog: API Updates** (2025-01-10). "API Error Handling Improvements". https://developer.salesforce.com/blogs/2025/01/api-updates
Compatibility: Works in GitHub, VS Code (preview), Obsidian, GitLab, and all other markdown viewers.
Objective: Deliver comprehensive, implementation-ready findings with narrative depth.
Design Principles:
Output Template:
# Deep Research: {Question}
## Executive Summary
{2-3 paragraph synthesis covering:
- What was researched and why it matters
- Key findings with citations [Org: Topic]
- Strategic recommendation with rationale}
Example length: ~150-200 words total across 2-3 paragraphs.
## Research Overview
- **Sub-Questions Analyzed**: {count}
- **Queries Executed**: {count} queries
- **Sources**: {count} total ({authoritative_count} authoritative / {auth_pct}%, {recent_count} recent / {recent_pct}%)
- **Iterations**: {count}
## Findings
### 1. {Sub-Question 1}
{Opening paragraph: What this sub-question addresses and why it's important}
{2-4 paragraphs of synthesized narrative with inline citations [Org: Topic]. Each paragraph covers a specific aspect or theme. Include:
- Core concepts and definitions with citations
- How different sources approach the topic
- Practical implications and examples
- Performance characteristics or trade-offs where relevant}
**Key Insights**:
- {Insight 1: Specific, actionable statement} - {Why it matters and implications} [Org: Topic], [Org: Topic]
- {Insight 2: Specific, actionable statement} - {Why it matters and implications} [Org: Topic]
- {Insight 3: Specific, actionable statement} - {Why it matters and implications} [Org: Topic]
{Optional: **Common Patterns** or **Best Practices** subsection if relevant with 2-3 bullet points}
### 2. {Sub-Question 2}
{Repeat the same structure: Opening paragraph + 2-4 narrative paragraphs + 3-5 Key Insights}
{...continue for all sub-questions...}
## Synthesis
{2-3 paragraphs integrating findings across sub-questions. Show how the pieces fit together and what the big picture reveals.}
**Consensus** (3+ sources agree):
- {Consensus point 1 with source count} [Org: Topic], [Org: Topic], [Org: Topic]
- {Consensus point 2 with source count} [Org: Topic], [Org: Topic], [Org: Topic], [Org: Topic]
**Contradictions** *(if present)*:
- **{Topic}**: {Source A perspective [Org: Topic]} vs {Source B perspective [Org: Topic]}. {Resolution or context explaining difference}
**Research Gaps** *(if any)*:
- {Gap 1}: {What wasn't found and why it matters}
## Recommendations
### Critical (Do First)
1. **{Action}** - {Detailed rationale explaining why this is critical, what happens if not done, and expected impact} [Org: Topic], [Org: Topic]
2. **{Action}** - {Detailed rationale} [Org: Topic]
3. **{Action}** - {Detailed rationale} [Org: Topic]
### Important (Do Next)
4. **{Action}** - {Rationale with evidence and expected benefit} [Org: Topic]
5. **{Action}** - {Rationale with evidence} [Org: Topic]
6. **{Action}** - {Rationale with evidence} [Org: Topic]
### Optional (Consider)
7. **{Action}** - {Rationale and when/why you might skip this} [Org: Topic]
8. **{Action}** - {Rationale} [Org: Topic]
## References
### Official Documentation
- **{Org: Topic}** ({YYYY-MM-DD}). "{Full Title}". {Full URL}
- **{Org: Topic}** ({YYYY-MM-DD}). "{Full Title}". {Full URL}
### Blog Posts & Articles
- **{Org: Topic}** ({YYYY-MM-DD}). "{Full Title}". {Full URL}
### Academic Papers
- **{Paper Title}** ({YYYY-MM-DD}). {Authors}. {Full URL}
### Community Resources
- **{Platform: Topic}** ({YYYY-MM-DD}). {Full URL}
Length Guidance:
Requirements:
What NOT to Include (token waste):
Objective: Validate completeness and re-query gaps (max 5 iterations).
Completeness Validation Checklist:
Automated Gap Detection Logic:
def validate_completeness():
    gaps = []
    completeness_score = 100

    # Check citation coverage
    for i, sub_q in enumerate(sub_questions, start=1):
        citation_count = count_citations(sub_q)
        if citation_count < 3:
            gaps.append(f"Sub-Q{i}: Only {citation_count} citations (need 3+)")
            completeness_score -= 10

    # Check for contradictions exploration
    if contradictions_section_empty():
        gaps.append("No contradictions explored - search for '{topic} criticisms' OR '{topic} limitations'")
        completeness_score -= 10

    # Check authoritative source coverage
    auth_sources = count_authoritative_sources()  # credibility >= 8
    if auth_sources < total_sources * 0.5:
        gaps.append(f"Only {auth_sources} authoritative sources ({round(auth_sources/total_sources*100)}%) - need 50%+")
        completeness_score -= 10

    # Check recency
    recent_sources = count_recent_sources()  # within 6 months
    if recent_sources < total_sources * 0.3:
        gaps.append(f"Only {recent_sources} recent sources ({round(recent_sources/total_sources*100)}%) - need 30%+")
        completeness_score -= 5

    # Check recommendation depth
    if critical_recommendations < 3:
        gaps.append(f"Only {critical_recommendations} Critical recommendations (need 3)")
        completeness_score -= 10

    # Check for Research Gaps section
    if research_gaps_section_missing():
        gaps.append("Research Gaps section missing - document what wasn't found")
        completeness_score -= 5

    return completeness_score, gaps
Re-Query Decision Logic:
iteration_count = 1
completeness_score, gaps = validate_completeness()

# 🔴 MANDATORY: Always perform a minimum of 2 iterations
# Even if iteration 1 achieves 85%+, iteration 2 improves depth
while iteration_count < 2 or (completeness_score < 85 and iteration_count <= 5):
    # Generate targeted re-queries for each gap
    requery_list = []
    for gap in gaps:
        if "citations" in gap:
            # Need more sources for a specific sub-question
            requery_list.append(f"'{sub_question_topic}' 'detailed guide' OR 'comprehensive overview'")
        elif "contradictions" in gap:
            # Need to explore downsides/criticisms
            requery_list.append(f"'{topic}' 'criticism' OR 'limitations' OR 'downsides'")
            requery_list.append(f"'{topic}' 'vs' 'alternative' 'when not to use'")
        elif "authoritative" in gap:
            # Need more official sources
            requery_list.append(f"site:docs.{vendor}.com '{topic}' 'official'")
            requery_list.append(f"site:{vendor}.com '{topic}' 'documentation'")
        elif "recent" in gap:
            # Need more recent sources
            requery_list.append(f"'{topic}' 'updates' OR 'changes' after:2024")
            requery_list.append(f"'{topic}' '2025' OR '2024' 'latest'")

    # Execute re-queries in a parallel batch (1-5 queries)
    # Use a smaller batch size for re-queries since they're targeted
    requery_batch = requery_list[:5]  # Up to 5 re-queries
    # Execute ALL re-queries in the batch SIMULTANEOUSLY in a single message
    # Example: If requery_batch = [rq1, rq2, rq3], call:
    #   WebSearch(rq1)
    #   WebSearch(rq2)
    #   WebSearch(rq3)
    # ALL THREE in the SAME message as parallel tool uses
    execute_parallel_batch(requery_batch)
    iteration_count += 1

    # Update findings incrementally
    append_iteration_findings()
    completeness_score, gaps = validate_completeness()

# Either complete (>=85%) or max iterations reached
if completeness_score < 85:
    note_limitations_in_research_gaps_section(gaps)
finalize_output()
Iteration Update Pattern: When adding findings from later iterations, append to existing sections:
### 1. {Sub-Question}
{Original findings from iteration 1}
**Iteration 2 Additions**:
{New findings from re-queries, with citations [Org: Topic], [Org: Topic], [Org: Topic]}
**Key Insights**:
- {Original insight 1} [Org: Topic]
- {Original insight 2} [Org: Topic]
- {NEW insight from iteration 2} [Org: Topic], [Org: Topic]
When to Stop Iterating:
Stop only if: (iteration_count >= 2 AND completeness >= 85%) OR iteration_count > 5
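Expressed as a single predicate, a minimal sketch of the stop rule above:

```python
def should_stop(iteration_count: int, completeness: float) -> bool:
    """Stop only after at least 2 iterations with >=85% completeness, or past iteration 5."""
    return (iteration_count >= 2 and completeness >= 85) or iteration_count > 5
```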
If Max Iterations Reached Without 85%: Add an explicit Research Gaps section:
## Research Gaps
Due to iteration limit, the following gaps remain:
- {Gap 1}: {What's missing and why it matters}
- {Gap 2}: {What's missing and suggested follow-up approach}
Scenario: User asks "What's the best architecture for integrating Salesforce with SQL Server in 2025?"
Process:
Phase 1 - Decomposition:
Sub-Q1: Salesforce integration capabilities (2025)?
Sub-Q2: SQL Server integration patterns?
Sub-Q3: Middleware options?
Sub-Q4: Security considerations?
Sub-Q5: Scalability factors?
Phase 2 - Multi-Query Generation and Batched Execution:
Generated 25 queries across 5 sub-questions
Batch 1 (5 queries - executed in parallel):
WebSearch("site:salesforce.com 'API' 'integration' '2025'")
WebSearch("'Salesforce REST API' 'rate limits' after:2024")
WebSearch("'Salesforce Bulk API 2.0' 'best practices'")
WebSearch("filetype:pdf 'Salesforce integration guide' 2025")
WebSearch("'Salesforce API' 'breaking changes' after:2024")
→ Batch completes in ~1s, 5 results returned
Batch 2 (5 queries - executed in parallel):
WebSearch("'SQL Server ETL' 'best practices' 'real-time'")
WebSearch("site:docs.microsoft.com 'SQL Server' 'integration'")
WebSearch("'SQL Server Always On' 'high availability'")
WebSearch("'SQL Server CDC' 'change data capture'")
WebSearch("'SQL Server linked servers' 'performance'")
→ Batch completes in ~1s, 5 results returned
Batch 3-5 (15 more queries across 3 batches):
... (middleware, security, scalability queries)
→ Each batch completes in ~1s
Execution Time:
- 5 batches × ~1s each = ~5s total
- Sequential would be: 25 queries × 1s = 25s
- Speedup: 5x faster
Phase 3 - Evidence:
18 sources identified
12 ranked as authoritative (credibility ≥ 8)
3 contradictions (real-time vs batch approaches)
Phase 4 - Citations:
[1] Salesforce API Guide (Cred: 10, Fresh: 10, Rel: 10, Overall: 10.0)
[2] MuleSoft Patterns (Cred: 9, Fresh: 8, Rel: 9, Overall: 8.8)
Phase 5 - Output:
Executive Summary: 2 paragraphs
Findings: 5 sub-sections with 28 citations
Recommendations: 3 critical, 4 important, 2 enhancements
Phase 6 - Refinement:
Iteration 1: Identified gap in disaster recovery
Iteration 2: Re-queried "Salesforce SQL backup strategies"
Iteration 3: Completeness 92% → finalized
Output: Deep Mode Context File with executive summary, 5 sub-question analyses, evidence table, synthesis, pros/cons, 28 citations
Scenario: "Should we use microservices or monolith architecture for our e-commerce platform?"
Process:
Decomposition:
1. Scalability characteristics for e-commerce?
2. Team size and DevOps implications?
3. Transaction patterns differences?
4. Deployment complexity trade-offs?
5. Real-world e-commerce case studies?
Multi-Query (sample):
"microservices e-commerce" "scalability" after:2024
"monolith vs microservices" "team size" "best practices"
site:aws.amazon.com "e-commerce architecture" "patterns"
Evidence Synthesis:
15 sources (10 authoritative)
Consensus: Team size <20 → monolith, >50 → microservices
Contradiction: Database approach (shared vs distributed)
Output: Structured analysis with pros/cons for both approaches, team size recommendations, migration considerations, case studies with citations
Issue 1: Too Many Sources, Can't Synthesize
Issue 2: Contradictory Information
Issue 3: Insufficient Recent Sources
Issue 4: Completeness Score Below 85%
.agent/Session-{name}/context/research-web-analyst.md