tooluniverse-literature-deep-research by mims-harvard/tooluniverse
npx skills add https://github.com/mims-harvard/tooluniverse --skill tooluniverse-literature-deep-research
Systematic approach to comprehensive literature research: disambiguate the subject, search with collision-aware queries, grade evidence, and produce a structured report.
KEY PRINCIPLES:
User Query
↓
Phase 0: CLARIFY + MODE SELECT (factoid vs deep report)
↓
Phase 1: SUBJECT DISAMBIGUATION + PROFILE
├─ Detect domain (biological target / drug / disease / general academic)
├─ Resolve identifiers and gather synonyms/aliases
├─ Check for naming collisions
└─ Gather baseline context via annotation tools (domain-specific)
↓
Phase 2: LITERATURE SEARCH (methodology kept internal)
├─ High-precision seed queries
├─ Citation network expansion from seeds
├─ Collision-filtered broader queries
└─ Theme clustering + evidence grading
↓
Phase 3: REPORT SYNTHESIS (report-first pattern)
├─ Create [topic]_report.md with all section headers IMMEDIATELY
├─ Progressively fill sections as data arrives (update after each phase)
├─ Write Executive Summary LAST (after all sections complete)
├─ Generate [topic]_bibliography.json + .csv
└─ Validate completeness checklist
Ask only what is needed; skip questions with obvious answers:
| Mode | When to Use | Deliverable |
|---|---|---|
| Factoid / Verification | Single concrete question | [topic]_factcheck_report.md (≤1 page) + bibliography |
| Mini-review | Narrow topic | Short narrative report (1-3 pages) |
| Full Deep-Research | Comprehensive overview | Full 15-section report + bibliography |
Heuristic: "Which antibiotic did X evolve resistance to?" → Factoid. "What does the literature say about X?" → Full.
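The mode heuristic above can be sketched as a small classifier. The keyword lists and word-count threshold here are illustrative assumptions, not the skill's actual logic:

```python
import re

def select_mode(query: str) -> str:
    """Illustrative Phase 0 mode selection: short, concrete wh-questions
    map to factoid mode; survey-style phrasings map to a full report."""
    q = query.lower()
    # Broad-survey phrasings suggest a full deep-research report.
    if re.search(r"\b(literature|overview|review|comprehensive|state of the art)\b", q):
        return "full"
    # A short, concrete wh-question about a single fact suggests factoid mode.
    if re.search(r"^(which|what|when|who|how many|does|is|are)\b", q) and len(q.split()) <= 15:
        return "factoid"
    return "full"  # default to the safer, more complete deliverable

print(select_mode("Which antibiotic did X evolve resistance to?"))  # factoid
print(select_mode("What does the literature say about X?"))         # full
```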
Provide a correct, source-verified answer with explicit evidence attribution.
# [TOPIC]: Fact-check Report
*Generated: [Date]*
## Question
[User question]
## Answer
**[One-sentence answer]** [Evidence: ★★★/★★☆/★☆☆/☆☆☆]
## Source(s)
- [Primary citation: journal/year/PMID/DOI]
## Verification Notes
- [1-3 bullets: where the statement appears, key constraints]
## Limitations
- [Full text availability, evidence type caveats]
Prefer ToolUniverse literature tools over web browsing. Use EuropePMC_search_articles(extract_terms_from_fulltext=[...]) for OA snippet verification when possible.
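For context, the Europe PMC tools wrap the public Europe PMC REST API. A minimal sketch of building such a search request; `OPEN_ACCESS:y` is a real Europe PMC query field, but the ToolUniverse wrapper's exact parameters differ:

```python
from urllib.parse import urlencode

EUROPE_PMC_SEARCH = "https://www.ebi.ac.uk/europepmc/webservices/rest/search"

def build_epmc_query(term: str, open_access_only: bool = True, page_size: int = 25) -> str:
    """Build a Europe PMC REST search URL; restricting to OPEN_ACCESS:y
    keeps results whose full text can be fetched for snippet verification."""
    query = f'"{term}"'
    if open_access_only:
        query += " AND OPEN_ACCESS:y"
    params = {"query": query, "format": "json", "pageSize": page_size}
    return f"{EUROPE_PMC_SEARCH}?{urlencode(params)}"

url = build_epmc_query("ATP6V1A")
# Fetch with e.g. urllib.request.urlopen(url) when network access is available.
print(url)
```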
| Query Pattern | Domain | Phase 1 Action |
|---|---|---|
| Gene symbol (EGFR, TP53) | Biological target | Full bio disambiguation |
| Protein name ("V-ATPase") | Biological target | Full bio disambiguation |
| Drug name ("metformin") | Drug | Drug disambiguation (see 1.5) |
| Disease ("Alzheimer's") | Disease | Disease disambiguation (see 1.6) |
| CS/ML topic ("transformer architecture") | General academic | Literature-only (skip bio tools) |
| Method, concept, general topic | General academic | Literature-only (skip bio tools) |
| Cross-domain ("GNNs for drug discovery") | Interdisciplinary | Resolve each entity in its domain (see 1.9) |
For deep entity-specific research beyond literature, delegate to specialized skills:
- tooluniverse-target-research
- tooluniverse-drug-research
- tooluniverse-disease-research

Use this skill when the focus is literature synthesis and evidence grading. Use the specialized skills when the focus is entity profiling with structured database queries. For maximum depth, run both in parallel.
UniProt_search → UniProt accession
UniProt_get_entry_by_accession → Full entry with cross-references
UniProt_id_mapping → Map between ID types
ensembl_lookup_gene → Ensembl gene ID, biotype
MyGene_get_gene_annotation → NCBI Gene ID, aliases, summary
Check the primary database for the domain (first 20 results). If >20% off-topic, build a negative filter:
| Domain | Collision Check Syntax |
|---|---|
| Biomedical | PubMed: "[TERM]"[Title] |
| CS/ML | ArXiv: ti:"[TERM]" or SemanticScholar with fieldsOfStudy filter |
| General | OpenAlex or Crossref title search |
NOT [collision1] NOT [collision2]

Gene family disambiguation: Use the official symbol with explicit exclusions. Example: "ADAR" NOT "ADAR2" NOT "ADARB1" for ADAR1-specific results.
Cross-domain collisions: Some terms have different meanings across fields (e.g., "RAG" = Retrieval-Augmented Generation in CS, Recombination Activating Gene in biology). Add domain context terms to filter: "RAG" AND "language model" NOT "recombination activating".
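The collision-filtered pattern can be composed programmatically; a minimal sketch (function name and signature are illustrative):

```python
def collision_filtered_query(term: str, contexts=(), collisions=()) -> str:
    """Compose the collision-aware pattern:
    "[TERM]" AND ([ctx1] OR [ctx2]) NOT "[collision1]" NOT "[collision2]"."""
    parts = [f'"{term}"']
    if contexts:
        parts.append("AND (" + " OR ".join(f'"{c}"' for c in contexts) + ")")
    for c in collisions:
        parts.append(f'NOT "{c}"')
    return " ".join(parts)

# ADAR1-specific literature, excluding the ADAR2 paralog:
print(collision_filtered_query("ADAR", collisions=["ADAR2", "ADARB1"]))
# -> "ADAR" NOT "ADAR2" NOT "ADARB1"
# RAG as Retrieval-Augmented Generation, not the gene:
print(collision_filtered_query("RAG", contexts=["language model"],
                               collisions=["recombination activating"]))
```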
Gather structural, functional, and expression context via annotation tools:
InterPro_get_protein_domains → Domain architecture
UniProt_get_ptm_processing_by_accession → PTMs, active sites
HPA_get_subcellular_location → Localization
GTEx_get_median_gene_expression → Tissue expression (use gtex_v8)
GO_get_annotations_for_gene → GO terms
Reactome_map_uniprot_to_pathways → Pathways
STRING_get_protein_interactions → Interaction partners
intact_get_interactions → Experimentally validated PPIs
OpenTargets_get_target_tractability_by_ensemblID → Druggability assessment
GPCR targets: If the target is a GPCR (~35% of approved drug targets), delegate to tooluniverse-target-research for specialized GPCRdb data (3D structures, ligands, mutations).
## Target Identity
| Identifier | Value | Source |
|------------|-------|--------|
| Official Symbol | [SYMBOL] | HGNC |
| UniProt | [ACC] | UniProt |
| Ensembl Gene | [ENSG...] | Ensembl |
**Synonyms**: [list]
**Collisions**: [assessment]
Skip protein architecture/expression/GO. Instead:
Resolve identity: OpenTargets_get_drug_chembId_by_generic_name, ChEMBL_get_drug, PubChem_get_CID_by_compound_name, drugbank_get_drug_basic_info_by_drug_name_or_id
Targets & mechanisms: ChEMBL_get_drug_mechanisms, OpenTargets_get_associated_targets_by_drug_chemblId, DGIdb_get_drug_gene_interactions, drugbank_get_targets_by_drug_name_or_drugbank_id
Safety & indications: OpenTargets_get_drug_adverse_events_by_chemblId, OpenTargets_get_drug_indications_by_chemblId, search_clinical_trials
Resolve ontology IDs: Use OpenTargets_get_drug_chembId_by_generic_name or disease search tools to resolve EFO/MONDO IDs. Cross-reference ICD-10 and UMLS CUI when available from tool results.
OpenTargets_get_diseases_phenotypes_by_target_ensembl → Disease associations
DisGeNET_get_disease_genes → Disease-gene associations
DisGeNET_search_disease → Disease search with ontology IDs
CTD_get_disease_chemicals → Chemical-disease links
Resolve both entities separately, then cross-reference:
CTD_get_chemical_gene_interactions → Chemical-gene links
CTD_get_chemical_diseases → Chemical-disease associations
OpenTargets_get_associated_targets_by_drug_chemblId → Drug targets
OpenTargets_get_associated_diseases_by_drug_chemblId → Drug-disease associations
→ Intersect to find shared targets/pathways
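The intersection step is plain set arithmetic once both ID lists are resolved. A sketch assuming both tool outputs have been normalized to Ensembl gene IDs (the example IDs are placeholders for illustration):

```python
def shared_targets(drug_targets: set, disease_genes: set) -> set:
    """Intersect a drug's target set (e.g. from
    OpenTargets_get_associated_targets_by_drug_chemblId) with a disease's
    gene set (e.g. from DisGeNET_get_disease_genes)."""
    return drug_targets & disease_genes

# Hypothetical Ensembl gene IDs for illustration:
drug = {"ENSG00000146648", "ENSG00000141510"}
disease = {"ENSG00000146648", "ENSG00000012048"}
print(shared_targets(drug, disease))  # {'ENSG00000146648'}
```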
For CS, social science, humanities, or other non-bio topics:
For topics spanning multiple domains (e.g., "GNNs for drug discovery", "AlphaFold protein prediction"):
Methodology stays internal. The report shows findings, not process.
Step 1: High-Precision Seeds (15-30 core papers)
Domain-specific seed queries:
Biomedical: "[TERM]"[Title] AND (mechanism OR function OR structure OR review)
CS/ML: ti:"[TERM]" AND (architecture OR benchmark OR evaluation OR survey)
General: "[TERM]" in title via OpenAlex/Crossref
Use date/sort filters for recency or impact:
- PubMed: mindate, maxdate, sort="pub_date"
- Semantic Scholar: year="2023-2024", sort="citationCount:desc"
- ArXiv: date_from, sort_by="submittedDate"

Step 2: Citation Network Expansion
PubMed_get_cited_by → Forward citations (primary)
EuropePMC_get_citations → Forward (fallback)
PubMed_get_related → Related papers
EuropePMC_get_references → Backward citations
SemanticScholar_get_recommendations → AI-similar papers
OpenCitations_get_citations → DOI-based citation data
Step 3: Collision-Filtered Broader Queries
"[TERM]" AND ([context1] OR [context2]) NOT [collision_term]
Biomedical: PubMed_search_articles, PMC_search_papers, EuropePMC_search_articles, PubTator3_LiteratureSearch
CS/ML: ArXiv_search_papers, DBLP_search_publications, SemanticScholar_search_papers
General academic: openalex_literature_search, Crossref_search_works, CORE_search_papers, DOAJ_search_articles
Preprints: BioRxiv_get_preprint, MedRxiv_get_preprint, OSF_search_preprints, BioRxiv_list_recent_preprints (for preprint keyword search: EuropePMC_search_articles(source='PPR'))
Multi-source deep search: advanced_literature_search_agent (searches 12+ databases in parallel; requires Azure OpenAI key — if unavailable, replicate coverage by querying PubMed + ArXiv + SemanticScholar + OpenAlex individually)
Citation impact: iCite_search_publications (search + RCR/APT metrics), iCite_get_publications (metrics by PMID), scite_get_tallies (supporting/contradicting counts). Note: iCite and scite are PubMed-only. For CS/ML papers, use SemanticScholar_get_paper for citation counts and influence scores.
Author search: PubMed "Author[Author]", ArXiv "au:Name", SemanticScholar/OpenAlex as query text
When abstracts lack critical details, use full-text snippet extraction. See FULLTEXT_STRATEGY.md for the three-tier strategy (Europe PMC auto-snippets → manual Semantic Scholar/ArXiv → manual download).
Attempt 1 → fails → wait 2s → Attempt 2 → fails → wait 5s → Fallback tool
| Primary | Fallback 1 | Fallback 2 |
|---|---|---|
PubMed_get_cited_by | EuropePMC_get_citations | OpenCitations_get_citations |
PubMed_get_related | SemanticScholar_get_recommendations | SemanticScholar_search_papers |
GTEx_get_median_gene_expression | HPA_get_rna_expression_by_source | Document as unavailable
Unpaywall_check_oa_status | Europe PMC isOpenAccess | OpenAlex is_oa
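The retry-and-fallback chain above can be sketched as follows; the callables stand in for ToolUniverse tool invocations, whose real interface may differ:

```python
import time

def call_with_fallbacks(tools, *args, retry_wait=2, fallback_wait=5):
    """Attempt 1 -> on failure wait 2s -> Attempt 2 -> on failure wait 5s ->
    next fallback tool. `tools` is an ordered list of callables."""
    last_err = None
    for i, tool in enumerate(tools):
        for attempt in range(2):
            try:
                return tool(*args)
            except Exception as err:  # a real implementation would catch narrower errors
                last_err = err
                if attempt == 0:
                    time.sleep(retry_wait)
        if i < len(tools) - 1:
            time.sleep(fallback_wait)
    raise RuntimeError("all tools in the fallback chain failed") from last_err

# Usage sketch with stand-in functions:
def flaky(_pmid):   # simulates PubMed_get_cited_by timing out
    raise ConnectionError("timeout")
def stable(pmid):   # simulates EuropePMC_get_citations succeeding
    return {"pmid": pmid, "citations": []}

print(call_with_fallbacks([flaky, stable], "12345678", retry_wait=0, fallback_wait=0))
```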
With Unpaywall email: full OA check. Without: best-effort via Europe PMC, PMC, OpenAlex, DOAJ flags. Label: *OA Status: Best-effort (Unpaywall not configured)*
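Without Unpaywall, the best-effort check reduces to OR-ing the flags from the available sources; a sketch with assumed flag names (the real tool output fields differ):

```python
def best_effort_oa(flags: dict) -> tuple:
    """Best-effort open-access check when no Unpaywall email is configured:
    OR together OA flags from Europe PMC, PMC, OpenAlex, and DOAJ.
    The dict keys here are illustrative, not actual tool output fields."""
    sources = ("europepmc_isOpenAccess", "in_pmc", "openalex_is_oa", "in_doaj")
    is_oa = any(flags.get(k) for k in sources)
    return is_oa, "OA Status: Best-effort (Unpaywall not configured)"

print(best_effort_oa({"europepmc_isOpenAccess": True}))
```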
Grade every claim by evidence strength:
| Tier | Label | Description | Bio Example | CS/ML Example |
|---|---|---|---|---|
| T1 | ★★★ Mechanistic | Direct experimental/formal evidence | CRISPR KO + rescue, RCT | Formal proof, controlled ablation with significance test |
| T2 | ★★☆ Functional | Functional study showing role | siRNA knockdown phenotype | Benchmark on standard dataset with baselines |
| T3 | ★☆☆ Association | Screen hit, correlation, observational | High-throughput screen, GWAS | Observational study, case study, anecdotal comparison |
| T4 | ☆☆☆ Mention | Review, text-mined, peripheral | Review article | Survey paper, blog post, workshop abstract |
In the report, label inline:
Target X regulates pathway Y [★★★: PMID:12345678] through direct
phosphorylation [★★☆: PMID:23456789].
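A small helper keeps the inline labels consistent; the tier-to-stars mapping follows the evidence table above (the function name is illustrative):

```python
TIER_STARS = {"T1": "★★★", "T2": "★★☆", "T3": "★☆☆", "T4": "☆☆☆"}

def cite(tier: str, pmid: str) -> str:
    """Render the report's inline evidence label, e.g. [★★★: PMID:12345678]."""
    return f"[{TIER_STARS[tier]}: PMID:{pmid}]"

print(f"Target X regulates pathway Y {cite('T1', '12345678')} through direct "
      f"phosphorylation {cite('T2', '23456789')}.")
```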
Per theme, summarize evidence quality:
### Theme: Lysosomal Function (47 papers)
**Evidence Quality**: Strong (32 mechanistic, 11 functional, 4 association)
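Per-theme summaries can be generated from the graded tiers; the Strong/Moderate/Limited cutoffs below are illustrative assumptions, not prescribed by the skill:

```python
from collections import Counter

def theme_summary(name: str, tiers: list) -> str:
    """Summarize a theme's evidence quality in the report's format.
    `tiers` is one tier code (T1..T4) per paper in the theme."""
    c = Counter(tiers)
    mech, func, assoc = c["T1"], c["T2"], c["T3"]
    n = len(tiers)
    # Illustrative cutoffs: mostly mechanistic -> Strong, etc.
    quality = ("Strong" if mech > n / 2
               else "Moderate" if mech + func > n / 2
               else "Limited")
    return (f"### Theme: {name} ({n} papers)\n"
            f"**Evidence Quality**: {quality} "
            f"({mech} mechanistic, {func} functional, {assoc} association)")

print(theme_summary("Lysosomal Function", ["T1"] * 32 + ["T2"] * 11 + ["T3"] * 4))
```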
| File | Mode | Always? |
|---|---|---|
[topic]_report.md | Full Deep-Research | Yes |
[topic]_factcheck_report.md | Factoid | Yes |
[topic]_bibliography.json | All modes | Yes |
[topic]_bibliography.csv | All modes | Yes |
methods_appendix.md | Any (only if requested) | No
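Emitting the bibliography pair is straightforward; a sketch with an assumed minimal record schema (the skill's actual fields may differ):

```python
import csv
import json

def write_bibliography(topic: str, records: list) -> None:
    """Write [topic]_bibliography.json and [topic]_bibliography.csv side by
    side from one list of record dicts. Fields shown are an assumed schema."""
    fields = ["pmid", "doi", "title", "year", "evidence_tier"]
    with open(f"{topic}_bibliography.json", "w", encoding="utf-8") as f:
        json.dump(records, f, indent=2, ensure_ascii=False)
    with open(f"{topic}_bibliography.csv", "w", newline="", encoding="utf-8") as f:
        writer = csv.DictWriter(f, fieldnames=fields, extrasaction="ignore")
        writer.writeheader()
        writer.writerows(records)

write_bibliography("demo_topic", [
    {"pmid": "12345678", "doi": "10.1000/xyz", "title": "Example paper",
     "year": 2024, "evidence_tier": "T1"},
])
```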
Create the report file immediately after Phase 0 with all 15 section headers (use template from REPORT_TEMPLATE.md). Then:
This ensures partial results are saved even if the process is interrupted.
Use the 15-section template from REPORT_TEMPLATE.md. Key sections adapt by domain:
See REPORT_TEMPLATE.md for full template, domain-specific adaptations, bibliography format, theme extraction protocol, and completeness checklist.
Brief progress updates (not search logs):
DO NOT expose: raw tool outputs, dedup counts, search round details, database-by-database results.
For factoid queries: ask (once) whether the user wants just the verified answer or a full report. Default to factoid mode.
- TOOL_NAMES_REFERENCE.md — Complete list of 123 tools with parameters
- REPORT_TEMPLATE.md — Full report template, domain adaptations, bibliography format, theme extraction, completeness checklist
- FULLTEXT_STRATEGY.md — Three-tier full-text verification strategy
- WORKFLOW.md — Compact workflow cheat-sheet
- EXAMPLES.md — Worked examples (ATP6V1A, TRAG collision, sparse target, drug query)

Weekly Installs: 221
GitHub Stars: 1.2K
First Seen: Feb 4, 2026
Security Audits: Gen Agent Trust Hub (Pass), Socket (Pass), Snyk (Warn)
Installed on: opencode (211), codex (209), gemini-cli (204), github-copilot (200), amp (193), kimi-cli (192)