devtu-optimize-skills by mims-harvard/tooluniverse
npx skills add https://github.com/mims-harvard/tooluniverse --skill devtu-optimize-skills
Best practices for high-quality research skills with evidence grading and source attribution.
Run `python3 -m tooluniverse.cli run <Tool> '<json>'` to verify.
Full details: references/optimization-patterns.md
| # | Principle | Practice |
|---|---|---|
| 1 | Tool Interface Verification | `get_tool_info()` before first call; maintain a corrections table |
| 2 | Foundation Data Layer | Query aggregators (Open Targets, PubChem) FIRST |
| 3 | Versioned Identifiers | Capture both ENSG00000123456 and the .12 version |
| 4 | Disambiguation First | Resolve IDs, detect collisions, build negative filters |
| 5 | Report-Only Output | Narrative in the report; methodology in an appendix only if asked |
| 6 | Evidence Grading | T1 (mechanistic) → T2 (functional) → T3 (association) → T4 (mention) |
| 7 | Quantified Completeness | Numeric minimums per section (>=20 PPIs, top 10 tissues) |
| 8 | Mandatory Checklist | All sections exist, even if "Limited evidence" |
| 9 | Aggregated Data Gaps | A single section consolidating all missing data |
| 10 | Query Strategy | High-precision seeds → citation expansion → collision-filtered broad queries |
| 11 | Tool Failure Handling | Primary → Fallback 1 → Fallback 2 → document as unavailable |
| 12 | Scalable Output | Narrative report + JSON/CSV bibliography |
| 13 | Synthesis Sections | Biological model + testable hypotheses, not just paper lists |
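Principle 6 can be sketched in a few lines of Python. This is an illustrative sketch, not part of the skill or of tooluniverse; the claim structure and evidence labels are assumptions:

```python
# Illustrative sketch of principle 6 (evidence grading); the claim
# structure and evidence labels are assumptions, not tooluniverse API.
EVIDENCE_TIERS = {"mechanistic": "T1", "functional": "T2",
                  "association": "T3", "mention": "T4"}
TIER_ORDER = ["T1", "T2", "T3", "T4"]

def grade_claim(evidence_types):
    """Grade a claim by its strongest supporting evidence type."""
    tiers = [EVIDENCE_TIERS[e] for e in evidence_types if e in EVIDENCE_TIERS]
    if not tiers:
        return "T4"  # ungraded evidence counts as a bare mention
    return min(tiers, key=TIER_ORDER.index)

# Label every claim, then sort so mechanistic evidence leads the report.
claims = [
    {"text": "correlates with trait Z", "evidence": ["association"]},
    {"text": "binds protein Y", "evidence": ["mention", "mechanistic"]},
]
for claim in claims:
    claim["grade"] = grade_claim(claim["evidence"])
claims.sort(key=lambda c: TIER_ORDER.index(c["grade"]))
```

Labeling each claim at collection time makes the final narrative sortable by tier for free.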
Phase -1: Tool Verification (check params)
Phase 0: Foundation Data (aggregator query)
Phase 1: Disambiguation (IDs, collisions, baseline)
Phase 2: Specialized Queries (fill gaps)
Phase 3: Report Synthesis (evidence-graded narrative)
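Phase 0 should capture identifiers in both bare and versioned form (principle 3), since some downstream tools want one and reject the other. A minimal sketch, with helper names that are mine rather than tooluniverse's:

```python
# Illustrative helpers for versioned Ensembl IDs (principle 3).
def split_ensembl_id(identifier: str):
    """Split 'ENSG00000123456.12' into ('ENSG00000123456', '12')."""
    base, sep, version = identifier.partition(".")
    return base, (version if sep else None)

def id_candidates(identifier: str):
    """Return both forms to try: some tools (e.g. GTEx) expect the
    versioned ID, others reject the version suffix."""
    base, version = split_ensembl_id(identifier)
    return [identifier, base] if version else [base]
```

If only the bare ID is known, the version should come from the foundation-layer query rather than be guessed.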
Full details: references/testing-standards.md
Critical rule: NEVER write skill docs without testing all tool calls first.
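The rule above can be made concrete as a tiny test harness that builds the documented CLI invocation before any doc is written. The corrections table, tool name, and parameter names here are hypothetical examples; only the `python3 -m tooluniverse.cli run <Tool> '<json>'` form comes from this page:

```python
import json
import shlex

# Hypothetical Phase -1 corrections table: maps the parameter name an
# author guessed to the name get_tool_info() actually reported.
PARAM_CORRECTIONS = {"GTEx_expression": {"gene": "gencodeId"}}

def build_cli_command(tool: str, params: dict) -> str:
    """Build the documented invocation after applying corrections:
    python3 -m tooluniverse.cli run <Tool> '<json>'."""
    fixed = {PARAM_CORRECTIONS.get(tool, {}).get(k, k): v
             for k, v in params.items()}
    return f"python3 -m tooluniverse.cli run {tool} {shlex.quote(json.dumps(fixed))}"
```

Running the emitted command for every tool a skill mentions, before the skill doc is written, is the test-driven workflow the rule asks for.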
| Anti-Pattern | Fix |
|---|---|
| "Search Log" reports | Keep methodology internal; report findings only |
| Missing disambiguation | Add collision detection; build negative filters |
| No evidence grading | Apply T1-T4 grades; label each claim |
| Empty sections omitted | Include with "None identified" |
| No synthesis | Add biological model + hypotheses |
| Silent failures | Document in Data Gaps; implement fallbacks |
| Wrong tool parameters | Verify via get_tool_info() before calling |
| GTEx returns nothing | Try versioned ID ENSG*.version |
| No foundation layer | Query aggregator first |
| Untested tool calls | Test-driven: test script FIRST |
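The "silent failures" fix and principle 11's fallback chain combine into one pattern: try each tool in order, and record a Data Gaps entry instead of failing quietly. A sketch with hypothetical callables, not tooluniverse API:

```python
def run_with_fallbacks(callers, data_gaps, label):
    """Try each (name, fn) in order; on total failure, append to the
    Data Gaps section instead of failing silently (principle 11)."""
    errors = []
    for name, fn in callers:
        try:
            result = fn()
            if result:              # an empty result also falls through
                return result
            errors.append(f"{name}: empty response")
        except Exception as exc:
            errors.append(f"{name}: {exc}")
    data_gaps.append(f"{label}: unavailable ({'; '.join(errors)})")
    return None
```

Because every failure path ends in either a result or a logged gap, the report's Data Gaps section writes itself.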
| Complaint | Fix |
|---|---|
| "Report too short" | Add Phase 0 foundation + Phase 1 disambiguation |
| "Too much noise" | Add collision filtering |
| "Can't tell what's important" | Add T1-T4 evidence tiers |
| "Missing sections" | Add mandatory checklist with minimums |
| "Too long/unreadable" | Separate narrative from JSON |
| "Just a list of papers" | Add synthesis sections |
| "Tool failed, no data" | Add retry + fallback chains |
---
name: [domain]-research
description: [What + when triggers]
---
# [Domain] Research
## Workflow
Phase -1: Tool Verification → Phase 0: Foundation → Phase 1: Disambiguate
→ Phase 2: Search → Phase 3: Report
## Phase -1: Tool Verification
[Parameter corrections table]
## Phase 0: Foundation Data
[Aggregator query]
## Phase 1: Disambiguation
[IDs, collisions, baseline]
## Phase 2: Specialized Queries
[Query strategy, fallbacks]
## Phase 3: Report Synthesis
[Evidence grading, mandatory sections]
## Output Files
- [topic]_report.md, [topic]_bibliography.json
## Quantified Minimums
[Numbers per section]
## Completeness Checklist
[Required sections with checkboxes]
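The template's completeness checklist and quantified minimums can be enforced mechanically before a report ships. A sketch; the section names and minimums below are hypothetical examples, not values from the skill:

```python
# Illustrative completeness check (mandatory sections + quantified
# minimums). Section names and thresholds are example placeholders.
MINIMUMS = {"Protein Interactions": 20, "Tissue Expression": 10}
MANDATORY = ["Synthesis", "Data Gaps"] + list(MINIMUMS)

def check_report(sections: dict) -> list:
    """Return human-readable violations; an empty list means complete."""
    problems = [f"missing section: {name}"
                for name in MANDATORY if name not in sections]
    for name, minimum in MINIMUMS.items():
        count = len(sections.get(name, []))
        if name in sections and count < minimum:
            problems.append(f"{name}: {count} items, need >= {minimum}")
    return problems
```

Sections with limited evidence still appear (with a "None identified" note), so they pass the presence check without inflating counts.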
Weekly Installs: 151
GitHub Stars: 1.2K
First Seen: Feb 4, 2026
Security Audits: Gen Agent Trust Hub (Pass), Socket (Pass), Snyk (Warn)
Installed on: codex (146), opencode (145), gemini-cli (141), github-copilot (139), amp (134), kimi-cli (133)