Install: `npx skills add https://github.com/zenobi-us/dotfiles --skill deep-researcher`
Deep research IS systematic information verification with evidence trails.
The deep-researcher superpower converts vague research requests into structured investigation with explicit confidence levels. Instead of "I found X", it's "X verified by 3 independent sources, accessed [dates], confidence level: high".
Core principle: Research without verification is just collection. Verification without evidence is faith.
Use deep researcher when you need:
Don't use when:
Deep researcher needs three things:
Topic (required) - Clear research question
Storage Prefix (required) - Where output files go
Examples: `/research/auth-strategies`, `./findings/database-performance`, `~/projects/typescript-research`
Things to Avoid (optional) - Topics or sources to exclude
Rejection Protocol: Missing topic or storage prefix? Researcher rejects request. You must also reject vague requests. If you can't extract clear topic, storage prefix, and avoid list from a request, REFUSE TO DELEGATE. State back what you'd need to proceed. Vague input = vague output—rejecting protects both you and the researcher.
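The skill's actual implementation isn't shown on this page; as an illustration only, the rejection protocol amounts to a precondition check on the two required inputs. All names here (`ResearchRequest`, `validate_request`) are hypothetical:

```python
from dataclasses import dataclass, field

@dataclass
class ResearchRequest:
    """Illustrative request shape; field names are assumptions, not the skill's API."""
    topic: str = ""
    storage_prefix: str = ""
    things_to_avoid: list = field(default_factory=list)  # optional

def validate_request(req: ResearchRequest) -> tuple[bool, str]:
    """Refuse to delegate when required inputs are missing, stating what's needed."""
    missing = []
    if not req.topic.strip():
        missing.append("a clear research topic")
    if not req.storage_prefix.strip():
        missing.append("a storage prefix for output files")
    if missing:
        return False, "Rejected: need " + " and ".join(missing) + " to proceed."
    return True, "Accepted."

# Vague input = vague output: this request lacks a storage prefix, so it is rejected.
ok, msg = validate_request(ResearchRequest(topic="Compare auth strategies"))
```

The point of returning the missing-items message is to "state back what you'd need to proceed" rather than silently failing.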
Under pressure to skip this? Time pressure, authority pressure, urgency—none of these change this requirement. 10 minutes structuring saves 2+ hours of re-research.
Break research question into specific sub-questions. Identify primary vs. secondary sources. Define verification strategy before searching.
Critical: If request comes with implicit bias (e.g., "prove this was right"), reframe it objectively. Research meant to validate past decisions is corrupted at intake. Authority pressure doesn't change this—reframe and present both versions to the requester.
Example: "Compare auth strategies" becomes:
When reframing: Director says "Prove our choice was right" → You propose "Compare our choice vs. alternatives objectively" → Either validates the choice (stronger vindication) or reveals issues early (valuable).
Organize findings by theme. Note agreements and disagreements. Identify patterns, outliers, contradictions.
For each major claim:
| Element | What |
|---|---|
| Source URL | Exact location of information |
| Access Date | When retrieved |
| Source Type | Academic, official docs, news, community, blog |
| Author/Publisher | Who created this |
| Confidence | high (3+ independent agreement), medium (2 sources), low (single source) |
| Contradictions | Any sources disagreeing |
Handling contradictions: When sources disagree, investigate why. Allocate 1-2 hours to understand context-dependence. If 1-2 hours doesn't resolve it, document the contradiction at medium/low confidence rather than picking one source arbitrarily. Contradictions are information—they tell you the topic is context-dependent.
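The confidence matrix and the contradiction-handling rule can be read together as a simple mapping from independent-source count to a label, with a downgrade when a disagreement stays unresolved. A sketch under that reading (the exact downgrade rule is an assumption, not stated by the skill):

```python
def confidence_level(independent_sources: int, unresolved_contradiction: bool = False) -> str:
    """Map independent-source count to a confidence label per the matrix above:
    3+ sources -> high, 2 -> medium, 1 -> low."""
    if independent_sources >= 3:
        level = "high"
    elif independent_sources == 2:
        level = "medium"
    else:
        level = "low"
    # Assumed rule: an unresolved contradiction caps confidence at medium,
    # documenting the disagreement instead of arbitrarily picking one source.
    if unresolved_contradiction and level == "high":
        level = "medium"
    return level
```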
Under exhaustion pressure? The skill doesn't make fatigue disappear. What it does is make lazy source-picking shameful. Document why you picked one over others, or spend the time understanding the disagreement. Don't pretend "they all have merits" is research.
Research writes to provided directory:
<prefix>/
├── <topic>-thinking.md # Reasoning, methodology, decisions
├── <topic>-research.md # Raw findings organized by theme
├── <topic>-verification.md # Evidence of verification, source audit
├── <topic>-insights.md # Key insights, patterns, implications
└── <topic>-summary.md # Executive summary with conclusions
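Because the five deliverables follow one naming scheme, their paths can be derived from the storage prefix and a topic slug. A minimal sketch (the helper name is hypothetical):

```python
from pathlib import Path

SUFFIXES = ("thinking", "research", "verification", "insights", "summary")

def output_paths(prefix: str, topic: str) -> list[Path]:
    """Build the five deliverable paths under the provided storage prefix."""
    return [Path(prefix) / f"{topic}-{suffix}.md" for suffix in SUFFIXES]

paths = output_paths("/research/auth-strategies", "auth-strategies")
```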
Your research process, decisions made, rabbit holes explored, assumptions, limitations.
Findings organized by key themes or questions. Direct quotes with source attribution. Publication dates. Both supporting and contradicting evidence.
Source credibility matrix. Verification approach for each claim. Cross-reference patterns. Confidence levels. Gaps or unverifiable claims. URLs with access dates.
Patterns synthesized across sources. Emerging consensus vs. outlier views. Surprising findings. Areas needing further research.
1-2 paragraph executive summary. Key findings with confidence levels. Main limitations or caveats. Recommendations.
No speculation - Flag anything not directly sourced
No synthesis without evidence - Don't combine sources into novel claims
No appeals to authority - Verify claims, not just who said them. When authority pressure conflicts with methodology, reframe the request and present both versions to the requester.
Transparency - Show your work, readers see your reasoning
Humility - Clearly state limitations and uncertainty areas
Recency - Always note if information is outdated or superseded
All of these mean: Stop. You're about to corrupt the research. Structure first, then research. The framework protects research quality rather than impeding it.
❌ Single-source claims "React Server Components are better" (from one blog post) ✅ Fix: "React Server Components have advantages for X (React docs, Vercel article, community discussion agree on this aspect)"
❌ Missing confidence levels Treating all findings as equally solid ✅ Fix: Mark what's well-verified (high confidence) vs. emerging (medium/low)
❌ Skipping contradictions "Everyone agrees on X" ✅ Fix: Document where sources disagree and why
❌ Marketing-sourced findings Relying on vendor materials as primary evidence ✅ Fix: Verify claims in neutral sources (official docs, independent analysis)
❌ Outdated information "Best practice from 2020" without noting if superseded ✅ Fix: Check if newer sources contradict or update this
For each major claim, ALWAYS provide:
Encounter highly specialized technical topics? Load relevant expert skills.
Conflicting information that can't be resolved? Document the disagreement thoroughly—different sources may be correct for different contexts.
Need statistical analysis? Use bash tools appropriately.
From RED-GREEN-REFACTOR testing (2025-12-13):
Real impact: Saves 10-20 hours per research project vs. manual approach. Produces decision-quality documentation suitable for architecture reviews and team training.
Weekly Installs: 60
GitHub Stars: 44
First Seen: Jan 24, 2026
Security Audits: Gen Agent Trust Hub: Pass, Socket: Pass, Snyk: Warn
Installed on: opencode (56), codex (55), gemini-cli (53), github-copilot (51), cursor (50), amp (50)