research-synthesis by dhruvbaldawa/ccconfigs
npx skills add https://github.com/dhruvbaldawa/ccconfigs --skill research-synthesis
Use when:
Skip when:
Only use REAL research from MCP tools. Never invent:
If no data found:
❌ BAD: "Research shows 70% of OKR implementations fail..."
✅ GOOD: "I don't have data on OKR failure rates. Should I research using Perplexity?"
Before adding to braindump:
| Tool | Use For | Examples |
|---|---|---|
| WebFetch | Specific URLs, extracting article content, user-mentioned sources | User: "Check this article: https://..." |
| WebSearch | Recent trends/news, statistical data, multiple perspectives, general knowledge gaps | "Recent research on OKR failures", "Companies that abandoned agile" |
| Parallel Search | Advanced web search with agentic mode, fact-checking, competitive intelligence, multi-source synthesis, deep URL extraction | Complex queries needing synthesis, validation across sources, extracting full content from URLs |
| Perplexity | Broad surveys when WebSearch/Parallel are insufficient | Industry consensus, statistical data, multiple perspectives |
| Context7 | Library/framework docs, API references, technical specifications | "How does React useEffect work?", "Check latest API docs" |
Decision tree:
Need research?
├─ Specific URL? → WebFetch → Parallel Search
├─ Technical docs/APIs? → Context7
├─ General search? → WebSearch → Parallel Search → Perplexity
└─ Complex synthesis? → Parallel Search
Rationale: Built-in tools (WebFetch, WebSearch) are faster and always available. Parallel Search provides an advanced agentic mode for synthesis and deep content extraction. Perplexity offers broad surveys when needed. Context7 is for official docs only.
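The decision tree above can be sketched as a small routing function. This is an illustrative sketch only: `Query`, its fields, and `choose_tools` are hypothetical names, not part of the skill; the tool names and ordering come from the tables and tree above.

```python
# Hypothetical sketch of the tool-routing decision tree.
# Query and choose_tools are illustrative; only the tool names
# and escalation order come from the skill's decision tree.

from dataclasses import dataclass


@dataclass
class Query:
    text: str
    has_url: bool = False            # user supplied a specific URL
    is_technical_docs: bool = False  # library/framework/API question
    needs_synthesis: bool = False    # multi-source synthesis required


def choose_tools(q: Query) -> list[str]:
    """Return tools to try, in order, per the decision tree."""
    if q.has_url:
        # Fetch the URL first; escalate to deep extraction if needed
        return ["WebFetch", "Parallel Search"]
    if q.is_technical_docs:
        # Official docs only
        return ["Context7"]
    if q.needs_synthesis:
        # Agentic multi-source synthesis
        return ["Parallel Search"]
    # General search: built-ins first, escalate when insufficient
    return ["WebSearch", "Parallel Search", "Perplexity"]
```

The ordering encodes the rationale: cheaper, always-available tools come first in each branch, and broader tools are fallbacks.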
❌ Bad (data dump):
Research shows:
- Stat 1
- Stat 2
- Stat 3
✅ Good (synthesized narrative):
Found pattern: 3 recent studies show 60-70% OKR failure rates.
- HBR: 70% failure, metric gaming primary cause
- McKinsey: >100 OKRs correlate with diminishing returns
- Google: Shifted from strict OKRs to "goals and signals"
Key insight: Failure correlates with treating OKRs as compliance exercise.
## Research
### OKR Implementation Failures
60-70% failure rate (HBR, McKinsey). Primary causes: metric gaming, checkbox compliance.
**Sources:**
- HBR: "Why OKRs Don't Work" - 70% fail to improve performance
- McKinsey: Survey of 500 companies
- Google blog: Evolution of goals system
**Key Quote:**
> "When OKRs become performance evaluation, they stop being planning."
> - John Doerr, Measure What Matters
Research flows naturally into conversation:
Proactive: "That's a strong claim - let me check the data... [uses tool] Good intuition! Found 3 confirming studies. Adding to braindump."
Requested: "Find X... [uses tool] Found several cases. Should I add all of them to the braindump, or focus on one approach?"
During drafting: "Need a citation... [uses tool] Found supporting research. Adding to draft with attribution."
Always ask before updating (unless context is clear): "Found X, Y, Z. Add to braindump under Research?"
Update sections:
Before adding to braindump:
For detailed examples, see reference/examples.md
Weekly Installs: 1
Repository: https://github.com/dhruvbaldawa/ccconfigs
GitHub Stars: 20
First Seen: 1 day ago
Security Audits: Gen Agent Trust Hub: Pass, Socket: Fail, Snyk: Warn
Installed on: zencoder (1), amp (1), cline (1), openclaw (1), opencode (1), cursor (1)