research-external by parcadei/continuous-claude-v3
npx skills add https://github.com/parcadei/continuous-claude-v3 --skill research-external
Research external sources (documentation, web, APIs) for libraries, best practices, and general topics.
Note: The current year is 2025. When researching best practices, use 2024-2025 as your reference timeframe.
/research-external <focus> [options]
If the user types just /research-external with no arguments, or with incomplete arguments, guide them through the following question flow, using AskUserQuestion for each phase.
question: "What kind of information do you need?"
header: "Type"
options:
- label: "How to use a library/package"
description: "API docs, examples, patterns"
- label: "Best practices for a task"
description: "Recommended approaches, comparisons"
- label: "General topic research"
description: "Comprehensive multi-source search"
- label: "Compare options/alternatives"
description: "Which tool/library/approach is best"
Mapping:
question: "What specifically do you want to research?"
header: "Topic"
options: [] # Free text input
Examples of good answers:
If user selected library focus:
question: "Which package registry?"
header: "Registry"
options:
- label: "npm (JavaScript/TypeScript)"
description: "Node.js packages"
- label: "PyPI (Python)"
description: "Python packages"
- label: "crates.io (Rust)"
description: "Rust crates"
- label: "Go modules"
description: "Go packages"
Then ask for the specific library name if it has not already been provided.
question: "How thorough should the research be?"
header: "Depth"
options:
- label: "Quick answer"
description: "Just the essentials"
- label: "Thorough research"
description: "Multiple sources, examples, edge cases"
Mapping:
question: "What should I produce?"
header: "Output"
options:
- label: "Summary in chat"
description: "Tell me what you found"
- label: "Research document"
description: "Write to thoughts/shared/research/"
- label: "Handoff for implementation"
description: "Prepare context for coding"
Mapping:
Based on your answers, I'll research:
**Focus:** library
**Topic:** "Prisma ORM connection pooling"
**Library:** prisma (npm)
**Depth:** thorough
**Output:** doc
Proceed? [Yes / Adjust settings]
| Focus | Primary Tool | Purpose |
|---|---|---|
| library | nia-docs | API docs, usage patterns, code examples |
| best-practices | perplexity-search | Recommended approaches, patterns, comparisons |
| general | All MCP tools | Comprehensive multi-source research |
| Option | Values | Description |
|---|---|---|
| --topic | "string" | Required. The topic/library/concept to research |
| --depth | shallow, thorough | Search depth (default: shallow) |
| --output | handoff, doc | Output format (default: doc) |
| --library | "name" | For library focus: specific package name |
| --registry | npm, py_pi, crates, go_modules | For library focus: package registry |
Extract from user input:
FOCUS=$1 # library | best-practices | general
TOPIC="..." # from --topic
DEPTH="shallow" # from --depth (default: shallow)
OUTPUT="doc" # from --output (default: doc)
LIBRARY="..." # from --library (optional)
REGISTRY="npm" # from --registry (default: npm)
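The extraction step above can be sketched as a small parsing loop. This is a hypothetical sketch, not part of the skill itself: the flag names and defaults match the options table, but the loop structure is an assumption.

```shell
#!/usr/bin/env bash
# Hypothetical argument extraction for /research-external <focus> [options].
# Flag names and defaults mirror the options table; the loop is an assumption.
FOCUS="${1:-general}"      # library | best-practices | general
shift || true
TOPIC=""; LIBRARY=""
DEPTH="shallow"            # default per options table
OUTPUT="doc"               # default per options table
REGISTRY="npm"             # default per options table

while [ $# -gt 0 ]; do
  case "$1" in
    --topic)    TOPIC="$2";    shift 2 ;;
    --depth)    DEPTH="$2";    shift 2 ;;
    --output)   OUTPUT="$2";   shift 2 ;;
    --library)  LIBRARY="$2";  shift 2 ;;
    --registry) REGISTRY="$2"; shift 2 ;;
    *) echo "unknown option: $1" >&2; shift ;;
  esac
done

echo "focus=$FOCUS depth=$DEPTH output=$OUTPUT registry=$REGISTRY"
```

With no arguments this falls back to the defaults, which is what the interactive question flow then fills in.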
**library**
Primary tool: nia-docs - Find API documentation, usage patterns, code examples.
# Semantic search in package
(cd $CLAUDE_OPC_DIR && uv run python -m runtime.harness scripts/mcp/nia_docs.py \
--package "$LIBRARY" \
--registry "$REGISTRY" \
--query "$TOPIC" \
--limit 10)
# If thorough depth, also grep for specific patterns
(cd $CLAUDE_OPC_DIR && uv run python -m runtime.harness scripts/mcp/nia_docs.py \
--package "$LIBRARY" \
--grep "$TOPIC")
# Supplement with official docs if URL known
(cd $CLAUDE_OPC_DIR && uv run python -m runtime.harness scripts/mcp/firecrawl_scrape.py \
--url "https://docs.example.com/api/$TOPIC" \
--format markdown)
Thorough depth additions:
**best-practices**
Primary tool: perplexity-search - Find recommended approaches, patterns, anti-patterns.
# AI-synthesized research (sonar-pro)
(cd $CLAUDE_OPC_DIR && uv run python scripts/mcp/perplexity_search.py \
--research "$TOPIC best practices 2024 2025")
# If comparing alternatives
(cd $CLAUDE_OPC_DIR && uv run python scripts/mcp/perplexity_search.py \
--reason "$TOPIC vs alternatives - which to choose?")
Thorough depth additions:
# Chain-of-thought for complex decisions
(cd $CLAUDE_OPC_DIR && uv run python scripts/mcp/perplexity_search.py \
--reason "$TOPIC tradeoffs and considerations 2025")
# Deep comprehensive research
(cd $CLAUDE_OPC_DIR && uv run python scripts/mcp/perplexity_search.py \
--deep "$TOPIC comprehensive guide 2025")
# Recent developments
(cd $CLAUDE_OPC_DIR && uv run python scripts/mcp/perplexity_search.py \
--search "$TOPIC latest developments" \
--recency month --max-results 5)
**general**
Use ALL available MCP tools - comprehensive multi-source research.
Step 2a: Library documentation (nia-docs)
(cd $CLAUDE_OPC_DIR && uv run python -m runtime.harness scripts/mcp/nia_docs.py \
--search "$TOPIC")
Step 2b: Web research (perplexity)
(cd $CLAUDE_OPC_DIR && uv run python scripts/mcp/perplexity_search.py \
--research "$TOPIC")
Step 2c: Specific documentation (firecrawl)
# Scrape relevant documentation pages found in perplexity results
(cd $CLAUDE_OPC_DIR && uv run python -m runtime.harness scripts/mcp/firecrawl_scrape.py \
--url "$FOUND_DOC_URL" \
--format markdown)
Thorough depth additions:
Combine results from all sources:
**doc** (default)
Write to: thoughts/shared/research/YYYY-MM-DD-{topic-slug}.md
---
date: {ISO timestamp}
type: external-research
topic: "{topic}"
focus: {focus}
sources: [nia, perplexity, firecrawl]
status: complete
---
# Research: {Topic}
## Summary
{2-3 sentence summary of findings}
## Key Findings
### Library Documentation
{From nia-docs - API references, usage patterns}
### Best Practices (2024-2025)
{From perplexity - recommended approaches}
### Code Examples
```{language}
// Working examples found
```
| Option | Pros | Cons |
|---|---|---|
| {Option 1} | ... | ... |
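The output path for the doc format (thoughts/shared/research/YYYY-MM-DD-{topic-slug}.md) can be derived with a short shell sketch. The slug rules here (lowercase, non-alphanumerics collapsed to hyphens) are an assumption; the skill does not specify them.

```shell
# Hypothetical slug + path derivation for the doc output format.
# Slugging rules (lowercase, hyphen-collapsed) are an assumption.
TOPIC="Prisma ORM connection pooling"

SLUG=$(printf '%s' "$TOPIC" \
  | tr '[:upper:]' '[:lower:]' \
  | tr -cs 'a-z0-9' '-' \
  | sed 's/^-//; s/-$//')

DOC_PATH="thoughts/shared/research/$(date +%F)-${SLUG}.md"
echo "$DOC_PATH"
```

For the confirmation example earlier ("Prisma ORM connection pooling"), this yields a path like thoughts/shared/research/2025-06-01-prisma-orm-connection-pooling.md.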
**handoff**
Write to: thoughts/shared/handoffs/{session}/research-{topic-slug}.yaml
---
type: research-handoff
ts: {ISO timestamp}
topic: "{topic}"
focus: {focus}
status: complete
---
goal: Research {topic} for implementation planning
sources_used: [nia, perplexity, firecrawl]
findings:
key_concepts:
- {concept1}
- {concept2}
code_examples:
- pattern: "{pattern name}"
code: |
// example code
best_practices:
- {practice1}
- {practice2}
pitfalls:
- {pitfall1}
recommendations:
- {rec1}
- {rec2}
sources:
- title: "{Source 1}"
url: "{url1}"
type: {documentation|article|reference}
for_plan_agent: |
Based on research, the recommended approach is:
1. {Step 1}
2. {Step 2}
Key libraries: {lib1}, {lib2}
Avoid: {pitfall1}
Research Complete
Topic: {topic}
Focus: {focus}
Output: {path to file}
Key findings:
- {Finding 1}
- {Finding 2}
- {Finding 3}
Sources: {N} sources cited
{If handoff output:}
Ready for plan-agent to continue.
If an MCP tool fails (API key missing, rate limited, etc.):
Log the failure in output:
tool_status:
nia: success
perplexity: failed (rate limited)
firecrawl: skipped
Continue with other sources - partial results are valuable
Set status appropriately:
- complete - All requested tools succeeded
- partial - Some tools failed, findings still useful
- failed - No useful results obtained
Note gaps in findings:
## Gaps
- Perplexity unavailable - best practices section limited to nia results
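The failure-handling policy above (record per-tool status, keep going, grade the overall status) can be sketched as a wrapper around each tool invocation. The run_tool helper and status-file layout are assumptions for illustration, not part of the skill:

```shell
# Hypothetical degradation wrapper: record each tool's outcome, never abort.
STATUS_FILE=$(mktemp)

run_tool() {  # usage: run_tool <name> <command...>
  local name="$1"; shift
  if "$@" >/dev/null 2>&1; then
    echo "$name: success" >> "$STATUS_FILE"
  else
    echo "$name: failed"  >> "$STATUS_FILE"
  fi
}

run_tool nia true          # stand-in for the nia-docs invocation
run_tool perplexity false  # stand-in for a rate-limited perplexity call
run_tool firecrawl true    # stand-in for the firecrawl scrape

# Overall status: complete if nothing failed, partial otherwise
if grep -q failed "$STATUS_FILE"; then OVERALL=partial; else OVERALL=complete; fi
echo "status: $OVERALL"
```

The resulting tool_status lines can be pasted directly into the output block shown above, and OVERALL maps onto the complete/partial/failed statuses.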
/research-external library --topic "dependency injection" --library fastapi --registry py_pi
/research-external best-practices --topic "error handling in Python async" --depth thorough
/research-external general --topic "OAuth2 PKCE flow implementation" --depth thorough --output handoff
/research-external library --topic "useEffect cleanup" --library react
| After Research | Use Skill | For |
|---|---|---|
| --output handoff | plan-agent | Create implementation plan |
| Code examples found | implement_task | Direct implementation |
| Architecture decision | create_plan | Detailed planning |
| Library comparison | Present to user | Decision making |
Requirements:
- NIA_API_KEY or nia server in mcp_config.json
- PERPLEXITY_API_KEY in environment or ~/.claude/.env
- FIRECRAWL_API_KEY and firecrawl server in mcp_config.json
This skill does not research your local codebase - use research-codebase or scout for that.