npx skills add https://github.com/boshu2/agentops --skill research
Quick Ref: Deep codebase exploration with multi-angle analysis. Output: `.agents/research/*.md`
YOU MUST EXECUTE THIS WORKFLOW. Do not just describe it.
CLI dependencies: ao (knowledge injection — optional). If ao is unavailable, skip prior knowledge search and proceed with direct codebase exploration.
| Flag | Default | Description |
|---|---|---|
| `--auto` | off | Skip human approval gate. Used by `/rpi --auto` for fully autonomous lifecycle. |
Given `/research <topic> [--auto]`:

```shell
mkdir -p .agents/research
```
First, search and inject existing knowledge (if `ao` is available):

```shell
# Pull relevant prior knowledge for this topic
ao lookup --query "<topic>" --limit 5 2>/dev/null || \
  ao search "<topic>" 2>/dev/null || \
  echo "ao not available, skipping knowledge search"
```
Review ao search results: If ao returns relevant learnings or patterns, incorporate them into your research strategy.
Search ALL local knowledge locations by content (not just filename):
Use Grep to search every knowledge directory for the topic. This catches learnings from /retro, brainstorms, and plans — not just research artifacts.
```shell
# Search all knowledge locations by content
for dir in research learnings knowledge patterns retros plans brainstorm; do
  grep -r -l -i "<topic>" .agents/${dir}/ 2>/dev/null
done

# Search global patterns (cross-repo knowledge)
grep -r -l -i "<topic>" ~/.claude/patterns/ 2>/dev/null
```
If matches are found, read the relevant files with the Read tool before proceeding to exploration. Prior knowledge prevents redundant investigation.
Before launching the explore agent, detect which backend is available:
- `spawn_agent` is available → log "Backend: codex-sub-agents"
- `TeamCreate` is available → log "Backend: claude-native-teams"
- `skill` tool is read-only (OpenCode) → log "Backend: opencode-subagents"
- `Task` is available → log "Backend: background-task-fallback"
- Otherwise → log "Backend: inline (no spawn available)"

Record the selected backend — it will be included in the research output document for traceability.
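The detection cascade above can be sketched as a shell-style fallback chain. `have_tool` and `AVAILABLE_TOOLS` are hypothetical stand-ins; in practice the agent probes which tools its runtime actually exposes:

```shell
# Hypothetical probe: does the runtime expose a tool by this name?
have_tool() {
  case " $AVAILABLE_TOOLS " in *" $1 "*) return 0 ;; *) return 1 ;; esac
}

# First capability that matches wins; inline is the final fallback.
detect_backend() {
  if have_tool spawn_agent; then echo "codex-sub-agents"
  elif have_tool TeamCreate; then echo "claude-native-teams"
  elif have_tool skill_readonly; then echo "opencode-subagents"
  elif have_tool Task; then echo "background-task-fallback"
  else echo "inline (no spawn available)"
  fi
}

AVAILABLE_TOOLS="Task Read Grep"
echo "Backend: $(detect_backend)"   # → Backend: background-task-fallback
```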
Read the matching backend reference for concrete tool call examples:

- `../shared/references/claude-code-latest-features.md`
- `../shared/references/backend-codex-subagents.md`
- `../shared/references/backend-claude-teams.md`
- `../shared/references/backend-background-tasks.md`
- `../shared/references/backend-inline.md`

YOU MUST DISPATCH AN EXPLORATION AGENT NOW. Select the backend using capability detection:
- `spawn_agent` is available → Codex sub-agent
- `TeamCreate` is available → Claude native team (Explore agent)
- `skill` tool is read-only (OpenCode) → OpenCode subagent — `task(subagent_type="explore", description="Research: <topic>", prompt="<explore prompt>")`

Use this prompt for whichever backend is selected:
Thoroughly investigate: <topic>
Discovery tiers (execute in order, skip if source unavailable):
Tier 1 — Code-Map (fastest, authoritative):
Read docs/code-map/README.md → find <topic> category
Read docs/code-map/{feature}.md → get exact paths and function names
Skip if: no docs/code-map/ directory
Tier 2 — Semantic Search (conceptual matches):
mcp__smart-connections-work__lookup query="<topic>" limit=10
Skip if: MCP not connected
Tier 2.5 — Git History (recent changes and decision context):
git log --oneline -30 -- <topic-related-paths> # scoped to relevant paths, cap 30 lines
git log --all --oneline --grep="<topic>" -10 # cap 10 matches
git blame <key-file> | grep -i "<topic>" | head -20 # cap 20 lines
Skip if: not a git repo, no relevant history, or <topic> too broad (>100 matches)
NEVER: git log on full repo without -- path filter (same principle as Tier 3 scoping)
NOTE: This is git commit history, not session history. For session/handoff history, use /trace.
Tier 3 — Scoped Search (keyword precision):
Grep("<topic>", path="<specific-dir>/") # ALWAYS scope to a directory
Glob("<specific-dir>/**/*.py") # ALWAYS scope to a directory
NEVER: Grep("<topic>") or Glob("**/*.py") on full repo — causes context overload
Tier 4 — Source Code (verify from signposts):
Read files identified by Tiers 1-3 (including git history leads from Tier 2.5)
Use function/class names, not line numbers
Tier 5 — Prior Knowledge (may be stale):
Search ALL .agents/ knowledge dirs by content:
for dir in research learnings knowledge patterns retros plans brainstorm; do
grep -r -l -i "<topic>" .agents/${dir}/ 2>/dev/null
done
Read matched files. Cross-check findings against current source.
Tier 6 — External Docs (last resort):
WebSearch for external APIs or standards
Only when Tiers 1-5 are insufficient
Return a detailed report with:
- Key files found (with paths)
- How the system works
- Important patterns or conventions
- Any issues or concerns
Cite specific file:line references for all claims.
If your runtime supports spawning parallel subagents, spawn one or more research agents with the exploration prompt. Each agent explores independently and writes findings to .agents/research/.
If no multi-agent capability is available, perform the exploration inline in the current session using file reading, grep, and glob tools directly.
For thorough research, perform quality validation:
Check: Did we look everywhere we should? Any unexplored areas?
Check: Do we UNDERSTAND the critical parts? HOW and WHY, not just WHAT?
Check: What DON'T we know that we SHOULD know?
Check: What assumptions are we building on? Are they verified?
After the Explore agent and validation swarm return, write findings to: .agents/research/YYYY-MM-DD-<topic-slug>.md
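As a sketch, the date-slug portion of that filename can be derived with standard shell tools (the normalization shown, lowercasing and collapsing non-alphanumeric runs to hyphens, is an assumption about the exact slug rules):

```shell
# Hypothetical helper: turn a topic into .agents/research/YYYY-MM-DD-<topic-slug>.md
topic="Authentication System"

# Lowercase, replace non-alphanumeric runs with "-", trim edge hyphens.
slug=$(printf '%s' "$topic" \
  | tr '[:upper:]' '[:lower:]' \
  | tr -cs 'a-z0-9' '-' \
  | sed 's/^-*//;s/-*$//')

outfile=".agents/research/$(date +%F)-${slug}.md"
echo "$outfile"   # e.g. .agents/research/2026-02-13-authentication-system.md
```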
Use this format:

```markdown
---
id: research-YYYY-MM-DD-<topic-slug>
type: research
date: YYYY-MM-DD
---

# Research: <Topic>

**Backend:** <codex-sub-agents | claude-native-teams | background-task-fallback | inline>
**Scope:** <what was investigated>

## Summary

<2-3 sentence overview>

## Key Files

| File | Purpose |
|------|---------|
| path/to/file.py | Description |

## Findings

<detailed findings with file:line citations>

## Recommendations

<next steps or actions>
```
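A lightweight self-check that a drafted report follows this template can be sketched in shell. `check_report` and the citation regex are illustrative, not part of the workflow:

```shell
# Sketch: verify a report has the template's required sections and
# at least one file:line citation. Prints what is missing; exit 0 if clean.
check_report() {
  report="$1"
  ok=0
  for section in "## Summary" "## Key Files" "## Findings" "## Recommendations"; do
    grep -qF "$section" "$report" || { echo "MISSING: $section"; ok=1; }
  done
  # Rough file:line pattern, e.g. src/auth/middleware.py:42
  grep -qE '[A-Za-z0-9_./-]+:[0-9]+' "$report" \
    || { echo "WARN: no file:line citations"; ok=1; }
  return $ok
}

# Example (hypothetical path):
# check_report .agents/research/2026-02-13-authentication-system.md
```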
Skip this step if the `--auto` flag is set. In auto mode, proceed directly to Step 7.
Use the AskUserQuestion tool:

```yaml
Tool: AskUserQuestion
Parameters:
  questions:
    - question: "Research complete. Approve to proceed to planning?"
      header: "Gate 1"
      options:
        - label: "Approve"
          description: "Research is sufficient, proceed to /plan"
        - label: "Revise"
          description: "Need deeper research on specific areas"
        - label: "Abandon"
          description: "Stop this line of investigation"
      multiSelect: false
```
Wait for approval before reporting completion.
Tell the user:

- Run `/plan` to create an implementation plan
- Where the `.agents/research/` artifact was written (findings cite `file:line` references)
**User says:** `/research "authentication system"`
**What happens:** Writes `.agents/research/2026-02-13-authentication-system.md`
**Result:** Detailed report identifying auth middleware location, session handling, and token validation patterns.

**User says:** `/research "cache implementation"`
**What happens:** Writes `.agents/research/2026-02-13-cache-implementation.md`
**Result:** Summary of cache strategy, TTL settings, and eviction policies with file references.

**User says:** `/research "payment processing flow"`
**What happens:** Writes a dated research artifact to `.agents/research/`
**Result:** End-to-end payment flow diagram with file paths and critical decision points.
| Problem | Cause | Solution |
|---|---|---|
| Research too shallow | Default exploration depth insufficient for the topic | Re-run with broader scope, or specify additional search areas |
| Research output too large | Exploration covered too many tangential areas | Narrow the goal to a specific question rather than a broad topic |
| Missing file references | Codebase has changed since last exploration, or files are in unexpected locations | Use Glob to verify file locations before citing them. Always use absolute paths |
| Auto mode skips important areas | Automated exploration prioritizes breadth over depth | Remove the `--auto` flag to enable the human approval gate for guided exploration |
| Explore agent times out | Topic too broad for a single exploration pass | Split into smaller, focused topics (e.g., "auth flow" vs. "entire auth system") |
| No backend available for spawning | Running in an environment without Task or TeamCreate support | Research runs inline — still functional but slower |
**Weekly Installs:** 236
**Repository:** github.com/boshu2/agentops
**GitHub Stars:** 197
**First Seen:** Feb 2, 2026
**Security Audits:** Gen Agent Trust Hub: Pass · Socket: Pass · Snyk: Warn
**Installed on:** opencode (233), codex (230), gemini-cli (228), github-copilot (227), cursor (224), kimi-cli (220)