npx skills add https://github.com/jezweb/claude-skills --skill brains-trust
Consult other leading AI models for a second opinion. Not limited to code — works for architecture, strategy, prompting, debugging, writing, or any question where a fresh perspective helps.
If the user triggers this skill without specifying what to consult about, apply these defaults:
Model choice: pick current models from the live list at models.flared.au, and prefer provider diversity (e.g. one Google + one OpenAI, or one Qwen + one Google); never use two models from the same provider.

| Trigger | Default pattern | Default scope |
|---|---|---|
| "brains trust" | Consensus (2 models) | Current session work |
| "second opinion" | Single (1 model) | Current session work |
| "ask gemini" / "ask gpt" | Single (specified provider) | Current session work |
| "peer review" | Consensus (2 models) | Recently changed files |
| "challenge this" / "devil's advocate" | Devil's advocate (1 model) | Claude's current position |
The user can always override by being specific: "brains trust this config file", "ask gemini about the auth approach", etc.
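The trigger defaults in the table above amount to a small lookup; a sketch in Python (the names and tuple layout are illustrative, not part of the skill itself):

```python
# Default (pattern, model count, scope) per trigger phrase, from the table above.
DEFAULTS = {
    "brains trust": ("consensus", 2, "current session work"),
    "second opinion": ("single", 1, "current session work"),
    "ask gemini": ("single", 1, "current session work"),
    "ask gpt": ("single", 1, "current session work"),
    "peer review": ("consensus", 2, "recently changed files"),
    "challenge this": ("devils_advocate", 1, "claude's current position"),
    "devil's advocate": ("devils_advocate", 1, "claude's current position"),
}

def resolve_trigger(phrase: str):
    """Look up the default pattern, model count, and scope for a trigger phrase."""
    return DEFAULTS.get(phrase.lower().strip())
```

As the text notes, these are only defaults; a specific user request overrides them.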
Set at least one API key as an environment variable:
```bash
# Recommended — one key covers all providers
export OPENROUTER_API_KEY="your-key"

# Optional — direct access (often faster/cheaper)
export GEMINI_API_KEY="your-key"
export OPENAI_API_KEY="your-key"
```
OpenRouter is the universal path — one key gives access to Gemini, GPT, Qwen, DeepSeek, Llama, Mistral, and more.
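Detecting which of these keys is available is a plain environment check; a minimal sketch, assuming OpenRouter is preferred first since one key covers all providers:

```python
import os

# Keys the skill recognises, in assumed preference order:
# OpenRouter first (covers every provider), then direct provider keys.
KEY_VARS = ["OPENROUTER_API_KEY", "GEMINI_API_KEY", "OPENAI_API_KEY"]

def detect_keys(env=os.environ):
    """Return the recognised API keys present in the environment, in preference order."""
    return [var for var in KEY_VARS if env.get(var)]

keys = detect_keys({"OPENROUTER_API_KEY": "sk-test"})
# keys == ["OPENROUTER_API_KEY"]
```

If the returned list is empty, the skill shows setup instructions and stops, as described in the workflow below.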
Do not use hardcoded model IDs. Before every consultation, fetch the current leading models:
https://models.flared.au/llms.txt
This is a live-updated, curated list of ~40 leading models from 11 providers, filtered from OpenRouter's full catalogue. Use it to pick the right model for the task.
For programmatic use in the generated Python script: https://models.flared.au/json
| Pattern | Default for | What happens |
|---|---|---|
| Consensus | "brains trust", "peer review" | Ask 2 models from different providers in parallel, compare where they agree/disagree |
| Single | "second opinion", "ask gemini", "ask gpt" | Ask one model, synthesise with your own view |
| Devil's advocate | "challenge this", "devil's advocate" | Ask a model to explicitly argue against your current position |
For consensus, always pick models from different providers (e.g. one Google + one Qwen) for maximum diversity of perspective.
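The different-providers rule is easy to enforce mechanically, since model IDs carry their provider as a prefix (e.g. google/gemini-3-flash-preview). A sketch, assuming the IDs come from the models.flared.au list:

```python
def pick_consensus_pair(model_ids):
    """Pick two models from different providers (the part before '/' in the ID).

    Returns the first valid pair in list order, or None if every model
    shares the same provider.
    """
    for i, first in enumerate(model_ids):
        for second in model_ids[i + 1:]:
            if first.split("/")[0] != second.split("/")[0]:
                return first, second
    return None

pair = pick_consensus_pair([
    "google/gemini-3-flash-preview",
    "google/gemini-3.1-pro-preview",
    "qwen/qwen3.5-flash-02-23",
])
# pair == ("google/gemini-3-flash-preview", "qwen/qwen3.5-flash-02-23")
```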
| Mode | When | Model tier |
|---|---|---|
| Code Review | Review files for bugs, patterns, security | Flash |
| Architecture | Design decisions, trade-offs | Pro |
| Debug | Stuck after 2+ failed attempts | Flash |
| Security | Vulnerability scan | Pro |
| Strategy | Business, product, approach decisions | Pro |
| Prompting | Improve prompts, system prompts, KB files | Flash |
| General | Any question, brainstorm, challenge | Flash |
Pro tier: the most capable model from the chosen provider (e.g. google/gemini-3.1-pro-preview, openai/gpt-5.4).
Flash tier: fast, cheaper models for straightforward analysis (e.g. google/gemini-3-flash-preview, qwen/qwen3.5-flash-02-23).
1. Detect available keys — check OPENROUTER_API_KEY, GEMINI_API_KEY, OPENAI_API_KEY in the environment. If none found, show setup instructions and stop.
2. Fetch current models — WebFetch https://models.flared.au/llms.txt and pick appropriate models based on mode (pro vs flash) and consultation pattern (single vs consensus). If the user requested a specific provider ("ask gemini"), use that.
3. Read target files into context (if code-related). For non-code questions (strategy, prompting, general), skip file reading.
4. Build the prompt using the AI-to-AI template from references/prompt-templates.md. Include file contents inline with --- filename --- separators. Do not set output token limits — let models reason fully.
5. Create a consultation directory at .jez/artifacts/brains-trust/{timestamp}-{topic}/ (e.g. 2026-03-10-1423-auth-architecture/). Write the prompt to prompt.txt inside it — never pass code inline via bash arguments (shell escaping breaks it).
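The prompt-building and consultation-directory steps above — inlining files under `--- filename ---` separators, then writing the prompt to disk instead of passing it through bash — might look like this (helper names are illustrative):

```python
from datetime import datetime
from pathlib import Path

def build_prompt(question: str, files: dict[str, str]) -> str:
    """Append each file's contents under a `--- filename ---` separator."""
    parts = [question]
    for name, content in files.items():
        parts.append(f"--- {name} ---\n{content}")
    return "\n\n".join(parts)

def write_consultation(prompt: str, topic: str, root=".jez/artifacts/brains-trust"):
    """Create the {timestamp}-{topic} directory and write prompt.txt into it."""
    stamp = datetime.now().strftime("%Y-%m-%d-%H%M")
    directory = Path(root) / f"{stamp}-{topic}"
    directory.mkdir(parents=True, exist_ok=True)
    (directory / "prompt.txt").write_text(prompt, encoding="utf-8")
    return directory
```

Writing the prompt to a file sidesteps shell escaping entirely, which is the point of the rule above.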
Good use cases:
Avoid using for:
Rules:
* Always fetch the current model list from models.flared.au first; never hardcode model IDs.
* Do not set max_tokens or maxOutputTokens; let models reason fully.
* Print progress messages (e.g. Calling gemini-2.5-pro..., Received response from qwen3.5-plus.) so the user knows it's working during the 30-90 second wait.

| When | Read |
|---|---|
| Building prompts for any mode | references/prompt-templates.md |
| Generating the Python API call script | references/provider-api-patterns.md |
Weekly Installs: 170
Repository
GitHub Stars: 643
First Seen: 14 days ago
Security Audits: Gen Agent Trust Hub: Warn, Socket: Warn, Snyk: Warn
Installed on:
gemini-cli: 161
github-copilot: 161
opencode: 161
cline: 160
codex: 160
cursor: 160
6. Generate and run a Python script at .jez/scripts/brains-trust.py using patterns from references/provider-api-patterns.md:
* Read the prompt from prompt.txt in the consultation directory
* Call the selected API(s)
* For consensus: call multiple APIs in parallel with concurrent.futures
* Save each response to {model}.md in the consultation directory
* Print results to stdout
7. Synthesise — read the responses, present findings to the user. Note where models agree and disagree. Add your own perspective (agree/disagree with reasoning). Let the user decide what to act on.
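The script-generation and parallel-call steps can be sketched as follows. The OpenRouter chat-completions endpoint shown is the real one, but error handling, retries, and response parsing are deliberately simplified; in the actual skill the responses would then be written to {model}.md files for synthesis:

```python
import json
import os
import urllib.request
from concurrent.futures import ThreadPoolExecutor

API_URL = "https://openrouter.ai/api/v1/chat/completions"

def call_openrouter(model: str, prompt: str) -> str:
    """Send the prompt to one model via OpenRouter and return its reply text."""
    req = urllib.request.Request(
        API_URL,
        data=json.dumps({"model": model,
                         "messages": [{"role": "user", "content": prompt}]}).encode(),
        headers={"Authorization": f"Bearer {os.environ['OPENROUTER_API_KEY']}",
                 "Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]

def consult(models, prompt, call=call_openrouter):
    """Consult all models in parallel; return {model_id: response_text}."""
    with ThreadPoolExecutor(max_workers=len(models)) as pool:
        futures = {m: pool.submit(call, m, prompt) for m in models}
        return {m: f.result() for m, f in futures.items()}
```

Injecting the `call` function keeps the parallel orchestration testable without network access.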