model-recommendation by github/awesome-copilot
npx skills add https://github.com/github/awesome-copilot --skill model-recommendation
Analyze .agent.md or .prompt.md files to understand their purpose, complexity, and required capabilities, then recommend the most suitable AI model(s) from GitHub Copilot's available options. Provide rationale based on task characteristics, model strengths, cost-efficiency, and performance trade-offs.
.agent.md or .prompt.md file
Required:
${input:filePath:Path to .agent.md or .prompt.md file} - Absolute or workspace-relative path to the file to analyze
Optional:
${input:subscriptionTier:Pro} - User's Copilot subscription tier (Free, Pro, Pro+) - defaults to Pro
${input:priorityFactor:Balanced} - Optimization priority (Speed, Cost, Quality, Balanced) - defaults to Balanced
Read and Parse File :
.agent.md or .prompt.md file
Categorize Task Type :
Identify the primary task category based on content analysis:
Simple Repetitive Tasks :
Code Generation & Implementation:
Complex Refactoring & Architecture:
Debugging & Problem-Solving:
Planning & Research:
Code Review & Quality Analysis:
Specialized Domain Tasks :
Advanced Reasoning & Multi-Step Workflows:
Extract Capability Requirements :
Based on tools in frontmatter and body instructions:
Apply Model Selection Criteria :
For each available model, evaluate against these dimensions:
| Model | Multiplier | Speed | Code Quality | Reasoning | Context | Vision | Best For |
|---|---|---|---|---|---|---|---|
| GPT-4.1 | 0x | Fast | Good | Good | 128K | ✅ | Balanced general tasks, included in all plans |
| GPT-5 mini | 0x | Fastest | Good | Basic | 128K | ❌ | Simple tasks, quick responses, cost-effective |
| GPT-5 | 1x | Moderate | Excellent | Advanced | 128K | ✅ | Complex code, advanced reasoning, multi-turn chat |
| GPT-5 Codex | 1x | Fast | Excellent | Good | 128K | ❌ | Code optimization, refactoring, algorithmic tasks |
| Claude Sonnet 3.5 | 1x | Moderate | Excellent | Excellent | 200K | ✅ | Code generation, long context, balanced reasoning |
| Claude Sonnet 4 | 1x | Moderate | Excellent | Advanced | 200K | ❌ | Complex code, robust reasoning, enterprise tasks |
| Claude Sonnet 4.5 | 1x | Moderate | Excellent | Expert | 200K | ✅ | Advanced code, architecture, design patterns |
| Claude Opus 4.1 | 10x | Slow | Outstanding | Expert | 1M | ✅ | Large codebases, architectural review, research |
| Gemini 2.5 Pro | 1x | Moderate | Excellent | Advanced | 2M | ✅ | Very long context, multi-modal, real-time data |
| Gemini 2.0 Flash (deprecated) | 0.25x | Fastest | Good | Good | 1M | ❌ | Fast responses, cost-effective (deprecated) |
| Grok Code Fast 1 | 0.25x | Fastest | Good | Basic | 128K | ❌ | Speed-critical simple tasks, preview (free) |
| o3 (deprecated) | 1x | Slow | Good | Expert | 128K | ❌ | Advanced reasoning, algorithmic optimization |
| o4-mini (deprecated) | 0.33x | Fast | Good | Good | 128K | ❌ | Reasoning at lower cost (deprecated) |
START
│
├─ Task Complexity?
│ ├─ Simple/Repetitive → GPT-5 mini, Grok Code Fast 1, GPT-4.1
│ ├─ Moderate → GPT-4.1, Claude Sonnet 4, GPT-5
│ └─ Complex/Advanced → Claude Sonnet 4.5, GPT-5, Gemini 2.5 Pro, Claude Opus 4.1
│
├─ Reasoning Depth?
│ ├─ Basic → GPT-5 mini, Grok Code Fast 1
│ ├─ Intermediate → GPT-4.1, Claude Sonnet 4
│ ├─ Advanced → GPT-5, Claude Sonnet 4.5
│ └─ Expert → Claude Opus 4.1, o3 (deprecated)
│
├─ Code-Specific?
│ ├─ Yes → GPT-5 Codex, Claude Sonnet 4.5, GPT-5
│ └─ No → GPT-5, Claude Sonnet 4
│
├─ Context Size?
│ ├─ Small (<50K tokens) → Any model
│ ├─ Medium (50-200K) → Claude models, GPT-5, Gemini
│ ├─ Large (200K-1M) → Gemini 2.5 Pro, Claude Opus 4.1
│ └─ Very Large (>1M) → Gemini 2.5 Pro (2M), Claude Opus 4.1 (1M)
│
├─ Vision Required?
│ ├─ Yes → GPT-4.1, GPT-5, Claude Sonnet 3.5/4.5, Gemini 2.5 Pro, Claude Opus 4.1
│ └─ No → All models
│
├─ Cost Sensitivity? (based on subscriptionTier)
│ ├─ Free Tier → 0x models only: GPT-4.1, GPT-5 mini, Grok Code Fast 1
│ ├─ Pro (1000 premium/month) → Prioritize 0x, use 1x judiciously, avoid 10x
│ └─ Pro+ (5000 premium/month) → 1x freely, 10x for critical tasks
│
└─ Priority Factor?
├─ Speed → GPT-5 mini, Grok Code Fast 1, Gemini 2.0 Flash
├─ Cost → 0x models (GPT-4.1, GPT-5 mini) or lower multipliers (0.25x, 0.33x)
├─ Quality → Claude Sonnet 4.5, GPT-5, Claude Opus 4.1
└─ Balanced → GPT-4.1, Claude Sonnet 4, GPT-5
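The decision tree above can be sketched as a small filtering function. This is an illustrative sketch only, not part of the skill: the model names and attributes mirror the capability table in this document (a subset, for brevity) and may drift from Copilot's live lineup.

```python
# Illustrative sketch of the decision tree above; model data mirrors the
# capability table in this document and is an assumption, not a live source.
MODELS = {
    "GPT-5 mini":        {"mult": 0.0,  "context": 128_000,   "vision": False, "tier": "simple"},
    "Grok Code Fast 1":  {"mult": 0.25, "context": 128_000,   "vision": False, "tier": "simple"},
    "GPT-4.1":           {"mult": 0.0,  "context": 128_000,   "vision": True,  "tier": "moderate"},
    "GPT-5":             {"mult": 1.0,  "context": 128_000,   "vision": True,  "tier": "complex"},
    "Claude Sonnet 4.5": {"mult": 1.0,  "context": 200_000,   "vision": True,  "tier": "complex"},
    "Gemini 2.5 Pro":    {"mult": 1.0,  "context": 2_000_000, "vision": True,  "tier": "complex"},
    "Claude Opus 4.1":   {"mult": 10.0, "context": 1_000_000, "vision": True,  "tier": "complex"},
}

def recommend(complexity: str, context_tokens: int, needs_vision: bool,
              free_tier: bool) -> list[str]:
    """Return candidate models, cheapest multiplier first, per the tree above."""
    order = {"simple": 0, "moderate": 1, "complex": 2}
    picks = [
        name for name, m in MODELS.items()
        if order[m["tier"]] >= order[complexity]   # capable enough for the task
        and m["context"] >= context_tokens         # context window fits
        and (m["vision"] or not needs_vision)      # vision support if required
        and (m["mult"] == 0.0 or not free_tier)    # Free tier: 0x models only
    ]
    return sorted(picks, key=lambda n: MODELS[n]["mult"])

print(recommend("simple", 30_000, False, free_tier=True))
# → ['GPT-5 mini', 'GPT-4.1']
```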
Primary Recommendation :
Alternative Recommendations :
Auto-Selection Guidance :
Deprecation Warnings :
Subscription Tier Considerations :
Frontmatter Update Guidance :
If file does not specify a model field:
## Recommendation: Add Model Specification
Current frontmatter:
\`\`\`yaml
---
description: "..."
tools: [...]
---
\`\`\`
Recommended frontmatter:
\`\`\`yaml
---
description: "..."
model: "[Recommended Model Name]"
tools: [...]
---
\`\`\`
Rationale: [Explanation of why this model is optimal for this task]
If file already specifies a model:
## Current Model Assessment
Specified model: `[Current Model]` (Multiplier: [X]x)
Recommendation: [Keep current model | Consider switching to [Recommended Model]]
Rationale: [Explanation]
Tool Alignment Check :
Verify model capabilities align with specified tools:
context7/* or sequential-thinking/*: Recommend advanced reasoning models (Claude Sonnet 4.5, GPT-5, Claude Opus 4.1)
Leverage Context7 for Model Documentation :
When uncertainty exists about current model capabilities, use Context7 to fetch latest information:
**Verification with Context7**:
Using `context7/get-library-docs` with library ID `/websites/github_en_copilot`:
- Query topic: "model capabilities [specific capability question]"
- Retrieve current model features, multipliers, deprecation status
- Cross-reference against analyzed file requirements
Example Context7 Usage :
If unsure whether Claude Sonnet 4.5 supports image analysis:
→ Use context7 with topic "Claude Sonnet 4.5 vision image capabilities"
→ Confirm feature support before recommending for multi-modal tasks
Generate a structured markdown report with the following sections:
# AI Model Recommendation Report
**File Analyzed**: `[file path]`
**File Type**: [chatmode | prompt]
**Analysis Date**: [YYYY-MM-DD]
**Subscription Tier**: [Free | Pro | Pro+]
---
## File Summary
**Description**: [from frontmatter]
**Mode**: [ask | edit | agent]
**Tools**: [tool list]
**Current Model**: [specified model or "Not specified"]
## Task Analysis
### Task Complexity
- **Level**: [Simple | Moderate | Complex | Advanced]
- **Reasoning Depth**: [Basic | Intermediate | Advanced | Expert]
- **Context Requirements**: [Small | Medium | Large | Very Large]
- **Code Generation**: [Minimal | Moderate | Extensive]
- **Multi-Modal**: [Yes | No]
### Task Category
[Primary category from 8 categories listed in Workflow Phase 1]
### Key Characteristics
- Characteristic 1: [explanation]
- Characteristic 2: [explanation]
- Characteristic 3: [explanation]
## Model Recommendation
### 🏆 Primary Recommendation: [Model Name]
**Multiplier**: [X]x ([cost implications for subscription tier])
**Strengths**:
- Strength 1: [specific to task]
- Strength 2: [specific to task]
- Strength 3: [specific to task]
**Rationale**:
[Detailed explanation connecting task characteristics to model capabilities]
**Cost Impact** (for [Subscription Tier]):
- Per request multiplier: [X]x
- Estimated usage: [rough estimate based on task frequency]
- [Additional cost context]
### 🔄 Alternative Options
#### Option 1: [Model Name]
- **Multiplier**: [X]x
- **When to Use**: [specific scenarios]
- **Trade-offs**: [compared to primary recommendation]
#### Option 2: [Model Name]
- **Multiplier**: [X]x
- **When to Use**: [specific scenarios]
- **Trade-offs**: [compared to primary recommendation]
### 📊 Model Comparison for This Task
| Criterion | [Primary Model] | [Alternative 1] | [Alternative 2] |
| ---------------- | --------------- | --------------- | --------------- |
| Task Fit | ⭐⭐⭐⭐⭐ | ⭐⭐⭐⭐ | ⭐⭐⭐ |
| Code Quality | [rating] | [rating] | [rating] |
| Reasoning | [rating] | [rating] | [rating] |
| Speed | [rating] | [rating] | [rating] |
| Cost Efficiency | [rating] | [rating] | [rating] |
| Context Capacity | [capacity] | [capacity] | [capacity] |
| Vision Support | [Yes/No] | [Yes/No] | [Yes/No] |
## Auto Model Selection Assessment
**Suitability**: [Recommended | Not Recommended | Situational]
[Explanation of whether auto-selection is appropriate for this task]
**Rationale**:
- [Reason 1]
- [Reason 2]
**Manual Override Scenarios**:
- [Scenario where user should manually select model]
- [Scenario where user should manually select model]
## Implementation Guidance
### Frontmatter Update
[Provide specific code block showing recommended frontmatter change]
### Model Selection in VS Code
**To Use Recommended Model**:
1. Open Copilot Chat
2. Click model dropdown (currently shows "[current model or Auto]")
3. Select **[Recommended Model Name]**
4. [Optional: When to switch back to Auto]
**Keyboard Shortcut**: `Cmd+Shift+P` → "Copilot: Change Model"
### Tool Alignment Verification
[Check results: Are specified tools compatible with recommended model?]
✅ **Compatible Tools**: [list]
⚠️ **Potential Limitations**: [list if any]
## Deprecation Notices
[If applicable, list any deprecated models in current configuration]
⚠️ **Deprecated Model in Use**: [Model Name] (Deprecation date: [YYYY-MM-DD])
**Migration Path**:
- **Current**: [Deprecated Model]
- **Replacement**: [Recommended Model]
- **Action Required**: Update `model:` field in frontmatter by [date]
- **Behavioral Changes**: [any expected differences]
## Context7 Verification
[If Context7 was used for verification]
**Queries Executed**:
- Topic: "[query topic]"
- Library: `/websites/github_en_copilot`
- Key Findings: [summary]
## Additional Considerations
### Subscription Tier Recommendations
[Specific advice based on Free/Pro/Pro+ tier]
### Priority Factor Adjustments
[If user specified Speed/Cost/Quality/Balanced, explain how recommendation aligns]
### Long-Term Model Strategy
[Advice for when to re-evaluate model selection as file evolves]
---
## Quick Reference
**TL;DR**: Use **[Primary Model]** for this task due to [one-sentence rationale]. Cost: [X]x multiplier.
**One-Line Update**:
\`\`\`yaml
model: "[Recommended Model Name]"
\`\`\`
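Applying the one-line `model:` update across many prompt files can be scripted. A minimal sketch, assuming simple single-line frontmatter fields (the helper name and the naive regex approach are illustrative, not part of the skill):

```python
import re

def set_model(text: str, model: str) -> str:
    """Set or insert a model: field in YAML frontmatter.

    Naive sketch: handles only single-line model: fields and assumes the
    document starts with a '---' frontmatter delimiter.
    """
    if re.search(r"^model:", text, flags=re.M):
        # Replace the existing model line in place.
        return re.sub(r"^model:.*$", f'model: "{model}"', text, count=1, flags=re.M)
    # No model field: insert one right after the opening '---' delimiter.
    return re.sub(r"\A---\n", f'---\nmodel: "{model}"\n', text, count=1)

doc = '---\ndescription: "..."\ntools: []\n---\nbody'
print(set_model(doc, "Claude Sonnet 4.5"))
```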
If the file is neither .agent.md nor .prompt.md → Stop and clarify the file type
If user provides multiple files:
If user asks "Which model is better between X and Y for this file?":
If file specifies a deprecated model:
File : format-code.prompt.md
Content : "Format Python code with Black style, add type hints"
Recommendation : GPT-5 mini (0x multiplier, fastest, sufficient for repetitive formatting)
Alternative : Grok Code Fast 1 (0.25x, even faster, preview feature)
Rationale : Task is simple and repetitive; premium reasoning not needed; speed prioritized
File : architect.agent.md
Content : "Review system design for scalability, security, maintainability; analyze trade-offs; provide ADR-level recommendations"
Recommendation : Claude Sonnet 4.5 (1x multiplier, expert reasoning, excellent for architecture)
Alternative : Claude Opus 4.1 (10x, use for very large codebases >500K tokens)
Rationale : Requires deep reasoning, architectural expertise, design pattern knowledge; Sonnet 4.5 excels at this
File : django.agent.md
Content : "Django 5.x expert with ORM optimization, async views, REST API design; uses context7 for up-to-date Django docs"
Recommendation : GPT-5 (1x multiplier, advanced reasoning, excellent code quality)
Alternative : Claude Sonnet 4.5 (1x, alternative perspective, strong with frameworks)
Rationale : Domain expertise + context7 integration benefits from advanced reasoning; 1x cost justified for expert mode
File : plan.agent.md
Content : "Research and planning mode with read-only tools (search, fetch, githubRepo)"
Subscription : Free (2K completions + 50 chat requests/month, 0x models only)
Recommendation : GPT-4.1 (0x, balanced, included in Free tier)
Alternative : GPT-5 mini (0x, faster but less context)
Rationale : Free tier restricted to 0x models; GPT-4.1 provides best balance of quality and context for planning tasks
| Multiplier | Meaning | Free Tier | Pro Usage | Pro+ Usage |
|---|---|---|---|---|
| 0x | Included in all plans, no premium count | ✅ | Unlimited | Unlimited |
| 0.25x | 4 requests = 1 premium request | ❌ | 4000 uses | 20000 uses |
| 0.33x | 3 requests = 1 premium request | ❌ | 3000 uses | 15000 uses |
| 1x | 1 request = 1 premium request | ❌ | 1000 uses | 5000 uses |
| 1.25x | 1 request = 1.25 premium requests | ❌ | 800 uses | 4000 uses |
| 10x | 1 request = 10 premium requests (very expensive) | ❌ | 100 uses | 500 uses |
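The multiplier arithmetic in the table above reduces to dividing the tier's premium-request budget by the model multiplier. A minimal sketch, with budget figures taken from this document (Pro = 1000 premium requests/month, Pro+ = 5000); note the table's 0.33x row rounds to 3000 because 0.33x is effectively one third:

```python
def monthly_uses(multiplier: float, premium_budget: int) -> float:
    """How many requests to a model the tier's premium budget covers."""
    if multiplier == 0:
        return float("inf")  # 0x models do not consume premium requests
    return premium_budget / multiplier

# Pro tier (1000 premium requests/month), 0.25x model: 4000 uses.
print(monthly_uses(0.25, 1000))
# Pro+ tier (5000 premium requests/month), 10x model: 500 uses.
print(monthly_uses(10, 5000))
```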
Deprecated Models (Effective 2025-10-23):
Preview Models (Subject to Change):
Stable Production Models :
Included in Auto Selection :
Excluded from Auto Selection :
When Auto Selects :
Use these query patterns when verification needed:
Model Capabilities :
Topic: "[Model Name] code generation quality capabilities"
Library: /websites/github_en_copilot
Model Multipliers :
Topic: "[Model Name] request multiplier cost billing"
Library: /websites/github_en_copilot
Deprecation Status :
Topic: "deprecated models October 2025 timeline"
Library: /websites/github_en_copilot
Vision Support :
Topic: "[Model Name] image vision multimodal support"
Library: /websites/github_en_copilot
Auto Selection :
Topic: "auto model selection behavior eligible models"
Library: /websites/github_en_copilot
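The query patterns above can be templated before issuing the tool call. This helper is illustrative only, not part of the Context7 API: it merely renders the documented (topic, library) pairs; the actual lookup is made via `context7/get-library-docs`.

```python
# Illustrative only: builds the (topic, library) query pairs documented above.
LIBRARY = "/websites/github_en_copilot"

PATTERNS = {
    "capabilities": "{model} code generation quality capabilities",
    "multiplier":   "{model} request multiplier cost billing",
    "vision":       "{model} image vision multimodal support",
}

def build_query(kind: str, model: str) -> dict:
    """Render a documented query pattern into a concrete topic/library pair."""
    return {"topic": PATTERNS[kind].format(model=model), "library": LIBRARY}

q = build_query("vision", "Claude Sonnet 4.5")
print(q["topic"])  # → Claude Sonnet 4.5 image vision multimodal support
```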
Last Updated : 2025-10-28
Model Data Current As Of : October 2025
Deprecation Deadline : 2025-10-23 for o3, o4-mini, Claude Sonnet 3.7 variants, Gemini 2.0 Flash
Weekly Installs
7.3K
Repository
GitHub Stars
27.0K
First Seen
Feb 25, 2026
Security Audits
Gen Agent Trust Hub: Pass · Socket: Pass · Snyk: Pass
Installed on
codex: 7.3K
gemini-cli: 7.3K
opencode: 7.2K
cursor: 7.2K
github-copilot: 7.2K
kimi-cli: 7.2K