llm-fine-tuning-guide by qodex-ai/ai-agent-skills

npx skills add https://github.com/qodex-ai/ai-agent-skills --skill llm-fine-tuning-guide
Master the art of fine-tuning large language models to create specialized models optimized for your specific use cases, domains, and performance requirements.
Fine-tuning adapts pre-trained LLMs to specific tasks, domains, or styles by training them on curated datasets. This improves accuracy, reduces hallucinations, and optimizes costs.
Full Fine-Tuning:
python examples/full_fine_tuning.py
LoRA (Recommended for most cases):
python examples/lora_fine_tuning.py
QLoRA (Single GPU):
python examples/qlora_fine_tuning.py
Data Preparation:
python scripts/data_preparation.py
Update all model parameters during training.
Pros:
Cons:
High computational cost
Requires large dataset (1000+ examples)
Risk of catastrophic forgetting
Long training time
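The "high computational cost" is easy to quantify: with AdamW in fp32, each parameter needs roughly 16 bytes for weights, gradients, and the two optimizer moments, before counting activations. A back-of-the-envelope sketch (the 7B figure is illustrative):

```python
def full_ft_memory_gb(num_params, bytes_per_param=16):
    # fp32 weights (4) + gradients (4) + Adam m (4) + Adam v (4) = 16 bytes/param
    return num_params * bytes_per_param / 1e9

print(f"{full_ft_memory_gb(7e9):.0f} GB")  # ~112 GB before activations
```

This is why full fine-tuning of a 7B model typically requires multiple high-memory GPUs, while the PEFT methods below fit on one.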
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments, Trainer

model_id = "meta-llama/Llama-2-7b-hf"  # the -hf repo hosts transformers-format weights
model = AutoModelForCausalLM.from_pretrained(model_id)
tokenizer = AutoTokenizer.from_pretrained(model_id)

training_args = TrainingArguments(
    output_dir="./fine-tuned-llama",
    num_train_epochs=3,
    per_device_train_batch_size=4,
    gradient_accumulation_steps=4,
    learning_rate=2e-5,
    weight_decay=0.01,
    logging_steps=10,
    save_steps=100,
    eval_strategy="steps",
    eval_steps=50,
    load_best_model_at_end=True,
)

trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=train_dataset,
    eval_dataset=eval_dataset,
)

trainer.train()
Train only a small fraction of parameters.
Adds trainable low-rank matrices to existing weights.
Pros:
Cons:
Slightly lower performance than full fine-tuning
Requires base model at inference
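The low-rank idea can be sketched directly: freeze the pretrained weight W and learn only a delta B·A whose rank r is far smaller than the hidden size. A minimal NumPy illustration (the 1024/8 shapes are arbitrary):

```python
import numpy as np

d, r = 1024, 8
W = np.random.randn(d, d)          # frozen pretrained weight
A = np.random.randn(r, d) * 0.01   # trainable low-rank factor
B = np.zeros((d, r))               # trainable, zero-init so training starts from W
alpha = 16                         # scaling factor, as in LoraConfig

W_eff = W + (alpha / r) * (B @ A)  # effective weight used in the forward pass

full_params = d * d
lora_params = d * r + r * d
print(f"trainable fraction: {lora_params / full_params:.2%}")  # ~1.56%
```

Because B starts at zero, the adapted model is initially identical to the base model; training moves it away only as far as the low-rank delta allows.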
from peft import get_peft_model, LoraConfig, TaskType
from transformers import AutoModelForCausalLM, AutoTokenizer

base_model_id = "meta-llama/Llama-2-7b-hf"
model = AutoModelForCausalLM.from_pretrained(base_model_id)
tokenizer = AutoTokenizer.from_pretrained(base_model_id)

lora_config = LoraConfig(
    r=8,                                  # Rank of low-rank matrices
    lora_alpha=16,                        # Scaling factor
    target_modules=["q_proj", "v_proj"],  # Which layers to adapt
    lora_dropout=0.05,
    bias="none",
    task_type=TaskType.CAUSAL_LM
)

model = get_peft_model(model, lora_config)
model.print_trainable_parameters()

trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=train_dataset,
)
trainer.train()

model.save_pretrained("./llama-lora-adapter")
Combines LoRA with quantization for extreme efficiency.
from peft import prepare_model_for_kbit_training, get_peft_model, LoraConfig, TaskType
from transformers import AutoModelForCausalLM, BitsAndBytesConfig, Trainer, TrainingArguments
# Quantization config
bnb_config = BitsAndBytesConfig(
load_in_4bit=True,
bnb_4bit_quant_type="nf4",
bnb_4bit_compute_dtype="float16",
bnb_4bit_use_double_quant=True
)
# Load quantized model
model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf",
quantization_config=bnb_config,
device_map="auto"
)
# Prepare for training
model = prepare_model_for_kbit_training(model)
# Apply LoRA
lora_config = LoraConfig(
r=8,
lora_alpha=32,
target_modules=["q_proj", "v_proj", "k_proj", "o_proj"],
lora_dropout=0.05,
bias="none",
task_type=TaskType.CAUSAL_LM
)
model = get_peft_model(model, lora_config)
# Train on single GPU
trainer = Trainer(
model=model,
args=TrainingArguments(
output_dir="./qlora-output",
per_device_train_batch_size=1,
gradient_accumulation_steps=4,
learning_rate=5e-4,
num_train_epochs=3,
),
train_dataset=train_dataset,
)
trainer.train()
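The memory payoff of 4-bit loading is straightforward arithmetic (7B is the illustrative size; real usage adds overhead for LoRA weights, optimizer states, and activations):

```python
def weight_memory_gb(num_params, bits):
    # memory needed just to hold the model weights at a given precision
    return num_params * bits / 8 / 1e9

params = 7e9
print(f"fp16: {weight_memory_gb(params, 16):.1f} GB")  # ~14.0 GB
print(f"nf4:  {weight_memory_gb(params, 4):.1f} GB")   # ~3.5 GB
```

The 4x reduction on the frozen base weights is what lets a 7B model plus LoRA adapters fit on a single consumer GPU.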
Prepends trainable tokens to input.
from peft import get_peft_model, PrefixTuningConfig, TaskType
config = PrefixTuningConfig(
num_virtual_tokens=20,
task_type=TaskType.CAUSAL_LM,
)
model = get_peft_model(model, config)
# Trains only the virtual-token prefixes, a tiny fraction of the model's parameters
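In PEFT's implementation, prefix tuning stores trainable key/value prefixes for every layer, so the trainable count scales roughly as num_virtual_tokens × num_layers × 2 × hidden_size. A rough count assuming a hypothetical 32-layer, 4096-hidden model (Llama-2-7B-like dimensions, not taken from the source):

```python
num_virtual_tokens = 20
num_layers = 32     # assumption: Llama-2-7B-like depth
hidden_size = 4096  # assumption: Llama-2-7B-like width

# one key and one value vector per layer per virtual token
prefix_params = num_virtual_tokens * num_layers * 2 * hidden_size
print(f"{prefix_params:,} trainable parameters")  # ~5.2M, vs ~7B total
```

Even this "large" estimate is under 0.1% of the base model, which is why prefix tuning trains quickly and stores tiny checkpoints.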
Train model to follow instructions with examples.
# Training data format
training_data = [
{
"instruction": "Translate to French",
"input": "Hello, how are you?",
"output": "Bonjour, comment allez-vous?"
},
{
"instruction": "Summarize this text",
"input": "Long document...",
"output": "Summary..."
}
]
# Template for training
template = """Below is an instruction that describes a task, paired with an input that provides further context.
### Instruction:
{instruction}
### Input:
{input}
### Response:
{output}"""
# Create formatted dataset
formatted_data = [
template.format(**example) for example in training_data
]
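Applying the template to one example yields a single training string. A self-contained check, re-declaring the template and the first example from above:

```python
template = """Below is an instruction that describes a task, paired with an input that provides further context.
### Instruction:
{instruction}
### Input:
{input}
### Response:
{output}"""

example = {
    "instruction": "Translate to French",
    "input": "Hello, how are you?",
    "output": "Bonjour, comment allez-vous?"
}

formatted = template.format(**example)
print(formatted)
```

Each formatted string is then tokenized as one training sequence; at inference time the same template is used with the Response section left empty for the model to complete.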
Tailor models for specific industries or fields.
legal_training_data = [
{
"prompt": "What are the key clauses in an NDA?",
"completion": """Key clauses typically include:
1. Definition of Confidential Information
2. Non-Disclosure Obligations
3. Permitted Disclosures
4. Term and Termination
5. Return of Information
6. Remedies"""
},
# ... more legal examples
]
# Train on legal domain
# (fine_tune_on_domain is an illustrative wrapper, not a library function)
model = fine_tune_on_domain(
    base_model="gpt-3.5-turbo",
    training_data=legal_training_data,
    epochs=3,
    learning_rate=0.0002,
)
class DatasetValidator:
    def validate_dataset(self, data):
        issues = {
            "empty_samples": 0,
            "duplicates": 0,
            "outliers": 0,
            "imbalance": {}
        }
        # Check for empty samples
        for sample in data:
            if not sample.get("text"):
                issues["empty_samples"] += 1
        # Check for duplicates
        texts = [s.get("text") for s in data]
        issues["duplicates"] = len(texts) - len(set(texts))
        # Check for length outliers (skip empty texts to avoid errors)
        lengths = [len(t.split()) for t in texts if t]
        mean_length = sum(lengths) / len(lengths)
        issues["outliers"] = sum(1 for l in lengths if l > mean_length * 3)
        return issues
# Validate before training
validator = DatasetValidator()
issues = validator.validate_dataset(training_data)
print(f"Dataset Issues: {issues}")
from nlpaug.augmenter.word import SynonymAug, RandomWordAug
import nlpaug.flow as naf
# Create augmentation pipeline
text = "The quick brown fox jumps over the lazy dog"
# Synonym replacement
aug_syn = SynonymAug(aug_p=0.3)
augmented_syn = aug_syn.augment(text)
# Random word deletion (RandomWordAug supports substitute/swap/delete/crop, not insertion)
aug_del = RandomWordAug(action="delete", aug_p=0.3)
augmented_del = aug_del.augment(text)
# Combine augmentations
flow = naf.Sequential([
SynonymAug(aug_p=0.2),
RandomWordAug(action="swap", aug_p=0.2)
])
augmented = flow.augment(text)
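If pulling in nlpaug is overkill, the swap step can be approximated in plain Python. A sketch, not a substitute for the library's full pipeline:

```python
import random

def random_swap(text, p=0.2, seed=42):
    # Swap each adjacent word pair with probability p (seeded for reproducibility)
    rng = random.Random(seed)
    words = text.split()
    i = 0
    while i < len(words) - 1:
        if rng.random() < p:
            words[i], words[i + 1] = words[i + 1], words[i]
            i += 2  # skip ahead so the moved word is not immediately re-swapped
        else:
            i += 1
    return " ".join(words)

augmented = random_swap("The quick brown fox jumps over the lazy dog")
print(augmented)
```

Swapping preserves the vocabulary of the sentence while perturbing word order, which is usually label-preserving for classification-style data.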
from sklearn.model_selection import train_test_split
# Create splits
train_data, eval_data = train_test_split(
data,
test_size=0.2,
random_state=42
)
eval_data, test_data = train_test_split(
eval_data,
test_size=0.5,
random_state=42
)
print(f"Train: {len(train_data)}, Eval: {len(eval_data)}, Test: {len(test_data)}")
from transformers import get_linear_schedule_with_warmup

# Linear warmup followed by linear decay
def get_scheduler(optimizer, num_steps):
    lr_scheduler = get_linear_schedule_with_warmup(
        optimizer,
        num_warmup_steps=500,
        num_training_steps=num_steps
    )
    return lr_scheduler
training_args = TrainingArguments(
    learning_rate=1e-4,
    lr_scheduler_type="cosine",
    warmup_steps=500,   # takes precedence over warmup_ratio when both are set
    warmup_ratio=0.1,
)
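The shape of the resulting schedule is easy to sketch: a linear ramp to the peak over the warmup steps, then cosine decay to zero. A minimal re-implementation for intuition, not the Trainer's exact internals:

```python
import math

def warmup_cosine_lr(step, peak_lr, warmup_steps, total_steps):
    # Linear warmup to peak_lr, then cosine decay to zero
    if step < warmup_steps:
        return peak_lr * step / warmup_steps
    progress = (step - warmup_steps) / (total_steps - warmup_steps)
    return peak_lr * 0.5 * (1 + math.cos(math.pi * progress))

print(warmup_cosine_lr(250, 1e-4, 500, 5000))   # halfway through warmup: 5e-5
print(warmup_cosine_lr(500, 1e-4, 500, 5000))   # peak: 1e-4
print(warmup_cosine_lr(5000, 1e-4, 500, 5000))  # end of training: 0
```

Warmup avoids large, destabilizing updates while optimizer statistics are still cold; the slow cosine tail lets the model settle into a minimum.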
training_args = TrainingArguments(
gradient_accumulation_steps=4, # Accumulate gradients over 4 steps
per_device_train_batch_size=1, # Effective batch size: 1 * 4 = 4
)
# Simulates larger batch on limited GPU memory
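The comment's arithmetic generalizes: the effective batch size is the product of per-device batch size, accumulation steps, and device count (the helper name is ours, not a transformers API):

```python
def effective_batch_size(per_device_batch, accumulation_steps, num_devices=1):
    # Gradients are averaged over this many examples before each optimizer step
    return per_device_batch * accumulation_steps * num_devices

print(effective_batch_size(1, 4))     # 4, matching the config above
print(effective_batch_size(8, 4, 2))  # 64 across two GPUs
```

When tuning the learning rate, it is this effective batch size that matters, not the per-device value.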
training_args = TrainingArguments(
    fp16=True,   # 16-bit floats; prefer bf16=True on Ampere or newer GPUs
    bf16=False,
)
# Roughly halves activation memory and can speed up training on supported GPUs
training_args = TrainingArguments(
output_dir="./results",
num_train_epochs=3,
per_device_train_batch_size=8,
per_device_eval_batch_size=8,
gradient_accumulation_steps=4,
dataloader_pin_memory=True,
dataloader_num_workers=4,
)
# Automatically uses all available GPUs
trainer = Trainer(
model=model,
args=training_args,
train_dataset=train_dataset,
eval_dataset=eval_dataset,
)
from transformers import AutoModelForCausalLM, AutoTokenizer
# Llama 3.2 text models ship in 1B and 3B sizes (use Llama 3.1 for 8B)
model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-3.2-3B")
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-3.2-3B")
# Fine-tune on custom data
# ... training code
Characteristics:
model = AutoModelForCausalLM.from_pretrained("google/gemma-3-4b-pt")
tokenizer = AutoTokenizer.from_pretrained("google/gemma-3-4b-pt")
# Gemma 3 sizes: 1B, 4B, 12B, 27B
# Very efficient, great for fine-tuning
Characteristics:
model = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-v0.1")
tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-v0.1")
# Strong performance, efficient architecture
Characteristics:
from openai import OpenAI

client = OpenAI()

# Prepare training data
training_file = client.files.create(
    file=open("training_data.jsonl", "rb"),
    purpose="fine-tune"
)

# Create fine-tuning job
fine_tune_job = client.fine_tuning.jobs.create(
    training_file=training_file.id,
    model="gpt-3.5-turbo",
    hyperparameters={
        "n_epochs": 3,
        "learning_rate_multiplier": 0.1,
    }
)

# Poll until the job finishes
job = client.fine_tuning.jobs.retrieve(fine_tune_job.id)
print(f"Status: {job.status}")

# Use the fine-tuned model (job.fine_tuned_model is set once status == "succeeded")
response = client.chat.completions.create(
    model=job.fine_tuned_model,
    messages=[{"role": "user", "content": "Hello"}]
)
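The training_data.jsonl referenced above must contain one JSON object per line; for chat-model fine-tuning each object holds a messages array. A minimal writer sketch (file name as in the example above, contents illustrative):

```python
import json

examples = [
    {"messages": [
        {"role": "user", "content": "Translate to French: Hello"},
        {"role": "assistant", "content": "Bonjour"},
    ]},
]

# One JSON object per line, no trailing commas, UTF-8
with open("training_data.jsonl", "w") as f:
    for example in examples:
        f.write(json.dumps(example) + "\n")
```

Malformed lines are the most common cause of file-validation failures, so it is worth round-tripping the file through json.loads before uploading.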
import torch
from math import exp

def calculate_perplexity(model, eval_dataset):
    model.eval()
    total_loss = 0.0
    total_tokens = 0
    with torch.no_grad():
        for batch in eval_dataset:
            outputs = model(**batch)
            # Weight each batch's mean loss by its token count, not its
            # sequence count, so the result is a true per-token average
            num_tokens = batch["attention_mask"].sum().item()
            total_loss += outputs.loss.item() * num_tokens
            total_tokens += num_tokens
    return exp(total_loss / total_tokens)
perplexity = calculate_perplexity(model, eval_dataset)
print(f"Perplexity: {perplexity:.2f}")
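Token weighting matters: averaging per-batch losses without token weights skews perplexity whenever batch lengths differ. A toy check with two batches (the mean-loss/token-count pairs are made up):

```python
from math import exp

batches = [(2.0, 10), (4.0, 30)]  # (mean loss per token, token count)

# token-weighted average loss
token_avg = sum(loss * n for loss, n in batches) / sum(n for _, n in batches)
# naive unweighted average of batch means
naive_avg = sum(loss for loss, _ in batches) / len(batches)

print(exp(token_avg))  # correct perplexity, e^3.5
print(exp(naive_avg))  # understated, e^3.0
```

Because perplexity is an exponential of the loss, even a small averaging error produces a large error in the reported number.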
from sklearn.metrics import accuracy_score, f1_score, precision_score, recall_score

def evaluate_task(predictions, ground_truth):
    return {
        "accuracy": accuracy_score(ground_truth, predictions),
        "precision": precision_score(ground_truth, predictions, average='weighted'),
        "recall": recall_score(ground_truth, predictions, average='weighted'),
        "f1": f1_score(ground_truth, predictions, average='weighted'),
    }
# Evaluate on task (model.predict stands in for your task-specific inference wrapper)
predictions = [model.predict(x) for x in test_data]
metrics = evaluate_task(predictions, test_labels)
print(f"Metrics: {metrics}")
class HumanEvaluator:
    def evaluate_response(self, prompt, response):
        criteria = {
            "relevance": self._score_relevance(prompt, response),
            "coherence": self._score_coherence(response),
            "factuality": self._score_factuality(response),
            "helpfulness": self._score_helpfulness(response),
        }
        return sum(criteria.values()) / len(criteria)

    def _score_relevance(self, prompt, response):
        # Score 1-5
        pass

    def _score_coherence(self, response):
        # Score 1-5
        pass
Model forgets pre-trained knowledge while adapting to new domain.
Solutions:
Use lower learning rates (2e-5 to 5e-5)
Fewer training epochs (1-3)
Regularization techniques
Continual learning approaches
training_args = TrainingArguments(
    learning_rate=2e-5,   # Lower learning rate
    num_train_epochs=2,   # Few epochs
    weight_decay=0.01,    # L2 regularization
    warmup_steps=500,
    save_total_limit=3,
    load_best_model_at_end=True,
)
Model performs well on training data but poorly on new data.
Solutions:
Use more training data
Implement dropout
Early stopping
Validation monitoring
from transformers import EarlyStoppingCallback

training_args = TrainingArguments(
    eval_strategy="steps",
    eval_steps=50,
    load_best_model_at_end=True,
    metric_for_best_model="eval_loss",
)
# early_stopping_patience is not a TrainingArguments field; pass a callback instead:
# Trainer(..., callbacks=[EarlyStoppingCallback(early_stopping_patience=3)])
Too few examples available for fine-tuning.
Solutions:
Data augmentation
Use PEFT (LoRA) instead of full fine-tuning
Few-shot learning with prompting
Transfer learning
lora_config = LoraConfig(
    r=8,
    lora_alpha=16,
    target_modules=["q_proj", "v_proj"],
    lora_dropout=0.05,
)
Weekly Installs
66
Repository
GitHub Stars
5
First Seen
Jan 22, 2026
Security Audits
Gen Agent Trust Hub: Pass; Socket: Pass; Snyk: Pass
Installed on
opencode: 51
gemini-cli: 50
codex: 49
cursor: 45
github-copilot: 45
cline: 43