transformers-js by huggingface/skills
npx skills add https://github.com/huggingface/skills --skill transformers-js
Transformers.js enables running state-of-the-art machine learning models directly in JavaScript, both in browsers and Node.js environments, with no server required.
Use this skill when you need to:
npm install @huggingface/transformers
<script type="module">
import { pipeline } from 'https://cdn.jsdelivr.net/npm/@huggingface/transformers';
</script>
The pipeline API is the easiest way to use models. It groups together preprocessing, model inference, and postprocessing:
import { pipeline } from '@huggingface/transformers';
// Create a pipeline for a specific task
const pipe = await pipeline('sentiment-analysis');
// Use the pipeline
const result = await pipe('I love transformers!');
// Output: [{ label: 'POSITIVE', score: 0.999817686 }]
// IMPORTANT: Always dispose when done to free memory
await pipe.dispose();
⚠️ Memory Management: All pipelines must be disposed with pipe.dispose() when finished to prevent memory leaks. See examples in Code Examples for cleanup patterns across different environments.
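One way to make the dispose() requirement hard to forget is a try/finally wrapper. The `withPipeline` helper below is a sketch of my own (not part of the library); it assumes only that the created object exposes an async `dispose()` as documented above.

```javascript
// Hypothetical helper: run `fn` with a freshly created pipeline and
// guarantee dispose() is called even if inference throws.
async function withPipeline(factory, fn) {
  const pipe = await factory();
  try {
    return await fn(pipe);
  } finally {
    await pipe.dispose(); // runs on success and on failure
  }
}

// With Transformers.js this would be used as:
// const result = await withPipeline(
//   () => pipeline('sentiment-analysis'),
//   (pipe) => pipe('I love transformers!')
// );
```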
You can specify a custom model as the second argument:
const pipe = await pipeline(
'sentiment-analysis',
'Xenova/bert-base-multilingual-uncased-sentiment'
);
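Creating a pipeline triggers a model download and initialization, so per-request creation is wasteful. One common pattern is to cache the creation promise per task/model pair; `PipelineCache` below is my own sketch, with the factory injected so the caching logic is independent of the library API.

```javascript
// Sketch: cache pipeline creation promises per task/model pair so each
// model loads once, even when callers request it concurrently.
class PipelineCache {
  constructor(factory) {
    this.factory = factory; // e.g. (task, model) => pipeline(task, model)
    this.cache = new Map();
  }
  get(task, model) {
    const key = `${task}:${model ?? 'default'}`;
    // Store the promise itself so concurrent callers share one load.
    if (!this.cache.has(key)) {
      this.cache.set(key, this.factory(task, model));
    }
    return this.cache.get(key);
  }
  async disposeAll() {
    for (const promise of this.cache.values()) {
      await (await promise).dispose();
    }
    this.cache.clear();
  }
}

// Usage sketch:
// const pipelines = new PipelineCache((task, model) => pipeline(task, model));
// const classifier = await pipelines.get('sentiment-analysis');
```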
Finding Models:
Browse available Transformers.js models on the Hugging Face Hub; filter by task with the pipeline_tag parameter.
Tip: Filter by task type, sort by trending/downloads, and check model cards for performance metrics and usage examples.
Choose where to run the model:
// Run on CPU (default for WASM)
const pipe = await pipeline('sentiment-analysis', 'model-id');
// Run on GPU (WebGPU - experimental)
const pipe = await pipeline('sentiment-analysis', 'model-id', {
device: 'webgpu',
});
Control model precision vs. performance:
// Use quantized model (faster, smaller)
const pipe = await pipeline('sentiment-analysis', 'model-id', {
dtype: 'q4', // Options: 'fp32', 'fp16', 'q8', 'q4'
});
Note: All examples below show basic usage.
const classifier = await pipeline('text-classification');
const result = await classifier('This movie was amazing!');
const ner = await pipeline('token-classification');
const entities = await ner('My name is John and I live in New York.');
const qa = await pipeline('question-answering');
const answer = await qa({
question: 'What is the capital of France?',
context: 'Paris is the capital and largest city of France.'
});
const generator = await pipeline('text-generation', 'onnx-community/gemma-3-270m-it-ONNX');
const text = await generator('Once upon a time', {
max_new_tokens: 100,
temperature: 0.7
});
For streaming and chat: See the Text Generation Guide for:
token-by-token streaming output with TextStreamer
const translator = await pipeline('translation', 'Xenova/nllb-200-distilled-600M');
const output = await translator('Hello, how are you?', {
src_lang: 'eng_Latn',
tgt_lang: 'fra_Latn'
});
const summarizer = await pipeline('summarization');
const summary = await summarizer(longText, {
max_length: 100,
min_length: 30
});
const classifier = await pipeline('zero-shot-classification');
const result = await classifier('This is a story about sports.', ['politics', 'sports', 'technology']);
const classifier = await pipeline('image-classification');
const result = await classifier('https://example.com/image.jpg');
// Or with local file
const result = await classifier(imageUrl);
const detector = await pipeline('object-detection');
const objects = await detector('https://example.com/image.jpg');
// Returns: [{ label: 'person', score: 0.95, box: { xmin, ymin, xmax, ymax } }, ...]
const segmenter = await pipeline('image-segmentation');
const segments = await segmenter('https://example.com/image.jpg');
const depthEstimator = await pipeline('depth-estimation');
const depth = await depthEstimator('https://example.com/image.jpg');
const classifier = await pipeline('zero-shot-image-classification');
const result = await classifier('image.jpg', ['cat', 'dog', 'bird']);
const transcriber = await pipeline('automatic-speech-recognition');
const result = await transcriber('audio.wav');
// Returns: { text: 'transcribed text here' }
const classifier = await pipeline('audio-classification');
const result = await classifier('audio.wav');
const synthesizer = await pipeline('text-to-speech', 'Xenova/speecht5_tts');
const audio = await synthesizer('Hello, this is a test.', {
speaker_embeddings: speakerEmbeddings
});
const captioner = await pipeline('image-to-text');
const caption = await captioner('image.jpg');
const docQA = await pipeline('document-question-answering');
const answer = await docQA('document-image.jpg', 'What is the total amount?');
const detector = await pipeline('zero-shot-object-detection');
const objects = await detector('image.jpg', ['person', 'car', 'tree']);
const extractor = await pipeline('feature-extraction');
const embeddings = await extractor('This is a sentence to embed.');
// Returns: tensor of shape [1, sequence_length, hidden_size]
// For sentence embeddings (mean pooling)
const extractor = await pipeline('feature-extraction', 'onnx-community/all-MiniLM-L6-v2-ONNX');
const embeddings = await extractor('Text to embed', { pooling: 'mean', normalize: true });
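With `pooling: 'mean'` and `normalize: true` each input yields one fixed-length vector, and sentence similarity reduces to cosine similarity between those vectors. The helper below is plain arithmetic, independent of the library:

```javascript
// Cosine similarity between two equal-length embedding vectors.
function cosineSimilarity(a, b) {
  if (a.length !== b.length) throw new Error('vector length mismatch');
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// For already-normalized embeddings (normalize: true), this equals the plain dot product.
```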
Discover compatible Transformers.js models on Hugging Face Hub:
Base URL (all models):
https://huggingface.co/models?library=transformers.js&sort=trending
Filter by task using the pipeline_tag parameter:
Sort options:
&sort=trending - Most popular recently
&sort=downloads - Most downloaded overall
&sort=likes - Most liked by community
&sort=modified - Recently updated
Consider these factors when selecting a model:
1. Model Size
2. Quantization Models are often available in different quantization levels:
fp32 - Full precision (largest, most accurate)
fp16 - Half precision (smaller, still accurate)
q8 - 8-bit quantized (much smaller, slight accuracy loss)
q4 - 4-bit quantized (smallest, noticeable accuracy loss)
3. Task Compatibility Check the model card for:
4. Performance Metrics Model cards typically show:
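The `library` filter, `pipeline_tag`, and `sort` parameters described above compose into a single Hub search URL; the small builder below (the helper name is my own) just assembles them:

```javascript
// Build a Hugging Face Hub search URL for Transformers.js-compatible models.
function modelSearchUrl({ task, sort = 'trending' } = {}) {
  const params = new URLSearchParams({ library: 'transformers.js', sort });
  if (task) params.set('pipeline_tag', task);
  return `https://huggingface.co/models?${params}`;
}

// modelSearchUrl({ task: 'text-generation' })
// → https://huggingface.co/models?library=transformers.js&sort=trending&pipeline_tag=text-generation
```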
// 1. Visit: https://huggingface.co/models?pipeline_tag=text-generation&library=transformers.js&sort=trending
// 2. Browse and select a model (e.g., onnx-community/gemma-3-270m-it-ONNX)
// 3. Check model card for:
// - Model size: ~270M parameters
// - Quantization: q4 available
// - Language: English
// - Use case: Instruction-following chat
// 4. Use the model:
import { pipeline } from '@huggingface/transformers';
const generator = await pipeline(
'text-generation',
'onnx-community/gemma-3-270m-it-ONNX',
{ dtype: 'q4' } // Use quantized version for faster inference
);
const output = await generator('Explain quantum computing in simple terms.', {
max_new_tokens: 100
});
await generator.dispose();
Start Small : Test with a smaller model first, then upgrade if needed
Check ONNX Support : Ensure the model has ONNX files (look for onnx folder in model repo)
Read Model Cards : Model cards contain usage examples, limitations, and benchmarks
Test Locally : Benchmark inference speed and memory usage in your environment
Community Models : Look for models by Xenova (Transformers.js maintainer) or onnx-community
Version Pin : Use specific git commits in production for stability:
const pipe = await pipeline('task', 'model-id', { revision: 'abc123' });
Environment Configuration (env)
The env object provides comprehensive control over Transformers.js execution, caching, and model loading.
Quick Overview:
import { env } from '@huggingface/transformers';
// View version
console.log(env.version); // e.g., '3.8.1'
// Common settings
env.allowRemoteModels = true; // Load from Hugging Face Hub
env.allowLocalModels = false; // Load from file system
env.localModelPath = '/models/'; // Local model directory
env.useFSCache = true; // Cache models on disk (Node.js)
env.useBrowserCache = true; // Cache models in browser
env.cacheDir = './.cache'; // Cache directory location
Configuration Patterns:
// Development: Fast iteration with remote models
env.allowRemoteModels = true;
env.useFSCache = true;
// Production: Local models only
env.allowRemoteModels = false;
env.allowLocalModels = true;
env.localModelPath = '/app/models/';
// Custom CDN
env.remoteHost = 'https://cdn.example.com/models';
// Disable caching (testing)
env.useFSCache = false;
env.useBrowserCache = false;
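The development/production/testing toggles above can be bundled into named presets. `envPreset` is a sketch built only from the flags shown here (the preset grouping and function name are my own); apply it with `Object.assign(env, ...)`:

```javascript
// Return env flag values for a named configuration preset.
function envPreset(mode) {
  switch (mode) {
    case 'development': // fast iteration with remote models
      return { allowRemoteModels: true, allowLocalModels: false, useFSCache: true };
    case 'production': // local models only
      return { allowRemoteModels: false, allowLocalModels: true, localModelPath: '/app/models/' };
    case 'test': // no caching
      return { allowRemoteModels: true, useFSCache: false, useBrowserCache: false };
    default:
      throw new Error(`unknown preset: ${mode}`);
  }
}

// Object.assign(env, envPreset('production'));
```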
For complete documentation on all configuration options, caching strategies, cache management, pre-downloading models, and more, see:
→ Configuration Reference
import { AutoTokenizer, AutoModel } from '@huggingface/transformers';
// Load tokenizer and model separately for more control
const tokenizer = await AutoTokenizer.from_pretrained('bert-base-uncased');
const model = await AutoModel.from_pretrained('bert-base-uncased');
// Tokenize input
const inputs = await tokenizer('Hello world!');
// Run model
const outputs = await model(inputs);
const classifier = await pipeline('sentiment-analysis');
// Process multiple texts
const results = await classifier([
'I love this!',
'This is terrible.',
'It was okay.'
]);
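Pipelines accept arrays directly, but for very large input lists it can help to bound memory by processing fixed-size batches. The generic `chunked` helper below (my naming) splits the work:

```javascript
// Split an array into consecutive chunks of at most `size` items.
function chunked(items, size) {
  if (size <= 0) throw new Error('size must be positive');
  const out = [];
  for (let i = 0; i < items.length; i += size) {
    out.push(items.slice(i, i + size));
  }
  return out;
}

// Processing a large list in batches of 8:
// const results = [];
// for (const batch of chunked(texts, 8)) {
//   results.push(...(await classifier(batch)));
// }
```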
WebGPU provides GPU acceleration in browsers:
const pipe = await pipeline('text-generation', 'onnx-community/gemma-3-270m-it-ONNX', {
device: 'webgpu',
dtype: 'fp32'
});
Note : WebGPU is experimental. Check browser compatibility and file issues if problems occur.
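Since WebGPU is experimental and not available in every browser, a common pattern is to feature-detect `navigator.gpu` and fall back to the default WASM backend. `pickDevice` is a sketch of that check (the helper name is mine):

```javascript
// Prefer WebGPU when the environment exposes navigator.gpu; otherwise
// fall back to the default WASM backend.
function pickDevice(globalObject = globalThis) {
  const nav = globalObject.navigator;
  return nav && 'gpu' in nav ? 'webgpu' : 'wasm';
}

// const pipe = await pipeline('sentiment-analysis', 'model-id', { device: pickDevice() });
```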
Default browser execution uses WASM:
// Optimized for browsers with quantization
const pipe = await pipeline('sentiment-analysis', 'model-id', {
dtype: 'q8' // or 'q4' for even smaller size
});
Models can be large (ranging from a few MB to several GB) and consist of multiple files. Track download progress by passing a callback to the pipeline() function:
import { pipeline } from '@huggingface/transformers';
// Track progress for each file
const fileProgress = {};
function onProgress(info) {
console.log(`${info.status}: ${info.file}`);
if (info.status === 'progress') {
fileProgress[info.file] = info.progress;
console.log(`${info.file}: ${info.progress.toFixed(1)}%`);
}
if (info.status === 'done') {
console.log(`✓ ${info.file} complete`);
}
}
// Pass callback to pipeline
const classifier = await pipeline('sentiment-analysis', null, {
progress_callback: onProgress
});
Progress Info Properties:
interface ProgressInfo {
status: 'initiate' | 'download' | 'progress' | 'done' | 'ready';
name: string; // Model id or path
file: string; // File being processed
progress?: number; // Percentage (0-100, only for 'progress' status)
loaded?: number; // Bytes downloaded (only for 'progress' status)
total?: number; // Total bytes (only for 'progress' status)
}
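Because `loaded` and `total` arrive per file, an overall percentage has to be aggregated across files. The tracker below is a sketch assuming only the `ProgressInfo` shape documented above:

```javascript
// Aggregate per-file ProgressInfo events into one overall percentage.
function createProgressTracker() {
  const files = new Map(); // file -> { loaded, total }
  return {
    update(info) {
      if (info.status === 'progress' && info.total) {
        files.set(info.file, { loaded: info.loaded, total: info.total });
      } else if (info.status === 'done' && files.has(info.file)) {
        const entry = files.get(info.file);
        entry.loaded = entry.total; // mark file complete
      }
    },
    overall() {
      let loaded = 0, total = 0;
      for (const entry of files.values()) {
        loaded += entry.loaded;
        total += entry.total;
      }
      return total === 0 ? 0 : (100 * loaded) / total;
    },
  };
}

// const tracker = createProgressTracker();
// await pipeline('sentiment-analysis', null, {
//   progress_callback: (info) => { tracker.update(info); console.log(tracker.overall()); },
// });
```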
For complete examples including browser UIs, React components, CLI progress bars, and retry logic, see:
→ Pipeline Options - Progress Callback
try {
const pipe = await pipeline('sentiment-analysis', 'model-id');
const result = await pipe('text to analyze');
} catch (error) {
if (error.message.includes('fetch')) {
console.error('Model download failed. Check internet connection.');
} else if (error.message.includes('ONNX')) {
console.error('Model execution failed. Check model compatibility.');
} else {
console.error('Unknown error:', error);
}
}
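Download failures are often transient while model-compatibility errors are not, so a retry wrapper can reuse the same message-based classification as the catch block above. `withRetry` is my own sketch with exponential backoff:

```javascript
// Retry an async operation on network-style errors with exponential backoff.
// Non-retryable errors (e.g. ONNX execution failures) are rethrown immediately.
async function withRetry(fn, {
  retries = 3,
  baseDelayMs = 500,
  isRetryable = (e) => /fetch|network/i.test(e.message),
} = {}) {
  for (let attempt = 0; ; attempt++) {
    try {
      return await fn();
    } catch (err) {
      if (attempt >= retries || !isRetryable(err)) throw err;
      await new Promise((r) => setTimeout(r, baseDelayMs * 2 ** attempt));
    }
  }
}

// const pipe = await withRetry(() => pipeline('sentiment-analysis', 'model-id'));
```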
Performance tips:
Start with q8 or q4 for faster inference
Limit max_new_tokens to avoid memory issues
Call pipe.dispose() when done to free memory
IMPORTANT: Always call pipe.dispose() when finished to prevent memory leaks.
const pipe = await pipeline('sentiment-analysis');
const result = await pipe('Great product!');
await pipe.dispose(); // ✓ Free memory (100MB - several GB per model)
When to dispose: models consume significant memory and hold GPU/CPU resources, so disposal is critical for staying within browser memory limits and keeping servers stable.
For detailed patterns (React cleanup, servers, browsers), see Code Examples.
Troubleshooting:
Check that the model has ONNX files (look for the onnx folder in the model repo)
Use a quantized model (dtype: 'q4')
Limit the sequence length with max_length
Try dtype: 'fp16' if fp32 fails
Quick reference:
pipeline() with progress_callback, device, dtype, etc.
env configuration for caching and model loading
pipe.dispose() when done - critical for preventing memory leaks
| Task | Task ID |
|---|---|
| Text classification | text-classification or sentiment-analysis |
| Token classification | token-classification or ner |
| Question answering | question-answering |
| Fill mask | fill-mask |
| Summarization | summarization |
| Translation | translation |
| Text generation | text-generation |
| Text-to-text generation | text2text-generation |
| Zero-shot classification | zero-shot-classification |
| Image classification | image-classification |
| Image segmentation | image-segmentation |
| Object detection | object-detection |
| Depth estimation | depth-estimation |
| Image-to-image | image-to-image |
| Zero-shot image classification | zero-shot-image-classification |
| Zero-shot object detection | zero-shot-object-detection |
| Automatic speech recognition | automatic-speech-recognition |
| Audio classification | audio-classification |
| Text-to-speech | text-to-speech or text-to-audio |
| Image-to-text | image-to-text |
| Document question answering | document-question-answering |
| Feature extraction | feature-extraction |
| Sentence similarity | sentence-similarity |
Browse models by task on the Hugging Face Hub (each link is pre-filtered to library=transformers.js):
| Task | Browse URL |
|---|---|
| Image Classification | https://huggingface.co/models?pipeline_tag=image-classification&library=transformers.js&sort=trending |
| Object Detection | https://huggingface.co/models?pipeline_tag=object-detection&library=transformers.js&sort=trending |
| Image Segmentation | https://huggingface.co/models?pipeline_tag=image-segmentation&library=transformers.js&sort=trending |
| Speech Recognition | https://huggingface.co/models?pipeline_tag=automatic-speech-recognition&library=transformers.js&sort=trending |
| Audio Classification | https://huggingface.co/models?pipeline_tag=audio-classification&library=transformers.js&sort=trending |
| Image-to-Text | https://huggingface.co/models?pipeline_tag=image-to-text&library=transformers.js&sort=trending |
| Feature Extraction | https://huggingface.co/models?pipeline_tag=feature-extraction&library=transformers.js&sort=trending |
| Zero-Shot Classification | https://huggingface.co/models?pipeline_tag=zero-shot-classification&library=transformers.js&sort=trending |
This skill enables you to integrate state-of-the-art machine learning capabilities directly into JavaScript applications without requiring separate ML servers or Python environments.
Weekly installs: 83 · GitHub stars: 9.9K · First seen: 14 days ago