whisper by davila7/claude-code-templates
npx skills add https://github.com/davila7/claude-code-templates --skill whisper
amp446
OpenAI's multilingual speech recognition model.
Use when:
Metrics:
Use alternatives instead:
# Requires Python 3.8-3.11
pip install -U openai-whisper
# Requires ffmpeg
# macOS: brew install ffmpeg
# Ubuntu: sudo apt install ffmpeg
# Windows: choco install ffmpeg
import whisper
# Load model
model = whisper.load_model("base")
# Transcribe
result = model.transcribe("audio.mp3")
# Print text
print(result["text"])
# Access segments
for segment in result["segments"]:
    print(f"[{segment['start']:.2f}s - {segment['end']:.2f}s] {segment['text']}")
# Available models
models = ["tiny", "base", "small", "medium", "large", "turbo"]
# Load specific model
model = whisper.load_model("turbo") # Fastest, good quality
| Model | Parameters | English-only | Multilingual | Speed | VRAM |
|---|---|---|---|---|---|
| tiny | 39M | ✓ | ✓ | ~32x | ~1 GB |
| base | 74M | ✓ | ✓ | ~16x | ~1 GB |
| small | 244M | ✓ | ✓ | ~6x | ~2 GB |
| medium | 769M | ✓ | ✓ | ~2x | ~5 GB |
| large | 1550M | ✗ | ✓ | 1x | ~10 GB |
| turbo | 809M | ✗ | ✓ | ~8x | ~6 GB |
Recommendation: Use turbo for the best speed/quality trade-off, base for prototyping
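The table's trade-offs can be encoded in a small helper, e.g. picking the highest-quality model that fits a VRAM budget. A hypothetical sketch (the `QUALITY_ORDER` ranking, with turbo placed between medium and large, is an assumption, not something Whisper ships):

```python
# Models ordered from lowest to highest transcription quality (assumed ranking).
QUALITY_ORDER = ["tiny", "base", "small", "medium", "turbo", "large"]
# Approximate VRAM requirements in GB, taken from the table above.
VRAM_GB = {"tiny": 1, "base": 1, "small": 2, "medium": 5, "turbo": 6, "large": 10}

def pick_model(vram_budget_gb: float) -> str:
    """Return the highest-quality model that fits in the given VRAM budget."""
    fitting = [m for m in QUALITY_ORDER if VRAM_GB[m] <= vram_budget_gb]
    return fitting[-1] if fitting else "tiny"

print(pick_model(8))    # → turbo
print(pick_model(1.5))  # → base
```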
# Auto-detect language
result = model.transcribe("audio.mp3")
# Specify language (faster)
result = model.transcribe("audio.mp3", language="en")
# Supported: en, es, fr, de, it, pt, ru, ja, ko, zh, and 89 more
# Transcription (default)
result = model.transcribe("audio.mp3", task="transcribe")
# Translation to English
result = model.transcribe("spanish.mp3", task="translate")
# Input: Spanish audio → Output: English text
# Improve accuracy with context
result = model.transcribe(
    "audio.mp3",
    initial_prompt="This is a technical podcast about machine learning and AI."
)
# Helps with:
# - Technical terms
# - Proper nouns
# - Domain-specific vocabulary
# Word-level timestamps
result = model.transcribe("audio.mp3", word_timestamps=True)
for segment in result["segments"]:
    for word in segment["words"]:
        print(f"{word['word']} ({word['start']:.2f}s - {word['end']:.2f}s)")
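Word-level timestamps also let you re-chunk text into your own caption windows. A minimal sketch, assuming word dicts shaped like Whisper's `segment["words"]` entries (`chunk_words` is a hypothetical helper, not part of the library):

```python
def chunk_words(words, max_window_s=3.0):
    """Group word dicts ({'word', 'start', 'end'}) into caption windows
    no longer than max_window_s seconds each."""
    chunks, current, window_start = [], [], None
    for w in words:
        if window_start is None:
            window_start = w["start"]
        # Start a new window once this word would exceed the time budget.
        if w["end"] - window_start > max_window_s and current:
            chunks.append("".join(x["word"] for x in current).strip())
            current, window_start = [], w["start"]
        current.append(w)
    if current:
        chunks.append("".join(x["word"] for x in current).strip())
    return chunks

words = [
    {"word": " Hello", "start": 0.0, "end": 0.4},
    {"word": " world", "start": 0.5, "end": 0.9},
    {"word": " again", "start": 3.6, "end": 4.0},
]
print(chunk_words(words))  # → ['Hello world', 'again']
```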
# Fall back to higher temperatures when decoding fails or confidence is low
result = model.transcribe(
    "audio.mp3",
    temperature=(0.0, 0.2, 0.4, 0.6, 0.8, 1.0)
)
# Basic transcription
whisper audio.mp3
# Specify model
whisper audio.mp3 --model turbo
# Output formats
whisper audio.mp3 --output_format txt # Plain text
whisper audio.mp3 --output_format srt # Subtitles
whisper audio.mp3 --output_format vtt # WebVTT
whisper audio.mp3 --output_format json # JSON with timestamps
# Language
whisper audio.mp3 --language Spanish
# Translation
whisper spanish.mp3 --task translate
import os

audio_files = ["file1.mp3", "file2.mp3", "file3.mp3"]
for audio_file in audio_files:
    print(f"Transcribing {audio_file}...")
    result = model.transcribe(audio_file)
    # Save the transcript next to the audio file (works for any extension)
    output_file = os.path.splitext(audio_file)[0] + ".txt"
    with open(output_file, "w") as f:
        f.write(result["text"])
# For streaming audio, use faster-whisper
# pip install faster-whisper
from faster_whisper import WhisperModel
model = WhisperModel("base", device="cuda", compute_type="float16")
# Transcribe with streaming
segments, info = model.transcribe("audio.mp3", beam_size=5)
for segment in segments:
    print(f"[{segment.start:.2f}s -> {segment.end:.2f}s] {segment.text}")
import whisper
# Automatically uses GPU if available
model = whisper.load_model("turbo")
# Force CPU
model = whisper.load_model("turbo", device="cpu")
# Force GPU
model = whisper.load_model("turbo", device="cuda")
# 10-20× faster on GPU
# Generate SRT subtitles
whisper video.mp4 --output_format srt --language English
# Output: video.srt
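For reference, SRT cue times use the `HH:MM:SS,mmm` format. A minimal sketch of that formatting (`srt_timestamp` is a hypothetical helper, not Whisper's own writer):

```python
def srt_timestamp(seconds: float) -> str:
    """Format a time in seconds as an SRT cue timestamp (HH:MM:SS,mmm)."""
    ms = round(seconds * 1000)
    h, ms = divmod(ms, 3_600_000)   # hours
    m, ms = divmod(ms, 60_000)      # minutes
    s, ms = divmod(ms, 1_000)       # seconds, leaving milliseconds
    return f"{h:02d}:{m:02d}:{s:02d},{ms:03d}"

print(srt_timestamp(3.5))      # → 00:00:03,500
print(srt_timestamp(3725.25))  # → 01:02:05,250
```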
from langchain.document_loaders import WhisperTranscriptionLoader
loader = WhisperTranscriptionLoader(file_path="audio.mp3")
docs = loader.load()
# Use transcription in RAG
from langchain_chroma import Chroma
from langchain_openai import OpenAIEmbeddings
vectorstore = Chroma.from_documents(docs, OpenAIEmbeddings())
# Use ffmpeg to extract audio
ffmpeg -i video.mp4 -vn -acodec pcm_s16le audio.wav
# Then transcribe
whisper audio.wav
| Model | Real-time factor (CPU) | Real-time factor (GPU) |
|---|---|---|
| tiny | ~0.32 | ~0.01 |
| base | ~0.16 | ~0.01 |
| turbo | ~0.08 | ~0.01 |
| large | ~1.0 | ~0.05 |
Real-time factor: 0.1 = 10× faster than real-time
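Since RTF = processing time / audio duration, the expected wall-clock time is simply duration × RTF. A quick sketch using the table's figures (`estimated_seconds` is a hypothetical helper):

```python
def estimated_seconds(audio_duration_s: float, rtf: float) -> float:
    """Expected processing time for a clip at a given real-time factor."""
    return audio_duration_s * rtf

# A 10-minute podcast with turbo on CPU (~0.08 RTF, per the table above):
print(estimated_seconds(600, 0.08))  # → 48.0
```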
Top-supported languages:
Full list: 99 languages total
Weekly Installs: 618
GitHub Stars: 23.4K
First Seen: Jan 21, 2026
Security Audits: Gen Agent Trust Hub: Pass, Socket: Pass, Snyk: Pass
Installed on: opencode (549), gemini-cli (535), codex (514), cursor (500), github-copilot (493), amp (446)