# Whisper

OpenAI's general-purpose speech recognition model. Supports 99 languages, transcription, translation to English, and language identification. Six model sizes from tiny (39M params) to large (1550M params). Use for speech-to-text, podcast transcription, or multilingual audio processing. Best for robust, multilingual ASR.
## Skill metadata

| Field | Value |
|---|---|
| Source | Optional — install with `hermes skills install official/mlops/whisper` |
| Path | `optional-skills/mlops/whisper` |
| Version | 1.0.0 |
| Author | Orchestra Research |
| License | MIT |
| Dependencies | `openai-whisper`, `transformers`, `torch` |
| Tags | Whisper, Speech Recognition, ASR, Multimodal, Multilingual, OpenAI, Speech-To-Text, Transcription, Translation, Audio Processing |
## Reference: full SKILL.md

The following is the complete skill definition that Hermes loads when this skill is triggered. This is what the agent sees as instructions when the skill is active.
### Whisper - Robust Speech Recognition

OpenAI's multilingual speech recognition model.
### When to use Whisper

Use when you need:
- Speech-to-text transcription (99 languages)
- Podcast/video transcription
- Meeting notes automation
- Translation to English
- Noisy audio transcription
- Multilingual audio processing
Metrics:
- 72,900+ GitHub stars
- 99 languages supported
- Trained on 680,000 hours of audio
- MIT License
Use alternatives instead:
- AssemblyAI: Managed API, speaker diarization
- Deepgram: Real-time streaming ASR
- Google Speech-to-Text: Cloud-based
### Quick start

#### Installation

```bash
# Requires Python 3.8-3.11
pip install -U openai-whisper

# Requires ffmpeg
# macOS: brew install ffmpeg
# Ubuntu: sudo apt install ffmpeg
# Windows: choco install ffmpeg
```

#### Basic transcription
```python
import whisper

# Load model
model = whisper.load_model("base")

# Transcribe
result = model.transcribe("audio.mp3")

# Print text
print(result["text"])

# Access segments
for segment in result["segments"]:
    print(f"[{segment['start']:.2f}s - {segment['end']:.2f}s] {segment['text']}")
```

#### Model sizes
```python
# Available models
models = ["tiny", "base", "small", "medium", "large", "turbo"]

# Load specific model
model = whisper.load_model("turbo")  # Fastest, good quality
```

| Model | Parameters | English-only model | Multilingual model | Relative speed | VRAM |
|---|---|---|---|---|---|
| tiny | 39M | ✓ | ✓ | ~32x | ~1 GB |
| base | 74M | ✓ | ✓ | ~16x | ~1 GB |
| small | 244M | ✓ | ✓ | ~6x | ~2 GB |
| medium | 769M | ✓ | ✓ | ~2x | ~5 GB |
| large | 1550M | ✗ | ✓ | 1x | ~10 GB |
| turbo | 809M | ✗ | ✓ | ~8x | ~6 GB |
Recommendation: Use turbo for the best speed/quality trade-off, base for prototyping.
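The VRAM column also makes model choice scriptable. A minimal sketch, assuming a CUDA GPU and using the (approximate) thresholds from the table above:

```python
import torch
import whisper

# Pick the largest model that fits the available VRAM (thresholds from the table above)
if torch.cuda.is_available():
    vram_gb = torch.cuda.get_device_properties(0).total_memory / 1e9
    if vram_gb >= 10:
        name = "large"
    elif vram_gb >= 6:
        name = "turbo"
    elif vram_gb >= 5:
        name = "medium"
    elif vram_gb >= 2:
        name = "small"
    else:
        name = "base"
else:
    name = "base"  # on CPU, prefer a small model

model = whisper.load_model(name)
```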
### Transcription options

#### Language specification

```python
# Auto-detect language
result = model.transcribe("audio.mp3")

# Specify language (faster)
result = model.transcribe("audio.mp3", language="en")

# Supported: en, es, fr, de, it, pt, ru, ja, ko, zh, and 89 more
```
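Language identification can also be run on its own, without transcribing, via the lower-level API from the upstream README. A minimal sketch, operating on a 30-second window:

```python
import whisper

model = whisper.load_model("turbo")

# Load audio and pad/trim it to a 30-second window
audio = whisper.load_audio("audio.mp3")
audio = whisper.pad_or_trim(audio)

# Compute the log-Mel spectrogram on the model's device
mel = whisper.log_mel_spectrogram(audio, n_mels=model.dims.n_mels).to(model.device)

# Detect the spoken language without transcribing
_, probs = model.detect_language(mel)
print(f"Detected language: {max(probs, key=probs.get)}")
```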
#### Task selection

```python
# Transcription (default)
result = model.transcribe("audio.mp3", task="transcribe")

# Translation to English
result = model.transcribe("spanish.mp3", task="translate")
# Input: Spanish audio → Output: English text
```

#### Initial prompt
```python
# Improve accuracy with context
result = model.transcribe(
    "audio.mp3",
    initial_prompt="This is a technical podcast about machine learning and AI.",
)

# Helps with:
# - Technical terms
# - Proper nouns
# - Domain-specific vocabulary
```

#### Timestamps
```python
# Word-level timestamps
result = model.transcribe("audio.mp3", word_timestamps=True)

for segment in result["segments"]:
    for word in segment["words"]:
        print(f"{word['word']} ({word['start']:.2f}s - {word['end']:.2f}s)")
```

#### Temperature fallback
```python
# Retry with different temperatures if confidence is low
result = model.transcribe(
    "audio.mp3",
    temperature=(0.0, 0.2, 0.4, 0.6, 0.8, 1.0),
)
```

### Command line usage
```bash
# Basic transcription
whisper audio.mp3

# Specify model
whisper audio.mp3 --model turbo

# Output formats
whisper audio.mp3 --output_format txt   # Plain text
whisper audio.mp3 --output_format srt   # Subtitles
whisper audio.mp3 --output_format vtt   # WebVTT
whisper audio.mp3 --output_format json  # JSON with timestamps

# Language
whisper audio.mp3 --language Spanish

# Translation
whisper spanish.mp3 --task translate
```

### Batch processing
```python
import whisper

model = whisper.load_model("turbo")
audio_files = ["file1.mp3", "file2.mp3", "file3.mp3"]

for audio_file in audio_files:
    print(f"Transcribing {audio_file}...")
    result = model.transcribe(audio_file)

    # Save to file
    output_file = audio_file.replace(".mp3", ".txt")
    with open(output_file, "w") as f:
        f.write(result["text"])
```

### Real-time transcription
```python
# For streaming audio, use faster-whisper
# pip install faster-whisper
from faster_whisper import WhisperModel

model = WhisperModel("base", device="cuda", compute_type="float16")

# Transcribe with streaming: segments are yielded as they are decoded
segments, info = model.transcribe("audio.mp3", beam_size=5)

for segment in segments:
    print(f"[{segment.start:.2f}s -> {segment.end:.2f}s] {segment.text}")
```

### GPU acceleration
```python
import whisper

# Automatically uses GPU if available
model = whisper.load_model("turbo")

# Force CPU
model = whisper.load_model("turbo", device="cpu")

# Force GPU
model = whisper.load_model("turbo", device="cuda")

# 10-20× faster on GPU
```

### Integration with other tools
Section titled “Integration with other tools”Subtitle generation
Section titled “Subtitle generation”# Generate SRT subtitleswhisper video.mp4 --output_format srt --language English
# Output: video.srtWith LangChain
```python
from langchain.document_loaders import WhisperTranscriptionLoader

loader = WhisperTranscriptionLoader(file_path="audio.mp3")
docs = loader.load()

# Use transcription in RAG
from langchain_chroma import Chroma
from langchain_openai import OpenAIEmbeddings

vectorstore = Chroma.from_documents(docs, OpenAIEmbeddings())
```

#### Extract audio from video
```bash
# Use ffmpeg to extract audio
ffmpeg -i video.mp4 -vn -acodec pcm_s16le audio.wav

# Then transcribe
whisper audio.wav
```

### Best practices
- Use turbo model - Best speed/quality for English
- Specify language - Faster than auto-detect
- Add initial prompt - Improves technical terms
- Use GPU - 10-20× faster
- Batch process - More efficient
- Convert to WAV - Better compatibility
- Split long audio - <30 min chunks (see the splitting sketch after this list)
- Check language support - Quality varies by language
- Use faster-whisper - 4× faster than openai-whisper
- Monitor VRAM - Scale model size to hardware
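For the chunking practice above, ffmpeg's segment muxer is one way to produce the chunks. A minimal sketch, with placeholder file names and 1800 s (30 min) segments:

```python
import subprocess
from pathlib import Path

import whisper

# Split a long recording into ~30-minute chunks (ffmpeg segment muxer)
subprocess.run(
    ["ffmpeg", "-i", "long_audio.mp3", "-f", "segment",
     "-segment_time", "1800", "-c", "copy", "chunk_%03d.mp3"],
    check=True,
)

# Transcribe each chunk in order and stitch the text together
model = whisper.load_model("turbo")
texts = [
    model.transcribe(str(path))["text"]
    for path in sorted(Path(".").glob("chunk_*.mp3"))
]
print(" ".join(texts))
```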
### Performance

| Model | Real-time factor (CPU) | Real-time factor (GPU) |
|---|---|---|
| tiny | ~0.32 | ~0.01 |
| base | ~0.16 | ~0.01 |
| turbo | ~0.08 | ~0.01 |
| large | ~1.0 | ~0.05 |
Real-time factor (RTF) = processing time ÷ audio duration, so 0.1 means 10× faster than real-time.
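To measure RTF on your own hardware, time a transcription and divide by the clip duration. A minimal sketch (`whisper.load_audio` resamples to 16 kHz mono, exposed as `whisper.audio.SAMPLE_RATE`):

```python
import time

import whisper

model = whisper.load_model("base")

# whisper.load_audio returns 16 kHz mono samples as a float32 array
audio = whisper.load_audio("audio.mp3")
duration_s = len(audio) / whisper.audio.SAMPLE_RATE

start = time.perf_counter()
model.transcribe(audio)
elapsed_s = time.perf_counter() - start

print(f"RTF: {elapsed_s / duration_s:.2f}")  # 0.1 = 10x faster than real-time
```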
### Language support

Top-supported languages:
- English (en)
- Spanish (es)
- French (fr)
- German (de)
- Italian (it)
- Portuguese (pt)
- Russian (ru)
- Japanese (ja)
- Korean (ko)
- Chinese (zh)
Full list: 99 languages total
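One way to inspect the full list is the code-to-name mapping bundled with the package:

```python
import whisper

# Language code -> name mapping bundled with the package
languages = whisper.tokenizer.LANGUAGES
print(f"{len(languages)} languages")
print(languages["en"], languages["ja"])  # english japanese
```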
### Limitations

- Hallucinations - May repeat or invent text (mitigation knobs sketched after this list)
- Long-form accuracy - Degrades on >30 min audio
- Speaker identification - No diarization
- Accents - Quality varies
- Background noise - Can affect accuracy
- Real-time latency - Not suitable for live captioning
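For the hallucination and noise issues above, `transcribe()` exposes thresholds that reject bad decodings. The values below are the library defaults (except `condition_on_previous_text`) and are starting points, not tuned settings:

```python
import whisper

model = whisper.load_model("turbo")

result = model.transcribe(
    "audio.mp3",
    condition_on_previous_text=False,  # don't feed prior output back in; curbs repetition loops
    no_speech_threshold=0.6,           # treat low-speech-probability segments as silence
    compression_ratio_threshold=2.4,   # reject highly repetitive decodings
    logprob_threshold=-1.0,            # reject low-confidence decodings
)
print(result["text"])
```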
### Resources

- GitHub: https://github.com/openai/whisper ⭐ 72,900+
- Paper: https://arxiv.org/abs/2212.04356
- Model Card: https://github.com/openai/whisper/blob/main/model-card.md
- Colab: Available in repo
- License: MIT