A Model Context Protocol (MCP) server that transcribes videos from 1000+ platforms using OpenAI's Whisper model. Built with TypeScript for type safety and available via npx for easy installation.
🚀 Looking for better performance? Check out the Rust version, which uses whisper.cpp for significantly faster transcription with lower memory usage. Available on crates.io: cargo install video-transcriber-mcp
- 🌍 Multi-Platform Support: Now supports 1000+ video platforms (YouTube, Vimeo, TikTok, Twitter/X, Facebook, Instagram, Twitch, educational sites, and more) via yt-dlp
- 💻 Cross-Platform: Works on macOS, Linux, and Windows
- 🎛️ Configurable Whisper Models: Choose from tiny, base, small, medium, or large models
- 🌐 Language Support: Transcribe in 90+ languages or use auto-detection
- 🔄 Automatic Retries: Network failures are handled automatically with exponential backoff (a rough sketch follows this list)
- 🎯 Platform Detection: Automatically detects the video platform
- 📋 List Supported Sites: New tool to see all 1000+ supported platforms
- ⚡ Improved Error Handling: More specific and helpful error messages
- 🔒 Better Filename Handling: Improved sanitization preserving more characters
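For illustration, a retry with exponential backoff can be sketched like this in TypeScript (the delays and attempt count here are placeholders, not the package's actual values):

```typescript
// Illustrative only: retry a flaky async operation with exponential backoff.
async function withRetry<T>(fn: () => Promise<T>, maxAttempts = 3): Promise<T> {
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    try {
      return await fn();
    } catch (err) {
      if (attempt === maxAttempts) throw err; // Give up after the last attempt
      // Wait 1s, 2s, 4s, ... before the next attempt
      const delayMs = 1000 * 2 ** (attempt - 1);
      await new Promise((resolve) => setTimeout(resolve, delayMs));
    }
  }
  throw new Error("unreachable");
}
```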
This tool is intended for educational, accessibility, and research purposes only.
Before using this tool, please understand:
- Most platforms' Terms of Service generally prohibit downloading content
- You are responsible for ensuring your use complies with applicable laws
- This tool should primarily be used for:
  - ✅ Your own content
  - ✅ Creating accessibility features (captions for deaf/hard of hearing)
  - ✅ Educational and research purposes (where permitted)
  - ✅ Content you have explicit permission to download
Please read LEGAL.md for detailed legal information before using this tool.
We do not encourage or endorse violation of any platform's Terms of Service or copyright infringement. Use responsibly and ethically.
- 🎥 Download audio from 1000+ video platforms (powered by yt-dlp)
- 📂 Transcribe local video files (mp4, avi, mov, mkv, and more)
- 🎤 Transcribe using OpenAI Whisper (local, no API key needed)
- 🎛️ Configurable Whisper models (tiny, base, small, medium, large)
- 🌐 Support for 90+ languages with auto-detection
- 📝 Generate transcripts in multiple formats (TXT, JSON, Markdown)
- 📚 List and read previous transcripts as MCP resources
- 🔌 Integrate seamlessly with Claude Code or any MCP client
- ⚡ TypeScript + npx for easy installation
- 🔒 Full type safety with TypeScript
- 🔍 Automatic dependency checking
- 🔄 Automatic retry logic for network failures
- 🎯 Platform detection (shows which platform you're transcribing from; a rough sketch follows this list)
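As a rough idea of what platform detection involves, here is a minimal hostname-matching sketch (illustrative only; the package's real detection logic may differ and covers far more sites via yt-dlp):

```typescript
// Illustrative only: detect a platform from a URL's hostname.
function detectPlatform(url: string): string {
  try {
    const host = new URL(url).hostname.replace(/^www\./, "");
    if (host.includes("youtube.com") || host === "youtu.be") return "YouTube";
    if (host.includes("vimeo.com")) return "Vimeo";
    if (host.includes("tiktok.com")) return "TikTok";
    return host; // Fall back to the raw hostname for other yt-dlp-supported sites
  } catch {
    return "local file"; // Not a URL, so treat it as a local path
  }
}
```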
Thanks to yt-dlp, this tool supports 1000+ video platforms including:
- Social Media: YouTube, TikTok, Twitter/X, Facebook, Instagram, Reddit, LinkedIn
- Video Hosting: Vimeo, Dailymotion, Twitch
- Educational: Coursera, Udemy, Khan Academy, LinkedIn Learning, edX
- News: BBC, CNN, NBC, PBS
- Conference/Tech: YouTube (tech talks), Vimeo (conferences)
- And many, many more!
Run the list_supported_sites tool to see the complete list of 1000+ supported platforms.
macOS:
brew install yt-dlp # Video downloader (supports 1000+ sites)
brew install openai-whisper # Whisper transcription
brew install ffmpeg # Audio processing

Linux:
# Ubuntu/Debian
sudo apt update
sudo apt install ffmpeg
pip install yt-dlp openai-whisper
# Fedora/RHEL
sudo dnf install ffmpeg
pip install yt-dlp openai-whisper
# Arch Linux
sudo pacman -S ffmpeg
pip install yt-dlp openai-whisper

Windows:
Option 1: Using pip (recommended)
# Install Python from python.org first
pip install yt-dlp openai-whisper
# Install ffmpeg using Chocolatey
choco install ffmpeg
# Or download ffmpeg from: https://ffmpeg.org/download.html

Option 2: Using winget
winget install yt-dlp.yt-dlp
winget install Gyan.FFmpeg
pip install openai-whisper

Verify the installation:
yt-dlp --version
whisper --version
ffmpeg -version

Add to your Claude Code config (~/.claude/settings.json):
{
"mcpServers": {
"video-transcriber": {
"command": "npx",
"args": ["-y", "video-transcriber-mcp"]
}
}
}

Or use directly from GitHub:
{
"mcpServers": {
"video-transcriber": {
"command": "npx",
"args": [
"-y",
"github:nhatvu148/video-transcriber-mcp"
]
}
}
}

That's it! No installation needed. npx will automatically download and run the package.
# Clone the repository
git clone https://github.com/nhatvu148/video-transcriber-mcp.git
cd video-transcriber-mcp
# Install dependencies
npm install
# or
bun install
# Build the project
npm run build
# Use in Claude Code with local path
{
"mcpServers": {
"video-transcriber": {
"command": "npx",
"args": ["-y", "/path/to/video-transcriber-mcp"]
}
}
}

Once configured, you can use these tools in Claude Code:
Please transcribe this YouTube video: https://www.youtube.com/watch?v=VIDEO_ID
Transcribe this TikTok video: https://www.tiktok.com/@user/video/123456789
Get the transcript from this Vimeo video with high accuracy: https://vimeo.com/123456789
(use model: large)
Transcribe this Spanish tutorial video: https://youtube.com/watch?v=VIDEO_ID
(language: es)
Transcribe this local video file: /Users/myname/Videos/meeting.mp4
Transcribe ~/Downloads/lecture.mov with high accuracy
(use model: medium)
Claude will use the transcribe_video tool automatically with optional parameters for model and language.
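For reference, a single tool call combining the documented parameters might carry arguments like the following (values are placeholders; see the tool reference below for details):

```typescript
// Hypothetical arguments for one transcribe_video tool call.
const transcribeVideoArgs = {
  url: "https://www.youtube.com/watch?v=VIDEO_ID", // or a local path such as /Users/me/video.mp4
  model: "medium",            // optional: tiny, base (default), small, medium, large
  language: "es",             // optional: ISO 639-1 code or "auto" (default)
  output_dir: "/custom/path", // optional: where transcripts are written
};
```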
What platforms can you transcribe videos from?
List all my video transcripts
Check if my video transcriber dependencies are installed
Show me the transcript for [video name]
If you install the package:
npm install video-transcriber-mcp

You can import and use it programmatically:
import { transcribeVideo, checkDependencies, WhisperModel } from 'video-transcriber-mcp';
// Check dependencies
checkDependencies();
// Transcribe a video from URL with custom options
const result = await transcribeVideo({
url: 'https://www.youtube.com/watch?v=VIDEO_ID',
outputDir: '/path/to/output',
model: 'medium', // tiny, base, small, medium, large
language: 'en', // or 'auto' for auto-detection
onProgress: (progress) => console.log(progress)
});
// Or transcribe a local video file
const localResult = await transcribeVideo({
url: '/path/to/video.mp4', // Local file path instead of URL
outputDir: '/path/to/output',
model: 'base',
language: 'auto',
onProgress: (progress) => console.log(progress)
});
console.log('Title:', result.metadata.title);
console.log('Platform:', result.metadata.platform);
console.log('Files:', result.files);

Transcripts are saved to ~/Downloads/video-transcripts/ by default.
For each video, three files are generated:
- .txt - Plain text transcript
- .json - JSON with timestamps and metadata (a rough sketch of the layout follows the example below)
- .md - Markdown with video metadata and formatted transcript
~/Downloads/video-transcripts/
├── 7JBuA1GHAjQ-From-AI-skeptic-to-UNFAIR-advantage.txt
├── 7JBuA1GHAjQ-From-AI-skeptic-to-UNFAIR-advantage.json
└── 7JBuA1GHAjQ-From-AI-skeptic-to-UNFAIR-advantage.md
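The exact JSON schema isn't documented here; as a rough guide, assuming Whisper's usual segment output plus the metadata exposed by the programmatic API, the .json file's shape might look roughly like this:

```typescript
// Rough sketch of the .json transcript contents. Field names are assumptions,
// based on Whisper's segment output and the metadata shown in the API example above.
interface TranscriptJson {
  metadata: {
    title: string;    // video title
    platform: string; // detected platform, e.g. "YouTube"
  };
  segments: Array<{
    start: number;    // segment start time in seconds
    end: number;      // segment end time in seconds
    text: string;     // transcribed text for this segment
  }>;
}
```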
transcribe_video: Transcribe videos from 1000+ platforms or local video files to text.
Parameters:
- url (required): Video URL from any supported platform OR path to a local video file (mp4, avi, mov, mkv, etc.)
- output_dir (optional): Output directory path
- model (optional): Whisper model - "tiny", "base" (default), "small", "medium", "large"
- language (optional): Language code (ISO 639-1: "en", "es", "fr", etc.) or "auto" (default)
Model Comparison:
| Model | Speed | Accuracy | Use Case |
|---|---|---|---|
| tiny | ⚡⚡⚡⚡⚡ | ⭐⭐ | Quick drafts, testing |
| base | ⚡⚡⚡⚡ | ⭐⭐⭐ | General use (default) |
| small | ⚡⚡⚡ | ⭐⭐⭐⭐ | Better accuracy |
| medium | ⚡⚡ | ⭐⭐⭐⭐⭐ | High accuracy |
| large | ⚡ | ⭐⭐⭐⭐⭐⭐ | Best accuracy, slow |
List all available transcripts with metadata.
Parameters:
- output_dir (optional): Directory to list
Verify that all required dependencies are installed.
list_supported_sites: List all 1000+ supported video platforms.
Claude Code configuration examples (~/.claude/settings.json):

Using the published npm package:
{
"mcpServers": {
"video-transcriber": {
"command": "npx",
"args": ["-y", "video-transcriber-mcp"]
}
}
}

Using GitHub directly:
{
"mcpServers": {
"video-transcriber": {
"command": "npx",
"args": ["-y", "github:nhatvu148/video-transcriber-mcp"]
}
}
}

Using a local path:
{
"mcpServers": {
"video-transcriber": {
"command": "npx",
"args": ["-y", "/absolute/path/to/video-transcriber-mcp"]
}
}
}

Development:
# Install dependencies
npm install
# Build the project
npm run build
# Type check
npm run check
# Development mode (requires Bun)
bun run dev
# Clean build artifacts
npm run clean

Project structure:
video-transcriber-mcp/
├── src/
│ ├── index.ts # MCP server implementation
│ └── transcriber.ts # Core transcription logic
├── dist/ # Built JavaScript (generated)
├── package.json # Package configuration
├── tsconfig.json # TypeScript configuration
├── LICENSE # MIT License
└── README.md # This file
| Command | Description |
|---|---|
| npm run build | Compile TypeScript to JavaScript |
| npm run dev | Development mode with hot reload (Bun) |
| npm run check | TypeScript type checking |
| npm run clean | Remove dist/ directory |
| npm run prepublishOnly | Pre-publish build (automatic) |
# Build the project
npm run build
# Test locally first
npx . --help
# Publish to npm (bump version first)
npm version patch # or minor, major
npm publish
# Or publish from GitHub
# Push to GitHub and users can use:
# npx github:username/video-transcriber-mcp

If dependencies are missing, see the Prerequisites section above for platform-specific installation instructions.
If npx cannot find the package, make sure it is:
- Published to npm, OR
- Available on GitHub with a proper package.json
To check for TypeScript errors, run:
npm run check

The build process automatically makes dist/index.js executable via the fix-shebang script.
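The fix-shebang script itself isn't shown in this README; a hypothetical post-build step with the same effect might look like this (paths and details are assumptions):

```typescript
// Hypothetical post-build step: ensure dist/index.js starts with a shebang
// and is executable. The actual fix-shebang script may differ.
import { readFileSync, writeFileSync, chmodSync } from "node:fs";

const target = "dist/index.js";
const source = readFileSync(target, "utf8");

if (!source.startsWith("#!")) {
  writeFileSync(target, "#!/usr/bin/env node\n" + source);
}
chmodSync(target, 0o755); // rwxr-xr-x
```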
If a download fails, the platform might not be supported by yt-dlp. Run list_supported_sites to see all supported platforms.
| Video Length | Processing Time (base model) | Output Size |
|---|---|---|
| 5 minutes | ~1-2 minutes | ~5-10 KB |
| 10 minutes | ~2-4 minutes | ~10-20 KB |
| 30 minutes | ~5-10 minutes | ~30-50 KB |
| 1 hour | ~10-20 minutes | ~60-100 KB |
Times are approximate and depend on CPU speed and model choice
To use a different Whisper model, specify it in the tool call parameters:
{
"url": "https://youtube.com/watch?v=...",
"model": "large"
}

To transcribe in a specific language, specify the language code:
{
"url": "https://youtube.com/watch?v=...",
"language": "es"
}

To change the output location, specify it in the tool call:
{
"url": "https://youtube.com/watch?v=...",
"output_dir": "/custom/path"
}

Contributions welcome! Please:
- Fork the repository
- Create a feature branch
- Make your changes
- Add tests if applicable
- Submit a pull request
MIT License - see LICENSE file for details
This is the TypeScript version - great for:
- ✅ Quick setup with npx (no installation)
- ✅ Node.js ecosystem familiarity
- ✅ Easy to modify and extend
- ✅ Good for learning and prototyping
Consider the Rust version if you need:
- 🚀 Faster transcription (uses whisper.cpp)
- 💾 Lower memory usage
- ⚡ Native performance
- 📦 Standalone binary (no Node.js required)
Both versions support the same MCP protocol and work identically with Claude Code!
- GitHub Repository
- npm Package
- 🦀 Rust Version ← For better performance
- Issues
- Model Context Protocol
- OpenAI Whisper for transcription
- yt-dlp for multi-platform video downloading (1000+ sites)
- Model Context Protocol SDK
- Claude by Anthropic
Made with ❤️ for the MCP community