A TypeScript toolkit for building AI-driven video workflows on the server, powered by Mux!
@mux/ai does this by providing:
- Easy-to-use, purpose-driven, cost-effective, configurable workflow functions that integrate with a variety of popular AI/LLM providers (OpenAI, Anthropic, Google).
  - Examples: `getSummaryAndTags`, `getModerationScores`, `hasBurnedInCaptions`, `generateChapters`, `generateVideoEmbeddings`, `translateCaptions`, `translateAudio`
  - Workflows automatically ship with `"use workflow"` compatibility with Workflow DevKit
- Convenient, parameterized, commonly needed primitive functions backed by Mux Video for building your own media-based AI workflows and integrations.
  - Examples: `getStoryboardUrl`, `chunkVTTCues`, `fetchTranscriptForAsset`
```typescript
import { getSummaryAndTags } from "@mux/ai/workflows";

const result = await getSummaryAndTags("your-asset-id", {
  provider: "openai",
  tone: "professional",
  includeTranscript: true
});

console.log(result.title);       // "Getting Started with TypeScript"
console.log(result.description); // "A comprehensive guide to..."
console.log(result.tags);        // ["typescript", "tutorial", "programming"]
```
⚠️ Important: Many workflows rely on video transcripts for best results. Consider enabling auto-generated captions on your Mux assets to unlock the full potential of transcript-based workflows like summarization, chapters, and embeddings.
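If your assets don't have captions yet, you can request auto-generated subtitles when creating an asset. A minimal sketch using the official `@mux/mux-node` SDK — exact field names (e.g. `input` vs `inputs`, `playback_policy` vs `playback_policies`) vary across Mux API/SDK versions, so treat this as an assumption and check the Mux docs for your version:

```typescript
import Mux from "@mux/mux-node";

// Reads MUX_TOKEN_ID / MUX_TOKEN_SECRET from the environment
const mux = new Mux();

// Create an asset with auto-generated English captions enabled.
// NOTE: field names here are assumptions — verify against the Mux API docs
// for the SDK version you're using.
const asset = await mux.video.assets.create({
  input: [
    {
      url: "https://example.com/my-video.mp4",
      generated_subtitles: [{ language_code: "en", name: "English (auto-generated)" }],
    },
  ],
  playback_policy: ["public"],
});
```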
- Node.js (≥ 21.0.0)
- A Mux account and necessary credentials for your environment (sign up here for free!)
- Accounts and credentials for any AI providers you intend to use for your workflows
- (For some workflows only) AWS S3 and other credentials
```bash
npm install @mux/ai
```

We support dotenv, so you can simply add the following environment variables to your `.env` file:
```bash
# Required
MUX_TOKEN_ID=your_mux_token_id
MUX_TOKEN_SECRET=your_mux_token_secret

# Needed if your assets _only_ have signed playback IDs
MUX_SIGNING_KEY=your_signing_key_id
MUX_PRIVATE_KEY=your_base64_encoded_private_key

# You only need to configure API keys for the AI platforms and workflows you're using
OPENAI_API_KEY=your_openai_api_key
ANTHROPIC_API_KEY=your_anthropic_api_key
GOOGLE_GENERATIVE_AI_API_KEY=your_google_api_key
ELEVENLABS_API_KEY=your_elevenlabs_api_key

# S3-Compatible Storage (required for translation & audio dubbing)
S3_ENDPOINT=https://your-s3-endpoint.com
S3_REGION=auto
S3_BUCKET=your-bucket-name
S3_ACCESS_KEY_ID=your-access-key
S3_SECRET_ACCESS_KEY=your-secret-key
```

💡 Tip: If you're using `.env` in a repository or version tracking system, make sure you add this file to your `.gitignore` or equivalent to avoid unintentionally committing secure credentials.
| Workflow | Description | Providers | Default Models | Mux Asset Requirements | Cloud Infrastructure Requirements |
|---|---|---|---|---|---|
| `getSummaryAndTags` (API · Source) | Generate titles, descriptions, and tags for an asset | OpenAI, Anthropic, Google | gpt-5.1 (OpenAI), claude-sonnet-4-5 (Anthropic), gemini-3-flash-preview (Google) | Video (required), Captions (optional) | None |
| `getModerationScores` (API · Source) | Detect inappropriate (sexual or violent) content in an asset | OpenAI, Hive | omni-moderation-latest (OpenAI) or Hive visual moderation task | Video (required) | None |
| `hasBurnedInCaptions` (API · Source) | Detect burned-in captions (hardcoded subtitles) in an asset | OpenAI, Anthropic, Google | gpt-5.1 (OpenAI), claude-sonnet-4-5 (Anthropic), gemini-3-flash-preview (Google) | Video (required) | None |
| `generateChapters` (API · Source) | Generate chapter markers for an asset using the transcript | OpenAI, Anthropic, Google | gpt-5.1 (OpenAI), claude-sonnet-4-5 (Anthropic), gemini-3-flash-preview (Google) | Video (required), Captions (required) | None |
| `generateVideoEmbeddings` (API · Source) | Generate vector embeddings for an asset's transcript chunks | OpenAI, Google | text-embedding-3-small (OpenAI), gemini-embedding-001 (Google) | Video (required), Captions (required) | None |
| `translateCaptions` (API · Source) | Translate an asset's captions into different languages | OpenAI, Anthropic, Google | gpt-5.1 (OpenAI), claude-sonnet-4-5 (Anthropic), gemini-3-flash-preview (Google) | Video (required), Captions (required) | AWS S3 (if uploadToMux=true) |
| `translateAudio` (API · Source) | Create AI-dubbed audio tracks in different languages for an asset | ElevenLabs only | ElevenLabs Dubbing API | Video (required), Audio (required) | AWS S3 (if uploadToMux=true) |
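The translation workflows aren't shown in the examples below, but they follow the same call shape. A hedged sketch of `translateCaptions` — the option names other than `provider` and `uploadToMux` (which appear in this README) are illustrative assumptions, not the confirmed signature; see the API Reference for the real parameters:

```typescript
import { translateCaptions } from "@mux/ai/workflows";

// targetLanguages is a hypothetical option name shown for illustration only.
// uploadToMux defaults to true, which requires the S3-compatible storage
// configuration described later in this README.
const result = await translateCaptions("your-asset-id", {
  provider: "openai",
  targetLanguages: ["es", "fr"],
  uploadToMux: true
});
```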
All workflows are compatible with Workflow DevKit. The workflows in this SDK are exported with `"use workflow"` and `"use step"` directives in the code.
If you are using Workflow DevKit in your project, you must call workflow functions like this:
```typescript
import { start } from 'workflow/api';
import { getSummaryAndTags } from '@mux/ai/workflows';

const assetId = 'YOUR_ASSET_ID';
const run = await start(getSummaryAndTags, [assetId]);

// optionally, wait for the workflow run return value:
// const result = await run.returnValue
```

Running workflows this way also gives you Workflow DevKit features like:

- Observability Dashboard
- Control Flow Patterns like Parallel Execution (see the sketch after this list)
- Errors and Retrying
- Hooks and Webhooks
- Patterns for building Agents with Human in the Loop
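Because each workflow is an ordinary async function, independent workflows can be fanned out with `Promise.all`. A minimal sketch of the parallel execution pattern, assuming your Workflow DevKit setup supports parallel awaits — the functions and options used here are the ones already shown in this README:

```typescript
import { start } from "workflow/api";
import { getSummaryAndTags, getModerationScores } from "@mux/ai/workflows";

// A workflow that runs summarization and moderation for the same asset in parallel
async function analyzeAsset(assetId: string) {
  "use workflow";

  const [summary, moderation] = await Promise.all([
    getSummaryAndTags(assetId, { provider: "openai" }),
    getModerationScores(assetId, { provider: "openai" })
  ]);

  return { assetId, summary, moderation };
}

const run = await start(analyzeAsset, ["YOUR_ASSET_ID"]);
```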
Workflows can be nested:
import { start } from "workflow/api";
import { getSummaryAndTags } from '@mux/ai/workflows';
async function processVideoSummary (assetId: string) {
'use workflow'
const summary = await getSummaryAndTags(assetId);
const emailResp = await emailSummaryToAdmins(summary: summary);
return { assetId, summary, emailResp }
}
async function emailSummaryToAdmins (assetId: string) {
'use step';
return { sent: true }
}
//
// this will call the processVideoSummary workflow that is defined above
// in that workflow, it calls `getSummaryAndTags()` workflow
//
const run = await start(processVideoSummary, [assetId]);Generate SEO-friendly titles, descriptions, and tags from your video content:
```typescript
import { getSummaryAndTags } from "@mux/ai/workflows";

const result = await getSummaryAndTags("your-asset-id", {
  provider: "openai",
  tone: "professional",
  includeTranscript: true
});

console.log(result.title);       // "Getting Started with TypeScript"
console.log(result.description); // "A comprehensive guide to..."
console.log(result.tags);        // ["typescript", "tutorial", "programming"]
```

Automatically detect inappropriate content in videos:
```typescript
import { getModerationScores } from "@mux/ai/workflows";

const result = await getModerationScores("your-asset-id", {
  provider: "openai",
  thresholds: { sexual: 0.7, violence: 0.8 }
});

if (result.exceedsThreshold) {
  console.log("Content flagged for review");
  console.log("Max scores:", result.maxScores);
}
```

Create automatic chapter markers for better video navigation:
```typescript
import { generateChapters } from "@mux/ai/workflows";

const result = await generateChapters("your-asset-id", "en", {
  provider: "anthropic"
});

// Use with Mux Player
player.addChapters(result.chapters);
// [
//   { startTime: 0, title: "Introduction" },
//   { startTime: 45, title: "Main Content" },
//   { startTime: 120, title: "Conclusion" }
// ]
```

Generate embeddings for semantic video search:
```typescript
import { generateVideoEmbeddings } from "@mux/ai/workflows";

const result = await generateVideoEmbeddings("your-asset-id", {
  provider: "openai",
  languageCode: "en",
  chunkingStrategy: {
    type: "token",
    maxTokens: 500,
    overlap: 100
  }
});

// Store embeddings in your vector database
for (const chunk of result.chunks) {
  await vectorDB.insert({
    embedding: chunk.embedding,
    metadata: {
      assetId: result.assetId,
      startTime: chunk.metadata.startTime,
      endTime: chunk.metadata.endTime
    }
  });
}
```
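On the query side, you can embed the search text with the same model and run a similarity search against whatever store you used above. A minimal sketch assuming the Vercel AI SDK (`ai` + `@ai-sdk/openai`) and the same hypothetical `vectorDB` placeholder as the loop above:

```typescript
import { embed } from "ai";
import { openai } from "@ai-sdk/openai";

// Embed the user's query with the same model used for the transcript chunks
// (text-embedding-3-small is the OpenAI default listed in the table above)
const { embedding } = await embed({
  model: openai.embedding("text-embedding-3-small"),
  value: "where do they explain generics?"
});

// vectorDB.query is a placeholder for your own vector store's search API
const matches = await vectorDB.query({ embedding, topK: 5 });
// Each match's metadata carries assetId/startTime/endTime, so you can
// deep-link playback to the matching moment in the video
```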
- Cost-Effective by Default: Uses affordable frontier models like `gpt-5.1`, `claude-sonnet-4-5`, and `gemini-3-flash-preview` to keep analysis costs low while maintaining high-quality results
- Multi-modal Analysis: Combines storyboard images with video transcripts for richer understanding
- Tone Control: Choose between neutral, playful, or professional analysis styles for summarization
- Prompt Customization: Override specific prompt sections to tune workflows to your exact use case
- Configurable Thresholds: Set custom sensitivity levels for content moderation
- Full TypeScript Support: Comprehensive types for excellent developer experience and IDE autocomplete
- Provider Flexibility: Switch between OpenAI, Anthropic, Google, and other providers based on your needs
- Composable Building Blocks: Use primitives to fetch transcripts, thumbnails, and storyboards for custom workflows
- Universal Language Support: Automatic language name detection using `Intl.DisplayNames` for all ISO 639-1 codes
- Production Ready: Built-in retry logic, error handling, and edge case management
@mux/ai is built around two complementary abstractions:
Workflows are functions that handle complete video AI tasks end-to-end. Each workflow orchestrates the entire process: fetching video data from Mux (transcripts, thumbnails, storyboards), formatting it for AI providers, and returning structured results.
```typescript
import { getSummaryAndTags } from "@mux/ai/workflows";

const result = await getSummaryAndTags("asset-id", { provider: "openai" });
```

Use workflows when you need battle-tested solutions for common tasks like summarization, content moderation, chapter generation, or translation.
Primitives are low-level building blocks that give you direct access to Mux video data and utilities. They provide functions for fetching transcripts, storyboards, thumbnails, and processing text—perfect for building custom workflows.
```typescript
import { fetchTranscriptForAsset, getStoryboardUrl } from "@mux/ai/primitives";

const transcript = await fetchTranscriptForAsset("asset-id", "en");
const storyboard = getStoryboardUrl("playback-id", { width: 640 });
```

Use primitives when you need complete control over your AI prompts or want to build custom workflows not covered by the pre-built options.
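For instance, you could combine the transcript primitive with your own prompt. A minimal sketch assuming the Vercel AI SDK (`ai` + `@ai-sdk/openai`) and that the transcript can be interpolated as text — check the Primitives Guide for the exact return shape of `fetchTranscriptForAsset`:

```typescript
import { fetchTranscriptForAsset } from "@mux/ai/primitives";
import { generateText } from "ai";
import { openai } from "@ai-sdk/openai";

// Fetch the English transcript for the asset (assumption: usable as plain text)
const transcript = await fetchTranscriptForAsset("asset-id", "en");

// Bring your own prompt — here, extracting action items from the video
const { text } = await generateText({
  model: openai("gpt-5.1"),
  prompt: `List any action items mentioned in this video transcript:\n\n${transcript}`
});

console.log(text);
```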
```typescript
// Import workflows
import { generateChapters } from "@mux/ai/workflows";

// Import primitives
import { fetchTranscriptForAsset } from "@mux/ai/primitives";

// Or import everything
import { workflows, primitives } from "@mux/ai";
```

You'll need to set up credentials for Mux as well as any AI provider you want to use for a particular workflow. In addition, some workflows will need other cloud-hosted access (e.g. cloud storage via AWS S3).
All workflows require a Mux API access token to interact with your video assets. If you're already logged into the dashboard, you can create a new access token here.
Required Permissions:
- Mux Video: Read + Write access
- Mux Data: Read access
These permissions cover all current workflows. You can set these when creating your token in the dashboard.
💡 Tip: For security reasons, consider creating a dedicated access token specifically for your AI workflows rather than reusing existing tokens.
If your Mux assets use signed playback URLs for security, you'll need to provide signing credentials so @mux/ai can access the video data.
When needed: Only if your assets have signed playback policies enabled and no public playback ID.
How to get:
- Go to Settings > Signing Keys in your Mux dashboard
- Create a new signing key or use an existing one
- Save both the Signing Key ID and the Base64-encoded Private Key
Configuration:
```bash
MUX_SIGNING_KEY=your_signing_key_id
MUX_PRIVATE_KEY=your_base64_encoded_private_key
```

Different workflows support various AI providers. You only need to configure API keys for the providers you plan to use.
Used by: getSummaryAndTags, getModerationScores, hasBurnedInCaptions, generateChapters, generateVideoEmbeddings, translateCaptions
Get your API key: OpenAI API Keys
```bash
OPENAI_API_KEY=your_openai_api_key
```

Used by: getSummaryAndTags, hasBurnedInCaptions, generateChapters, translateCaptions
Get your API key: Anthropic Console
```bash
ANTHROPIC_API_KEY=your_anthropic_api_key
```

Used by: getSummaryAndTags, hasBurnedInCaptions, generateChapters, generateVideoEmbeddings, translateCaptions
Get your API key: Google AI Studio
```bash
GOOGLE_GENERATIVE_AI_API_KEY=your_google_api_key
```

Used by: translateAudio (audio dubbing)
Get your API key: ElevenLabs API Keys
Note: Requires a Creator plan or higher for dubbing features.
```bash
ELEVENLABS_API_KEY=your_elevenlabs_api_key
```

Used by: getModerationScores (alternative to OpenAI moderation)
Get your API key: Hive Console
```bash
HIVE_API_KEY=your_hive_api_key
```

Required for: translateCaptions, translateAudio (only if uploadToMux is true, which is the default)
Translation workflows need temporary storage to upload translated files before attaching them to your Mux assets. Any S3-compatible storage service works (AWS S3, Cloudflare R2, DigitalOcean Spaces, etc.).
AWS S3 Setup:
- Create an S3 bucket
- Create an IAM user with programmatic access
- Attach a policy with `s3:PutObject`, `s3:GetObject`, and `s3:PutObjectAcl` permissions for your bucket
Configuration:
```bash
S3_ENDPOINT=https://s3.amazonaws.com   # Or your S3-compatible endpoint
S3_REGION=us-east-1                    # Your bucket region
S3_BUCKET=your-bucket-name
S3_ACCESS_KEY_ID=your-access-key
S3_SECRET_ACCESS_KEY=your-secret-key
```

Cloudflare R2 Example:
```bash
S3_ENDPOINT=https://your-account-id.r2.cloudflarestorage.com
S3_REGION=auto
S3_BUCKET=your-bucket-name
S3_ACCESS_KEY_ID=your-r2-access-key
S3_SECRET_ACCESS_KEY=your-r2-secret-key
```

- Workflows Guide - Detailed guide to each pre-built workflow with examples
- API Reference - Complete API documentation for all functions, parameters, and return types
- Primitives Guide - Low-level building blocks for custom workflows
- Examples - Running examples from the repository
- Mux Video API Docs - Learn about Mux Video features
- Auto-generated Captions - Enable transcripts for your assets
- GitHub Repository - Source code, issues, and contributions
- npm Package - Package page and version history
We welcome contributions! Whether you're fixing bugs, adding features, or improving documentation, we'd love your help.
Please see our Contributing Guide for details on:
- Setting up your development environment
- Running examples and tests
- Code style and conventions
- Submitting pull requests
- Reporting issues
For questions or discussions, feel free to open an issue.