
NeMo DFM: Diffusion Foundation Models


Documentation | Supported Models | Examples | Contributing

Overview

NeMo DFM (Diffusion Foundation Models) is a library under the NeMo Framework focused on diffusion models for video, image, and text generation. It unifies cutting-edge diffusion-based architectures and training techniques, prioritizing efficiency and performance from research prototyping through production deployment.

Dual-Path Architecture: DFM provides two complementary training paths to maximize flexibility:

  • 🌉 Megatron Bridge Path: Built on NeMo Megatron Bridge, which leverages Megatron Core for maximum scalability with n-D parallelism (TP, PP, CP, EP, VPP, DP).
  • 🚀 AutoModel Path: Built on NeMo AutoModel for PyTorch DTensor-native SPMD training, offering easy experimentation and Day-0 support for 🤗 Hugging Face models.

Choose the path that best fits your workflow, or use both for different stages of development!

🔧 Installation

🐳 Build your own Container

1. Build the container

# Initialize all submodules (Megatron-Bridge, Automodel, and nested Megatron-LM)
git submodule update --init --recursive

# Build the container
docker build -f docker/Dockerfile.ci -t dfm:dev .

2. Start the container

docker run --rm -it --gpus all \
  --entrypoint bash \
  -v $(pwd):/opt/DFM dfm:dev

📦 Using DFM Docker (Coming Soon)

⚡ Quickstart

Megatron Bridge Path

Run a Recipe

You can find all predefined recipes under the recipes directory.

Note: Recipes must be run with uv; pass --group megatron-bridge.

uv run --group megatron-bridge python -m torch.distributed.run --nproc-per-node $num_gpus \
  examples/megatron/recipes/wan/pretrain_wan.py \
  --config-file examples/megatron/recipes/wan/conf/wan_1_3B.yaml \
  --training-mode pretrain \
  --mock

AutoModel Path

Train with PyTorch-native DTensor parallelism and direct 🤗 HF integration:

Run a Recipe

You can find pre-configured recipes under the automodel/finetune and automodel/pretrain directories.

Note: AutoModel examples live under dfm/examples/automodel. Use uv with --group automodel. Configs are YAML-driven; pass -c <path> to override the default.

The fine-tune recipe sets up WAN 2.1 text-to-video training with Flow Matching using FSDP2 hybrid sharding. It shards the heavy transformer blocks while keeping lightweight modules (e.g., the VAE) unsharded for efficiency. Adjust batch sizes, learning rate, and parallel sizes in dfm/examples/automodel/finetune/wan2_1_t2v_flow.yaml. The generation script demonstrates distributed inference with AutoModel DTensor managers, producing an MP4 on rank 0; frame size, frame count, sampling steps, and CFG can be adjusted via flags.

# Fine-tune WAN 2.1 T2V with FSDP2 (single node, 8 GPUs)
uv run --group automodel torchrun --nproc-per-node=8 \
  dfm/examples/automodel/finetune/finetune.py \
  -c dfm/examples/automodel/finetune/wan2_1_t2v_flow.yaml

# Generate videos with FSDP2 (distributed inference)
uv run --group automodel torchrun --nproc-per-node=8 \
  dfm/examples/automodel/generate/wan_generate.py
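The CFG knob mentioned above refers to classifier-free guidance, which combines the model's conditional and unconditional predictions at each sampling step. A minimal illustrative sketch of the usual combination rule (function and variable names here are hypothetical, not DFM's API):

```python
def apply_cfg(uncond, cond, guidance_scale):
    """Classifier-free guidance: extrapolate from the unconditional
    prediction toward the conditional one by guidance_scale."""
    return [u + guidance_scale * (c - u) for u, c in zip(uncond, cond)]

# guidance_scale = 1.0 recovers the conditional prediction unchanged
print(apply_cfg([0.0, 1.0], [1.0, 3.0], 1.0))  # [1.0, 3.0]
# Larger scales extrapolate further in the text-conditioned direction
print(apply_cfg([0.0, 1.0], [1.0, 3.0], 2.0))  # [2.0, 5.0]
```

Higher guidance scales generally improve prompt adherence at the cost of sample diversity.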

🚀 Key Features

Dual Training Paths

Megatron Bridge delivers maximum throughput and scalability, with near-linear scaling to thousands of nodes. AutoModel provides an easy on-ramp for experimentation and research with PyTorch-native SPMD training.

Shared Capabilities

  • 🎥 Multi-Modal Diffusion: Support for video, image, and text generation
  • 🔬 Advanced Samplers: EDM, Flow Matching, and custom diffusion schedules
  • 🎭 Flexible Architectures: DiT (Diffusion Transformers), WAN (World Action Networks)
  • 📊 Efficient Data Loading: Data pipelines with sequence packing
  • 💾 Distributed Checkpointing: SafeTensors-based sharded checkpoints
  • 🌟 Memory Optimization: Gradient checkpointing, mixed precision, efficient attention
  • 🤗 HuggingFace Integration: Seamless integration with the HF ecosystem
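For intuition, the Flow Matching objective listed above can be sketched in a few lines: the model is trained to regress the constant velocity of a straight-line path from noise to data. This is a minimal rectified-flow-style sketch under that assumption; the names are illustrative, not DFM's API:

```python
def flow_matching_pair(x0, x1, t):
    """Straight-line probability path from noise x0 to data x1.

    Returns the interpolated sample x_t = (1 - t) * x0 + t * x1 and the
    velocity regression target x1 - x0, which is constant along the path.
    """
    x_t = [(1 - t) * a + t * b for a, b in zip(x0, x1)]
    velocity = [b - a for a, b in zip(x0, x1)]
    return x_t, velocity

# Halfway along the path from a noise sample to a data sample
x_t, v = flow_matching_pair([0.0, 0.0], [2.0, -4.0], t=0.5)
print(x_t)  # [1.0, -2.0]
print(v)    # [2.0, -4.0]
```

At sampling time, integrating the learned velocity field from t = 0 to t = 1 (e.g., with an Euler solver) carries noise to data.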

Supported Models

DFM provides out-of-the-box support for state-of-the-art diffusion architectures:

Model     Type         Megatron Bridge                 AutoModel                       Description
DiT       Image/Video  pretrain, inference             🔜                              Diffusion Transformers with scalable architecture
WAN 2.1   Video        pretrain, finetune, inference   pretrain, finetune, inference   World Action Networks for video generation

Performance Benchmarking

For detailed performance benchmarks including throughput metrics across different GPU systems and model configurations, see the [Performance Summary](https://github.com/NVIDIA-NeMo/DFM/blob/main/docs/performance-summary.md) in our documentation.

Project Structure

DFM/
├── dfm/
│   └── src/
│       ├── megatron/              # Megatron Bridge path
│       │   ├── base/              # Base utilities for Megatron
│       │   ├── data/              # Data loaders and task encoders
│       │   │   ├── common/        # Shared data utilities
│       │   │   ├── <model_name>/  # Model-specific data handling
│       │   ├── model/             # Model implementations
│       │   │   ├── common/        # Shared model components
│       │   │   ├── <model_name>/  # Model-specific implementations
│       │   └── recipes/           # Training recipes
│       │       ├── <model_name>/  # Model-specific training configs
│       ├── automodel/             # AutoModel path (DTensor-native)
│       │   ├── _diffusers/        # Diffusion pipeline integrations
│       │   ├── datasets/          # Dataset implementations
│       │   ├── distributed/       # Parallelization strategies
│       │   ├── flow_matching/     # Flow matching implementations
│       │   ├── recipes/           # Training scripts
│       │   └── utils/             # Utilities and validation
│       └── common/                # Shared across both paths
│           ├── data/              # Common data utilities
│           └── utils/             # Batch ops, video utils, etc.
├── examples/                      # Example scripts and configs

🤝 Contributing

We welcome contributions! Please see our Contributing Guide for details on:

  • Setting up your development environment
  • Code style and testing guidelines
  • Submitting pull requests
  • Reporting issues

For questions or discussions, please open an issue on GitHub.

Acknowledgements

NeMo DFM builds upon the excellent work of:
