Implementation of Latent Diffusion Transformer Model in Tensorflow / Keras
🔥🔥🔥 Official codebase of "DiT-3D: Exploring Plain Diffusion Transformers for 3D Shape Generation"
FORA introduces a simple yet effective caching mechanism into the Diffusion Transformer architecture for faster inference sampling.
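The idea behind interval-based feature caching can be sketched in a few lines: recompute a transformer block only every N denoising steps and reuse the cached output in between. This is an illustrative toy, not FORA's actual API; the class and parameter names here are invented for the example.

```python
class CachedBlock:
    """Toy sketch of interval-based feature caching (in the spirit of FORA).

    The wrapped block is recomputed every `interval` denoising steps;
    on the steps in between, the previously cached output is reused,
    trading a small approximation error for fewer forward passes.
    """

    def __init__(self, block_fn, interval=3):
        self.block_fn = block_fn    # the expensive transformer block
        self.interval = interval    # recompute every `interval` steps
        self.cache = None

    def __call__(self, x, step):
        if self.cache is None or step % self.interval == 0:
            self.cache = self.block_fn(x)  # full recompute on refresh steps
        return self.cache                  # cached result otherwise
```

Over a 6-step loop with `interval=3`, the block runs only twice (steps 0 and 3), which is the source of the inference speed-up.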
Minimal DDPM/DiT-based generation of MNIST digits
Code for our ICCV 2025 paper "Adaptive Caching for Faster Video Generation with Diffusion Transformers"
[ICCV 2023] Efficient Diffusion Training via Min-SNR Weighting Strategy
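The Min-SNR weighting strategy referenced above can be summarized as clamping each timestep's loss weight at a constant: for epsilon-prediction, the per-timestep weight is min(SNR(t), γ) / SNR(t), which down-weights easy high-SNR (low-noise) timesteps. A minimal sketch, assuming the common default γ = 5 (the function name here is illustrative):

```python
import numpy as np

def min_snr_weight(snr, gamma=5.0):
    """Min-SNR loss weight for epsilon-prediction: min(SNR, gamma) / SNR.

    High-SNR (low-noise) timesteps are easy to denoise; capping their
    weight at gamma / SNR balances training effort across noise levels.
    """
    snr = np.asarray(snr, dtype=np.float64)
    return np.minimum(snr, gamma) / snr
```

For example, a timestep with SNR = 10 gets weight 0.5, while any timestep with SNR ≤ γ keeps weight 1.0.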
Implementation of F5-TTS in Swift using MLX
Flux Schnell diffusion transformer model fine tuning across hardware configurations
This repo implements Diffusion Transformers (DiT) in PyTorch and provides training and inference code on the CelebHQ dataset
This repo implements a video generation model using Latent Diffusion Transformers (Latte) in PyTorch and provides training and inference code on the Moving MNIST and UCF101 datasets
TQ-DiT: Efficient Time-Aware Quantization for Diffusion Transformers
Lumina-T2X is a unified framework for Text to Any Modality Generation
Leverage SANA's capabilities using LitServe.
DiT-VTON: Exploring Diffusion Transformer Framework for Multi-Category Virtual Try-On with Integrated Image Customization
Implementation of F5-TTS in MLX
[NeurIPS 2024] DreamClear: High-Capacity Real-World Image Restoration with Privacy-Safe Dataset Curation
[ACL 2023] The official implementation of "CAME: Confidence-guided Adaptive Memory Optimization"
[🚀ICML 2025] "Taming Rectified Flow for Inversion and Editing" Using FLUX and HunyuanVideo for image and video editing!
Implementation of Diffusion Transformer Model in Pytorch