[CVPR2025] Unveil Inversion and Invariance in Flow Transformer for Versatile Image Editing
(ICCV 2025) 🎨 Lay-Your-Scene: Natural Scene Layout Generation with Diffusion Transformers
This repository implements multiple generative diffusion frameworks (EDM, Consistency Models, etc.) as well as several architectures (U-Net, Diffusion Transformers, etc.).
Official training code for the MUG-V 10B video generation model. Built on Megatron-LM (v0.14.0) with production-ready distributed training for the 10B DiT.
Torchsmith is a minimalist library focused on understanding generative AI by building it from primitive PyTorch operations.
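In that building-from-primitives spirit, here is a minimal sketch (assuming nothing about Torchsmith's actual API) of scaled dot-product attention written with only basic PyTorch tensor ops:

```python
import math
import torch

def attention(q, k, v):
    # q, k, v: (batch, heads, seq, dim) tensors.
    # Scaled dot-product attention from primitives only:
    # matmul, a scalar divide, and softmax.
    scores = q @ k.transpose(-2, -1) / math.sqrt(q.size(-1))
    weights = torch.softmax(scores, dim=-1)
    return weights @ v

q = k = v = torch.randn(1, 4, 16, 32)
out = attention(q, k, v)  # (1, 4, 16, 32)
```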
TQ-DiT: Efficient Time-Aware Quantization for Diffusion Transformers
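The intuition behind time-aware quantization is that activation ranges in a diffusion transformer drift across the denoising trajectory, so calibrating one scale per group of timesteps can beat a single global scale. An illustrative sketch of that idea (not TQ-DiT's actual code; all names here are hypothetical):

```python
import torch

def quantize_uniform(x, scale, bits=8):
    # Symmetric uniform quantize-dequantize with a given scale.
    qmax = 2 ** (bits - 1) - 1
    q = torch.clamp(torch.round(x / scale), -qmax, qmax)
    return q * scale

def calibrate_per_group(acts_by_step, group_size=10, bits=8):
    # Time-aware calibration sketch: fit one scale per group of
    # timesteps instead of one global scale, tracking the max
    # absolute activation seen within each group.
    qmax = 2 ** (bits - 1) - 1
    scales = {}
    for t, act in acts_by_step.items():
        g = t // group_size
        scales[g] = max(scales.get(g, 0.0), act.abs().max().item() / qmax)
    return scales

# Toy calibration set whose range grows with the timestep index.
acts = {t: torch.randn(128) * (1 + t / 10) for t in range(50)}
scales = calibrate_per_group(acts)
x_hat = quantize_uniform(acts[7], scales[7 // 10])
```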
InvarDiff: Cross-Scale Invariance Caching for Accelerated Diffusion Models
Fine-tuning the Flux Schnell diffusion transformer model across hardware configurations
Derf (Dynamic erf) - Normalization-Free Transformer Activation. Reimplementation of arXiv:2512.10938
PyTorch and JAX implementation of Scalable Diffusion Models with Transformers | Diffusion Transformers in PyTorch and JAX
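The building block such implementations share is the DiT block from "Scalable Diffusion Models with Transformers": a transformer block whose normalization scale/shift/gate parameters are regressed from the timestep/class conditioning and zero-initialized (adaLN-Zero). A minimal PyTorch sketch of that block (module names are mine, not from either repo):

```python
import torch
import torch.nn as nn

class DiTBlock(nn.Module):
    """Minimal DiT-style block: LayerNorm affine parameters are replaced
    by scale/shift/gate vectors regressed from the conditioning (adaLN-Zero)."""
    def __init__(self, dim, heads):
        super().__init__()
        self.norm1 = nn.LayerNorm(dim, elementwise_affine=False)
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm2 = nn.LayerNorm(dim, elementwise_affine=False)
        self.mlp = nn.Sequential(
            nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim)
        )
        # Regress 6 modulation vectors from the conditioning; zero-init the
        # projection so every block starts as the identity (adaLN-Zero).
        self.adaLN = nn.Sequential(nn.SiLU(), nn.Linear(dim, 6 * dim))
        nn.init.zeros_(self.adaLN[1].weight)
        nn.init.zeros_(self.adaLN[1].bias)

    def forward(self, x, c):
        # x: (batch, tokens, dim) latent patches; c: (batch, dim) embedding.
        s1, b1, g1, s2, b2, g2 = self.adaLN(c).chunk(6, dim=-1)
        h = self.norm1(x) * (1 + s1.unsqueeze(1)) + b1.unsqueeze(1)
        x = x + g1.unsqueeze(1) * self.attn(h, h, h, need_weights=False)[0]
        h = self.norm2(x) * (1 + s2.unsqueeze(1)) + b2.unsqueeze(1)
        return x + g2.unsqueeze(1) * self.mlp(h)

blk = DiTBlock(64, 4)
y = blk(torch.randn(2, 16, 64), torch.randn(2, 64))  # (2, 16, 64)
```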
[NeurIPS2024 (Spotlight)] "Unified Gradient-Based Machine Unlearning with Remain Geometry Enhancement" by Zhehao Huang, Xinwen Cheng, JingHao Zheng, Haoran Wang, Zhengbao He, Tao Li, Xiaolin Huang
A diffusion transformer implementation in Flax
Leverage SANA's capabilities using LitServe.
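A minimal sketch of what serving a SANA pipeline through LitServe can look like, assuming diffusers' `SanaPipeline` and the checkpoint ID shown (both are assumptions; substitute whatever the repo actually uses):

```python
import base64
import io

import litserve as ls
import torch
from diffusers import SanaPipeline  # assumed pipeline class

class SanaAPI(ls.LitAPI):
    def setup(self, device):
        # Load the text-to-image pipeline once per worker.
        self.pipe = SanaPipeline.from_pretrained(
            "Efficient-Large-Model/Sana_1600M_1024px_diffusers",  # assumed ID
            torch_dtype=torch.bfloat16,
        ).to(device)

    def decode_request(self, request):
        return request["prompt"]

    def predict(self, prompt):
        return self.pipe(prompt=prompt).images[0]

    def encode_response(self, image):
        # Return the generated image as base64-encoded PNG.
        buf = io.BytesIO()
        image.save(buf, format="PNG")
        return {"image": base64.b64encode(buf.getvalue()).decode()}

if __name__ == "__main__":
    ls.LitServer(SanaAPI(), accelerator="auto").run(port=8000)
```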
Implementation of a Latent Diffusion Transformer model in TensorFlow / Keras
Democratising RGBA Image Generation With No $$$ (AI4VA@ECCV24)
CVPRW 2025 paper "Progressive Autoregressive Video Diffusion Models": https://arxiv.org/abs/2410.08151
[NeurIPS 2025] Official implementation for our paper "Scaling Diffusion Transformers Efficiently via μP".
[ICML 2025] This is the official PyTorch implementation of "🎵 HarmoniCa: Harmonizing Training and Inference for Better Feature Caching in Diffusion Transformer Acceleration".
FORA introduces a simple yet effective caching mechanism in the Diffusion Transformer architecture for faster inference sampling.
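The common idea behind feature caching in diffusion transformers (FORA here, and relatedly HarmoniCa and InvarDiff above) is that block outputs change slowly between adjacent denoising steps, so they can be recomputed only every N steps and reused in between. An illustrative sketch of static-interval caching, not FORA's actual code:

```python
import torch
import torch.nn as nn

class CachedBlock(nn.Module):
    """Illustrative wrapper: recompute the wrapped block's output every
    `interval` denoising steps and reuse the cached tensor in between."""
    def __init__(self, block, interval=3):
        super().__init__()
        self.block = block
        self.interval = interval
        self.cache = None

    def forward(self, x, step):
        if self.cache is None or step % self.interval == 0:
            self.cache = self.block(x)
        return self.cache

# Usage: wrap each transformer block, then pass the sampler's step index.
block = CachedBlock(nn.Linear(64, 64), interval=3)
x = torch.randn(2, 16, 64)
for step in range(10):
    y = block(x, step)  # recomputed on steps 0, 3, 6, 9 only
```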