Stars
Post-training with Tinker
Official codebase for "Self Forcing: Bridging Training and Inference in Autoregressive Video Diffusion" (NeurIPS 2025 Spotlight)
Taming large-scale full-parameter few-step training with self-adversarial flows! 👏🏻
ModelTC / Wan2.2-Lightning
Forked from Wan-Video/Wan2.2. Wan2.2-Lightning speeds up the Wan2.2 model with distillation.
The world's first open-source multimodal creative assistant: a privacy-first alternative to Canva and Manus that runs locally.
Official inference repo for FLUX.2 models
HunyuanVideo-1.5: A leading lightweight video generation model
[CVPR 2024] Official PyTorch implementation of "Misalignment-Robust Frequency Distribution Loss for Image Transformation"
[ICCV 2025] LeanVAE: An Ultra-Efficient Reconstruction VAE for Video Diffusion Models
pi-Flow: Policy-Based Few-Step Generation via Imitation Distillation
Krea Realtime 14B. An open-source realtime AI video model.
Optimal transport tools implemented with the JAX framework to solve large-scale matching problems of any flavor.
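As an illustration of a typical OTT use case, here is a minimal Sinkhorn matching between two point clouds. The API names (PointCloud, LinearProblem, Sinkhorn, reg_ot_cost, matrix) follow recent ott-jax releases and may differ in older versions; the data and epsilon value are arbitrary.

```python
import jax
from ott.geometry import pointcloud
from ott.problems.linear import linear_problem
from ott.solvers.linear import sinkhorn

# Two small toy point clouds to match.
key_x, key_y = jax.random.split(jax.random.PRNGKey(0))
x = jax.random.normal(key_x, (128, 2))
y = jax.random.normal(key_y, (256, 2)) + 1.0

# Entropy-regularized OT with the default squared-Euclidean cost.
geom = pointcloud.PointCloud(x, y, epsilon=0.05)
out = sinkhorn.Sinkhorn()(linear_problem.LinearProblem(geom))

print(out.reg_ot_cost)   # regularized OT cost
coupling = out.matrix    # (128, 256) transport plan
```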
Kandinsky 5.0: A family of diffusion models for Video & Image generation
HunyuanImage-3.0: A Powerful Native Multimodal Model for Image Generation
Official Code for "Rethinking Diffusion Model in High Dimension"
Flash Attention Triton kernel with support for second-order derivatives
HunyuanImage-2.1: An Efficient Diffusion Model for High-Resolution (2K) Text-to-Image Generation
Official GitHub repo for the NeurIPS 2024 paper "Immiscible Diffusion: Accelerating Diffusion Training with Noise Assignment"
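The title names the core trick: before the usual diffusion step, noise is assigned to images within a batch so that each image is paired with a nearby noise sample. A minimal sketch of that batch-wise assignment, assuming a Hungarian-style matching on L2 distances (the helper name and exact cost are illustrative, not the repo's code):

```python
import torch
from scipy.optimize import linear_sum_assignment

def assign_noise_immiscibly(x: torch.Tensor, noise: torch.Tensor) -> torch.Tensor:
    """Permute `noise` within the batch so each image x[i] gets a nearby noise
    sample, minimizing the total pairwise L2 distance of the assignment."""
    cost = torch.cdist(x.flatten(1).float(), noise.flatten(1).float())  # (B, B)
    _, cols = linear_sum_assignment(cost.cpu().numpy())
    return noise[torch.as_tensor(cols, device=noise.device)]

# Usage inside a standard diffusion training step (placeholders, not real APIs):
# noise = assign_noise_immiscibly(x, torch.randn_like(x))
# x_t = add_noise(x, noise, t); loss = mse(model(x_t, t), noise)
```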
Foundation Model for Multiplex Spatial Proteomic Images
TeEFusion: Blending Text Embeddings to Distill Classifier-Free Guidance (ICCV 2025)
Industry-level video foundation model for unified Text-to-Video (T2V) and Image-to-Video (I2V) generation.
PyTorch re-implementation of MeanFlow
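For reference, the MeanFlow objective regresses an average-velocity network u_theta(z, r, t) against v - (t - r) du/dt, where the total derivative is obtained with a forward-mode JVP. A simplified sketch of that loss (the model signature, the (r, t) sampling, and plain MSE in place of the paper's adaptive weighting are assumptions, not this repo's exact code):

```python
import torch
import torch.nn.functional as F
from torch.func import jvp

def meanflow_loss(model, x):
    # model(z, r, t) -> predicted average velocity over [r, t], same shape as z.
    b, extra = x.shape[0], [1] * (x.dim() - 1)
    e = torch.randn_like(x)                        # noise endpoint
    t = torch.rand(b, device=x.device)             # current time
    r = torch.rand(b, device=x.device) * t         # earlier time, r <= t
    z = (1.0 - t.view(-1, *extra)) * x + t.view(-1, *extra) * e
    v = e - x                                      # instantaneous velocity of the linear path

    # Total derivative du/dt along the path via a JVP with tangents (v, 0, 1).
    u, dudt = jvp(lambda z_, r_, t_: model(z_, r_, t_),
                  (z, r, t),
                  (v, torch.zeros_like(r), torch.ones_like(t)))

    u_tgt = v - (t - r).view(-1, *extra) * dudt    # MeanFlow identity
    return F.mse_loss(u, u_tgt.detach())           # stop-gradient on the target
```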