Accelerate attention with Mixture-of-Depths Attention (MoDA) for efficient transformer scaling across model depth
Updated Apr 2, 2026 · Python
Provide organized course materials, code examples, and exercises for the IIC2133 Data Structures and Algorithms class, term 2026-1.
🎥 Generate high-quality video editing data with Ditto, a scalable pipeline that trains advanced instruction-based editing models.
[ICMR 2025] CFSynthesis: Controllable and Free-view 3D Human Video Synthesis
[WACV 2022] PyTorch implementation of "Creating and Reenacting Controllable 3D Humans with Differentiable Rendering", neural rendering of humans in unseen poses
Official Code for "CanonicalFusion: Generating Drivable 3D Human Avatars from Multiple Images (ECCV 2024)"
MoDA: Multi-modal Diffusion Architecture for Talking Head Generation
[ACM MM 2025] Ditto: Motion-Space Diffusion for Controllable Realtime Talking Head Synthesis
[AAAI 2026] EchoMimicV3: 1.3B Parameters are All You Need for Unified Multi-Modal and Multi-Task Human Animation
Official implementation of "MIMO: Controllable Character Video Synthesis with Spatial Decomposed Modeling"
[AAAI 2025] EchoMimic: Lifelike Audio-Driven Portrait Animations through Editable Landmark Conditioning
[ECCV 2024] Champ: Controllable and Consistent Human Image Animation with 3D Parametric Guidance
[CVPR 2025] EchoMimicV2: Towards Striking, Simplified, and Semi-Body Human Animation