This repository contains the source code for the paper First Order Motion Model for Image Animation
[CVPR 2023] SadTalker: Learning Realistic 3D Motion Coefficients for Stylized Audio-Driven Single Image Talking Face Animation
Bring portraits to life!
Hallo: Hierarchical Audio-Driven Visual Synthesis for Portrait Image Animation
[CVPR 2022] Thin-Plate Spline Motion Model for Image Animation.
Code for the Motion Representations for Articulated Animation paper
[ECCV 2024, Oral] DynamiCrafter: Animating Open-domain Images with Video Diffusion Priors
Wav2Lip UHQ extension for Automatic1111
Wav2Lip version 288 and a pipeline for training it
Official code for the CVPR 2022 paper: Depth-Aware Generative Adversarial Network for Talking Head Video Generation
Diffusion-based Portrait and Animal Animation
PyTorch implementation of "Improved Techniques for Training Single-Image GANs" (WACV-21)
Official implementation of "MIMO: Controllable Character Video Synthesis with Spatial Decomposed Modeling"
The PyTorch implementation of our CVPR 2023 paper "Conditional Image-to-Video Generation with Latent Flow Diffusion Models"
[CVPR 2023] OTAvatar: One-shot Talking Face Avatar with Controllable Tri-plane Rendering.
[AAAI 2025] Official implementation of "Follow-Your-Click: Open-domain Regional Image Animation via Short Prompts"
Code for ID-Specific Video Customized Diffusion
[CVPR 2025 Highlight] X-Dyna: Expressive Dynamic Human Image Animation
Official implementation of Dynamic Frame Avatar with Non-autoregressive Diffusion Framework for Talking Head Video Generation