
Generative Video Matting

SIGGRAPH 2025

🔥 News

  • August 10, 2025: Released the inference code and model checkpoints.
  • June 11, 2025: Repo created. The code and dataset for this project are currently being prepared for release and will be available here soon. Please stay tuned!

🚀 Getting Started

Environment Requirement 🌍

First, clone the repo:

git clone https://github.com/aim-uofa/GVM.git
cd GVM

Then, we recommend creating a conda virtual environment and installing the required libraries. For example:

conda create -n gvm python=3.10 -y
conda activate gvm
pip install -r requirements.txt
python setup.py develop

Download Model Weights ⬇️

Download the model weights with:

huggingface-cli download geyongtao/gvm --local-dir data/weights

The checkpoint directory structure should look like:

|-- GVM    
    |-- data
        |-- weights
            |-- vae
                |-- config.json
                |-- diffusion_pytorch_model.safetensors
            |-- unet
                |-- config.json
                |-- diffusion_pytorch_model.safetensors
            |-- scheduler
                |-- scheduler_config.json  
        |-- datasets
        |-- demo_videos
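To sanity-check the download, a small Python sketch can report any expected checkpoint files that are missing. The file names come from the tree above; the helper function name is ours, not part of the repo:

```python
from pathlib import Path

# Checkpoint files listed in the directory tree above.
EXPECTED_FILES = [
    "vae/config.json",
    "vae/diffusion_pytorch_model.safetensors",
    "unet/config.json",
    "unet/diffusion_pytorch_model.safetensors",
    "scheduler/scheduler_config.json",
]

def missing_weight_files(weights_dir="data/weights"):
    """Return the expected checkpoint files not found under weights_dir."""
    root = Path(weights_dir)
    return [rel for rel in EXPECTED_FILES if not (root / rel).is_file()]

if __name__ == "__main__":
    missing = missing_weight_files()
    if missing:
        print("Missing weight files:", ", ".join(missing))
    else:
        print("All expected weight files are present.")
```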

🏃🏼 Run

Inference 📜

You can run generative video matting with:

python demo.py \
--model_base 'data/weights/' \
--unet_base data/weights/unet \
--lora_base data/weights/unet \
--mode 'matte' \
--num_frames_per_batch 8 \
--num_interp_frames 1 \
--num_overlap_frames 1 \
--denoise_steps 1 \
--decode_chunk_size 8 \
--max_resolution 960 \
--pretrain_type 'svd' \
--data_dir 'data/demo_videos/xxx.mp4' \
--output_dir 'output_path'
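The --num_frames_per_batch and --num_overlap_frames flags suggest the video is processed in overlapping windows of frames. A hypothetical sketch of such tiling (the function name and the exact chunking logic are our illustration, not necessarily what demo.py does):

```python
def batch_windows(num_frames, frames_per_batch=8, overlap=1):
    """Split frame indices into overlapping (start, end) windows.

    Illustrates how --num_frames_per_batch / --num_overlap_frames style
    tiling can cover a whole video with shared frames between batches.
    """
    if num_frames <= frames_per_batch:
        return [(0, num_frames)]
    stride = frames_per_batch - overlap
    windows = []
    start = 0
    while start + frames_per_batch < num_frames:
        windows.append((start, start + frames_per_batch))
        start += stride
    # Clamp the final window so it ends exactly at the last frame.
    windows.append((num_frames - frames_per_batch, num_frames))
    return windows
```

The shared frames between consecutive windows let the model blend predictions at batch boundaries instead of producing visible seams.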

Evaluation 📏

TODO

🎫 License

For academic use, this project is licensed under the BSD 2-Clause License. For commercial inquiries, please contact Chunhua Shen.

📢 Disclaimer

This repository provides a one-step model for faster inference. Its performance may differ slightly from the results reported in the original SIGGRAPH paper.

🤝 Cite Us

If you find this work helpful for your research, please cite:

@inproceedings{ge2025gvm,
  author    = {Ge, Yongtao and Xie, Kangyang and Xu, Guangkai and Ke, Li and Liu, Mingyu and Huang, Longtao and Xue, Hui and Chen, Hao and Shen, Chunhua},
  title     = {Generative Video Matting},
  year      = {2025},
  publisher = {Association for Computing Machinery},
  url       = {https://doi.org/10.1145/3721238.3730642},
  doi       = {10.1145/3721238.3730642},
  booktitle = {Proceedings of the Special Interest Group on Computer Graphics and Interactive Techniques Conference Conference Papers},
  series    = {SIGGRAPH Conference Papers '25}
}
