TL;DR: Inference with One model in One stage; Training in One day on One GPU
[Demo video: human3r.mp4]
- Clone Human3R.
```bash
git clone https://github.com/fanegg/Human3R.git
cd Human3R
```
- Create the environment.
```bash
conda create -n human3r python=3.11 cmake
conda activate human3r
conda install pytorch torchvision pytorch-cuda=12.4 -c pytorch -c nvidia  # use the CUDA version that matches your system
pip install -r requirements.txt

# Work around a PyTorch DataLoader issue, see https://github.com/pytorch/pytorch/issues/99625
conda install 'llvm-openmp<16'

# For training logging
conda install -y gcc_linux-64 gxx_linux-64
pip install git+https://github.com/nerfstudio-project/gsplat.git

# For evaluation
pip install evo
pip install open3d
```
- Compile the CUDA kernels for RoPE (as in CroCo v2).
```bash
cd src/croco/models/curope/
python setup.py build_ext --inplace
cd ../../../../
```
Run the following commands to download all models and checkpoints into the `src/` directory. The first command will prompt you to register and log in to access each version of SMPL.
```bash
# SMPL-X family models
bash scripts/fetch_smplx.sh
# Human3R checkpoints
huggingface-cli download faneggg/human3r human3r_896L.pth --local-dir ./src
```
To run the inference demo, use the following command:
```bash
# The input can be a folder of images or a video.
# This command runs inference with Human3R and visualizes the output with viser on port 8080.
CUDA_VISIBLE_DEVICES=0 python demo.py --model_path MODEL_PATH --size 512 \
    --seq_path SEQ_PATH --output_dir OUT_DIR --subsample 1 --use_ttt3r \
    --vis_threshold 2 --downsample_factor 1 --reset_interval 100
```
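When the input is a folder of frames, `--subsample` thins the sequence before inference. For reference, keeping every N-th frame by hand can be sketched like this (an illustrative snippet, assuming `--subsample N` means "keep every N-th frame" and that frame filenames sort in temporal order; `frames/` and `frames_sub/` are placeholder paths, not part of the repo):

```shell
# Copy every 5th frame of frames/ into frames_sub/,
# which could then be passed to the demo via --seq_path frames_sub
mkdir -p frames_sub
i=0
for f in frames/*.jpg; do
  [ $((i % 5)) -eq 0 ] && cp "$f" frames_sub/
  i=$((i + 1))
done
```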
```bash
# Example (to save the results, append `--save --output_dir tmp`):
CUDA_VISIBLE_DEVICES=0 python demo.py --model_path src/human3r_896L.pth \
    --size 512 --seq_path examples/GoodMornin1.mp4 \
    --subsample 1 --use_ttt3r --vis_threshold 2 \
    --downsample_factor 1 --reset_interval 100
```
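Before launching the demo, it can help to confirm the checkpoint landed where the example command expects it (a minimal check, assuming the default `--local-dir ./src` from the download step):

```shell
# Verify the Human3R checkpoint exists at the path used by the demo command
CKPT=src/human3r_896L.pth
if [ -f "$CKPT" ]; then
  echo "found $CKPT"
else
  echo "missing $CKPT -- rerun the huggingface-cli download step" >&2
fi
```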
Please refer to eval.md for evaluation details.
Please refer to inference.md for using different backbones.
Please refer to train.md for training details.
Our code is based on the following awesome repositories:
We thank the authors for releasing their code!
If you find our work useful, please cite:
```bibtex
@article{chen2025human3r,
  title={Human3R: Everyone Everywhere All at Once},
  author={Chen, Yue and Chen, Xingyu and Xue, Yuxuan and Chen, Anpei and Xiu, Yuliang and Pons-Moll, Gerard},
  journal={arXiv preprint arXiv:2510.06219},
  year={2025}
}
```