BUFFER-X (ICCV 2025, 🌟Highlight🌟)

Minkyun Seo*, Hyungtae Lim*, Kanghee Lee, Luca Carlone, Jaesik Park




Towards zero-shot and beyond! 🚀
Official repository of BUFFER-X, a zero-shot point cloud registration method that generalizes
across diverse scenes without retraining or tuning.


🧭 Structure Overview

[Figure 1: BUFFER-X structure overview]

💻 Installation of BUFFER-X

Set up environment

First, clone this repository:

git clone https://github.com/MIT-SPARK/BUFFER-X && cd BUFFER-X

Set up your own virtual environment (e.g., conda create -n bufferx python=3.x, or use your NVIDIA Docker environment) and then install the required libraries. We provide install scripts for the following setups; a worked example follows the list.

[Python 3.8, PyTorch 1.9.1, CUDA 11.1 on Ubuntu 22.04]

./scripts/install_py3_8_cuda11_1.sh

[Python 3.10, PyTorch 2.7.1, CUDA 11.8, cuDNN 9.1.0 on Ubuntu 24.04]

./scripts/install_py3_10_cuda11_8.sh

[Python 3.11, PyTorch 2.6.0, CUDA 12.4, cuDNN 9.1.0 on Ubuntu 24.04]

./scripts/install_py3_11_cuda12_4.sh
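
For example, a minimal conda-based setup using the Python 3.10 / CUDA 11.8 script (assuming that combination matches your GPU driver) could look like:

conda create -n bufferx python=3.10 -y
conda activate bufferx
./scripts/install_py3_10_cuda11_8.sh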

🚀 Quick Start

Training and Test

Test on Our Generalization Benchmark

You can easily run our generalization benchmark with BUFFER-X. First, download the pretrained models using the following script:

./scripts/download_pretrained_models.sh

Detailed explanation about file directory

The structure should be as follows:

  • BUFFER-X
    • snapshot # <- this directory is generated by the command above
      • threedmatch
        • Desc
        • Pose
      • kitti
        • Desc
        • Pose
    • config
    • dataset
    • ...

Next, to evaluate BUFFER-X in diverse scenes, download the preprocessed data by running the following command; it requires around 130 GB of storage. Including the remaining datasets (i.e., Scannetpp_iphone, Scannetpp_faro) requires approximately 150 GB more. Due to data copyrights, we cannot provide preprocessed data for ScanNet++, so if you want to reproduce the full results, please refer to the instructions here.

./scripts/download_all_data.sh

Then, run the following command:

python test.py --dataset <LIST OF DATASET NAMES> --experiment_id <EXPERIMENT ID> --verbose

The experiment ID refers to the saved model's filename. The provided snapshots include threedmatch and kitti, each trained on the corresponding dataset.

e.g.,

python test.py --dataset 3DMatch TIERS Oxford MIT --experiment_id threedmatch --verbose

You can also run the evaluation script for all datasets at once by using the provided script:

./scripts/eval_all.sh <EXPERIMENT ID>
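
For instance, to run the full benchmark with the provided threedmatch snapshot:

./scripts/eval_all.sh threedmatch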

Detailed explanation about configuration
  • --dataset: The name of the dataset to test on. Must be one of:

    • 3DMatch
    • 3DLoMatch
    • Scannetpp_iphone
    • Scannetpp_faro
    • TIERS
    • KITTI
    • WOD
    • MIT
    • KAIST
    • ETH
    • Oxford
  • --experiment_id: The ID of the experiment to use for testing.
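
For example, to evaluate the provided kitti snapshot on a few of the outdoor datasets listed above (the dataset combination below is only an illustration):

python test.py --dataset KITTI WOD ETH --experiment_id kitti --verbose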

Due to the large number and variety of datasets used in our experiments, we provide detailed download instructions and folder structures in a separate document:

DATASETS.md


Training

BUFFER-X supports training on either the 3DMatch or KITTI dataset. As an example, run the following command to train the model:

python train.py --dataset 3DMatch
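
Likewise, to train on the KITTI dataset:

python train.py --dataset KITTI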

๐Ÿ“ Citation

If you find our work useful in your research, please consider citing:

@article{Seo_BUFFERX_arXiv_2025,
  Title={BUFFER-X: Towards Zero-Shot Point Cloud Registration in Diverse Scenes},
  Author={Minkyun Seo and Hyungtae Lim and Kanghee Lee and Luca Carlone and Jaesik Park},
  Journal={arXiv preprint arXiv:2503.07940},
  Year={2025}
}

๐Ÿ™ Acknowledgements

This work was supported by IITP grant (RS-2021-II211343: AI Graduate School Program at Seoul National University) (5%), and by NRF grants funded by the Korea government (MSIT) (No. 2023R1A1C200781211 (65%) and No. RS-2024-00461409 (30%), respectively).

In addition, we appreciate the open-source contributions of previous authors, and especially thank Sheng Ao, the first author of BUFFER, for allowing us to use the term 'BUFFER' as part of the title of our study.


Updates

  • 25/07/2025: This work was selected as a Highlight paper at ICCV 2025.
  • 25/06/2025: This work was accepted to ICCV 2025.