[CoRL 2025] This is the official repository for ImLPR.
[Paper] | [Video]
Minwoo Jung, Lanke Frank Tarimo Fu, Maurice Fallon, Ayoung Kim
Collaboration with the Robust Perception and Mobile Robotics Lab (RPM) and the Dynamic Robot Systems Group (DRS)
- [2025/09/25] Code released!
- [2025/08/11] First release of ImLPR repository!
Our work makes the following contributions:
- ImLPR is the first LPR pipeline using a VFM while retaining the majority of pre-trained knowledge: Our key innovation lies in a tailored three-channel RIV representation and lightweight convolutional adapters, which seamlessly bridge the 3D LiDAR and 2D vision domain gap. Freezing most DINOv2 layers preserves pre-trained knowledge during training, ensuring strong generalization and outperforming task-specific LPR networks.
- We introduce the Patch-InfoNCE loss: A patch-level contrastive loss to enhance the local discriminability and robustness of learned LiDAR features. We demonstrate that our patch-level contrastive learning strategy achieves a performance boost in LPR.
- ImLPR demonstrates versatility on multiple public datasets, outperforming SOTA methods. We also validate the importance of each component of the ImLPR pipeline, and the code is released for integration by the robotics community.
- Table of Contents
- Environment (Docker)
- Data Preparation
- Training
- Evaluation
- Pretrained Weights
- Results & Notes
- Citations
- Contact
- Acknowledgments
Tested with Ubuntu 22.04, CUDA 11.8, Python 3.11, PyTorch 2.6.0, and GPUs such as RTX 3090 and A6000.
We provide a simple Dockerfile:
```bash
cd docker
sudo docker build -t imlpr:latest .
sudo docker run --gpus all -it --rm --env="DISPLAY" -e DISPLAY=:1.0 --net=host --ipc=host --volume="/tmp/.X11-unix:/tmp/.X11-unix:rw" -v /:/mydata imlpr:latest
```

Adjust the mount points to your needs.
Place your .npy RIV files and poses.txt in the following layout:
```
ImLPR/
└── data
    ├── training
    │   ├── SequenceA
    │   │   ├── 000000.npy
    │   │   └── poses.txt
    │   ├── SequenceB
    │   └── SequenceC
    ├── validation
    │   ├── SequenceA
    │   │   ├── 000000.npy
    │   │   └── poses.txt
    │   ├── SequenceB
    │   └── SequenceC
    ├── training.pickle
    ├── db_eval.pickle
    └── query_eval.pickle
```
poses.txt format (one line per frame):
```
timestamp x y z qx qy qz qw
```
Note: poses.txt is generated in Step 2) Generate RIV & Trajectories (see the preprocessing section), where timestamps and global poses are extracted from the dataset.
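For a quick check, each poses.txt can be parsed directly with NumPy. The snippet below is a minimal sketch (it assumes whitespace-separated values in the format above, and the example path is illustrative), not the loader used in the training code:

```python
import numpy as np

# Each row: timestamp x y z qx qy qz qw (see the format above).
poses = np.loadtxt("data/training/SequenceA/poses.txt")  # illustrative path

timestamps = poses[:, 0]       # per-frame timestamps
translations = poses[:, 1:4]   # global x, y, z
quaternions = poses[:, 4:8]    # orientation as qx, qy, qz, qw
print(f"{len(poses)} frames, first position: {translations[0]}")
```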
Each .npy is an H × W × 3 RIV image. The three channels are reflectivity (intensity), range, and normal ratio, each scaled to [0, 255] as described in the paper.
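To sanity-check a converted frame, the .npy can be loaded directly; a minimal sketch, assuming the channel order follows the listing above (reflectivity, range, normal ratio):

```python
import numpy as np

riv = np.load("data/training/SequenceA/000000.npy")  # H x W x 3 RIV image
assert riv.ndim == 3 and riv.shape[2] == 3

reflectivity, rng, normal_ratio = riv[..., 0], riv[..., 1], riv[..., 2]
print("RIV shape:", riv.shape, "value range:", riv.min(), "-", riv.max())  # values should lie in [0, 255]
```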
You can download the training/test sets and the corresponding pickle files from this Google Drive folder and then jump to the Training section.
We provide:
- Training sets (HeLiPR): DCC, KAIST, Riverside
- Evaluation sets (HeLiPR): Roundabout01-03, Town01-03
If your raw LiDAR is not yet converted to RIV .npy, use your own converter or the helper in preprocess/ (adapt paths inside as needed):
```bash
python3 preprocess/generate_riv_images.py
```

We provide two utilities:
- Training tuples → `data/training.pickle`

  ```bash
  python3 preprocess/generate_training_tuples_baseline_img.py
  ```

- Evaluation sets → `data/db_eval.pickle` & `data/query_eval.pickle`

  ```bash
  python3 preprocess/generate_test_sets_img.py
  ```

These scripts scan `data/training/*/poses.txt` and `data/validation/*/poses.txt` to find the true positives for each sequence and create pickles compatible with the training and evaluation code.
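If you want to verify the generated pickles before training, they can be opened with the standard library; this is only a loading check, since the exact internal structure is defined by the generation scripts:

```python
import pickle

with open("data/training.pickle", "rb") as f:
    training_tuples = pickle.load(f)

# The structure is whatever the generation script produced; this only confirms it loads.
print(type(training_tuples))
```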
If you already have your own pickles, just place them at:
```
data/training_helipr.pickle
data/db_{dataset}_{sequence}.pickle
data/query_{dataset}_{sequence}.pickle
```
Use the provided configs:
- `config/config_helipr.txt` (training/eval params)
- `config/config_model_imlpr.txt` (backbone/aggregator params)
Run:
```bash
cd training
python3 train.py \
    --config config/config_helipr.txt \
    --model_config config/config_model_imlpr.txt
```

Notes:
- Multi-GPU is supported via `torch.nn.DataParallel`.
- By default, the trainer will try to load `weights/ImLPR_default.pth` if present (remove or rename it to start training from scratch); a simplified sketch of this loading pattern is shown after these notes.
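For reference, the multi-GPU wrapping and optional checkpoint resume follow the standard PyTorch pattern. The sketch below is illustrative only (`build_model` is a hypothetical constructor and the checkpoint is assumed to be a plain state dict); the actual logic lives in training/trainer.py:

```python
import os
import torch

model = build_model()  # hypothetical constructor for the ImLPR network
if torch.cuda.device_count() > 1:
    model = torch.nn.DataParallel(model)  # replicate the model across all visible GPUs
model = model.cuda()

ckpt_path = "weights/ImLPR_default.pth"
if os.path.exists(ckpt_path):
    # Remove or rename this file to start training from scratch.
    state_dict = torch.load(ckpt_path, map_location="cuda")
    # Note: under DataParallel, keys may need a "module." prefix adjustment.
    model.load_state_dict(state_dict)
```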
Once trained (or using the default weights), evaluate:
```bash
cd eval
python3 evaluate.py \
    --config config/config_helipr.txt \
    --model_config config/config_model_imlpr.txt
```

This will compute average Recall@1 and One-Percent Recall across the configured evaluation sets.
Currently, MulRan and HeLiPR are supported out of the box.
Please adjust the dataset configuration in eval/evaluate.py.
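For intuition on the reported metrics, Recall@1 and One-Percent Recall can be computed from global descriptors as sketched below. This is a generic reference implementation, not the repo's evaluation code; it assumes L2-normalized descriptors and a per-query set of ground-truth positive database indices:

```python
import numpy as np

def recall_metrics(db_desc, query_desc, positives):
    """db_desc: (N, D), query_desc: (M, D), positives[i]: set of true-match db indices for query i."""
    sims = query_desc @ db_desc.T                   # cosine similarity for L2-normalized descriptors
    ranked = np.argsort(-sims, axis=1)              # database indices sorted by descending similarity
    top1p = max(1, round(0.01 * db_desc.shape[0]))  # candidate count for One-Percent Recall
    r_at_1 = r_at_1p = valid = 0
    for i, order in enumerate(ranked):
        if not positives[i]:
            continue                                # skip queries with no true positive in the database
        valid += 1
        r_at_1 += order[0] in positives[i]
        r_at_1p += any(j in positives[i] for j in order[:top1p])
    return r_at_1 / valid, r_at_1p / valid
```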
Place pretrained weights (if provided) under:
```
weights/ImLPR_default.pth
```
Training and Evaluation will automatically load them (unless you change the loading logic in training/trainer.py). This checkpoint can be downloaded from the Google Drive folder.
- We freeze most DINOv2 blocks and use lightweight multi-conv adapters to bridge LiDAR to vision.
- Training employs Truncated Smooth-AP for global retrieval and Patch-InfoNCE for local discriminability (a generic sketch of a patch-level InfoNCE objective is shown after these notes).
- Minor numerical differences can occur across GPUs/CUDA/flash-attn variants; typically within the first decimal place.
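The patch-level InfoNCE objective referenced above can be written as follows. This is an illustrative formulation only; the actual Patch-InfoNCE loss, positive mining, and hyperparameters follow the paper and the training code:

```python
import torch
import torch.nn.functional as F

def patch_infonce(anchor_patches, positive_patches, temperature=0.07):
    """anchor_patches, positive_patches: (P, D) descriptors of corresponding patches from
    two RIV views of the same place; row i of each tensor forms a matched positive pair."""
    a = F.normalize(anchor_patches, dim=1)
    p = F.normalize(positive_patches, dim=1)
    logits = a @ p.t() / temperature           # (P, P); the diagonal holds positive pairs
    targets = torch.arange(a.size(0), device=a.device)
    return F.cross_entropy(logits, targets)    # other patches in the batch act as negatives
```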
We will add more pretrained weights and evaluation files soon.
- Upload RIV files (.npy) for MulRan evaluation sequences
- Upload Python file to aggregate sparse LiDAR scans (used in zero-shot experiments)
If you use ImLPR, please cite:
```bibtex
@INPROCEEDINGS{mwjung-2025-corl,
  AUTHOR = {Minwoo Jung and Lanke Frank Tarimo Fu and Maurice Fallon and Ayoung Kim},
  TITLE = {ImLPR: Image-based LiDAR Place Recognition using Vision Foundation Models},
  BOOKTITLE = {Conference on Robot Learning (CoRL)},
  YEAR = {2025},
  MONTH = {Sep.},
  ADDRESS = {Seoul},
}
```

If you also use the HeLiPR dataset, please cite:
```bibtex
@article{jung2024helipr,
  title={HeLiPR: Heterogeneous LiDAR dataset for inter-LiDAR place recognition under spatiotemporal variations},
  author={Jung, Minwoo and Yang, Wooseong and Lee, Dongjae and Gil, Hyeonjae and Kim, Giseop and Kim, Ayoung},
  journal={The International Journal of Robotics Research},
  volume={43},
  number={12},
  pages={1867--1883},
  year={2024},
  publisher={SAGE}
}
```

Questions or issues?
- Open a GitHub issue, or
- Email: moonshot@snu.ac.kr
Our codebase builds upon great open-source projects:
Thanks to the communities for sharing code and insights.