# EffoNAV: An Effective Foundation-Model-Based Visual Navigation Approach for Challenging Environments
EffoNAV enables robust navigation under challenging conditions such as large lighting variations and object variations.
## Setup

```bash
conda env create -f train/train_environment.yml
conda activate EffoNAV
pip install -e train/
```
## Data Preparation

We train on four publicly available datasets:
You can use the sample scripts to process these datasets, either directly from a rosbag or from a custom format such as HDF5:

- Run `process_bags.py` with the relevant args, or `process_recon.py` for processing RECON HDF5s.
- Run `data_split.py` on your dataset folder with the relevant args.
## Training

Run this inside the `./train` directory:

```bash
python train.py -c config/EffoNAV.yaml
```
You can use our pretrained checkpoints from here.
For deployment procedures and details, please refer to Deployment.
## Acknowledgments

Our work builds on the training and deployment methods of visualnav-transformer. We are grateful for their contributions to the field of robot navigation.
## Citation

```bibtex
@article{shen2025effonav,
  title={EffoNAV: An Effective Foundation-Model-Based Visual Navigation Approach in Challenging Environment},
  author={Shen, Wangtian and Gu, Pengfei and Qin, Haijian and Meng, Ziyang},
  journal={IEEE Robotics and Automation Letters},
  year={2025},
  publisher={IEEE}
}
```