EffoNAV

EffoNAV: an Effective Foundation-Model-Based Visual Navigation Approach in Challenging Environment.

EffoNAV enables navigation under challenging conditions such as large lighting variations and object variations.

Setup

conda env create -f train/train_environment.yml
conda activate EffoNAV
pip install -e train/

Data Preparation

In this paper, we train on four publicly available datasets.

You can use the sample scripts to process these datasets, either directly from a rosbag or from a custom format such as HDF5:

  1. Run process_bags.py with the relevant arguments, or process_recon.py for processing RECON HDF5 files.
  2. Run data_split.py on your dataset folder with the relevant arguments.
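
The split step can be sketched roughly as follows. This is a minimal illustration only, not the actual data_split.py; the function name, folder naming, and 80/20 ratio are assumptions:

```python
import random

def split_trajectories(names, train_ratio=0.8, seed=0):
    """Illustrative sketch (not data_split.py): shuffle trajectory
    names deterministically and split them into train/test lists."""
    rng = random.Random(seed)
    shuffled = list(names)
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * train_ratio)
    return shuffled[:cut], shuffled[cut:]

# Hypothetical trajectory folder names for demonstration.
train, test = split_trajectories([f"traj_{i:03d}" for i in range(100)])
print(len(train), len(test))  # → 80 20
```

The fixed seed makes the split reproducible across runs, which matters when comparing training configurations on the same data.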

Training

Run this inside the ./train directory:

python train.py -c config/EffoNAV.yaml
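
The training run is driven entirely by the YAML file passed with -c. The keys below are illustrative assumptions only; consult config/EffoNAV.yaml for the actual option names and values:

```yaml
# Hypothetical sketch of a training config -- the real keys live in config/EffoNAV.yaml.
project_name: EffoNAV
batch_size: 64       # assumed name; adjust to GPU memory
epochs: 100          # assumed name; total training epochs
lr: 1.0e-4           # assumed name; optimizer learning rate
num_workers: 8       # assumed name; dataloader worker processes
```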

Checkpoint

You can use our pretrained checkpoints from here.

Deployment

For deployment procedures and details, please refer to Deployment.

Acknowledgement

Our work references the training and deployment methods of visualnav-transformer. We are grateful for their contributions to the field of robot navigation.

Citing

@article{shen2025effonav,
  title={EffoNAV: An Effective Foundation-Model-Based Visual Navigation Approach in Challenging Environment},
  author={Shen, Wangtian and Gu, Pengfei and Qin, Haijian and Meng, Ziyang},
  journal={IEEE Robotics and Automation Letters},
  year={2025},
  publisher={IEEE}
}
