This repository implements the approach from the FRUCT conference paper "Transformer-Based Deep Monocular Visual Odometry for Edge Devices".
Create a `data` folder.
Download the KITTI odometry dataset into `data/kitti_dataset`.
Download the pretrained FlowNet weights (`flownets_bn_EPE2.459.pth.tar`) from the repository into `data/checkpoints`.
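The two download steps above assume a specific on-disk layout. A minimal sketch of preparing it (the folder names come from this README; the `sequences/` and `poses/` subfolders shown in the comments are the standard KITTI odometry layout and are an assumption here):

```shell
# create the folders this README expects (the KITTI data and the FlowNet
# checkpoint themselves must still be downloaded manually)
mkdir -p data/kitti_dataset
mkdir -p data/checkpoints

# after downloading, the tree would look roughly like:
#   data/kitti_dataset/sequences/00/image_2/...     (assumed KITTI layout)
#   data/kitti_dataset/poses/00.txt                 (assumed KITTI layout)
#   data/checkpoints/flownets_bn_EPE2.459.pth.tar
```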
Install dependencies:

```bash
# clone the project
git clone https://github.com/toshiks/TBDVO.git
cd TBDVO

# create and activate the conda environment
conda env create -f conda_env_gpu.yaml -n myenv
conda activate myenv
```

Train the model with the default configuration (the original DeepVO):
```bash
# train with defaults
python run.py

# train on CPU
python run.py trainer.gpus=0

# train on GPU
python run.py trainer.gpus=1
```

Train the model with a chosen experiment configuration from `configs/experiment/`:
```bash
python run.py experiment=experiment_name
```

You can override any parameter from the command line like this:

```bash
python run.py trainer.max_epochs=20 datamodule.batch_size=64
```

Run the benchmarks:
```bash
export PYTHONPATH=$PWD
python util_scripts/benchmarks.py
```
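The dotted overrides shown earlier (`trainer.max_epochs=20`, `datamodule.batch_size=64`) follow Hydra-style syntax. As an illustration only, not the project's actual code, here is a minimal sketch of how such dotted keys merge into a nested config (the function name and config keys are hypothetical):

```python
# Illustration: how a Hydra-style override like `trainer.max_epochs=20`
# maps onto a nested configuration dictionary. This is NOT the project's
# implementation; the real run uses Hydra's own override grammar.
def apply_override(config: dict, override: str) -> None:
    """Set a nested key from an 'a.b.c=value' style override string."""
    dotted_key, raw_value = override.split("=", 1)
    *parents, leaf = dotted_key.split(".")
    node = config
    for key in parents:
        node = node.setdefault(key, {})  # walk/create intermediate dicts
    # naive value parsing: try int, otherwise keep the raw string
    try:
        node[leaf] = int(raw_value)
    except ValueError:
        node[leaf] = raw_value

# hypothetical defaults, mirroring the command-line examples above
config = {"trainer": {"max_epochs": 10, "gpus": 1},
          "datamodule": {"batch_size": 32}}
for override in ("trainer.max_epochs=20", "datamodule.batch_size=64"):
    apply_override(config, override)
print(config["trainer"]["max_epochs"], config["datamodule"]["batch_size"])  # 20 64
```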