This toolkit provides details on how to use the FrodoBots-2K dataset for visual SLAM and related tasks. It accompanies our dataset paper, https://arxiv.org/abs/2407.05735v1, and we will keep updating it to make the process easier.
- vSLAM on FrodoBots-2K
- Real FrodoBots Deployments
- Advanced exploration
- Citation
Download the dataset from the following link: FrodoBots-2K Dataset
According to our experiments, the calibration parameters are as specified in the file Robot_zero.yaml.
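The settings file is OpenCV FileStorage YAML, so you can inspect the shipped parameters programmatically. A minimal sketch, assuming Robot_zero.yaml uses the stock ORB-SLAM3 monocular key names (e.g. Camera.fx):

import cv2

# Read the ORB-SLAM3 settings file (OpenCV FileStorage YAML).
# Key names assume the stock ORB-SLAM3 monocular layout.
fs = cv2.FileStorage("Robot_zero.yaml", cv2.FILE_STORAGE_READ)
for key in ("Camera.fx", "Camera.fy", "Camera.cx", "Camera.cy",
            "Camera.k1", "Camera.k2", "Camera.p1", "Camera.p2"):
    node = fs.getNode(key)
    if not node.empty():
        print(f"{key} = {node.real()}")
fs.release()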
The video in FrodoBots-2K is stored as short, discrete .ts segments, so you first need to merge them into one longer video. The script merge_ts_files.sh will do this for you:
mv merge_ts_files.sh /home/zhangqi/Downloads/output_rides_21/ride_38222_20240501013650
chmod +x merge_ts_files.sh
./merge_ts_files.sh
The sequence is ready to use!
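If you cannot run the shell script (e.g. on Windows), the same merge can be done in Python, since MPEG-TS segments can be concatenated byte-wise. A minimal sketch, assuming the segments live in recordings/ and that lexicographic filename order matches temporal order (adjust the glob to your ride's layout); it writes recordings/rgb.ts, the path used by the commands further below:

import glob
import os

# Concatenate the short MPEG-TS segments into one recording.
# Byte-wise concatenation is valid for MPEG-TS; check that filename
# order matches temporal order for your ride first.
segments = [s for s in sorted(glob.glob("recordings/*.ts"))
            if os.path.basename(s) != "rgb.ts"]  # skip the output on re-runs
with open("recordings/rgb.ts", "wb") as out:
    for seg in segments:
        with open(seg, "rb") as f:
            out.write(f.read())
print(f"merged {len(segments)} segments into recordings/rgb.ts")

If downstream tools complain about timestamp jumps in the merged file, use ffmpeg's concat demuxer instead.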
Add the following lines to your CMakeLists.txt:
add_executable(Robot_zero Examples/Monocular/Robot_zero.cc)
target_link_libraries(Robot_zero ${PROJECT_NAME})
Run ORB-SLAM3 with the following command:
./Examples/Monocular/Robot_zero Vocabulary/ORBvoc.txt Examples/Monocular/Robot_zero.yaml /home/zhangqi/Downloads/output_rides_21/ride_38222_20240501013650
We follow the repository below to estimate the trajectory from the GPS and control data, use it as ground truth, and save it in TUM trajectory format:
https://github.com/catglossop/frodo_dataset/blob/master/convert_frodo_to_gnm_vGPS_and_rpm.py
First, download traj_est_env.yaml to set up the environment:
conda env create --file traj_est_env.yaml
conda activate traj_est
Then download gt_traj_est.py from this repository into the output_rides_21 folder and run:
python gt_traj_est.py --input_path ./ --output_path ./result --num_workers 4 --overwrite
ORB-SLAM3 also saves its trajectory in TUM format, so the estimated and ground-truth trajectories can be compared directly to evaluate performance.
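For example, with both files in TUM format (timestamp tx ty tz qx qy qz qw per line) and comparable timestamps, absolute trajectory error (ATE) can be computed as in the minimal sketch below. The ground-truth file name is hypothetical, and since monocular SLAM has no absolute scale, the sketch aligns the trajectories with a similarity transform first. The ready-made evo package (evo_ape tum <gt> <est> -as) is a convenient alternative.

import numpy as np

def load_tum(path):
    # TUM format: timestamp tx ty tz qx qy qz qw
    data = np.loadtxt(path)
    return data[:, 0], data[:, 1:4]

def associate(t_gt, t_est, max_dt=0.05):
    # match each estimated timestamp to the nearest ground-truth one
    idx = np.clip(np.searchsorted(t_gt, t_est), 1, len(t_gt) - 1)
    idx = idx - ((t_est - t_gt[idx - 1]) < (t_gt[idx] - t_est))
    ok = np.abs(t_gt[idx] - t_est) < max_dt
    return idx[ok], ok

def umeyama(src, dst):
    # similarity transform (s, R, t) aligning src to dst
    mu_s, mu_d = src.mean(0), dst.mean(0)
    U, D, Vt = np.linalg.svd((dst - mu_d).T @ (src - mu_s) / len(src))
    S = np.eye(3)
    if np.linalg.det(U) * np.linalg.det(Vt) < 0:
        S[2, 2] = -1.0
    R = U @ S @ Vt
    s = np.trace(np.diag(D) @ S) / ((src - mu_s) ** 2).sum(1).mean()
    return s, R, mu_d - s * R @ mu_s

t_gt, p_gt = load_tum("result/gt_traj.txt")        # hypothetical output name
t_est, p_est = load_tum("KeyFrameTrajectory.txt")  # ORB-SLAM3 monocular output
gi, ok = associate(t_gt, t_est)
s, R, t = umeyama(p_est[ok], p_gt[gi])
err = np.linalg.norm((s * (R @ p_est[ok].T).T + t) - p_gt[gi], axis=1)
print(f"ATE RMSE: {np.sqrt((err ** 2).mean()):.3f} m over {len(err)} poses")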
Again, merge the video frames into a longer video using merge_ts_files.sh:
mv merge_ts_files.sh /home/zhangqi/Downloads/output_rides_21/ride_38222_20240501013650
chmod +x merge_ts_files.sh
./merge_ts_files.sh
Download the script run_video.py and run the object detection:
python run_video.py video -f /home/zhangqi/Documents/Library/YOLOX/exps/default/yolox_s.py -c /home/zhangqi/Documents/Library/YOLOX/yolox_s.pth --path /home/zhangqi/Downloads/output_rides_21/ride_38222_20240501013650/recordings/rgb.ts --save_result
Find the resulting video in a timestamped folder such as ./YOLOX_outputs/yolox_s/vis_res/2024_07_07_21_30_59.
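If you would rather run your own detector on the same footage, the merged recording can be read frame by frame with OpenCV. A minimal sketch, assuming opencv-python is installed; the process() hook is a hypothetical placeholder for your model:

import cv2

# Read the merged MPEG-TS recording frame by frame
# (OpenCV decodes .ts via its FFmpeg backend).
cap = cv2.VideoCapture("recordings/rgb.ts")
n = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    # process(frame)  # hypothetical hook: run your own detector here
    n += 1
cap.release()
print(f"read {n} frames")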
Merge the video frames using merge_ts_files.sh:
mv merge_ts_files.sh /home/zhangqi/Downloads/output_rides_21/ride_38222_20240501013650
chmod +x merge_ts_files.sh
./merge_ts_files.sh
Download the script video_depth_prediction.py, move it to the Lite-Mono directory, and run the depth estimation:
python video_depth_prediction.py --video_path /home/zhangqi/Downloads/output_rides_21/ride_38222_20240501013650/recordings/rgb.ts --output_path output_video_depth.avi --load_weights_folder /home/zhangqi/Documents/Library/Lite-Mono/pretrained_model --model lite-mono8m
Find the resulting video in the Lite-Mono/ directory.
To run deployments on a real robot, first calibrate your camera; refer to the files in the Calibration folder.
You also need a token (see this URL): purchase a FrodoBots robot and ask their team about the token.
Then run the calibration code. Fix a checkerboard grid in front of the robot, and the robot will automatically perform the calibration and the corresponding movements. In the real world you need more pictures to make the calibration accurate, so you can use auto_calibration1.py to make the robot move randomly while capturing them:
pip install -r requirements.txt
python auto_calibration1.py
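For reference, the underlying computation is standard OpenCV chessboard calibration. A minimal sketch, assuming a 9x6 inner-corner board and images saved to ./calib_images (both are assumptions; match them to your board and to what auto_calibration1.py actually saves):

import glob
import cv2
import numpy as np

PATTERN = (9, 6)   # inner corners per row/column -- match your board
SQUARE = 0.025     # square size in metres -- match your board

# 3D corner coordinates of the board in its own plane (z = 0)
objp = np.zeros((PATTERN[0] * PATTERN[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:PATTERN[0], 0:PATTERN[1]].T.reshape(-1, 2) * SQUARE

obj_points, img_points, size = [], [], None
for path in glob.glob("calib_images/*.jpg"):   # assumed output folder
    gray = cv2.cvtColor(cv2.imread(path), cv2.COLOR_BGR2GRAY)
    found, corners = cv2.findChessboardCorners(gray, PATTERN)
    if not found:
        continue
    corners = cv2.cornerSubPix(
        gray, corners, (11, 11), (-1, -1),
        (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3))
    obj_points.append(objp)
    img_points.append(corners)
    size = gray.shape[::-1]

rms, K, dist, _, _ = cv2.calibrateCamera(obj_points, img_points, size, None, None)
print(f"RMS reprojection error: {rms:.3f}\nK =\n{K}\ndist = {dist.ravel()}")

The printed camera matrix and distortion coefficients are what go into the ORB-SLAM3 settings file.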
Calibration must be done before this step. Then run the ORB-SLAM3 deployment:
python Orbslam3_deployments.py
We also tried an ArUco chessboard (ChArUco) for more accurate calibration. This is hard because opencv-python only provides the standard calibration demos (we used OpenCV 4.10); the OpenCV C++ documentation has a more detailed demo: https://docs.opencv.org/4.x/d5/dae/tutorial_aruco_detection.html. In our tests the results were not better and took longer to compute.
Refer to the keyboard_control.py file; it sends a command every 0.5 s.
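The idea is a fixed-rate loop that keeps sending the most recent key state. A minimal sketch, assuming pynput for key events; send_command() is a hypothetical stub you should wire to your SDK (keyboard_control.py is the authoritative version):

import time
from pynput import keyboard  # assumption: the real script may use another input library

# current command state, updated by key events
state = {"linear": 0.0, "angular": 0.0}
KEYMAP = {"w": ("linear", 1.0), "s": ("linear", -1.0),
          "a": ("angular", 1.0), "d": ("angular", -1.0)}

def send_command(linear, angular):
    # hypothetical stub -- replace with the actual SDK / HTTP call
    print(f"cmd linear={linear:+.1f} angular={angular:+.1f}")

def on_press(key):
    k = getattr(key, "char", None)
    if k in KEYMAP:
        axis, value = KEYMAP[k]
        state[axis] = value

def on_release(key):
    k = getattr(key, "char", None)
    if k in KEYMAP:
        state[KEYMAP[k][0]] = 0.0

listener = keyboard.Listener(on_press=on_press, on_release=on_release)
listener.start()
while True:  # send the latest command every 0.5 s
    send_command(state["linear"], state["angular"])
    time.sleep(0.5)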
Just replace CHROME_EXECUTABLE_PATH="/usr/bin/google-chrome" with CHROME_EXECUTABLE_PATH="/usr/bin/chromium-browser".
To cite this work in English, please use the following reference:
@misc{zhang2024earthroverdatasetrecorded,
  title={An Earth Rover dataset recorded at the ICRA@40 party},
  author={Qi Zhang and Zhihao Lin and Arnoud Visser},
  year={2024},
  eprint={2407.05735},
  archivePrefix={arXiv},
  primaryClass={cs.RO},
  url={https://arxiv.org/abs/2407.05735}
}