[ICLR 2025] Scalable Benchmarking and Robust Learning for Noise-Free Ego-Motion and 3D Reconstruction from Noisy Video

🚀 Customizable Perturbations for RGB-D SLAM Robustness Evaluation

University of Michigan Robotics · CMU Robotics · CMU ECE

📄 Preprint | 🎥 Video Demo


🔧 Benchmarking Code Released!

Check out the full instructions here 🚀🔥

Modern generative models like Sora produce stunning videos, but they often fall short in simulating the physics and dynamics of the real world.
This project highlights the strengths of physics-aware simulation through a customizable perturbation synthesis pipeline that transforms a Clean World into a Noisy World in a structured, controllable way.
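
To make the Clean World → Noisy World idea concrete, here is a minimal sketch of a single sensor-level corruption; the function name, noise model, and severity parameter are illustrative assumptions, not part of this repository's API.

import numpy as np

# Illustrative only: a hypothetical sensor-level perturbation that corrupts a
# clean RGB frame with additive Gaussian noise at a controllable severity.
def gaussian_noise_rgb(frame: np.ndarray, severity: float = 0.05, seed: int = 0) -> np.ndarray:
    """Add zero-mean Gaussian noise to an 8-bit RGB frame of shape (H, W, 3)."""
    rng = np.random.default_rng(seed)
    noisy = frame.astype(np.float32) / 255.0
    noisy += rng.normal(0.0, severity, size=noisy.shape)
    return (np.clip(noisy, 0.0, 1.0) * 255.0).astype(np.uint8)

# Example: corrupt a stand-in "clean" frame at moderate severity.
clean_frame = np.full((480, 640, 3), 128, dtype=np.uint8)
noisy_frame = gaussian_noise_rgb(clean_frame, severity=0.1)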


🧩 Pipeline Overview


Our pipeline synthesizes noisy RGB-D data for evaluating SLAM robustness under real-world perturbations:

  1. Robot System & Trajectory Input → Global trajectory & system parameters.
  2. Local Trajectory Generator → Physics engine simulates each sensor's motion.
  3. Trajectory Perturbation Composer → Injects motion deviations (a minimal example is sketched after this list).
  4. Rendering Engine → Combines 3D scenes and perturbed sensor paths to produce clean sensor streams.
  5. Sensor Perturbation Composer → Adds realistic corruptions to RGB-D streams.
  6. Output → Fully perturbed datasets for robust SLAM benchmarking.
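
As a concrete illustration of stage 3, the sketch below jitters a pose sequence with Gaussian translation and yaw deviations. The (N, 4, 4) camera-to-world pose format and this specific deviation model are assumptions for illustration, not the pipeline's actual implementation.

import numpy as np

# Illustrative only: a hypothetical Trajectory Perturbation Composer step that
# jitters camera-to-world poses with Gaussian translation noise and a small
# random yaw about the camera's z-axis.
def perturb_poses(poses: np.ndarray, sigma_t: float = 0.01,
                  sigma_yaw_deg: float = 0.5, seed: int = 0) -> np.ndarray:
    """poses: (N, 4, 4) homogeneous transforms; returns a perturbed copy."""
    rng = np.random.default_rng(seed)
    out = poses.copy()
    for T in out:
        T[:3, 3] += rng.normal(0.0, sigma_t, size=3)        # translation deviation (m)
        a = np.deg2rad(rng.normal(0.0, sigma_yaw_deg))       # yaw deviation (rad)
        Rz = np.array([[np.cos(a), -np.sin(a), 0.0],
                       [np.sin(a),  np.cos(a), 0.0],
                       [0.0,        0.0,       1.0]])
        T[:3, :3] = T[:3, :3] @ Rz
    return out

# Example: jitter a 100-pose identity trajectory.
clean_traj = np.tile(np.eye(4), (100, 1, 1))
noisy_traj = perturb_poses(clean_traj, sigma_t=0.02)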

📌 Abstract

Robust SLAM is essential for real-world robot deployment. In this work:

  • We introduce a modular pipeline to synthesize perturbations and evaluate SLAM robustness.
  • A rich perturbation taxonomy and toolbox enable transformation from ideal simulations to challenging environments.
  • We benchmark several top-performing RGB-D SLAM models under diverse, composed perturbations (a common trajectory-error metric is sketched after this list).
  • Our analysis reveals model-specific vulnerabilities, often hidden under standard benchmark results.
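
As referenced above, a common way to quantify such vulnerabilities is the absolute trajectory error (ATE RMSE) between estimated and ground-truth positions. The sketch below assumes the two trajectories are already time-associated and expressed in the same frame; the paper's full evaluation protocol (alignment, additional metrics) is not reproduced here.

import numpy as np

# Illustrative only: RMSE of translational error between matched pose positions.
def ate_rmse(est_xyz: np.ndarray, gt_xyz: np.ndarray) -> float:
    """est_xyz, gt_xyz: (N, 3) arrays of matched camera positions (metres)."""
    err = est_xyz - gt_xyz
    return float(np.sqrt(np.mean(np.sum(err ** 2, axis=1))))

# Example: an estimate typically drifts more on perturbed input, which shows up
# as a larger ATE RMSE than on the clean sequence.
rng = np.random.default_rng(0)
gt = np.cumsum(rng.normal(scale=0.01, size=(100, 3)), axis=0)
est_clean = gt + rng.normal(scale=0.005, size=gt.shape)
est_noisy = gt + rng.normal(scale=0.050, size=gt.shape)
print(f"clean: {ate_rmse(est_clean, gt):.3f} m | perturbed: {ate_rmse(est_noisy, gt):.3f} m")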

🎞️ Visualizations

✅ SplaTAM-S: Success Case


❌ SplaTAM-S: Failure Case


✅ ORB-SLAM3: Success Case


❌ ORB-SLAM3: Failure Case



🌱 Research Directions

  • Perturbation Types: Evaluate under mixed or novel perturbations.
  • Realism: Enhance the fidelity of simulated environments and distortions.
  • Modalities: Extend beyond RGB-D to include LiDAR, sonar, etc.
  • Model Development: Design more robust SLAM architectures.
  • Broader Applications: Apply the evaluation to 3D reconstruction or navigation.

📌 See our paper for more details!


📖 Citation

If you find this work helpful, please cite us:

@inproceedings{xu2025scalable,
  title     = {Scalable Benchmarking and Robust Learning for Noise-Free Ego-Motion and 3D Reconstruction from Noisy Video},
  author    = {Xiaohao Xu and Tianyi Zhang and Shibo Zhao and Xiang Li and Sibo Wang and Yongqi Chen and Ye Li and Bhiksha Raj and Matthew Johnson-Roberson and Sebastian Scherer and Xiaonan Huang},
  booktitle = {The Thirteenth International Conference on Learning Representations (ICLR)},
  year      = {2025},
  url       = {https://openreview.net/forum?id=Pz9zFea4MQ}
}

📫 Contact

Got questions? Reach out to: xiaohaox@umich.edu


📚 Public Resources Used

We gratefully acknowledge the following open-source projects:


📄 License

This project is licensed under the Apache License 2.0.