- Karlsruhe, Germany
- https://mbreuss.github.io
- @moritz_reuss
- in/moritzreuss
Stars
LAP: Language-Action Pre-Training Enables Zero-Shot Cross Embodiment Transfer
High-Level Control Library for Franka Robots with Python and C++ Support
REALM: A Real-to-Sim Validated Benchmark for Generalization in Robotic Manipulation
SHAILAB-IPEC / EO1
Forked from EO-Robotics/EO1
EO: Open-source Unified Embodied Foundation Model Series
[ICRA 2026] VITRA: Scalable Vision-Language-Action Model Pretraining for Robotic Manipulation with Real-Life Human Activity Videos
[ICLR 2026] The official implementation of "Soft-Prompted Transformer as Scalable Cross-Embodiment Vision-Language-Action Model"
Fully Open Framework for Democratized Multimodal Training
Solve puzzles. Improve your PyTorch.
[CoRL 2025] ManiFlow: A General Robot Manipulation Policy via Consistency Flow Training
Code for the paper "3D FlowMatch Actor: Unified 3D Policy for Single- and Dual-Arm Manipulation"
Official code for "Embodied-R1: Reinforced Embodied Reasoning for General Robotic Manipulation" (ICLR2026)
[NeurIPS 2025] Official implementation of "RoboRefer: Towards Spatial Referring with Reasoning in Vision-Language Models for Robotics"
[AAAI26 oral] CronusVLA: Towards Efficient and Robust Manipulation via Multi-Frame Vision-Language-Action Modeling
[CoRL 2025] Pretraining code for FLOWER VLA on OXE
PyTorch code and models for VJEPA2 self-supervised learning from video.
RoboBrain 2.5: Advanced version of RoboBrain. Depth in Sight, Time in Mind.
Cosmos-Predict2 is a collection of general-purpose world foundation models for Physical AI that can be fine-tuned into customized world models for downstream applications.
[NeurIPS 2025] Code for BEAST Experiments on CALVIN and LIBERO.
[RSS 2025] Learning to Act Anywhere with Task-centric Latent Actions
PyTorch implementation of Shortcut Models [Frans, 2025] with minor modifications