- Karlsruhe, Germany
- https://mbreuss.github.io
- @moritz_reuss
- in/moritzreuss
Stars
SHAILAB-IPEC / EO1
Forked from EO-Robotics/EO1. EO: Open-source Unified Embodied Foundation Model Series
VITRA: Scalable Vision-Language-Action Model Pretraining for Robotic Manipulation with Real-Life Human Activity Videos
The official implementation of "Soft-Prompted Transformer as Scalable Cross-Embodiment Vision-Language-Action Model"
Fully Open Framework for Democratized Multimodal Training
Solve puzzles. Improve your pytorch.
[CoRL 2025] ManiFlow: A General Robot Manipulation Policy via Consistency Flow Training
Code for the paper "3D FlowMatch Actor: Unified 3D Policy for Single- and Dual-Arm Manipulation"
Official code for "Embodied-R1: Reinforced Embodied Reasoning for General Robotic Manipulation"
[NeurIPS 2025] Official implementation of "RoboRefer: Towards Spatial Referring with Reasoning in Vision-Language Models for Robotics"
[AAAI 2026 oral] CronusVLA: Towards Efficient and Robust Manipulation via Multi-Frame Vision-Language-Action Modeling
[CoRL 2025] Pretraining code for FLOWER VLA on OXE
PyTorch code and models for VJEPA2 self-supervised learning from video.
RoboBrain 2.0: Advanced version of RoboBrain. See Better. Think Harder. Do Smarter.
Cosmos-Predict2 is a collection of general-purpose world foundation models for Physical AI that can be fine-tuned into customized world models for downstream applications.
[NeurIPS 2025] Code for BEAST Experiments on CALVIN and LIBERO.
[RSS 2025] Learning to Act Anywhere with Task-centric Latent Actions
PyTorch implementation of Shortcut Models [Frans, 2025] with minor modifications
MAGI-1: Autoregressive Video Generation at Scale
RoboVerse: Towards a Unified Platform, Dataset and Benchmark for Scalable and Generalizable Robot Learning
[CoRL 2025] Code for FLOWER VLA for finetuning FLOWER on CALVIN and all LIBERO environments