A curated, continuously updated reading list, paper blogs, and resources for World Action Models (WAMs) in embodied AI.
Agent skill for production-grade ROS 2 development. Progressive-disclosure SKILL.md covering workspace, nodes, executors, QoS, ros2_control, Nav2, MoveIt 2, real-time, and deployment. Works with Cl…
A curated list of state-of-the-art research in embodied AI, focusing on vision-language-action (VLA) models, vision-language navigation (VLN), and related multimodal learning approaches.
A single CLAUDE.md file to improve Claude Code behavior, derived from Andrej Karpathy's observations on LLM coding pitfalls.
21 writing rules for AI coding and writing agents. Drop-in for Claude Code, Codex, Copilot, Cursor, and Aider, so their output reads like a tech pro.
A Multimodal Dataset for Autonomous Probe Placement and Needle Retrieval in Ultrasound-Guided Liver Biopsy (US-PPNR)
Official code base for LeWorldModel: Stable End-to-End Joint-Embedding Predictive Architecture from Pixels
Zotero MCP: Connects your Zotero research library with Claude and other AI assistants via the Model Context Protocol to discuss papers, get summaries, analyze citations, and more.
ARIS ⚔️ (Auto-Research-In-Sleep) — Lightweight Markdown-only skills for autonomous ML research: cross-model review loops, idea discovery, and experiment automation. No framework, no lock-in — works…
[Zotero AI Butler] Calls large language models to automatically read the papers in your Zotero library in depth and summarize them as Zotero notes. Supports all mainstream LLM platforms! Just drop papers into Zotero as usual; the butler reads each one closely, breaks it down, and distills it into notes, so you can "fully understand" a paper in ten minutes!
Universal Manipulation Interface: In-The-Wild Robot Teaching Without In-The-Wild Robots
In-the-Wild Compliant Manipulation with UMI-FT
[ICRA'24] Real-time Whole-body Motion Planning for Mobile Manipulators Using Environment-adaptive Search and Spatial-temporal Optimization
Recommends new arXiv papers matching your interests daily, based on your Zotero library.
Official development repository of the Large Time Frequency Analysis Toolbox
Using AI for high-quality writing
A Survey on Reinforcement Learning of Vision-Language-Action Models for Robotic Manipulation
Advanced AI Explainability for computer vision. Support for CNNs, Vision Transformers, Classification, Object detection, Segmentation, Image similarity and more.
Elevate your AI research writing, no more tedious polishing ✨
A general purpose scientific writer
PaperBanana: Automating Academic Illustration For AI Scientists
Code and data of CVPR 2025 paper "Noise Calibration and Spatial-Frequency Interactive Network for STEM Image Enhancement"
Code to use learnable wavelet transforms such as the L-WPT and DeSPaWN methods in PyTorch
Code release for "FECAM: Frequency Enhanced Channel Attention Mechanism for Time Series Forecasting" ⌚
StarVLA: A Lego-like Codebase for Vision-Language-Action Model Developing
ROS 2 package for integrating the RUITONG optical tracking system (RUITONG SE/MAX series) into robotics applications. Provides real-time surgical tool tracking at a 60 Hz publishing rate.