Stars
ReactMotion: Generating Reactive Listener Motions from Speaker Utterance
An agentic skills framework & software development methodology that works.
Simulation Distillation: Pretraining World Models in Simulation for Rapid Real-World Adaptation
😎 Awesome lists about all kinds of interesting topics
An Agentic Framework for Reflective PowerPoint Generation
RynnVLA-002: A Unified Vision-Language-Action and World Model
[CVPR 2026] TeamHOI: Learning a Unified Policy for Cooperative Human-Object Interactions with Any Team Size
Test-Time Mixture of World Models for Embodied Agents in Dynamic Environments [ICLR 2026]
Collection of awesome test-time (domain/batch/instance) adaptation methods
Reinforcement Learning via Self-Distillation (SDPO)
Official PyTorch implementation of One-Minute Video Generation with Test-Time Training
A community collection of OpenClaw use cases for making life easier.
The awesome collection of OpenClaw skills. 5,400+ skills filtered and categorized from the official OpenClaw Skills Registry. 🦞
Your own personal AI assistant. Any OS. Any Platform. The lobster way. 🦞
A curated list of awesome Claude Skills, resources, and tools for customizing Claude AI workflows
ICLR 2026: Towards Bridging the Gap between Large-Scale Pretraining and Efficient Finetuning for Humanoid Controls
This repository contains the official implementation of "The Language of Motion: Unifying Verbal and Non-verbal Language of 3D Human Motion".
ProtoMotions is a GPU-accelerated simulation and learning framework for training physically simulated digital humans and humanoid robots.
Simple implementation of simulator environments for sim2sim, with a wrapped ROS2 environment for Unitree robots' real deployment that exposes the same interface as the simulators, providing seamless transfer between…
yixxuan-li / Sim2Everything
Forked from Yutang-Lin/Sim2Everything
A comprehensive list of papers on the definition of World Models and on using World Models for General Video Generation, Embodied AI, and Autonomous Driving, including papers, code, and related webs…
A simple framework for experimenting with Reinforcement Learning in Python.
Qwen2.5-Omni is an end-to-end multimodal model by Qwen team at Alibaba Cloud, capable of understanding text, audio, vision, video, and performing real-time speech generation.
[CVPR 2025] InterAct: Advancing Large-Scale Versatile 3D Human-Object Interaction Generation