iChubai

JoyAI-Image is the unified multimodal foundation model for image understanding, text-to-image generation, and instruction-guided image editing.

Python 338 5 Updated Apr 6, 2026

Python 3.8+ toolbox for submitting jobs to Slurm

Python 1,603 146 Updated Jan 14, 2026

Generative World Renderer

Python 346 4 Updated Apr 4, 2026

[SIGGRAPH 2026] OmniRoam: World Wandering via Long-Horizon Panoramic Video Generation

Python 40 2 Updated Apr 1, 2026

VP2 Benchmark (A Control-Centric Benchmark for Video Prediction, ICLR 2023)

Python 31 2 Updated Mar 3, 2025

The repo is finally unlocked. Enjoy the party! The fastest repo in history to surpass 100K stars ⭐. Join Discord: https://discord.gg/5TUQKqFWd Built in Rust using oh-my-codex.

Rust 171,290 103,851 Updated Apr 6, 2026

Official Codebase for "DreamDojo: A Generalist Robot World Model from Large-Scale Human Videos"

Python 684 46 Updated Mar 21, 2026

Code to pretrain, fine-tune, and evaluate DreamZero, and to run simulated and real-world evaluations

Python 1,581 123 Updated Mar 18, 2026

[CVPR 2026] Ditto: Scaling Instruction-Based Video Editing with a High-Quality Synthetic Dataset

Python 589 50 Updated Oct 29, 2025

Cosmos-Predict2 is a collection of general-purpose world foundation models for Physical AI that can be fine-tuned into customized world models for downstream applications.

Python 760 102 Updated Oct 29, 2025

Scans academic papers for hallucinated citations and converts second-hand citations to their official versions

Python 231 36 Updated Apr 1, 2026

Minimalist agent team management, learned from the Jiajing Emperor: no court audiences, no micromanagement

Python 36 Updated Apr 1, 2026

The Pulse of Motion: Measuring Physical Frame Rate from Visual Dynamics

Python 56 5 Updated Mar 26, 2026

Official implementation of Captain-Safari [CVPR 2026]

Python 37 2 Updated Apr 5, 2026

The official repo for the paper "VQ-VLA: Improving Vision-Language-Action Models via Scaling Vector-Quantized Action Tokenizers" (ICCV 2025)

Python 116 5 Updated Nov 15, 2025

World Simulator Assistant for Physics-Aware Text-to-Video Generation

Python 271 42 Updated Sep 22, 2025

[ICLR 2026] Astra: General Interactive World Model with Autoregressive Denoising

Python 238 5 Updated Mar 13, 2026

Official code and data from DexWM ("World Models Can Leverage Human Videos for Dexterous Manipulation").

Python 35 1 Updated Mar 30, 2026

[ICLR 2025] OpenVid-1M: A Large-Scale High-Quality Dataset for Text-to-video Generation

Python 417 18 Updated May 30, 2025

SynthVerse: A Large-Scale Diverse Synthetic Dataset for Point Tracking

35 Updated Mar 24, 2026

Official implementation of "Repurposing Geometric Foundation Models for Multi-view Diffusion"

Python 166 6 Updated Apr 1, 2026

Official code for "LagerNVS: Latent Geometry for Fully Neural Real-time Novel View Synthesis" (CVPR 2026)

Python 253 9 Updated Mar 27, 2026

A generative world for general-purpose robotics & embodied AI learning.

Python 28,417 2,648 Updated Apr 5, 2026

Code for "EgoX: Egocentric Video Generation from a Single Exocentric Video"

Python 668 40 Updated Apr 2, 2026

A curated list of state-of-the-art research in embodied AI, focusing on vision-language-action (VLA) models, vision-language navigation (VLN), and related multimodal learning approaches.

2,873 127 Updated Apr 4, 2026

Official code for "MagicWorld: Towards Long-Horizon Stability for Interactive Video World Exploration"

Python 51 2 Updated Apr 2, 2026