Starred repositories
A curated list for feed-forward 3D scene modeling, including research directions, datasets, and applications.
Official code, models, and data for Vista4D: Video Reshooting with 4D Point Clouds (CVPR 2026 Highlight)
A comprehensive collection of Agent Skills for context engineering, multi-agent architectures, and production agent systems. Use when building, optimizing, or debugging agent systems that require e…
4RC: 4D Reconstruction via Conditional Querying Anytime and Anywhere
This is the official implementation of "WildSplatter: Feed-forward 3D Gaussian Splatting with Appearance Control from Unconstrained Images."
The GEP-powered self-evolving engine for AI agents. Auditable evolution with Genes, Capsules, and Events. | evomap.ai
A curated list of AI Agent evolution, memory systems, multi-agent architectures, and self-improvement projects. | evomap.ai
[CVPR'26] TokenGS: Decoupling 3D Gaussian Prediction from Pixels with Learnable Tokens
The implementation for "Photoreal Scene Reconstruction from an Egocentric Camera, SIGGRAPH 2025"
Benchmarking Visual-Inertial SLAM at City Scale (ICCV 2025).
An agentic skills framework & software development methodology that works.
Implement a ChatGPT-like LLM in PyTorch from scratch, step by step
Official repository of Utonia: Toward One Encoder for All Point Clouds
"Foundations of Computer Vision" book
A feed-forward 3D foundation model for reconstructing scenes from streaming data
OKVIS2-X: Open Keyframe-based Visual-Inertial SLAM Configurable with Dense Depth or LiDAR, and GNSS
Code and implementations for the paper "AgentGym-RL: Training LLM Agents for Long-Horizon Decision Making through Multi-Turn Reinforcement Learning" by Zhiheng Xi et al.
An Easy-to-use, Scalable and High-performance Agentic RL Framework based on Ray (PPO & DAPO & REINFORCE++ & VLM & TIS & vLLM & Ray & Async RL)
OpenClaw-RL: Train any agent simply by talking
τ-Bench: A Benchmark for Tool-Agent-User Interaction in Real-World Domains
A Curated List of Vision-Language-Action (VLA) and World Action Models (WAM) Research and Beyond