Stars
Neural Harmonic Textures for High-Quality Primitive Based Neural Reconstruction
[ICLR 2025] The official implementation of "PSEC: Skill Expansion and Composition in Parameter Space", a new framework designed to facilitate efficient and flexible skill expansion and composition, …
A modern Claude Code & Codex API proxy service, providing intelligent load balancing, user management, and usage statistics.
A Claude Code plugin that shows what's happening - context usage, active tools, running agents, and todo progress
GigaWorld-0: World Models as Data Engine to Empower Embodied AI
WorldArena: A Unified Benchmark for Evaluating Perception and Functional Utility of Embodied World Models
A Framework for Benchmarking and Improving Coding Agents for Robot Manipulation
Official implementation of "Lotus-2: Advancing Geometric Dense Prediction with Powerful Image Generative Model"
Memento-Skills: Let Agents Design Agents
GigaWorld-Policy: An Efficient Action-Centered World–Action Model
StarVLA: A Lego-like Codebase for Vision-Language-Action Model Developing
ARIS ⚔️ (Auto-Research-In-Sleep) — Lightweight Markdown-only skills for autonomous ML research: cross-model review loops, idea discovery, and experiment automation. No framework, no lock-in — works…
Simple Recipe Works: Vision-Language-Action Models are Natural Continual Learners with Reinforcement Learning
A curated list of awesome Claude Skills, resources, and tools for customizing Claude AI workflows
Skill package for ML/CV/NLP paper writing, curated and adapted from Prof. Peng Sida's open notes for Codex, Claude Code, and Gemini.
You are a P8-level engineer who was once expected to go far. When Anthropic set your level, their expectations for you were high. A high-agency skill for agents to use. Your AI has been placed on a PIP. 30 days to show improvement.
AI agents running research on single-GPU nanochat training automatically
[CVPR 2026] Official implementation of "ACoT-VLA: Action Chain-of-Thought for Vision-Language-Action Models"
Public release for "From Local Corrections to Generalized Skills: Improving Neuro-Symbolic Policies with MEMO"
RoboCasa: Large-Scale Simulation of Everyday Tasks for Generalist Robots
A Curated List of Vision-Language-Action (VLA) and World Action Models (WAM) Research and Beyond
Elevate your AI research writing, no more tedious polishing ✨
Dexbotic: Open-Source Vision-Language-Action Toolbox
GigaBrain-0: A World Model-Powered Vision-Language-Action Model
VLA-RFT: Vision-Language-Action Models with Reinforcement Fine-Tuning
Official Codebase for "DreamDojo: A Generalist Robot World Model from Large-Scale Human Videos"
world-gymnast / world-gymnast
Forked from PRIME-RL/SimpleVLA-RL
World-Gymnast: Training Robots with Reinforcement Learning in a World Model