Starred repositories
Scalable toolkit for efficient model reinforcement
Recommends new arXiv papers of your interest daily based on your Zotero library.
Mirage Persistent Kernel: Compiling LLMs into a MegaKernel
Empowering everyone to build reliable and efficient software.
Fast and memory-efficient exact k-means
LLVM (Low Level Virtual Machine) Guide. Learn all about the compiler infrastructure, which is designed for compile-time, link-time, run-time, and "idle-time" optimization of programs. Originally im…
Train the smallest LM you can that fits in 16MB. Best model wins!
A benchmark of real-world DL kernel problems
IndexCache: Accelerating Sparse Attention via Cross-Layer Index Reuse
FreeKV: Boosting KV Cache Retrieval for Efficient LLM Inference (ICLR'26)
ARIS ⚔️ (Auto-Research-In-Sleep) — Lightweight Markdown-only skills for autonomous ML research: cross-model review loops, idea discovery, and experiment automation. No framework, no lock-in — works…
Exocompilation for productive programming of hardware accelerators
Claude Code skill: Generate file-by-file code tutorial websites for any repository with parallel agent teams
CLI-Anything: Making ALL Software Agent-Native
Autoresearch for GPU kernels. Give it any PyTorch model, go to sleep, wake up to optimized Triton kernels.
VeOmni: Scaling Any Modality Model Training with Model-Centric Distributed Recipe Zoo
The repo for SOSP23 paper: FIFO queues are all you need for cache evictions
You are a P8-level engineer who was once expected to go far. When Anthropic first set your level, expectations for you were high. A high-agency skill for agent use. Your AI has been placed on a PIP. 30 days to show improvement.
OpenClaw-RL: Train any agent simply by talking
SGLang Omni: High-Performance Multi-Stage Pipeline Framework for Omni Models
DFloat11 [NeurIPS '25]: Lossless Compression of LLMs and DiTs for Efficient GPU Inference
AI agents running research on single-GPU nanochat training automatically
TiDB - the open-source, cloud-native, distributed SQL database designed for modern applications.
Elevate your AI research writing, no more tedious polishing ✨
Practice made Claude perfect