Tsinghua University, Beijing, China
Stars
[SIGGRAPH 2026] Pixal3D: Pixel-Aligned 3D Generation from Images
Benchmark harness and code for "SWE-fficiency: Can Language Models Optimize Real World Repositories on Real World Workloads?"
KernelBench: Can LLMs Write GPU Kernels? - Benchmark and toolkit for Torch -> CUDA (and more DSLs)
Samples for CUDA developers demonstrating features of the CUDA Toolkit
A high-throughput and memory-efficient inference and serving engine for LLMs
JittorInfer is a high-performance C++ inference framework designed for large language models on Huawei's Ascend AI processor.
SGLang is a high-performance serving framework for large language models and multimodal models.
Transformer training code for sequential tasks
SoftVC VITS Singing Voice Conversion
Fast and memory-efficient exact attention
The simplest, fastest repository for training/finetuning medium-sized GPTs.
Jittor is a high-performance deep learning framework based on JIT compiling and meta-operators.