nerfstudio-project / gsplat
CUDA-accelerated rasterization of Gaussian splatting
Causal depthwise conv1d in CUDA, with a PyTorch interface
FlashInfer: Kernel Library for LLM Serving
DeepEP: an efficient expert-parallel communication library
NCCL Tests
Fast CUDA matrix multiplication from scratch
Instant neural graphics primitives: lightning fast NeRF and more
LLM training in simple, raw C/CUDA
CUDA Library Samples
Tile primitives for speedy kernels
RCCL Performance Benchmark Tests
CUDA Kernel Benchmarking Library
Quantized Attention achieves 2-5x and 3-11x speedups over FlashAttention and xformers, respectively, without losing end-to-end metrics across language, image, and video models.
SpargeAttention: a training-free sparse attention method that can accelerate inference for any model.
Distributed multigrid linear solver library on GPU