Starred repositories
11 stars written in CUDA
Instant neural graphics primitives: lightning-fast NeRF and more
This package contains the original 2012 AlexNet code.
[ICLR 2025, ICML 2025, NeurIPS 2025 Spotlight] Quantized attention that achieves a 2-5x speedup over FlashAttention without degrading end-to-end metrics across language, image, and video models (see the sketch after this list).
[ARCHIVED] Cooperative primitives for CUDA C++. See https://github.com/NVIDIA/cccl
[MICRO'23, MLSys'22] TorchSparse: Efficient Training and Inference Framework for Sparse Convolution on GPUs.
NeRFshop: Interactive Editing of Neural Radiance Fields
[ICML 2024] Quest: Query-Aware Sparsity for Efficient Long-Context LLM Inference
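
As a rough illustration of the idea behind the quantized-attention entry above (not that repository's actual method or API), here is a minimal PyTorch sketch: quantize Q and K to INT8 per tensor before the score matmul, then dequantize with the product of the two scales. Real kernels do this per block on INT8 tensor cores and keep the softmax and PV matmul in higher precision; everything below is a hypothetical toy version for clarity.

```python
import torch
import torch.nn.functional as F

def quantize_int8(x: torch.Tensor):
    # Per-tensor symmetric INT8 quantization: scale so max |value| maps to 127.
    scale = x.abs().amax().clamp(min=1e-8) / 127.0
    q = torch.round(x / scale).clamp(-127, 127).to(torch.int8)
    return q, scale

def int8_attention(q, k, v):
    # Quantize Q and K; compute QK^T from the quantized values (cast back to
    # float here for portability; a real kernel would use INT8 tensor cores),
    # then dequantize the scores with the product of the two scales.
    q_i8, q_scale = quantize_int8(q)
    k_i8, k_scale = quantize_int8(k)
    scores = (q_i8.float() @ k_i8.float().transpose(-2, -1)) * (q_scale * k_scale)
    scores = scores / q.shape[-1] ** 0.5
    probs = F.softmax(scores, dim=-1)
    return probs @ v  # the PV matmul stays in floating point

# Usage: compare against full-precision attention on random tensors.
q = torch.randn(1, 8, 128, 64)
k = torch.randn(1, 8, 128, 64)
v = torch.randn(1, 8, 128, 64)
ref = F.scaled_dot_product_attention(q, k, v)
approx = int8_attention(q, k, v)
print((ref - approx).abs().max())
```

Running it prints the worst-case deviation from torch's full-precision attention, which makes the accuracy/speed trade-off behind the 2-5x claim concrete.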