4 starred repositories written in CUDA
Instant neural graphics primitives: lightning fast NeRF and more
[ICLR2025, ICML2025, NeurIPS2025 Spotlight] Quantized Attention achieves a 2-5x speedup over FlashAttention without losing end-to-end accuracy across language, image, and video models.
[ICML2025] SpargeAttention: A training-free sparse attention method that accelerates inference for any model.
A CUDA mesh ray tracer with BVH acceleration, Python bindings, and a GUI.