# Starred repositories

3 stars written in Cuda
[ICLR2025, ICML2025, NeurIPS2025 Spotlight] Quantized attention that achieves a 2-5x speedup over FlashAttention without degrading end-to-end metrics across language, image, and video models.
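For context, the core idea behind quantized attention is to compute the QK^T score matrix in low precision (e.g., INT8) with quantization scales, then rescale before the softmax so accuracy is preserved. The sketch below is a minimal, illustrative NumPy version of that general idea, not this repository's CUDA kernels; the function names and the per-row scaling scheme are assumptions made for illustration.

```python
import numpy as np

def quantize_int8(x, axis=-1):
    """Symmetric per-row INT8 quantization: returns int8 values and float scales."""
    scale = np.max(np.abs(x), axis=axis, keepdims=True) / 127.0
    scale = np.maximum(scale, 1e-8)            # avoid division by zero
    q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
    return q, scale

def quantized_attention(Q, K, V):
    """Attention with INT8 Q/K score computation and FP32 softmax/PV (illustrative only)."""
    qQ, sQ = quantize_int8(Q)                  # [n, d] int8, [n, 1] scales
    qK, sK = quantize_int8(K)
    # Integer matmul accumulated in int32, then rescaled back to float.
    scores = (qQ.astype(np.int32) @ qK.astype(np.int32).T).astype(np.float32)
    scores *= sQ * sK.T                        # undo the quantization scales
    scores /= np.sqrt(Q.shape[-1])
    scores -= scores.max(axis=-1, keepdims=True)   # numerically stable softmax
    P = np.exp(scores)
    P /= P.sum(axis=-1, keepdims=True)
    return P @ V

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    Q, K, V = (rng.standard_normal((64, 64), dtype=np.float32) for _ in range(3))
    print(quantized_attention(Q, K, V).shape)  # (64, 64)
```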
Training materials associated with NVIDIA's CUDA Training Series (www.olcf.ornl.gov/cuda-training-series/)
[ICML2025] SpargeAttention: a training-free sparse attention method that accelerates inference for any model.
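The description does not specify the sparsity pattern, so the sketch below is only a generic block-sparse attention illustration under assumed details: block importance is estimated from mean-pooled Q/K, and each query block attends only to its top-scoring key/value blocks. The function name, the pooling heuristic, and the keep_ratio parameter are hypothetical and not the repository's actual algorithm or API.

```python
import numpy as np

def block_sparse_attention(Q, K, V, block=16, keep_ratio=0.5):
    """Illustrative block-sparse attention: each query block attends only to its top-scoring K/V blocks."""
    n, d = Q.shape
    nb = n // block
    out = np.zeros_like(Q)
    # Cheap block-level importance estimate from mean-pooled queries/keys.
    Qb = Q.reshape(nb, block, d).mean(axis=1)            # [nb, d]
    Kb = K.reshape(nb, block, d).mean(axis=1)            # [nb, d]
    block_scores = Qb @ Kb.T                             # [nb, nb]
    k_keep = max(1, int(np.ceil(keep_ratio * nb)))
    for i in range(nb):
        # Key/value blocks this query block attends to.
        keep = np.argsort(block_scores[i])[-k_keep:]
        cols = np.concatenate([np.arange(j * block, (j + 1) * block) for j in keep])
        q = Q[i * block:(i + 1) * block]                 # [block, d]
        s = (q @ K[cols].T) / np.sqrt(d)                 # scores only over kept blocks
        s -= s.max(axis=-1, keepdims=True)
        p = np.exp(s)
        p /= p.sum(axis=-1, keepdims=True)
        out[i * block:(i + 1) * block] = p @ V[cols]
    return out

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    Q, K, V = (rng.standard_normal((128, 64), dtype=np.float32) for _ in range(3))
    print(block_sparse_attention(Q, K, V).shape)  # (128, 64)
```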