Stars: 3 repositories written in CUDA
[ICLR 2025, ICML 2025, NeurIPS 2025 Spotlight] Quantized attention achieves a 2-5x speedup over FlashAttention without degrading end-to-end metrics on language, image, and video models.
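The core idea behind quantized attention is computing attention scores in low precision. A minimal sketch of one building block, per-tile symmetric INT8 quantization of Q/K with a per-tile scale, is below; the kernel name and layout are illustrative assumptions, not the repository's actual implementation, which adds further techniques (e.g. smoothing) to preserve accuracy.

```cuda
// Hypothetical sketch: quantize each contiguous tile of a tensor to INT8
// using the tile's absolute maximum as the scale. Assumes blockDim.x is a
// power of two and shared memory of blockDim.x * sizeof(float) at launch.
#include <cuda_runtime.h>
#include <stdint.h>
#include <math.h>

__global__ void quantize_tile_int8(const float* __restrict__ x,
                                   int8_t* __restrict__ q,
                                   float* __restrict__ scale,
                                   int tile_size) {
    extern __shared__ float smem[];
    const float* tile = x + (size_t)blockIdx.x * tile_size;  // one block per tile

    // Cooperatively find the tile's absolute maximum.
    float local_max = 0.0f;
    for (int i = threadIdx.x; i < tile_size; i += blockDim.x)
        local_max = fmaxf(local_max, fabsf(tile[i]));
    smem[threadIdx.x] = local_max;
    __syncthreads();

    // Tree reduction down to smem[0].
    for (int s = blockDim.x / 2; s > 0; s >>= 1) {
        if (threadIdx.x < s)
            smem[threadIdx.x] = fmaxf(smem[threadIdx.x], smem[threadIdx.x + s]);
        __syncthreads();
    }

    float s_val = smem[0] / 127.0f + 1e-8f;  // epsilon avoids divide-by-zero
    if (threadIdx.x == 0) scale[blockIdx.x] = s_val;

    // q = round(x / scale), clamped to the INT8 range.
    for (int i = threadIdx.x; i < tile_size; i += blockDim.x) {
        float v = rintf(tile[i] / s_val);
        q[(size_t)blockIdx.x * tile_size + i] =
            (int8_t)fminf(fmaxf(v, -127.0f), 127.0f);
    }
}
```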
Efficient GPU kernels for block-sparse matrix multiplication and convolution
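Block-sparse matmul skips entire zero blocks of the sparse operand rather than individual zeros. A minimal sketch assuming a block-CSR layout (dense BSxBS blocks indexed by CSR-style row pointers and block-column indices) follows; the format and kernel are illustrative assumptions, not the repository's kernels.

```cuda
// Hypothetical sketch: C[M x N] += A_sparse[M x K] * B[K x N], with A in
// block-CSR. Assumes M, K, N are divisible by BS; launch with
// blockDim = (BS, BS), gridDim = (N / BS, M / BS).
#include <cuda_runtime.h>

#define BS 16  // side length of a dense sub-block

__global__ void bsr_matmul(const float* __restrict__ blocks,  // nnzb * BS * BS
                           const int* __restrict__ row_ptr,   // num block rows + 1
                           const int* __restrict__ col_idx,   // block column per stored block
                           const float* __restrict__ B,
                           float* __restrict__ C,
                           int N) {
    int block_row = blockIdx.y;
    int row = block_row * BS + threadIdx.y;
    int col = blockIdx.x * BS + threadIdx.x;

    float acc = 0.0f;
    // Iterate only over the nonzero blocks of this block row of A.
    for (int b = row_ptr[block_row]; b < row_ptr[block_row + 1]; ++b) {
        const float* blk = blocks + (size_t)b * BS * BS;  // row-major BSxBS block
        int k0 = col_idx[b] * BS;                          // starting K offset
        for (int k = 0; k < BS; ++k)
            acc += blk[threadIdx.y * BS + k] * B[(size_t)(k0 + k) * N + col];
    }
    C[(size_t)row * N + col] += acc;
}
```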
Benchmark code for the "Online normalizer calculation for softmax" paper
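The paper's algorithm fuses the max and normalizer computations of softmax into a single pass: it keeps a running max m and running sum d, rescaling d whenever the max grows. A serial host-side version for clarity (the benchmarked kernels are parallel GPU implementations):

```cuda
// Online normalizer recurrence from "Online normalizer calculation for
// softmax": one read pass computes both the max and the normalizer.
#include <math.h>

void online_softmax(const float* x, float* y, int n) {
    float m = -INFINITY;  // running max
    float d = 0.0f;       // running sum of exp(x_i - m)
    for (int i = 0; i < n; ++i) {
        float m_new = fmaxf(m, x[i]);
        // Rescale the accumulated sum to the new max, then add this term.
        d = d * expf(m - m_new) + expf(x[i] - m_new);
        m = m_new;
    }
    for (int i = 0; i < n; ++i)
        y[i] = expf(x[i] - m) / d;
}
```

This single-pass recurrence is also what makes streaming attention kernels such as FlashAttention possible, since the normalizer can be updated tile by tile without revisiting earlier scores.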