FlashAttention for sliding window attention in Triton (fwd + bwd pass)
Updated Jun 25, 2025 - Python
This repository contains multiple implementations of Flash Attention optimized with Triton kernels, showcasing progressive performance improvements through hardware-aware optimizations. The implementations range from basic block-wise processing to advanced techniques such as FP8 quantization and prefetching.
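The block-wise processing mentioned above is the core FlashAttention trick: process K/V in tiles while keeping a running softmax max and denominator, so the full attention matrix is never materialized. A minimal NumPy sketch of that online-softmax accumulation (illustrative only; the actual repositories implement this as Triton GPU kernels, and the function name here is hypothetical):

```python
import numpy as np

def tiled_attention(q, k, v, block=16):
    """Illustrative block-wise attention with online softmax rescaling.

    Processes K/V in tiles of `block` rows, keeping a running row max
    and denominator so the result matches softmax(q @ k.T / sqrt(d)) @ v
    without ever materializing the full n x n score matrix.
    """
    n, d = q.shape
    scale = 1.0 / np.sqrt(d)
    out = np.zeros((n, d))
    m = np.full(n, -np.inf)   # running row-wise max of the scores
    l = np.zeros(n)           # running softmax denominator
    for start in range(0, k.shape[0], block):
        kb, vb = k[start:start + block], v[start:start + block]
        s = (q @ kb.T) * scale                  # scores for this tile
        m_new = np.maximum(m, s.max(axis=1))    # updated row max
        p = np.exp(s - m_new[:, None])          # tile probabilities (unnormalized)
        alpha = np.exp(m - m_new)               # rescale factor for old accumulators
        l = l * alpha + p.sum(axis=1)
        out = out * alpha[:, None] + p @ vb
        m = m_new
    return out / l[:, None]
```

With `block` equal to the sequence length this degenerates to ordinary attention; smaller blocks trade extra rescaling arithmetic for O(block) memory per tile, which is what makes the kernel IO-efficient on GPU SRAM.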
Cross-platform FlashAttention-2 Triton implementation for Turing+ with custom configuration mode
HRM-sMoE LLM training toolkit.
A minimal CUDA implementation of FlashAttention v1 and v2
A high-performance kernel implementation of multi-head attention using Triton, focused on minimizing memory overhead and maximizing throughput for large-scale transformer layers. Includes clean tensor layouts, head-grouping optimizations, and ready-to-benchmark code you can plug into custom models.
FlashAttention2 Analysis in Triton
Pytorch implementation of the paper FlashAttention: Fast and Memory-Efficient Exact Attention with IO-Awareness
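The sliding-window variant named in this topic's title restricts each query to a local causal band of keys, which is what the forward/backward Triton kernels tile over. A minimal NumPy sketch of that masking pattern (illustrative only; function name and `window` parameter are assumptions, not taken from any listed repository):

```python
import numpy as np

def sliding_window_attention(q, k, v, window=4):
    """Illustrative sliding-window attention: query i attends only to
    keys j with i - window < j <= i (a causal local band)."""
    n, d = q.shape
    s = (q @ k.T) / np.sqrt(d)
    i = np.arange(n)[:, None]
    j = np.arange(n)[None, :]
    band = (j <= i) & (j > i - window)   # causal band of width `window`
    s = np.where(band, s, -np.inf)       # masked positions get zero weight
    p = np.exp(s - s.max(axis=1, keepdims=True))
    return (p / p.sum(axis=1, keepdims=True)) @ v
```

Because the band has fixed width, a fused kernel only needs to visit O(window) key blocks per query block, which is why this pattern composes naturally with the tiled FlashAttention loop.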