6 starred repositories written in CUDA

LLM training in simple, raw C/CUDA

CUDA · 29,450 stars · 3,498 forks · Updated Jun 26, 2025
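To give a flavor of what "simple, raw C/CUDA" means in practice, here is a minimal sketch (my illustration, not code from the repo above) of the kind of dependency-free, elementwise kernel such training loops are built from, in this case a GELU forward pass:

```cuda
#include <cuda_runtime.h>
#include <stdio.h>

// Elementwise GELU forward (tanh approximation), one thread per element.
__global__ void gelu_forward(float* out, const float* inp, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) {
        float x = inp[i];
        const float s = 0.7978845608028654f; // sqrt(2/pi)
        out[i] = 0.5f * x * (1.0f + tanhf(s * (x + 0.044715f * x * x * x)));
    }
}

int main(void) {
    const int n = 1024;
    float *inp, *out;
    cudaMallocManaged(&inp, n * sizeof(float));
    cudaMallocManaged(&out, n * sizeof(float));
    for (int i = 0; i < n; i++) inp[i] = 0.01f * (i - n / 2);
    gelu_forward<<<(n + 255) / 256, 256>>>(out, inp, n);
    cudaDeviceSynchronize();
    printf("gelu(%.3f) = %.6f\n", inp[10], out[10]);
    cudaFree(inp); cudaFree(out);
    return 0;
}
```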

Tile primitives for speedy kernels

CUDA · 3,307 stars · 275 forks · Updated Apr 8, 2026
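The sketch below is not the library's actual API; it is plain CUDA illustrating the idea behind tile primitives: structuring a kernel around small shared-memory tiles that a thread block stages, computes on, and swaps, rather than around individual elements.

```cuda
#include <cuda_runtime.h>
#include <stdio.h>

#define TILE 16

// Each thread block stages one TILE x TILE fragment of A and B in shared
// memory, multiplies the staged tiles, and accumulates into a tile of C.
// Assumes N is a multiple of TILE to keep the sketch short.
__global__ void matmul_tiled(const float* A, const float* B, float* C, int N) {
    __shared__ float As[TILE][TILE];
    __shared__ float Bs[TILE][TILE];
    int row = blockIdx.y * TILE + threadIdx.y;
    int col = blockIdx.x * TILE + threadIdx.x;
    float acc = 0.0f;
    for (int t = 0; t < N / TILE; t++) {
        As[threadIdx.y][threadIdx.x] = A[row * N + t * TILE + threadIdx.x];
        Bs[threadIdx.y][threadIdx.x] = B[(t * TILE + threadIdx.y) * N + col];
        __syncthreads();                    // tiles fully staged
        for (int k = 0; k < TILE; k++)
            acc += As[threadIdx.y][k] * Bs[k][threadIdx.x];
        __syncthreads();                    // safe to overwrite tiles
    }
    C[row * N + col] = acc;
}

int main(void) {
    const int N = 64;
    float *A, *B, *C;
    cudaMallocManaged(&A, N * N * sizeof(float));
    cudaMallocManaged(&B, N * N * sizeof(float));
    cudaMallocManaged(&C, N * N * sizeof(float));
    for (int i = 0; i < N * N; i++) { A[i] = 1.0f; B[i] = 2.0f; }
    dim3 grid(N / TILE, N / TILE), block(TILE, TILE);
    matmul_tiled<<<grid, block>>>(A, B, C, N);
    cudaDeviceSynchronize();
    printf("C[0] = %.1f (expect %.1f)\n", C[0], 2.0f * N);
    return 0;
}
```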

[ICLR2025, ICML2025, NeurIPS2025 Spotlight] Quantized Attention achieves a 2-5x speedup over FlashAttention without losing end-to-end metrics across language, image, and video models.

CUDA · 3,281 stars · 389 forks · Updated Jan 17, 2026
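The speedup claim rests on running the Q·K^T matmul on INT8 hardware paths instead of FP16. A simplified sketch of the key ingredient (my illustration, not the repo's kernels): quantize each row of Q and K with a per-row absmax scale, so the INT8 dot products can be rescaled back by scale_q * scale_k afterwards.

```cuda
#include <cuda_runtime.h>
#include <stdint.h>
#include <stdio.h>

// Per-row INT8 quantization: x ~= q * scale, scale = absmax(row) / 127.
// One thread block per row; a shared-memory reduction finds the absmax.
__global__ void quantize_rows_int8(const float* x, int8_t* q, float* scale,
                                   int cols) {
    int r = blockIdx.x;
    __shared__ float smax[256];
    float m = 0.0f;
    for (int c = threadIdx.x; c < cols; c += blockDim.x)
        m = fmaxf(m, fabsf(x[r * cols + c]));
    smax[threadIdx.x] = m;
    __syncthreads();
    for (int s = blockDim.x / 2; s > 0; s >>= 1) {
        if (threadIdx.x < s)
            smax[threadIdx.x] = fmaxf(smax[threadIdx.x], smax[threadIdx.x + s]);
        __syncthreads();
    }
    float sc = smax[0] / 127.0f + 1e-8f;
    if (threadIdx.x == 0) scale[r] = sc;
    for (int c = threadIdx.x; c < cols; c += blockDim.x)
        q[r * cols + c] = (int8_t)rintf(x[r * cols + c] / sc);
}

int main(void) {
    const int rows = 4, cols = 64;
    float *x, *scale; int8_t *q;
    cudaMallocManaged(&x, rows * cols * sizeof(float));
    cudaMallocManaged(&q, rows * cols * sizeof(int8_t));
    cudaMallocManaged(&scale, rows * sizeof(float));
    for (int i = 0; i < rows * cols; i++) x[i] = 0.01f * i;
    quantize_rows_int8<<<rows, 256>>>(x, q, scale, cols);
    cudaDeviceSynchronize();
    printf("row 0: scale=%f q[63]=%d (x=%f)\n", scale[0], q[63], x[63]);
    return 0;
}
```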

[ICML2025] SpargeAttention: a training-free sparse attention method that accelerates inference for any model.

CUDA · 973 stars · 91 forks · Updated Feb 25, 2026
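The sketch below is not SpargeAttention's actual algorithm, just a plain-CUDA illustration of the block-sparse skeleton such methods share: a mask marks which (query-block, key-block) tiles matter, and pruned tiles are skipped entirely. "Training-free" means the mask is predicted online from the inputs rather than learned.

```cuda
#include <cuda_runtime.h>
#include <stdint.h>
#include <stdio.h>

#define BLK 16

// Attention scores S = QK^T / sqrt(d), computed only for active tiles;
// skipped tiles cost neither compute nor memory traffic.
__global__ void attn_scores_blocksparse(const float* Q, const float* K,
                                        float* S, const uint8_t* mask,
                                        int seq, int d) {
    int qb = blockIdx.y, kb = blockIdx.x;
    if (!mask[qb * gridDim.x + kb]) return;   // skip pruned tile
    int qi = qb * BLK + threadIdx.y;
    int ki = kb * BLK + threadIdx.x;
    if (qi >= seq || ki >= seq) return;
    float dot = 0.0f;
    for (int k = 0; k < d; k++)
        dot += Q[qi * d + k] * K[ki * d + k];
    S[qi * seq + ki] = dot * rsqrtf((float)d);
}

int main(void) {
    const int seq = 64, d = 32, nb = seq / BLK;
    float *Q, *K, *S; uint8_t *mask;
    cudaMallocManaged(&Q, seq * d * sizeof(float));
    cudaMallocManaged(&K, seq * d * sizeof(float));
    cudaMallocManaged(&S, seq * seq * sizeof(float));
    cudaMallocManaged(&mask, nb * nb);
    cudaMemset(S, 0, seq * seq * sizeof(float));
    for (int i = 0; i < seq * d; i++) { Q[i] = 0.1f; K[i] = 0.1f; }
    for (int i = 0; i < nb * nb; i++) mask[i] = (i % 2 == 0); // toy mask
    attn_scores_blocksparse<<<dim3(nb, nb), dim3(BLK, BLK)>>>(Q, K, S, seq, d);
    cudaDeviceSynchronize();
    printf("S[0][0]=%.3f S[0][16]=%.3f (pruned stays 0)\n", S[0], S[16]);
    return 0;
}
```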

CPM.cu is a lightweight, high-performance CUDA implementation for LLMs, optimized for on-device inference and featuring cutting-edge techniques in sparse architecture, speculative sampling, and qua…

CUDA · 236 stars · 22 forks · Updated Jan 14, 2026
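Of the techniques that entry names, speculative sampling is the easiest to show in isolation. Below is a hedged sketch of the standard accept/reject rule (the textbook algorithm, not CPM.cu's implementation) as host-side code that nvcc compiles as-is; the draft/target distributions are toy values.

```cuda
#include <stdio.h>
#include <stdlib.h>

#define VOCAB 8

// One accept/reject step of standard speculative sampling: a small draft
// model proposed token x with probability q[x]; the target model assigns
// p[x]. Accept with probability min(1, p[x]/q[x]); otherwise resample from
// the residual distribution max(0, p - q), renormalized. With x drawn from
// q, the output distribution is exactly the target model's p.
int speculative_step(const double* p, const double* q, int x) {
    double u = (double)rand() / RAND_MAX;
    if (q[x] > 0.0 && u < p[x] / q[x]) return x;  // accept the draft token
    double resid[VOCAB], z = 0.0;
    for (int i = 0; i < VOCAB; i++) {
        resid[i] = p[i] > q[i] ? p[i] - q[i] : 0.0;
        z += resid[i];
    }
    double r = z * (double)rand() / RAND_MAX;     // sample from residual
    for (int i = 0; i < VOCAB; i++) {
        r -= resid[i];
        if (r <= 0.0) return i;
    }
    return VOCAB - 1;
}

int main(void) {
    double p[VOCAB] = {0.4, 0.3, 0.1, 0.1, 0.05, 0.03, 0.01, 0.01};
    double q[VOCAB] = {0.2, 0.5, 0.1, 0.1, 0.05, 0.03, 0.01, 0.01};
    int counts[VOCAB] = {0};
    for (int t = 0; t < 100000; t++) counts[speculative_step(p, q, 1)]++;
    // token 1 is over-proposed by the draft (q=0.5 > p=0.3), so roughly
    // p[1]/q[1] = 60% of steps should keep it; the rest are resampled
    printf("kept draft token 1 in %.1f%% of steps\n",
           100.0 * counts[1] / 100000);
    return 0;
}
```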