👋

Highlights

  • Pro

Organizations

@SNUVL @ctr4si

3 starred repositories written in Cuda

[ICLR2025, ICML2025, NeurIPS2025 Spotlight] Quantized attention that achieves a 2-5x speedup over FlashAttention without degrading end-to-end metrics across language, image, and video models.

Cuda · 3,333 stars · 401 forks · Updated Jan 17, 2026
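
The headline claim here is a 2-5x speedup from quantizing attention. Below is a minimal sketch of the step such kernels build on: quantize Q and K to INT8 with per-row scales, accumulate the QK^T dot products exactly in INT32, and dequantize the scores to FP32. The symmetric per-row scaling, kernel layout, and all names are illustrative assumptions, not this repository's actual kernels, which are considerably more sophisticated.

```cuda
// Sketch only: naive INT8 QK^T with per-row scales and INT32 accumulation.
// Not this repository's kernels; names and tile layout are illustrative.
#include <cstdio>
#include <cstdint>
#include <cmath>
#include <vector>
#include <cuda_runtime.h>

// One thread per (query, key) score; n = sequence length, d = head dim.
__global__ void qk_int8_scores(const int8_t* Q, const float* q_scale,
                               const int8_t* K, const float* k_scale,
                               float* S, int n, int d) {
    int row = blockIdx.y * blockDim.y + threadIdx.y;   // query index
    int col = blockIdx.x * blockDim.x + threadIdx.x;   // key index
    if (row >= n || col >= n) return;
    int32_t acc = 0;                                   // exact INT32 accumulation
    for (int k = 0; k < d; ++k)
        acc += int32_t(Q[row * d + k]) * int32_t(K[col * d + k]);
    // Dequantize: each row was scaled so its max |x| maps to 127.
    S[row * n + col] = float(acc) * q_scale[row] * k_scale[col];
}

// Host-side symmetric per-row quantization to INT8.
void quantize_rows(const float* x, int8_t* q, float* scale, int n, int d) {
    for (int r = 0; r < n; ++r) {
        float amax = 1e-8f;
        for (int k = 0; k < d; ++k) amax = fmaxf(amax, fabsf(x[r * d + k]));
        scale[r] = amax / 127.0f;
        for (int k = 0; k < d; ++k)
            q[r * d + k] = int8_t(lrintf(x[r * d + k] / scale[r]));
    }
}

int main() {
    const int n = 64, d = 64;
    std::vector<float> Qf(n * d), Kf(n * d);
    for (int i = 0; i < n * d; ++i) { Qf[i] = sinf(0.01f * i); Kf[i] = cosf(0.01f * i); }
    std::vector<int8_t> Qq(n * d), Kq(n * d);
    std::vector<float> qs(n), ks(n);
    quantize_rows(Qf.data(), Qq.data(), qs.data(), n, d);
    quantize_rows(Kf.data(), Kq.data(), ks.data(), n, d);

    int8_t *dQ, *dK; float *dqs, *dks, *dS;
    cudaMalloc(&dQ, n * d); cudaMalloc(&dK, n * d);
    cudaMalloc(&dqs, n * sizeof(float)); cudaMalloc(&dks, n * sizeof(float));
    cudaMalloc(&dS, n * n * sizeof(float));
    cudaMemcpy(dQ, Qq.data(), n * d, cudaMemcpyHostToDevice);
    cudaMemcpy(dK, Kq.data(), n * d, cudaMemcpyHostToDevice);
    cudaMemcpy(dqs, qs.data(), n * sizeof(float), cudaMemcpyHostToDevice);
    cudaMemcpy(dks, ks.data(), n * sizeof(float), cudaMemcpyHostToDevice);

    dim3 threads(16, 16), grid((n + 15) / 16, (n + 15) / 16);
    qk_int8_scores<<<grid, threads>>>(dQ, dqs, dK, dks, dS, n, d);

    std::vector<float> S(n * n);
    cudaMemcpy(S.data(), dS, n * n * sizeof(float), cudaMemcpyDeviceToHost);
    float ref = 0.f;                                   // FP32 reference for one score
    for (int k = 0; k < d; ++k) ref += Qf[k] * Kf[k];
    printf("S[0][0] = %f, fp32 reference = %f\n", S[0], ref);
    return 0;
}
```

The speedup comes from INT8 math and halved memory traffic; the quantization error stays small because each score is rescaled by its own row scales before softmax.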

Efficient GPU kernels for block-sparse matrix multiplication and convolution

Cuda · 1,065 stars · 198 forks · Updated Jun 8, 2023
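
For context, a minimal sketch of one common block-sparse scheme: store only the nonzero BSxBS blocks of the sparse matrix together with their block-grid coordinates, and launch one CUDA block per stored block. The storage format, names, and toy sizes below are assumptions for illustration, not this repository's API.

```cuda
// Sketch only: block-sparse W (shape [M, K]) times dense X (shape [K, N]).
// W is stored as a stack of nonzero BSxBS blocks plus their coordinates.
#include <cstdio>
#include <vector>
#include <cuda_runtime.h>

#define BS 16  // side length of a dense sub-block

__global__ void block_sparse_mm(const float* blocks,  // [nnz, BS, BS] nonzero blocks
                                const int2* coords,   // block-grid (row, col) per block
                                const float* X, float* Y, int N) {
    int b = blockIdx.x;                                // which stored block
    int row = coords[b].x * BS + threadIdx.y;          // output row
    int col = blockIdx.y * BS + threadIdx.x;           // output column
    float acc = 0.f;
    for (int k = 0; k < BS; ++k)
        acc += blocks[(b * BS + threadIdx.y) * BS + k]
             * X[(coords[b].y * BS + k) * N + col];
    // Several stored blocks can land in the same row band, so accumulate atomically.
    atomicAdd(&Y[row * N + col], acc);
}

int main() {
    // Toy problem: 32x32 W with two nonzero 16x16 blocks on the diagonal.
    const int K = 32, N = 32, M = 32, nnz = 2;
    std::vector<float> blocks(nnz * BS * BS, 1.f);
    std::vector<int2> coords = { {0, 0}, {1, 1} };
    std::vector<float> X(K * N, 0.5f), Y(M * N, 0.f);

    float *dB, *dX, *dY; int2* dC;
    cudaMalloc(&dB, blocks.size() * sizeof(float));
    cudaMalloc(&dC, coords.size() * sizeof(int2));
    cudaMalloc(&dX, X.size() * sizeof(float));
    cudaMalloc(&dY, Y.size() * sizeof(float));
    cudaMemcpy(dB, blocks.data(), blocks.size() * sizeof(float), cudaMemcpyHostToDevice);
    cudaMemcpy(dC, coords.data(), coords.size() * sizeof(int2), cudaMemcpyHostToDevice);
    cudaMemcpy(dX, X.data(), X.size() * sizeof(float), cudaMemcpyHostToDevice);
    cudaMemset(dY, 0, Y.size() * sizeof(float));

    dim3 grid(nnz, N / BS), threads(BS, BS);           // one CUDA block per (block, tile)
    block_sparse_mm<<<grid, threads>>>(dB, dC, dX, dY, N);
    cudaMemcpy(Y.data(), dY, Y.size() * sizeof(float), cudaMemcpyDeviceToHost);
    printf("Y[0][0] = %f (expect 16 * 1.0 * 0.5 = 8)\n", Y[0]);
    return 0;
}
```

Skipping the zero blocks entirely is what gives block-sparse kernels their efficiency: work scales with the number of nonzero blocks, not with M x K.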

Benchmark code for the "Online normalizer calculation for softmax" paper

Cuda · 110 stars · 10 forks · Updated Jul 27, 2018
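
The paper's key observation is that softmax's maximum and normalizer can be computed together in a single pass: keep a running max m and a running sum d of exp(x - m), and rescale d by exp(m_old - m_new) whenever the maximum grows. A minimal sketch of that update rule follows, with one thread per row for clarity; the benchmark itself measures optimized parallel-reduction variants.

```cuda
// Sketch only: one-pass "online" softmax normalizer, one thread per row.
#include <cstdio>
#include <cmath>
#include <vector>
#include <cuda_runtime.h>

__global__ void online_softmax(const float* X, float* Y, int rows, int cols) {
    int r = blockIdx.x * blockDim.x + threadIdx.x;
    if (r >= rows) return;
    float m = -INFINITY, d = 0.f;
    for (int j = 0; j < cols; ++j) {          // single pass: max and sum together
        float x = X[r * cols + j];
        float m_new = fmaxf(m, x);
        d = d * expf(m - m_new)               // rescale old sum if the max grew
          + expf(x - m_new);                  // fold in the new element
        m = m_new;
    }
    for (int j = 0; j < cols; ++j)            // normalize with the final (m, d)
        Y[r * cols + j] = expf(X[r * cols + j] - m) / d;
}

int main() {
    const int rows = 4, cols = 1024;
    std::vector<float> X(rows * cols), Y(rows * cols);
    for (int i = 0; i < rows * cols; ++i) X[i] = sinf(0.1f * i) * 30.f;  // large logits
    float *dX, *dY;
    cudaMalloc(&dX, X.size() * sizeof(float));
    cudaMalloc(&dY, Y.size() * sizeof(float));
    cudaMemcpy(dX, X.data(), X.size() * sizeof(float), cudaMemcpyHostToDevice);
    online_softmax<<<1, rows>>>(dX, dY, rows, cols);
    cudaMemcpy(Y.data(), dY, Y.size() * sizeof(float), cudaMemcpyDeviceToHost);
    float s = 0.f;
    for (int j = 0; j < cols; ++j) s += Y[j];
    printf("row 0 sums to %f (expect 1.0)\n", s);
    return 0;
}
```

Fusing the max and sum saves one full memory pass over the naive three-pass safe softmax, and the same rescaling trick is what lets FlashAttention-style kernels (including the quantized-attention repository above) compute attention without materializing the full score matrix.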