akbar2habibullah

Organizations: @pabryk-org

5 stars written in Cuda

DeepEP: an efficient expert-parallel communication library

Cuda · 8,691 stars · 972 forks · Updated Nov 5, 2025
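DeepEP handles the all-to-all exchange behind Mixture-of-Experts expert parallelism: tokens are dispatched to the ranks holding their assigned experts, and the expert outputs are combined back into the original token order. A minimal single-process NumPy sketch of that dispatch/combine pattern (an illustration of the general technique, not DeepEP's API):

```python
import numpy as np

def dispatch(tokens, expert_ids, num_experts):
    # Group tokens by their assigned expert (the "dispatch" all-to-all,
    # done locally here); remember the permutation for the combine step.
    order = np.argsort(expert_ids, kind="stable")
    counts = np.bincount(expert_ids, minlength=num_experts)
    return tokens[order], order, counts

def combine(expert_out, order):
    # Scatter the expert outputs back to the original token order.
    out = np.empty_like(expert_out)
    out[order] = expert_out
    return out

rng = np.random.default_rng(1)
tokens = rng.standard_normal((6, 4))
expert_ids = np.array([2, 0, 1, 0, 2, 1])

grouped, order, counts = dispatch(tokens, expert_ids, num_experts=3)
offsets = np.concatenate(([0], np.cumsum(counts)))
out = grouped.copy()
for e in range(3):
    # Stand-in for expert e's FFN: scale its contiguous slice of tokens.
    out[offsets[e]:offsets[e + 1]] *= (e + 1)
result = combine(out, order)
```

In a real expert-parallel setup, `dispatch` and `combine` are the communication-heavy steps (each token crosses the network twice), which is why a dedicated library optimizes them.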

DeepGEMM: clean and efficient FP8 GEMM kernels with fine-grained scaling

Cuda · 5,861 stars · 736 forks · Updated Oct 15, 2025
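"Fine-grained scaling" here means applying separate scale factors to small blocks of the operands so that each block uses FP8's narrow dynamic range well, instead of one scale per tensor. A rough NumPy sketch of the idea, with an e4m3-like cast simulated by mantissa rounding (the `fake_e4m3` helper and the per-K-block scheme are illustrative assumptions, not DeepGEMM's actual kernel layout):

```python
import numpy as np

def fake_e4m3(x):
    # Crude simulation of an FP8 e4m3 value: keep ~3 mantissa bits and
    # clamp to the format's maximum magnitude (448).
    m, e = np.frexp(x)            # mantissa in [0.5, 1)
    m = np.round(m * 16) / 16     # round mantissa to 1/16 steps
    return np.clip(np.ldexp(m, e), -448.0, 448.0)

def fp8_gemm_finegrained(A, B, block=2):
    # Scale each `block`-wide slice of the K dimension independently so it
    # fills the FP8 range, cast, multiply, then undo the scales.
    M, K = A.shape
    _, N = B.shape
    C = np.zeros((M, N))
    for k0 in range(0, K, block):
        a = A[:, k0:k0 + block]
        b = B[k0:k0 + block, :]
        sa = np.abs(a).max() / 448.0 + 1e-12
        sb = np.abs(b).max() / 448.0 + 1e-12
        C += (fake_e4m3(a / sa) @ fake_e4m3(b / sb)) * (sa * sb)
    return C

rng = np.random.default_rng(0)
A = rng.standard_normal((8, 8))
B = rng.standard_normal((8, 8))
C = fp8_gemm_finegrained(A, B)
```

The per-block scales keep the quantization error proportional to each block's own magnitude, which is the point of fine-grained scaling when operand ranges vary across a tensor.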

FlashInfer: Kernel Library for LLM Serving

Cuda · 4,018 stars · 558 forks · Updated Nov 5, 2025

Tile primitives for speedy kernels

Cuda · 2,865 stars · 191 forks · Updated Nov 4, 2025
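Tile primitives build kernels around small fixed-size matrix tiles that live in shared memory or registers. The access pattern can be sketched in NumPy as a tiled matmul (an illustration of the general technique, not this library's API):

```python
import numpy as np

def tiled_matmul(A, B, tile=4):
    # Compute C = A @ B one (tile x tile) output tile at a time,
    # accumulating over tile-sized slices of the shared K dimension --
    # the pattern that tile primitives map onto GPU shared memory.
    M, K = A.shape
    _, N = B.shape
    C = np.zeros((M, N))
    for i in range(0, M, tile):
        for j in range(0, N, tile):
            acc = np.zeros((min(tile, M - i), min(tile, N - j)))
            for k in range(0, K, tile):
                acc += A[i:i + tile, k:k + tile] @ B[k:k + tile, j:j + tile]
            C[i:i + tile, j:j + tile] = acc
    return C

rng = np.random.default_rng(2)
A = rng.standard_normal((6, 5))
B = rng.standard_normal((5, 7))
C = tiled_matmul(A, B, tile=4)
```

On a GPU, each output tile maps to a thread block and the inner `k` loop stages operand tiles through fast on-chip memory; the NumPy version only shows the loop structure.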

[ICLR2025, ICML2025, NeurIPS2025 Spotlight] Quantized Attention achieves speedup of 2-5x compared to FlashAttention, without losing end-to-end metrics across language, image, and video models.

Cuda · 2,622 stars · 257 forks · Updated Oct 28, 2025
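The idea behind quantized attention is to compute the QK^T score matrix in low precision (e.g. INT8) and dequantize before the softmax, so the expensive matmul runs on fast integer units. A minimal NumPy sketch of that idea (illustrative only; the actual kernels are fused GPU implementations with finer-grained quantization):

```python
import numpy as np

def quantize_int8(x):
    # Symmetric per-tensor INT8 quantization: map max |x| to 127.
    scale = np.abs(x).max() / 127.0
    q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
    return q, scale

def quantized_attention(Q, K, V):
    # Quantize Q and K to INT8, compute scores with an integer matmul,
    # dequantize before softmax; V stays in floating point.
    qQ, sQ = quantize_int8(Q)
    qK, sK = quantize_int8(K)
    scores = (qQ.astype(np.int32) @ qK.astype(np.int32).T) * (sQ * sK)
    scores = scores / np.sqrt(Q.shape[-1])
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V

def attention(Q, K, V):
    # Full-precision reference for comparison.
    s = (Q @ K.T) / np.sqrt(Q.shape[-1])
    w = np.exp(s - s.max(axis=-1, keepdims=True))
    w /= w.sum(axis=-1, keepdims=True)
    return w @ V

rng = np.random.default_rng(0)
Q = rng.standard_normal((4, 8))
K = rng.standard_normal((4, 8))
V = rng.standard_normal((4, 8))

# INT8-quantized scores closely track the full-precision result.
err = np.abs(quantized_attention(Q, K, V) - attention(Q, K, V)).max()
```

Because the softmax is insensitive to small score perturbations, the quantization error in the scores translates into only a minor change in the output, which is why end-to-end metrics can survive the speedup.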