11 starred repositories written in CUDA

LLM training in simple, raw C/CUDA

CUDA · 28,807 stars · 3,376 forks · Updated Jun 26, 2025

A massively parallel, optimal functional runtime in Rust

CUDA · 11,206 stars · 434 forks · Updated Nov 21, 2024

[ICLR 2025, ICML 2025, NeurIPS 2025 Spotlight] Quantized attention that achieves a 2-5x speedup over FlashAttention without losing end-to-end metrics across language, image, and video models.

CUDA · 3,146 stars · 338 forks · Updated Jan 17, 2026
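The core recipe behind quantized attention of this kind is easy to sketch: quantize Q and K to INT8 with per-tensor scales, accumulate the QK^T logits in int32, and dequantize before the softmax. Below is a minimal CUDA sketch of just that step; it illustrates the general idea, not this repository's kernels (which add per-block scales, key smoothing, and tensor-core INT8/FP8 paths).

```cuda
#include <cstdint>
#include <cstdio>
#include <cmath>
#include <cuda_runtime.h>

// One thread per (query, key) pair:
// logit = sq * sk / sqrt(d) * sum_i Q8[q,i] * K8[k,i], accumulated in int32.
__global__ void int8_logits(const int8_t* Q8, const int8_t* K8, float* S,
                            int n, int d, float sq, float sk) {
    int q = blockIdx.y * blockDim.y + threadIdx.y;
    int k = blockIdx.x * blockDim.x + threadIdx.x;
    if (q >= n || k >= n) return;
    int acc = 0;
    for (int i = 0; i < d; ++i)
        acc += int(Q8[q * d + i]) * int(K8[k * d + i]);
    S[q * n + k] = sq * sk * float(acc) / sqrtf(float(d));
}

// Symmetric per-tensor quantization: scale = max|x| / 127, x8 = round(x / scale).
static float quantize(const float* x, int8_t* x8, int n) {
    float amax = 0.f;
    for (int i = 0; i < n; ++i) amax = fmaxf(amax, fabsf(x[i]));
    float scale = amax > 0.f ? amax / 127.f : 1.f;
    for (int i = 0; i < n; ++i) x8[i] = (int8_t)lrintf(x[i] / scale);
    return scale;
}

int main() {
    const int n = 4, d = 8;
    float Q[n * d], K[n * d];
    for (int i = 0; i < n * d; ++i) { Q[i] = sinf((float)i); K[i] = cosf((float)i); }
    int8_t Q8[n * d], K8[n * d];
    float sq = quantize(Q, Q8, n * d), sk = quantize(K, K8, n * d);

    int8_t *dQ, *dK; float* dS;
    cudaMalloc(&dQ, n * d); cudaMalloc(&dK, n * d);
    cudaMalloc(&dS, n * n * sizeof(float));
    cudaMemcpy(dQ, Q8, n * d, cudaMemcpyHostToDevice);
    cudaMemcpy(dK, K8, n * d, cudaMemcpyHostToDevice);
    dim3 block(16, 16), grid((n + 15) / 16, (n + 15) / 16);
    int8_logits<<<grid, block>>>(dQ, dK, dS, n, d, sq, sk);
    float S[n * n];
    cudaMemcpy(S, dS, n * n * sizeof(float), cudaMemcpyDeviceToHost);
    printf("S[0,0] = %f (dequantized INT8 logit)\n", S[0]);
    cudaFree(dQ); cudaFree(dK); cudaFree(dS);
}
```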

Reference implementation of the Megalodon 7B model

CUDA · 528 stars · 53 forks · Updated May 17, 2025

[ICML 2024] Quest: Query-Aware Sparsity for Efficient Long-Context LLM Inference

CUDA · 372 stars · 40 forks · Updated Jul 10, 2025
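The query-aware criterion in this line of work is cheap to sketch: for each KV-cache page, keep the channel-wise min and max of its keys; then, for a query q, the value sum_i max(q_i * kmin_i, q_i * kmax_i) upper-bounds q·k for every key on that page, so only the top-scoring pages need to be loaded for attention. A minimal sketch of that bound follows, with hypothetical shapes and names, not the repo's paged-KV inference kernels.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// One thread per page: bound[p] >= max over keys k in page p of dot(q, k),
// since each coordinate q_i * k_i is at most max(q_i * kmin_i, q_i * kmax_i).
__global__ void page_bounds(const float* q,     // [d]
                            const float* kmin,  // [pages * d] per-channel mins
                            const float* kmax,  // [pages * d] per-channel maxes
                            float* bound, int pages, int d) {
    int p = blockIdx.x * blockDim.x + threadIdx.x;
    if (p >= pages) return;
    float ub = 0.f;
    for (int i = 0; i < d; ++i)
        ub += fmaxf(q[i] * kmin[p * d + i], q[i] * kmax[p * d + i]);
    bound[p] = ub;  // pages are then ranked; only the top-k are attended
}

int main() {
    const int pages = 3, d = 4;
    float q[d] = {1.f, -2.f, 0.5f, 0.f};
    float kmin[pages * d], kmax[pages * d];
    for (int i = 0; i < pages * d; ++i) { kmin[i] = -0.1f * i; kmax[i] = 0.2f * i; }
    float *dq, *dmin, *dmax, *db;
    cudaMalloc(&dq, sizeof q); cudaMalloc(&dmin, sizeof kmin);
    cudaMalloc(&dmax, sizeof kmax); cudaMalloc(&db, pages * sizeof(float));
    cudaMemcpy(dq, q, sizeof q, cudaMemcpyHostToDevice);
    cudaMemcpy(dmin, kmin, sizeof kmin, cudaMemcpyHostToDevice);
    cudaMemcpy(dmax, kmax, sizeof kmax, cudaMemcpyHostToDevice);
    page_bounds<<<1, pages>>>(dq, dmin, dmax, db, pages, d);
    float b[pages];
    cudaMemcpy(b, db, sizeof b, cudaMemcpyDeviceToHost);
    for (int p = 0; p < pages; ++p) printf("page %d bound %.3f\n", p, b[p]);
    cudaFree(dq); cudaFree(dmin); cudaFree(dmax); cudaFree(db);
}
```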

Efficient GPU support for LLM inference with x-bit quantization (e.g., FP6, FP5).

CUDA · 276 stars · 23 forks · Updated Jul 16, 2025
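The awkward part of x-bit formats like FP6 is that values do not align to byte boundaries: four 6-bit values pack into three bytes, so the GPU has to unpack and dequantize on the fly. Below is a hedged sketch of that unpacking, assuming a sign/exponent/mantissa e3m2 layout for FP6; the repository's actual bit layout and tensor-core dequantization pipeline are considerably more involved.

```cuda
#include <cstdint>
#include <cstdio>
#include <cuda_runtime.h>

// Decode one 6-bit float, assumed layout: 1 sign, 3 exponent (bias 3), 2 mantissa.
__device__ float fp6_e3m2_to_f32(uint32_t v) {
    float sign = (v & 0x20) ? -1.f : 1.f;
    uint32_t e = (v >> 2) & 0x7, m = v & 0x3;
    if (e == 0) return sign * (m / 4.f) * 0.25f;         // subnormal: (m/4) * 2^(1-3)
    return sign * (1.f + m / 4.f) * exp2f((int)e - 3);   // normal
}

// Each thread unpacks 4 six-bit values from 3 consecutive bytes.
__global__ void unpack_fp6(const uint8_t* packed, float* out, int groups) {
    int g = blockIdx.x * blockDim.x + threadIdx.x;
    if (g >= groups) return;
    uint32_t bits = packed[3 * g] | (packed[3 * g + 1] << 8) | (packed[3 * g + 2] << 16);
    for (int i = 0; i < 4; ++i)
        out[4 * g + i] = fp6_e3m2_to_f32((bits >> (6 * i)) & 0x3F);
}

int main() {
    // 0b001110 = +1.5 under e3m2 (e=3 -> 2^0, m=2 -> 1.5), packed four times.
    const int groups = 1;
    uint8_t packed[3] = {0x8E, 0xE3, 0x38};  // 001110 x4, little-endian bit stream
    uint8_t* dp; float* dout;
    cudaMalloc(&dp, sizeof packed); cudaMalloc(&dout, 4 * sizeof(float));
    cudaMemcpy(dp, packed, sizeof packed, cudaMemcpyHostToDevice);
    unpack_fp6<<<1, groups>>>(dp, dout, groups);
    float out[4];
    cudaMemcpy(out, dout, sizeof out, cudaMemcpyDeviceToHost);
    printf("%f %f %f %f\n", out[0], out[1], out[2], out[3]);  // 1.5 four times
    cudaFree(dp); cudaFree(dout);
}
```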

Implementation of fused cosine similarity attention in the same style as Flash Attention

CUDA · 220 stars · 12 forks · Updated Feb 13, 2023
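Cosine-similarity attention replaces the usual scaled dot product with the cosine between each query and key: both rows are l2-normalized so logits live in [-1, 1], then multiplied by a (typically learned) scale before the softmax. The sketch below keeps one thread per query row and uses an online, Flash-style softmax so no n×n score matrix is materialized; it is a naive illustration under the assumption d <= 64, not the repo's fused, tiled kernel.

```cuda
#include <cstdio>
#include <cmath>
#include <cuda_runtime.h>

#define MAX_D 64  // assumed head-dimension cap for the thread-local buffers

// One thread per query row: logits are scale * cos(q, k); the softmax runs
// online (running max m, running sum l), as in Flash Attention.
__global__ void cosine_attn(const float* Q, const float* K, const float* V,
                            float* O, int n, int d, float scale) {
    int q = blockIdx.x * blockDim.x + threadIdx.x;
    if (q >= n) return;
    float qn[MAX_D], acc[MAX_D];
    float qq = 1e-12f;
    for (int i = 0; i < d; ++i) qq += Q[q * d + i] * Q[q * d + i];
    float qinv = rsqrtf(qq);
    for (int i = 0; i < d; ++i) { qn[i] = Q[q * d + i] * qinv; acc[i] = 0.f; }
    float m = -INFINITY, l = 0.f;
    for (int k = 0; k < n; ++k) {
        float kk = 1e-12f, dot = 0.f;
        for (int i = 0; i < d; ++i) {
            kk  += K[k * d + i] * K[k * d + i];
            dot += qn[i] * K[k * d + i];
        }
        float s = scale * dot * rsqrtf(kk);      // scale * cosine similarity
        float mnew = fmaxf(m, s);
        float corr = expf(m - mnew), p = expf(s - mnew);
        l = l * corr + p;
        for (int i = 0; i < d; ++i) acc[i] = acc[i] * corr + p * V[k * d + i];
        m = mnew;
    }
    for (int i = 0; i < d; ++i) O[q * d + i] = acc[i] / l;
}

int main() {
    const int n = 3, d = 4;
    float Q[n * d], K[n * d], V[n * d];
    for (int i = 0; i < n * d; ++i) { Q[i] = sinf((float)i); K[i] = cosf((float)i); V[i] = 0.1f * i; }
    float *dQ, *dK, *dV, *dO;
    cudaMalloc(&dQ, sizeof Q); cudaMalloc(&dK, sizeof K);
    cudaMalloc(&dV, sizeof V); cudaMalloc(&dO, sizeof Q);
    cudaMemcpy(dQ, Q, sizeof Q, cudaMemcpyHostToDevice);
    cudaMemcpy(dK, K, sizeof K, cudaMemcpyHostToDevice);
    cudaMemcpy(dV, V, sizeof V, cudaMemcpyHostToDevice);
    cosine_attn<<<1, n>>>(dQ, dK, dV, dO, n, d, 10.f);  // scale ~ learned temperature
    float O[n * d];
    cudaMemcpy(O, dO, sizeof O, cudaMemcpyDeviceToHost);
    printf("O[0] = %.4f %.4f %.4f %.4f\n", O[0], O[1], O[2], O[3]);
    cudaFree(dQ); cudaFree(dK); cudaFree(dV); cudaFree(dO);
}
```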

Differentiable Weightless Neural Networks

CUDA · 33 stars · 9 forks · Updated Feb 2, 2026
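A weightless neuron has no multiplies at all: its n binary inputs are concatenated into an address that indexes a 2^n-entry lookup table, and the differentiable version learns the table contents through a soft relaxation of that indexing. Here is a sketch of the forward lookup only, with hypothetical wiring and names; the relaxation and training code live in the repo.

```cuda
#include <cstdint>
#include <cstdio>
#include <cuda_runtime.h>

// One thread per RAM neuron: gather `fanin` input bits along a fixed wiring
// `conn`, form an address, and read the learned table entry.
__global__ void ram_forward(const uint8_t* bits,  // [n_inputs] 0/1 activations
                            const int* conn,      // [neurons * fanin] wiring
                            const float* lut,     // [neurons * 2^fanin] tables
                            float* y, int neurons, int fanin) {
    int j = blockIdx.x * blockDim.x + threadIdx.x;
    if (j >= neurons) return;
    unsigned addr = 0;
    for (int b = 0; b < fanin; ++b)
        addr = (addr << 1) | bits[conn[j * fanin + b]];
    y[j] = lut[j * (1u << fanin) + addr];
}

int main() {
    const int neurons = 2, fanin = 2, n_inputs = 3;
    uint8_t bits[n_inputs] = {1, 0, 1};
    int conn[neurons * fanin] = {0, 1,  1, 2};  // neuron 0 reads bits 0,1; neuron 1 reads 1,2
    float lut[neurons * 4];                     // 2^fanin = 4 entries per neuron
    for (int i = 0; i < neurons * 4; ++i) lut[i] = 0.1f * i;
    uint8_t* db; int* dc; float *dl, *dy;
    cudaMalloc(&db, sizeof bits); cudaMalloc(&dc, sizeof conn);
    cudaMalloc(&dl, sizeof lut); cudaMalloc(&dy, neurons * sizeof(float));
    cudaMemcpy(db, bits, sizeof bits, cudaMemcpyHostToDevice);
    cudaMemcpy(dc, conn, sizeof conn, cudaMemcpyHostToDevice);
    cudaMemcpy(dl, lut, sizeof lut, cudaMemcpyHostToDevice);
    ram_forward<<<1, neurons>>>(db, dc, dl, dy, neurons, fanin);
    float y[neurons];
    cudaMemcpy(y, dy, sizeof y, cudaMemcpyDeviceToHost);
    printf("y = %.2f %.2f\n", y[0], y[1]);  // addr 0b10 -> lut[2]; addr 0b01 -> lut[5]
    cudaFree(db); cudaFree(dc); cudaFree(dl); cudaFree(dy);
}
```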

Code for the paper "Cottention: Linear Transformers With Cosine Attention"

CUDA · 20 stars · Updated Nov 15, 2025
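What makes cosine attention "linear" is that, with the rows of Q and K l2-normalized and the softmax dropped, the output (Q K^T) V can be re-associated as Q (K^T V): a d×d summary instead of an n×n score matrix, so the cost falls from O(n²d) to O(nd²). A minimal sketch of that reassociation, assuming pre-normalized inputs; the paper adds a normalization term and a recurrent form that this omits.

```cuda
#include <cstdio>
#include <cmath>
#include <cuda_runtime.h>

// Stage 1: M = K^T V, a d x d summary of the whole sequence.
__global__ void kt_v(const float* K, const float* V, float* M, int n, int d) {
    int i = blockIdx.x, j = threadIdx.x;       // M[i][j]
    if (i >= d || j >= d) return;
    float s = 0.f;
    for (int t = 0; t < n; ++t) s += K[t * d + i] * V[t * d + j];
    M[i * d + j] = s;
}

// Stage 2: O = Q M, so each query costs O(d^2) instead of O(n d).
__global__ void q_m(const float* Q, const float* M, float* O, int n, int d) {
    int t = blockIdx.x, j = threadIdx.x;       // O[t][j]
    if (t >= n || j >= d) return;
    float s = 0.f;
    for (int i = 0; i < d; ++i) s += Q[t * d + i] * M[i * d + j];
    O[t * d + j] = s;
}

int main() {
    const int n = 3, d = 2;
    float Q[n * d] = {1, 0,  0, 1,  1, 1};
    float K[n * d] = {1, 1,  1, 0,  0, 1};
    float V[n * d] = {1, 2,  3, 4,  5, 6};
    // l2-normalize the rows of Q and K on the host (the cosine premise)
    for (int t = 0; t < n; ++t) {
        float qn = 0.f, kn = 0.f;
        for (int i = 0; i < d; ++i) { qn += Q[t*d+i] * Q[t*d+i]; kn += K[t*d+i] * K[t*d+i]; }
        for (int i = 0; i < d; ++i) { Q[t*d+i] /= sqrtf(qn); K[t*d+i] /= sqrtf(kn); }
    }
    float *dQ, *dK, *dV, *dM, *dO;
    cudaMalloc(&dQ, sizeof Q); cudaMalloc(&dK, sizeof K); cudaMalloc(&dV, sizeof V);
    cudaMalloc(&dM, d * d * sizeof(float)); cudaMalloc(&dO, sizeof Q);
    cudaMemcpy(dQ, Q, sizeof Q, cudaMemcpyHostToDevice);
    cudaMemcpy(dK, K, sizeof K, cudaMemcpyHostToDevice);
    cudaMemcpy(dV, V, sizeof V, cudaMemcpyHostToDevice);
    kt_v<<<d, d>>>(dK, dV, dM, n, d);
    q_m<<<n, d>>>(dQ, dM, dO, n, d);
    float O[n * d];
    cudaMemcpy(O, dO, sizeof O, cudaMemcpyDeviceToHost);
    printf("O[0] = %.3f %.3f\n", O[0], O[1]);
    cudaFree(dQ); cudaFree(dK); cudaFree(dV); cudaFree(dM); cudaFree(dO);
}
```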

CUDA implementation of Wavelet KAN.

CUDA · 16 stars · 2 forks · Updated Jun 8, 2024
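In a wavelet KAN, every edge applies a learnable wavelet instead of a spline: with the Mexican-hat mother wavelet psi(z) = (1 - z²) exp(-z²/2), each edge learns a scale s, translation t, and weight w, and a neuron sums its incoming edges. A sketch of one layer's forward pass under that parameterization; the repo's parameter layout and choice of wavelets may differ.

```cuda
#include <cstdio>
#include <cmath>
#include <cuda_runtime.h>

// One thread per output neuron j:
// y[j] = sum_i w[j,i] * psi((x[i] - t[j,i]) / s[j,i]),
// psi(z) = (1 - z^2) * exp(-z^2 / 2)  (Mexican-hat wavelet).
__global__ void wavelet_kan(const float* x, const float* w, const float* s,
                            const float* t, float* y, int in, int out) {
    int j = blockIdx.x * blockDim.x + threadIdx.x;
    if (j >= out) return;
    float acc = 0.f;
    for (int i = 0; i < in; ++i) {
        float z = (x[i] - t[j * in + i]) / s[j * in + i];
        acc += w[j * in + i] * (1.f - z * z) * expf(-0.5f * z * z);
    }
    y[j] = acc;
}

int main() {
    const int in = 3, out = 2;
    float x[in] = {0.1f, -0.4f, 0.7f};
    float w[out * in], s[out * in], t[out * in];
    for (int i = 0; i < out * in; ++i) { w[i] = 1.f; s[i] = 1.f; t[i] = 0.f; }
    float *dx, *dw, *ds, *dt, *dy;
    cudaMalloc(&dx, sizeof x); cudaMalloc(&dw, sizeof w);
    cudaMalloc(&ds, sizeof s); cudaMalloc(&dt, sizeof t);
    cudaMalloc(&dy, out * sizeof(float));
    cudaMemcpy(dx, x, sizeof x, cudaMemcpyHostToDevice);
    cudaMemcpy(dw, w, sizeof w, cudaMemcpyHostToDevice);
    cudaMemcpy(ds, s, sizeof s, cudaMemcpyHostToDevice);
    cudaMemcpy(dt, t, sizeof t, cudaMemcpyHostToDevice);
    wavelet_kan<<<1, out>>>(dx, dw, ds, dt, dy, in, out);
    float y[out];
    cudaMemcpy(y, dy, sizeof y, cudaMemcpyDeviceToHost);
    printf("y = %.4f %.4f\n", y[0], y[1]);
    cudaFree(dx); cudaFree(dw); cudaFree(ds); cudaFree(dt); cudaFree(dy);
}
```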