- Espoo,Finland
- www.linkedin.co/in/saiksaketh
Stars
The official repository of SINQ, a novel, fast, and high-quality quantization method designed to make any Large Language Model smaller while preserving accuracy.
Measure and optimize the energy consumption of your AI applications!
saiksaketh/mdx (forked from Mahdi-Abdollahpour/mdx): Efficient AI-Enhanced 5G PUSCH Receiver
Implementation of Axial attention - attending to multi-dimensional data efficiently
A curated list of materials on AI efficiency
Python code for the "Probabilistic Machine Learning" book by Kevin Murphy
Efficient Knowledge Injection in LLMs via Self-Distillation (TMLR)
Implement a ChatGPT-like LLM in PyTorch from scratch, step by step
An official implementation of "Scheduling Weight Transitions for Quantization-Aware Training" (ICCV 2025) in PyTorch.
Neural network quantization for research and prototyping
Real-Time Inference of 5G NR Multi-user MIMO Neural Receivers
Complete solutions to Programming Massively Parallel Processors, 4th Edition
Source code of the Paper "Sparse Bayesian Generative Modeling for Compressive Sensing" (NeurIPS 24)
Code for the book "The Elements of Differentiable Programming".
Sionna Research Kit: A GPU-Accelerated Research Platform for AI-RAN
[ICML 2023] Official PyTorch implementation of Global Context Vision Transformers
Course materials for MIT6.5940: TinyML and Efficient Deep Learning Computing
CS433 project: implementation of the post-training quantization methods ACIQ and AdaRound.
Code for the paper "Cauchy-Schwarz Regularizers" from ICLR 2025
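Several of the starred repositories above (SINQ, AIMET-style toolkits, ACIQ/AdaRound) revolve around quantization. As a minimal sketch of the common baseline these methods improve on, the snippet below implements symmetric round-to-nearest uniform post-training quantization of a weight tensor; the function names and the per-tensor scaling choice are illustrative assumptions, not any specific repository's API.

```python
import numpy as np

def quantize_uniform(w, n_bits=8):
    """Symmetric round-to-nearest uniform quantization (per-tensor scale).

    This is the simple baseline; methods like ACIQ or AdaRound refine
    the clipping threshold or the rounding decision, respectively.
    """
    qmax = 2 ** (n_bits - 1) - 1                  # e.g. 127 for int8
    scale = max(np.abs(w).max(), 1e-12) / qmax    # guard against all-zero w
    q = np.clip(np.round(w / scale), -qmax, qmax).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Map integer codes back to approximate float weights."""
    return q.astype(np.float32) * scale

# Toy usage: quantize a random weight matrix and check reconstruction error.
w = np.random.randn(4, 4).astype(np.float32)
q, s = quantize_uniform(w)
w_hat = dequantize(q, s)
print(np.abs(w - w_hat).max())  # bounded by half a quantization step
```

With round-to-nearest and a scale chosen so the largest weight maps exactly to `qmax`, the elementwise reconstruction error is bounded by half a quantization step, `scale / 2`.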