Lossy PNG compressor — pngquant command based on libimagequant library
[NeurIPS 2020] MCUNet: Tiny Deep Learning on IoT Devices; [NeurIPS 2021] MCUNetV2: Memory-Efficient Patch-based Inference for Tiny Deep Learning; [NeurIPS 2022] MCUNetV3: On-Device Training Under 256KB Memory
Neural networks with low-bit weights on low-end 32-bit microcontrollers such as the CH32V003 RISC-V microcontroller and others
🎨 Convert images to 15/16-bit RGB color with dither
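Reducing 24-bit RGB to a 16-bit format is usually done by dropping the low bits of each channel, with an ordered-dither offset added first to hide banding. A minimal sketch in C, assuming a 4x4 Bayer matrix and RGB565 packing; the function names are illustrative and not taken from the listed project:

```c
#include <stdint.h>

/* 4x4 Bayer ordered-dither thresholds (0..15). */
static const uint8_t BAYER4[4][4] = {
    { 0,  8,  2, 10},
    {12,  4, 14,  6},
    { 3, 11,  1,  9},
    {15,  7, 13,  5},
};

/* Truncate an 8-bit channel to bits_kept bits, after adding a dither
 * offset scaled into the range of the bits being dropped. */
static uint8_t dither_channel(uint8_t v, int bits_kept, uint8_t threshold)
{
    int dropped = 8 - bits_kept;
    int offset = (threshold << dropped) >> 4;  /* map 0..15 into dropped-bit range */
    int d = v + offset;
    if (d > 255) d = 255;                      /* clamp before truncating */
    return (uint8_t)(d >> dropped);
}

/* Pack one pixel into RGB565 (5 bits red, 6 green, 5 blue); (x, y) selects
 * the dither threshold so the pattern varies across the image. */
uint16_t rgb888_to_rgb565_dithered(uint8_t r, uint8_t g, uint8_t b, int x, int y)
{
    uint8_t t = BAYER4[y & 3][x & 3];
    uint16_t r5 = dither_channel(r, 5, t);
    uint16_t g6 = dither_channel(g, 6, t);
    uint16_t b5 = dither_channel(b, 5, t);
    return (uint16_t)((r5 << 11) | (g6 << 5) | b5);
}
```

A 15-bit RGB555 variant would keep 5 bits per channel and shift red by 10 instead of 11.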
Off-the-charts color quantization 🎨
Subband filtering with ADPCM
Color quantization/palette generation for PNG images
Quantized Memory-Augmented Neural Networks (AAAI-18)
Extremely fast color quantization: reduces the color information of a 24-bit RGB bitmap down to 8 bits.
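The fastest way to get from 24-bit RGB to an 8-bit index is a fixed uniform palette rather than an adaptive one. A minimal sketch, assuming a 3-3-2 bit split (3 bits red, 3 green, 2 blue); real quantizers such as the listed libraries build adaptive palettes (median cut, k-means) for much better quality, and these function names are illustrative:

```c
#include <stdint.h>

/* Map a 24-bit RGB pixel to an 8-bit index: keep the top 3 bits of red
 * and green and the top 2 bits of blue, packed as RRRGGGBB. */
uint8_t rgb_to_332(uint8_t r, uint8_t g, uint8_t b)
{
    return (uint8_t)((r & 0xE0) | ((g & 0xE0) >> 3) | (b >> 6));
}

/* Recover an approximate 24-bit color from the 8-bit index for display. */
void index_to_rgb(uint8_t idx, uint8_t *r, uint8_t *g, uint8_t *b)
{
    *r = (uint8_t)(idx & 0xE0);
    *g = (uint8_t)((idx & 0x1C) << 3);
    *b = (uint8_t)((idx & 0x03) << 6);
}
```

Because the palette is fixed, the forward mapping is a few bit operations per pixel with no palette search, which is what makes this approach so fast.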
V-lang API wrapper for llm-inference chatllm.cpp
The purpose of this project is to compare different ways of computing the convolution operation, and to see whether naive quantization actually speeds it up.
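"Naive" quantization for convolution typically means scaling float weights and inputs to int8 with a single fixed scale, accumulating in 32-bit integers, and rescaling at the end. A minimal sketch under those assumptions; the fixed scale and function names are illustrative, not taken from the listed project:

```c
#include <stdint.h>
#include <stddef.h>

#define SCALE 64.0f   /* assumed fixed scale; real code would calibrate this */

/* Quantize float values to int8 with saturation. */
void quantize_weights(const float *w, int8_t *q, size_t n)
{
    for (size_t i = 0; i < n; i++) {
        float v = w[i] * SCALE;
        if (v > 127.0f) v = 127.0f;
        if (v < -128.0f) v = -128.0f;
        q[i] = (int8_t)v;
    }
}

/* Valid-mode 1-D convolution over quantized data; out has length n - k + 1.
 * The 32-bit accumulator avoids overflow of int8 * int8 products. */
void conv1d_q(const int8_t *x, size_t n, const int8_t *w, size_t k, float *out)
{
    for (size_t i = 0; i + k <= n; i++) {
        int32_t acc = 0;
        for (size_t j = 0; j < k; j++)
            acc += (int32_t)x[i + j] * (int32_t)w[j];
        out[i] = (float)acc / (SCALE * SCALE);  /* undo both scale factors */
    }
}
```

Whether the integer inner loop actually beats the float one depends on the target: it tends to win on CPUs with SIMD int8 dot-product support and lose when the quantize/dequantize overhead dominates.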
Clean C implementation for quantizing the llama2 model and running the quantized model
Code and resources for the paper: "Cognitive Radio Spectrum Sensing on the Edge: A Quantization-Aware Deep Learning Approach"
Nim API wrapper for llm-inference chatllm.cpp
The Quantizer - A Swift-based reimplementation of ImageAlpha
A quantized TensorFlow Lite–based real-time object detection system on ESP32-CAM, optimized with the EON™ Compiler for low-latency, low-memory, and portable AI-IoT deployment.
Kotlin API wrapper for llm-inference chatllm.cpp
Code for "Characterising Across Stack Optimisations for Deep Convolutional Neural Networks"
Neural Network C is an advanced neural network implementation in pure C, optimized for high performance on CPUs and NVIDIA GPUs.