Awesome Deep Model Compression (Updated Apr 13, 2021)
Model distillation of CNNs for classification of Seafood Images in PyTorch
PyTorch Implementation of Matching Guided Distillation [ECCV'20]
[Master's Thesis] Research project at the Data Analytics Lab in collaboration with Daedalean AI. The thesis was submitted to both ETH Zürich and Imperial College London.
The Codebase for Causal Distillation for Language Models (NAACL '22)
🚀 PyTorch Implementation of "Progressive Distillation for Fast Sampling of Diffusion Models" (v-diffusion)
The Codebase for Causal Distillation for Task-Specific Models
A framework for knowledge distillation using TensorRT inference on teacher network
Our open source implementation of MiniLMv2 (https://aclanthology.org/2021.findings-acl.188)
Repository for the publication "AutoGraph: Predicting Lane Graphs from Traffic"
Autodistill Google Cloud Vision module for use in training a custom, fine-tuned model.
Use AWS Rekognition to train custom models that you own.
Use LLaMA to label data for use in training a fine-tuned LLM.
Mechanistically interpretable neurosymbolic AI (Nature Comput Sci 2024): losslessly compressing NNs to computer code and discovering new algorithms which generalize out-of-distribution and outperform human-designed algorithms
AI community tutorial covering LoRA/QLoRA LLM fine-tuning, training GPT-2 from scratch, generative model architectures, content safety and control, model distillation techniques, DreamBooth, transfer learning, and more, for practice with real projects!
Zero-data blackbox machine translation model distillation / stealing
Images to inference with no labeling (use foundation models to train supervised models).
LUPI-OD - A novel methodology to improve object detection accuracy without increasing model size or complexity. | 2025 European Workshop on Visual Information Processing (EUVIP) | M.Sc. ICT (By Research) Dissertation | University of Malta
A repo for distilling a large teacher into a small vision-language model for efficient embodied spatial reasoning and action planning.
Educational proof-of-concept: how model distillation works, demonstrated with Claude as teacher and Llama 3.2 on Apple Silicon (MLX). Companion to "Why the Best AI You'll Ever Have Unrestricted Access To Is the AI You Have Right Now"
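The core of most of the distillation repos listed above is the soft-target loss from Hinton et al. (2015): the student is trained to match the teacher's temperature-softened output distribution. A minimal pure-Python sketch of that loss (function names and the toy logits are illustrative, not taken from any repo above):

```python
import math

def softmax(logits, T=1.0):
    """Temperature-scaled softmax; higher T produces softer targets."""
    scaled = [z / T for z in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(student_logits, teacher_logits, T=2.0):
    """KL(teacher || student) on softened distributions, scaled by T^2
    so gradients keep a comparable magnitude across temperatures."""
    p = softmax(teacher_logits, T)  # teacher's soft targets
    q = softmax(student_logits, T)  # student's softened predictions
    kl = sum(pi * (math.log(pi + 1e-12) - math.log(qi + 1e-12))
             for pi, qi in zip(p, q))
    return T * T * kl

# Toy check: a student matching the teacher incurs zero loss,
# while a mismatched one is penalized.
teacher = [4.0, 1.0, -2.0]
print(distillation_loss([4.0, 1.0, -2.0], teacher))  # 0.0 (student matches teacher)
print(distillation_loss([-2.0, 1.0, 4.0], teacher))  # positive
```

In practice this KL term is combined with the ordinary cross-entropy on hard labels, weighted by a mixing coefficient.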