Lists (1)
Starred repositories
Knowledge Distillation for LLMs
AmneziaWG installer
A coding-free framework built on PyTorch for reproducible deep learning studies. PyTorch Ecosystem. 🏆26 knowledge distillation methods presented at TPAMI, CVPR, ICLR, ECCV, NeurIPS, ICCV, AAAI, etc…
GLM-4.6V/4.5V/4.1V-Thinking: Towards Versatile Multimodal Reasoning with Scalable Reinforcement Learning
A user-friendly & efficient knowledge distillation framework for LLMs, supporting off-policy, on-policy (OPD), cross-tokenizer, multimodal, and on-policy self-distillation.
PyTorch implementation of various Knowledge Distillation (KD) methods.
Multi-Teacher Knowledge Distillation, code for my PhD dissertation. I used knowledge distillation as a decision-fusion and compressing mechanism for ensemble models.
A pipeline for LLM knowledge distillation
Qwen3.5 is the large language model series developed by the Qwen team at Alibaba Cloud.
Ready-to-use OCR with 80+ supported languages and all popular writing scripts, including Latin, Chinese, Arabic, Devanagari, Cyrillic, etc.
SystemPanic / vllm-windows
Forked from vllm-project/vllm. A high-throughput and memory-efficient inference and serving engine for LLMs (Windows build & kernels).
An open-source implementation for fine-tuning the Qwen-VL series by Alibaba Cloud.
A minimal PyTorch re-implementation of Qwen 3.5
[ICCV 2025] Official implementation of LLaVA-KD: A Framework of Distilling Multimodal Large Language Models
Code for 'Three Minds, One Student: Online Multi-Teacher Knowledge Distillation for Multimodal Recommendation'
A curated list of awesome papers on dataset distillation and related applications.
A unified library of SOTA model optimization techniques like quantization, pruning, distillation, speculative decoding, etc. It compresses deep learning models for downstream deployment frameworks …
Images to inference with no labeling (use foundation models to train supervised models).
a toolkit on knowledge distillation for large language models
Awesome Knowledge Distillation
🤗 PEFT: State-of-the-art Parameter-Efficient Fine-Tuning.
Benchmarking Vision-Language Models on OCR tasks in Dynamic Video Environments
Vision-language model fine-tuning notebooks & use cases (MedGemma, PaliGemma, Florence, …)
Easily fine-tune, evaluate and deploy gpt-oss, Qwen3, DeepSeek-R1, or any open source LLM / VLM!
[COLM'25] Official implementation of the Law of Vision Representation in MLLMs