Stars
Implement a reasoning LLM in PyTorch from scratch, step by step
Awesome Mixture of Experts (MoE): A Curated List of Mixture of Experts (MoE) and Mixture of Multimodal Experts (MoME)
A collection of token reduction (token pruning, merging, clustering, etc.) techniques for ML/AI
Library that provides metrics to assess representation quality
ImageNet-1K data download and processing for use as a dataset
Benchmark your model on out-of-distribution datasets with carefully collected human comparison data (NeurIPS 2021 Oral)
CKA (Centered Kernel Alignment) implemented in PyTorch (a minimal linear-CKA sketch follows after this list)
An open-source toolbox for fast sampling of diffusion models. Official implementations of our works published in ICML, NeurIPS, CVPR, J. Stat. Mech.
A Survey on Jailbreak Attacks and Defenses against Multimodal Generative Models
The mouse and trackpad utility for Mac.
An emoji guide for your commit messages. 😜
Stable Diffusion 3.5 Text-to-Image Fine-tuning with LoRA
Pre-training without Natural Images (ACCV 2020 Best Paper Honorable Mention Award)
A collection of text-guided image editing methods
Logseq Slide Reveal Support
TorchCFM: a Conditional Flow Matching library (a minimal CFM-objective sketch follows after this list)
Small Python library to automatically set CUDA_VISIBLE_DEVICES to the least loaded device on multi-GPU systems (a sketch of the idea follows after this list)
[ICLR 2025] A Closer Look at Machine Unlearning for Large Language Models
When are Concepts Erased from Diffusion Models? (NeurIPS 2025)
RWKU: Benchmarking Real-World Knowledge Unlearning for Large Language Models. NeurIPS 2024
Official implementation of "Diffusion Models as Cartoonists" (ICLR 2025) and "Devil is in the Details" (ICML 2025)
Simple and readable code for training and sampling from diffusion models
Yazi plugin that uses DuckDB to preview data files.
DiffusionNFT: Online Diffusion Reinforcement with Forward Process
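
For context on the CKA entry above, here is a minimal sketch of linear CKA (Kornblith et al., 2019) in plain PyTorch. The function name and example tensors are illustrative only, not the starred repo's API.

```python
import torch

def linear_cka(x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
    """Linear CKA between two representation matrices of shape (n_samples, dim):
    CKA(X, Y) = ||Y^T X||_F^2 / (||X^T X||_F * ||Y^T Y||_F) on centered features."""
    x = x - x.mean(dim=0, keepdim=True)  # center each feature dimension
    y = y - y.mean(dim=0, keepdim=True)
    cross = torch.linalg.norm(y.T @ x, ord="fro") ** 2
    norm_x = torch.linalg.norm(x.T @ x, ord="fro")
    norm_y = torch.linalg.norm(y.T @ y, ord="fro")
    return cross / (norm_x * norm_y)

# usage: compare activations of two layers on the same batch (random here)
acts_a = torch.randn(512, 256)   # hypothetical activations, layer A
acts_b = torch.randn(512, 1024)  # hypothetical activations, layer B
print(linear_cka(acts_a, acts_b).item())
```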
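For the TorchCFM entry, a minimal sketch of the straight-path conditional flow matching objective: sample t, interpolate x_t = (1 - t) x0 + t x1, and regress the model onto the target velocity x1 - x0. The network and function names are illustrative assumptions, not TorchCFM's API.

```python
import torch
import torch.nn as nn

class VelocityNet(nn.Module):
    """Tiny velocity-field network v_theta(x_t, t); architecture is illustrative."""
    def __init__(self, dim: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim + 1, 128), nn.SiLU(),
            nn.Linear(128, 128), nn.SiLU(),
            nn.Linear(128, dim),
        )

    def forward(self, x_t: torch.Tensor, t: torch.Tensor) -> torch.Tensor:
        return self.net(torch.cat([x_t, t], dim=-1))

def cfm_loss(model: nn.Module, x0: torch.Tensor, x1: torch.Tensor) -> torch.Tensor:
    """Straight-path CFM loss: x_t = (1 - t) x0 + t x1, target u_t = x1 - x0."""
    t = torch.rand(x0.shape[0], 1)
    x_t = (1 - t) * x0 + t * x1
    u_t = x1 - x0
    return ((model(x_t, t) - u_t) ** 2).mean()

# usage: x0 ~ noise, x1 ~ data batch
model = VelocityNet(dim=2)
loss = cfm_loss(model, torch.randn(64, 2), torch.randn(64, 2))
loss.backward()
```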
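For the CUDA_VISIBLE_DEVICES entry, a minimal sketch of the idea: query nvidia-smi for per-GPU memory use and expose only the least loaded device. The helper name is hypothetical and this is not the starred library's implementation.

```python
import os
import subprocess

def pick_least_loaded_gpu() -> int:
    """Return the index of the GPU with the smallest memory.used (via nvidia-smi)."""
    out = subprocess.check_output(
        ["nvidia-smi", "--query-gpu=index,memory.used",
         "--format=csv,noheader,nounits"],
        text=True,
    )
    usage = []
    for line in out.strip().splitlines():
        idx, mem = (field.strip() for field in line.split(","))
        usage.append((int(mem), int(idx)))
    return min(usage)[1]

if __name__ == "__main__":
    # Must be set before CUDA is initialized (i.e. before importing torch).
    os.environ["CUDA_VISIBLE_DEVICES"] = str(pick_least_loaded_gpu())
```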