Stars
Tensors and Dynamic neural networks in Python with strong GPU acceleration
The simplest, fastest repository for training/finetuning medium-sized GPTs.
Composable transformations of Python+NumPy programs: differentiate, vectorize, JIT to GPU/TPU, and more; a minimal sketch follows this list
Pretrain, finetune ANY AI model of ANY size on 1 or 10,000+ GPUs with zero code changes.
Fast and memory-efficient exact attention
Open source code for AlphaFold 2.
LLM Council works together to answer your hardest questions
An open source implementation of CLIP.
Enjoy the magic of Diffusion models!
PyTorch package for the discrete VAE used for DALL·E.
Official PyTorch Implementation of "Scalable Diffusion Models with Transformers"
Implementation of the LLaMA language model based on nanoGPT. Supports flash attention, Int8 and GPTQ 4bit quantization, LoRA and LLaMA-Adapter fine-tuning, pre-training. Apache 2.0-licensed.
Open-source implementation of AlphaEvolve
PyTorch Lightning + Hydra. A very user-friendly template for ML experimentation. ⚡🔥⚡
Evolutionary Scale Modeling (esm): Pretrained language models for proteins
GLIDE: a diffusion-based text-conditional image synthesis model
An autoregressive character-level language model for making more things; a toy sketch follows this list
PixArt-α: Fast Training of Diffusion Transformer for Photorealistic Text-to-Image Synthesis
Trainable, memory-efficient, and GPU-friendly PyTorch reproduction of AlphaFold 2
PyTorch code and models for V-JEPA 2 self-supervised learning from video.
Reference implementation for DPO (Direct Preference Optimization); the core loss is sketched after this list
Unifying Variational Autoencoder (VAE) implementations in PyTorch (NeurIPS 2022)
This API provides programmatic access to the AlphaGenome model developed by Google DeepMind.
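The JAX entry above centers on three composable transforms. A minimal sketch, assuming only the public jax and jax.numpy APIs (the function and shapes are illustrative):

```python
import jax
import jax.numpy as jnp

def loss(w, x):
    # scalar objective: squared norm of a linear map
    return jnp.sum((x @ w) ** 2)

grad_loss = jax.grad(loss)                            # differentiate w.r.t. w
per_example = jax.vmap(grad_loss, in_axes=(None, 0))  # vectorize over a batch of x
fast = jax.jit(per_example)                           # JIT-compile the composition

w = jnp.ones((3, 2))
xs = jnp.ones((5, 3))
print(fast(w, xs).shape)  # (5, 3, 2): one gradient per batch element
```

Because each transform returns an ordinary function, they nest in any order; the composition above computes per-example gradients in a single compiled call.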
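The character-level language model entry describes autoregressive next-character prediction. A toy bigram version in PyTorch, with a stand-in word list (the real project's datasets and model architectures differ):

```python
import torch

words = ["emma", "olivia", "ava", "isabella", "sophia"]  # stand-in data
chars = sorted(set("".join(words)))
stoi = {c: i + 1 for i, c in enumerate(chars)}  # index 0 is the start/end token
itos = {i: c for c, i in stoi.items()}

# count character-to-character transitions
N = torch.zeros((len(stoi) + 1, len(stoi) + 1))
for w in words:
    ids = [0] + [stoi[c] for c in w] + [0]
    for a, b in zip(ids, ids[1:]):
        N[a, b] += 1

# smoothed bigram probabilities: each row is a next-character distribution
P = (N + 1) / (N + 1).sum(dim=1, keepdim=True)

# sample one new "name" autoregressively, one character at a time
g = torch.Generator().manual_seed(0)
ix, out = 0, []
while True:
    ix = torch.multinomial(P[ix], 1, generator=g).item()
    if ix == 0:
        break
    out.append(itos[ix])
print("".join(out))
```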
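The DPO entry names a specific objective: a logistic loss on the policy-vs-reference log-probability margin between chosen and rejected responses. A minimal PyTorch sketch, assuming sequence log-probabilities are precomputed; the function and argument names are illustrative, not the repo's API:

```python
import torch
import torch.nn.functional as F

def dpo_loss(policy_chosen_logps, policy_rejected_logps,
             ref_chosen_logps, ref_rejected_logps, beta=0.1):
    # log-ratio of policy vs. reference for each response
    chosen_ratio = policy_chosen_logps - ref_chosen_logps
    rejected_ratio = policy_rejected_logps - ref_rejected_logps
    # push the chosen log-ratio above the rejected one
    logits = beta * (chosen_ratio - rejected_ratio)
    return -F.logsigmoid(logits).mean()
```

Here beta controls how strongly the policy is allowed to drift from the reference model while fitting the preference data.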