Stars
An open-source AI agent that brings the power of Gemini directly into your terminal.
Unified Efficient Fine-Tuning of 100+ LLMs & VLMs (ACL 2024)
A bibliography and survey of the papers surrounding o1
Composable building blocks to build LLM Apps
Code for "Diffusion Model Alignment Using Direct Preference Optimization"
MII makes low-latency and high-throughput inference possible, powered by DeepSpeed.
Welcome to the Llama Cookbook! This is your go-to guide for building with Llama: getting started with inference, fine-tuning, and RAG. We also show you how to solve end-to-end problems using Llama mode…
Official PyTorch Implementation of "Scalable Diffusion Models with Transformers"
Training and serving large-scale neural networks with auto parallelization.
OpenLLaMA, a permissively licensed open source reproduction of Meta AI’s LLaMA 7B trained on the RedPajama dataset
Implementation of the LLaMA language model based on nanoGPT. Supports flash attention, Int8 and GPTQ 4bit quantization, LoRA and LLaMA-Adapter fine-tuning, pre-training. Apache 2.0-licensed.
StableLM: Stability AI Language Models
Instruct-tune LLaMA on consumer hardware
AutoGPT is the vision of accessible AI for everyone, to use and to build on. Our mission is to provide the tools so that you can focus on what matters.
🦜🔗 The platform for reliable agents.
The largest collection of PyTorch image encoders / backbones. Including train, eval, inference, export scripts, and pretrained weights -- ResNet, ResNeXT, EfficientNet, NFNet, Vision Transformer (V…
Large-scale Self-supervised Pre-training Across Tasks, Languages, and Modalities
The repository provides code for running inference with the Segment Anything Model (SAM), links for downloading the trained model checkpoints, and example notebooks that show how to use the model.
Source code for the X Recommendation Algorithm
CLIP (Contrastive Language-Image Pretraining): predict the most relevant text snippet given an image.
High-Resolution Image Synthesis with Latent Diffusion Models
Code and documentation to train Stanford's Alpaca models, and generate the data.
Hackable and optimized Transformers building blocks, supporting a composable construction.
VISSL is FAIR's library of extensible, modular and scalable components for SOTA Self-Supervised Learning with images.
PyTorch extensions for high performance and large scale training.