Stars
AutoGPT is the vision of accessible AI for everyone, to use and to build on. Our mission is to provide the tools so that you can focus on what matters.
Robust Speech Recognition via Large-Scale Weak Supervision
The simplest, fastest repository for training/finetuning medium-sized GPTs.
Making large AI models cheaper, faster and more accessible
An open platform for training, serving, and evaluating large language models. Release repo for Vicuna and Chatbot Arena.
Code and documentation to train Stanford's Alpaca models, and generate the data.
[NeurIPS'23 Oral] Visual Instruction Tuning (LLaVA) built towards GPT-4V level capabilities and beyond.
JARVIS, a system to connect LLMs with the ML community. Paper: https://arxiv.org/pdf/2303.17580.pdf
The ChatGPT Retrieval Plugin lets you easily find personal or work documents by asking questions in natural language.
A deep-learning-based project for colorizing and restoring old images (and video!)
Private AI platform for agents, assistants and enterprise search. Built-in Agent Builder, Deep research, Document analysis, Multi-model support, and API connectivity for agents.
RWKV (pronounced RwaKuv) is an RNN with great LLM performance, which can also be directly trained like a GPT transformer (parallelizable). We are at RWKV-7 "Goose". It combines the best of RNN and transformer.
Code for the paper Hybrid Spectrogram and Waveform Source Separation
ChatRWKV is like ChatGPT but powered by RWKV (100% RNN) language model, and open source.
Text-to-3D & Image-to-3D & Mesh Export with NeRF + Diffusion.
🚴 Call stack profiler for Python. Shows you why your code is slow!
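Profilers like this one typically work by statistical sampling: periodically capturing the running thread's call stack and counting how often each function appears. The sketch below illustrates that technique with only the standard library; it is not pyinstrument's actual API, and the function names (`sample_stacks`, `slow_function`) are illustrative.

```python
import sys
import threading
import time
from collections import Counter

def sample_stacks(target_thread_id, interval=0.001, duration=0.15):
    """Periodically snapshot the target thread's call stack and
    count how often each function name appears in it."""
    counts = Counter()
    end = time.monotonic() + duration
    while time.monotonic() < end:
        frame = sys._current_frames().get(target_thread_id)
        # Walk the stack from the innermost frame outward.
        while frame is not None:
            counts[frame.f_code.co_name] += 1
            frame = frame.f_back
        time.sleep(interval)
    return counts

def slow_function():
    # Busy-loop so the sampler catches this frame on the stack.
    deadline = time.monotonic() + 0.25
    while time.monotonic() < deadline:
        sum(range(1000))

# Profile the main thread from a background sampler thread.
main_id = threading.get_ident()
results = {}
sampler = threading.Thread(
    target=lambda: results.update(sample_stacks(main_id)))
sampler.start()
slow_function()
sampler.join()

# Functions sampled most often are where the time went.
print(results.get("slow_function", 0) > 0)
```

Frequently sampled functions dominate the counts, which is how a sampling profiler "shows you why your code is slow" without instrumenting every call.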
GLM-130B: An Open Bilingual Pre-Trained Model (ICLR 2023)
This repo contains source code and materials for the TEmporally COherent GAN SIGGRAPH project.
Implementation of the LLaMA language model based on nanoGPT. Supports flash attention, Int8 and GPTQ 4bit quantization, LoRA and LLaMA-Adapter fine-tuning, pre-training. Apache 2.0-licensed.
[ICLR 2024] Fine-tuning LLaMA to follow Instructions within 1 Hour and 1.2M Parameters
Muzic: Music Understanding and Generation with Artificial Intelligence
[NeurIPS 2023] Official implementation of the paper "Segment Everything Everywhere All at Once"
State-of-the-art deep-learning-based audio codec supporting both mono 24 kHz audio and stereo 48 kHz audio.
A Code Release for Mip-NeRF 360, Ref-NeRF, and RawNeRF
🦦 Otter, a multi-modal model based on OpenFlamingo (open-sourced version of DeepMind's Flamingo), trained on MIMIC-IT and showcasing improved instruction-following and in-context learning ability.