Stars
🤗 Transformers: the model-definition framework for state-of-the-art machine learning models across text, vision, audio, and multimodal tasks, for both inference and training.
🦜🔗 The platform for reliable agents.
Tensors and Dynamic neural networks in Python with strong GPU acceleration
Robust Speech Recognition via Large-Scale Weak Supervision
Models and examples built with TensorFlow
scikit-learn: machine learning in Python
The simplest, fastest repository for training/finetuning medium-sized GPTs.
Flexible and powerful data analysis / manipulation library for Python, providing labeled data structures similar to R data.frame objects, statistical functions, and much more
Ray is an AI compute engine. Ray consists of a core distributed runtime and a set of AI Libraries for accelerating ML workloads.
Deep Learning papers reading roadmap for anyone who is eager to learn this amazing tech!
Composable transformations of Python+NumPy programs: differentiate, vectorize, JIT to GPU/TPU, and more
💫 Industrial-strength Natural Language Processing (NLP) in Python
Code and documentation to train Stanford's Alpaca models, and generate the data.
Ready-to-use OCR with 80+ supported languages and all popular writing scripts, including Latin, Chinese, Arabic, Devanagari, and Cyrillic.
An open-source RAG-based tool for chatting with your documents.
Official inference framework for 1-bit LLMs
Code for the paper "Language Models are Unsupervised Multitask Learners"
[NeurIPS'23 Oral] Visual Instruction Tuning (LLaVA) built towards GPT-4V level capabilities and beyond.
A set of examples around pytorch in Vision, Text, Reinforcement Learning, etc.
A minimal PyTorch re-implementation of the OpenAI GPT (Generative Pretrained Transformer) training
Tornado is a Python web framework and asynchronous networking library, originally developed at FriendFeed.
Magenta: Music and Art Generation with Machine Intelligence
ChatterBot is a machine-learning conversational dialog engine for creating chat bots
Code for loralib, an implementation of "LoRA: Low-Rank Adaptation of Large Language Models"