Stars
🤗 Transformers: the model-definition framework for state-of-the-art machine learning models in text, vision, audio, and multimodal domains, for both inference and training.
Models and examples built with TensorFlow
Fine-tuning & Reinforcement Learning for LLMs. 🦥 Train OpenAI gpt-oss, DeepSeek-R1, Qwen3, Gemma 3, and TTS models 2x faster with 70% less VRAM.
DeepSpeed is a deep learning optimization library that makes distributed training and inference easy, efficient, and effective.
Ray is an AI compute engine. Ray consists of a core distributed runtime and a set of AI Libraries for accelerating ML workloads.
Composable transformations of Python+NumPy programs: differentiate, vectorize, JIT to GPU/TPU, and more
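The description above is about composable transformations of numerical programs (differentiate, vectorize, JIT). As a conceptual illustration only, here is a stdlib-only toy where a `grad` transform (central finite differences, not tracing) composes with a `vmap`-style transform; this is the spirit of the API, not how the library is implemented.

```python
# Toy illustration of composable function transformations, in the spirit of
# jax.grad / jax.vmap. Pure Python: `grad` here is a finite-difference
# approximation, NOT the tracing-based autodiff the real library uses.

def grad(f, eps=1e-6):
    """Return a function approximating df/dx by central differences."""
    def df(x):
        return (f(x + eps) - f(x - eps)) / (2 * eps)
    return df

def vmap(f):
    """Return a function mapping f over a list of inputs."""
    def fv(xs):
        return [f(x) for x in xs]
    return fv

def square(x):
    return x * x

# Transformations compose: vectorize the derivative of `square`.
dsquare_batched = vmap(grad(square))
print(dsquare_batched([1.0, 2.0, 3.0]))  # ≈ [2.0, 4.0, 6.0]
```

The point of the composition pattern is that each transform takes a function and returns a function, so they stack in any order.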
Implementation of Vision Transformer, a simple way to achieve SOTA in vision classification with only a single transformer encoder, in Pytorch
The leading data integration platform for ETL / ELT data pipelines from APIs, databases & files to data warehouses, data lakes & data lakehouses. Both self-hosted and Cloud-hosted.
Build Real-Time Knowledge Graphs for AI Agents
🤗 LeRobot: Making AI for Robotics more accessible with end-to-end learning
Letta is the platform for building stateful agents: open-source AI with advanced memory that can learn and self-improve over time.
Tongyi Deep Research, the leading open-source deep-research agent.
Train transformer language models with reinforcement learning.
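The entry above describes training language models with reinforcement learning. As a heavily simplified, stdlib-only sketch of the underlying policy-gradient idea (not the library's actual PPO/GRPO trainers), here is REINFORCE on a two-armed bandit; every name and number is illustrative.

```python
import math, random

# Minimal REINFORCE on a 2-armed bandit: a stdlib-only sketch of the
# policy-gradient update that RL fine-tuning applies, at vastly larger
# scale, to transformer language models. Everything here is illustrative.

random.seed(0)
logits = [0.0, 0.0]            # policy parameters, one logit per arm
arm_reward = [0.2, 0.8]        # arm 1 pays more (deterministic reward)
lr = 0.5

def probs(ls):
    z = [math.exp(l) for l in ls]
    s = sum(z)
    return [x / s for x in z]

for step in range(500):
    p = probs(logits)
    arm = 0 if random.random() < p[0] else 1
    reward = arm_reward[arm]
    # REINFORCE: d/dlogit log pi(arm) is (1 - p[arm]) for the chosen arm
    # and -p[a] for the others; scale the step by the observed reward.
    for a in range(2):
        indicator = 1.0 if a == arm else 0.0
        logits[a] += lr * reward * (indicator - p[a])

p = probs(logits)
# After training, the policy should strongly prefer the better arm.
print(p[1] > 0.6)
```

Real LLM trainers add a reference-model KL penalty, baselines/advantages, and clipping on top of this basic gradient.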
A scalable generative AI framework built for researchers and developers working on Large Language Models, Multimodal, and Speech AI (Automatic Speech Recognition and Text-to-Speech)
verl: Volcano Engine Reinforcement Learning for LLMs
Automate the process of making money online.
PyTorch version of Stable Baselines, reliable implementations of reinforcement learning algorithms.
Wan: Open and Advanced Large-Scale Video Generative Models
An API standard for single-agent reinforcement learning environments, with popular reference environments and related utilities (formerly Gym)
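Since the entry above is an API standard, its core contract is worth showing: environments expose `reset() -> (observation, info)` and `step(action) -> (observation, reward, terminated, truncated, info)`. Below is a stdlib-only sketch of a made-up environment following that shape; it is an illustration of the interface, not code from the library.

```python
import random

# A made-up environment following the Gymnasium-style API contract:
#   reset()       -> (observation, info)
#   step(action)  -> (observation, reward, terminated, truncated, info)
# This sketches the interface shape only; it is not library code.

class GuessEnv:
    """Guess a hidden integer in [0, 9]; reward 1.0 on a correct guess."""

    def __init__(self, seed=None):
        self._rng = random.Random(seed)
        self._target = None
        self._steps = 0

    def reset(self, seed=None):
        if seed is not None:
            self._rng.seed(seed)
        self._target = self._rng.randrange(10)
        self._steps = 0
        return 0, {}  # (observation, info)

    def step(self, action):
        self._steps += 1
        terminated = action == self._target   # reached a terminal state
        truncated = self._steps >= 20         # hit the episode time limit
        reward = 1.0 if terminated else 0.0
        return 0, reward, terminated, truncated, {}

# Typical agent loop, mirroring how such environments are driven:
env = GuessEnv(seed=0)
obs, info = env.reset(seed=0)
done, total, action = False, 0.0, 0
while not done:
    obs, reward, terminated, truncated, info = env.step(action)
    total += reward
    done = terminated or truncated
    action += 1  # naive sweep over all ten possible guesses
print(total)  # 1.0 — the sweep must hit the target within 10 steps
```

The separate `terminated`/`truncated` flags are the notable change from the older Gym `done` flag: they distinguish a true terminal state from a time-limit cutoff.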
PyTorch3D is FAIR's library of reusable components for deep learning with 3D data
Moshi is a speech-text foundation model and full-duplex spoken dialogue framework. It uses Mimi, a state-of-the-art streaming neural audio codec.
Accessible large language models via k-bit quantization for PyTorch.
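To make the k-bit quantization idea above concrete, here is an absmax int8 round-trip in pure Python. This is only the basic concept; the library's real kernels are GPU code with refinements such as blockwise scaling and outlier handling.

```python
# Absmax 8-bit quantization round-trip in pure Python: the basic idea
# behind k-bit weight quantization (a conceptual sketch, not the
# library's GPU kernels, which add blockwise scaling and more).

def quantize_int8(xs):
    """Map floats to int8 values in [-127, 127] plus a per-tensor scale."""
    scale = max(abs(x) for x in xs) / 127.0
    q = [round(x / scale) for x in xs]
    return q, scale

def dequantize_int8(q, scale):
    return [v * scale for v in q]

weights = [0.31, -1.27, 0.08, 0.96, -0.44]
q, scale = quantize_int8(weights)
restored = dequantize_int8(q, scale)
max_err = max(abs(a - b) for a, b in zip(weights, restored))
# Rounding to the nearest quantization step bounds the error by half a step.
print(max_err <= scale / 2 + 1e-12)  # True
```

Storing one byte per weight plus a single float scale is where the memory savings come from, at the cost of the bounded rounding error checked above.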
The absolute trainer to light up AI agents.
An open-source framework for detecting, redacting, masking, and anonymizing sensitive data (PII) across text, images, and structured data. Supports NLP, pattern matching, and customizable pipelines.