Stars
Fair-code workflow automation platform with native AI capabilities. Combine visual building with custom code, self-host or cloud, 400+ integrations.
Speed up model training by fixing data loading.
PyTorch compiler that accelerates training and inference. Get built-in optimizations for performance, memory, and parallelism, or easily write your own.
Streamlining reinforcement learning with RLOps. State-of-the-art RL algorithms and tools, with 10x faster training through evolutionary hyperparameter optimization.
PyTorch version of Stable Baselines, reliable implementations of reinforcement learning algorithms.
High-quality single-file implementations of Deep Reinforcement Learning algorithms with research-friendly features (PPO, DQN, C51, DDPG, TD3, SAC, PPG).
An API standard for single-agent reinforcement learning environments, with popular reference environments and related utilities (formerly Gym).
A modular, primitive-first, Python-first PyTorch library for Reinforcement Learning.
A GitHub Action to automatically bump and tag master, on merge, with the latest SemVer-formatted version. Works on any platform.
Configuration classes enabling type-safe PyTorch configuration for Hydra apps
The Triton Inference Server provides an optimized cloud and edge inferencing solution.
Simple and efficient PyTorch-native transformer text generation in <1000 lines of Python.
A PyTorch-native platform for training generative AI models.
A Python-level JIT compiler designed to make unmodified PyTorch programs faster.
Run PyTorch LLMs locally on servers, desktops, and mobile.
Fine-tuning & Reinforcement Learning for LLMs. 🦥 Train OpenAI gpt-oss, DeepSeek-R1, Qwen3, Gemma 3, TTS 2x faster with 70% less VRAM.
📚 Parameterize, execute, and analyze notebooks
A modern model graph visualizer and debugger
PyTorch-native quantization and sparsity for training and inference.
Large Language Model Text Generation Inference.
A high-throughput and memory-efficient inference and serving engine for LLMs.
20+ high-performance LLMs with recipes to pretrain, finetune and deploy at scale.
Get up and running with OpenAI gpt-oss, DeepSeek-R1, Gemma 3 and other models.
A standardized distributed generative and predictive AI inference platform for scalable, multi-framework deployment on Kubernetes.