Stars
An Open Source Machine Learning Framework for Everyone
Train transformer language models with reinforcement learning.
Pruna is a model optimization framework built for developers, enabling you to deliver faster, more efficient models with minimal overhead.
The LLVM Project is a collection of modular and reusable compiler and toolchain technologies.
Open deep learning compiler stack for CPU, GPU, and specialized accelerators
🦜🔗 The platform for reliable agents.
A Python-embedded modeling language for convex optimization problems.
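To make "Python-embedded modeling language" concrete, here is a minimal sketch in the style of CVXPY's documented API, with arbitrary illustrative data: variables, objective, and constraints are plain Python objects, and solve() hands the problem to an installed convex solver.

```python
import cvxpy as cp
import numpy as np

x = cp.Variable(2)  # decision variable: a 2-vector

# Least-squares objective under simple convex constraints.
objective = cp.Minimize(cp.sum_squares(x - np.array([1.0, 2.0])))
constraints = [x >= 0, cp.sum(x) <= 2]

prob = cp.Problem(objective, constraints)
prob.solve()  # dispatches to an installed convex solver

print(prob.status, prob.value, x.value)
```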
Large Language Model Text Generation Inference
C-based/Cached/Core Computer Vision Library, A Modern Computer Vision Library
🤗 Diffusers: State-of-the-art diffusion models for image, video, and audio generation in PyTorch.
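As a rough usage sketch (the model id, prompt, and CUDA device are assumptions for illustration; weights download on first use):

```python
import torch
from diffusers import StableDiffusionPipeline

# Load a pretrained text-to-image pipeline in half precision.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
)
pipe = pipe.to("cuda")

image = pipe("a photo of an astronaut riding a horse").images[0]
image.save("astronaut.png")
```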
A concise but complete full-attention transformer with a set of promising experimental features from various papers
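A small decoder-only configuration in the style of the x-transformers README; all sizes below are illustrative:

```python
import torch
from x_transformers import TransformerWrapper, Decoder

model = TransformerWrapper(
    num_tokens=20000,
    max_seq_len=1024,
    attn_layers=Decoder(dim=512, depth=6, heads=8),
)

tokens = torch.randint(0, 20000, (1, 1024))
logits = model(tokens)  # (batch, seq_len, num_tokens) = (1, 1024, 20000)
```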
COLMAP - Structure-from-Motion and Multi-View Stereo
FastGPT is a knowledge-based platform built on LLMs that offers a comprehensive suite of out-of-the-box capabilities such as data processing, RAG retrieval, and visual AI workflow orchestration, letting you easily develop and deploy complex question-answering systems without the need for extensive setup or configuration.
Local-first AI Notepad for Private Meetings
A package with general tools for working with higher-dimensional tensor networks based on ITensor.
The definitive Web UI for local AI, with powerful features and easy setup.
The largest collection of PyTorch image encoders / backbones. Including train, eval, inference, export scripts, and pretrained weights -- ResNet, ResNeXT, EfficientNet, NFNet, Vision Transformer (ViT), and more
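Model creation and discovery go through one factory API; a minimal sketch (assumes timm is installed and pretrained weights can be downloaded):

```python
import torch
import timm

# Build a pretrained backbone by name; "resnet50" is one of many ids.
model = timm.create_model("resnet50", pretrained=True)
model.eval()

x = torch.randn(1, 3, 224, 224)  # dummy ImageNet-sized input
with torch.no_grad():
    logits = model(x)  # (1, 1000) class logits

print(timm.list_models("resnet*")[:5])  # enumerate related model names
```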
Generative Flow Networks - GFlowNet
Fast and memory-efficient exact attention
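A sketch of the functional entry point (assumes the flash-attn package and a CUDA GPU; tensors follow its (batch, seqlen, heads, headdim) layout in fp16/bf16):

```python
import torch
from flash_attn import flash_attn_func

q = torch.randn(2, 1024, 8, 64, device="cuda", dtype=torch.float16)
k = torch.randn(2, 1024, 8, 64, device="cuda", dtype=torch.float16)
v = torch.randn(2, 1024, 8, 64, device="cuda", dtype=torch.float16)

# Exact attention computed without materializing the full score matrix.
out = flash_attn_func(q, k, v, causal=True)  # (2, 1024, 8, 64)
```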
Vector (and Scalar) Quantization, in PyTorch
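A minimal sketch in the style of the library's README (the hyperparameters are illustrative):

```python
import torch
from vector_quantize_pytorch import VectorQuantize

vq = VectorQuantize(
    dim=256,             # feature dimension
    codebook_size=512,   # number of codebook entries
    decay=0.8,           # EMA decay for codebook updates
    commitment_weight=1.0,
)

x = torch.randn(1, 1024, 256)
# Returns quantized features, codebook indices, and the commitment loss.
quantized, indices, commit_loss = vq(x)  # (1, 1024, 256), (1, 1024), scalar
```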
Universal LLM Deployment Engine with ML Compilation
[ICML 2025] LaCache: Ladder-Shaped KV Caching for Efficient Long-Context Modeling of Large Language Models
Fast inference engine for Transformer models
Generative Models by Stability AI
TorchOpt is an efficient library for differentiable optimization built upon PyTorch.
This repo contains the source code for the paper "Evolution Strategies at Scale: LLM Fine-Tuning Beyond Reinforcement Learning"