Stars
Tensors and Dynamic neural networks in Python with strong GPU acceleration
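This is PyTorch's description; a minimal sketch of the tensor + autograd workflow it refers to (shapes and values here are illustrative):

```python
import torch

# Tensors with autograd; move to GPU when one is available.
device = "cuda" if torch.cuda.is_available() else "cpu"
x = torch.randn(3, 3, device=device, requires_grad=True)
y = (x ** 2).sum()
y.backward()      # reverse-mode autodiff populates x.grad
print(x.grad)     # d(sum(x^2))/dx = 2x
```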
An Open Source Machine Learning Framework for Everyone
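TensorFlow's description; a small illustrative Keras model trained on synthetic data (layer sizes and epochs are arbitrary):

```python
import tensorflow as tf

# A minimal Keras model compiled for a toy regression task.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(4,)),
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")

x = tf.random.normal((32, 4))
y = tf.reduce_sum(x, axis=1, keepdims=True)
model.fit(x, y, epochs=2, verbose=0)
```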
Composable transformations of Python+NumPy programs: differentiate, vectorize, JIT to GPU/TPU, and more
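JAX's description; a short sketch composing `grad`, `jit`, and `vmap` on a plain NumPy-style function (the function and shapes are made up for illustration):

```python
import jax
import jax.numpy as jnp

def loss(w, x):
    return jnp.sum((x @ w) ** 2)

grad_fn = jax.jit(jax.grad(loss))               # compiled gradient w.r.t. w
batched = jax.vmap(grad_fn, in_axes=(None, 0))  # vectorize over a batch of x

w = jnp.ones((3,))
xs = jnp.arange(12.0).reshape(4, 3)
print(batched(w, xs).shape)  # (4, 3)
```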
Google Research
Ray is an AI compute engine. Ray consists of a core distributed runtime and a set of AI Libraries for accelerating ML workloads.
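A minimal Ray sketch: remote tasks executed in parallel on a local runtime (the task itself is a placeholder):

```python
import ray

ray.init()  # start a local Ray runtime

@ray.remote
def square(x):
    return x * x

# Launch tasks in parallel and gather the results.
futures = [square.remote(i) for i in range(8)]
print(ray.get(futures))  # [0, 1, 4, ..., 49]
```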
Development repository for the Triton language and compiler
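A vector-add kernel in the style of the Triton tutorials; the block size and grid are illustrative, and a CUDA device is assumed:

```python
import torch
import triton
import triton.language as tl

@triton.jit
def add_kernel(x_ptr, y_ptr, out_ptr, n_elements, BLOCK: tl.constexpr):
    pid = tl.program_id(axis=0)
    offsets = pid * BLOCK + tl.arange(0, BLOCK)
    mask = offsets < n_elements          # guard the tail of the array
    x = tl.load(x_ptr + offsets, mask=mask)
    y = tl.load(y_ptr + offsets, mask=mask)
    tl.store(out_ptr + offsets, x + y, mask=mask)

x = torch.randn(4096, device="cuda")
y = torch.randn(4096, device="cuda")
out = torch.empty_like(x)
add_kernel[(triton.cdiv(x.numel(), 1024),)](x, y, out, x.numel(), BLOCK=1024)
```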
Flax is a neural network library for JAX that is designed for flexibility.
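A small Flax Linen module to illustrate the API; the layer sizes are arbitrary:

```python
import jax
import jax.numpy as jnp
from flax import linen as nn

class MLP(nn.Module):
    @nn.compact
    def __call__(self, x):
        x = nn.Dense(32)(x)
        x = nn.relu(x)
        return nn.Dense(1)(x)

model = MLP()
params = model.init(jax.random.PRNGKey(0), jnp.ones((1, 8)))  # initialize parameters
y = model.apply(params, jnp.ones((4, 8)))                     # forward pass
print(y.shape)  # (4, 1)
```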
Models and examples built with TensorFlow
Holistic Evaluation of Language Models (HELM) is an open source Python framework created by the Center for Research on Foundation Models (CRFM) at Stanford for holistic, reproducible and transparent evaluation of foundation models.
🤗 Diffusers: State-of-the-art diffusion models for image, video, and audio generation in PyTorch.
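An illustrative Diffusers pipeline; the checkpoint id, prompt, and output path are assumptions, and a CUDA device with fp16 support is assumed:

```python
import torch
from diffusers import DiffusionPipeline

# Load a pretrained text-to-image pipeline and sample one image.
pipe = DiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
image = pipe("a watercolor painting of a lighthouse").images[0]
image.save("lighthouse.png")
```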
A scalable generative AI framework built for researchers and developers working on Large Language Models, Multimodal, and Speech AI (Automatic Speech Recognition and Text-to-Speech)
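A hedged NeMo sketch of loading a pretrained ASR model; the checkpoint name and audio path are placeholders and details may differ across NeMo releases:

```python
import nemo.collections.asr as nemo_asr

# Placeholder checkpoint name and audio file; substitute your own.
model = nemo_asr.models.ASRModel.from_pretrained("stt_en_conformer_ctc_small")
transcripts = model.transcribe(["sample.wav"])
print(transcripts[0])
```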
A library for efficient similarity search and clustering of dense vectors.
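A minimal FAISS sketch: exact L2 search over random vectors (dimension and sizes are illustrative):

```python
import numpy as np
import faiss

d = 64
xb = np.random.random((1000, d)).astype("float32")  # database vectors
xq = np.random.random((3, d)).astype("float32")     # query vectors

index = faiss.IndexFlatL2(d)   # exact L2 search
index.add(xb)
distances, ids = index.search(xq, k=5)  # 5 nearest neighbours per query
print(ids.shape)  # (3, 5)
```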
Universal LLM Deployment Engine with ML Compilation
🤗 Transformers: the model-definition framework for state-of-the-art machine learning models in text, vision, audio, and multimodal domains, for both inference and training.
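A minimal Transformers sketch using the `pipeline` API; the default sentiment checkpoint is downloaded on first use:

```python
from transformers import pipeline

# Pretrained sentiment-analysis pipeline.
classifier = pipeline("sentiment-analysis")
print(classifier("This library makes shipping models easy."))
# [{'label': 'POSITIVE', 'score': ...}]
```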
H2O is an Open Source, Distributed, Fast & Scalable Machine Learning Platform: Deep Learning, Gradient Boosting (GBM) & XGBoost, Random Forest, Generalized Linear Modeling (GLM with Elastic Net), K-Means, and more.
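An illustrative H2O GBM on a tiny synthetic frame; the column names, target, and hyperparameters are arbitrary:

```python
import h2o
from h2o.estimators import H2OGradientBoostingEstimator
import numpy as np
import pandas as pd

h2o.init()  # start (or connect to) a local H2O cluster

# Tiny synthetic binary-classification frame.
df = pd.DataFrame({"x1": np.random.randn(200), "x2": np.random.randn(200)})
df["y"] = (df.x1 + df.x2 > 0).astype(str)
frame = h2o.H2OFrame(df)
frame["y"] = frame["y"].asfactor()  # mark the target as categorical

model = H2OGradientBoostingEstimator(ntrees=20)
model.train(x=["x1", "x2"], y="y", training_frame=frame)
print(model.model_performance(frame))
```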
Fast and Accurate ML in 3 Lines of Code
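An AutoGluon sketch of those three lines; `train.csv`, `test.csv`, and the `class` label column are assumed placeholders:

```python
from autogluon.tabular import TabularDataset, TabularPredictor

train = TabularDataset("train.csv")   # assumed local CSV with a 'class' target column
predictor = TabularPredictor(label="class").fit(train)
print(predictor.predict(TabularDataset("test.csv")).head())
```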
A data augmentations library for audio, image, text, and video.
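A hedged AugLy sketch using two of its text augmenters; the choice of functions here is illustrative:

```python
import augly.text as txtaugs

# Apply a couple of text augmentations to a sample sentence.
texts = ["Data augmentation helps models generalize."]
print(txtaugs.simulate_typos(texts))
print(txtaugs.replace_similar_chars(texts))
```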
💥 Fast State-of-the-Art Tokenizers optimized for Research and Production
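A small sketch of training a BPE tokenizer with 🤗 Tokenizers; `corpus.txt`, the special tokens, and the vocabulary size are placeholders:

```python
from tokenizers import Tokenizer
from tokenizers.models import BPE
from tokenizers.pre_tokenizers import Whitespace
from tokenizers.trainers import BpeTrainer

# Train a byte-pair-encoding tokenizer from plain-text files.
tokenizer = Tokenizer(BPE(unk_token="[UNK]"))
tokenizer.pre_tokenizer = Whitespace()
trainer = BpeTrainer(special_tokens=["[UNK]", "[PAD]"], vocab_size=5000)
tokenizer.train(files=["corpus.txt"], trainer=trainer)  # assumed local text file
print(tokenizer.encode("Hello, tokenizers!").tokens)
```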
🤗 The largest hub of ready-to-use datasets for AI models with fast, easy-to-use and efficient data manipulation tools
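A minimal 🤗 Datasets sketch; `imdb` is just one example corpus from the Hub:

```python
from datasets import load_dataset

# Load a split from the Hub and apply a simple map over it.
dataset = load_dataset("imdb", split="train")
dataset = dataset.map(lambda ex: {"n_words": len(ex["text"].split())})
print(dataset[0]["n_words"])
```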
🤗 PEFT: State-of-the-art Parameter-Efficient Fine-Tuning.
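A PEFT sketch adding LoRA adapters to GPT-2; the rank, alpha, and target module are illustrative choices:

```python
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

# Wrap a base model with LoRA adapters; only the adapter weights are trainable.
base = AutoModelForCausalLM.from_pretrained("gpt2")
config = LoraConfig(r=8, lora_alpha=16, target_modules=["c_attn"], task_type="CAUSAL_LM")
model = get_peft_model(base, config)
model.print_trainable_parameters()  # reports trainable vs. total parameters
```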
Provides a common interface to many IR ranking datasets.
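An ir_datasets sketch using the small `vaswani` collection as an example:

```python
import ir_datasets

# Load a test collection and iterate over docs, queries, and relevance judgments.
dataset = ir_datasets.load("vaswani")
for doc in dataset.docs_iter()[:3]:
    print(doc.doc_id, doc.text[:60])

first_query = next(iter(dataset.queries_iter()))
first_qrel = next(iter(dataset.qrels_iter()))
print(first_query.query_id, first_qrel.relevance)
```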
🚀 A simple way to launch, train, and use PyTorch models on almost any device and distributed configuration, with automatic mixed precision (including fp8) and easy-to-configure FSDP and DeepSpeed support
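A minimal Accelerate training loop; the model, data, and hyperparameters are synthetic placeholders:

```python
import torch
from torch.utils.data import DataLoader, TensorDataset
from accelerate import Accelerator

# Accelerator handles device placement, DDP wrapping, and mixed precision;
# the same script runs on CPU, one GPU, or many via `accelerate launch`.
accelerator = Accelerator()

model = torch.nn.Linear(10, 1)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3)
dataset = TensorDataset(torch.randn(64, 10), torch.randn(64, 1))
loader = DataLoader(dataset, batch_size=8)

model, optimizer, loader = accelerator.prepare(model, optimizer, loader)
for x, y in loader:
    optimizer.zero_grad()
    loss = torch.nn.functional.mse_loss(model(x), y)
    accelerator.backward(loss)   # use instead of loss.backward()
    optimizer.step()
```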
DeepSpeed is a deep learning optimization library that makes distributed training and inference easy, efficient, and effective.
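A hedged DeepSpeed sketch; the config values are illustrative and the script is normally run on GPU machines via the `deepspeed` launcher:

```python
import torch
import deepspeed

# The config selects batch size, optimizer, and ZeRO stage.
ds_config = {
    "train_batch_size": 8,
    "optimizer": {"type": "Adam", "params": {"lr": 1e-3}},
    "zero_optimization": {"stage": 1},
}
model = torch.nn.Linear(10, 1)
engine, optimizer, _, _ = deepspeed.initialize(
    model=model, model_parameters=model.parameters(), config=ds_config
)

x = torch.randn(8, 10).to(engine.device)
y = torch.randn(8, 1).to(engine.device)
loss = torch.nn.functional.mse_loss(engine(x), y)
engine.backward(loss)  # engine handles scaling and gradient partitioning
engine.step()
```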
PArallel Distributed Deep LEarning: Machine Learning Framework from Industrial Practice (PaddlePaddle core framework: high-performance single-machine and distributed training and cross-platform deployment for deep learning & machine learning)
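A minimal PaddlePaddle sketch: a layer, a loss, and one optimizer step in dynamic-graph mode (shapes and learning rate are arbitrary):

```python
import paddle

layer = paddle.nn.Linear(4, 1)
optimizer = paddle.optimizer.Adam(parameters=layer.parameters(), learning_rate=1e-3)

x = paddle.randn([8, 4])
y = paddle.randn([8, 1])
loss = paddle.nn.functional.mse_loss(layer(x), y)
loss.backward()
optimizer.step()
optimizer.clear_grad()  # reset gradients for the next step
```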
A fast, scalable, high performance Gradient Boosting on Decision Trees library, used for ranking, classification, regression and other machine learning tasks for Python, R, Java, C++. Supports computation on CPU and GPU.
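A CatBoost sketch showing native categorical-feature handling on a toy dataset (data and iteration count are illustrative):

```python
from catboost import CatBoostClassifier

# Categorical column passed via cat_features; no one-hot encoding needed.
X = [[1, "red"], [2, "blue"], [3, "red"], [4, "green"], [5, "blue"], [6, "green"]]
y = [0, 1, 0, 1, 1, 0]

model = CatBoostClassifier(iterations=50, verbose=False)
model.fit(X, y, cat_features=[1])
print(model.predict([[2, "red"]]))
```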
Semantic segmentation models with 500+ pretrained convolutional and transformer-based backbones.
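A segmentation_models.pytorch sketch; the encoder, input size, and class count are illustrative:

```python
import torch
import segmentation_models_pytorch as smp

# U-Net with a pretrained ResNet-34 encoder for binary segmentation.
model = smp.Unet(
    encoder_name="resnet34",
    encoder_weights="imagenet",
    in_channels=3,
    classes=1,
)
mask = model(torch.randn(1, 3, 256, 256))
print(mask.shape)  # torch.Size([1, 1, 256, 256])
```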
The largest collection of PyTorch image encoders / backbones. Including train, eval, inference, export scripts, and pretrained weights -- ResNet, ResNeXT, EfficientNet, NFNet, Vision Transformer (ViT), and more.
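A timm sketch: load a pretrained backbone for classification or, with `num_classes=0`, as a feature extractor; `resnet50` is just one example model name:

```python
import torch
import timm

model = timm.create_model("resnet50", pretrained=True)
model.eval()
with torch.no_grad():
    logits = model(torch.randn(1, 3, 224, 224))
print(logits.shape)  # torch.Size([1, 1000])

# As a feature extractor instead of a classifier:
backbone = timm.create_model("resnet50", pretrained=True, num_classes=0)
```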