Stars
AutoGPT is the vision of accessible AI for everyone, to use and to build on. Our mission is to provide the tools so that you can focus on what matters.
🤗 Transformers: the model-definition framework for state-of-the-art machine learning models across text, vision, audio, and multimodal domains, for both inference and training.
🦜🔗 The platform for reliable agents.
Tensors and Dynamic neural networks in Python with strong GPU acceleration
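The "dynamic neural networks" part of that description refers to PyTorch's define-by-run autograd: the computation graph is built as operations execute. A minimal sketch (assumes the `torch` package is installed; GPU use is opt-in per tensor):

```python
import torch

# Operations record autograd history as they run (dynamic graph).
x = torch.tensor([1.0, 2.0, 3.0], requires_grad=True)
y = (x ** 2).sum()      # y = 1 + 4 + 9 = 14
y.backward()            # populates x.grad with dy/dx = 2*x
print(x.grad)           # tensor([2., 4., 6.])

# GPU acceleration: move tensors to a CUDA device when one is available.
device = "cuda" if torch.cuda.is_available() else "cpu"
z = x.detach().to(device)
```

Because the graph is rebuilt on every forward pass, ordinary Python control flow (loops, conditionals) can shape the network differently on each iteration.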
Provides a practical interaction interface for GPT/GLM and other large language models, specially optimized for the paper reading/polishing/writing experience. Modular design with support for custom shortcut buttons & function plugins; project analysis & self-translation for Python, C++, and other codebases; PDF/LaTeX paper translation & summarization; parallel queries to multiple LLM models; and local models such as chatglm3. Integrates Tongyi Qianwen, deepseekcoder, iFlytek Spark, ERNIE Bot, llama2, rwkv, claude2, m…
🌟 The Multi-Agent Framework: First AI Software Company, Towards Natural Language Programming
LlamaIndex is the leading framework for building LLM-powered agents over your data.
DeepSpeed is a deep learning optimization library that makes distributed training and inference easy, efficient, and effective.
An open platform for training, serving, and evaluating large language models. Release repo for Vicuna and Chatbot Arena.
OpenAssistant is a chat-based assistant that understands tasks, can interact with third-party systems, and retrieve information dynamically to do so.
The largest collection of PyTorch image encoders / backbones. Including train, eval, inference, export scripts, and pretrained weights -- ResNet, ResNeXT, EfficientNet, NFNet, Vision Transformer (V…
Composable transformations of Python+NumPy programs: differentiate, vectorize, JIT to GPU/TPU, and more
Detectron2 is a platform for object detection, segmentation and other visual recognition tasks.
Facebook AI Research Sequence-to-Sequence Toolkit written in Python.
⚡ A Fast, Extensible Progress Bar for Python and CLI
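tqdm's core idea is wrapping any iterable so iteration itself drives the progress bar. A minimal sketch (assumes the `tqdm` package is installed):

```python
from tqdm import tqdm

# Wrapping an iterable yields its items unchanged while rendering
# a live progress bar (with rate and ETA) on stderr.
total = 0
for i in tqdm(range(100), desc="processing"):
    total += i

print(total)  # 4950, the sum of 0..99
```

The same wrapper works on generators, file objects, and `pandas` operations, which is what "extensible" refers to in the tagline.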
Pretrain, finetune ANY AI model of ANY size on 1 or 10,000+ GPUs with zero code changes.
Open-Sora: Democratizing Efficient Video Production for All
Create Customized Software using Natural Language Idea (through LLM-powered Multi-Agent Collaboration)
Implementation of Vision Transformer, a simple way to achieve SOTA in vision classification with only a single transformer encoder, in PyTorch
Code for the paper "Language Models are Unsupervised Multitask Learners"
A minimal PyTorch re-implementation of the OpenAI GPT (Generative Pretrained Transformer) training
MiniCPM-V 4.5: A GPT-4o Level MLLM for Single Image, Multi Image and High-FPS Video Understanding on Your Phone
Large-scale Self-supervised Pre-training Across Tasks, Languages, and Modalities
The ChatGPT Retrieval Plugin lets you easily find personal or work documents by asking questions in natural language.
Train transformer language models with reinforcement learning.
A Flexible Framework for Experiencing Cutting-edge LLM Inference Optimizations