Stars
🤗 Transformers: the model-definition framework for state-of-the-art machine learning models across text, vision, audio, and multimodal tasks, for both inference and training.
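For orientation, a minimal sketch of the Transformers `pipeline` API; the `gpt2` model id is only an arbitrary small checkpoint chosen for illustration:

```python
from transformers import pipeline

# Any causal-LM hub id works here; "gpt2" is picked only because it is small.
generator = pipeline("text-generation", model="gpt2")
result = generator("Transformers is a library that", max_new_tokens=20)
print(result[0]["generated_text"])
```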
Provides a practical interactive interface for LLMs such as GPT and GLM, with particular attention to the paper reading/polishing/writing experience. Modular design with custom shortcut buttons & function plugins; project analysis & self-translation for Python, C++, and other codebases; PDF/LaTeX paper translation & summarization; parallel queries across multiple LLMs; local models such as chatglm3. Integrates Tongyi Qianwen (Qwen), deepseekcoder, iFlytek Spark, ERNIE Bot, llama2, rwkv, claude2, m…
Unified Efficient Fine-Tuning of 100+ LLMs & VLMs (ACL 2024)
Fine-tuning & Reinforcement Learning for LLMs. 🦥 Train OpenAI gpt-oss, DeepSeek-R1, Qwen3, Gemma 3, TTS 2x faster with 70% less VRAM.
The definitive Web UI for local AI, with powerful features and easy setup.
LlamaIndex is the leading framework for building LLM-powered agents over your data.
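A minimal sketch of the LlamaIndex flow, assuming the post-0.10 `llama_index.core` package layout and the default OpenAI backend (an API key in the environment); the `./data` path is a placeholder:

```python
from llama_index.core import SimpleDirectoryReader, VectorStoreIndex

# Load local files, embed them into a vector index, and query over them.
documents = SimpleDirectoryReader("./data").load_data()
index = VectorStoreIndex.from_documents(documents)
query_engine = index.as_query_engine()
print(query_engine.query("What do these documents say?"))
```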
ChatGLM-6B: An Open Bilingual Dialogue Language Model
DeepSpeed is a deep learning optimization library that makes distributed training and inference easy, efficient, and effective.
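A hedged sketch of wrapping a model in a DeepSpeed engine with a ZeRO stage 2 config; the toy model and hyperparameters are placeholders, and scripts are normally launched via the `deepspeed` CLI:

```python
import torch
import deepspeed

model = torch.nn.Linear(10, 10)  # stand-in for a real network
ds_config = {
    "train_batch_size": 8,
    "optimizer": {"type": "Adam", "params": {"lr": 1e-4}},
    "zero_optimization": {"stage": 2},  # partition optimizer states + gradients
}
engine, optimizer, _, _ = deepspeed.initialize(
    model=model, model_parameters=model.parameters(), config=ds_config
)
```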
Build and share delightful machine learning apps, all in Python. 🌟 Star to support our work!
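A minimal sketch of a Gradio app; the `greet` function is an invented placeholder:

```python
import gradio as gr

def greet(name: str) -> str:
    return f"Hello, {name}!"

# Builds a web UI around the function; share=True would create a public link.
demo = gr.Interface(fn=greet, inputs="text", outputs="text")
demo.launch()
```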
An open platform for training, serving, and evaluating large language models. Release repo for Vicuna and Chatbot Arena.
Facebook AI Research Sequence-to-Sequence Toolkit written in Python.
Open-Sora: Democratizing Efficient Video Production for All
Open-sourced codes for MiniGPT-4 and MiniGPT-v2 (https://minigpt-4.github.io, https://minigpt-v2.github.io/)
[NeurIPS'23 Oral] Visual Instruction Tuning (LLaVA) built towards GPT-4V level capabilities and beyond.
Graph Neural Network Library for PyTorch
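A minimal sketch of a single graph-convolution step in PyTorch Geometric; the tiny 3-node graph and feature sizes are invented for illustration:

```python
import torch
from torch_geometric.data import Data
from torch_geometric.nn import GCNConv

# Edges as a [2, num_edges] index tensor: 0<->1 and 1<->2.
edge_index = torch.tensor([[0, 1, 1, 2], [1, 0, 2, 1]], dtype=torch.long)
x = torch.randn(3, 8)                     # 3 nodes, 8 features each
data = Data(x=x, edge_index=edge_index)

conv = GCNConv(in_channels=8, out_channels=16)
out = conv(data.x, data.edge_index)       # shape [3, 16]
```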
Build resilient language agents as graphs.
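A minimal sketch of an agent graph in LangGraph, assuming a recent release that exports `START`/`END` from `langgraph.graph`; the single-node state machine is invented for illustration:

```python
from typing import TypedDict
from langgraph.graph import StateGraph, START, END

class State(TypedDict):
    text: str

def shout(state: State) -> dict:
    # Nodes return partial state updates.
    return {"text": state["text"].upper()}

graph = StateGraph(State)
graph.add_node("shout", shout)
graph.add_edge(START, "shout")
graph.add_edge("shout", END)
app = graph.compile()
print(app.invoke({"text": "hello"}))  # {'text': 'HELLO'}
```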
Fast and memory-efficient exact attention
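A hedged sketch of the `flash_attn_func` entry point; it expects fp16/bf16 tensors on a CUDA device, with q/k/v shaped (batch, seqlen, nheads, headdim):

```python
import torch
from flash_attn import flash_attn_func

q = torch.randn(2, 128, 8, 64, dtype=torch.float16, device="cuda")
k = torch.randn_like(q)
v = torch.randn_like(q)
# Exact attention computed without materializing the full attention matrix.
out = flash_attn_func(q, k, v, causal=True)  # same shape as q
```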
🤗 PEFT: State-of-the-art Parameter-Efficient Fine-Tuning.
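A minimal sketch of wrapping a model with a LoRA adapter via PEFT; `gpt2` and its `c_attn` projection are chosen only for illustration:

```python
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

model = AutoModelForCausalLM.from_pretrained("gpt2")
config = LoraConfig(
    r=8, lora_alpha=16, lora_dropout=0.05,
    target_modules=["c_attn"],  # GPT-2's fused attention projection
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, config)
model.print_trainable_parameters()  # only the low-rank adapters are trainable
```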
SGLang is a fast serving framework for large language models and vision language models.
Chinese LLaMA & Alpaca LLMs, with local CPU/GPU training and deployment.
State-of-the-Art Text Embeddings
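A minimal sketch of encoding and comparing sentences with sentence-transformers; the `all-MiniLM-L6-v2` checkpoint is an arbitrary small choice:

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")
embeddings = model.encode(["A cat sits on the mat.", "A kitten rests on a rug."])
print(util.cos_sim(embeddings[0], embeddings[1]))  # cosine similarity
```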
ChatGLM2-6B: An Open Bilingual Chat LLM
Llama Chinese community: continuously curates the latest Llama learning resources and builds the best open-source ecosystem for Chinese Llama LLMs; fully open source and commercially usable.
Question answering based on anything.
20+ high-performance LLMs with recipes to pretrain, finetune and deploy at scale.
Code for loralib, an implementation of "LoRA: Low-Rank Adaptation of Large Language Models"
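A hedged sketch of loralib's core idea: swap selected layers for LoRA-augmented ones, then freeze everything except the low-rank factors:

```python
import torch
import loralib as lora

# lora.Linear keeps the pretrained weight frozen and trains a rank-r update.
model = torch.nn.Sequential(lora.Linear(768, 768, r=16))
lora.mark_only_lora_as_trainable(model)  # freezes all non-LoRA parameters
```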