Stars
🤗 Transformers: the model-definition framework for state-of-the-art machine learning models in text, vision, audio, and multimodal domains, for both inference and training.
Tensors and Dynamic neural networks in Python with strong GPU acceleration
Models and examples built with TensorFlow
Unified Efficient Fine-Tuning of 100+ LLMs & VLMs (ACL 2024)
Making large AI models cheaper, faster, and more accessible
DeepSpeed is a deep learning optimization library that makes distributed training and inference easy, efficient, and effective.
Ray is an AI compute engine. Ray consists of a core distributed runtime and a set of AI Libraries for accelerating ML workloads.
OpenAssistant is a chat-based assistant that understands tasks, can interact with third-party systems, and can dynamically retrieve information to do so.
Chinese word segmentation, part-of-speech tagging, named entity recognition, dependency parsing, constituency parsing, semantic dependency parsing, semantic role labeling, coreference resolution, style transfer, semantic similarity, new word discovery, keyphrase extraction, automatic summarization, text classification and clustering, Pinyin and Simplified/Traditional conversion, natural language processing
💫 Industrial-strength Natural Language Processing (NLP) in Python
🤗 PEFT: State-of-the-art Parameter-Efficient Fine-Tuning.
Chinese LLaMA & Alpaca large language models with local CPU/GPU training and deployment (Chinese LLaMA & Alpaca LLMs)
Distributed training framework for TensorFlow, Keras, PyTorch, and Apache MXNet.
🚀 A simple way to launch, train, and use PyTorch models on almost any device and distributed configuration, automatic mixed precision (including fp8), and easy-to-configure FSDP and DeepSpeed support
An Easy-to-use, Scalable and High-performance RLHF Framework based on Ray (PPO & GRPO & REINFORCE++ & vLLM & Ray & Dynamic Sampling & Async Agentic RL)
Example models using DeepSpeed
Generating Chinese couplets with a seq2seq model (writing couplets with deep learning)
Benchmarks of approximate nearest neighbor libraries in Python
A repo for distributed training of language models with Reinforcement Learning via Human Feedback (RLHF)
TensorFlow prediction using the C++ API