Qiyuan Lab - Beijing - https://scholar.google.com/citations?hl=en&user=JWqmlrcAAAAJ
Stars
🤗 Transformers: the model-definition framework for state-of-the-art machine learning models across text, vision, audio, and multimodal domains, for both inference and training (a minimal inference sketch follows this list).
A high-throughput and memory-efficient inference and serving engine for LLMs (a minimal offline-generation sketch follows this list)
Ongoing research training transformer models at scale
Great Firewall circumvention and free internet access: free circumvention tools, fanqiang, YouTube video downloading, software, VPNs, one-click circumvention browsers; one-click VPS scripts and tutorials for setting up a circumvention server; free shadowsocks/ss/ssr/v2ray/goflyway accounts and nodes; circumvention "ladders" for PC, mobile, iOS, Android, Windows, Mac, Linux, and routers; YouTube video download and YouTube mirrors accessible without circumvention…
Curated list of datasets and tools for post-training.
Lightning-Fast RL for LLM Reasoning and Agents. Made Simple & Flexible.
LlamaIndex is the leading framework for building LLM-powered agents over your data.
A scalable generative AI framework built for researchers and developers working on Large Language Models, Multimodal, and Speech AI (Automatic Speech Recognition and Text-to-Speech)
🦜🔗 The platform for reliable agents.
Unified Efficient Fine-Tuning of 100+ LLMs & VLMs (ACL 2024); a generic LoRA fine-tuning sketch follows this list
Use PEFT or Full-parameter to CPT/SFT/DPO/GRPO 500+ LLMs (Qwen3, Qwen3-MoE, Llama4, GLM4.5, InternLM3, DeepSeek-R1, ...) and 200+ MLLMs (Qwen3-VL, Qwen3-Omni, InternVL3.5, Ovis2.5, Llava, GLM4v, Ph…
Lightweight coding agent that runs in your terminal
🤗 LeRobot: Making AI for Robotics more accessible with end-to-end learning
Build resilient language agents as graphs.
An open-source AI agent that brings the power of Gemini directly into your terminal.
🚀🚀 Train a 26M-parameter GPT completely from scratch in just 2 hours! 🌏
DSPy: The framework for programming—not prompting—language models
The official Python SDK for Model Context Protocol servers and clients (a minimal server sketch follows this list)
A lightweight, powerful framework for multi-agent workflows
RAGFlow is a leading open-source Retrieval-Augmented Generation (RAG) engine that fuses cutting-edge RAG with Agent capabilities to create a superior context layer for LLMs
🚀 A simple way to launch, train, and use PyTorch models on almost any device and distributed configuration, with automatic mixed precision (including fp8) and easy-to-configure FSDP and DeepSpeed support (a minimal training-loop sketch follows this list)
Fine-tuning & Reinforcement Learning for LLMs. 🦥 Train OpenAI gpt-oss, DeepSeek-R1, Qwen3, Gemma 3, TTS 2x faster with 70% less VRAM.
🚀 Collection of tuning recipes with HuggingFace SFTTrainer and PyTorch FSDP.
Milvus is a high-performance, cloud-native vector database built for scalable vector ANN search
《开源大模型食用指南》 (roughly, "A Hands-On Guide to Open-Source LLMs"): tutorials tailored for Chinese beginners on quickly fine-tuning (full-parameter / LoRA) and deploying domestic and international open-source large language models (LLMs) and multimodal large models (MLLMs) in a Linux environment
Production-ready platform for agentic workflow development.
verl: Volcano Engine Reinforcement Learning for LLMs
UltraRAG 2.0: Less Code, Lower Barrier, Faster Deployment! An MCP-based low-code RAG framework that lets researchers build complex pipelines and pursue creative innovation.
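For the 🤗 Transformers entry above, a minimal inference sketch using the library's pipeline API; this is only an illustration, with "gpt2" chosen as a small example checkpoint and PyTorch assumed to be installed.

```python
# Minimal text-generation sketch with the 🤗 Transformers pipeline API.
# Assumes `pip install transformers torch`; "gpt2" is only a small example checkpoint.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

# Generate a short continuation of a prompt.
result = generator("Large language models are", max_new_tokens=20)
print(result[0]["generated_text"])
```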
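For the vLLM entry, a minimal offline batch-generation sketch in the style of its quickstart; the facebook/opt-125m checkpoint is just a small example, and a supported GPU plus `pip install vllm` are assumed.

```python
# Minimal offline batch generation with vLLM.
# Assumes `pip install vllm` and a supported GPU; the model is only an example.
from vllm import LLM, SamplingParams

prompts = [
    "The capital of France is",
    "The key idea behind reinforcement learning is",
]
sampling_params = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=64)

llm = LLM(model="facebook/opt-125m")
outputs = llm.generate(prompts, sampling_params)

for output in outputs:
    print(output.prompt, "->", output.outputs[0].text)
```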
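Several entries above (LLaMA-Factory, ms-swift, Unsloth, the self-llm guide) center on parameter-efficient fine-tuning. The sketch below uses the Hugging Face peft library rather than any of those projects' own interfaces, purely to illustrate the LoRA idea; the GPT-2 checkpoint and the c_attn target module are assumptions specific to this example.

```python
# Sketch of wrapping a causal LM with LoRA adapters via the peft library.
# A generic illustration, not the interface of LLaMA-Factory / ms-swift / Unsloth.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, TaskType, get_peft_model

base_model = AutoModelForCausalLM.from_pretrained("gpt2")  # small example checkpoint

lora_config = LoraConfig(
    task_type=TaskType.CAUSAL_LM,
    r=8,                        # rank of the low-rank update matrices
    lora_alpha=16,              # scaling factor for the update
    lora_dropout=0.05,
    target_modules=["c_attn"],  # GPT-2's fused attention projection
)

model = get_peft_model(base_model, lora_config)
model.print_trainable_parameters()  # only the small adapter weights are trainable
```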
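For the MCP Python SDK entry, a minimal tool server following the SDK's FastMCP pattern; the server name and the add tool are arbitrary examples.

```python
# Minimal Model Context Protocol server using the official Python SDK's FastMCP helper.
# The server name and the `add` tool are arbitrary examples.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("demo-server")

@mcp.tool()
def add(a: int, b: int) -> int:
    """Add two numbers."""
    return a + b

if __name__ == "__main__":
    mcp.run()  # defaults to the stdio transport
```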
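For the 🤗 Accelerate entry, the core pattern is wrapping an existing PyTorch loop: prepare() the model, optimizer, and dataloader, then call accelerator.backward() instead of loss.backward(). A minimal sketch; the tiny linear model and random data are placeholders.

```python
# Minimal training-loop sketch with 🤗 Accelerate.
# The tiny linear model and random data are placeholders for illustration.
import torch
from torch.utils.data import DataLoader, TensorDataset
from accelerate import Accelerator

accelerator = Accelerator()  # picks up device / distributed config automatically

model = torch.nn.Linear(16, 1)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3)
dataset = TensorDataset(torch.randn(128, 16), torch.randn(128, 1))
dataloader = DataLoader(dataset, batch_size=8)

model, optimizer, dataloader = accelerator.prepare(model, optimizer, dataloader)

for inputs, targets in dataloader:
    optimizer.zero_grad()
    loss = torch.nn.functional.mse_loss(model(inputs), targets)
    accelerator.backward(loss)  # handles mixed precision / distributed scaling
    optimizer.step()
```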