Qiyuan Lab · Beijing (UTC+08:00)
https://scholar.google.com/citations?hl=en&user=JWqmlrcAAAAJ
Stars
🤗 Transformers: the model-definition framework for state-of-the-art machine learning models across text, vision, audio, and multimodal tasks, for both inference and training.
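A minimal sketch of the library's `pipeline` entry point (the task string selects a built-in task, the default checkpoint is downloaded automatically, and the input sentence is just an example):

```python
from transformers import pipeline

# The task string picks a built-in task; a default checkpoint is
# downloaded automatically the first time this runs.
classifier = pipeline("sentiment-analysis")
print(classifier("Transformers makes state-of-the-art models easy to use."))
# -> [{'label': 'POSITIVE', 'score': ...}]
```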
🦜🔗 The platform for reliable agents.
Drop in a screenshot and convert it to clean code (HTML/Tailwind/React/Vue)
Free and open internet access / GFW circumvention (fanqiang): one-click proxy browsers, VPN software, one-click VPS scripts and tutorials for building your own proxy server, free shadowsocks/ss/ssr/v2ray/goflyway accounts and nodes, circumvention setups for Windows, Mac, Linux, iOS, Android, and routers, plus YouTube video download and proxy-free YouTube mirrors…
A high-throughput and memory-efficient inference and serving engine for LLMs
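A minimal offline-inference sketch with this engine (the checkpoint and sampling settings are illustrative, not recommendations):

```python
from vllm import LLM, SamplingParams

# Any Hugging Face checkpoint vLLM supports; opt-125m is just small.
llm = LLM(model="facebook/opt-125m")
params = SamplingParams(temperature=0.8, max_tokens=64)
outputs = llm.generate(["The capital of France is"], params)
print(outputs[0].outputs[0].text)
```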
Unified Efficient Fine-Tuning of 100+ LLMs & VLMs (ACL 2024)
A natural language interface for computers
🌟 The Multi-Agent Framework: First AI Software Company, Towards Natural Language Programming
The simplest, fastest repository for training/finetuning medium-sized GPTs.
Transforms complex documents like PDFs into LLM-ready markdown/JSON for your agentic workflows.
Fine-tuning & Reinforcement Learning for LLMs. 🦥 Train OpenAI gpt-oss, DeepSeek-R1, Qwen3, Gemma 3, TTS 2x faster with 70% less VRAM.
LlamaIndex is the leading framework for building LLM-powered agents over your data.
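A minimal agents-over-your-data sketch with LlamaIndex (assumes a local `data/` directory of documents and, by default, an OpenAI API key for the underlying LLM; the query is an example):

```python
from llama_index.core import SimpleDirectoryReader, VectorStoreIndex

# Load local files, embed them into a vector index, and query it.
documents = SimpleDirectoryReader("data").load_data()
index = VectorStoreIndex.from_documents(documents)
query_engine = index.as_query_engine()
print(query_engine.query("What do these documents say about evaluation?"))
```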
Making large AI models cheaper, faster and more accessible
🚀🚀 [LLM] Train a small 26M-parameter GPT completely from scratch in just 2 hours! 🌏
DSPy: The framework for programming—not prompting—language models
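A minimal sketch of the programming-over-prompting idea (the model name is a placeholder; any LM DSPy supports works):

```python
import dspy

# Configure a language model, then declare *what* the module should do
# via a signature; DSPy handles the actual prompting.
dspy.configure(lm=dspy.LM("openai/gpt-4o-mini"))
qa = dspy.ChainOfThought("question -> answer")
print(qa(question="Why program rather than prompt?").answer)
```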
Open-Sora: Democratizing Efficient Video Production for All
Fully open reproduction of DeepSeek-R1
An open-source RAG-based tool for chatting with your documents.
Official inference repo for FLUX.1 models
[NeurIPS'23 Oral] Visual Instruction Tuning (LLaVA) built towards GPT-4V level capabilities and beyond.
🤗 smolagents: a barebones library for agents that think in code.
A minimal PyTorch re-implementation of the OpenAI GPT (Generative Pretrained Transformer) training
MiniCPM-V 4.5: A GPT-4o Level MLLM for Single Image, Multi Image and High-FPS Video Understanding on Your Phone
Large-scale Self-supervised Pre-training Across Tasks, Languages, and Modalities
Build resilient language agents as graphs.
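A minimal graph sketch with LangGraph (the state schema and the single node are illustrative):

```python
from typing import TypedDict

from langgraph.graph import END, START, StateGraph

class State(TypedDict):
    text: str

def shout(state: State) -> dict:
    # Nodes return partial state updates.
    return {"text": state["text"].upper()}

graph = StateGraph(State)
graph.add_node("shout", shout)
graph.add_edge(START, "shout")
graph.add_edge("shout", END)
app = graph.compile()
print(app.invoke({"text": "resilient agents as graphs"}))
```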
🤗 PEFT: State-of-the-art Parameter-Efficient Fine-Tuning.
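A minimal LoRA sketch with PEFT (the base model and hyperparameters are illustrative):

```python
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained("gpt2")
# "c_attn" is the attention projection in GPT-2; other models use other names.
config = LoraConfig(r=8, lora_alpha=16, target_modules=["c_attn"], lora_dropout=0.05)
model = get_peft_model(base, config)
model.print_trainable_parameters()  # only the small adapter matrices train
```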
The official Python SDK for Model Context Protocol servers and clients
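A minimal server sketch with the SDK's FastMCP helper (the server name and the example tool are illustrative):

```python
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("demo-server")

@mcp.tool()
def add(a: int, b: int) -> int:
    """Add two numbers."""
    return a + b

if __name__ == "__main__":
    mcp.run()  # serves over stdio by default
```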