Qiyuan Lab - Beijing
https://scholar.google.com/citations?hl=en&user=JWqmlrcAAAAJ
Stars
PyMuPDF is a high-performance Python library for data extraction, analysis, conversion & manipulation of PDF (and other) documents.
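A minimal sketch of PyMuPDF's page-by-page text extraction (the file name sample.pdf is a placeholder, not a file from this list):

```python
# Minimal PyMuPDF sketch: open a PDF and extract plain text per page.
# "sample.pdf" is a placeholder path.
import fitz  # PyMuPDF is imported under the name "fitz"

doc = fitz.open("sample.pdf")
for page in doc:
    print(page.get_text())  # plain-text extraction for each page
doc.close()
```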
An easy-to-use, scalable, and high-performance RLHF framework based on Ray (PPO, GRPO, REINFORCE++, vLLM, dynamic sampling, and async agentic RL).
KAG is a logical-form-guided reasoning and retrieval framework based on the OpenSPG engine and LLMs. It is used to build logical reasoning and factual Q&A solutions for professional domain knowledge bases.
PyTorch implementation of MAE https://arxiv.org/abs/2111.06377
Large World Model -- Modeling Text and Video with Millions of Tokens of Context
An open-source Deep Research alternative for reasoning and searching over private data. Written in Python.
Example models using DeepSpeed
Modeling, training, eval, and inference code for OLMo
[NeurIPS 2023] Tree of Thoughts: Deliberate Problem Solving with Large Language Models
Implementation / replication of DALL-E, OpenAI's Text to Image Transformer, in Pytorch
The official PyTorch implementation of Google's Gemma models
🚀 Train a 26M-parameter multimodal vision-language model (VLM) from scratch in just 1 hour! 🌏
Easily turn large sets of image URLs into an image dataset. Can download, resize, and package 100M URLs in 20h on one machine.
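img2dataset exposes both a CLI and a Python download() entry point; a hedged sketch, assuming a plain-text file of image URLs (myimglist.txt and output are placeholder paths, and parameter defaults may differ by version):

```python
# Hedged img2dataset sketch: download and resize images listed in a text file.
from img2dataset import download

download(
    url_list="myimglist.txt",   # one image URL per line (placeholder path)
    output_folder="output",     # where resized images / shards are written
    image_size=256,             # resize target in pixels
    processes_count=4,
    thread_count=32,
    output_format="files",      # or "webdataset" for packaged shards
)
```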
A series of large language models developed by Baichuan Intelligent Technology
A unified, comprehensive and efficient recommendation library
An open-source framework for training large multimodal models.
Honkai: Star Rail auto bot (Simplified Chinese / Traditional Chinese / English / Spanish)
Search-R1: an efficient, scalable RL training framework for LLMs that interleave reasoning and search-engine calling, based on veRL.
Sky-T1: Train your own O1 preview model within $450
Official repo for "Mini-Gemini: Mining the Potential of Multi-modality Vision Language Models"
🦦 Otter, a multi-modal model based on OpenFlamingo (open-sourced version of DeepMind's Flamingo), trained on MIMIC-IT and showcasing improved instruction-following and in-context learning ability.
🦛 CHONK docs with Chonkie ✨ — The no-nonsense RAG library
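Chonkie wraps chunking for RAG in small chunker classes; a minimal sketch using its TokenChunker (default tokenizer, chunk sizes, and returned fields may vary across Chonkie versions):

```python
# Hedged Chonkie sketch: token-count-based chunking for a RAG pipeline.
# Defaults (tokenizer, chunk size) may differ across Chonkie versions.
from chonkie import TokenChunker

chunker = TokenChunker()  # chunker that splits by token count
chunks = chunker("CHONK this document into retrieval-sized pieces.")
for chunk in chunks:
    print(chunk.text, chunk.token_count)
```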