Stars
The official Python library for the OpenAI API
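A minimal usage sketch for the library (assumes an `OPENAI_API_KEY` environment variable; the model name and prompt are illustrative):

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Send a single chat request and print the reply.
response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model name
    messages=[{"role": "user", "content": "Say hello in one sentence."}],
)
print(response.choices[0].message.content)
```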
A lightweight, powerful framework for multi-agent workflows
Paper list of multi-agent reinforcement learning (MARL)
Build effective agents using Model Context Protocol and simple workflow patterns
verl-agent is an extension of veRL for training LLM/VLM agents via RL, and is also the official code for the paper "Group-in-Group Policy Optimization for LLM Agent Training"
Fully Open Framework for Democratized Multimodal Training
Ray is an AI compute engine. Ray consists of a core distributed runtime and a set of AI Libraries for accelerating ML workloads.
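A rough sketch of Ray's core task API on a local runtime (the function and values are illustrative):

```python
import ray

ray.init()  # start a local Ray runtime

@ray.remote
def square(x):
    return x * x

# Launch tasks in parallel, then block on the results.
futures = [square.remote(i) for i in range(4)]
print(ray.get(futures))  # [0, 1, 4, 9]
```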
A Survey on Reinforcement Learning of Vision-Language-Action Models for Robotic Manipulation
openvla / openvla
Forked from TRI-ML/prismatic-vlms. OpenVLA: An open-source vision-language-action model for robotic manipulation.
SimpleVLA-RL: Scaling VLA Training via Reinforcement Learning
Build resilient language agents as graphs.
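A minimal sketch of LangGraph's graph-building pattern, assuming a recent `langgraph` release (the state schema and node are illustrative placeholders):

```python
from typing import TypedDict

from langgraph.graph import StateGraph, START, END

class State(TypedDict):
    text: str

def shout(state: State) -> dict:
    # A trivial node: transform the state and return the update.
    return {"text": state["text"].upper()}

builder = StateGraph(State)
builder.add_node("shout", shout)
builder.add_edge(START, "shout")
builder.add_edge("shout", END)

graph = builder.compile()
print(graph.invoke({"text": "hello agents"}))  # {'text': 'HELLO AGENTS'}
```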
The absolute trainer to light up AI agents.
🦜🔗 The platform for reliable agents.
verl: Volcano Engine Reinforcement Learning for LLMs
Agent framework and applications built upon Qwen>=3.0, featuring Function Calling, MCP, Code Interpreter, RAG, Chrome extension, etc.
IE-Critic-R1: Advancing the Explanatory Measurement of Text-Driven Image Editing for Human Perception Alignment
Witness the "aha" moment of a VLM for less than $3.
A latent text-to-image diffusion model
"RAG-Anything: All-in-One RAG Framework"
[EMNLP2025] "LightRAG: Simple and Fast Retrieval-Augmented Generation"
LlamaIndex is the leading framework for building LLM-powered agents over your data.
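A minimal sketch of indexing and querying local documents with LlamaIndex (assumes default settings, which require an OpenAI API key; the `./data` path and question are illustrative):

```python
from llama_index.core import SimpleDirectoryReader, VectorStoreIndex

# Load documents from a local folder, build a vector index, and query it.
documents = SimpleDirectoryReader("./data").load_data()
index = VectorStoreIndex.from_documents(documents)

query_engine = index.as_query_engine()
print(query_engine.query("Summarize the main topics in these documents."))
```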
RAGFlow is a leading open-source Retrieval-Augmented Generation (RAG) engine that fuses cutting-edge RAG with Agent capabilities to create a superior context layer for LLMs
A fork to add multimodal model training to open-r1
Codebase for reproducing the experiments of the semantic uncertainty paper (short-phrase and sentence-length experiments).
Python toolkit for document information extraction using LMDX
A Bulletproof Way to Generate Structured JSON from Language Models
🚀🚀 [LLMs] Train a 26M-parameter GPT completely from scratch in just 2 hours! 🌏