Stars
🚀 A very efficient Texas Hold'em GTO solver
🔥 The Web Data API for AI - Turn entire websites into LLM-ready markdown or structured data
A Python API for `axiom.trade/@chipa`. Community: https://chipatrade.com/community
MineContext is your proactive, context-aware AI partner (Context Engineering + ChatGPT Pulse)
A powerful tool for creating fine-tuning datasets for LLMs
Run Claude Code, Gemini, Codex — or any coding agent — in a clean, isolated sandbox with sensitive data redaction and observability baked in.
Simplifying reinforcement learning for complex game environments
A drop-in replacement for react-markdown, designed for AI-powered streaming.
SkyRL: A Modular Full-stack RL Library for LLMs
🦉 OWL: Optimized Workforce Learning for General Multi-Agent Assistance in Real-World Task Automation
🏭 Easy data acquisition, benchmark resources, PLM fine-tuning for bio-researchers. (ACL Demo 2025)
ScreenCoder — Turn any UI screenshot into clean, editable HTML/CSS with full control. Fast, accurate, and easy to customize.
The multi-agent toolkit: framework, runtime, and control plane.
Not another coding agent: kode is an agent CLI for everything
🌾 OAT: A research-friendly framework for LLM online alignment, including reinforcement learning, preference learning, etc.
verl: Volcano Engine Reinforcement Learning for LLMs
Build, evaluate, and train General Multi-Agent Assistance with ease
A highly customizable, lightweight, and open-source coding CLI powered by Groq for instant iteration.
An MCP server for Claude Desktop / Claude Code / Windsurf / Cursor to build n8n workflows for you
A Tool to Visualize Claude Code's LLM Interactions
A curated collection of 300+ case studies from over 80 companies, detailing practical applications of and insights into machine learning (ML) system design.
An open-source AI agent that brings the power of Gemini directly into your terminal.
An AI agent compatible with multiple LLMs
An open-source AI agent compatible with multiple LLMs
[TMLR 2025] Efficient Reasoning Models: A Survey