Starred repositories
Open-source, secure environment with real-world tools for enterprise-grade agents.
Elevate your AI research writing, no more tedious polishing ✨
Give your AI agent eyes to see the entire internet. Read & search Twitter, Reddit, YouTube, GitHub, Bilibili, XiaoHongShu — one CLI, zero API fees.
Python SDK for AI agent monitoring, LLM cost tracking, benchmarking, and more. Integrates with most LLMs and agent frameworks including CrewAI, Agno, OpenAI Agents SDK, Langchain, Autogen, AG2, and…
Kubernetes-native AI serving platform for scalable model serving.
A compact implementation of SGLang, designed to demystify the complexities of modern LLM serving systems.
Specification and documentation for Agent Skills
Official Repo for ICML 2024 paper "Executable Code Actions Elicit Better LLM Agents" by Xingyao Wang, Yangyi Chen, Lifan Yuan, Yizhe Zhang, Yunzhu Li, Hao Peng, Heng Ji.
Kode Agent — Design for post-human workflows. One unit agent for every human & computer task.
A high-throughput and memory-efficient inference and serving engine for LLMs
A course on LLM inference serving on Apple Silicon for systems engineers: build a tiny vLLM + Qwen.
🥢 Cook like Lao Xiang Ji 🐔. The main part was completed in 2024; this is not an official Lao Xiang Ji repository. The text comes from the "Lao Xiang Ji Dish Traceability Report," summarized, edited, and organized. CookLikeHOC.
Build and run agents you can see, understand, and trust.
This repo contains the Hugging Face Deep Reinforcement Learning Course.
The ultimate LLM/AI application development framework in Go.
An AI agent development platform with all-in-one visual tools, simplifying agent creation, debugging, and deployment like never before. Coze your way to AI Agent creation.
An open-source AI agent that lives in your terminal.
Trae Agent is an LLM-based agent for general purpose software engineering tasks.
A lightweight, powerful framework for multi-agent workflows
Standardized Distributed Generative and Predictive AI Inference Platform for Scalable, Multi-Framework Deployment on Kubernetes
Open Model Engine (OME) — Kubernetes operator for LLM serving, GPU scheduling, and model lifecycle management. Works with SGLang, vLLM, TensorRT-LLM, and Triton
Cost-efficient and pluggable Infrastructure components for GenAI inference
Achieve state of the art inference performance with modern accelerators on Kubernetes
No fortress, purely open ground. OpenManus is Coming.
A tool for creating and running Linux containers using lightweight virtual machines on a Mac. It is written in Swift, and optimized for Apple silicon.
Production-ready platform for agentic workflow development.
KAI Scheduler is an open-source, Kubernetes-native scheduler for AI workloads at large scale.