Stars
🚀 2.3x faster than MinIO for 4KB object payloads. RustFS is an open-source, S3-compatible high-performance object storage system supporting migration and coexistence with other S3-compatible platfor…
Karpenter is a Kubernetes Node Autoscaler built for flexibility, performance, and simplicity.
The TypeScript AI agent framework. ⚡ Assistants, RAG, observability. Supports any LLM: GPT-4, Claude, Gemini, Llama.
Official JavaScript SDK for the Agent2Agent (A2A) Protocol
AG-UI: the Agent-User Interaction Protocol. Bring Agents into Frontend Applications.
Sample application to demonstrate Google ADK and A2A interoperability. For informational purposes only.
React UI + elegant infrastructure for AI Copilots, AI chatbots, and in-app AI agents. The Agentic Frontend 🪁
MongoDB-compatible database engine for cloud-native and open-source workloads. Built for scalability, performance, and developer productivity.
A tool for turning docs websites into vector db files
A small rust library for adding custom derives to enums
All-in-one LLM CLI tool featuring Shell Assistant, Chat-REPL, RAG, AI Tools & Agents, with access to OpenAI, Claude, Gemini, Ollama, Groq, and more.
MCP Toolbox for Databases is an open source MCP server for databases.
Cloud Native Agentic AI | Discord: https://bit.ly/kagentdiscord
What are the principles we can use to build LLM-powered software that is actually good enough to put in the hands of production customers?
Framework for orchestrating role-playing, autonomous AI agents. By fostering collaborative intelligence, CrewAI empowers agents to work together seamlessly, tackling complex tasks.
Efficient platform for inference and serving of local LLMs, including an OpenAI-compatible API server.
vLLM’s reference system for K8S-native cluster-wide deployment with community-driven performance optimization
RamaLama is an open-source developer tool that simplifies the local serving of AI models from any source and facilitates their use for inference in production, all through the familiar language of …
Python SDK, Proxy Server (AI Gateway) to call 100+ LLM APIs in OpenAI (or native) format, with cost tracking, guardrails, loadbalancing and logging. [Bedrock, Azure, OpenAI, VertexAI, Cohere, Anthr…
A high-throughput and memory-efficient inference and serving engine for LLMs
Lightweight coding agent that runs in your terminal
Access large language models from the command-line
A high-performance observability data pipeline.