-
pmcp
High-quality Rust SDK for Model Context Protocol (MCP) with full TypeScript SDK compatibility
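For orientation on the many MCP crates in this list (pmcp, turbomcp, mcp-protocol-sdk, the rmcp-* crates, ultrafast-mcp, ...): MCP is JSON-RPC 2.0 under the hood, and a client invokes a server-side tool with a `tools/call` request. Below is a rough sketch of that message built with serde_json purely for illustration; the tool name and arguments are hypothetical, and each SDK wraps this wire format in its own typed API.

```rust
// Illustrative only: shape of an MCP "tools/call" JSON-RPC request.
use serde_json::json;

fn main() {
    let request = json!({
        "jsonrpc": "2.0",
        "id": 1,
        "method": "tools/call",
        "params": {
            "name": "search_docs",                // hypothetical tool name
            "arguments": { "query": "lifetimes" } // hypothetical arguments
        }
    });
    println!("{}", serde_json::to_string_pretty(&request).unwrap());
}
```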
-
llm
A library unifying multiple LLM backends
-
toon-format
Token-Oriented Object Notation (TOON) - a token-efficient JSON alternative for LLM prompts
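To make "token-efficient JSON alternative" concrete, here is a rough before/after sketch of the same data in both notations. Treat it as indicative only; the exact syntax is defined by the TOON spec and the crate's documentation.

```
JSON (keys and punctuation repeat for every element):
{"users":[{"id":1,"name":"Alice"},{"id":2,"name":"Bob"}]}

TOON (tabular: field names appear once, rows carry only values):
users[2]{id,name}:
  1,Alice
  2,Bob
```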
-
claude-agent-sdk-rs
Rust SDK for Claude Code CLI with bidirectional streaming, hooks, custom tools, and plugin support - 100% feature parity with Python SDK
-
localgpt
A local device focused AI assistant with persistent markdown memory, autonomous heartbeat tasks, and semantic search. Single binary, no runtime dependencies.
-
llama-cpp-2
llama.cpp bindings for Rust
-
txt
cargo doc for coding agents
-
tower-mcp
Tower-native Model Context Protocol (MCP) implementation
-
cgip
Terminal client for interacting with Chat GPT that allows you to build and manipulate contexts
-
swiftide-indexing
Fast, streaming indexing, query, and agentic LLM applications in Rust
-
turbomcp
Rust SDK for Model Context Protocol (MCP) with OAuth 2.1 compliance, ergonomic macros and SIMD acceleration
-
ba
task tracking for LLM sessions
-
swiftide
Fast, streaming indexing, query, and agentic LLM applications in Rust
-
elifrs
elif.rs CLI - Convention over configuration web framework tooling with zero-boilerplate project generation
-
dynamo-llm
Dynamo LLM Library
-
rstructor
Rust equivalent of Python's Instructor + Pydantic: Extract structured, validated data from LLMs (OpenAI, Anthropic, Grok, Gemini) using type-safe Rust structs and enums
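The pattern behind crates like rstructor (and secretary further down) is: describe the target type in Rust, ask the model for JSON, then deserialize and validate. A minimal sketch of that flow using plain serde/serde_json; this is not rstructor's actual API, and the `Ticket` type is invented for illustration.

```rust
// Sketch of LLM structured extraction with plain serde (not rstructor's API).
use serde::Deserialize;

#[derive(Debug, Deserialize)]
struct Ticket {
    title: String,
    priority: u8, // expected to be 1..=5
    tags: Vec<String>,
}

fn parse_ticket(llm_json_reply: &str) -> Result<Ticket, String> {
    // Deserialize the model's JSON reply into the typed struct...
    let ticket: Ticket = serde_json::from_str(llm_json_reply).map_err(|e| e.to_string())?;
    // ...then enforce constraints the type system alone cannot express.
    if !(1..=5).contains(&ticket.priority) {
        return Err(format!("priority out of range: {}", ticket.priority));
    }
    Ok(ticket)
}

fn main() {
    let reply = r#"{"title": "Fix login bug", "priority": 2, "tags": ["auth"]}"#;
    println!("{:?}", parse_ticket(reply).unwrap());
}
```

Libraries in this space differ mainly in how they derive the schema and prompt from the type and how they retry when validation fails.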
-
elicitation
Conversational elicitation of strongly-typed Rust values via MCP
-
aicommit
A CLI tool that generates concise and descriptive git commit messages using LLMs
-
deepwiki-rs
deepwiki-rs (also known as Litho) is a high-performance automatic generation engine for C4 architecture documentation, developed using Rust. It can intelligently analyze project structures…
-
musicgpt
Generate music based on natural language prompts using LLMs running locally
-
aichat
All-in-one LLM CLI Tool
-
kalosm-sample
A common interface for token sampling and helpers for structured LLM sampling
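As background for sampling helpers like the crate above: decoding the next token usually means turning logits into probabilities (a temperature-scaled softmax) and drawing from the top candidates. A minimal top-k sketch using the rand crate (0.8-style paths); it illustrates the general technique, not kalosm-sample's API.

```rust
// Conceptual top-k + temperature sampling over raw logits.
use rand::distributions::WeightedIndex;
use rand::prelude::*;

fn sample_top_k(logits: &[f32], k: usize, temperature: f32, rng: &mut impl Rng) -> usize {
    // Sort token indices by logit and keep the k highest.
    let mut indexed: Vec<(usize, f32)> = logits.iter().copied().enumerate().collect();
    indexed.sort_by(|a, b| b.1.partial_cmp(&a.1).unwrap());
    indexed.truncate(k.max(1));

    // Softmax weights over the surviving logits, scaled by temperature.
    let max_logit = indexed[0].1;
    let weights: Vec<f32> = indexed
        .iter()
        .map(|&(_, l)| ((l - max_logit) / temperature.max(1e-6)).exp())
        .collect();

    // Draw one surviving token proportionally to its probability.
    let dist = WeightedIndex::new(&weights).unwrap();
    indexed[dist.sample(rng)].0
}

fn main() {
    let logits = [1.0_f32, 3.0, 0.5, 2.5];
    let mut rng = rand::thread_rng();
    println!("picked token {}", sample_top_k(&logits, 2, 0.8, &mut rng));
}
```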
-
peas
A CLI-based, flat-file issue tracker for humans and robots
-
ai-gateway
AI gateway for managing and routing LLM requests - Govern, Secure, and Optimize your AI Traffic
-
error-toon
Compress verbose browser errors for LLM consumption. Save 70-90% tokens.
-
mpatch
A smart, context-aware patch tool that applies diffs using fuzzy matching, ideal for AI-generated code
-
swiftide-agents
Fast, streaming indexing, query, and agentic LLM applications in Rust
-
kwaak
Run a team of autonomous agents on your code, right from your terminal
-
singleshot
A CLI tool for testing AI models with a single prompt
-
rmcp-actix-web
actix-web transport implementations for RMCP (Rust Model Context Protocol)
-
dynamo-parsers
Dynamo Parser Library for Tool Calling and Reasoning
-
rmcp-openapi
Converts OpenAPI specifications to MCP tools
-
lui
LLM UI for the command line, using the API of Open WebUI
-
onwards
A flexible LLM proxy library
-
mcp-host
Production-grade MCP host crate for building Model Context Protocol servers
-
llm_models_spider
Auto-updated registry of LLM model capabilities (vision, audio, etc.)
-
g3-glitter-bomb
✨💖 GB (G3-Glitter-Bomb) - Dialectical multi-agent autocoding with theatrical personas 💖✨
-
trickery
CLI tool for generating textual artifacts using LLM
-
rag-rs
A Rust-native implementation of the RAG stack
-
jumble
An MCP server that provides queryable, on-demand project context to LLMs
-
llmnop
A command-line tool for benchmarking the performance of LLM inference endpoints
-
repo-flatten
Flatten all files in a repository into a single file for LLM consumption. Ignores files matched by .gitignore as well as hidden files.
-
soul-core
Async agentic runtime for Rust — steerable agent loops, context management, multi-provider LLM abstraction, virtual filesystem, WASM-ready
-
langfuse-client-base
Auto-generated Langfuse API client from OpenAPI specification
-
r2t
A fast CLI tool to convert a repository's structure and contents into a single text file, useful for providing context to LLMs
-
ratatoskr-cli
Trace-first, deterministic execution for language model workflows
-
vllora
AI gateway for managing and routing LLM requests - Govern, Secure, and Optimize your AI Traffic
-
autogpt
🦀 A Pure Rust Framework For Building AGIs
-
mcp-council
MCP server for multi-LLM peer review and council deliberation workflow
-
devcat
A micro-version control system for your AI development loop
-
littrs-ruff-python-parser
Vendored ruff_python_parser for littrs (from github.com/astral-sh/ruff)
-
cloudllm
A batteries-included Rust toolkit for building intelligent agents with LLM integration, multi-protocol tool support, and multi-agent orchestration
-
peft-rs
Comprehensive PEFT (Parameter-Efficient Fine-Tuning) adapter library for Rust
-
dynamo-config
Dynamo Inference Framework
-
normy
Ultra-fast, zero-copy text normalization for Rust NLP pipelines & tokenizers
-
ramparts
A CLI tool for scanning Model Context Protocol (MCP) servers
-
pearls
A lightweight CLI for managing long-running task graphs for coding agents
-
prism-mcp-rs
Production-grade Rust SDK for Model Context Protocol (MCP) - Build AI agents, LLM integrations, and assistant tools with enterprise features
-
aof-core
Core types, traits, and abstractions for AOF framework
-
langdb_core
AI gateway Core for LangDB AI Gateway
-
lokomotiv
Local orchestration layer for coordinating multiple LLM backends
-
mdstream
Streaming-first Markdown middleware for LLM output (committed + pending blocks, render-agnostic)
-
trustee
A general-purpose agent that can morph into different specialized agents using WASM lifecycle plugins
-
mosec
Model Serving made Efficient in the Cloud
-
kalosm-language-model
A common interface for language models/transformers
-
genai-rs
client library for Google's Generative AI (Gemini) API with streaming, function calling, and multi-turn conversations
-
capsule-run
Secure WASM runtime to isolate and manage AI agent tasks
-
ct2rs
Rust bindings for OpenNMT/CTranslate2
-
systemprompt-traits
Minimal shared traits and contracts for systemprompt.io
-
outlines-core
Structured Generation
-
systemprompt-logging
Core logging module for systemprompt.io OS
-
claude-code-agent-sdk
Rust SDK for Claude Code CLI with bidirectional streaming, hooks, custom tools, and plugin support
-
aidaemon
A personal AI agent that runs as a background daemon, accessible via Telegram, Slack, or Discord, with tool use, MCP integration, and persistent memory
-
files-to-prompt
Concatenates a directory full of files into a single prompt for use with LLMs
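Several crates in this list (files-to-prompt, repo-flatten, gitmelt, r2t, quagga, codecat, ...) share the same core move: walk a tree and concatenate file contents under path headers. A bare-bones sketch of that idea using only std, skipping hidden entries; real tools add .gitignore handling, size limits, and token budgeting.

```rust
// Conceptual "flatten a directory into one LLM prompt" walk (std only).
use std::fs;
use std::path::Path;

fn flatten(dir: &Path, out: &mut String) -> std::io::Result<()> {
    for entry in fs::read_dir(dir)? {
        let path = entry?.path();
        let name = path.file_name().and_then(|n| n.to_str()).unwrap_or("");
        if name.starts_with('.') {
            continue; // skip hidden files and directories
        }
        if path.is_dir() {
            flatten(&path, out)?;
        } else if let Ok(text) = fs::read_to_string(&path) {
            // Non-UTF-8 (binary) files fail to decode and are silently skipped.
            out.push_str(&format!("===== {} =====\n{}\n\n", path.display(), text));
        }
    }
    Ok(())
}

fn main() -> std::io::Result<()> {
    let mut prompt = String::new();
    flatten(Path::new("."), &mut prompt)?;
    println!("{prompt}");
    Ok(())
}
```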
-
minimax-cli
Unofficial MiniMax M2.1 CLI - Just run 'minimax' to start chatting
-
firecrawl
Rust SDK for Firecrawl API
-
saorsa-ai
Unified multi-provider LLM API
-
llmkit
Production-grade LLM client - 100+ providers, 11,000+ models. Pure Rust.
-
deepseek-tui
Unofficial DeepSeek CLI - Just run 'deepseek' to start chatting
-
railgun
CLI - Claude Code security hook for LLM protection
-
llm-kit-provider
Provider interface and traits for the LLM Kit - defines the contract for implementing AI model providers
-
ask-sh
An AI command line assistant, which is context-aware and multi-turn capable
-
multi-llm
Unified multi-provider LLM client with support for OpenAI, Anthropic, Ollama, and LMStudio
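The unified-client crates here (multi-llm, llm, saorsa-ai, ai-lib, llm-connector, ...) generally reduce to one idea: a provider trait that every backend implements, so calling code never names a vendor. The trait and types below are hypothetical and only show the shape of that design.

```rust
// Hypothetical provider abstraction, not any listed crate's API.
trait ChatBackend {
    fn complete(&self, prompt: &str) -> Result<String, String>;
}

struct EchoBackend; // stand-in for a real provider client

impl ChatBackend for EchoBackend {
    fn complete(&self, prompt: &str) -> Result<String, String> {
        Ok(format!("echo: {prompt}"))
    }
}

fn run(backend: &dyn ChatBackend, prompt: &str) {
    match backend.complete(prompt) {
        Ok(reply) => println!("{reply}"),
        Err(e) => eprintln!("backend error: {e}"),
    }
}

fn main() {
    run(&EchoBackend, "hello");
}
```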
-
awpak-ai-cmd-client
A command-line client for executing AI workflow graphs using the awpak-ai library
-
adk-agent
Agent implementations for Rust Agent Development Kit (ADK-Rust, LLM, Custom, Workflow agents)
-
adk-core
Core traits and types for Rust Agent Development Kit (ADK-Rust) agents, tools, sessions, and events
-
littrs-ruff-python-ast
Vendored ruff_python_ast for littrs (from github.com/astral-sh/ruff)
-
octolib
Self-sufficient AI provider library with multi-provider support, embedding models, model validation, and cost tracking
-
baml-cli
BAML CLI for Rust
-
mcp-protocol-sdk
Production-ready Rust SDK for the Model Context Protocol (MCP) with multiple transport support
-
gitmelt
Turn a repository into a single-file text digest to conveniently feed into an LLM
-
systemprompt-models
Shared data models and types for systemprompt.io OS
-
awful_rustdocs
Generate Rustdoc comments automatically using Awful Jade and a Nushell-based AST extractor
-
llmsim
LLM Traffic Simulator - A lightweight, high-performance LLM API simulator
-
elif-core
Core architecture foundation for the elif.rs LLM-friendly web framework
-
autoagents
Agent Framework for Building Autonomous Agents
-
fprovider
Provider library for the fiddlesticks agent harness framework
-
aofctl
CLI for AOF framework - kubectl-style agent orchestration
-
llm-stack
Core traits, types, and tools for the llm-stack SDK
-
cli_engineer
An autonomous CLI coding agent
-
vtcode
A Rust-based terminal coding agent with modular architecture supporting multiple LLM providers
-
chroma-api-types
Chroma-provided crate for api-types used in the Chroma API
-
infiniloom
High-performance repository context generator for LLMs - Claude, GPT-4, Gemini optimized
-
llmvm-core
The core application for llmvm
-
udiffx
Parse and apply LLM-optimized unified diff + XML file changes
-
sochdb
LLM-optimized database with native vector search
-
red-green-refactor
project to demonstrate the red-green-refactor cycle in TDD
-
toondb
LLM-optimized database with native vector search
-
fchat
Chat library for the fiddlesticks agent harness framework
-
vllora_llm
LLM client layer for the Vllora AI Gateway: unified chat-completions over multiple providers (OpenAI, Anthropic, Gemini, Bedrock, LangDB proxy) with optional tracing/telemetry
-
famulus
LSP server integrating LLMs
-
herolib-ai
AI client with multi-provider support (Groq, OpenRouter, SambaNova) and automatic failover
-
assay-core
High-performance evaluation framework for LLM agents (Core)
-
octoroute
Intelligent multi-model router for self-hosted LLMs
-
sofos
An interactive AI coding agent for your terminal
-
aico-cli
Scriptable control over LLMs from the terminal
-
gent-lang
A programming language for AI agents
-
awful_dataset_builder
Build LLM-ready Q/A datasets from reference text-to-question mappings produced by Awful Knowledge Synthesizer
-
splintr
Fast Rust BPE tokenizer with Python bindings
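For context on what a BPE tokenizer does: training repeatedly merges the most frequent adjacent symbol pair within words into a new vocabulary entry. A toy sketch of that loop follows; it is conceptual only and has nothing to do with splintr's actual implementation or its Python bindings.

```rust
// Toy BPE training loop: repeatedly merge the most frequent adjacent pair.
use std::collections::HashMap;

fn most_frequent_pair(words: &[Vec<String>]) -> Option<(String, String)> {
    let mut counts: HashMap<(String, String), usize> = HashMap::new();
    for word in words {
        for pair in word.windows(2) {
            *counts.entry((pair[0].clone(), pair[1].clone())).or_insert(0) += 1;
        }
    }
    counts.into_iter().max_by_key(|(_, c)| *c).map(|(pair, _)| pair)
}

fn merge_in_word(word: &[String], pair: &(String, String)) -> Vec<String> {
    let mut out = Vec::new();
    let mut i = 0;
    while i < word.len() {
        if i + 1 < word.len() && word[i] == pair.0 && word[i + 1] == pair.1 {
            out.push(format!("{}{}", pair.0, pair.1)); // fuse the pair
            i += 2;
        } else {
            out.push(word[i].clone());
            i += 1;
        }
    }
    out
}

fn main() {
    // Each word starts as a sequence of single characters.
    let mut words: Vec<Vec<String>> = ["low", "lower", "lowest"]
        .iter()
        .map(|w| w.chars().map(|c| c.to_string()).collect())
        .collect();

    // A handful of merge steps; real tokenizers learn tens of thousands.
    for _ in 0..4 {
        if let Some(pair) = most_frequent_pair(&words) {
            words = words.iter().map(|w| merge_in_word(w, &pair)).collect();
        }
    }
    println!("{:?}", words);
}
```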
-
paip
Like cat, but through an LLM
-
shimmy
Lightweight sub-5MB Ollama alternative with native SafeTensors support. No Python dependencies, 2x faster loading. Now with GitHub Spec-Kit integration for systematic development.
-
deepsearch
A CLI for AI-powered research assistance, using local LLMs to decompose questions, search the web, and synthesize answers
-
ruvllm-wasm
WASM bindings for RuvLLM - browser-compatible LLM inference runtime with WebGPU acceleration
-
cc-sdk
Rust SDK for Claude Code CLI with full interactive capabilities
-
llama-gguf
A high-performance Rust implementation of llama.cpp - LLM inference engine with full GGUF support
-
miyabi-llm
LLM abstraction layer for Miyabi - GPT-OSS-20B integration
-
scouter-profile
Scouter profile logic
-
claude-agent
Rust SDK for building AI agents with Anthropic's Claude - Direct API, no CLI dependency
-
oxidizr
A Rust-based LLM training framework built on Candle
-
agent-sdk
Rust Agent SDK for building LLM agents
-
tryparse
Multi-strategy parser for messy real-world data. Handles broken JSON, markdown wrappers, and type mismatches.
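Lenient parsing of model output typically layers strategies from strict to heuristic. A sketch of that idea with serde_json; it mirrors the spirit of tryparse but is not its API.

```rust
// Multi-strategy JSON recovery: strict parse, then strip markdown fences,
// then fall back to the outermost {...} slice.
use serde_json::Value;

fn parse_lenient(raw: &str) -> Option<Value> {
    // Strategy 1: the text is already valid JSON.
    if let Ok(v) = serde_json::from_str(raw.trim()) {
        return Some(v);
    }
    // Strategy 2: strip a ```json ... ``` markdown wrapper.
    let stripped = raw
        .trim()
        .trim_start_matches("```json")
        .trim_start_matches("```")
        .trim_end_matches("```")
        .trim();
    if let Ok(v) = serde_json::from_str(stripped) {
        return Some(v);
    }
    // Strategy 3: take the slice between the first '{' and the last '}'.
    let (start, end) = (raw.find('{')?, raw.rfind('}')?);
    serde_json::from_str(&raw[start..=end]).ok()
}

fn main() {
    let messy = "Sure! Here is the data:\n```json\n{\"ok\": true}\n```";
    println!("{:?}", parse_lenient(messy));
}
```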
-
agents-core
Core traits, data models, and prompt primitives for building deep agents
-
kalosm-language
A set of pretrained language models
-
consciousness_experiments
RustyWorm: Universal AI Mimicry Engine with Dual-Process Architecture
-
langchain-rust
LangChain for Rust, the easiest way to write LLM-based programs in Rust
-
reasoning-parser
Parser for AI model reasoning/thinking outputs (chain-of-thought, etc.)
-
weavex
Weave together web search and AI reasoning - an autonomous research agent powered by local LLMs
-
swiftide-docker-executor
A docker executor for swiftide agent tools
-
you
Translate your natural language into executable command(s)
-
llm-git
AI-powered git commit message generator using Claude and other LLMs via OpenAI-compatible APIs
-
aof-runtime
Agent execution runtime with task orchestration
-
cargo-usage-rules
A cargo subcommand to aggregate usage-rules.md files from Rust dependencies for AI agent consumption. Inspired by https://github.com/ash-project/usage_rules
-
nb-mcp-server
MCP server wrapping the nb CLI for LLM-friendly note-taking
-
dircat
High-performance Rust utility that concatenates and displays directory contents, similar to the C++ DirCat
-
reasonkit
The Reasoning Engine — Complete ReasonKit Suite | Auditable Reasoning for Production AI
-
dialog_detective
Automatically identify and rename unknown tv series video files by letting AI listen to their dialogue
-
gguf-rs
GGUF file parser
-
ai-lib
A unified AI SDK for Rust providing a single interface for multiple AI providers with hybrid architecture
-
hermes-llm
LLM training from scratch using Candle
-
codecat
「 Merge Code Repository into a Single File | Respects .gitignore | Ideal for LLM Code Analysis 」
-
siumai-registry
Registry and factories for siumai
-
llm_api_access
A package to query popular LLMs
-
modelexpress-common
Shared utilities for Model Express client and server
-
dynamic-mcp
MCP proxy server that reduces LLM context overhead with on-demand tool loading from multiple upstream servers
-
littrs-ruff-source-file
Vendored ruff_source_file for littrs (from github.com/astral-sh/ruff)
-
deciduous
Decision graph tooling for AI-assisted development. Track every goal, decision, and outcome. Survive context loss. Query your reasoning.
-
perspt
Your Terminal's Window to the AI World - A high-performance CLI for LLMs with chat and autonomous agent modes
-
llm-utl
Convert code repositories into LLM-friendly prompts with smart chunking and filtering
-
littrs-ruff-python-trivia
Vendored ruff_python_trivia for littrs (from github.com/astral-sh/ruff)
-
simple-agents-healing
Response healing system for SimpleAgents - BAML-inspired JSON parsing and coercion
-
hai-cli
A CLI with a REPL for hackers using LLMs
-
xai-grpc-client
Feature-complete gRPC client for xAI's Grok API with streaming, tools, multimodal support
-
ruvllm-esp32
Tiny LLM inference for ESP32 microcontrollers with INT8/INT4 quantization, multi-chip federation, RuVector semantic memory, and SNN-gated energy optimization
-
siumai-protocol-anthropic
Anthropic Messages protocol standard mapping for siumai
-
tldrs
README.md generator powered by LLMs and codebase analysis
-
turbovault
Production-grade MCP server for Obsidian vault management - Transform your vault into an intelligent knowledge system for AI agents
-
adk-model
LLM model integrations for Rust Agent Development Kit (ADK-Rust) (Gemini, OpenAI, Claude, DeepSeek, etc.)
-
swarm-engine-llm
LLM integration backends for SwarmEngine
-
context-cli
CLI for building, resolving, and inspecting context caches
-
spider_agent
A concurrent-safe multimodal agent for web automation and research
-
filesystem-mcp-rs
Rust port of the official MCP filesystem server - fast, safe, protocol-compatible file operations
-
claude-code-api
OpenAI-compatible API gateway for Claude Code CLI
-
tenere
TUI interface for LLMs written in Rust
-
adk-eval
Agent evaluation framework for ADK-Rust
-
acton-ai
An agentic AI framework where each agent is an actor
-
debugger-cli
LLM-friendly debugger CLI using the Debug Adapter Protocol
-
siumai-provider-openai
OpenAI provider implementation for siumai (plus OpenAI-compatible vendor presets)
-
systemprompt-provider-contracts
Provider trait contracts for systemprompt.io - LLM, Tool, Job, Template, Component providers
-
llmx
Utilities for working with LLM outputs (e.g. fuzzy JSON extraction/parsing)
-
scouter-evaluate
LLM Evaluation logic for Scouter
-
tower-llm
A Tower-based framework for building LLM & agent workflows in Rust
-
commitbot
A CLI assistant that generates commit and PR messages from your diffs using LLMs
-
blz-mcp
MCP server for blz documentation search
-
fetchkit
AI-friendly web content fetching and HTML-to-Markdown conversion library
-
ultrafast-mcp
High-performance, ergonomic Model Context Protocol (MCP) implementation in Rust
-
swarm-engine-eval
Evaluation framework for SwarmEngine
-
llmux
Zero-reload model switching for vLLM - manages multiple models on shared GPU
-
systemprompt
systemprompt.io - Extensible AI agent orchestration framework
-
ask-cmd
AI-powered CLI assistant - modern Unix meets AI
-
llm-tokenizer
LLM tokenizer library with caching and chat template support
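For context on the "chat template" part of the description: templating turns role-tagged messages into the single prompt string a model was trained on. A conceptual sketch follows; the markers here are made up, and real tokenizers ship model-specific templates (ChatML, Llama-style, etc.).

```rust
// Conceptual chat-template rendering with invented markers.
struct Message {
    role: &'static str,
    content: &'static str,
}

fn render(messages: &[Message]) -> String {
    let mut prompt = String::new();
    for m in messages {
        prompt.push_str(&format!("<|{}|>\n{}\n<|end|>\n", m.role, m.content));
    }
    prompt.push_str("<|assistant|>\n"); // cue the model to answer
    prompt
}

fn main() {
    let msgs = [
        Message { role: "system", content: "You are terse." },
        Message { role: "user", content: "Explain borrowing." },
    ];
    print!("{}", render(&msgs));
}
```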
-
messageforge
lightweight Rust library for creating structured messages in chat systems, including HumanMessage, AiMessage, SystemMessage, and more. It supports easy extensibility through macros…
-
agtrace
The official CLI for agtrace, built on top of agtrace-sdk. Visualize and analyze AI agent execution traces.
-
siumai-protocol-openai
OpenAI(-like) protocol standard mapping for siumai
-
cargo-ai
Build lightweight AI agents with Cargo. Powered by Rust. Declared in JSON.
-
langchain-ai-rust
Build LLM applications in Rust with type safety: chains, agents, RAG, LangGraph, embeddings, vector stores, and 20+ document loaders. A LangChain port supporting OpenAI, Claude, Gemini…
-
mcp-langbase-reasoning
MCP server providing structured reasoning via Langbase Pipes - linear, tree, divergent, Graph-of-Thoughts, and decision framework modes
-
systemprompt-identifiers
Core identifier types for systemprompt.io OS
-
url-preview
High-performance URL preview generator for messaging and social media applications
-
ruvector-sona
Self-Optimizing Neural Architecture - Runtime-adaptive learning for LLM routers with two-tier LoRA, EWC++, and ReasoningBank
-
llm-connector
Next-generation Rust library for LLM protocol abstraction with native multi-modal support. Supports 11+ providers (OpenAI, Anthropic, Google, Aliyun, Zhipu, Ollama, Tencent, Volcengine…
-
llm-converter
LLM protocol converter with Babel-style middleware, supporting 18+ AI protocols
-
toon
Token-Oriented Object Notation – a token-efficient JSON alternative for LLM prompts
-
agents-toolkit
Reusable tools and utilities for Rust deep agents
-
swarms-rs
The Bleeding-Edge Production-Ready Multi-Agent Orchestration Framework in Rust
-
caro
Convert natural language to shell commands using local LLMs
-
bitnet-quantize
Microsoft BitNet b1.58 quantization and inference for Rust
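For context on the INT8 side of that description: symmetric quantization maps float weights to small integers that share one scale factor. A minimal sketch of the general technique; BitNet b1.58's ternary scheme and bitnet-quantize's actual API differ from this.

```rust
// Minimal symmetric INT8 quantization round-trip (general technique only).
fn quantize_i8(weights: &[f32]) -> (Vec<i8>, f32) {
    let max_abs = weights.iter().fold(0.0_f32, |m, w| m.max(w.abs()));
    let scale = if max_abs == 0.0 { 1.0 } else { max_abs / 127.0 };
    let q = weights.iter().map(|w| (w / scale).round() as i8).collect();
    (q, scale)
}

fn dequantize(q: &[i8], scale: f32) -> Vec<f32> {
    q.iter().map(|&v| v as f32 * scale).collect()
}

fn main() {
    let w = [0.12_f32, -0.5, 0.03, 0.49];
    let (q, scale) = quantize_i8(&w);
    println!("quantized: {:?}, scale: {scale}", q);
    println!("roundtrip: {:?}", dequantize(&q, scale));
}
```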
-
api_ollama
Ollama local LLM runtime API client for HTTP communication
-
baml
BAML runtime for Rust - type-safe LLM function calls
-
aof-llm
Multi-provider LLM abstraction layer
-
g3-core
Core engine for G3/GB AI coding agent
-
rsmap
Generate multi-layered, LLM-friendly index files for Rust codebases
-
spade-common
Helper crate for https://spade-lang.org/
-
agents-aws
AWS integrations for the Rust deep agents SDK
-
tru
TOON reference implementation in Rust (JSON <-> TOON)
-
agents-runtime
Async runtime orchestration for Rust deep agents
-
aca
A Rust-based agentic tool that automates coding tasks using Claude Code and OpenAI Codex CLI integrations
-
quagga
CLI tool that combines multiple text files into a single prompt suitable for Large Language Models
-
llms-from-scratch-rs
Rust (candle) code for Build a LLM From Scratch by Sebastian Raschka
-
chatpack-cli
CLI tool for parsing and converting chat exports into LLM-friendly formats
-
deputy
experimental terminal-based AI coding assistant that integrates directly with your filesystem and shell. Deputy leverages agentic LLM systems to read code, manipulate files, execute shell commands…
-
memory-wiki
A local-first, semantic knowledge base and MCP server for LLMs
-
kalosm
interface for pretrained AI models
-
modelrelay
Rust SDK for the ModelRelay API
-
mistralrs-mcp
MCP (Model Context Protocol) client for mistral.rs
-
mixtape-core
An agentic AI framework for Rust
-
rv-tool
Non-invasive AI code review for any type of workflow
-
fm-rs
Rust bindings for Apple's FoundationModels.framework
-
graph-flow
A high-performance, type-safe framework for building multi-agent workflow systems in Rust
-
awful_aj
A CLI for interacting with OpenAI compatible APIs
-
secretary
Transform natural language into structured data using large language models (LLMs) with powerful derive macros
-
avocado-cli
CLI tool for AvocadoDB - deterministic context compilation for AI agents
-
textcon
Template text files with file/directory references for AI/LLM consumption
-
check-the-tone
ctt - A CLI tool to check and improve the tone of your messages using local LLMs
-
influence
CLI tool for downloading HuggingFace models and running local LLM inference
-
inference-gateway-sdk
Rust SDK for interacting with various language models through the Inference Gateway
-
tools-rs
Core functionality for the tools-rs tool collection system
-
cowork-core
AI-powered software development system that automates the entire lifecycle from requirements analysis to code delivery through specialized agent collaboration
-
openresponses-rust
client library for the Open Responses API specification
-
nanobot
Rust port of nanobot, a lightweight personal AI assistant with tools and multi-channel gateway support
-
tools_core
Core functionality and schema generation for the tools collection system
-
opensession
CLI for opensession.io - discover, upload, and manage AI coding sessions
-
reson-agentic
Agents are just functions - production-grade LLM agent framework
-
toak-rs
A high-performance library and CLI tool for tokenizing git repositories, cleaning code, and generating embeddings
-
siumai-protocol-gemini
Google Gemini protocol standard (mapping + streaming) for siumai
-
terminal-aichat
A CLI for AI/LLM chat in the terminal. Extremely simple and easy to use. Uses the OpenAI-compatible /v1/chat/completion API.