5 releases (breaking)

Uses new Rust 2024

0.5.0 May 14, 2026
0.4.0 Apr 15, 2026
0.3.0 Apr 15, 2026
0.2.0 Apr 5, 2026
0.0.1 Apr 5, 2026

#1945 in Artificial intelligence


95 downloads per month
Used in 4 crates (3 directly)

MIT license

195KB
4K SLoC

⚡ AI Gateway

A protocol-faithful, multi-provider AI gateway written in Rust.

Route requests across OpenAI, Anthropic, Google Gemini, and the entire OpenAI-compatible ecosystem — with native wire types, streaming SSE, and zero lowest-common-denominator abstractions.



Why AI Gateway?

Most multi-provider LLM wrappers force every provider into a single "universal" message format, silently dropping fields and features along the way. AI Gateway takes the opposite approach:

  • Protocol-faithful — Each provider crate models the upstream API exactly as documented. No fields are silently dropped, no semantics are lost.
  • Forward-compatible — All wire types carry #[serde(flatten)] extra to survive upstream API changes without recompilation.
  • Translation at the edge — Cross-provider mapping happens at the gateway layer via TryFrom/Into, not inside individual providers.
  • Streaming-first — Full SSE event fidelity for every provider.
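The forward-compatibility point can be sketched without the real wire types. In the provider crates this is done with `#[serde(flatten)] extra`; the type below is a hypothetical, std-only illustration of the same idea: known fields are typed, and every unrecognized upstream field survives in an `extra` map instead of being dropped.

```rust
use std::collections::BTreeMap;

/// Sketch of a forward-compatible wire type (illustrative, not the crate's
/// real API): known fields are typed, everything else lands in `extra`.
#[derive(Debug, Clone, PartialEq)]
struct ChatMessage {
    role: String,
    content: String,
    extra: BTreeMap<String, String>, // unknown upstream fields survive here
}

/// Pull out the known fields; whatever remains is preserved verbatim.
fn decode(fields: BTreeMap<String, String>) -> ChatMessage {
    let mut extra = fields;
    let role = extra.remove("role").unwrap_or_default();
    let content = extra.remove("content").unwrap_or_default();
    ChatMessage { role, content, extra }
}

/// Re-emit known fields alongside the preserved unknown ones.
fn encode(msg: &ChatMessage) -> BTreeMap<String, String> {
    let mut out = msg.extra.clone();
    out.insert("role".into(), msg.role.clone());
    out.insert("content".into(), msg.content.clone());
    out
}
```

A round-trip through `decode`/`encode` keeps a field the type has never heard of, which is exactly what lets the wire types survive upstream API changes without recompilation.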

Architecture

```mermaid
flowchart TB
    Client([Your Application])

    Client -->|Unified API| Gateway

    subgraph Gateway["⚡ AI Gateway"]
        direction TB
        Router[Request Router]
        Translate[Protocol Translation]
        Stream[SSE Streaming Engine]
        Router --> Translate --> Stream
    end

    subgraph Providers["Provider Crates"]
        direction LR
        OpenAI["aigw-openai"]
        Anthropic["aigw-anthropic"]
        Compat["aigw-openai-compat"]
        Gemini["aigw-gemini"]
    end

    Gateway --> OpenAI & Anthropic & Compat & Gemini

    OpenAI -->|Responses API\nChat Completions| OpenAI_API["OpenAI API"]
    Anthropic -->|Messages API| Anthropic_API["Anthropic API"]
    Compat -->|Chat Completions| Compat_APIs["Groq · Together\nvLLM · Fireworks\nPerplexity · LM Studio"]
    Gemini -->|generateContent| Gemini_API["Google Gemini API"]
```

Supported Providers

| Provider | Crate | Status |
|----------|-------|--------|
| OpenAI | `aigw-openai` | 🚧 Active |
| Anthropic | `aigw-anthropic` | 🚧 Active |
| Google Gemini | `aigw-gemini` | 🏗️ Skeleton |

OpenAI-Compatible Providers via aigw-openai-compat

Configure any OpenAI-compatible provider with a base_url + Quirks capability flags — no new crate needed.

Adding a new provider? If the API is OpenAI-compatible, add a Quirks config to aigw-openai-compat. Only create a new crate for providers with a distinct wire format.
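As a concrete sketch of that configuration path (hypothetical: the real `aigw-openai-compat` API may differ), the capability flags below reuse the values from the Quirks example later in this README, and the base URL is Groq's published OpenAI-compatible endpoint:

```rust
/// Capability flags for an OpenAI-compatible provider (illustrative sketch).
#[derive(Debug, Clone, Copy)]
struct Quirks {
    supports_responses_api: bool,
    supports_tool_choice: bool,
    supports_parallel_tool_calls: bool,
    supports_vision: bool,
    supports_streaming: bool,
}

/// A provider is just a base URL plus its quirks; no new crate needed.
struct CompatProvider {
    base_url: String,
    quirks: Quirks,
}

/// Example: Groq speaks the Chat Completions dialect.
fn groq() -> CompatProvider {
    CompatProvider {
        base_url: "https://api.groq.com/openai/v1".to_string(),
        quirks: Quirks {
            supports_responses_api: false,
            supports_tool_choice: true,
            supports_parallel_tool_calls: false,
            supports_vision: true,
            supports_streaming: true,
        },
    }
}
```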

Design Principles

No universal message type

Each provider crate defines its own native request/response types mirroring the upstream API. Translation between providers is handled via TryFrom/Into at the gateway layer.
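A gateway-level conversion might look like the following. The types are illustrative stand-ins, not the real crate APIs; the system-prompt handling is one example of a semantic difference that edge translation has to absorb:

```rust
/// Illustrative native request shapes: each provider keeps its own wire
/// format, and the gateway converts between them with TryFrom at the edge.
#[derive(Debug, Clone)]
struct OpenAiChatRequest {
    model: String,
    messages: Vec<(String, String)>, // (role, content)
}

#[derive(Debug, Clone)]
struct AnthropicMessagesRequest {
    model: String,
    system: Option<String>, // Anthropic carries the system prompt out-of-band
    messages: Vec<(String, String)>,
}

impl TryFrom<OpenAiChatRequest> for AnthropicMessagesRequest {
    type Error = String;

    fn try_from(req: OpenAiChatRequest) -> Result<Self, Self::Error> {
        let mut messages = req.messages;
        // Lift a leading "system" message into the dedicated field: the
        // kind of semantic mapping that belongs at the gateway layer.
        let system = if matches!(messages.first(), Some((role, _)) if role == "system") {
            Some(messages.remove(0).1)
        } else {
            None
        };
        Ok(AnthropicMessagesRequest { model: req.model, system, messages })
    }
}
```

Because the mapping is `TryFrom` rather than `From`, a request that genuinely cannot be expressed in the target protocol fails loudly instead of being silently degraded.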

Quirks-based compat

Third-party OpenAI-compatible providers declare their capabilities through a Quirks struct. Unsupported fields are stripped before sending, not silently ignored.

```rust
Quirks {
    supports_responses_api: false,
    supports_tool_choice: true,
    supports_parallel_tool_calls: false,
    supports_vision: true,
    supports_streaming: true,
}
```
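Stripping might then work along these lines (a hypothetical sketch, not the crate's actual code): any field the target provider cannot handle is cleared from the outgoing request, rather than sent upstream and silently ignored.

```rust
/// Reduced capability flags for this sketch (names are illustrative).
#[derive(Debug, Clone, Copy)]
struct Quirks {
    supports_tool_choice: bool,
    supports_parallel_tool_calls: bool,
}

#[derive(Debug, Clone, Default)]
struct ChatRequest {
    tool_choice: Option<String>,
    parallel_tool_calls: Option<bool>,
}

/// Clear every field the target provider does not support before sending.
fn apply_quirks(mut req: ChatRequest, quirks: Quirks) -> ChatRequest {
    if !quirks.supports_tool_choice {
        req.tool_choice = None;
    }
    if !quirks.supports_parallel_tool_calls {
        req.parallel_tool_calls = None;
    }
    req
}
```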

Secrets never leak

API keys are stored as secrecy::SecretString — their Debug output is redacted, so keys never appear in logs or error messages.
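The idea can be shown with a std-only stand-in (the real crates use secrecy::SecretString, whose `expose_secret()` plays the role of `expose()` here):

```rust
use std::fmt;

/// Std-only sketch of the secrecy::SecretString idea: the Debug impl
/// redacts the inner value so keys cannot leak through logging.
struct ApiKey(String);

impl fmt::Debug for ApiKey {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        f.write_str("ApiKey([REDACTED])")
    }
}

impl ApiKey {
    /// The only way to read the key is this deliberate, explicit call.
    fn expose(&self) -> &str {
        &self.0
    }
}
```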

Quick Start

```sh
cargo build --workspace         # Build
cargo test --workspace          # Test
cargo clippy --workspace        # Lint
```

See CONTRIBUTING.md for the full development guide, code conventions, and how to add new providers.

License

MIT


lib.rs:

Umbrella crate for AI Gateway provider clients.

Re-exports all aigw-* provider crates behind feature flags as namespaced modules. All features are enabled by default; disable with default-features = false and pick only the providers you need.

```toml
# All providers (default)
aigw = "0.5"

# Only Anthropic + OpenAI
aigw = { version = "0.5", default-features = false, features = ["anthropic", "openai"] }
```

Usage: aigw::openai::OpenAIClient, aigw::anthropic::Client, etc.
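Internally, the re-export layer presumably amounts to feature-gated `pub use` lines along these lines (illustrative; the real lib.rs module list may differ):

```rust
// Sketch of feature-gated, namespaced re-exports (illustrative):
#[cfg(feature = "openai")]
pub use aigw_openai as openai;

#[cfg(feature = "anthropic")]
pub use aigw_anthropic as anthropic;

#[cfg(feature = "gemini")]
pub use aigw_gemini as gemini;

#[cfg(feature = "openai-compat")]
pub use aigw_openai_compat as openai_compat;
```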

Dependencies

~0–6MB
~100K SLoC