5 releases (breaking)

Uses Rust 2024 edition

0.5.0 May 14, 2026
0.4.0 Apr 15, 2026
0.3.0 Apr 15, 2026
0.2.0 Apr 5, 2026
0.1.0 Apr 5, 2026

Used in 9 crates (6 directly)

MIT license

65KB
985 lines

Core types and traits for the AI Gateway translation layer.

This crate defines:

  • Canonical model (model): provider-neutral request/response types based on the OpenAI Chat Completions format. Clients speak this format; providers translate to/from it.
  • Translator traits (translate): RequestTranslator, ResponseTranslator, and StreamParser — pure data-mapping interfaces with no IO or HTTP client dependency.
  • Error types (error): TranslateError for translation failures, ProviderError for upstream API errors.
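A minimal sketch of how the pieces above can fit together. The struct fields, associated types, and method names here are assumptions for illustration, not the crate's published API:

```rust
/// Canonical, provider-neutral request (OpenAI Chat Completions-shaped).
pub struct ChatRequest {
    pub model: String,
    pub messages: Vec<String>,
}

/// Canonical, provider-neutral response.
pub struct ChatResponse {
    pub content: String,
}

/// Translation failure, e.g. a field the target provider cannot express.
#[derive(Debug)]
pub enum TranslateError {
    Unsupported(String),
}

/// Maps a canonical request into one provider's native wire format.
/// Pure data mapping: no IO, no HTTP client dependency.
pub trait RequestTranslator {
    type Native;
    fn translate_request(&self, req: &ChatRequest) -> Result<Self::Native, TranslateError>;
}

/// Maps a provider's native response back into the canonical format.
pub trait ResponseTranslator {
    type Native;
    fn translate_response(&self, resp: Self::Native) -> Result<ChatResponse, TranslateError>;
}
```

Keeping IO out of these traits means every provider's translation logic can be unit-tested with plain values.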

⚡ AI Gateway

A protocol-faithful, multi-provider AI gateway written in Rust.

Route requests across OpenAI, Anthropic, Google Gemini, and the entire OpenAI-compatible ecosystem — with native wire types, streaming SSE, and zero lowest-common-denominator abstractions.



Why AI Gateway?

Most multi-provider LLM wrappers force every provider into a single "universal" message format, silently dropping fields and features along the way. AI Gateway takes the opposite approach:

  • Protocol-faithful — Each provider crate models the upstream API exactly as documented. No fields are silently dropped, no semantics are lost.
  • Forward-compatible — All wire types carry #[serde(flatten)] extra to survive upstream API changes without recompilation.
  • Translation at the edge — Cross-provider mapping happens at the gateway layer via TryFrom/Into, not inside individual providers.
  • Streaming-first — Full SSE event fidelity for every provider.

Architecture

flowchart TB
    Client([Your Application])

    Client -->|Unified API| Gateway

    subgraph Gateway["⚡ AI Gateway"]
        direction TB
        Router[Request Router]
        Translate[Protocol Translation]
        Stream[SSE Streaming Engine]
        Router --> Translate --> Stream
    end

    subgraph Providers["Provider Crates"]
        direction LR
        OpenAI["aigw-openai"]
        Anthropic["aigw-anthropic"]
        Compat["aigw-openai-compat"]
        Gemini["aigw-gemini"]
    end

    Gateway --> OpenAI & Anthropic & Compat & Gemini

    OpenAI -->|Responses API\nChat Completions| OpenAI_API["OpenAI API"]
    Anthropic -->|Messages API| Anthropic_API["Anthropic API"]
    Compat -->|Chat Completions| Compat_APIs["Groq · Together\nvLLM · Fireworks\nPerplexity · LM Studio"]
    Gemini -->|generateContent| Gemini_API["Google Gemini API"]

Supported Providers

Provider        Crate             Status
OpenAI          aigw-openai       🚧 Active
Anthropic       aigw-anthropic    🚧 Active
Google Gemini   aigw-gemini       🏗️ Skeleton

OpenAI-Compatible Providers via aigw-openai-compat

Configure any OpenAI-compatible provider with a base_url + Quirks capability flags — no new crate needed.

Adding a new provider? If the API is OpenAI-compatible, add a Quirks config to aigw-openai-compat. Only create a new crate for providers with a distinct wire format.

Design Principles

No universal message type

Each provider crate defines its own native request/response types mirroring the upstream API. Translation between providers is handled via TryFrom/Into at the gateway layer.
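A hedged sketch of what gateway-layer translation via TryFrom can look like. The message types and field names are illustrative assumptions, not the real crates' wire types; the system-role rule reflects that the Anthropic Messages API takes the system prompt as a top-level parameter rather than a message role.

```rust
// Hypothetical native types for two providers.
pub struct OpenAiMessage {
    pub role: String,
    pub content: String,
}

pub struct AnthropicMessage {
    pub role: String,
    pub text: String,
}

#[derive(Debug)]
pub struct TranslateError(pub String);

// Translation lives at the gateway layer as a fallible conversion
// between native types; providers themselves stay protocol-faithful.
impl TryFrom<OpenAiMessage> for AnthropicMessage {
    type Error = TranslateError;

    fn try_from(m: OpenAiMessage) -> Result<Self, Self::Error> {
        // Refuse rather than silently remap: Anthropic has no "system"
        // message role, so the caller must hoist it to the system field.
        if m.role == "system" {
            return Err(TranslateError(
                "system prompt must move to the top-level system field".to_string(),
            ));
        }
        Ok(AnthropicMessage { role: m.role, text: m.content })
    }
}
```

Using TryFrom (not From) makes lossy or impossible mappings an explicit error instead of a silent drop.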

Quirks-based compat

Third-party OpenAI-compatible providers declare their capabilities through a Quirks struct. Unsupported fields are stripped before sending, not silently ignored.

Quirks {
    supports_responses_api: false,
    supports_tool_choice: true,
    supports_parallel_tool_calls: false,
    supports_vision: true,
    supports_streaming: true,
}
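One way the stripping step can work, sketched with a subset of the flags above; the request type and helper function are assumptions, not the crate's real API:

```rust
/// Subset of the capability flags shown above.
pub struct Quirks {
    pub supports_tool_choice: bool,
    pub supports_parallel_tool_calls: bool,
}

/// Slice of an OpenAI-style request with optional fields that not every
/// compatible provider accepts.
pub struct CompatRequest {
    pub model: String,
    pub tool_choice: Option<String>,
    pub parallel_tool_calls: Option<bool>,
}

/// Drop fields the target provider declares it cannot handle, so they are
/// removed deliberately here rather than rejected or misread upstream.
pub fn apply_quirks(mut req: CompatRequest, q: &Quirks) -> CompatRequest {
    if !q.supports_tool_choice {
        req.tool_choice = None;
    }
    if !q.supports_parallel_tool_calls {
        req.parallel_tool_calls = None;
    }
    req
}
```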

Secrets never leak

API keys are stored as secrecy::SecretString — their Debug output is redacted, so keys never appear in logs or error messages.
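A std-only sketch of the guarantee this provides. The real secrecy::SecretString achieves it by redacting its Debug output; this hypothetical newtype shows the same pattern:

```rust
use std::fmt;

pub struct ApiKey(String);

// Debug prints a redaction marker instead of the key, so a stray
// `{:?}` in a log line cannot leak the secret.
impl fmt::Debug for ApiKey {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        f.write_str("ApiKey([REDACTED])")
    }
}

impl ApiKey {
    /// The single, explicit access point — mirrors secrecy's `expose_secret()`.
    pub fn expose(&self) -> &str {
        &self.0
    }
}
```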

Quick Start

cargo build --workspace         # Build
cargo test --workspace          # Test
cargo clippy --workspace        # Lint

See CONTRIBUTING.md for the full development guide, code conventions, and how to add new providers.

License

MIT

Dependencies

~1.5–2.7MB
~52K SLoC