Use Anthropic clients (like Claude Code) with Gemini, OpenAI, Vertex AI, or xAI backends.

A proxy server that lets you use Anthropic clients with multiple LLM providers via LiteLLM.
- OpenAI API key (if using the OpenAI provider)
- Google AI Studio (Gemini) API key (if using the Google provider)
- Google Cloud project with the Vertex AI API enabled (if using the Vertex AI provider)
- xAI API key (if using the xAI provider)
- `uv` installed
- Clone this repository:

  ```bash
  git clone https://github.com/CogAgent/claude-code-proxy.git
  cd claude-code-proxy
  ```
- Install `uv` (if you haven't already):

  ```bash
  curl -LsSf https://astral.sh/uv/install.sh | sh
  ```

  (`uv` will handle dependencies based on `pyproject.toml` when you run the server.)
- Configure environment variables. Copy the example environment file:

  ```bash
  cp .env.example .env
  ```

  Edit `.env` and fill in your API keys and model configurations:

  - `ANTHROPIC_API_KEY`: (Optional) Needed only if proxying to Anthropic models.
  - `OPENAI_API_KEY`: Your OpenAI API key (required if using the default OpenAI preference or as a fallback).
  - `GEMINI_API_KEY`: Your Google AI Studio (Gemini) API key (required if `PREFERRED_PROVIDER=google`).
  - `XAI_API_KEY`: Your xAI API key (required if `PREFERRED_PROVIDER=xai`).

  For Vertex AI:

  - `VERTEX_PROJECT_ID`: Your Google Cloud project ID (required if `PREFERRED_PROVIDER=vertex`).
  - `VERTEX_LOCATION`: The region where your Vertex AI resources are located (defaults to `us-central1`).
  - Set up Application Default Credentials (ADC) with `gcloud auth application-default login`, or set `GOOGLE_APPLICATION_CREDENTIALS` to point to your service account key file.
  Provider and model configuration:

  - `PREFERRED_PROVIDER` (optional): Set to `openai` (default), `google`, `vertex`, or `xai`. This determines the primary backend for mapping `haiku`/`sonnet`.
  - `BIG_MODEL` (optional): The model to map `sonnet` requests to. Defaults vary by provider.
  - `SMALL_MODEL` (optional): The model to map `haiku` requests to. Defaults vary by provider.

  Mapping logic:

  - If `PREFERRED_PROVIDER=openai` (default), `haiku`/`sonnet` map to `SMALL_MODEL`/`BIG_MODEL` prefixed with `openai/`.
  - If `PREFERRED_PROVIDER=google`, `haiku`/`sonnet` map to `SMALL_MODEL`/`BIG_MODEL` prefixed with `gemini/`.
  - If `PREFERRED_PROVIDER=vertex`, `haiku`/`sonnet` map to `SMALL_MODEL`/`BIG_MODEL` prefixed with `vertex_ai/`.
  - If `PREFERRED_PROVIDER=xai`, `haiku`/`sonnet` map to `SMALL_MODEL`/`BIG_MODEL` prefixed with `xai/`.
- Run the server:

  ```bash
  uv run uvicorn server:app --host 0.0.0.0 --port 8082 --reload
  ```

  (`--reload` is optional, for development.)
- Install Claude Code (if you haven't already):

  ```bash
  npm install -g @anthropic-ai/claude-code
  ```
- Connect to your proxy:

  ```bash
  ANTHROPIC_BASE_URL=http://localhost:8082 claude
  ```
That's it! Your Claude Code client will now use the configured backend models (OpenAI by default) through the proxy.
The proxy automatically maps Claude models to OpenAI, Gemini, Vertex AI, or xAI models based on the configured provider:
| Claude Model | OpenAI (default) | Gemini | Vertex AI | xAI |
|---|---|---|---|---|
| haiku | openai/gpt-4.1-mini | gemini/gemini-2.0-flash | vertex_ai/gemini-2.0-flash | xai/grok-3-mini-beta |
| sonnet | openai/gpt-4.1 | gemini/gemini-2.5-pro-preview-03-25 | vertex_ai/gemini-2.5-pro-preview-03-25 | xai/grok-3 |
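For concreteness, the alias mapping can be expressed in a few lines of Python. This is a hedged sketch whose defaults mirror the table above; the function and dictionary names are illustrative, not the proxy's actual internals:

```python
import os

# (prefix, default BIG_MODEL, default SMALL_MODEL) per provider,
# mirroring the defaults in the table above.
PROVIDER_DEFAULTS = {
    "openai": ("openai/", "gpt-4.1", "gpt-4.1-mini"),
    "google": ("gemini/", "gemini-2.5-pro-preview-03-25", "gemini-2.0-flash"),
    "vertex": ("vertex_ai/", "gemini-2.5-pro-preview-03-25", "gemini-2.0-flash"),
    "xai": ("xai/", "grok-3", "grok-3-mini-beta"),
}

def map_claude_alias(requested_model: str) -> str:
    """Map an incoming haiku/sonnet request onto the configured backend model."""
    provider = os.environ.get("PREFERRED_PROVIDER", "openai")
    prefix, default_big, default_small = PROVIDER_DEFAULTS[provider]
    if "sonnet" in requested_model:
        return prefix + os.environ.get("BIG_MODEL", default_big)
    if "haiku" in requested_model:
        return prefix + os.environ.get("SMALL_MODEL", default_small)
    return requested_model  # anything else passes through unchanged
```

With no overrides set, `map_claude_alias("claude-3-5-sonnet-20241022")` returns `openai/gpt-4.1`, matching the default column of the table.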
The following OpenAI models are supported with automatic `openai/` prefix handling:
- o3-mini
- o1
- o1-mini
- o1-pro
- gpt-4.5-preview
- gpt-4o
- gpt-4o-audio-preview
- chatgpt-4o-latest
- gpt-4o-mini
- gpt-4o-mini-audio-preview
- gpt-4.1
- gpt-4.1-mini
The following Gemini models are supported with automatic `gemini/` prefix handling:
- gemini-2.5-pro-preview-03-25
- gemini-2.0-flash
The following Vertex AI models are supported with automatic `vertex_ai/` prefix handling:
- gemini-2.5-pro-preview-03-25
- gemini-2.5-flash-preview-04-17
- gemini-2.0-flash
- gemini-1.5-flash-preview-0514
- gemini-1.5-pro-preview-0514
The following xAI models are supported with automatic `xai/` prefix handling:
- grok-3-mini-beta
- grok-3-beta
- grok-2-vision-latest
- grok-2
- grok-1
The proxy automatically adds the appropriate prefix to model names:

- OpenAI models get the `openai/` prefix
- Gemini models get the `gemini/` prefix
- Vertex AI models get the `vertex_ai/` prefix
- xAI models get the `xai/` prefix
- `BIG_MODEL` and `SMALL_MODEL` get the appropriate prefix based on the provider and the model lists above

For example:

- `gpt-4o` becomes `openai/gpt-4o`
- `gemini-2.5-pro-preview-03-25` becomes `gemini/gemini-2.5-pro-preview-03-25` or `vertex_ai/gemini-2.5-pro-preview-03-25`, depending on the provider
- `grok-3` becomes `xai/grok-3`
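A companion sketch of this prefixing step, under the same caveats (the model sets below are abbreviated from the lists above, and the helper name is illustrative):

```python
# Abbreviated versions of the supported-model lists above.
OPENAI_MODELS = {"gpt-4o", "gpt-4o-mini", "gpt-4.1", "gpt-4.1-mini", "o1", "o3-mini"}
GEMINI_MODELS = {"gemini-2.5-pro-preview-03-25", "gemini-2.0-flash"}
XAI_MODELS = {"grok-3", "grok-3-mini-beta", "grok-2"}

def add_provider_prefix(model: str, provider: str) -> str:
    """Prepend the LiteLLM provider prefix to an unprefixed model name."""
    if "/" in model:
        return model  # already prefixed; leave untouched
    if model in OPENAI_MODELS:
        return "openai/" + model
    if model in GEMINI_MODELS:
        # The same Gemini model routes via gemini/ or vertex_ai/,
        # depending on the preferred provider.
        return ("vertex_ai/" if provider == "vertex" else "gemini/") + model
    if model in XAI_MODELS:
        return "xai/" + model
    return model

assert add_provider_prefix("gpt-4o", "openai") == "openai/gpt-4o"
assert add_provider_prefix("gemini-2.0-flash", "vertex") == "vertex_ai/gemini-2.0-flash"
```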
Control the mapping using environment variables in your `.env` file or directly:

Example 1: Default (use OpenAI)

```dotenv
OPENAI_API_KEY="your-openai-key"
# PREFERRED_PROVIDER="openai"  # Optional, it's the default
# BIG_MODEL="gpt-4.1"          # Optional, it's the default
# SMALL_MODEL="gpt-4.1-mini"   # Optional, it's the default
```

Example 2: Use Google AI Studio

```dotenv
GEMINI_API_KEY="your-google-key"
OPENAI_API_KEY="your-openai-key"  # Needed for fallback
PREFERRED_PROVIDER="google"
# BIG_MODEL="gemini-2.5-pro-preview-03-25"  # Optional, it's the default for the Google preference
# SMALL_MODEL="gemini-2.0-flash"            # Optional, it's the default for the Google preference
```

Example 3: Use Vertex AI

```dotenv
VERTEX_PROJECT_ID="your-gcp-project-id"
VERTEX_LOCATION="us-central1"
# Set GOOGLE_APPLICATION_CREDENTIALS or use gcloud auth application-default login
PREFERRED_PROVIDER="vertex"
BIG_MODEL="gemini-2.5-pro-preview-03-25"
SMALL_MODEL="gemini-2.0-flash"
```

Example 4: Use xAI

```dotenv
XAI_API_KEY="your-xai-api-key"
PREFERRED_PROVIDER="xai"
BIG_MODEL="grok-3"
SMALL_MODEL="grok-3-mini-beta"
```

Example 5: Use specific OpenAI models

```dotenv
OPENAI_API_KEY="your-openai-key"
PREFERRED_PROVIDER="openai"
BIG_MODEL="gpt-4o"        # Example specific model
SMALL_MODEL="gpt-4o-mini" # Example specific model
```

This proxy works by:
- Receiving requests in Anthropic's API format
- Translating the requests to the appropriate format via LiteLLM
- Sending the translated request to the selected provider (OpenAI, Gemini, Vertex AI, or xAI)
- Converting the response back to Anthropic format
- Returning the formatted response to the client
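In LiteLLM terms, the middle three steps collapse into a single completion call with the remapped, prefixed model name. The sketch below is simplified rather than the proxy's actual code: `litellm.completion` is the real LiteLLM entry point, the response conversion is abbreviated, and it reuses the illustrative `map_claude_alias` helper from the earlier sketch:

```python
import litellm

def forward_request(anthropic_request: dict) -> dict:
    """Translate an Anthropic-format request, call the backend via LiteLLM,
    and convert the result back into a minimal Anthropic-style shape."""
    # e.g. "claude-3-haiku-..." -> "gemini/gemini-2.0-flash"
    model = map_claude_alias(anthropic_request["model"])

    # LiteLLM accepts OpenAI-style messages and routes on the model prefix.
    response = litellm.completion(
        model=model,
        messages=anthropic_request["messages"],
        max_tokens=anthropic_request.get("max_tokens", 1024),
    )

    return {
        "role": "assistant",
        "content": [{"type": "text", "text": response.choices[0].message.content}],
        "model": model,
    }
```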
The proxy handles both streaming and non-streaming responses, maintaining compatibility with all Claude clients. It also handles provider-specific authentication and configuration requirements.
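On the streaming side, the Anthropic SDK's streaming interface works the same way against the proxy. Another hedged sketch, with the same illustrative setup as the earlier script:

```python
import anthropic

client = anthropic.Anthropic(base_url="http://localhost:8082", api_key="placeholder-key")

# Stream text as it arrives; the proxy translates the backend provider's
# stream back into Anthropic-style server-sent events.
with client.messages.stream(
    model="claude-3-haiku-20240307",  # illustrative alias
    max_tokens=256,
    messages=[{"role": "user", "content": "Count to five."}],
) as stream:
    for text in stream.text_stream:
        print(text, end="", flush=True)
print()
```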
Contributions are welcome! Please feel free to submit a Pull Request.