Use Anthropic clients (like Claude Code) with Gemini or OpenAI backends. 🤝
A proxy server that lets you use Anthropic clients with Gemini or OpenAI models via LiteLLM. 🌉
This project is forked from: https://github.com/1rgs/claude-code-proxy
- OpenAI API key (if using OpenAI) 🔑
- Google AI Studio (Gemini) API key (if using AI Studio) 🔑
- Google Cloud authentication configured via `gcloud auth application-default login` (if using Vertex AI) ☁️
- `uv` installed
- Clone this repository:

```bash
git clone https://github.com/imsalik/claude-code-proxy.git
cd claude-code-proxy
```
- Install `uv` (if you haven't already):

```bash
curl -LsSf https://astral.sh/uv/install.sh | sh
```

(`uv` will handle dependencies based on `pyproject.toml` when you run the server)
- Configure Environment Variables: Create a `.env` file with your API keys and model configurations:

```bash
cp .env.example .env
```

Edit `.env` and fill in your API keys and model configurations:

- `ANTHROPIC_API_KEY`: (Optional) Needed only if proxying to Anthropic models.
- `OPENAI_API_KEY`: Your OpenAI API key (required if using OpenAI models).
- `GEMINI_API_KEY`: Your Google AI Studio (Gemini) API key (required if using AI Studio Gemini models).
- `PREFERRED_PROVIDER`: Set to `google` (default) or `openai`. This determines the primary backend for mapping Anthropic models.
- `GEMINI_PROVIDER`: Set to `google` (default) to use the Google AI Studio API, or `vertex_ai` to use Vertex AI with gcloud auth. Only applicable when using Gemini models. If using `vertex_ai`, ensure you have authenticated with `gcloud auth application-default login`.
- `BIG_MODEL`: The model to map `sonnet` requests to. Examples: `gpt-4o`, `gemini-2.5-pro-preview-03-25`.
- `SMALL_MODEL`: The model to map `haiku` requests to. Examples: `gpt-4o-mini`, `gemini-2.0-flash`.
**Mapping Logic:**

- The proxy maps `claude-3-haiku-...` to `SMALL_MODEL` and `claude-3-sonnet-...` to `BIG_MODEL`.
- The appropriate provider prefix (`openai/`, `gemini/`, `vertex_ai/`) is added based on the chosen model and `PREFERRED_PROVIDER`.
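The mapping rules above can be sketched in Python. This is a hypothetical illustration of the documented behavior, not the proxy's actual source; the real implementation may differ.

```python
import os


def map_model(anthropic_model: str) -> str:
    """Map an incoming Anthropic model name to a prefixed backend model.

    Sketch of the documented mapping: haiku -> SMALL_MODEL,
    sonnet -> BIG_MODEL, with a provider prefix chosen from
    PREFERRED_PROVIDER and GEMINI_PROVIDER.
    """
    provider = os.environ.get("PREFERRED_PROVIDER", "google")
    gemini_provider = os.environ.get("GEMINI_PROVIDER", "google")

    # Pick the target model based on the Anthropic model family,
    # falling back to the documented per-provider defaults.
    if "haiku" in anthropic_model:
        target = os.environ.get(
            "SMALL_MODEL",
            "gemini-2.0-flash" if provider == "google" else "gpt-4o-mini",
        )
    elif "sonnet" in anthropic_model:
        target = os.environ.get(
            "BIG_MODEL",
            "gemini-2.5-pro-preview-03-25" if provider == "google" else "gpt-4o",
        )
    else:
        # Models outside the haiku/sonnet families pass through unchanged.
        return anthropic_model

    # Add the provider prefix LiteLLM expects.
    if provider == "openai":
        return f"openai/{target}"
    prefix = "vertex_ai" if gemini_provider == "vertex_ai" else "gemini"
    return f"{prefix}/{target}"
```

With the defaults, `map_model("claude-3-haiku-20240307")` resolves to `gemini/gemini-2.0-flash`; setting `PREFERRED_PROVIDER=openai` switches it to `openai/gpt-4o-mini`.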
- Run the server:

```bash
uv run uvicorn server:app --host 0.0.0.0 --port 8082 --reload
```

(`--reload` is optional, for development)
- Install Claude Code (if you haven't already):

```bash
npm install -g @anthropic-ai/claude-code
```
- Connect to your proxy:

```bash
ANTHROPIC_BASE_URL=http://localhost:8082 claude
```
That's it! Your Claude Code client will now use the configured backend models through the proxy. 🎯
Sometimes Claude may continue to use an external service (e.g., Vertex AI) instead of your local server. This happens when your credentials cache is still active.
To force Claude to use your local proxy server, explicitly override the auth token:
```bash
ANTHROPIC_BASE_URL=http://localhost:8082 ANTHROPIC_AUTH_TOKEN="some-api-key" claude
```

This command uses a dummy authentication token and ensures that Claude only connects to your local proxy.
The proxy automatically maps Claude models to OpenAI or Gemini models based on your configuration:
| Anthropic Model Family | PREFERRED_PROVIDER | GEMINI_PROVIDER | Mapped To | Default Target Model | Prefix Added |
|---|---|---|---|---|---|
| `claude-3-haiku-...` | `google` (default) | `google` (default) | `SMALL_MODEL` | `gemini-2.0-flash` | `gemini/` |
| `claude-3-sonnet-...` | `google` (default) | `google` (default) | `BIG_MODEL` | `gemini-2.5-pro-preview-03-25` | `gemini/` |
| `claude-3-haiku-...` | `google` (default) | `vertex_ai` | `SMALL_MODEL` | `gemini-2.0-flash` | `vertex_ai/` |
| `claude-3-sonnet-...` | `google` (default) | `vertex_ai` | `BIG_MODEL` | `gemini-2.5-pro-preview-03-25` | `vertex_ai/` |
| `claude-3-haiku-...` | `openai` | (not applicable) | `SMALL_MODEL` | `gpt-4o-mini` | `openai/` |
| `claude-3-sonnet-...` | `openai` | (not applicable) | `BIG_MODEL` | `gpt-4o` | `openai/` |
You can override the default target models using the `BIG_MODEL` and `SMALL_MODEL` environment variables.
The following OpenAI models are supported with automatic `openai/` prefix handling:
- o3-mini
- o1
- o1-mini
- o1-pro
- gpt-4.5-preview
- gpt-4o
- gpt-4o-audio-preview
- chatgpt-4o-latest
- gpt-4o-mini
- gpt-4o-mini-audio-preview
The following Gemini models are supported with automatic prefix handling:
- gemini-2.5-pro-preview-03-25
- gemini-2.5-pro-exp-03-25
- gemini-2.0-flash
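The automatic prefix handling for the model names listed above can be sketched as follows. This is an assumed illustration of the documented behavior, not the proxy's actual source:

```python
# Supported model names from the lists above, assumed to drive prefixing.
OPENAI_MODELS = {
    "o3-mini", "o1", "o1-mini", "o1-pro", "gpt-4.5-preview", "gpt-4o",
    "gpt-4o-audio-preview", "chatgpt-4o-latest", "gpt-4o-mini",
    "gpt-4o-mini-audio-preview",
}
GEMINI_MODELS = {
    "gemini-2.5-pro-preview-03-25", "gemini-2.5-pro-exp-03-25",
    "gemini-2.0-flash",
}


def add_prefix(model: str, gemini_provider: str = "google") -> str:
    """Prepend the LiteLLM provider prefix for a known model name."""
    if model.startswith(("openai/", "gemini/", "vertex_ai/")):
        return model  # already prefixed, leave untouched
    if model in OPENAI_MODELS:
        return f"openai/{model}"
    if model in GEMINI_MODELS:
        # GEMINI_PROVIDER selects AI Studio (gemini/) vs Vertex AI.
        prefix = "vertex_ai" if gemini_provider == "vertex_ai" else "gemini"
        return f"{prefix}/{model}"
    return model  # unknown models pass through unchanged
```

For example, `add_prefix("gpt-4o")` yields `openai/gpt-4o`, while `add_prefix("gemini-2.0-flash", "vertex_ai")` yields `vertex_ai/gemini-2.0-flash`.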
You can customize which models are used via environment variables in your `.env` file:

```dotenv
# For OpenAI models
PREFERRED_PROVIDER=openai
OPENAI_API_KEY=sk-...
BIG_MODEL=gpt-4o
SMALL_MODEL=gpt-4o-mini

# For Gemini models (AI Studio)
# PREFERRED_PROVIDER=google
# GEMINI_API_KEY=your-ai-studio-key
# BIG_MODEL=gemini-2.5-pro-preview-03-25
# SMALL_MODEL=gemini-2.0-flash

# For Vertex AI models
# PREFERRED_PROVIDER=google
# GEMINI_PROVIDER=vertex_ai
# # Ensure gcloud auth is configured
# BIG_MODEL=gemini-2.5-pro-preview-03-25
# SMALL_MODEL=gemini-2.0-flash
```
Or set them directly when running the server:

```bash
# Using OpenAI models (with uv)
PREFERRED_PROVIDER=openai OPENAI_API_KEY=sk-... BIG_MODEL=gpt-4o SMALL_MODEL=gpt-4o-mini uv run uvicorn server:app --host 0.0.0.0 --port 8082
```

This proxy works by:
- Receiving requests in Anthropic's API format 📥
- Translating the requests to OpenAI/Gemini format via LiteLLM 🔄
- Sending the translated request to the selected provider 📤
- Converting the response back to Anthropic format 🔄
- Returning the formatted response to the client ✅
The proxy handles both streaming and non-streaming responses, maintaining compatibility with all Claude clients. 🌊
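To make the translation step concrete, here is a hypothetical, much-simplified sketch of converting an Anthropic-style Messages request body into the OpenAI-style chat shape. In the real proxy this conversion is handled by LiteLLM, which covers many more fields (tools, streaming, stop sequences, etc.):

```python
def anthropic_to_openai(body: dict) -> dict:
    """Convert a minimal Anthropic Messages request to OpenAI chat format.

    Illustrative sketch only: Anthropic carries the system prompt as a
    top-level field, while OpenAI expects it as the first chat message.
    """
    messages = []
    if "system" in body:
        messages.append({"role": "system", "content": body["system"]})
    messages.extend(body.get("messages", []))
    return {
        "model": body["model"],
        "messages": messages,
        "max_tokens": body.get("max_tokens", 1024),
    }
```

The reverse direction (step 4) does the analogous restructuring on the response before it is returned to the client.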