Use Anthropic clients (like Claude Code) with Gemini, OpenAI, or direct Anthropic backends. 🤝
A proxy server that lets you use Anthropic clients with Gemini, OpenAI, or Anthropic models themselves (a transparent proxy of sorts), all via LiteLLM. 🌉
- OpenAI API key (required for the default OpenAI provider, or as a fallback) 🔑
- Google AI Studio (Gemini) API key (if using Google provider) 🔑
- `uv` installed.
1. Clone this repository:

   ```bash
   git clone https://github.com/1rgs/claude-code-proxy.git
   cd claude-code-proxy
   ```

2. Install uv (if you haven't already):

   ```bash
   curl -LsSf https://astral.sh/uv/install.sh | sh
   ```

   (`uv` will handle dependencies based on `pyproject.toml` when you run the server.)

3. Configure Environment Variables: Copy the example environment file:

   ```bash
   cp .env.example .env
   ```

   Edit `.env` and fill in your API keys and model configurations:

   - `ANTHROPIC_API_KEY`: (Optional) Needed only if proxying to Anthropic models.
   - `OPENAI_API_KEY`: Your OpenAI API key (required if using the default OpenAI preference or as a fallback).
   - `GEMINI_API_KEY`: Your Google AI Studio (Gemini) API key (required if `PREFERRED_PROVIDER=google`).
   - `PREFERRED_PROVIDER` (Optional): Set to `openai` (default), `google`, or `anthropic`. This determines the primary backend for mapping `haiku`/`sonnet`.
   - `BIG_MODEL` (Optional): The model to map `sonnet` requests to. Defaults to `gpt-4.1` (if `PREFERRED_PROVIDER=openai`) or `gemini-2.5-pro-preview-03-25`. Ignored when `PREFERRED_PROVIDER=anthropic`.
   - `SMALL_MODEL` (Optional): The model to map `haiku` requests to. Defaults to `gpt-4.1-mini` (if `PREFERRED_PROVIDER=openai`) or `gemini-2.0-flash`. Ignored when `PREFERRED_PROVIDER=anthropic`.
   Mapping Logic:

   - If `PREFERRED_PROVIDER=openai` (default), `haiku`/`sonnet` map to `SMALL_MODEL`/`BIG_MODEL` prefixed with `openai/`.
   - If `PREFERRED_PROVIDER=google`, `haiku`/`sonnet` map to `SMALL_MODEL`/`BIG_MODEL` prefixed with `gemini/` if those models are in the server's known `GEMINI_MODELS` list (otherwise it falls back to the OpenAI mapping).
   - If `PREFERRED_PROVIDER=anthropic`, `haiku`/`sonnet` requests are passed directly to Anthropic with the `anthropic/` prefix, without remapping to different models.

   (A short code sketch of this mapping logic appears after the prefixing examples further below.)
4. Run the server:

   ```bash
   uv run uvicorn server:app --host 0.0.0.0 --port 8082 --reload
   ```

   (`--reload` is optional, for development.)
If using Docker, download the example environment file to `.env` and edit it as described above:

```bash
curl -o .env https://raw.githubusercontent.com/1rgs/claude-code-proxy/refs/heads/main/.env.example
```

Then you can either start the container with Docker Compose (preferred):
```yaml
services:
  proxy:
    image: ghcr.io/1rgs/claude-code-proxy:latest
    restart: unless-stopped
    env_file: .env
    ports:
      - 8082:8082
```

Or with a single `docker run` command:
```bash
docker run -d --env-file .env -p 8082:8082 ghcr.io/1rgs/claude-code-proxy:latest
```
5. Install Claude Code (if you haven't already):

   ```bash
   npm install -g @anthropic-ai/claude-code
   ```

6. Connect to your proxy:

   ```bash
   ANTHROPIC_BASE_URL=http://localhost:8082 claude
   ```

That's it! Your Claude Code client will now use the configured backend models (OpenAI by default) through the proxy. 🎯
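If you want a quick programmatic sanity check, the sketch below points the official Anthropic Python SDK at the proxy. This is illustrative only: the model alias and prompt are arbitrary, and the placeholder API key assumes the proxy authenticates to the backends with the keys from your `.env` rather than the client's key.

```python
# Quick check that the proxy answers Anthropic-format requests.
# Assumes `pip install anthropic` and the server running on localhost:8082.
import anthropic

client = anthropic.Anthropic(
    base_url="http://localhost:8082",
    api_key="placeholder",  # assumption: backend keys come from the proxy's .env
)

message = client.messages.create(
    model="claude-3-haiku-20240307",  # alias remapped by the proxy
    max_tokens=100,
    messages=[{"role": "user", "content": "Say hello!"}],
)
print(message.content[0].text)
```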
The proxy automatically maps Claude model aliases to OpenAI or Gemini models based on your configuration:
| Claude Model | Default Mapping | When BIG_MODEL/SMALL_MODEL is a Gemini model |
|---|---|---|
| haiku | openai/gpt-4.1-mini | gemini/[model-name] |
| sonnet | openai/gpt-4.1 | gemini/[model-name] |
The following OpenAI models are supported with automatic `openai/` prefix handling:
- o3-mini
- o1
- o1-mini
- o1-pro
- gpt-4.5-preview
- gpt-4o
- gpt-4o-audio-preview
- chatgpt-4o-latest
- gpt-4o-mini
- gpt-4o-mini-audio-preview
- gpt-4.1
- gpt-4.1-mini
The following Gemini models are supported with automatic `gemini/` prefix handling:
- gemini-2.5-pro-preview-03-25
- gemini-2.0-flash
The proxy automatically adds the appropriate prefix to model names:
- OpenAI models get the `openai/` prefix
- Gemini models get the `gemini/` prefix
- `BIG_MODEL` and `SMALL_MODEL` get the appropriate prefix based on whether they appear in the OpenAI or Gemini model lists
For example:
- `gpt-4o` becomes `openai/gpt-4o`
- `gemini-2.5-pro-preview-03-25` becomes `gemini/gemini-2.5-pro-preview-03-25`
- When `BIG_MODEL` is set to a Gemini model, Claude Sonnet will map to `gemini/[model-name]`
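As referenced earlier, here is a minimal Python sketch of how this alias-and-prefix mapping could work. The names (`map_model`, `OPENAI_MODELS`, `GEMINI_MODELS`) are illustrative assumptions rather than the server's actual identifiers, and the model lists are abbreviated:

```python
import os

# Abbreviated stand-ins for the supported-model lists above.
OPENAI_MODELS = {"gpt-4.1", "gpt-4.1-mini", "gpt-4o", "gpt-4o-mini"}
GEMINI_MODELS = {"gemini-2.5-pro-preview-03-25", "gemini-2.0-flash"}

def map_model(requested: str) -> str:
    provider = os.environ.get("PREFERRED_PROVIDER", "openai")
    # Defaults shown for the OpenAI preference; the server uses the
    # Gemini defaults when PREFERRED_PROVIDER=google.
    big = os.environ.get("BIG_MODEL", "gpt-4.1")
    small = os.environ.get("SMALL_MODEL", "gpt-4.1-mini")

    # Anthropic passthrough mode: keep the requested model, just add the prefix.
    if provider == "anthropic":
        return f"anthropic/{requested}"

    # Resolve the Claude alias to the configured target model.
    if "sonnet" in requested:
        target = big
    elif "haiku" in requested:
        target = small
    else:
        target = requested

    # Prefix based on which known-model list the target appears in;
    # Gemini targets outside the known list fall back to the OpenAI mapping.
    if target in GEMINI_MODELS:
        return f"gemini/{target}"
    return f"openai/{target}"

print(map_model("claude-3-5-sonnet-20241022"))  # -> "openai/gpt-4.1" by default
```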
Control the mapping using environment variables in your `.env` file or directly:

Example 1: Default (Use OpenAI)

No changes needed in `.env` beyond API keys, or ensure:

```dotenv
OPENAI_API_KEY="your-openai-key"
GEMINI_API_KEY="your-google-key" # Needed if PREFERRED_PROVIDER=google
# PREFERRED_PROVIDER="openai" # Optional, it's the default
# BIG_MODEL="gpt-4.1" # Optional, it's the default
# SMALL_MODEL="gpt-4.1-mini" # Optional, it's the default
```

Example 2: Prefer Google
```dotenv
GEMINI_API_KEY="your-google-key"
OPENAI_API_KEY="your-openai-key" # Needed for fallback
PREFERRED_PROVIDER="google"
# BIG_MODEL="gemini-2.5-pro-preview-03-25" # Optional, it's the default for Google pref
# SMALL_MODEL="gemini-2.0-flash" # Optional, it's the default for Google pref
```

Example 3: Use Direct Anthropic ("Just an Anthropic Proxy" Mode)
```dotenv
ANTHROPIC_API_KEY="sk-ant-..."
PREFERRED_PROVIDER="anthropic"
# BIG_MODEL and SMALL_MODEL are ignored in this mode
# haiku/sonnet requests are passed directly to Anthropic models
```

Use case: This mode enables you to use the proxy infrastructure (for logging, middleware, request/response processing, etc.) while still using actual Anthropic models, rather than being forced to remap to OpenAI or Gemini.
Example 4: Use Specific OpenAI Models
```dotenv
OPENAI_API_KEY="your-openai-key"
GEMINI_API_KEY="your-google-key"
PREFERRED_PROVIDER="openai"
BIG_MODEL="gpt-4o" # Example specific model
SMALL_MODEL="gpt-4o-mini" # Example specific model
```

This proxy works by:
- Receiving requests in Anthropic's API format 📥
- Translating the requests to an OpenAI-compatible format via LiteLLM 🔄
- Sending the translated request to the configured backend (OpenAI, Gemini, or Anthropic) 📤
- Converting the response back to Anthropic format 🔄
- Returning the formatted response to the client ✅
The proxy handles both streaming and non-streaming responses, maintaining compatibility with all Claude clients. 🌊
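For intuition, here is a heavily simplified sketch of that flow, assuming FastAPI and LiteLLM. The real `server.py` also handles streaming (SSE), tool use, and richer content blocks; the endpoint shape and the `map_model` helper below are illustrative assumptions, not the actual implementation:

```python
# Minimal sketch: accept an Anthropic-style body, remap the model,
# call the backend via LiteLLM, and return an Anthropic-style reply.
from fastapi import FastAPI, Request
import litellm

app = FastAPI()

def map_model(name: str) -> str:
    # Stand-in for the alias mapping sketched earlier.
    return "openai/gpt-4.1" if "sonnet" in name else "openai/gpt-4.1-mini"

@app.post("/v1/messages")
async def messages(request: Request):
    body = await request.json()
    model = map_model(body["model"])

    # LiteLLM speaks the OpenAI chat format, so pass the Anthropic-style
    # messages through as chat messages (the real translation is richer).
    response = await litellm.acompletion(
        model=model,
        messages=[{"role": m["role"], "content": m["content"]}
                  for m in body["messages"]],
        max_tokens=body.get("max_tokens", 1024),
    )

    # Convert the OpenAI-style response back into Anthropic's message shape.
    return {
        "id": response.id,
        "type": "message",
        "role": "assistant",
        "model": body["model"],
        "content": [{"type": "text", "text": response.choices[0].message.content}],
        "stop_reason": "end_turn",
    }
```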
Contributions are welcome! Please feel free to submit a Pull Request. 🎁
