Qwen OpenAI-Compatible Proxy Server - works with opencode, crush, claude code router, Roo Code, Cline, and most other OpenAI-compatible clients
A proxy server that exposes Qwen models through an OpenAI-compatible API endpoint, with tool calling and streaming support.
New: Web search is now available via both the REST API (/v1/web/search) and an MCP tool!
For a better experience in production, a Cloudflare Worker version is available; see the repo https://github.com/aptdnfapt/qwen-worker-proxy
Users might face errors or 504 Gateway Timeout issues when using contexts of roughly 130,000 to 150,000 tokens or more. This appears to be a practical limit for Qwen models; the qwen-code CLI itself also tends to break down and get stuck at this limit.
Discord server to talk about other stuff.
Docker setup:
- Configure Environment:
  cp .env.example .env # Edit the .env file with your desired configuration
- Build and Run with Docker Compose:
  docker-compose up -d
- Authenticate:
  docker-compose exec qwen-proxy npm run auth:add <account>
- Use the Proxy: point your OpenAI-compatible client to http://localhost:8080/v1
Local development setup:
- Install Dependencies:
  npm install
- Authenticate: You need to authenticate with Qwen to generate the required credentials file.
  - Run npm run auth:add <account> to authenticate with your Qwen account
  - This will create the ~/.qwen/oauth_creds.json file needed by the proxy server
  - Alternatively, you can use the official qwen-code CLI tool from QwenLM/qwen-code
- Start the Server:
  npm start
- Use the Proxy: Point your OpenAI-compatible client to http://localhost:8080/v1.
Note: The API key can be any random string - it doesn't matter for this proxy (unless you configure API key authentication, described below).
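Once the proxy is running, a quick way to verify the setup is to list the models it exposes. Below is a minimal sketch using the official openai npm package; it assumes the default port 8080 and that no API_KEY has been configured (in which case any placeholder key is accepted):

```javascript
// Minimal verification sketch (assumes the default port and no API_KEY configured).
import OpenAI from 'openai';

const openai = new OpenAI({
  apiKey: 'any-random-string',          // ignored unless API_KEY is set in .env
  baseURL: 'http://localhost:8080/v1',
});

async function main() {
  const models = await openai.models.list();
  for (const model of models.data) {
    console.log(model.id);              // e.g. qwen3-coder-plus
  }
}

main();
```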
The proxy supports multiple Qwen accounts to overcome the 2,000 requests per day limit per account. Accounts are automatically rotated when quota limits are reached.
For Docker:
docker-compose exec qwen-proxy npm run auth:list
docker-compose exec qwen-proxy npm run auth:add <account-id>
docker-compose exec qwen-proxy npm run auth:remove <account-id>
For Local Development:
- List existing accounts:
  npm run auth:list
- Add a new account:
  npm run auth:add <account-id>
  Replace <account-id> with a unique identifier for your account (e.g., account2, team-account, etc.)
- Remove an account:
  npm run auth:remove <account-id>
- When you have multiple accounts configured, the proxy will automatically rotate between them
- Each account has a 2,000 request per day limit
- When an account reaches its limit, Qwen's API will return a quota exceeded error
- The proxy detects these quota errors and automatically switches to the next available account
- If a DEFAULT_ACCOUNT is configured, the proxy will use that account first before rotating to others
- Request counts are tracked locally and reset daily at UTC midnight
- You can check request counts for all accounts with:
npm run auth:counts
Monitor your API usage with detailed reports:
# Show comprehensive usage report (chat + web search)
npm run usage
The proxy provides real-time feedback in the terminal:
- Shows which account is being used for each request
- Displays current request count for each account
- Notifies when an account is rotated due to quota limits
- Indicates which account will be tried next during rotation
- Shows which account is configured as the default account on server startup
- Marks the default account in the account list display
The proxy can be secured with API keys to prevent unauthorized access.
- Single API Key:
  API_KEY=your-secret-key-here
- Multiple API Keys:
  API_KEY=key1,key2,key3
- Using the Proxy:
  const openai = new OpenAI({ apiKey: 'your-secret-key-here', baseURL: 'http://localhost:8080/v1' });
Headers Supported:
X-API-Key: your-secret-key
Authorization: Bearer your-secret-key
If no API key is configured, the proxy will not require authentication.
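Outside the OpenAI SDK, the key can be sent with either supported header. Below is a minimal sketch using Node's built-in fetch; it assumes API_KEY=your-secret-key-here is set in .env and the proxy runs on the default port:

```javascript
// Minimal sketch: authenticating with either supported header.
const body = JSON.stringify({
  model: 'qwen3-coder-plus',
  messages: [{ role: 'user', content: 'Hello!' }],
});

async function main() {
  // Option 1: standard OpenAI-style bearer token
  const res1 = await fetch('http://localhost:8080/v1/chat/completions', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json', 'Authorization': 'Bearer your-secret-key-here' },
    body,
  });
  console.log('Bearer auth:', res1.status);

  // Option 2: X-API-Key header
  const res2 = await fetch('http://localhost:8080/v1/chat/completions', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json', 'X-API-Key': 'your-secret-key-here' },
    body,
  });
  console.log('X-API-Key auth:', res2.status);
}

main();
```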
Monitor the proxy status with the health endpoint:
curl http://localhost:8080/health
Response includes:
- Server status
- Account validation status
- Token expiry information
- Request counts
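The same check can be scripted; a minimal sketch is shown below. The exact field names may vary between versions, so the response is printed as-is rather than parsed against a fixed schema:

```javascript
// Minimal health-check sketch (assumes the default port 8080).
async function main() {
  const res = await fetch('http://localhost:8080/health');
  console.log('HTTP status:', res.status);  // 200 when the proxy is up
  console.log(await res.json());            // server status, account validation, token expiry, request counts
}

main();
```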
The proxy server can be configured using environment variables. Create a .env file in the project root or set the variables directly in your environment.
- LOG_FILE_LIMIT: Maximum number of debug log files to keep (default: 20)
- DEBUG_LOG: Set to true to enable debug logging (default: false)
- API_KEY: Set API key(s) for authentication (comma-separated for multiple keys)
- DEFAULT_ACCOUNT: Specify which account the proxy should use by default (when using a multi-account setup)
  - Should match the name used when adding an account with npm run auth:add <name>
  - If not set or invalid, the proxy will use the first available account
Example .env file:
# Keep only the 10 most recent log files
LOG_FILE_LIMIT=10
# Enable debug logging (log files will be created)
DEBUG_LOG=true
# API key for authentication (comma-separated for multiple keys)
API_KEY=your-secret-key-here
# Specify which account to use by default (when using multi-account setup)
# Should match the name used when adding an account with 'npm run auth:add <name>'
DEFAULT_ACCOUNT=my-primary-account

Example usage with the OpenAI client:

import OpenAI from 'openai';
const openai = new OpenAI({
apiKey: 'fake-key', // Not used, but required by the OpenAI client
baseURL: 'http://localhost:8080/v1'
});
async function main() {
const response = await openai.chat.completions.create({
model: 'qwen-coder-plus',
messages: [
{ "role": "user", "content": "Hello!" }
]
});
console.log(response.choices[0].message.content);
}
main();

A non-streaming request with curl:

curl -X POST http://localhost:8080/v1/chat/completions \
-H "Content-Type: application/json" \
-H "Authorization: Bearer fake-key" \
-d '{
"model": "qwen3-coder-plus",
"messages": [
{
"role": "user",
"content": "Hello! Can you help me write a simple JavaScript function that adds two numbers together?"
}
],
"temperature": 0.7,
"max_tokens": 200
}'

A streaming request with curl:

curl -X POST http://localhost:8080/v1/chat/completions \
-H "Content-Type: application/json" \
-H "Authorization: Bearer fake-key" \
-d '{
"model": "qwen3-coder-plus",
"messages": [
{
"role": "user",
"content": "Explain how to reverse a string in JavaScript."
}
],
"stream": true,
"temperature": 0.7,
"max_tokens": 300
}'
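The same streaming request can also be made from JavaScript with the OpenAI SDK. A minimal sketch (assumes the default proxy URL and no configured API key):

```javascript
// Minimal streaming sketch with the openai npm package.
import OpenAI from 'openai';

const openai = new OpenAI({
  apiKey: 'fake-key',                   // ignored unless API_KEY is configured
  baseURL: 'http://localhost:8080/v1',
});

async function main() {
  const stream = await openai.chat.completions.create({
    model: 'qwen3-coder-plus',
    messages: [{ role: 'user', content: 'Explain how to reverse a string in JavaScript.' }],
    stream: true,
  });

  // Each chunk carries an incremental delta; print the text as it arrives.
  for await (const chunk of stream) {
    process.stdout.write(chunk.choices[0]?.delta?.content ?? '');
  }
}

main();
```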
The proxy supports the following Qwen models:
- qwen3-coder-plus - Primary coding model with enhanced capabilities
- qwen3-coder-flash - Faster, lighter coding model for quick responses
- vision-model - Multimodal model with image processing capabilities
Note: Use the exact model name as shown above when configuring your client applications.
- POST /v1/chat/completions - Chat completions (streaming and non-streaming)
- POST /v1/web/search - Web search for real-time information
- GET /v1/models - List available models
- GET/POST /mcp - MCP server endpoint with SSE transport
- GET /health - Health check and status
Free web search endpoint from Qwen - 2000 requests per day for free accounts.
curl -X POST http://localhost:8080/v1/web/search \
-H "Content-Type: application/json" \
-H "Authorization: Bearer fake-key" \
-d '{
"query": "latest AI developments",
"page": 1,
"rows": 5
}'
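The same call from JavaScript, as a minimal fetch sketch mirroring the curl request above (the response body is printed as-is rather than assuming a particular structure):

```javascript
// Minimal web search sketch (assumes the default port; the API key can be any
// string unless API_KEY is configured).
async function main() {
  const res = await fetch('http://localhost:8080/v1/web/search', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json', 'Authorization': 'Bearer fake-key' },
    body: JSON.stringify({ query: 'latest AI developments', page: 1, rows: 5 }),
  });
  console.log(await res.json());
}

main();
```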
The proxy includes built-in MCP server support, allowing it to be used as a remote MCP server with compatible clients like opencode.

To use the MCP server with opencode, add the following to your ~/.config/opencode/config.json:
{
"$schema": "https://opencode.ai/config.json",
"mcp": {
"qwen-web-search": {
"type": "remote",
"url": "http://localhost:8080/mcp",
"headers": {
"Authorization": "Bearer your-api-key"
}
}
}
}

Replace your-api-key with your configured API key if authentication is enabled. If no API key is set (common for local development), omit the headers field entirely.
This provides access to the web_search tool, which uses Qwen's web search API with automatic account rotation. For other MCP client programs and tools, you will need to adapt this JSON to their configuration format.
GET/POST /mcp - MCP server endpoint supporting SSE transport
The MCP server provides a web_search tool that allows searching the web using Qwen's infrastructure. It supports the same API key authentication as the main endpoints.
This proxy server supports tool calling, allowing you to use it with clients such as opencode, crush, Roo Code, Cline, Kilo Code, and others.
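Tool calls follow the standard OpenAI chat-completions format, so any OpenAI-compatible client should work. Below is a minimal sketch using the openai npm package; the get_weather tool and its schema are made-up examples for illustration, not something the proxy itself provides:

```javascript
// Minimal tool-calling sketch. The get_weather tool is a hypothetical example.
import OpenAI from 'openai';

const openai = new OpenAI({
  apiKey: 'fake-key',
  baseURL: 'http://localhost:8080/v1',
});

const tools = [
  {
    type: 'function',
    function: {
      name: 'get_weather',
      description: 'Get the current weather for a city',
      parameters: {
        type: 'object',
        properties: {
          city: { type: 'string', description: 'City name' },
        },
        required: ['city'],
      },
    },
  },
];

async function main() {
  const response = await openai.chat.completions.create({
    model: 'qwen3-coder-plus',
    messages: [{ role: 'user', content: 'What is the weather in Paris?' }],
    tools,
  });

  // If the model decides to call a tool, the call arrives here instead of text content.
  const toolCalls = response.choices[0].message.tool_calls;
  if (toolCalls) {
    for (const call of toolCalls) {
      console.log(call.function.name, call.function.arguments);
    }
  } else {
    console.log(response.choices[0].message.content);
  }
}

main();
```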
To use with opencode, add the following to ~/.config/opencode/opencode.json:
{
"$schema": "https://opencode.ai/config.json",
"provider": {
"myprovider": {
"npm": "@ai-sdk/openai-compatible",
"name": "proxy",
"options": {
"baseURL": "http://localhost:8080/v1"
},
"models": {
"qwen3-coder-plus": {
"name": "qwen3"
}
}
}
}
}

To use with crush, add the following to ~/.config/crush/crush.json:
{
"$schema": "https://charm.land/crush.json",
"providers": {
"proxy": {
"type": "openai",
"base_url": "http://localhost:8080/v1",
"api_key": "",
"models": [
{
"id": "qwen3-coder-plus",
"name": "qwen3-coder-plus",
"cost_per_1m_in": 0.0,
"cost_per_1m_out": 0.0,
"cost_per_1m_in_cached": 0,
"cost_per_1m_out_cached": 0,
"context_window": 150000,
"default_max_tokens": 64000
}
]
}
}
}

To use with claude code router, add the following to its configuration file:

{
"LOG": false,
"Providers": [
{
"name": "qwen-code",
"api_base_url": "http://localhost:8080/v1/chat/completions/",
"api_key": "wdadwa-random-stuff",
"models": ["qwen3-coder-plus"],
"transformer": {
"use": [
[
"maxtoken",
{
"max_tokens": 65536
}
],
"enhancetool",
"cleancache"
]
}
}
],
"Router": {
"default": "qwen-code,qwen3-coder-plus"
}
}

To use with Roo Code, Kilo Code, or Cline:
- Go to settings in the client
- Choose the OpenAI-compatible option
- Set the URL to: http://localhost:8080/v1
- Use a random API key (it doesn't matter)
- Type or choose the model name exactly as: qwen3-coder-plus
- Disable streaming in the checkbox for Roo Code or Kilo Code
- Change the max output setting from -1 to 65000
- You can increase the context window to around 300k, but responses get noticeably slower beyond 150k, so keep that in mind.
The proxy now displays token counts in the terminal for each request, showing both input tokens and API-returned usage statistics (prompt, completion, and total tokens).
The proxy includes comprehensive token usage tracking that monitors daily input and output token consumption across all accounts. View detailed token usage reports with either:
npm run auth:tokens
or
npm run tokens
Both commands display a clean table showing daily token usage trends, lifetime totals, and request counts. For more information, see docs/token-usage-tracking.md.
For more detailed documentation, see the docs/ directory.
For information about configuring a default account, see docs/default-account.md.