4 AI providers. 1 local server. No API keys.
Use ChatGPT, Claude, Gemini & Perplexity directly inside your coding tools — through your existing accounts.
Getting Started · CLI · REST API · WebSocket · SDKs · MCP Tools
App Demo · CLI · Webhook Live Chat & Battle · Application Overview
proxima-demo.mp4 · proxima-cli.mp4 · proxima-live-chat.mp4 · proxima-overview.mp4
Proxima is a local AI gateway that connects multiple AI providers to your development environment. It communicates with each provider at the browser level through your active login sessions — the same way you'd chat with them in your browser.
| Feature | Description |
|---|---|
| One Endpoint | Everything through `/v1/chat/completions` — no separate URLs |
| 4 AI Providers | ChatGPT, Claude, Gemini, Perplexity — any model, any task |
| Provider Engines | Native browser-level communication — 3–10x faster, more reliable |
| CLI Tool | `proxima ask`, `proxima fix`, `proxima debate` — right from your terminal |
| WebSocket | Real-time streaming at `ws://localhost:3210/ws` |
| 45+ MCP Tools | Search, code, translate, analyze, debate, audit — all via MCP |
| REST API | OpenAI-compatible API on `localhost:3210` |
| SDKs | Python & JavaScript — one function each |
| Smart Router | Auto-picks the best available AI for your query |
| No API Keys | Uses your existing browser sessions — see how it works |
| Local & Private | Runs on `127.0.0.1`; data goes only to providers you're logged into |
- **Provider Engine System** — Proxima now uses native browser-level communication with AI providers, no DOM scraping. Responses are 3–10x faster and far more stable, with SSE streaming support and automatic fallback mechanisms.
- **CLI Tool** — run `proxima ask`, `proxima fix`, `proxima debate` from any terminal. Pipe errors straight from your build output. Supports file context, git diff piping, and JSON output for scripts.
- **WebSocket Server** — real-time streaming AI at `ws://localhost:3210/ws`. Bidirectional communication with status updates, request tracking, and keepalive. Useful for apps, scripts, anything that needs live output.
- **15 New MCP Tools** — `chain_query`, `solve`, `debate`, `security_audit`, `verify`, `fix_error`, `build_architecture`, `write_tests`, `explain_error`, `convert_code`, `ask_selected`, `conversation_export`, `ask_perplexity`, `github_search`, `get_ui_reference`
- **Interactive API Docs** — live documentation at `/docs`, `/cli`, and `/ws`, with a working chat widget to test queries directly in your browser.
- **Multi-Model Queries** — `model: "all"` queries every provider at once; `model: ["claude", "chatgpt"]` targets specific ones. Compare responses side by side from multiple AI providers in a single request.
- **Conversation Export** — export full conversation history from any provider using `conversation_export`. Continue working on AI agent projects, revisit ideas discussed with providers, and build on previous plans without losing context.
- **New REST API Functions** — new `security_audit` and `debate` functions added to the REST API endpoint. File upload support via a `file` field in the request body.
Bug fixes & improvements:
- Staggered multi-provider queries — prevents UI freezes during parallel requests
- Smart provider selection — routes coding tasks to Claude, research to Perplexity
- Response caching with TTL (5 min) and automatic eviction (max 100 entries)
- Rate limit handling — detects 429 responses, auto-recovery on expired sessions
- Engine auto-injection on page navigation with duplicate guard
- Claude conversation auto-recovery (handles 404/410 expired sessions)
- ChatGPT SHA3-512 proof-of-work challenge solver
- 10MB body size limit on REST API with CORS headers
- Socket leak prevention on IPC reconnect
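The caching policy described above (5-minute TTL, eviction once 100 entries are stored) can be sketched roughly like this. This is an illustrative Python sketch, not Proxima's actual implementation, which lives in the Electron main process:

```python
import time
from collections import OrderedDict

class ResponseCache:
    """Tiny TTL cache mirroring the described policy: entries expire
    after `ttl` seconds, and the oldest entry is evicted once
    `max_entries` are stored. Sketch only."""

    def __init__(self, ttl=300, max_entries=100):
        self.ttl = ttl
        self.max_entries = max_entries
        self._store = OrderedDict()  # key -> (expiry_timestamp, value)

    def get(self, key):
        entry = self._store.get(key)
        if entry is None:
            return None
        expiry, value = entry
        if time.monotonic() > expiry:   # expired: drop and report a miss
            del self._store[key]
            return None
        return value

    def put(self, key, value):
        if len(self._store) >= self.max_entries:
            self._store.popitem(last=False)  # evict the oldest entry
        self._store[key] = (time.monotonic() + self.ttl, value)

cache = ResponseCache(max_entries=2)
cache.put("q1", "answer 1")
cache.put("q2", "answer 2")
cache.put("q3", "answer 3")   # at capacity: evicts the oldest ("q1")
print(cache.get("q1"))        # → None
print(cache.get("q3"))        # → answer 3
```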
- Node.js 18+ (for MCP server and CLI)
- Windows 10/11 — pre-built installer available
- macOS / Linux — supported via source code
**Download Installer (Windows):** Download the latest release and run the installer.

**Run from Source (Windows / macOS / Linux):**

```bash
git clone https://github.com/Zen4-bit/Proxima.git
cd Proxima
npm install
npm start
```
Electron will open the Proxima window. Log in to your AI providers, enable REST API in Settings, and you're ready.
CLI install:
- Windows: Settings → Install CLI to PATH, or `npm link`
- macOS / Linux: `npm link` (may need `sudo npm link`)
- Open Proxima and log into your AI providers (one-time setup)
- Go to Settings → MCP Configuration → copy the config
- Paste into your editor's MCP config file:
```json
{
  "mcpServers": {
    "proxima": {
      "command": "node",
      "args": ["C:/path/to/Proxima/src/mcp-server-v3.js"]
    }
  }
}
```
- Restart your editor. The tools will appear.
Tip: Use the copy button in Settings — don't type the path manually.
Works with: Cursor · VS Code (MCP extension) · Claude Desktop · Windsurf · Gemini CLI · any MCP-compatible client
| Provider | Description |
|---|---|
| ChatGPT | OpenAI's GPT |
| Claude | Anthropic's Claude |
| Gemini | Google's Gemini |
| Perplexity | Web search & research |
Each provider runs through a dedicated engine script that streams responses via SSE using your existing login session.

In v4.1.0, Proxima uses a Provider Engine System instead of DOM scraping. When you send a query, Proxima runs a lightweight engine script inside the provider's browser tab. That script handles communication at the browser level and streams the response back via SSE. If the engine fails for any reason, Proxima automatically falls back to DOM-based interaction — so it keeps working either way.
```
Your editor → MCP tool call → Proxima local server
        ↓
Engine injected into session
        ↓
Browser-level communication (SSE stream)
        ↓
Response returned
```
| Engine | Provider | How it works |
|---|---|---|
| `chatgpt-engine.js` | ChatGPT | Handles proof-of-work challenges, streams via SSE |
| `claude-engine.js` | Claude | Org-level auth handling, SSE streaming, auto-recovery |
| `gemini-engine.js` | Gemini | SSE streaming with auto-reconnect |
| `perplexity-engine.js` | Perplexity | SSE streaming |
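The SSE streams these engines consume are newline-delimited `data:` events. As a rough illustration of the framing (this is a generic sketch, not Proxima's actual engine code, which parses provider-specific JSON payloads incrementally):

```python
def parse_sse(raw: str):
    """Split a raw SSE stream into the payloads of its `data:` events.

    Illustrative only: real engines read the stream incrementally and
    decode each provider's JSON payload format.
    """
    events = []
    for line in raw.splitlines():
        if line.startswith("data:"):
            payload = line[len("data:"):].strip()
            if payload and payload != "[DONE]":  # skip the end-of-stream marker
                events.append(payload)
    return events

chunks = parse_sse("data: Hel\n\ndata: lo\n\ndata: [DONE]\n")
print("".join(chunks))  # → Hello
```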
The proxima CLI lets you use any AI provider from your terminal.
| Install method | How |
|---|---|
| From the app | Settings → Install CLI to PATH |
| From source | `npm link` (Windows), `sudo npm link` (macOS / Linux) |
| Without installing | `npm run cli -- ask "question"` |
```bash
# Ask any provider
proxima ask "How does async/await work in JS?"
proxima ask claude "Review this approach"
proxima ask chatgpt "Explain this error"

# Search
proxima search "latest Node.js release"

# Code
proxima code "REST API with Express and JWT auth"
proxima code review "function fetchUser(id) { ... }"
proxima code explain "async/await"

# Smart tools
proxima fix "SyntaxError: Unexpected token '<'"
proxima debate "tabs vs spaces"
proxima audit "SELECT * FROM users WHERE id=" + req.query.id
proxima brainstorm "features for a dev productivity tool"

# Translate
proxima translate "Hello world" --to Hindi

# Compare all providers
proxima compare "Bun vs Node.js for production"

# Utilities
proxima status    # server status
proxima stats     # response time stats
proxima models    # list available providers
proxima new       # reset all conversations
```

Piping and file context:

```bash
# Fix build errors directly
npm run build 2>&1 | proxima fix

# Review a git diff
git diff | proxima code review

# Pass file as context
proxima ask "What does this do?" --file src/server.js
```

| Flag | What it does |
|---|---|
| `-m` / `--model` | Override provider (`claude`, `chatgpt`, `gemini`, `perplexity`, `auto`) |
| `--json` | Raw JSON output for scripting |
| `-l` / `--lang` | Specify code language |
| `--file` | Include a file as context |
| `--to` | Target language for translate |
| `--from` | Source language for translate |
Proxima runs an OpenAI-compatible REST API at http://localhost:3210.
Enable it in Settings → REST API & CLI.
POST /v1/chat/completions — OpenAI-compatible chat
GET /v1/models — List available models
GET /v1/functions — API function catalog with examples
GET /v1/stats — Response time stats per provider
POST /v1/conversations/new — Reset all conversations
GET /api/status — Server status
GET /docs — Interactive API docs (with live chat widget)
GET /cli — CLI documentation
GET /ws — WebSocket documentation
The `function` field controls what happens. No `function` = normal chat.
| Function | Body Fields | What it does |
|---|---|---|
| (none) | `model`, `message` | Normal chat |
| `"search"` | `model`, `message`, `function` | Web search + AI analysis |
| `"translate"` | `model`, `message`, `function`, `to` | Translate text |
| `"brainstorm"` | `model`, `message`, `function` | Generate ideas |
| `"code"` | `model`, `message`, `function`, `action` | Code generate/review/debug/explain |
| `"analyze"` | `model`, `function`, `url` | Analyze URL or content |
| `"security_audit"` | `model`, `code`, `function` | Scan code for vulnerabilities |
| `"debate"` | `model`, `message`, `function` | Multi-perspective debate |
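As a sketch of how these bodies compose (the `build_body` helper is hypothetical, written here for illustration; the field names follow the table above):

```python
def build_body(model, message=None, function=None, **extra):
    """Assemble a request body for POST /v1/chat/completions.

    Hypothetical helper, not part of Proxima: it just collects the
    fields listed in the function table into one JSON-ready dict.
    """
    body = {"model": model}
    if message is not None:
        body["message"] = message
    if function is not None:
        body["function"] = function
    body.update(extra)  # e.g. to="Hindi", action="generate", code="..."
    return body

# Normal chat vs. a translate call:
print(build_body("claude", "What is AI?"))
print(build_body("gemini", "Hello world", "translate", to="Hindi"))
```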
Chat:

```bash
curl http://localhost:3210/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"model": "claude", "message": "What is AI?"}'
```

Search:

```bash
curl http://localhost:3210/v1/chat/completions \
  -d '{"model": "perplexity", "message": "AI news 2026", "function": "search"}'
```

Translate:

```bash
curl http://localhost:3210/v1/chat/completions \
  -d '{"model": "gemini", "message": "Hello world", "function": "translate", "to": "Hindi"}'
```

Code Generate:

```bash
curl http://localhost:3210/v1/chat/completions \
  -d '{"model": "claude", "message": "Sort algorithm", "function": "code", "action": "generate", "language": "Python"}'
```

Query All Providers:

```bash
curl http://localhost:3210/v1/chat/completions \
  -d '{"model": "all", "message": "Explain quantum computing"}'
```

Security Audit:

```bash
curl http://localhost:3210/v1/chat/completions \
  -d '{"model": "claude", "function": "security_audit", "code": "db.query(\"SELECT * FROM users WHERE id=\" + req.query.id)"}'
```

Model selection:

```
model: "all"                   // all enabled providers
model: ["claude", "chatgpt"]   // specific providers
```

Response format:

```json
{
  "id": "proxima-abc123",
  "model": "claude",
  "choices": [{
    "message": {
      "role": "assistant",
      "content": "AI response here..."
    }
  }],
  "proxima": {
    "provider": "claude",
    "responseTimeMs": 2400
  }
}
```

When using `model: "all"`, each provider gets its own entry in `choices[]`.
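Since multi-provider responses pack one entry per provider into `choices[]`, collecting the answers is a one-liner. A sketch based on the documented response shape (exact per-choice metadata may differ; `extract_answers` is an illustrative name, not a Proxima API):

```python
def extract_answers(response: dict) -> list:
    """Collect assistant message contents from a /v1/chat/completions response.

    With model: "all", each provider contributes one entry in choices[],
    so this returns one answer per provider.
    """
    return [c["message"]["content"] for c in response.get("choices", [])]

resp = {
    "id": "proxima-abc123",
    "model": "all",
    "choices": [
        {"message": {"role": "assistant", "content": "Answer from Claude"}},
        {"message": {"role": "assistant", "content": "Answer from ChatGPT"}},
    ],
}
print(extract_answers(resp))  # → ['Answer from Claude', 'Answer from ChatGPT']
```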
```python
from proxima import Proxima

client = Proxima()

# Chat — any model
response = client.chat("Hello", model="claude")
response = client.chat("Hello", model="chatgpt")
response = client.chat("Hello")  # auto picks best

print(response.text)
print(response.response_time_ms)

# Search
result = client.chat("AI news 2026",
                     model="perplexity", function="search")

# Translate
hindi = client.chat("Hello world",
                    model="gemini", function="translate",
                    to="Hindi")

# Code
code = client.chat("Sort algorithm",
                   model="claude", function="code",
                   action="generate", language="Python")

# System
models = client.get_models()
stats = client.get_stats()
client.new_conversation()
```
```javascript
const { Proxima } = require('./sdk/proxima');

const client = new Proxima();

// Chat — any model
const res = await client.chat("Hello",
  { model: "claude" });
console.log(res.text);

// Search
const news = await client.chat("AI news",
  { model: "perplexity",
    function: "search" });

// Translate
const hindi = await client.chat("Hello",
  { model: "gemini",
    function: "translate",
    to: "Hindi" });

// Code generate
const code = await client.chat("Sort algo",
  { model: "claude",
    function: "code",
    action: "generate" });

// System
const models = await client.getModels();
const stats = await client.getStats();
```

Works with Node.js 18+ (native `fetch`).
Custom configuration:

```python
client = Proxima(base_url="http://192.168.1.100:3210")  # custom URL
client = Proxima(default_model="claude")                # default model
```

Real-time streaming AI at `ws://localhost:3210/ws`.
Requires REST API to be enabled in Settings.
```javascript
const ws = new WebSocket("ws://localhost:3210/ws");

ws.send(JSON.stringify({
  action: "ask",
  model: "claude",
  message: "What is a closure?",
  id: "req_1"
}));

ws.onmessage = (e) => {
  const msg = JSON.parse(e.data);
  // { type: "status",   id: "req_1", status: "processing", model: "claude" }
  // { type: "response", id: "req_1", model: "claude", content: "...", responseTimeMs: 2400 }
};
```

| Action | What it does |
|---|---|
| `ask` / `chat` | Chat with any provider |
| `search` | Web search |
| `code` | generate / review / explain / optimize / debug |
| `translate` | Translate text |
| `brainstorm` | Generate ideas |
| `debate` | Multi-provider debate (queries all providers) |
| `audit` | Security code audit |
| `new_conversation` | Reset conversation context for all providers |
| `stats` | Connection and provider statistics |
| `ping` | Keepalive — returns `pong` |
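A client consuming this socket needs to route incoming messages by their `type` field. A minimal dispatcher sketch in Python (the message shapes follow the examples above; the `handle_message` helper and its return strings are mine, for illustration only):

```python
import json

def handle_message(raw: str, results: dict) -> str:
    """Route a WebSocket message by its `type` field.

    Sketch only: stores completed responses under their request id
    and reports status updates as human-readable strings.
    """
    msg = json.loads(raw)
    if msg["type"] == "response":
        results[msg["id"]] = msg["content"]
        return f"{msg['id']} done in {msg['responseTimeMs']}ms"
    if msg["type"] == "status":
        return f"{msg['id']} is {msg['status']}"
    return "ignored"

results = {}
print(handle_message(
    '{"type": "status", "id": "req_1", "status": "processing", "model": "claude"}',
    results))  # → req_1 is processing
print(handle_message(
    '{"type": "response", "id": "req_1", "model": "claude", "content": "...", "responseTimeMs": 2400}',
    results))  # → req_1 done in 2400ms
```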
| Tool | What it does |
|---|---|
| `ask_chatgpt` | Query ChatGPT (supports file upload) |
| `ask_claude` | Query Claude (supports file upload) |
| `ask_gemini` | Query Gemini (supports file upload) |
| `ask_perplexity` | Query Perplexity (supports file upload) |
| `ask_all_ais` | Send the same query to all providers at once |
| `ask_selected` | Pick specific providers to query |
| `compare_ais` | Get and compare responses side by side |
| `smart_query` | Auto-picks the best provider, falls back if one fails |
| Tool | What it does |
|---|---|
| `solve` | One-shot problem solver — senior-engineer level |
| `fix_error` | Root cause + exact fix for any error |
| `build_architecture` | Full project architecture blueprint |
| `write_tests` | Generate tests (jest / vitest / mocha / pytest) |
| `explain_error` | Error explained in plain terms, no jargon |
| `convert_code` | Convert code between languages or frameworks |
| Tool | What it does |
|---|---|
| `chain_query` | Sequential multi-AI pipeline — use `{previous}` to pass output forward |
| `debate` | Multi-provider debate with FOR / AGAINST / NEUTRAL stances |
| `verify` | Cross-provider answer verification with confidence score (0–100%) |
| `security_audit` | Code security scan — flags CRITICAL / HIGH / MEDIUM / LOW issues |
| Tool | What it does |
|---|---|
| `generate_code` | Generate code from a description |
| `explain_code` | Plain-English explanation of any code |
| `optimize_code` | Performance improvement suggestions |
| `review_code` | Code review feedback |
| `verify_code` | Check against best practices |
| Tool | What it does |
|---|---|
| `deep_search` | Comprehensive web search |
| `internet_search` | General internet search on any topic |
| `news_search` | Latest news articles |
| `reddit_search` | Reddit discussions |
| `github_search` | Find open-source repos, code, and solutions on GitHub |
| `academic_search` | Papers and research |
| `math_search` | Math problems step by step |
| Tool | What it does |
|---|---|
| `brainstorm` | Generate ideas on any topic |
| `summarize_url` | Summarize any URL |
| `generate_article` | Full article generation |
| `writing_help` | Writing assistance |
| `fact_check` | Fact verification |
| `find_stats` | Find statistics and data |
| `how_to` | Step-by-step instructions |
| `compare` | Compare two things in depth |
| Tool | What it does |
|---|---|
| `analyze_document` | Analyze documents from a URL |
| `extract_data` | Extract structured data from text or a URL |
| `get_ui_reference` | UI/UX design consultant — colors, layouts, components, CSS tokens, and code improvements |
| Tool | What it does |
|---|---|
| `analyze_file` | Upload and analyze a local file |
| `review_code_file` | Code review on a local file (bugs, performance, security) |
| Tool | What it does |
|---|---|
| `show_window` | Show the Proxima window |
| `hide_window` | Hide to system tray |
| `toggle_window` | Toggle visibility |
| `set_headless_mode` | Run fully in background |
| Tool | What it does |
|---|---|
| `new_conversation` | Reset conversation context |
| `clear_cache` | Clear response cache |
| `conversation_export` | Export full conversation history |
Since Proxima works without API keys, a few things worth knowing:
- No credentials stored. Proxima uses your existing browser session cookies — the same way you're already logged in.
- Nothing leaves your machine except the queries you send to AI providers you're logged into.
- Runs on localhost. The MCP server, REST API, and WebSocket are all local. Nothing is exposed to the internet.
- No telemetry. Proxima doesn't collect or send any usage data anywhere.
- Sessions are yours. If you log out from a provider's website or clear browser data, you'll need to log in again through Proxima.
Proxima doesn't bypass authentication — it uses the sessions you already have, same as using the site in a browser.
```
Proxima/
├── electron/
│   ├── main-v2.cjs            # Electron main process
│   ├── browser-manager.cjs    # Browser session management
│   ├── rest-api.cjs           # REST API server (OpenAI-compatible)
│   ├── ws-server.cjs          # WebSocket server
│   ├── provider-api.cjs       # Provider engine injection manager
│   ├── index-v2.html          # App UI
│   ├── preload.cjs            # Renderer preload bridge
│   └── providers/
│       ├── chatgpt-engine.js      # SHA3-512 POW + SSE streaming
│       ├── claude-engine.js       # Org auth + SSE streaming
│       ├── gemini-engine.js       # Session SSE streaming
│       └── perplexity-engine.js   # SSE streaming
├── cli/
│   └── proxima-cli.cjs        # Terminal CLI
├── src/
│   ├── mcp-server-v3.js       # MCP server (50+ tools)
│   └── enabled-providers.json # Provider config
├── sdk/
│   ├── proxima.py             # Python SDK
│   └── proxima.js             # JavaScript SDK
├── assets/                    # Icons, screenshots, demo
└── package.json
```
Windows Firewall prompt on first launch
Proxima runs on localhost:19223 and localhost:3210. Click Allow — it only accepts local connections.
Provider shows "Not logged in"
Each provider has a different login method:
- ChatGPT, Claude, Perplexity — click the provider tab and log in using OTP (email code). Google Sign-In is restricted in embedded browsers by Google's policy.
- Gemini — uses cookie-based authentication. Log in to Google in your regular browser first, then Proxima picks up the session automatically.
REST API not responding
Check that REST API is enabled in the Settings → REST API & CLI section. Visit http://localhost:3210 in your browser to verify.
MCP tools not showing in editor
- Make sure Proxima is running
- Verify the path in your MCP config (use the Settings copy button)
- Restart your editor
CLI: proxima not found after install
Open a fresh terminal. If still not found, click Fix in the Settings → CLI section.
CLI: "Cannot connect to Proxima"
Proxima must be running and REST API must be enabled. The CLI connects to localhost:3210.
WebSocket won't connect
WebSocket shares the REST API server. Enable REST API in Settings first.
Personal, non-commercial use only. See LICENSE for details.
Proxima v4.1.0 — One API, All AI Models
Made by Zen4-bit
If it saved you time, a star goes a long way.