Proxima

4 AI providers. 1 local server. No API keys.

Use ChatGPT, Claude, Gemini & Perplexity directly inside your coding tools — through your existing accounts.




Getting Started · CLI · REST API · WebSocket · SDKs · MCP Tools



Demo

App Demo · CLI · Webhook Live Chat & Battle · Application Overview

proxima-demo.mp4
proxima-cli.mp4
proxima-live-chat.mp4
proxima-overview.mp4

Overview

Proxima is a local AI gateway that connects multiple AI providers to your development environment. It communicates with each provider at the browser level through your active login sessions — the same way you'd chat with them in your browser.


  • 🌐 One Endpoint — everything through /v1/chat/completions, no separate URLs
  • 🤖 4 AI Providers — ChatGPT, Claude, Gemini, Perplexity; any model, any task
  • ⚡ Provider Engines — native browser-level communication; 3–10x faster, more reliable
  • 🖥️ CLI Tool — proxima ask, proxima fix, proxima debate, right from your terminal
  • 🔌 WebSocket — real-time streaming at ws://localhost:3210/ws
  • 🧰 45+ MCP Tools — search, code, translate, analyze, debate, audit, all via MCP
  • 📡 REST API — OpenAI-compatible API on localhost:3210
  • 📦 SDKs — Python & JavaScript, one function each
  • 🧠 Smart Router — auto-picks the best available AI for your query
  • 🔑 No API Keys — uses your existing browser sessions (see how it works)
  • 🔒 Local & Private — runs on 127.0.0.1; data goes only to providers you're logged into


What's New in v4.1.0

🔥 Provider Engine System
Proxima now uses native browser-level communication with AI providers — no DOM scraping. Responses are 3–10x faster and far more stable, with SSE streaming support and automatic fallback mechanisms.
⚡ CLI Tool
Run proxima ask, proxima fix, proxima debate from any terminal. Pipe errors straight from your build output. Supports file context, git diff piping, and JSON output for scripts.
🔌 WebSocket Server
Real-time streaming AI at ws://localhost:3210/ws. Bidirectional communication with status updates, request tracking, and keepalive. Useful for apps, scripts, anything that needs live output.
🛠️ 15 New MCP Tools
chain_query, solve, debate, security_audit, verify, fix_error, build_architecture, write_tests, explain_error, convert_code, ask_selected, conversation_export, ask_perplexity, github_search, get_ui_reference
📄 Interactive API Docs
Live documentation at /docs, /cli, /ws — with a working chat widget to test queries directly in your browser.
🎯 Multi-Model Queries
model: "all" queries every provider at once. model: ["claude", "chatgpt"] targets specific ones. Compare responses side by side from multiple AI providers in a single request.
📤 Conversation Export
Export full conversation history from any provider using conversation_export. Continue working on AI agent projects, revisit ideas discussed with providers, and build on previous plans without losing context.
🛡️ New REST API Functions
New security_audit and debate functions added to the REST API endpoint. File upload support via the file field in the request body.

Bug fixes & improvements:

  • πŸ”§ Staggered multi-provider queries β€” prevents UI freezes during parallel requests
  • πŸ”§ Smart provider selection β€” routes coding tasks to Claude, research to Perplexity
  • πŸ”§ Response caching with TTL (5 min) and automatic eviction (max 100 entries)
  • πŸ”§ Rate limit handling β€” detects 429 responses, auto-recovery on expired sessions
  • πŸ”§ Engine auto-injection on page navigation with duplicate guard
  • πŸ”§ Claude conversation auto-recovery (handles 404/410 expired sessions)
  • πŸ”§ ChatGPT SHA3-512 proof-of-work challenge solver
  • πŸ”§ 10MB body size limit on REST API with CORS headers
  • πŸ”§ Socket leak prevention on IPC reconnect

Getting Started

Requirements

  • Node.js 18+ (for MCP server and CLI)
  • Windows 10/11 β€” pre-built installer available
  • macOS / Linux β€” supported via source code

Install

Download Installer (Windows)

Download the latest release and run the installer.


Download Proxima v4.1.0 →

Run from Source (Windows / macOS / Linux)

git clone https://github.com/Zen4-bit/Proxima.git
cd Proxima
npm install
npm start

Electron will open the Proxima window. Log in to your AI providers, enable REST API in Settings, and you're ready.


CLI install:

  • Windows: Settings β†’ ⚑ Install CLI to PATH, or npm link
  • macOS / Linux: npm link (may need sudo npm link)

Connect to your editor

  1. Open Proxima and log into your AI providers (one-time setup)
  2. Go to Settings → MCP Configuration → copy the config
  3. Paste into your editor's MCP config file:
{
  "mcpServers": {
    "proxima": {
      "command": "node",
      "args": ["C:/path/to/Proxima/src/mcp-server-v3.js"]
    }
  }
}
  4. Restart your editor. The tools will appear.

Tip: Use the copy button in Settings — don't type the path manually.

Works with: Cursor · VS Code (MCP extension) · Claude Desktop · Windsurf · Gemini CLI · any MCP-compatible client


Supported Providers


ChatGPT
OpenAI's GPT


Claude
Anthropic's Claude


Gemini
Google's Gemini


Perplexity
Web search & research

Each provider runs through a dedicated engine script that handles communication at the browser level. Responses are streamed via SSE using your existing login. If an engine can't connect, Proxima falls back to DOM-based interaction automatically.



How It Works

In v4.1.0, Proxima uses a Provider Engine System instead of DOM scraping.

When you send a query, Proxima uses a lightweight engine script within the provider's browser tab. That script handles communication at the browser level and streams the response back via SSE. If the engine fails for any reason, Proxima automatically falls back to DOM-based interaction β€” so it keeps working either way.

Your editor β†’ MCP tool call β†’ Proxima local server
                                      ↓
                           Engine injected into session
                                      ↓
                      Browser-level communication (SSE stream)
                                      ↓
                              Response returned

| Engine | Provider | How it works |
|---|---|---|
| chatgpt-engine.js | ChatGPT | Handles proof-of-work challenges, streams via SSE |
| claude-engine.js | Claude | Org-level auth handling, SSE streaming, auto-recovery |
| gemini-engine.js | Gemini | SSE streaming with auto-reconnect |
| perplexity-engine.js | Perplexity | SSE streaming |

CLI Tool

The proxima CLI lets you use any AI provider from your terminal.


Install

From the app

Settings → ⚡ Install CLI to PATH

From source

npm link                  # Windows
sudo npm link             # macOS / Linux

Without installing

npm run cli -- ask "question"

Commands

# Ask any provider
proxima ask "How does async/await work in JS?"
proxima ask claude "Review this approach"
proxima ask chatgpt "Explain this error"

# Search
proxima search "latest Node.js release"

# Code
proxima code "REST API with Express and JWT auth"
proxima code review "function fetchUser(id) { ... }"
proxima code explain "async/await"

# Smart tools
proxima fix "SyntaxError: Unexpected token '<'"
proxima debate "tabs vs spaces"
proxima audit "SELECT * FROM users WHERE id=" + req.query.id
proxima brainstorm "features for a dev productivity tool"

# Translate
proxima translate "Hello world" --to Hindi

# Compare all providers
proxima compare "Bun vs Node.js for production"

# Utilities
proxima status                     # server status
proxima stats                      # response time stats
proxima models                     # list available providers
proxima new                        # reset all conversations

Pipe Support

# Fix build errors directly
npm run build 2>&1 | proxima fix

# Review a git diff
git diff | proxima code review

# Pass file as context
proxima ask "What does this do?" --file src/server.js

Flags

| Flag | What it does |
|---|---|
| -m / --model | Override provider (claude, chatgpt, gemini, perplexity, auto) |
| --json | Raw JSON output for scripting |
| -l / --lang | Specify code language |
| --file | Include a file as context |
| --to | Target language for translate |
| --from | Source language for translate |

REST API

Proxima runs an OpenAI-compatible REST API at http://localhost:3210.

Enable it in Settings → REST API & CLI.


Endpoints

POST /v1/chat/completions   — OpenAI-compatible chat
GET  /v1/models             — List available models
GET  /v1/functions          — API function catalog with examples
GET  /v1/stats              — Response time stats per provider
POST /v1/conversations/new  — Reset all conversations
GET  /api/status            — Server status
GET  /docs                  — Interactive API docs (with live chat widget)
GET  /cli                   — CLI documentation
GET  /ws                    — WebSocket documentation

Functions

The "function" field controls what happens. No function = normal chat.

| Function | Body Fields | What it does |
|---|---|---|
| (none) | model, message | Normal chat |
| "search" | model, message, function | Web search + AI analysis |
| "translate" | model, message, function, to | Translate text |
| "brainstorm" | model, message, function | Generate ideas |
| "code" | model, message, function, action | Code generate/review/debug/explain |
| "analyze" | model, function, url | Analyze URL or content |
| "security_audit" | model, code, function | Scan code for vulnerabilities |
| "debate" | model, message, function | Multi-perspective debate |

Examples

Chat:

curl http://localhost:3210/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"model": "claude", "message": "What is AI?"}'

Search:

curl http://localhost:3210/v1/chat/completions \
  -d '{"model": "perplexity", "message": "AI news 2026", "function": "search"}'

Translate:

curl http://localhost:3210/v1/chat/completions \
  -d '{"model": "gemini", "message": "Hello world", "function": "translate", "to": "Hindi"}'

Code Generate:

curl http://localhost:3210/v1/chat/completions \
  -d '{"model": "claude", "message": "Sort algorithm", "function": "code", "action": "generate", "language": "Python"}'

Query All Providers:

curl http://localhost:3210/v1/chat/completions \
  -d '{"model": "all", "message": "Explain quantum computing"}'

Security Audit:

curl http://localhost:3210/v1/chat/completions \
  -d '{"model": "claude", "function": "security_audit", "code": "db.query(\"SELECT * FROM users WHERE id=\" + req.query.id)"}'

Multi-model queries

model: "all"                       // all enabled providers
model: ["claude", "chatgpt"]       // specific providers

Response Format

{
  "id": "proxima-abc123",
  "model": "claude",
  "choices": [{
    "message": {
      "role": "assistant",
      "content": "AI response here..."
    }
  }],
  "proxima": {
    "provider": "claude",
    "responseTimeMs": 2400
  }
}

When using model: "all", each provider gets its own entry in choices[].
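Following the response format above, a model: "all" reply can be unpacked per provider. Exactly where the provider tag sits on each choice is an assumption inferred from the single-model format, so the sample payload below is illustrative only:

```python
import json

# Illustrative payload: field placement on each choice is inferred from the
# documented single-model response shape, not confirmed for model "all".
raw = json.dumps({
    "id": "proxima-abc123",
    "model": "all",
    "choices": [
        {"message": {"role": "assistant", "content": "Claude's answer"},
         "proxima": {"provider": "claude", "responseTimeMs": 2400}},
        {"message": {"role": "assistant", "content": "ChatGPT's answer"},
         "proxima": {"provider": "chatgpt", "responseTimeMs": 1800}},
    ],
})

data = json.loads(raw)
# Collect one answer per provider from choices[]
answers = {
    choice.get("proxima", {}).get("provider", "unknown"): choice["message"]["content"]
    for choice in data["choices"]
}
print(answers)
```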


SDKs

Python

from proxima import Proxima
client = Proxima()

# Chat — any model
response = client.chat("Hello", model="claude")
response = client.chat("Hello", model="chatgpt")
response = client.chat("Hello")  # auto picks best
print(response.text)
print(response.response_time_ms)

# Search
result = client.chat("AI news 2026",
    model="perplexity", function="search")

# Translate
hindi = client.chat("Hello world",
    model="gemini", function="translate",
    to="Hindi")

# Code
code = client.chat("Sort algorithm",
    model="claude", function="code",
    action="generate", language="Python")

# System
models = client.get_models()
stats = client.get_stats()
client.new_conversation()

pip install requests, then copy sdk/proxima.py to your project.
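The SDK calls above all resolve to POST bodies like the ones shown in the REST API examples. A minimal sketch of that kwargs-to-body mapping (build_payload is a hypothetical helper for illustration, not code from the shipped sdk/proxima.py):

```python
def build_payload(message, model="auto", **options):
    """Map SDK-style keyword arguments onto the /v1/chat/completions body.

    `options` carries function, to, action, language, and so on (the same
    fields shown in the REST API examples). Hypothetical helper, shown only
    to illustrate the documented request shape.
    """
    payload = {"model": model, "message": message}
    payload.update({k: v for k, v in options.items() if v is not None})
    return payload

print(build_payload("Hello world", model="gemini",
                    function="translate", to="Hindi"))
```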

JavaScript

const { Proxima } = require('./sdk/proxima');
const client = new Proxima();

// Chat — any model
const res = await client.chat("Hello",
    { model: "claude" });
console.log(res.text);

// Search
const news = await client.chat("AI news",
    { model: "perplexity",
      function: "search" });

// Translate
const hindi = await client.chat("Hello",
    { model: "gemini",
      function: "translate",
      to: "Hindi" });

// Code generate
const code = await client.chat("Sort algo",
    { model: "claude",
      function: "code",
      action: "generate" });

// System
const models = await client.getModels();
const stats = await client.getStats();

Works with Node.js 18+ (native fetch).


SDK Configuration

client = Proxima(base_url="http://192.168.1.100:3210")   # custom URL
client = Proxima(default_model="claude")                  # default model

WebSocket

Real-time streaming AI at ws://localhost:3210/ws.

Requires REST API to be enabled in Settings.


Example

const ws = new WebSocket("ws://localhost:3210/ws");

ws.send(JSON.stringify({
  action: "ask",
  model: "claude",
  message: "What is a closure?",
  id: "req_1"
}));

ws.onmessage = (e) => {
  const msg = JSON.parse(e.data);
  // { type: "status",   id: "req_1", status: "processing", model: "claude" }
  // { type: "response", id: "req_1", model: "claude", content: "...", responseTimeMs: 2400 }
};

Available Actions

| Action | What it does |
|---|---|
| ask / chat | Chat with any provider |
| search | Web search |
| code | generate / review / explain / optimize / debug |
| translate | Translate text |
| brainstorm | Generate ideas |
| debate | Multi-provider debate (queries all providers) |
| audit | Security code audit |
| new_conversation | Reset conversation context for all providers |
| stats | Connection and provider statistics |
| ping | Keepalive — returns pong |
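Both directions of the exchange above are plain JSON frames, so they can be built and routed without a live socket. The field names follow the example; anything beyond them is an assumption:

```python
import json

def make_request(message, model="claude", request_id="req_1", action="ask"):
    # Build the frame sent to ws://localhost:3210/ws, matching the example above
    return json.dumps({"action": action, "model": model,
                       "message": message, "id": request_id})

def handle_frame(raw):
    # Route an incoming frame by its "type" field, as in the onmessage handler
    msg = json.loads(raw)
    if msg.get("type") == "status":
        return f'{msg["id"]} is {msg["status"]}'
    if msg.get("type") == "response":
        return msg["content"]
    return None  # ignore unknown frame types

print(json.loads(make_request("What is a closure?")))
print(handle_frame('{"type": "response", "id": "req_1", "content": "A closure is..."}'))
```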

MCP Tools

🤖 AI Provider Tools

| Tool | What it does |
|---|---|
| ask_chatgpt | Query ChatGPT (supports file upload) |
| ask_claude | Query Claude (supports file upload) |
| ask_gemini | Query Gemini (supports file upload) |
| ask_perplexity | Query Perplexity (supports file upload) |
| ask_all_ais | Send the same query to all providers at once |
| ask_selected | Pick specific providers to query |
| compare_ais | Get and compare responses side by side |
| smart_query | Auto-picks the best provider, falls back if one fails |

🔧 Development Tools

| Tool | What it does |
|---|---|
| solve | One-shot problem solver — senior engineer level |
| fix_error | Root cause + exact fix for any error |
| build_architecture | Full project architecture blueprint |
| write_tests | Generate tests (jest / vitest / mocha / pytest) |
| explain_error | Error explained in plain terms, no jargon |
| convert_code | Convert code between languages or frameworks |

βš”οΈ Multi-AI Tools

ToolWhat it does
chain_querySequential multi-AI pipeline β€” use {previous} to pass output forward
debateMulti-provider debate with FOR / AGAINST / NEUTRAL stances
verifyCross-provider answer verification with confidence score (0–100%)
security_auditCode security scan β€” flags CRITICAL / HIGH / MEDIUM / LOW issues
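chain_query's {previous} placeholder works like a simple substitution pipeline: each step's prompt has {previous} replaced by the last answer before it is sent. A sketch of the idea, where the ask function is a stand-in for a real provider call:

```python
def run_chain(steps, ask):
    """Run prompts in order, substituting {previous} with the last output.

    `ask` is a stand-in for a real provider call (for example, a request
    to the REST API). Illustrative only, not Proxima's implementation.
    """
    previous = ""
    for prompt in steps:
        previous = ask(prompt.replace("{previous}", previous))
    return previous

# Demo with a fake provider that just echoes its prompt
result = run_chain(
    ["List three risks of X", "Pick the biggest risk from: {previous}"],
    ask=lambda p: f"[answer to: {p}]",
)
print(result)
```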

💻 Code Tools

| Tool | What it does |
|---|---|
| generate_code | Generate code from a description |
| explain_code | Plain-English explanation of any code |
| optimize_code | Performance improvement suggestions |
| review_code | Code review feedback |
| verify_code | Check against best practices |

πŸ” Search Tools

ToolWhat it does
deep_searchComprehensive web search
internet_searchGeneral internet search on any topic
news_searchLatest news articles
reddit_searchReddit discussions
github_searchFind open-source repos, code, and solutions on GitHub
academic_searchPapers and research
math_searchMath problems step-by-step

πŸ“ Content Tools

ToolWhat it does
brainstormGenerate ideas on any topic
summarize_urlSummarize any URL
generate_articleFull article generation
writing_helpWriting assistance
fact_checkFact verification
find_statsFind statistics and data
how_toStep-by-step instructions
compareCompare two things in depth

🔬 Analysis Tools

| Tool | What it does |
|---|---|
| analyze_document | Analyze documents from URL |
| extract_data | Extract structured data from text or URL |
| get_ui_reference | UI/UX design consultant — colors, layouts, components, CSS tokens, and code improvements |

πŸ“ File Tools

ToolWhat it does
analyze_fileUpload and analyze a local file
review_code_fileCode review on a local file (bugs, performance, security)

🪟 Window Controls

| Tool | What it does |
|---|---|
| show_window | Show the Proxima window |
| hide_window | Hide to system tray |
| toggle_window | Toggle visibility |
| set_headless_mode | Run fully in background |

🔄 Session Tools

| Tool | What it does |
|---|---|
| new_conversation | Reset conversation context |
| clear_cache | Clear response cache |
| conversation_export | Export full conversation history |

Security & Privacy

Since Proxima works without API keys, a few things worth knowing:

  • No credentials stored. Proxima uses your existing browser session cookies β€” the same way you're already logged in.
  • Nothing leaves your machine except the queries you send to AI providers you're logged into.
  • Runs on localhost. The MCP server, REST API, and WebSocket are all local. Nothing is exposed to the internet.
  • No telemetry. Proxima doesn't collect or send any usage data anywhere.
  • Sessions are yours. If you log out from a provider's website or clear browser data, you'll need to log in again through Proxima.

Proxima doesn't bypass authentication — it uses the sessions you already have. Same as using the site in a browser.


Project Structure

Proxima/
├── electron/
│   ├── main-v2.cjs                  # Electron main process
│   ├── browser-manager.cjs          # Browser session management
│   ├── rest-api.cjs                 # REST API server (OpenAI-compatible)
│   ├── ws-server.cjs                # WebSocket server
│   ├── provider-api.cjs             # Provider engine injection manager
│   ├── index-v2.html                # App UI
│   ├── preload.cjs                  # Renderer preload bridge
│   └── providers/
│       ├── chatgpt-engine.js        # SHA3-512 POW + SSE streaming
│       ├── claude-engine.js         # Org auth + SSE streaming
│       ├── gemini-engine.js         # Session SSE streaming
│       └── perplexity-engine.js     # SSE streaming
├── cli/
│   └── proxima-cli.cjs              # Terminal CLI
├── src/
│   ├── mcp-server-v3.js             # MCP server (50+ tools)
│   └── enabled-providers.json       # Provider config
├── sdk/
│   ├── proxima.py                   # Python SDK
│   └── proxima.js                   # JavaScript SDK
├── assets/                          # Icons, screenshots, demo
└── package.json

Troubleshooting

Windows Firewall prompt on first launch
Proxima runs on localhost:19223 and localhost:3210. Click Allow — it only accepts local connections.

Provider shows "Not logged in"
Each provider has a different login method:

  • ChatGPT, Claude, Perplexity β€” click the provider tab and log in using OTP (email code). Google Sign-In is restricted in embedded browsers by Google's policy.
  • Gemini β€” uses cookie-based authentication. Log in to Google in your regular browser first, then Proxima picks up the session automatically.

REST API not responding
Check that REST API is enabled in Settings → REST API & CLI section. Visit http://localhost:3210 in your browser to verify.

MCP tools not showing in editor

  1. Make sure Proxima is running
  2. Verify the path in your MCP config (use the Settings copy button)
  3. Restart your editor

CLI: proxima not found after install
Open a fresh terminal. If still not found, click 🔧 Fix in Settings → CLI section.

CLI: "Cannot connect to Proxima"
Proxima must be running and REST API must be enabled. The CLI connects to localhost:3210.

WebSocket won't connect
WebSocket shares the REST API server. Enable REST API in Settings first.


License

Personal, non-commercial use only. See LICENSE for details.



Proxima v4.1.0 — One API, All AI Models ⚡

Made by Zen4-bit


If it saved you time, a ⭐ goes a long way.

