The open-source, self-improving code search that gets smarter every time you use it.
AI coding assistants are only as good as the context they retrieve. Most solutions chunk your code into large blocks and hope for the best—returning whole files when you need a single function, or missing the relevant code entirely. Context-Engine takes a different approach: ReFRAG-inspired micro-chunking returns precise 5-50 line spans, hybrid search combines semantic and lexical signals with cross-encoder reranking, and the system adapts to your codebase over time. No cloud dependency, no vendor lock-in—just a Docker Compose stack that works with any MCP-compatible tool.
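The hybrid ranking idea can be sketched in a few lines of Python. This is an illustrative toy, not Context-Engine's implementation: the scoring functions, the blend weight, and the example chunks are all invented for the sketch, and a real cross-encoder would rescore the top candidates rather than the simple sort shown here.

```python
from math import sqrt

def dense_score(query_vec, chunk_vec):
    # Cosine similarity between embedding vectors (the semantic signal).
    dot = sum(q * c for q, c in zip(query_vec, chunk_vec))
    norm = sqrt(sum(q * q for q in query_vec)) * sqrt(sum(c * c for c in chunk_vec))
    return dot / norm if norm else 0.0

def lexical_score(query, chunk_text):
    # Token-overlap ratio (a stand-in for BM25-style lexical matching).
    q_tokens = set(query.lower().split())
    c_tokens = set(chunk_text.lower().split())
    return len(q_tokens & c_tokens) / len(q_tokens) if q_tokens else 0.0

def hybrid_rank(query, query_vec, chunks, alpha=0.5, top_k=2):
    # Blend both signals, then keep the top-k spans -- the stage where a
    # cross-encoder reranker would rescore the shortlist.
    scored = [
        (alpha * dense_score(query_vec, vec) + (1 - alpha) * lexical_score(query, text), text)
        for text, vec in chunks
    ]
    scored.sort(reverse=True)
    return [text for _, text in scored[:top_k]]

chunks = [
    ("def parse_config(path): ...", [0.9, 0.1]),
    ("def render_html(tree): ...", [0.2, 0.8]),
    ("class ConfigLoader: ...",    [0.8, 0.2]),
]
print(hybrid_rank("parse config file", [1.0, 0.0], chunks))
# → ['def parse_config(path): ...', 'class ConfigLoader: ...']
```

The micro-chunking piece is what makes this blend pay off: scoring 5-50 line spans instead of whole files means the top results are already the right granularity to paste into a prompt.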
| Feature | What it does |
|---|---|
| Precision Retrieval | Returns exact code spans (5-50 lines), not whole files |
| Hybrid Search | Dense vectors + lexical matching + cross-encoder reranking |
| MCP Native | Dual transport (SSE + HTTP) for any AI coding tool |
| Works Locally | Docker Compose, runs on your machine |
| Adaptive (optional) | Enable learning mode to improve ranking from usage patterns |
```bash
git clone https://github.com/m1rl0k/Context-Engine.git && cd Context-Engine
docker compose up -d
```

That's it. The stack is running.
Option A: VS Code Extension (recommended)
Install Context Engine Uploader from the VS Code Marketplace. Open your project, click "Upload Workspace". Done.
The extension auto-syncs changes and configures your MCP clients.
Option B: CLI
```bash
# Index any project
HOST_INDEX_PATH=/path/to/your/project docker compose run --rm indexer
```

HTTP endpoints — for Claude Code, Windsurf, Qodo, and other RMCP-capable clients:
```json
{
  "mcpServers": {
    "qdrant-indexer": { "url": "http://localhost:8003/mcp" },
    "memory": { "url": "http://localhost:8002/mcp" }
  }
}
```

stdio via npx (recommended) — unified bridge with workspace awareness:
```json
{
  "mcpServers": {
    "context-engine": {
      "type": "stdio",
      "command": "npx",
      "args": [
        "@context-engine-bridge/context-engine-mcp-bridge",
        "mcp-serve",
        "--workspace", "/path/to/your/project",
        "--indexer-url", "http://localhost:8003/mcp",
        "--memory-url", "http://localhost:8002/mcp"
      ]
    }
  }
}
```

See docs/IDE_CLIENTS.md for Cursor, Windsurf, Cline, Codex, Augment, and more.
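For scripting the HTTP-endpoint setup, a small helper can write the config above into a project-local file. A minimal sketch: the `.mcp.json` filename is what Claude Code reads for project-scoped servers, but other clients keep configs elsewhere (see docs/IDE_CLIENTS.md), so treat the target path as an assumption.

```python
import json
from pathlib import Path

def write_mcp_config(project_root,
                     indexer_url="http://localhost:8003/mcp",
                     memory_url="http://localhost:8002/mcp"):
    # Write a project-scoped MCP config pointing at the local stack.
    config = {
        "mcpServers": {
            "qdrant-indexer": {"url": indexer_url},
            "memory": {"url": memory_url},
        }
    }
    path = Path(project_root) / ".mcp.json"
    path.write_text(json.dumps(config, indent=2) + "\n")
    return path

# Example: write_mcp_config("/path/to/your/project")
```

The VS Code extension does this for you (see "MCP auto-config" below); the helper is only useful for headless or scripted setups.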
| Client | Transport |
|---|---|
| Claude Code | SSE / RMCP |
| Cursor | SSE / RMCP |
| Windsurf | SSE / RMCP |
| Cline | SSE / RMCP |
| Roo | SSE / RMCP |
| Augment | SSE |
| Codex | RMCP |
| Copilot | RMCP |
| AmpCode | RMCP |
| Zed | SSE (via mcp-remote) |
| Service | URL |
|---|---|
| Indexer MCP (SSE) | http://localhost:8001/sse |
| Indexer MCP (RMCP) | http://localhost:8003/mcp |
| Memory MCP (SSE) | http://localhost:8000/sse |
| Memory MCP (RMCP) | http://localhost:8002/mcp |
| Qdrant | http://localhost:6333 |
| Upload Service | http://localhost:8004 |
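The endpoints above can be smoke-tested from Python. A minimal sketch assuming the default ports from the table; it only checks that each port answers HTTP at all, since MCP endpoints may reject a plain GET:

```python
from urllib.request import urlopen
from urllib.error import HTTPError, URLError

SERVICES = {
    "indexer-sse":  "http://localhost:8001/sse",
    "indexer-rmcp": "http://localhost:8003/mcp",
    "memory-sse":   "http://localhost:8000/sse",
    "memory-rmcp":  "http://localhost:8002/mcp",
    "qdrant":       "http://localhost:6333",
    "upload":       "http://localhost:8004",
}

def is_up(url, timeout=2.0):
    # A service counts as reachable if the port answers HTTP at all;
    # an HTTP error status still means something is listening.
    try:
        urlopen(url, timeout=timeout)
        return True
    except HTTPError:
        return True
    except URLError:
        return False

# With the stack running:
# for name, url in SERVICES.items():
#     print(f"{name}: {'up' if is_up(url) else 'down'}")
```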
The Context Engine Uploader extension provides:
- One-click upload — Sync your workspace to Context-Engine
- Auto-sync — Watch for changes and re-index automatically
- Prompt+ button — Enhance prompts with code context before sending
- MCP auto-config — Writes Claude/Windsurf MCP configs for you
See docs/vscode-extension.md for full documentation.
Search (Indexer MCP):
- `repo_search` — Hybrid code search with filters
- `context_search` — Blend code + memory results
- `context_answer` — LLM-generated answers with citations
- `search_tests_for`, `search_config_for`, `search_callers_for`
Memory (Memory MCP):
- `store` — Save knowledge with metadata
- `find` — Retrieve stored memories
Indexing:
- `qdrant_index_root` — Index the workspace
- `qdrant_status` — Check collection health
- `qdrant_prune` — Remove stale entries
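On the wire, all of these are invoked with standard MCP `tools/call` requests (JSON-RPC 2.0) against the HTTP endpoints. A sketch of the envelope; the argument names (`query`, `information`) are illustrative assumptions, not the real schemas:

```python
import json

def tools_call(tool_name, arguments, request_id=1):
    # Standard MCP tools/call envelope (JSON-RPC 2.0).
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool_name, "arguments": arguments},
    }

# Hypothetical argument names for the sketch:
search_req = tools_call("repo_search", {"query": "where is the auth token validated"})
store_req = tools_call("store", {"information": "auth is handled in middleware"}, request_id=2)

print(json.dumps(search_req, indent=2))
```

POST the payload to the matching RMCP endpoint (indexer tools to http://localhost:8003/mcp, memory tools to http://localhost:8002/mcp); MCP clients normally perform an `initialize` handshake first, which the stdio bridge and IDE clients handle for you.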
See docs/MCP_API.md for complete API reference.
| Guide | Description |
|---|---|
| Getting Started | VS Code + dev-remote walkthrough |
| IDE Clients | Config examples for all supported clients |
| Configuration | Environment variables reference |
| MCP API | Full tool documentation |
| Architecture | System design |
| Multi-Repo | Multiple repositories in one collection |
| Kubernetes | Production deployment |
```mermaid
flowchart LR
  subgraph Your Machine
    A[IDE / AI Tool]
    V[VS Code Extension]
  end
  subgraph Docker
    U[Upload Service]
    I[Indexer MCP]
    M[Memory MCP]
    Q[(Qdrant)]
    L[[LLM Decoder]]
    W[[Learning Worker]]
  end
  V -->|sync| U
  U --> I
  A -->|MCP| I
  A -->|MCP| M
  I --> Q
  M --> Q
  I -.-> L
  I -.-> W
  W -.-> Q
```
The VS Code extension syncs your workspace to the stack. Your IDE talks to the MCP servers, which query Qdrant for hybrid search. Optional features include a local LLM decoder (llama.cpp), cloud LLM integration (GLM, MiniMax M2), and adaptive learning that improves ranking over time.
Python, TypeScript/JavaScript, Go, Java, Rust, C#, PHP, Shell, Terraform, YAML, PowerShell
MIT