FullStack MCP Hub — complete MCP toolkit with built-in RAG, GUI and 50+ free tools (no paid APIs) that runs out-of-the-box.
Works with:
- OpenAI / ChatGPT Codex CLI
- Gemini CLI
- ChatGPT (Dev Mode) via custom connector
- Claude (desktop / MCP)
- Local LLMs (llama.cpp, LM Studio, etc.)
- Basically anything that can connect to an MCP server over stdio or SSE
Instead of every model having its own isolated tools and half-memory, FullStack MCP Hub gives you one central MCP stack:
- A gateway + hub that connects to multiple MCP servers, discovers tools, and exposes them through a single endpoint.
- A graphical UI where you can:
  - browse tools from all connected servers,
  - edit tool descriptions,
  - block tools you don’t want used,
  - save payload presets,
  - and run tools live with JSON inputs and real responses.
- A built-in RAG system with its own tab:
  - drag & drop files directly into the UI,
  - browse your RAG folders (`uploads`, `saved_chats`, `images`, `indexes`, profiles),
  - create named indexes from directories,
  - search using keyword + fuzzy match + filters,
  - and wire that context into any model you’re using.
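Tools exposed through the hub are namespaced as `server__tool` (e.g. `playwright__browser_screenshot`). As a minimal sketch of that convention, here is how you could group a list of tool names by server prefix; the helper below is illustrative, not part of the hub’s API:

```python
# Group namespaced MCP tool names ("server__tool") by their server prefix.
# This helper is a hypothetical example, not code shipped with the hub.

def group_tools_by_server(tool_names):
    """Map 'server__tool' names to {server: [tool, ...]}."""
    grouped = {}
    for name in tool_names:
        server, _, tool = name.partition("__")
        grouped.setdefault(server, []).append(tool)
    return grouped

tools = [
    "playwright__browser_navigate",
    "playwright__browser_screenshot",
    "local_rag__search_index",
]
print(group_tools_by_server(tools))
# → {'playwright': ['browser_navigate', 'browser_screenshot'], 'local_rag': ['search_index']}
```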
Out of the box you get a solid, free MCP tool stack (no paid APIs required):
- Filesystem, shell, Python REPL (with its own venv)
- Local RAG (chunked search, filters, `save_chat`, `save_image`)
- Playwright (browser automation, screenshots, interactive browsing)
- SQLite (read/write, schema, insights)
- Web search (fast DuckDuckGo + deeper multi-engine clone)
- Research (Wikipedia, arXiv, Wikimedia images)
- Scraper (HTML → text)
- Image generation via Pollinations (URL-based)
- CoinGecko market data (curated, SSE-based)
- Clock/time utilities
The goal: make MCP server integration easy, and give you a reusable tool + RAG layer you can plug into any LLM workflow—cloud, local, or hybrid.
The idea:
clone → run one script → open the UI → start adding MCP servers.
Requirements:
- Node.js 18+
- npm
- Linux/macOS shell (bash/zsh) or WSL on Windows
You do not need any paid API keys to get started.
All bundled MCP servers run locally and use free/public data sources.
```bash
git clone https://github.com/<your-username>/FullStack_MCP_Hub.git
cd FullStack_MCP_Hub
```

Set the root path (used by helper scripts):

```bash
export MCP_ROOT="$(pwd)"
```

(Optional) Add this to your shell rc (`~/.bashrc` or `~/.zshrc`) so it’s always set:

```bash
echo "export MCP_ROOT=/full/path/to/FullStack_MCP_Hub" >> ~/.bashrc
```

Reload your shell or open a new terminal after adding it.
From the repo root:

```bash
cd "$MCP_ROOT"
chmod +x setup.sh start.sh
./setup.sh
```

This will:

- install gateway and UI dependencies,
- build the React UI,
- install Playwright browsers for the Playwright MCP server.

Run this once. If it finishes without errors, you’re ready to start the hub.
From the repo root:

```bash
cd "$MCP_ROOT"
./start.sh
```

This will:

- start the MCP gateway on http://localhost:3333
- expose:
  - `GET /tools` – list tools
  - `GET /sse` – universal MCP SSE endpoint
  - `POST /gemini/v1/execute` – Gemini-style adapter
- serve the web UI at http://localhost:3333

Open your browser:

http://localhost:3333
If port 3333 is already in use:

```bash
lsof -i :3333
kill <pid>
./start.sh
```

If you prefer to see the individual steps:

1. Install gateway deps:

   ```bash
   cd "$MCP_ROOT/gateway"
   npm install
   ```

2. Install UI deps and build:

   ```bash
   cd "$MCP_ROOT/gateway/ui"
   npm install
   npm run build
   ```

3. Start the gateway:

   ```bash
   cd "$MCP_ROOT/gateway"
   npm start
   ```

4. Open the UI:

   http://localhost:3333
Once the UI is open:

1. Go to the Servers section (left sidebar).
2. Click “Open form”.
3. Fill the form:
   - Transport: `stdio`
   - Command: `npx`
   - Args: `-y @automatalabs/mcp-server-playwright`
   - CWD: `servers` (relative to repo root)
4. Click Test connection.
5. On success, click Add server.

You should now see a group of `playwright__*` tools in the list.

For a quick sanity check, select `playwright__browser_navigate` and run:

```json
{ "url": "https://example.com" }
```

Then run `playwright__browser_screenshot`:

```json
{ "name": "example_full", "fullPage": true }
```

You should see a screenshot file appear in your configured Playwright output path.
At this point, the hub is working and ready to be plugged into:
- OpenAI / ChatGPT Codex CLI
- Gemini CLI
- ChatGPT Dev Mode
- Claude or any other MCP-aware client.
FullStack MCP Hub is three main layers: hub, gateway, and UI, plus a set of bundled MCP servers and a RAG data directory.
- `hub/` – MCP hub core:
  - connects to configured MCP servers,
  - lists tools across all servers,
  - executes calls,
  - merges tool metadata,
  - applies description overrides and blocklist.
- `gateway/` – HTTP/SSE front door:
  - exposes:
    - `GET /tools` – enumerate tools
    - `GET /sse` – universal MCP endpoint
    - `POST /gemini/v1/execute` – Gemini-style adapter
  - serves the built UI from `gateway/ui/dist`.
- `gateway/ui/` – React + Vite UI:
  - Tools list & detail pane
  - Servers management (add/test/remove)
  - Blocked tools view
  - Tool payload presets
  - RAG tab (drag/drop uploads, browse folders, search indexes)
- `tool-registry/master.json` – registry of MCP servers (stdio + SSE) and their config.
- `tool-registry/tool-overrides.json` – per-tool description overrides (editable in the UI).
- `tool-registry/tool-blocklist.json` – persistent tool blocklist (managed from the UI’s Blocked tab).
- `servers/` – bundled MCP servers that require no paid APIs: `local_rag`, `sqlite`, `python_repl`, `research`, `scrape`, `pollinations`, `coingecko`, `websearch`, `web-search-mcp` (advanced), `playwright`, `shell`, `filesystem`.
- `data/rag/` – local RAG storage:
  - `uploads/` – arbitrary files you import via the UI.
  - `saved_chats/` – raw + summary chat logs.
  - `images/` – saved images from `local_rag__save_image`.
  - `profile_*` – profile folders (e.g. `profile_jeff`).
  - `indexes/` – per-index folders if used.
  - `indexes.pkl` – a persisted, global index metadata file.
The built-in RAG system is designed to be:
- lightweight (no embeddings required),
- chunked (about 500-word chunks),
- fuzzy-searchable,
- and filterable (path, tags, time).
The workflow:

1. You drop files into `data/rag/` (usually via the RAG tab in the UI).
2. You point `local_rag__create_index` at a directory and give the index a name.
3. The server:
   - walks that directory,
   - extracts text,
   - splits each file into ~500-word chunks with overlap,
   - stores chunks + metadata,
   - and persists the structure into `indexes.pkl`.
4. When you use `local_rag__search_index`:
   - it looks up the chosen index,
   - runs keyword + fuzzy search over chunks,
   - applies optional filters (like `path_contains`, `tag`, or mtime),
   - and returns matching chunks with enough context to feed into a model.
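The chunking step above can be sketched roughly as follows. The real `local_rag` implementation may differ in tokenization and overlap size, so treat the numbers here as assumptions:

```python
def chunk_words(text, chunk_size=500, overlap=50):
    """Split text into ~chunk_size-word chunks, each overlapping the
    previous one by `overlap` words. (These defaults are illustrative,
    not the server's exact settings.)"""
    words = text.split()
    step = chunk_size - overlap
    chunks = []
    for start in range(0, len(words), step):
        chunk = words[start:start + chunk_size]
        if not chunk:
            break
        chunks.append(" ".join(chunk))
        if start + chunk_size >= len(words):
            break
    return chunks

doc = " ".join(f"word{i}" for i in range(1200))
chunks = chunk_words(doc)
print(len(chunks))             # → 3 chunks for a 1200-word document
print(len(chunks[1].split()))  # → 500
```

The overlap means a sentence falling on a chunk boundary still appears whole in at least one chunk, which is why chunked indexes tend to recall better than per-file indexes.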
The “Generation 1.5” RAG upgrades improved a few key things:
- Path handling
  - `local_rag__create_index` and `local_rag__search_index` now handle relative paths correctly, so you can point indexes at directories under `data/rag/` without hard-coding absolute paths.
- Chunked indexing
  - `local_rag__create_index` splits content into ~500-word chunks with overlap, instead of indexing whole files.
  - This improves relevance and keeps the amount of text passed back to models manageable.
- Fuzzy search
  - `local_rag__search_index` supports fuzzy matches, so minor typos or wording changes won’t break recall.
  - It uses a sliding-window + similarity-scoring approach over pre-chunked text.
- Filters
  - `path_contains` is confirmed working.
  - Other filters (`tag`, mtime) are implemented and behave as expected:
    - `tag` comes from lines like `#tags: project, profile` in your markdown.
    - Time filters use file modified times to narrow which chunks are searched.
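The sliding-window fuzzy matching described above can be approximated with the standard library’s `difflib`. This is an illustrative sketch, not the server’s actual scoring code:

```python
import difflib

def fuzzy_score(query, chunk):
    """Best similarity (0..1) of the query against any query-length
    window of words in the chunk; a rough stand-in for the server's
    sliding-window + similarity-scoring approach."""
    q_lower = query.lower()
    c_words = chunk.lower().split()
    window = len(q_lower.split())
    best = 0.0
    for i in range(max(1, len(c_words) - window + 1)):
        candidate = " ".join(c_words[i:i + window])
        ratio = difflib.SequenceMatcher(None, q_lower, candidate).ratio()
        best = max(best, ratio)
    return best

chunk = "the playwright server captures full page screenshots"
print(fuzzy_score("full page screenshots", chunk))      # → 1.0 (exact window)
print(fuzzy_score("ful page screenshots", chunk) > 0.8) # → True (typo still matches)
```

Because scoring happens per window rather than per chunk, a short query can score highly even inside a 500-word chunk where most of the text is unrelated.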
Overall, this gives you a RAG system that sits between:
- plain keyword search (no chunking, no fuzzy matching), and
- full vector/embedding RAG (heavier: needs extra models and infrastructure),
while staying simple and fast for personal / local knowledge bases.
You (or contributors) can later add an optional vector layer on top, without breaking the existing API.
Under data/rag/ you’ll typically see:
- `uploads/` – files you drag/drop in via the RAG tab.
- `saved_chats/` – files created by `local_rag__save_chat`, usually:
  - a full raw transcript file,
  - a summary file,
  - filenames encoded with timestamp + model name.
- `images/` – files created by `local_rag__save_image` from base64 payloads.
- `profile_template/` – template profile file: `profile_public.md`.
- `profile_*` (e.g. `profile_jeff/`) – your actual profile data: `profile_public.md` (name, prefs, projects, etc.).
- `indexes/` – optional per-index structure if you store extra metadata per index.
- `indexes.pkl` – single persisted file that tracks all indexes and cached text.
1. Create a personal profile

   - Copy the template:

     ```bash
     mkdir -p data/rag/profile_jeff
     cp data/rag/profile_template/profile_public.md data/rag/profile_jeff/
     ```

   - Edit `profile_public.md` with your personal details and preferences.
   - Optionally add a tag line near the top: `#tags: profile, persona`
   - Create an index for it via `local_rag__create_index`:

     ```json
     { "index_name": "profile_jeff", "directory": "data/rag/profile_jeff" }
     ```

2. Use your profile in new chats

   - In any client connected to this hub, ask it to call `local_rag__search_index` with:

     ```json
     { "index_name": "profile_jeff", "query": "profile persona", "max_results": 5 }
     ```

   - Then tell the model to keep that profile in mind for the rest of the session.

3. Save chats

   - When a conversation is important, call `local_rag__save_chat` with:
     - the full transcript,
     - an optional summary,
     - the model/client name.
   - The file will be dropped into `data/rag/saved_chats/` with a timestamped name.

4. Index & search project folders

   - Put a project folder under `data/rag/uploads/my_project/`.
   - Create an index:

     ```json
     { "index_name": "my_project", "directory": "data/rag/uploads/my_project" }
     ```

   - Later, search it via `local_rag__search_index` for quick recall.
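The `#tags:` line convention used in the profile workflow (and read by the `tag` search filter) can be parsed in a few lines of Python. This is an illustration of the convention, not the server’s exact parser:

```python
def extract_tags(markdown_text):
    """Collect tags from lines like '#tags: project, profile'.
    (Parsing details here are an assumption about the convention,
    not local_rag's actual implementation.)"""
    tags = []
    for line in markdown_text.splitlines():
        line = line.strip()
        if line.lower().startswith("#tags:"):
            rest = line[len("#tags:"):]
            tags.extend(t.strip() for t in rest.split(",") if t.strip())
    return tags

doc = """#tags: profile, persona
# My Profile
Some notes about preferences.
"""
print(extract_tags(doc))  # → ['profile', 'persona']
```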
All tools are exposed via the MCP hub once their servers are connected.
- `filesystem__create_directory`
- `filesystem__directory_tree`
- `filesystem__edit_file`
- `filesystem__get_file_info`
- `filesystem__list_allowed_directories`
- `filesystem__list_directory`
- `filesystem__list_directory_with_sizes`
- `filesystem__move_file`
- `filesystem__read_file`
- `filesystem__read_media_file`
- `filesystem__read_multiple_files`
- `filesystem__read_text_file`
- `filesystem__search_files`
- `filesystem__write_file`
Use these for inspecting, reading, writing, and organizing files and directories under the allowed roots (configured on the filesystem MCP server).
- `local_rag__create_index` – build a named index from a directory of text files (chunked).
- `local_rag__search_index` – search a named index with keyword + fuzzy match and optional filters.
- `local_rag__list_files` – list files inside RAG directories.
- `local_rag__list_indexes` – list all available indexes.
- `local_rag__read_file` – read a file managed by local_rag.
- `local_rag__save_chat` – save raw + summary chats into `data/rag/saved_chats/`.
- `local_rag__save_image` – save base64 images into `data/rag/images/`.
- `shell__run_command` – execute shell commands on the host machine. Powerful and potentially dangerous; intended for trusted, local setups.
- `playwright__browser_navigate`
- `playwright__browser_screenshot`
- `playwright__browser_click`
- `playwright__browser_click_text`
- `playwright__browser_fill`
- `playwright__browser_select`
- `playwright__browser_select_text`
- `playwright__browser_hover`
- `playwright__browser_hover_text`
- `playwright__browser_evaluate`
Use these for full browser automation:
- open pages,
- fill forms,
- click buttons/links,
- run JS,
- and capture screenshots.
- `sqlite__read_query`
- `sqlite__write_query`
- `sqlite__create_table`
- `sqlite__list_tables`
- `sqlite__describe_table`
- `sqlite__append_insight`
Great for storing structured logs, metrics, and notes directly from LLM runs.
- `websearch__web_search` – fast DuckDuckGo search with snippets.
- `websearch_adv__full-web-search` – multi-engine deep search with full-page extraction.
- `websearch_adv__get-single-web-page-content` – robust content extractor for a single page.
- `research__wikipedia_search`
- `research__arxiv_search`
- `research__images_search_commons`
These provide high-signal reference material for technical/academic questions and image lookups.
- `python_repl__exec`
- `python_repl__reset`
- `python_repl__pip_install`

Persistent Python process with its own venv:
- run analysis code,
- parse/transform data,
- install packages like `pandas`, `numpy`, etc., without touching system Python.
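For example, here is the kind of stdlib-only snippet you might run through `python_repl__exec`. The payload wrapping (field names, etc.) depends on that server, so only the Python code itself is shown:

```python
# A small, dependency-free analysis snippet suited to the persistent REPL:
# summarize a list of response times. The sample data is made up.
import statistics

samples = [120, 135, 99, 180, 150, 110]  # ms, example data
summary = {
    "count": len(samples),
    "mean_ms": round(statistics.mean(samples), 1),
    "max_ms": max(samples),
}
print(summary)
# → {'count': 6, 'mean_ms': 132.3, 'max_ms': 180}
```

Because the REPL process persists between calls, `samples` and `summary` remain available to later `python_repl__exec` calls until you run `python_repl__reset`.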
- `scrape__scrape_page` – fetch a URL and return cleaned text and title. Good when you want fast HTML→text without running a full browser.
- `pollinations__generateImageUrl` – generate an image URL from a text prompt (no paid API key).
- `pollinations__listImageModels` – discover which models are available.
Curated subset of the CoinGecko MCP server (SSE-based):
- `coingecko__get_simple_price`
- `coingecko__get_coins_markets`
- `coingecko__get_range_coins_market_chart`
- `coingecko__get_search`
Use these for quick price lookups, market lists, and basic charts.
- `clock__now` – current time (UTC + local) and ISO formats.
- `clock__add_delta` – add or subtract a time delta (days/hours/minutes) from now.
- Point your Gemini MCP config at http://localhost:3333/sse.
- Or use the `POST /gemini/v1/execute` endpoint as a Gemini-style adapter in your scripts.
Once connected, Gemini CLI can:
- list tools,
- call any of the 50+ tools,
- use your RAG indexes as context.
- Configure a custom MCP endpoint pointing at http://localhost:3333/sse.
- Use the Codex CLI’s MCP integration (server name of your choice) to access the same tools and RAG as the UI.
- Add a custom MCP connector in Dev Mode.
- If http://localhost:3333 is blocked by the environment:
  - use `ngrok` (or similar) to expose it: `ngrok http 3333`
  - use the generated HTTPS URL as the MCP endpoint.
- Then you can use this same tool/RAG stack inside Dev Mode chats.
- Add an MCP server pointing at the same SSE endpoint: http://localhost:3333/sse
- Claude will see the tools defined in the hub and can call them just like the UI does.
This stack is powerful. It exposes:
- full filesystem access (read/write/move/delete within allowed roots),
- system shell execution,
- browser automation,
- SQLite writes.
Recommended basics:
- Run the hub on localhost or behind a VPN.
- Only connect LLM clients you trust.
- Be careful with prompts that encourage arbitrary shell commands.
- Use the Blocked tab to disable tools you don’t want a particular client to use.
- Consider separate hub instances or separate tool registries for “safe” vs “full-power” environments.
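The Blocked-tab idea in the last two points amounts to filtering the tool list before a client ever sees it. A minimal sketch, assuming glob-style patterns; the actual `tool-blocklist.json` schema may differ:

```python
import fnmatch

def filter_blocked(tool_names, blocklist):
    """Drop tools whose names match any blocklist pattern.
    Glob-style patterns are an illustrative choice here, not
    necessarily what the hub's blocklist uses."""
    return [
        name for name in tool_names
        if not any(fnmatch.fnmatch(name, pat) for pat in blocklist)
    ]

tools = ["shell__run_command", "filesystem__read_file", "sqlite__write_query"]
safe = filter_blocked(tools, ["shell__*", "sqlite__write_query"])
print(safe)  # → ['filesystem__read_file']
```

Patterns like `shell__*` let a “safe” registry strip an entire server’s tools while leaving read-only tools visible.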
- Port in use (`EADDRINUSE`)
  - Check what’s on `:3333`:

    ```bash
    lsof -i :3333
    ```

  - Kill it:

    ```bash
    kill <pid>
    ```

  - Restart:

    ```bash
    ./start.sh
    ```

- “Test connection” fails when adding a server
  - Double-check:
    - `Command` exists on your PATH (`node`, `python3`, `npx`, etc.).
    - `Args` are correct for that MCP server.
    - `CWD` points to the right folder.
    - For SSE: the URL ends with `/sse` and the server is actually running.
  - Try again and read the returned error in the UI.

- No tools show up after adding a server
  - Click Refresh tools in the UI.
  - If still empty:
    - check `tool-registry/master.json` for typos,
    - restart the gateway (`Ctrl+C`, then `./start.sh`).

- One server keeps failing / spamming errors
  - Make sure its dependencies are installed.
  - Temporarily disable it in the registry, or block specific tools via the Blocked tab.
  - Re-add it once things are fixed.

- Hosted UIs can’t reach localhost
  - Use `ngrok` (or similar) to expose your hub: `ngrok http 3333`
  - Use the HTTPS URL as the MCP endpoint in that client.
Some directions this project can grow:

- Optional vector-based RAG layer (embeddings + vector store).
- Per-index metadata instead of a single `indexes.pkl`.
- Built-in scheduler MCP server for time-based jobs (run tools on a schedule).
- Higher-level “notes” and “projects” APIs on top of RAG + SQLite.
- Per-client profiles and presets (e.g. different defaults for different LLMs).
- More bundled servers:
  - email/calendar/contact integrations,
  - additional search/scraping utilities,
  - specialized dev tools.
Created by Jeff Bulger
- Website: https://jeffbulger.dev
- Email: admin@jeffbulger.dev
- GitHub: https://github.com/jbulger82
If you’re interested in:
- MCP tools and servers,
- LLM “full stack” workflows,
- RAG + search,
- automation / “LLM Ops”,
issues, PRs, and ideas are all welcome.
License: (MIT, Apache-2.0, etc.)