# mcp-rag-vector

## Configure MCP with Claude Desktop

  1. Generate the code and install the binary into `GOBIN`:

     ```sh
     make generate-code-design
     make install-bin
     ```

  2. Add the following to your Claude Desktop settings:

     ```json
     {
       "mcpServers": {
         "greetingmcp": {
           "command": "$GOPATH/bin/mcp-rag-vector",
           "args": ["mcp"]
         }
       }
     }
     ```

## Local usage with Ollama

  1. Install Ollama and run `ollama serve`.

  2. Start the stack:

     ```sh
     docker compose up -d --build
     ```

  3. Wait for the model to be downloaded (about 2 GB).

  4. Query the API:

     ```sh
     curl -X POST http://localhost:8000/api/chat \
       -H "Content-Type: application/json" \
       -d '{
         "model": "llama3.2:3b",
         "messages": [
           { "role": "user", "content": "Use the greet tool with my name thomas, return what it says" }
         ],
         "stream": false
       }'
     ```

## Query the MCP via cURL

  1. Start the API:

     ```sh
     docker compose up
     ```

     or

     ```sh
     make run-server
     ```

  2. Run `make http-call-mcp`.

## About

An MCP server that serves as a RAG pipeline by allowing an LLM to write to and read from a vector DB.
