- Go to the Ollama website and download Ollama for your operating system.
- Follow the instructions to install Ollama on your machine.
- For this project, pull the llama3.2:1b model (replace with the specific command for your model and Ollama version if needed):

  ```shell
  ollama pull llama3.2:1b
  ```
- Clone this project (mcp-client):

  ```shell
  git clone https://github.com/AbhijeetMajumdar/mcp-client
  ```
- Clone the supporting REST API project (custom-mcp):

  ```shell
  git clone https://github.com/AbhijeetMajumdar/custom-mcp
  ```
- Make sure both Ollama and the custom-mcp server are running.
- Start the Spring AI Model Context Server (mcp-client) as you would a standard Spring Boot application (e.g., using `mvn spring-boot:run` or via your IDE).
If you need to terminate the custom-mcp process (for example, if it's holding port 8060), you can do the following on Windows:

```shell
netstat -aon | findstr :8060
```

Find the PID (e.g., 20740), then:

```shell
taskkill /pid 20740 /f
```

Use curl to test the API endpoints:
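The netstat/taskkill steps above are Windows-specific. As a cross-platform alternative, a short Python sketch can check whether anything is already listening on port 8060 before you start the server (the `port_in_use` helper is illustrative, not part of either project):

```python
import socket

def port_in_use(port: int, host: str = "127.0.0.1") -> bool:
    """Return True if something is already listening on host:port."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(1.0)
        # connect_ex returns 0 when the connection succeeds, i.e. the port is taken
        return s.connect_ex((host, port)) == 0

if __name__ == "__main__":
    # 8060 is the port custom-mcp is assumed to use, per the steps above.
    print("port 8060 in use:", port_in_use(8060))
```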
```shell
curl --location 'http://localhost:8080/api/chat' \
  --header 'Content-Type: application/json' \
  --data '{"message" : "What is the latest AAPL price?"}'

curl --location 'http://localhost:8080/api/chat' \
  --header 'Content-Type: application/json' \
  --data '{"message" : "What is today'\''s date?"}'

curl --location 'http://localhost:8080/api/chat' \
  --header 'Content-Type: application/json' \
  --data '{"message" : "Show my Chase account 123456 holdings."}'
```

Let me know if you need the setup steps for a specific OS or have questions about configuration!
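Note that messages containing apostrophes (like "today's") are awkward to hand-escape inside a single-quoted `--data` string. A safer pattern is to let a JSON serializer build the payload for you; the sketch below is illustrative (the `build_chat_payload` helper is my own name, not part of this project):

```python
import json

def build_chat_payload(message: str) -> str:
    """Serialize a chat message, letting json.dumps handle quotes and apostrophes."""
    return json.dumps({"message": message})

if __name__ == "__main__":
    # Apostrophes are escaped correctly without any manual shell quoting.
    print(build_chat_payload("What is today's date?"))
```

You can pipe the result into curl (e.g., `--data @-`) or use it from any HTTP client, avoiding the `'\''` escape gymnastics shown above.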