
Local AI RAG Stack

Run a full local AI environment with:

  • Ollama for model runtime (e.g., LLaMA 3)
  • OpenWebUI as a chat interface for Ollama, accessible via your WSL IP address (e.g., http://<WSL-IP>:3000)
  • LangChain + Chroma as a basic RAG pipeline for question answering over local documents
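The two network-facing services listen on the ports given below (11434 for Ollama, 3000 for OpenWebUI; the helper name is my own). A small sketch to check that each is reachable once the stack is up:

```python
import socket

# Default ports from this stack's setup.
SERVICES = {"Ollama": 11434, "OpenWebUI": 3000}

def is_reachable(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(timeout)
        return s.connect_ex((host, port)) == 0

if __name__ == "__main__":
    for name, port in SERVICES.items():
        status = "up" if is_reachable("127.0.0.1", port) else "down"
        print(f"{name} (port {port}): {status}")
```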

Quickstart

Clone the repository

git clone https://github.com/wimjongman/local-ai-rag-stack.git
cd local-ai-rag-stack

Pull the LLM model

If you have not installed Ollama on your host system (for CLI usage), install it first:

curl -fsSL https://ollama.com/install.sh | sh

Then pull the model:

ollama pull llama3
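Once the model is pulled, the Ollama server answers HTTP requests on port 11434 via its `/api/generate` endpoint. A minimal sketch of calling it from Python (the `ask`/`build_payload` helper names and the prompt text are illustrative, not part of this repo):

```python
import json
import urllib.request

def build_payload(model: str, prompt: str) -> dict:
    """Request body for Ollama's /api/generate endpoint."""
    return {"model": model, "prompt": prompt, "stream": False}

def ask(prompt: str, model: str = "llama3",
        url: str = "http://localhost:11434/api/generate") -> str:
    """POST a prompt to the local Ollama server and return the response text."""
    req = urllib.request.Request(
        url,
        data=json.dumps(build_payload(model, prompt)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

if __name__ == "__main__":
    print(ask("Say hello in one word."))
```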

Start Ollama + Web UI

To start the containers:

docker-compose up -d

🪟 Access from Windows if using WSL

If you're running this inside WSL and can't reach http://localhost:3000 from Windows:

  1. Run this in WSL to find your IP:
    ip addr show eth0 | grep inet
  2. Use the resulting IP (e.g. http://172.20.5.234:3000) in your Windows browser.

Note: Ollama (port 11434) is usually directly accessible via localhost, but OpenWebUI (port 3000) may require access via the WSL IP address.
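The `ip addr` step above can be scripted; here is a small sketch that pulls the first IPv4 address out of the command's output (the function name is my own):

```python
import re
from typing import Optional

def wsl_ipv4(ip_addr_output: str) -> Optional[str]:
    """Extract the first IPv4 address from `ip addr show eth0` output."""
    m = re.search(r"inet (\d+\.\d+\.\d+\.\d+)/", ip_addr_output)
    return m.group(1) if m else None

# Typical usage:
#   import subprocess
#   wsl_ipv4(subprocess.check_output(["ip", "addr", "show", "eth0"], text=True))
```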

Set up and run the RAG script

cd rag
python3 -m venv .venv
source .venv/bin/activate
pip install --upgrade pip
pip install -r requirements.txt
python main.py
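`main.py` wires LangChain and Chroma together; stripped of those dependencies, the retrieve-then-answer idea looks roughly like the sketch below. Naive word-overlap scoring stands in for Chroma's vector search, and all names and documents are illustrative:

```python
def chunk(text: str, size: int = 40) -> list:
    """Split a document into fixed-size word chunks."""
    words = text.split()
    return [" ".join(words[i:i + size]) for i in range(0, len(words), size)]

def retrieve(question: str, chunks: list, k: int = 1) -> list:
    """Rank chunks by word overlap with the question (stand-in for vector search)."""
    q = set(question.lower().split())
    ranked = sorted(chunks, key=lambda c: len(q & set(c.lower().split())), reverse=True)
    return ranked[:k]

def build_prompt(question: str, context: list) -> str:
    """Assemble the augmented prompt that would be sent to the LLM."""
    return "Context:\n" + "\n".join(context) + f"\n\nQuestion: {question}\nAnswer:"

docs = ["Ollama serves local models over an HTTP API on port 11434.",
        "Chroma stores embeddings so similar passages can be retrieved quickly."]
chunks = [c for d in docs for c in chunk(d)]
question = "What port does Ollama use?"
print(build_prompt(question, retrieve(question, chunks)))
```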

Ask questions about your own documents locally 🚀

Reset everything

To remove the containers, volumes, and cache and start over:

./reset.sh
