A fully local RAG pipeline using LlamaIndex + Ollama + Chroma to query your Logseq notes.
- Python 3.13+
- Ollama running (https://ollama.com)
- Pull a chat and embedding model:

  ```sh
  ollama pull llama3.1
  ollama pull nomic-embed-text
  ```

  or (lighter weight):

  ```sh
  ollama pull llama3.1
  ollama pull all-minilm
  ```
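Optionally, you can verify that both pulled models respond before wiring anything up. This is a minimal hand-written check through LlamaIndex's Ollama wrappers, not part of the repo's tooling; it assumes the `llama-index-llms-ollama` and `llama-index-embeddings-ollama` integration packages are available (the `make install` step below likely provides them):

```python
# Quick smoke test: confirm Ollama and both pulled models respond.
from llama_index.llms.ollama import Ollama
from llama_index.embeddings.ollama import OllamaEmbedding

llm = Ollama(model="llama3.1", request_timeout=120.0)
print(llm.complete("Reply with one word: ready"))

embed = OllamaEmbedding(model_name="nomic-embed-text")
vec = embed.get_text_embedding("hello")
print(len(vec))  # embedding dimension (768 for nomic-embed-text)
```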
```sh
cd logseq-chat
make install
```

Edit `config.yaml` and, at a minimum, set `logseq_root` to your Logseq graph directory.
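For reference, a minimal `config.yaml` could be as small as the sketch below; only `logseq_root` is documented as required here, and the path shown is a placeholder:

```yaml
# Minimal example config — only logseq_root is required; the path is a placeholder.
logseq_root: /home/you/logseq-graph
```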
```sh
make ingest   # build the vector index from your notes
make chat     # start an interactive chat over the index
make test     # run the test suite
```

(A sketch of what ingest and chat do internally follows the example queries.)

Example queries:

- Summarize tasks tagged #home in October 2025.
- Find notes referencing [[Team Topologies]] and list my pros/cons.
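Under the hood, `make ingest` and `make chat` roughly correspond to the flow below. This is a hand-written sketch, not the repo's actual code: the Chroma path, collection name, and reader settings are assumptions, and the real pipeline adds the Markdown-aware chunking and tag metadata described in the notes below.

```python
# Minimal ingest + query sketch (assumed names/paths; not the project's real code).
import chromadb
from llama_index.core import Settings, SimpleDirectoryReader, StorageContext, VectorStoreIndex
from llama_index.embeddings.ollama import OllamaEmbedding
from llama_index.llms.ollama import Ollama
from llama_index.vector_stores.chroma import ChromaVectorStore

# Wire the local Ollama models into LlamaIndex.
Settings.llm = Ollama(model="llama3.1", request_timeout=120.0)
Settings.embed_model = OllamaEmbedding(model_name="nomic-embed-text")

# Persistent Chroma collection ("logseq" and the path are assumed names).
client = chromadb.PersistentClient(path="./chroma_db")
collection = client.get_or_create_collection("logseq")
vector_store = ChromaVectorStore(chroma_collection=collection)
storage_context = StorageContext.from_defaults(vector_store=vector_store)

# Ingest: read Markdown files from the graph directory (logseq_root in config.yaml).
docs = SimpleDirectoryReader(
    "/path/to/your-graph",   # placeholder: set to your logseq_root
    required_exts=[".md"],
    recursive=True,          # covers pages/ and journals/
).load_data()
index = VectorStoreIndex.from_documents(docs, storage_context=storage_context)

# Query: retrieve relevant chunks and let the chat model answer over them.
engine = index.as_query_engine()
print(engine.query("Summarize tasks tagged #home in October 2025."))
```

Because the Chroma collection is persistent, ingest only needs to run once; later sessions can rebuild the index from the existing store with `VectorStoreIndex.from_vector_store(vector_store)` instead of re-reading the files.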
- Skips `assets/` by default. Enable OCR later if needed.
- Uses Markdown-aware chunking; tags from `#tag` and `tags::` are stored in chunk metadata (see the sketch after this list).
- For faster machines, try bigger models; for CPU-only setups, consider `llama3.2` or `qwen2.5:7b` and smaller chunks.
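A rough illustration of the chunking note above: LlamaIndex ships a `MarkdownNodeParser` that splits on Markdown structure, and a small regex pass can copy `#tag` and `tags::` values into each node's metadata. The regexes and the `tags` metadata key are assumptions, not the project's exact implementation:

```python
# Sketch of Markdown-aware chunking with tag extraction (assumed details).
import re
from llama_index.core import Document
from llama_index.core.node_parser import MarkdownNodeParser

TAG_RE = re.compile(r"(?<!\w)#([\w/-]+)")            # inline #tag
TAGS_PROP_RE = re.compile(r"^tags::\s*(.+)$", re.M)  # Logseq tags:: property line

def extract_tags(text: str) -> list[str]:
    tags = set(TAG_RE.findall(text))
    for line in TAGS_PROP_RE.findall(text):
        # tags:: values are comma-separated and may be [[bracketed]]
        tags.update(t.strip().strip("[]") for t in line.split(","))
    return sorted(tags)

parser = MarkdownNodeParser()
doc = Document(text="tags:: project, [[home]]\n\n## Todo\n- fix gate #home")
for node in parser.get_nodes_from_documents([doc]):
    node.metadata["tags"] = extract_tags(node.get_content())  # stored with the chunk
    print(node.metadata["tags"])
```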