Local AI Question-Answering bot built with Ollama and LangChain using a RAG (Retrieval-Augmented Generation) pipeline. Answers queries from local PDF/text documents with fast, private inference.
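The RAG pipeline this description mentions can be sketched as three steps: chunk the local documents, retrieve the chunks most relevant to the query, and assemble an augmented prompt for the local model. The following is a minimal, illustrative sketch only — a real pipeline (e.g. LangChain with Ollama embeddings) would use a vector store and embedding similarity, while here simple word overlap stands in for relevance scoring; all function names are hypothetical, not taken from the repo.

```python
# Illustrative RAG retrieval sketch. Word overlap stands in for embedding
# similarity; a real pipeline would embed chunks and queries with a local model.

def chunk_text(text, size=40):
    """Split a document into fixed-size word chunks."""
    words = text.split()
    return [" ".join(words[i:i + size]) for i in range(0, len(words), size)]

def score(chunk, query):
    """Score a chunk by word overlap with the query (embedding stand-in)."""
    return len(set(chunk.lower().split()) & set(query.lower().split()))

def retrieve(chunks, query, k=2):
    """Return the top-k chunks most relevant to the query."""
    return sorted(chunks, key=lambda c: score(c, query), reverse=True)[:k]

def build_prompt(chunks, query):
    """Assemble the augmented prompt that would be sent to the local LLM."""
    context = "\n---\n".join(chunks)
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"
```

The assembled prompt would then be passed to a locally served model; keeping both retrieval and generation on-device is what makes the inference private.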
This repo contains a "Chat with LLM Locally" web application built with Django.
Generative AI Voice & Image UI Generator — A full-stack app that turns text prompts and images into responsive UI code using local AI models with Ollama, CodeLLaMA, and LLaVA. Runs 100% locally with no cloud APIs, featuring an Express backend and a simple HTML/JS/Tailwind frontend for live preview.
A simple local chat app using Flask and Ollama to run LLMs like llama3, command-r7b, and deepseek-r1. Switch models, chat in a web UI, and export history — all offline, no API keys needed.
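An app like this talks to Ollama's local HTTP API (`/api/chat` on port 11434 by default). The sketch below shows how the request body for that endpoint might be built, including the model-switching and history-export features described; the helper names are hypothetical and not taken from the repo's code.

```python
import json

# Ollama's documented default local endpoint; no API key is required.
OLLAMA_URL = "http://localhost:11434/api/chat"

def build_chat_request(model, history, user_message):
    """Build the JSON body for Ollama's /api/chat endpoint.

    Switching models is just a matter of changing the `model` field,
    e.g. "llama3", "command-r7b", or "deepseek-r1".
    """
    messages = history + [{"role": "user", "content": user_message}]
    return {"model": model, "messages": messages, "stream": False}

def export_history(history):
    """Serialize the chat history to JSON for download/export."""
    return json.dumps(history, indent=2)
```

In the real app, the built request would be POSTed to `OLLAMA_URL` and the assistant's reply appended to `history`, so the whole conversation stays offline.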