The all-in-one Desktop & Docker AI application with built-in RAG, AI agents, No-code agent builder, MCP compatibility, and more.
Chat with multiple PDFs in Zotero AI using Gemini, Grok 4, DeepSeek, GPT, ChatGPT, Claude, OpenRouter, Gemma 3, and Qwen 3
A simple "Be My Eyes" web app with a llama.cpp/llava backend
Local coding agent with a neat UI
Android app that connects to LM Studio running on your computer, allowing you to chat with your favorite AI models from your mobile device.
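LM Studio can expose a local, OpenAI-compatible HTTP server, so a minimal sketch of the kind of request such a mobile client sends might look like the following; the host address, port, and model name are illustrative assumptions, not values taken from this repository.

```python
import requests

# Assumption: LM Studio's local server is running and listening on its usual
# port (1234) with the OpenAI-compatible chat completions API enabled.
HOST = "http://192.168.1.50:1234"  # hypothetical LAN address of the computer running LM Studio

def ask(prompt: str, model: str = "local-model") -> str:
    """Send one chat turn to the LM Studio server and return the reply text."""
    resp = requests.post(
        f"{HOST}/v1/chat/completions",
        json={
            "model": model,  # LM Studio typically serves whichever model is currently loaded
            "messages": [{"role": "user", "content": prompt}],
        },
        timeout=120,
    )
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"]

if __name__ == "__main__":
    print(ask("Summarize what a local LLM server is in one sentence."))
```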
The TeamAI application lets users create a team of AI-powered assistants, each with its own capabilities and persona. The assistants solve the user's task as a team effort, each contributing according to its capabilities. Supported providers are Ollama and OpenAI.
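As a rough illustration of the "team of assistants" idea, the sketch below sends one shared task to several personas through Ollama's REST chat endpoint and prints each contribution; the persona names, model, and prompts are assumptions for illustration, not TeamAI's actual implementation.

```python
import requests

OLLAMA_URL = "http://localhost:11434/api/chat"  # Ollama's default chat endpoint

# Hypothetical personas; the real application's capabilities and personas will differ.
PERSONAS = {
    "Researcher": "You gather relevant facts and note what you are unsure about.",
    "Critic": "You point out weaknesses and missing considerations.",
    "Writer": "You produce the final, polished answer.",
}

def contribute(system_prompt: str, task: str, model: str = "llama3") -> str:
    """Ask one persona (expressed as a system prompt) to work on the shared task."""
    resp = requests.post(
        OLLAMA_URL,
        json={
            "model": model,
            "messages": [
                {"role": "system", "content": system_prompt},
                {"role": "user", "content": task},
            ],
            "stream": False,
        },
        timeout=300,
    )
    resp.raise_for_status()
    return resp.json()["message"]["content"]

task = "Draft a short plan for migrating a blog to a static site generator."
for name, system_prompt in PERSONAS.items():
    print(f"--- {name} ---")
    print(contribute(system_prompt, task))
```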
SeekDeep is a peer-to-peer desktop application that brings collaborative capabilities to your local Large Language Models. Built with Pear Runtime, Hyperswarm, and Hypercore crypto, it allows multiple users to connect and share access to a host's Ollama models in a secure peer-to-peer network.
Anyany.js is a Node.js-based generative AI framework for software testing, supporting both local (Ollama) and cloud (OpenAI) models
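The framework itself is Node.js, but the local-versus-cloud switch it describes can be illustrated through the OpenAI-compatible endpoints both backends expose. The sketch below is in Python, for consistency with the other examples here, and the model names are examples rather than anyany.js defaults.

```python
import os
from openai import OpenAI  # pip install openai

def make_client(provider: str) -> tuple[OpenAI, str]:
    """Return a client plus a default model for either a local or a cloud backend."""
    if provider == "ollama":
        # Ollama exposes an OpenAI-compatible API under /v1; the API key is ignored.
        return OpenAI(base_url="http://localhost:11434/v1", api_key="ollama"), "llama3"
    # Cloud path: the regular OpenAI API, keyed from the environment.
    return OpenAI(api_key=os.environ["OPENAI_API_KEY"]), "gpt-4o-mini"

client, model = make_client(os.environ.get("LLM_PROVIDER", "ollama"))
reply = client.chat.completions.create(
    model=model,
    messages=[{"role": "user", "content": "Generate three edge-case test inputs for an email validator."}],
)
print(reply.choices[0].message.content)
```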
An intelligent writing environment that combines structured thinking methodologies with modern AI assistance. Not a content generator, but a thinking companion for rigorous academic work.
AI-powered developer assistant that can autonomously analyze, modify, and debug code—almost like an AI pair programmer but with deeper project understanding.
DocuChat is a document chat application that allows you to have conversations with your documents, powered by a serverless vector database for scalable, efficient retrieval. Upload your files and ask questions in natural language to get answers based on their content.
A smart email assistant to manage your inbox like a boss
Privacy-focused, mobile-optimised web front-end for LM Studio.
This repository gives you simple options to interact with Ollama models using a CLI, a local GUI, or a hosted web app. NOTE: This setup is intended for testing and personal use only. Exposing your local server via ngrok without additional security measures puts your data and privacy at considerable risk.
Compare and validate QA tasks using 3 local (Ollama) or cloud (Groq API) LLMs side-by-side. Designed for QA automation, test case generation, and bug triage.
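A minimal sketch of the side-by-side idea, using only the local Ollama path (the Groq API path is omitted) and model names that are assumptions rather than this repository's configuration:

```python
import ollama  # pip install ollama; talks to a local Ollama server

# Assumed local models; replace with whatever `ollama list` reports on your machine.
MODELS = ["llama3", "mistral", "qwen2.5"]

prompt = "Triage this bug report and suggest a severity: 'Login button does nothing on Safari 17.'"

for model in MODELS:
    response = ollama.chat(model=model, messages=[{"role": "user", "content": prompt}])
    print(f"===== {model} =====")
    print(response["message"]["content"])
```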
Local RAG-powered document analysis platform with PDF QA, Ollama integration, and citation-aware search.
PaperMind AI is a local privacy-first PDF assistant that allows natural language chat with any document. Powered by FAISS, LangChain, MiniLM embeddings, and TinyLLaMA 1.1B — all running offline. Built with FastAPI backend and a clean HTML/CSS/JS frontend.
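The FAISS-plus-MiniLM retrieval step this entry mentions can be sketched roughly as below; the stand-in chunks, index type, and top-k value are assumptions for illustration, not PaperMind AI's actual pipeline.

```python
import faiss                      # pip install faiss-cpu
import numpy as np
from sentence_transformers import SentenceTransformer  # pip install sentence-transformers

# Stand-in document chunks; a real pipeline would split the uploaded PDF's text instead.
chunks = [
    "The method fine-tunes a small transformer on domain data.",
    "Evaluation uses exact match and F1 on a held-out split.",
    "Limitations include sensitivity to OCR noise in scanned PDFs.",
]

embedder = SentenceTransformer("all-MiniLM-L6-v2")
embeddings = embedder.encode(chunks, convert_to_numpy=True).astype("float32")
faiss.normalize_L2(embeddings)                  # normalize so inner product equals cosine similarity

index = faiss.IndexFlatIP(embeddings.shape[1])  # exact (non-approximate) cosine-similarity index
index.add(embeddings)

query = embedder.encode(["How is the model evaluated?"], convert_to_numpy=True).astype("float32")
faiss.normalize_L2(query)
scores, ids = index.search(query, 2)            # retrieve the two most similar chunks

for score, i in zip(scores[0], ids[0]):
    print(f"{score:.3f}  {chunks[i]}")
# The retrieved chunks would then be passed to the local LLM (here, TinyLLaMA) as answer context.
```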