The all-in-one Desktop & Docker AI application with built-in RAG, AI agents, No-code agent builder, MCP compatibility, and more.
Updated Sep 19, 2025 - JavaScript
The Ascend Institute, by StatikFinTech, LLC (ticker: SFTi, soon). Building systems and documentation that scare the orgs and govs behind paywalls and control systems. GremlinGPT is our chaotic first creation: learning, building, and evolving as it survives testing and bonds with its user. Not just a wrapper: a battle map with agency. For the People.
sddao v1: a P2P chat over TON and libp2p (with local LLM support), distributed as a binary.
Android app that connects to LM Studio running on your computer, allowing you to chat with your favorite AI models from your mobile device.
AI Research Assistant – Fully offline AI research assistant using local LLMs (Llama 3.1, Mistral, CodeLlama) with MongoDB storage, professional UI, smart fallback, PDF processing, and natural language query support. Runs locally with zero internet dependency once set up.
The TeamAI application allows users to create a team of AI-powered assistants with individual capabilities and personas. The assistants solve the user's task as a team, each bot contributing its respective capabilities. Supported providers are Ollama and OpenAI.
A kanban board built with Deno, Sortable and Shoelace Web Components. 100% HTML/JS. Rapidly becoming the front-end for local AI "Assistant" initiation, context and task tracking. Being developed with Runa, a local LLM runtime. Uses sqlite3 / DenoKV.
SeekDeep is a peer-to-peer desktop application that brings collaborative capabilities to your local Large Language Models. Built with Pear Runtime, Hyperswarm, and Hypercore crypto, it allows multiple users to connect and share access to a host's Ollama models in a secure peer-to-peer network.
Compare and validate QA tasks using 3 local (Ollama) or cloud (Groq API) LLMs side-by-side. Designed for QA automation, test case generation, and bug triage.
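A side-by-side comparison like the one described above usually fans one prompt out to several models in parallel. A minimal sketch, with the per-model call injected as a function (`askFn` is a placeholder, not an API from any of the listed projects) so the same code works against Ollama, Groq, or a test stub:

```javascript
// Send one prompt to several models concurrently and pair each
// model name with its answer for side-by-side review.
async function compareModels(models, prompt, askFn) {
  const answers = await Promise.all(
    models.map((model) => askFn(model, prompt))
  );
  return models.map((model, i) => ({ model, answer: answers[i] }));
}

// Usage sketch (assumes a running Ollama server on its default port):
// const results = await compareModels(
//   ["llama3.1", "mistral", "codellama"],
//   "Generate test cases for the login form",
//   async (model, prompt) => {
//     const res = await fetch("http://localhost:11434/api/generate", {
//       method: "POST",
//       headers: { "Content-Type": "application/json" },
//       body: JSON.stringify({ model, prompt, stream: false }),
//     });
//     return (await res.json()).response;
//   }
// );
```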
Anyany.js is a Node.js-based generative AI framework for software testing, supporting both local (Ollama) and cloud (OpenAI) models.
Live Web Access for Your Local AI — Tunable Search & Clean Content Extraction
Local RAG-powered document analysis platform with PDF QA, Ollama integration, and citation-aware search.
DocuChat is a document chat application that allows you to have conversations with your documents, powered by a serverless vector database for scalable, efficient retrieval. Upload your files and ask questions in natural language to get answers based on their content.
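Behind document-chat tools like these sits a retrieval step: rank stored chunk embeddings by similarity to the query embedding and feed the best matches to the model. A minimal sketch of that step using cosine similarity (the data shapes are illustrative, not DocuChat's actual schema):

```javascript
// Cosine similarity between two equal-length embedding vectors.
function cosine(a, b) {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

// Return the k chunks most similar to the query embedding.
// chunks: [{ text, embedding }]
function topK(queryVec, chunks, k) {
  return chunks
    .map((c) => ({ text: c.text, score: cosine(queryVec, c.embedding) }))
    .sort((x, y) => y.score - x.score)
    .slice(0, k);
}
```

In a real deployment the sort-and-slice is replaced by the vector database's own index lookup; the scoring logic is the same.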
This repository gives you simple options to interact with Ollama models using a CLI, a local GUI, or a hosted web app. NOTE: This setup is intended for testing and personal use only. Exposing your local server via ngrok without additional security measures puts your data and privacy at considerable risk.
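All three interaction options above ultimately talk to Ollama's local REST API (default endpoint `http://localhost:11434`). A hedged sketch of building a non-streaming `/api/generate` request, following Ollama's documented request format; the helper name is illustrative:

```javascript
// Build the URL and fetch options for a non-streaming Ollama
// /api/generate call against a local server.
function buildGenerateRequest(model, prompt, host = "http://localhost:11434") {
  return {
    url: `${host}/api/generate`,
    options: {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ model, prompt, stream: false }),
    },
  };
}

// Usage (requires a running Ollama server):
// const { url, options } = buildGenerateRequest("llama3.1", "Hello!");
// const res = await fetch(url, options);
// const { response } = await res.json();
```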
AI-powered developer assistant that can autonomously analyze, modify, and debug code—almost like an AI pair programmer but with deeper project understanding.
Local coding agent with neat UI
Privacy-focused, mobile-optimised web front-end for LM Studio.
A smart email assistant to manage your inbox like a boss