Complete local AI infrastructure for Apple Silicon - Ollama (LLMs) + ComfyUI (Stable Diffusion) with zero cloud dependencies
Updated Dec 14, 2025 - JavaScript
The all-in-one Desktop & Docker AI application with built-in RAG, AI agents, No-code agent builder, MCP compatibility, and more.
The Ascend Institute, by StatikFinTech, LLC (ticker: SFTi, coming soon). Building systems and documentation that scare the orgs and governments behind paywalls and control systems. GremlinGPT is our chaotic first creation: learning, building, and evolving as it survives testing and bonds with its user. Not just a wrapper: a battle map with agency. For the People.
AI-powered fact-checking Chrome extension using local Ollama LLMs with web search
sddao v1: P2P chat over TON and libp2p (with local LLM support), distributed as a single binary.
Screenshots become knowledge. A local personal knowledge-base tool integrating OCR and LLM: 📸 Capture screenshots, 🧠 analyze with local AI, and 📚 build your second brain instantly. A full-stack solution for student and researcher knowledge management.
AI-powered Power BI External Tool that provides an intelligent chat interface for analyzing semantic models. Connect to Power BI Desktop via XMLA, view metadata and sample data, execute DAX queries, and get AI-powered insights about your data model.
Interactive web dev with an OpenAI API-compatible LLM
Live Web Access for Your Local AI — Tunable Search & Clean Content Extraction
AI Research Assistant – Fully offline AI research assistant using local LLMs (Llama 3.1, Mistral, CodeLlama) with MongoDB storage, professional UI, smart fallback, PDF processing, and natural language query support. Runs locally with zero internet dependency once set up.
The TeamAI application allows users to create a team of AI-powered assistants with individual capabilities and personas. The assistants solve the task requested by the user as a team effort, each bot contributing its respective capabilities. Supported providers are Ollama and OpenAI.
A kanban board built with Deno, Sortable and Shoelace Web Components. 100% HTML/JS. Rapidly becoming the front-end for local AI "Assistant" initiation, context and task tracking. Being developed with Runa, a local LLM runtime. Uses sqlite3 / DenoKV.
SeekDeep is a peer-to-peer desktop application that brings collaborative capabilities to your local Large Language Models. Built with Pear Runtime, Hyperswarm, and Hypercore crypto, it allows multiple users to connect and share access to a host's Ollama models in a secure peer-to-peer network.
Anyany.js is a Node.js-based generative AI framework for software testing, supporting both local (Ollama) and cloud (OpenAI) models
Local RAG-powered document analysis platform with PDF QA, Ollama integration, and citation-aware search.
DocuChat is a document chat application that allows you to have conversations with your documents, powered by a serverless vector database for scalable, efficient retrieval. Upload your files and ask questions in natural language to get answers based on their content.
This repository gives you simple options to interact with Ollama models using a CLI, a local GUI, or a hosted web app. NOTE: This setup is intended for testing and personal use only. Exposing your local server via ngrok without additional security measures puts your data and privacy at considerable risk.
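Several of the projects above talk to a local Ollama server through its OpenAI-compatible API. As a rough illustration of that common pattern (a sketch, assuming Ollama is running on its default port 11434 and a model such as `llama3.1` has already been pulled; the helper names here are hypothetical):

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/v1/chat/completions"  # Ollama's OpenAI-compatible endpoint

def build_chat_payload(prompt: str, model: str = "llama3.1") -> dict:
    """Build an OpenAI-style chat completions request body."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }

def chat(prompt: str, model: str = "llama3.1") -> str:
    """POST the prompt to the local Ollama server and return the reply text."""
    data = json.dumps(build_chat_payload(prompt, model)).encode("utf-8")
    req = urllib.request.Request(
        OLLAMA_URL,
        data=data,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    # OpenAI-compatible responses carry the text in choices[0].message.content
    return body["choices"][0]["message"]["content"]
```

Because the endpoint mirrors the OpenAI chat completions shape, the same client code works against Ollama locally and a cloud provider by changing only the base URL and model name, which is how tools like Anyany.js support both backends.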