A simple "Be My Eyes" web app with a llama.cpp/llava backend
An intelligent writing environment that combines structured thinking methodologies with modern AI assistance. Not a content generator, but a thinking companion for rigorous academic work.
A smart email assistant to manage your inbox like a boss
Privacy-focused, mobile-optimised web front-end for LM Studio.
Local coding agent with neat UI
AI-powered developer assistant that can autonomously analyze, modify, and debug code, almost like an AI pair programmer but with deeper project understanding.
This repository gives you simple options to interact with Ollama models using a CLI, a local GUI, or a hosted web app. NOTE: This setup is intended for testing and personal use only. Exposing your local server via ngrok without additional security measures puts your data and privacy at considerable risk.
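For context, a minimal sketch of the kind of request such front-ends send to a local Ollama server. This is an illustration, not this repository's actual code; it assumes Ollama's default REST endpoint on localhost:11434 and a model already pulled with ollama pull:

    // Node.js 18+ (run as an ES module): query a local Ollama server over its REST API.
    const res = await fetch("http://localhost:11434/api/generate", {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({
        model: "llama3",  // hypothetical choice; any locally pulled model works
        prompt: "Summarize the local-llm topic in one sentence.",
        stream: false,    // return a single JSON object instead of a token stream
      }),
    });
    const data = await res.json();
    console.log(data.response); // the generated text

A hosted web app or GUI wraps this same call; exposing the port via ngrok makes it reachable by anyone with the URL, hence the warning above.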
DocuChat is a document chat application that allows you to have conversations with your documents, powered by a serverless vector database for scalable, efficient retrieval. Upload your files and ask questions in natural language to get answers based on their content.
Local RAG-powered document analysis platform with PDF QA, Ollama integration, and citation-aware search.
Live Web Access for Your Local AI: Tunable Search & Clean Content Extraction
Anyany.js is a Node.js-based generative AI framework for software testing, supporting both local (Ollama) and cloud (OpenAI) models
Compare and validate QA tasks using three LLMs side by side, running locally (Ollama) or in the cloud (Groq API). Designed for QA automation, test case generation, and bug triage.
SeekDeep is a peer-to-peer desktop application that brings collaborative capabilities to your local Large Language Models. Built with Pear Runtime, Hyperswarm, and Hypercore crypto, it allows multiple users to connect and share access to a host's Ollama models in a secure peer-to-peer network.
A kanban board built with Deno, Sortable, and Shoelace Web Components, in 100% HTML/JS. Rapidly becoming the front-end for initiating local AI "Assistant" sessions and tracking their context and tasks. Developed alongside Runa, a local LLM runtime. Uses sqlite3 / DenoKV.
The TeamAI application lets users create a team of AI-powered assistants, each with its own capabilities and persona. The assistants solve the user's requested task as a team effort, each bot contributing its respective capabilities. Supported providers are Ollama and OpenAI.
AI Research Assistant – Fully offline AI research assistant using local LLMs (Llama 3.1, Mistral, CodeLlama) with MongoDB storage, professional UI, smart fallback, PDF processing, and natural language query support. Runs locally with zero internet dependency once set up.
sddao v1: a P2P chat over TON and libp2p (with local LLM support), available as a binary.