A toolkit of local, privacy-focused AI applications built with Python. Includes a RAG-powered research assistant for PDFs, various chatbots, and visualization scripts.
An AI chatbot built with DeepScaleR and deployed locally using Ollama, FastAPI, and Gradio.
Video Summarizer using Local LLM (facebook/bart-large-cnn). Submit a YouTube URL and get an AI-generated summary of the video.
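The summarization step of such a tool can be sketched with the Hugging Face `transformers` pipeline and the `facebook/bart-large-cnn` model named above. The `chunk_text` helper is a hypothetical addition (not from the project) to work around BART's roughly 1024-token input window.

```python
# Sketch of the summarization step: split a long transcript into chunks
# that fit BART's limited input window, then summarize each chunk.
# chunk_text is a hypothetical helper, not part of the original project.

def chunk_text(text: str, max_words: int = 400) -> list[str]:
    """Split text into word-bounded chunks of at most max_words words."""
    words = text.split()
    return [" ".join(words[i:i + max_words])
            for i in range(0, len(words), max_words)]

def summarize(transcript: str) -> str:
    # Deferred import so the chunking helper works without transformers installed.
    from transformers import pipeline
    summarizer = pipeline("summarization", model="facebook/bart-large-cnn")
    parts = [summarizer(c, max_length=130, min_length=30, do_sample=False)[0]["summary_text"]
             for c in chunk_text(transcript)]
    return " ".join(parts)
```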
Chatbot UI powered by local LLaMA3 using Ollama + Streamlit. 100% offline, no API keys needed.
An automated system designed to replace manual Excel tracking. It securely connects to Gmail via OAuth2 and uses a Local LLM to analyze recruiter emails and update application statuses without sending data to third parties. Built with FastAPI and asyncio to handle concurrent email processing efficiently.
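The concurrent-processing pattern the description mentions can be sketched with `asyncio`: process many emails in parallel while a semaphore bounds the number of in-flight analyses. The function names and the concurrency limit are illustrative assumptions, not the project's actual code.

```python
import asyncio

# Hypothetical sketch of bounded concurrent email processing with asyncio.
# process_email stands in for "fetch + analyze with a local LLM".

async def process_email(email: str, sem: asyncio.Semaphore) -> str:
    async with sem:                 # limit concurrent LLM/API calls
        await asyncio.sleep(0)      # stand-in for real I/O work
        return f"processed:{email}"

async def process_inbox(emails: list[str], limit: int = 5) -> list[str]:
    sem = asyncio.Semaphore(limit)
    # gather runs all tasks concurrently; the semaphore caps parallelism
    return await asyncio.gather(*(process_email(e, sem) for e in emails))

results = asyncio.run(process_inbox(["a@x.com", "b@x.com"]))
```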
A Polymathic Autonomous Organization (PAO): a sovereign, self-funding AI development environment with Solana blockchain integration. Features three autonomous agents (Analyst, Artist, Engineer) in a Textual TUI, local TensorRT-LLM inference, and an economic generative loop.
PHP frontend for hosting local LLMs (run via VSCode or other basic PHP execution methods, or add it to an existing project).
Langer is a lightweight desktop tool for translating text with multiple LLMs and evaluating them using standard metrics. It provides an easy Python/Tkinter interface, JSON batch translation, plugin-based evaluators, and support for both cloud and local LLMs.
Production-ready RAG system starter kit with local LLM inference, hybrid search, and intelligent document processing - deploy AI that learns from your knowledge base in minutes.
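"Hybrid search" typically merges a lexical ranking (e.g. BM25) with a vector-similarity ranking. One common way to combine them is reciprocal rank fusion (RRF), sketched below; the starter kit's actual fusion method is not stated, so this is an illustrative assumption.

```python
from collections import defaultdict

# Sketch of reciprocal rank fusion (RRF), a common way hybrid search
# merges lexical and vector result lists into one ranking.

def rrf(rankings: list[list[str]], k: int = 60) -> list[str]:
    """Fuse ranked doc-id lists; earlier ranks contribute larger 1/(k+rank) scores."""
    scores: dict[str, float] = defaultdict(float)
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] += 1.0 / (k + rank)
    # Highest fused score first
    return sorted(scores, key=scores.get, reverse=True)
```

Documents appearing near the top of both lists dominate the fused ranking, which is why RRF is a popular default when lexical and vector scores are on incomparable scales.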
Desktop AI chat that runs 100% locally. A private desktop AI assistant powered by Ollama, featuring RAG, 30+ models, dedicated support, and specialized industry-specific SLMs for enterprise.
ScreenSafe: Privacy Protection and on-device PII detection.
🤖 The free, open-source alternative to OpenAI, Claude and others. Self-hosted and local-first. Drop-in replacement for OpenAI, running on consumer-grade hardware. No GPU required. Runs gguf, transformers, diffusers and many more model architectures. Features: text, audio, video, and image generation, voice cloning, and distributed P2P inference.
Free, offline OCR using local LLMs with Ollama. Convert images to text with vision-enabled models running entirely on your machine — no cloud, no API costs, full privacy.
Chat LLM local: CLI interface for GGUF and Transformers models with CUDA support. Run Llama, Mistral, Gemma, Phi, and Qwen locally, with automatic model detection, system-message adaptation, RAG support, and more.