The MIT-licensed core of BrainDrive: an extensible, self-hosted AI platform with React UI, FastAPI backend, and a modular plugin architecture.
Updated Dec 15, 2025 - TypeScript
Priveedly: A Django-based content reader and recommender for personal and private use.
An advanced, fully local, and GPU-accelerated RAG pipeline. Features a sophisticated LLM-based preprocessing engine, state-of-the-art Parent Document Retriever with RAG Fusion, and a modular, Hydra-configurable architecture. Built with LangChain, Ollama, and ChromaDB for 100% private, high-performance document Q&A.
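RAG Fusion, mentioned above, typically merges the ranked result lists produced by several query variants using reciprocal rank fusion. A minimal stdlib-only sketch of that scoring step (function and variable names are illustrative, not taken from the project):

```python
def reciprocal_rank_fusion(rankings, k=60):
    """Merge several ranked lists of document IDs into one list.

    rankings: list of lists, each ordered best-first.
    k: damping constant; 60 is the commonly used default.
    """
    scores = {}
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking):
            # Each list contributes 1 / (k + rank + 1) to the document's score,
            # so documents ranked highly by several variants rise to the top.
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank + 1)
    return sorted(scores, key=scores.get, reverse=True)

# Two query variants retrieve overlapping documents; "b" appears near the
# top of both lists, so it wins the fused ranking.
fused = reciprocal_rank_fusion([["a", "b", "c"], ["b", "c", "d"]])
```

Documents retrieved by only one variant still appear in the fused list, just with lower scores, which is why fusion tends to improve recall without a separate re-ranking model.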
🔒 100% Private RAG Stack with EmbeddingGemma, SQLite-vec & Ollama - Zero Cost, Offline Capable
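sqlite-vec adds vector search to SQLite at the SQL level; the underlying idea can be approximated with only the Python standard library by storing embeddings as BLOBs and scoring them in Python. A sketch under that assumption (brute-force cosine scan, not the extension's actual API; the documents and 2-dimensional vectors are toy placeholders):

```python
import math
import sqlite3
import struct

def pack(vec):
    # Serialize a list of floats to a little-endian float32 BLOB.
    return struct.pack(f"{len(vec)}f", *vec)

def unpack(blob):
    return struct.unpack(f"{len(blob) // 4}f", blob)

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE docs (id TEXT, text TEXT, embedding BLOB)")
docs = [("d1", "cats purr", [1.0, 0.0]), ("d2", "dogs bark", [0.0, 1.0])]
db.executemany("INSERT INTO docs VALUES (?, ?, ?)",
               [(i, t, pack(v)) for i, t, v in docs])

def search(query_vec, top_k=1):
    # Brute-force scan: fine for small corpora; sqlite-vec does this in SQL.
    rows = db.execute("SELECT id, text, embedding FROM docs").fetchall()
    scored = [(cosine(query_vec, unpack(e)), i, t) for i, t, e in rows]
    return sorted(scored, reverse=True)[:top_k]

best = search([0.9, 0.1])  # query vector closest to d1's embedding
```

Keeping both the text and its embedding in one SQLite file is what makes this style of stack zero-cost and fully offline: the whole index is a single local database.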
A modern, RAG-powered AI chat application that integrates with Ollama for local AI inference. Chat with various Ollama models while leveraging your own documents for context-aware, intelligent responses.
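Chat front-ends like this talk to Ollama's local HTTP API (by default `POST http://localhost:11434/api/chat`). A hedged Python sketch that only assembles the request body, injecting retrieved document chunks as context; the model name, question, and chunk text are placeholders, not from the project:

```python
import json

def build_chat_request(model, question, context_chunks):
    """Assemble an Ollama /api/chat request body that supplies
    retrieved document chunks as context (RAG-style prompting)."""
    context = "\n\n".join(context_chunks)
    return {
        "model": model,
        "stream": False,  # ask for a single response instead of a token stream
        "messages": [
            {"role": "system",
             "content": f"Answer using only this context:\n{context}"},
            {"role": "user", "content": question},
        ],
    }

body = build_chat_request("llama3", "What is the refund policy?",
                          ["Refunds are issued within 30 days of purchase."])
payload = json.dumps(body)  # POST this to http://localhost:11434/api/chat
```

Because the endpoint is plain HTTP on localhost, both the documents and the generated answers stay on the machine, which is the privacy property these projects advertise.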
Deploy a complete, self-hosted AI stack for private LLMs, agentic workflows, and content generation. One-command Docker Compose deployment on any cloud.
The Private AI Setup Dream Guide for Demos automates installation of the software needed for a local private AI setup, using AI models (LLMs and diffusion models) for use cases such as general assistance, business ideas, coding, image generation, systems administration, marketing, and planning.
SnapDoc AI processes everything on-device, ensuring your sensitive information never leaves your control. Voice and text processing run entirely on-device, making it suitable for organizational use.
Record system audio and mic on your Mac to generate diarized transcripts and meeting notes.
Ongoing research on implementing homomorphic encryption and federated learning for electric utility infrastructure defect detection, using an object detection model within a Private AI framework.
Local LLM integration for Odoo 18 - chat with AI directly in Odoo using Ollama, LM Studio, or any OpenAI-compatible API.
Internship Project at Stratigus: Cybersecurity and Privacy Challenges in the Age of Generative AI
Fast, private Android chat front-end for Ollama. Engineered with a cohesive UI to be the most reliable, confusion-free local AI experience available for mobile.
Lightweight web UI for llama.cpp with dynamic model switching, chat history & markdown support. No GPU required. Perfect for local AI development.
This project presents a streamlined interface for interacting with the Ollama API using Spring Boot and WebFlux.
Local private AI assistant powered by FastAPI, Streamlit, FAISS, and TinyLlama with document search and chat capabilities.
A lightweight Retrieval-Augmented Generation (RAG) agent powered by Groq AI and local embeddings, built to process and understand text data efficiently. It retrieves relevant context from your own files and generates accurate, natural-language responses, all while keeping your data private and running locally.
Distributed Deep Learning
Local RL-driven AI orchestrator fusing RAG, CAG, and graphs, built offline on consumer hardware.