🌐 Streamline web agent evaluations with Agent-CE, a containerized platform offering integrated frameworks and CI/CD support for efficient performance assessment.
📁 Manage your files effortlessly using natural language with this LLM-powered assistant, combining intelligence and intuitive design for seamless navigation.
🤖 Build intelligent, offline LLM agents with LangGraph and llama-cpp-python using this starter template for local, private tool-calling applications.
🤖 Enhance productivity with DeskAI, a screen assistant that streamlines tasks like image capture, text translation, and AI-driven content management.
Agentic AI orchestration system with local LLMs, RAG, autonomous tool calling, web access, and model-adaptive resource management. Features intelligent context window handling and SSE streaming.
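The entry above mentions SSE streaming of model output. A minimal sketch of how server-sent events are framed for token streaming (the event names and JSON payload shape here are illustrative assumptions, not this project's actual wire protocol):

```python
import json

def sse_event(data: dict, event: str = "") -> str:
    """Frame a payload as a server-sent event: an optional `event:` line,
    a `data:` line carrying JSON, and a blank line terminating the event."""
    lines = []
    if event:
        lines.append(f"event: {event}")
    lines.append(f"data: {json.dumps(data)}")
    return "\n".join(lines) + "\n\n"

def stream_tokens(tokens):
    """Yield one SSE event per generated token, then a done marker."""
    for tok in tokens:
        yield sse_event({"token": tok}, event="token")
    yield sse_event({"done": True}, event="done")

if __name__ == "__main__":
    for chunk in stream_tokens(["Hello", ",", " world"]):
        print(chunk, end="")
```

A streaming backend would typically return such a generator as the body of a `text/event-stream` response, so the client can render tokens as they arrive.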
🚀 Extract and vectorize your Cursor chat history, enabling efficient search through a Dockerized FastAPI API with LanceDB integration.
AetherShell is an AI-driven Linux assistant that executes natural language commands offline using a local LLM. Ideal for seamless shell interaction. 🐙💻
💻🔒 A local-first full-stack app to analyze medical PDFs with an AI model (Apollo2-2B), ensuring privacy & patient-friendly insights — no external APIs or cloud involved.
A fully offline NLP pipeline for extracting, chunking, embedding, querying, summarizing, and translating research documents using local LLMs. Inspired by the fictional mystery of Dr. X, the system supports multi-format files, local RAG-based Q&A, Arabic translation, and ROUGE-based summarization — all without cloud dependencies.
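Pipelines like the one above chunk documents before embedding them; a common approach is overlapping word windows, so context at chunk boundaries is not lost. A minimal sketch (the function name and default sizes are illustrative, not taken from the project):

```python
def chunk_words(text: str, size: int = 200, overlap: int = 50) -> list[str]:
    """Split text into windows of `size` words that overlap by `overlap`
    words, a simple chunking scheme for RAG-style embedding pipelines."""
    if overlap >= size:
        raise ValueError("overlap must be smaller than chunk size")
    words = text.split()
    step = size - overlap
    chunks = []
    for start in range(0, len(words), step):
        chunks.append(" ".join(words[start:start + size]))
        if start + size >= len(words):
            break
    return chunks
```

Each chunk would then be embedded and stored in the vector index; overlap trades some storage for better recall on queries that straddle chunk boundaries.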
A Python project that deploys a local RAG chatbot using the Ollama and vLLM APIs. It refines answers against an internal RAG knowledge base, using both embedding and rerank models to improve the accuracy of the context provided to the LLM.
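The embed-then-rerank pattern mentioned above is a two-stage retrieval: a cheap embedding-similarity pass narrows the corpus, then a more expensive rerank model orders the survivors. A minimal sketch with plain cosine similarity; here `rerank_score` stands in for a real rerank model, and all names and parameters are illustrative assumptions:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def retrieve_then_rerank(query_vec, docs, rerank_score, k_retrieve=10, k_final=3):
    """Two-stage retrieval. `docs` is a list of (text, embedding) pairs;
    stage 1 keeps the k_retrieve nearest by embedding similarity, stage 2
    reorders them with `rerank_score(text)` (a stand-in for a cross-encoder)."""
    coarse = sorted(docs, key=lambda d: cosine(query_vec, d[1]), reverse=True)
    coarse = coarse[:k_retrieve]
    reranked = sorted(coarse, key=lambda d: rerank_score(d[0]), reverse=True)
    return [text for text, _ in reranked[:k_final]]
```

The final `k_final` passages are what gets packed into the LLM prompt as context.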
Python agent connecting a local LLM to a local SearxNG instance for web search
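An agent like the one above typically queries SearxNG's JSON API (a `GET /search` with `format=json`, which the instance's settings must allow). A minimal sketch of building such a request URL; the helper name and the `engines` parameter usage are illustrative, not this project's code:

```python
from urllib.parse import urlencode, urljoin

def build_searxng_url(base_url: str, query: str, engines: str = "") -> str:
    """Build a SearxNG search URL requesting JSON results.
    Note: the target instance must enable the `json` format in its settings."""
    params = {"q": query, "format": "json"}
    if engines:
        # Optionally restrict the meta-search to specific engines.
        params["engines"] = engines
    return urljoin(base_url.rstrip("/") + "/", "search") + "?" + urlencode(params)
```

The agent would fetch this URL, parse the JSON `results` list, and feed titles and snippets back to the LLM as tool output.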
🧬 RAGIX: Local-first development assistant making LLMs behave like disciplined engineers – Unix-RAG retrieval, sandboxed execution, MCP-compatible, fully auditable
Local Deep Research achieves ~95% on SimpleQA benchmark (tested with GPT-4.1-mini). Supports local and cloud LLMs (Ollama, Google, Anthropic, ...). Searches 10+ sources - arXiv, PubMed, web, and your private documents. Everything Local & Encrypted.
Extend the Ollama API with dynamic AI tool integration from multiple MCP (Model Context Protocol) servers. Fully compatible, transparent, and developer-friendly; ideal for building powerful local LLM applications, AI agents, and custom chatbots.
A Full-Stack RAG Container Template
Multi-agent biomedical literature research system with counterfactual analysis and an extensive citation system
Local-first AI-powered document intelligence platform for investigative journalism
A local, open-source demo of 'Talk to Your Network' using Ollama LLMs and RAG on synthetic telecom logs.
🚀 SpaceX RAG Tracker - Real-time launch data + AI-powered Q&A over Starship mission transcripts using local Llama-3 + Ollama. Fully Dockerized, production-ready FastAPI backend. Ask anything about IFT-5, booster catches, or Raptor anomalies - gets answers with sources.