⚙️ Build and manage decentralized applications with JV-Archon, a powerful framework designed for streamlined development and enhanced user experience.
🚀 Build and understand a Large Language Model from scratch using PyTorch through hands-on notebooks focused on core Transformer concepts.
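For orientation, the central building block such from-scratch notebooks revolve around is scaled dot-product attention. The snippet below is a minimal, self-contained PyTorch sketch of that operation; it is illustrative only and not taken from the repository's notebooks.

```python
# Scaled dot-product attention: softmax(QK^T / sqrt(d)) V  (illustrative sketch)
import torch
import torch.nn.functional as F

def attention(q: torch.Tensor, k: torch.Tensor, v: torch.Tensor) -> torch.Tensor:
    # q, k, v: (batch, heads, seq_len, head_dim)
    scores = q @ k.transpose(-2, -1) / (q.size(-1) ** 0.5)
    return F.softmax(scores, dim=-1) @ v

q = k = v = torch.randn(1, 8, 16, 64)
out = attention(q, k, v)  # shape: (1, 8, 16, 64)
```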
🌐 Streamline web agent evaluations with Agent-CE, a containerized platform offering integrated frameworks and CI/CD support for efficient performance assessment.
📁 Manage your files effortlessly using natural language with this LLM-powered assistant, combining intelligence and intuitive design for seamless navigation.
🌌 Create a simple MCP Server using the Star Wars API to access characters, planets, and films efficiently for testing and integration purposes.
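As a rough illustration of what such a server looks like, here is a minimal sketch assuming the official MCP Python SDK's FastMCP helper and the public swapi.dev endpoints; the tool names are hypothetical and not necessarily the repo's own.

```python
# Hypothetical MCP server exposing Star Wars API (swapi.dev) lookups as tools.
import httpx
from mcp.server.fastmcp import FastMCP

SWAPI = "https://swapi.dev/api"
mcp = FastMCP("starwars")

@mcp.tool()
def search_characters(name: str) -> list[dict]:
    """Search Star Wars characters by name via SWAPI."""
    r = httpx.get(f"{SWAPI}/people/", params={"search": name}, timeout=10)
    r.raise_for_status()
    return r.json()["results"]

@mcp.tool()
def get_planet(planet_id: int) -> dict:
    """Fetch a single planet by its SWAPI id."""
    r = httpx.get(f"{SWAPI}/planets/{planet_id}/", timeout=10)
    r.raise_for_status()
    return r.json()

if __name__ == "__main__":
    mcp.run()  # stdio transport by default, convenient for local testing
```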
🤖 Build intelligent, offline LLM agents with LangGraph and llama-cpp-python using this starter template for local, private tool-calling applications.
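The core wiring of such a template is a LangGraph node that calls a llama-cpp-python model. The sketch below shows that pattern under stated assumptions (the GGUF model path is a placeholder, and tool-calling nodes are omitted for brevity); it is not the template's actual code.

```python
# Minimal LangGraph graph wrapping a local llama-cpp-python model (sketch).
from typing import TypedDict
from langgraph.graph import StateGraph, END
from llama_cpp import Llama

# Model path is an assumption; point it at any local GGUF file.
llm = Llama(model_path="models/llama-3-8b-instruct.Q4_K_M.gguf", n_ctx=4096, verbose=False)

class AgentState(TypedDict):
    question: str
    answer: str

def call_llm(state: AgentState) -> dict:
    out = llm.create_chat_completion(
        messages=[{"role": "user", "content": state["question"]}],
        max_tokens=256,
    )
    return {"answer": out["choices"][0]["message"]["content"]}

graph = StateGraph(AgentState)
graph.add_node("llm", call_llm)
graph.set_entry_point("llm")
graph.add_edge("llm", END)
app = graph.compile()

print(app.invoke({"question": "What is a GGUF file?"})["answer"])
```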
🤖 Enhance productivity with DeskAI, a screen assistant that streamlines tasks like image capture, text translation, and AI-driven content management.
Agentic AI orchestration system with local LLMs, RAG, autonomous tool calling, web access, and model-adaptive resource management. Features intelligent context window handling and SSE streaming.
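The SSE streaming mentioned above typically looks like the following hedged sketch: a FastAPI endpoint that relays incremental tokens from a local backend (Ollama is assumed here; the project's actual backend and endpoint names may differ).

```python
# Hypothetical SSE endpoint streaming local-LLM tokens via FastAPI.
import json
import ollama
from fastapi import FastAPI
from fastapi.responses import StreamingResponse

app = FastAPI()

@app.get("/chat")
def chat(prompt: str):
    def event_stream():
        # ollama.chat(stream=True) yields incremental response chunks
        for chunk in ollama.chat(
            model="llama3",
            messages=[{"role": "user", "content": prompt}],
            stream=True,
        ):
            token = chunk["message"]["content"]
            yield f"data: {json.dumps({'token': token})}\n\n"
        yield "data: [DONE]\n\n"
    return StreamingResponse(event_stream(), media_type="text/event-stream")
```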
Complete local AI infrastructure for Apple Silicon - Ollama (LLMs) + ComfyUI (Stable Diffusion) with zero cloud dependencies
🔍 Establish unique identities for AI agents using Solana blockchain and NFTs, enhancing security and accountability in the digital landscape.
🚀 Extract and vectorize your Cursor chat history, enabling efficient search through a Dockerized FastAPI API with LanceDB integration.
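A search endpoint over such vectorized history might look like the sketch below, assuming a LanceDB table already populated with embedded chat chunks and an Ollama embedding model; the storage path, table name, and route are assumptions, not the repo's API.

```python
# Hypothetical FastAPI search endpoint over a LanceDB table of embedded chat chunks.
import lancedb
import ollama
from fastapi import FastAPI

app = FastAPI()
db = lancedb.connect("./lancedb")        # storage path is an assumption
table = db.open_table("cursor_chats")    # table name is an assumption

def embed(text: str) -> list[float]:
    # Assumes a local Ollama embedding model; the repo's embedder may differ.
    return ollama.embeddings(model="nomic-embed-text", prompt=text)["embedding"]

@app.get("/search")
def search(q: str, k: int = 5):
    hits = table.search(embed(q)).limit(k).to_list()
    return [{"text": h.get("text"), "distance": h.get("_distance")} for h in hits]
```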
QuietPrompt is a local-first AI tool for coding. Capture screen text, voice, or typed prompts and run them offline with your LLM; no cloud 🐙
AetherShell is an AI-driven Linux assistant that executes natural language commands offline using a local LLM. Ideal for seamless shell interaction. 🐙💻
💻🔒 A local-first full-stack app to analyze medical PDFs with an AI model (Apollo2-2B), ensuring privacy & patient-friendly insights — no external APIs or cloud involved.
A fully offline NLP pipeline for extracting, chunking, embedding, querying, summarizing, and translating research documents using local LLMs. Inspired by the fictional mystery of Dr. X, the system supports multi-format files, local RAG-based Q&A, Arabic translation, and ROUGE-based summarization — all without cloud dependencies.
A Python project that deploys a local RAG chatbot using the Ollama and vLLM APIs. It refines answers with an internal RAG knowledge base, using both embedding and rerank models to improve the accuracy of the context provided to the LLM.
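The retrieve-then-rerank flow it describes can be sketched as follows; the specific models here (a nomic-embed-text embedder via Ollama and a sentence-transformers CrossEncoder as reranker) are assumptions and may not match the project's choices.

```python
# Sketch of retrieve-then-rerank before prompting a local LLM (models are assumptions).
import numpy as np
import ollama
from sentence_transformers import CrossEncoder

docs = [
    "Ollama serves GGUF models locally.",
    "vLLM provides high-throughput OpenAI-compatible serving.",
    "LanceDB is an embedded vector database.",
]

def embed(text: str) -> np.ndarray:
    return np.array(ollama.embeddings(model="nomic-embed-text", prompt=text)["embedding"])

doc_vecs = np.stack([embed(d) for d in docs])

def retrieve(query: str, k: int = 3) -> list[str]:
    q = embed(query)
    sims = doc_vecs @ q / (np.linalg.norm(doc_vecs, axis=1) * np.linalg.norm(q))
    return [docs[i] for i in np.argsort(sims)[::-1][:k]]

def rerank(query: str, candidates: list[str], top_n: int = 2) -> list[str]:
    ce = CrossEncoder("cross-encoder/ms-marco-MiniLM-L-6-v2")
    scores = ce.predict([(query, c) for c in candidates])
    return [candidates[i] for i in np.argsort(scores)[::-1][:top_n]]

query = "Which tool serves models with an OpenAI-compatible API?"
context = "\n".join(rerank(query, retrieve(query)))
answer = ollama.chat(
    model="llama3",
    messages=[{"role": "user", "content": f"Context:\n{context}\n\nQuestion: {query}"}],
)
print(answer["message"]["content"])
```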
Python agent connecting a local LLM to a local SearxNG instance for web search
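A minimal sketch of that search-then-answer loop, assuming a SearxNG instance at http://localhost:8080 with JSON output enabled and Ollama as the local LLM; URLs and model names are assumptions.

```python
# Sketch: query a local SearxNG instance and feed the results to a local LLM.
import httpx
import ollama

SEARXNG_URL = "http://localhost:8080/search"  # local instance; URL is an assumption

def web_search(query: str, max_results: int = 5) -> list[dict]:
    # Requires the SearxNG instance to enable the JSON format in settings.yml.
    r = httpx.get(SEARXNG_URL, params={"q": query, "format": "json"}, timeout=15)
    r.raise_for_status()
    return r.json().get("results", [])[:max_results]

def answer(question: str) -> str:
    snippets = "\n".join(
        f"- {res.get('title')}: {res.get('content', '')}" for res in web_search(question)
    )
    prompt = f"Use these search results to answer.\n{snippets}\n\nQuestion: {question}"
    resp = ollama.chat(model="llama3", messages=[{"role": "user", "content": prompt}])
    return resp["message"]["content"]

print(answer("What is SearxNG?"))
```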