EveryMatrix Stars
Zedis: A blazing-fast, native Redis GUI built with Rust and GPUI.
Scitrera builds of various CUDA containers for version consistency, starting with NVIDIA DGX Spark containers
AI-powered coding assistant for any LLM — local or cloud
llama.cpp fork with additional SOTA quants and improved performance
A high-throughput and memory-efficient inference and serving engine for LLMs
Turn any PDF or image document into structured data for your AI. A powerful, lightweight OCR toolkit that bridges the gap between images/PDFs and LLMs. Supports 100+ languages.
The definitive Web UI for local AI, with powerful features and easy setup.
Wake records your terminal sessions so Claude Code can see what you've been doing.
Open WebUI tool — Give your LLM a persistent workspace with file storage, SQLite, archives, and collaboration.
🚀 100% local RAG system with one-command setup. Your data never leaves your server. AGPL-3.0
Fractal Resolution Aware Knowledge Tree Augmented Generation
An upgraded llama.cpp GUI (https://github.com/ggml-org): a local-first, cloud-model-capable multi-agent command center with RAG, MCP tools, browser automation, voice, and multi-provider orchestra…
Fine-tuning & Reinforcement Learning for LLMs. 🦥 Train OpenAI gpt-oss, DeepSeek, Qwen, Llama, Gemma, TTS 2x faster with 70% less VRAM.
A set of scripts and notebooks on LLM fine-tuning and dataset creation
aider is AI pair programming in your terminal
📑 PageIndex: Document Index for Vectorless, Reasoning-based RAG
OpenAPI to Agent Skill for context-efficient AI agents
Collection of awesome LLM apps with AI Agents and RAG using OpenAI, Anthropic, Gemini, and open-source models.
Reliable model swapping for any local OpenAI/Anthropic-compatible server: llama.cpp, vLLM, etc.
Community-contributed instructions, prompts, and configurations to help you make the most of GitHub Copilot.
User-friendly AI Interface (Supports Ollama, OpenAI API, ...)
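Several of the entries above (vLLM, the model-swapping proxy, Open WebUI, and the llama.cpp-based tools) revolve around local servers that speak the OpenAI-compatible chat API. As a rough illustration only, a minimal Python sketch of calling such a server is shown below; the base URL, API key, and model name are placeholders, not values taken from any of the listed projects.

```python
# Minimal sketch (not from any project listed above) of querying a local
# OpenAI-compatible endpoint such as those exposed by vLLM, llama.cpp's
# server, or a model-swapping proxy in front of them.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8000/v1",  # assumed local server address; adjust per setup
    api_key="not-needed",                 # most local servers ignore the key
)

response = client.chat.completions.create(
    model="local-model",                  # placeholder; use the name the server actually serves
    messages=[{"role": "user", "content": "Summarize what RAG is in one sentence."}],
)

print(response.choices[0].message.content)
```

The same client works against any of these backends because they share the /v1 chat-completions surface; only the base URL and model name change.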