Serverless single HTML page access to an OpenAI API compatible Local LLM
Updated Sep 9, 2025 - HTML
LocalAPI.AI is a local AI management tool for Ollama, offering Web UI management and compatibility with vLLM, LM Studio, llama.cpp, Mozilla-Llamafile, Jan AI, Cortex API, Local-LLM, LiteLLM, GPT4All, and more.
Local Retrieval-Augmented Generation (RAG) system built with FastAPI, integrating vector search, Elasticsearch, and optional web search to power LLM-based intelligent question answering using models like Mistral or GPT-4.
☕ AI-powered assistant that answers Starbucks-related questions using real reviews. Built with LLaMA3:Instruct, LangChain, Ollama, and RAG; includes real-time Starbucks location lookup, runs fully offline (except for Maps), and features a cozy, coffee-themed UI.
The official landing page for Enclyra AI
Designed a secure, AI-based system for real-time fraud detection and behavior analysis. Utilized knowledge distillation, time-series RNNs, and explainability tools (LIME) to ensure efficient and transparent decision-making.
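The "serverless single HTML page" approach above boils down to the page calling an OpenAI-compatible endpoint directly from the browser with fetch. A minimal sketch, assuming Ollama's default OpenAI-compatible base URL (http://localhost:11434/v1) and a hypothetical model name "llama3" — other backends (vLLM, LM Studio, llama.cpp server) expose the same /chat/completions route at their own ports:

```javascript
// Build an OpenAI-compatible chat completions request.
// Kept as a pure function so the URL and payload are easy to inspect;
// baseUrl and model are assumptions, not fixed by any of the projects above.
function buildChatRequest(baseUrl, model, messages) {
  return {
    url: `${baseUrl}/chat/completions`,
    options: {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ model, messages, stream: false }),
    },
  };
}

// In the HTML page, the request would be sent like this:
// const { url, options } = buildChatRequest(
//   "http://localhost:11434/v1", "llama3",
//   [{ role: "user", content: "Hello" }]);
// fetch(url, options)
//   .then((r) => r.json())
//   .then((d) => console.log(d.choices[0].message.content));
```

Because the whole page is static, it can be opened from disk or any file host; the only server involved is the local LLM backend itself.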