An AI chatbot built with DeepScaleR and deployed locally using Ollama, FastAPI, and Gradio.
Updated Jun 3, 2025 - Python
Video Summarizer using Local LLM (facebook/bart-large-cnn). Submit a YouTube URL and get an AI-generated summary of the video.
Chatbot UI powered by local LLaMA3 using Ollama + Streamlit. 100% offline, no API keys needed.
An automated system designed to replace manual Excel tracking. It securely connects to Gmail via OAuth2 and uses a Local LLM to analyze recruiter emails and update application statuses without sending data to third parties. Built with FastAPI and asyncio to handle concurrent email processing efficiently.
Langer is a lightweight desktop tool for translating text with multiple LLMs and evaluating them using standard metrics. It provides an easy Python/Tkinter interface, JSON batch translation, plugin-based evaluators, and support for both cloud and local LLMs.
Production-ready RAG system starter kit with local LLM inference, hybrid search, and intelligent document processing - deploy AI that learns from your knowledge base in minutes.
Meeting Mate is a local tool for transcribing and summarizing meetings conducted in Norwegian.
Free, offline OCR using local LLMs with Ollama. Convert images to text with vision-enabled models running entirely on your machine — no cloud, no API costs, full privacy.
Local LLM chat: a CLI interface for GGUF and Transformers models with CUDA support. Run Llama, Mistral, Gemma, Phi, and Qwen locally, with automatic model detection, system-message adaptation, RAG support, and more.
This project demonstrates an agentic Retrieval-Augmented Generation (RAG) application built with CrewAI and Streamlit.
Local Autopilot-style support ticket classifier: Jinja template + JSON Schema guardrail; outputs UiPath XAML, C# and JS stubs. Works with LM Studio or Ollama.
An experimental cognitive architecture with persistent memory for stateful LLM agents.
Automates the batching and execution of prompts.
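The core of prompt batching can be sketched in a few lines: split a list of prompts into fixed-size chunks and process each chunk in turn (a minimal sketch; the prompt texts and batch size are illustrative, not from the repo):

```python
def batch(items, size):
    """Yield successive chunks of at most `size` items."""
    for i in range(0, len(items), size):
        yield items[i:i + size]

# Hypothetical prompts; in practice each batch would be sent to a local LLM
prompts = [f"Summarize document {n}" for n in range(7)]
batches = list(batch(prompts, 3))  # 3 batches of sizes 3, 3, 1
```

Chunking like this keeps memory bounded and maps naturally onto a model server's concurrent-request limit.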
Implemented vector similarity algorithms from scratch to understand their inner workings, using local embedding models.
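Cosine similarity is the standard measure such implementations start from: the dot product of two vectors divided by the product of their norms. A minimal from-scratch sketch (the embedding vectors here are hypothetical; real ones would come from a local embedding model):

```python
import math

def cosine_similarity(a, b):
    # cos(theta) = dot(a, b) / (||a|| * ||b||)
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Hypothetical 3-dimensional embeddings for illustration
query = [0.1, 0.9, 0.0]
docs = {"doc_a": [0.1, 0.8, 0.1], "doc_b": [0.9, 0.0, 0.1]}

# Rank documents by similarity to the query (most similar first)
ranked = sorted(docs, key=lambda d: cosine_similarity(query, docs[d]), reverse=True)
```

Here `doc_a` ranks first because its direction is close to the query's, regardless of vector magnitude — which is exactly why cosine similarity is preferred over raw dot products for embeddings of varying length.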
A constrained generation filter for local LLMs that makes them quote properly from a source document
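A simpler post-hoc alternative to token-level constrained decoding is to verify after generation that every quoted span appears verbatim in the source document (a minimal sketch; the function name and example strings are illustrative, not from the repo):

```python
import re

def verify_quotes(answer, source):
    """Return (quote, is_verbatim) for each double-quoted span in `answer`."""
    quotes = re.findall(r'"([^"]+)"', answer)
    return [(q, q in source) for q in quotes]

source = "The quick brown fox jumps over the lazy dog."
answer = 'The document says "quick brown fox" and also "lazy cat".'
results = verify_quotes(answer, source)
# "quick brown fox" is a genuine quote; "lazy cat" is not in the source
```

A generation-time filter goes further by masking out tokens that would break the quote mid-stream, but the verbatim-substring check above is the invariant both approaches enforce.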