Chat-driven browser automation using LibreChat UI, Ollama (llama3.2 2B), and Microsoft Playwright MCP server over HTTP streaming — zero-config with Docker Compose.
A Dockerized, full-stack mental-health chatbot with a Next.js/TypeScript frontend, FastAPI backend, and locally hosted Ollama model for empathetic, context-aware conversations.
Learn React from basic to advanced concepts with project examples.
Discord bot that monitors YouTube links and identifies original content sources.
AI-powered, offline VS Code extension to auto-add comments/docstrings to Python code.
Production-ready orchestration framework for AI applications featuring hybrid intent classification, dynamic context optimization, and sequential pipeline architecture
Desktop application built with Electron, React, TypeScript, Ollama, and other popular frontend libraries.
Upload documents and extract relevant information with Ollama running locally. Designed to simplify document analysis through intelligent conversations.
A simple Deno CLI app that demonstrates how to use embeddings.
AI chatbot built with Vercel AI SDK
Create unlimited AI chatbot agents for your website — powered by OpenAI-compatible LLMs, RAG, and MCP.
MyCMS.space is an innovative, open-source, and self-hostable CMS designed to transform any website into a dynamic, AI-powered digital assistant. It empowers you to seamlessly integrate a conversational AI widget that engages visitors 24/7, leveraging your site's comprehensive content. Effortlessly build captivating pages with an intuitive drag-a...
Self-hosted semantic memory for AI assistants via MCP. Vector search, auto-tagging, encryption at rest, OAuth 2.1 — fully local, zero cloud dependencies.
DarkStar Chat is a cinematic, privacy-focused chatbot interface that serves as a local bridge to Large Language Models (LLMs). It lets users run AI models entirely on their own machines by connecting to local providers like Ollama and LM Studio.
Full-stack prompt engineering workbench — multi-provider streaming, version diffs, A/B comparison, test cases, cost tracking