Script that performs RAG and uses a local LLM for Q&A
Updated Sep 16, 2024 - Python
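A minimal sketch of the retrieve-then-generate loop a script like this typically implements. The chunk texts, embedding vectors, and helper names below are illustrative stand-ins; in a real setup the vectors would come from an embedding endpoint such as Ollama's `/api/embeddings`, and the assembled prompt would be sent to the local model:

```python
import math

def cosine(a, b):
    # Cosine similarity between two equal-length vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def retrieve(query_vec, chunks, k=2):
    # Rank pre-embedded chunks by similarity to the query embedding.
    ranked = sorted(chunks, key=lambda c: cosine(query_vec, c["vec"]), reverse=True)
    return ranked[:k]

def build_prompt(question, contexts):
    # Assemble the augmented prompt that would be sent to the local LLM.
    context_block = "\n\n".join(c["text"] for c in contexts)
    return f"Answer using only this context:\n{context_block}\n\nQuestion: {question}"

# Stub data standing in for real embeddings (hypothetical values).
chunks = [
    {"text": "Ollama serves models locally.", "vec": [0.9, 0.1, 0.0]},
    {"text": "Paris is the capital of France.", "vec": [0.0, 0.2, 0.9]},
    {"text": "RAG grounds answers in retrieved text.", "vec": [0.8, 0.3, 0.1]},
]
top = retrieve([1.0, 0.0, 0.0], chunks, k=2)
prompt = build_prompt("What does Ollama do?", top)
```

The generation step itself is then a single request to the local model with `prompt` as input.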
All-in-one translation tool that combines machine translation, LLMs, translation memories, termbases, and more
Deploys resources to EC2, Lambda, RDS, S3, etc., needed to run an LLM application that communicates with an Ollama server deployed in AWS.
A Discord bot integrating large language models using Discord.JS, Ollama and MySQL.
Automated forensic document auditor utilizing a local RAG pipeline (Llama 3.2 & ChromaDB) to detect financial and date discrepancies with 100% data privacy. Engineered for the Digital Accelerator division to streamline contract reviews and eliminate manual auditing errors through advanced MMR retrieval logic.
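The MMR (Maximal Marginal Relevance) retrieval this description mentions can be sketched generically; this is not the project's actual code, just the standard algorithm, which greedily balances relevance to the query against redundancy with documents already selected:

```python
import math

def cosine(a, b):
    # Cosine similarity between two equal-length vectors.
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b)))

def mmr(query_vec, doc_vecs, k=2, lam=0.7):
    # Maximal Marginal Relevance: each round, pick the document maximizing
    # lam * relevance - (1 - lam) * redundancy, where redundancy is the
    # highest similarity to any already-selected document.
    selected, remaining = [], list(range(len(doc_vecs)))
    while remaining and len(selected) < k:
        def score(i):
            relevance = cosine(query_vec, doc_vecs[i])
            redundancy = max(
                (cosine(doc_vecs[i], doc_vecs[j]) for j in selected),
                default=0.0,
            )
            return lam * relevance - (1 - lam) * redundancy
        best = max(remaining, key=score)
        selected.append(best)
        remaining.remove(best)
    return selected

# Docs 0 and 1 are near-duplicates; pure relevance ranking would return
# [1, 0], while MMR picks doc 1 and then the diverse doc 2.
docs = [[1.0, 0.0], [0.9, 0.1], [0.0, 1.0]]
picked = mmr([0.8, 0.6], docs, k=2)
```

Frameworks like LangChain expose the same idea as an MMR search mode over vector stores such as ChromaDB.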
A Retrieval-Augmented Generation (RAG) system using FastAPI, Qdrant, and Ollama to provide expert knowledge about Pittsburgh.
AI-powered autonomous client for the SpaceMolt MMO game
A secure, privacy-preserving electronic voting system built with React, Node.js, PostgreSQL, and Ethereum blockchain (Sepolia testnet) integration.
Production-grade self-hosted inference gateway with OpenAI-compatible API
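The core of an OpenAI-compatible gateway is schema translation; a minimal sketch of mapping an OpenAI-style `/v1/chat/completions` request body onto Ollama's `/api/chat` body (field names follow the two public APIs, with OpenAI's `max_tokens` roughly mapping to Ollama's `num_predict` option; a real gateway also handles streaming, auth, and error mapping):

```python
def to_ollama_chat(openai_body: dict) -> dict:
    # Translate an OpenAI chat-completions request into the shape Ollama's
    # /api/chat endpoint expects: sampling parameters move under "options",
    # while the messages list passes through unchanged.
    options = {}
    if "temperature" in openai_body:
        options["temperature"] = openai_body["temperature"]
    if "max_tokens" in openai_body:
        options["num_predict"] = openai_body["max_tokens"]
    return {
        "model": openai_body["model"],
        "messages": openai_body["messages"],
        "stream": openai_body.get("stream", False),
        "options": options,
    }

request = {
    "model": "llama3.2",
    "messages": [{"role": "user", "content": "Hello"}],
    "temperature": 0.2,
    "max_tokens": 64,
}
translated = to_ollama_chat(request)
```

The reverse direction, wrapping Ollama's response back into an OpenAI-style `choices` array, follows the same pattern.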
Semantic book recommendation system
A content safety platform that uses Ollama (LLMs + embeddings) to detect and act on harmful content in real time, safeguarding content creators on social media platforms.
RAG platform with Dockerized ingestion, retrieval, vector storage, and LLM services
AI-powered freelance simulator to practice client communication with different personas before signing your first contract.