Stars
Metrics to evaluate the response quality of your Retrieval-Augmented Generation (RAG) applications.
Korean SAT leaderboard
hongsw / autorag-openplayground
Forked from nat/openplayground
An LLM playground you can run on your laptop with AutoRAG
An open-source RAG-based tool for chatting with your documents.
Chatbot for documentation that lets you chat with your data. Privately deployable, it provides AI knowledge sharing and integrates knowledge into your AI workflow.
💡 All-in-one open-source embeddings database for semantic search, LLM orchestration and language model workflows
Generate ideal question-answer pairs for testing RAG
Split Korean text into sentences using a heuristic algorithm.
RAGFlow is an open-source RAG (Retrieval-Augmented Generation) engine based on deep document understanding.
The official repository for the paper: Evaluation of Retrieval-Augmented Generation: A Survey.
ARAGOG (Advanced RAG Output Grading). Exploring and comparing various Retrieval-Augmented Generation (RAG) techniques on an AI research papers dataset. Includes modular code for easy experimentation an…
A realtime serving engine for Data-Intensive Generative AI Applications
High-performance retrieval engine for unstructured data
A curated list of Large Language Models with RAG
Retrieval and Retrieval-augmented LLMs
Database for AI. Store Vectors, Images, Texts, Videos, etc. Use with LLMs/LangChain. Store, query, version, & visualize any AI data. Stream data in real-time to PyTorch/TensorFlow. https://activelo…
Evals is a framework for evaluating LLMs and LLM systems, and an open-source registry of benchmarks.
Ingest files for retrieval augmented generation (RAG) with open-source Large Language Models (LLMs), all without 3rd parties or sensitive data leaving your network.
Companion code for FanOutQA: Multi-Hop, Multi-Document Question Answering for Large Language Models (ACL 2024)
A collection of localized (Korean) AWS AI/ML workshop materials for hands-on labs.
[EMNLP'23, ACL'24] To speed up LLM inference and enhance LLMs' perception of key information, compress the prompt and KV-Cache, achieving up to 20x compression with minimal performance loss.