
AutoRAG

RAG AutoML tool that automatically finds an optimal RAG pipeline for your data.

Explore our 📖 Documentation!

Plus, join our 📞 Discord Community.


💪 Colab Tutorial


🚨 YouTube Tutorial

AutoRAG.Tutorial.1.1.mp4

Muted by default; enable sound for the voice-over.

You can also watch it on YouTube.


☎️ FAQ

🛣️ Support plans & Roadmap

💻 Hardware Specs

Running AutoRAG

🍯 Tips/Tricks

☎️ Troubleshooting


😁 Introduction

There are many RAG pipelines and modules out there, but you don’t know which pipeline is best for “your own data” and “your own use case.” Building and evaluating every RAG module combination is very time-consuming and hard to do, but without doing it, you will never know which RAG pipeline is best for your use case.

AutoRAG is a tool for finding the optimal RAG pipeline for “your data.” You can evaluate various RAG modules automatically with your own evaluation data and find the best RAG pipeline for your own use case.

AutoRAG provides a simple way to evaluate many RAG module combinations. Try it now and find the best RAG pipeline for your own use case.

📌AutoRAG Structure

Node

❗Supporting Nodes & modules

You can check all of our supported Nodes & modules here.

❗Supporting Evaluation Metrics

Metrics

You can check all of our supported Evaluation Metrics here.

⚡ Quick Install

We recommend using Python version 3.9 or higher for AutoRAG.

pip install AutoRAG

⚡ QuickStart

1. Prepare your evaluation data

For evaluation, you need to prepare just three files.

  1. QA dataset file (qa.parquet)
  2. Corpus dataset file (corpus.parquet)
  3. Config yaml file (config.yaml)

There is a template for preparing your evaluation data for AutoRAG; a minimal sketch of building the two parquet files follows the links below.

  • Check out how to make evaluation data here.
  • Check out the evaluation data rule here.
  • Plus, you can get example datasets for testing AutoRAG here.
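
As a rough illustration, the sketch below builds tiny qa.parquet and corpus.parquet files with pandas. The column names (qid, query, retrieval_gt, and generation_gt for the QA file; doc_id, contents, and metadata for the corpus file) are assumptions based on the evaluation data rule at the time of writing, so verify them against the linked documentation before using this for a real trial.

import pandas as pd

# Corpus file: one row per passage that retrieval can return.
corpus = pd.DataFrame({
    'doc_id': ['doc-1', 'doc-2'],
    'contents': [
        'AutoRAG finds an optimal RAG pipeline for your data.',
        'It evaluates many combinations of RAG modules automatically.',
    ],
    'metadata': [{'source': 'example'}, {'source': 'example'}],  # exact metadata fields: see the data rule
})
corpus.to_parquet('corpus.parquet')

# QA file: one row per evaluation question, with ground-truth passages and answers.
qa = pd.DataFrame({
    'qid': ['q-1'],
    'query': ['What does AutoRAG do?'],
    'retrieval_gt': [[['doc-1']]],  # ground-truth doc_id groups that point into corpus.parquet
    'generation_gt': [['AutoRAG finds an optimal RAG pipeline for your data.']],
})
qa.to_parquet('qa.parquet')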

2. Evaluate your data to various RAG modules

You can get various config YAML files here. We highly recommend using a pre-made config YAML file to get started.

If you want to make your own config YAML file, check out the Config yaml file section; a rough sketch of the general shape is shown below.
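
For orientation only, here is a rough sketch of the general shape such a file can take. The node, strategy, and module names below are illustrative assumptions, not a verified configuration; start from a pre-made config and the Config yaml file section for the exact schema.

node_lines:
  - node_line_name: retrieve_node_line        # a group of nodes evaluated together
    nodes:
      - node_type: retrieval                  # which node to optimize
        strategy:
          metrics: [retrieval_f1, retrieval_recall]   # metrics used to pick the best module
        top_k: 3
        modules:                              # candidate modules to compare
          - module_type: bm25
          - module_type: vectordb
            embedding_model: openai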

You can evaluate your RAG pipeline with just a few lines of code.

from autorag.evaluator import Evaluator

evaluator = Evaluator(qa_data_path='your/path/to/qa.parquet', corpus_data_path='your/path/to/corpus.parquet')
evaluator.start_trial('your/path/to/config.yaml')

Or you can use the command-line interface.

autorag evaluate --config your/path/to/default_config.yaml --qa_data_path your/path/to/qa.parquet --corpus_data_path your/path/to/corpus.parquet

Once it is done, you will see several files and folders created in your current directory. In the trial folder named with a number (such as 0), you can check the summary.csv file, which summarizes the evaluation results and the best RAG pipeline for your data.

For more details, you can check out what the folder structure looks like here.
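
If you prefer to inspect the results programmatically, here is a minimal sketch that only assumes the trial folder (for example, 0) contains the summary.csv mentioned above; column names can vary with your config, so print them instead of hard-coding them.

import pandas as pd

# Load the trial summary; each row describes one evaluated node and the module chosen for it.
summary = pd.read_csv('0/summary.csv')   # '0' is the trial folder created in your project directory
print(summary.columns.tolist())          # see which columns this trial produced
print(summary.to_string(index=False))    # review the best module selected per node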

3. Use a found optimal RAG pipeline

You can use the found optimal RAG pipeline right away. It takes just a few lines of code, and you are ready to go!

First, build the pipeline from your evaluated trial folder. You can find the trial folder in your current directory; just look for a folder named with a number, such as '0'.

from autorag.deploy import Runner

runner = Runner.from_trial_folder('your/path/to/trial_folder')
runner.run('your question')

Or you can run this pipeline as an API server, using either Python code or a CLI command. Check out the API endpoint here.

from autorag.deploy import Runner

runner = Runner.from_trial_folder('your/path/to/trial_folder')
runner.run_api_server()

You can also run the API server with a CLI command.

autorag run_api --config_path your/path/to/pipeline.yaml --host 0.0.0.0 --port 8000
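
Once the server is running, you can send questions to it over HTTP. The sketch below uses the requests library; the /run path and the query/result_column fields are assumptions about the API shape, so confirm them against the API endpoint documentation linked above.

import requests

# Ask the locally running AutoRAG API server a question.
# The endpoint path and payload fields below are assumptions; check the API endpoint docs.
response = requests.post(
    'http://localhost:8000/run',
    json={'query': 'your question', 'result_column': 'answer'},
)
print(response.json())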

4. Run Dashboard

You can run a dashboard to see the results easily.

autorag dashboard --trial_dir /your/path/to/trial_dir
  • sample dashboard:

dashboard

5. Share your RAG pipeline

You can use your RAG pipeline from the extracted pipeline YAML file. The extracted pipeline is great for sharing your RAG pipeline with others.

You must run this in the project folder, which contains the data in the data folder and the ingested corpus for retrieval in the resources folder.

from autorag.deploy import extract_best_config

pipeline_dict = extract_best_config(trial_path='your/path/to/trial_folder', output_path='your/path/to/pipeline.yaml')

6. Deploy your optimal RAG pipeline (for testing)

6-1. Run as a CLI

You can use the found optimal RAG pipeline right away with the extracted YAML file.

from autorag.deploy import Runner

runner = Runner.from_yaml('your/path/to/pipeline.yaml')
runner.run('your question')

6-2. Run as an API server

You can run this pipeline as an API server.

Check out API endpoint at here.

from autorag.deploy import Runner

runner = Runner.from_yaml('your/path/to/pipeline.yaml')
runner.run_api_server()

You can also run the API server with a CLI command.

autorag run_api --config_path your/path/to/pipeline.yaml --host 0.0.0.0 --port 8000

6-3. Run as a Web Interface

You can run this pipeline as a web interface.

Check out the web interface here.

autorag run_web --trial_path your/path/to/trial_path
  • sample web interface:
web_interface

⭐ Star History ⭐

Star History Chart

✨ Contributors ✨

Thanks go to these wonderful people:

Contribution

We are developing AutoRAG as an open-source project, so contributions and suggestions are welcome. Feel free to contribute to this project.

Plus, check out our detailed documentation here.
