
Releases: Marker-Inc-Korea/AutoRAG

v0.3.7

24 Oct 03:44
3f98336

What's Changed

New Contributors

Full Changelog: v0.3.5...v0.3.7

v0.3.5

13 Oct 11:55
38d5bdb

What's Changed

  • Run validation at the start of start_trial by @vkehfdl1 in #826
  • AutoRAG API version, API Docker container, and GPU-version Docker container by @vkehfdl1 in #823
  • Add FlashRank Reranker module by @bwook00 in #818
  • Set a fixed port number for the panel dashboard by @vkehfdl1 in #827
  • Change stream to astream, and add a non-async stream function by @vkehfdl1 in #835
  • Add setup-python step to sphinx.yml by @vkehfdl1 in #836
  • Change the recency filter parameter name from threshold to threshold_datetime by @vkehfdl1 in #837
  • Release/v0.3.5 by @vkehfdl1 in #838
  • [Hotfix] Rename Konlpy in chunk_full.yaml by @bwook00 in #840

Full Changelog: v0.3.4...v0.3.5

v0.3.4

09 Oct 11:33
2f2e53c

What's Changed

New Contributors

Full Changelog: v0.3.3...v0.3.4

v0.3.3

05 Oct 07:52
3232c90

What's Changed

  • [Parse Bug] Fix parsing only the first page of whole PDF files by @bwook00 in #783
  • [Parse Bug] Allow pages without tables when using clova.py by @bwook00 in #784
  • Prevent an error where httpx uses a different event loop during method chaining on the QA by @vkehfdl1 in #785
  • Add deepeval metrics by @Eastsidegunn in #750
  • Release/v0.3.3 by @vkehfdl1 in #803

Full Changelog: v0.3.2...v0.3.3

v0.3.2

03 Oct 02:30
b8c8a67

What's Changed

Full Changelog: v0.3.1...v0.3.2

v0.3.1

02 Oct 05:06
f374f46

What's Changed

  • Add toctree by @bwook00 in #745
  • Fix minor errors in the documentation by @vkehfdl1 in #747
  • Set effective_order to True for BLEU by @vkehfdl1 in #748
  • Add passage dependency filter to data creation by @vkehfdl1 in #751
  • Add Passage Dependency to README.md by @bwook00 in #761
  • docs: update data_format.md by @eltociear in #772
  • Update the README and tutorial for deploying the result by @vkehfdl1 in #769
  • Partial Windows support for AutoRAG by @vkehfdl1 in #766
  • Add Dockerfile and Docker configuration for the AutoRAG production environment by @hongsw in #763
  • Add three evolving methods to QA creation by @vkehfdl1 in #767
  • Fix a possible error when the QA retrieval_gt shape is different by @vkehfdl1 in #774
  • Bump version to 0.3.1 by @vkehfdl1 in #776

New Contributors

Full Changelog: v0.3.0...v0.3.1

v0.3.0

25 Sep 17:14
12001ce

What's Changed

Full Changelog: v0.2.18...v0.3.0

🚀 AutoRAG v0.3.0 is Here! 🚀

We're thrilled to introduce AutoRAG v0.3.0, packed with new features and key improvements. Here’s what’s new:

1. Improved Response Time for Deployment

In earlier versions, response times during deployment were slow, making it difficult to use the optimized RAG pipeline in production. With v0.3.0, we've significantly reduced response times, making deployment much more practical for user-facing services.
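For context, deploying an optimized pipeline usually means loading the best trial and answering questions with it. The sketch below is only illustrative: the `Runner.from_trial_folder` call, the trial-folder path, and the sample question are assumptions based on the AutoRAG deployment guide, not something stated in these notes.

```python
# Hedged sketch: serve an optimized pipeline from a finished trial.
# Runner.from_trial_folder and the paths below are assumptions taken from
# the AutoRAG deployment guide; adjust them to your project layout.
from autorag.deploy import Runner

# Load the best pipeline found during optimization (trial "0" is a placeholder).
runner = Runner.from_trial_folder("./my_project/0")

# Answer a single question with the optimized pipeline.
answer = runner.run("What changed in AutoRAG v0.3.0?")
print(answer)
```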

2. Re-designed Data Creation Process

Data creation is an essential part of optimizing RAG pipelines, and we've made the process much smoother. In earlier versions, this feature was still in its early stages. Now, in v0.3.0, you can build the data creation process within AutoRAG.

We’ve added AutoParse and AutoChunk, allowing you to configure, parse, and chunk your data using a single YAML file. You can also easily compare different methods to refine your pipeline. Whether you build QA datasets with LLMs or manually, this structure offers a human-in-the-loop process to help you create and manage your data.

Check out the detailed guide on data creation.
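If you want a concrete starting point, the sketch below shows what a parse-then-chunk flow driven by small YAML configs might look like. The `Parser`/`Chunker` classes, the `langchain_parse` and `llama_index_chunk` module names, and all paths and parameters are assumptions drawn from the AutoRAG data creation docs, not guarantees from this release note.

```python
# Hedged sketch of the parse -> chunk flow; class names, module names,
# YAML keys, and paths are assumptions based on the AutoRAG data creation
# docs and may need adjusting for your version and project layout.
from pathlib import Path

from autorag.parser import Parser
from autorag.chunker import Chunker

# Minimal parse config: a single parsing module (assumed keys/values).
Path("parse.yaml").write_text(
    "modules:\n"
    "  - module_type: langchain_parse\n"
    "    parse_method: pdfminer\n"
)

# Minimal chunk config: a single chunking module (assumed keys/values).
Path("chunk.yaml").write_text(
    "modules:\n"
    "  - module_type: llama_index_chunk\n"
    "    chunk_method: Token\n"
    "    chunk_size: 512\n"
    "    chunk_overlap: 50\n"
)

# Parse raw documents, then chunk the parsed corpus into passages.
parser = Parser(data_path_glob="./raw_docs/*.pdf", project_dir="./parse_project")
parser.start_parsing("parse.yaml")

chunker = Chunker.from_parquet(
    parsed_data_path="./parse_project/parsed_result.parquet",  # placeholder path
    project_dir="./chunk_project",
)
chunker.start_chunking("chunk.yaml")
```

From here, you would typically build a QA dataset on top of the chunked corpus (with an LLM or by hand) before running optimization; see the data creation guide linked above for the full workflow.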

3. Python & Library Support Updates

  • Python 3.9 is no longer supported. Please upgrade to Python 3.10.
  • AutoRAG now works with LangChain 0.3, LlamaIndex 0.11, Pydantic v2, and OpenAI o1 models.

Share Your Feedback

Your insights help us improve AutoRAG! Let us know how these updates impact your workflow and what you’d like to see in future versions.
Join our Discord server now!

Thank you for being part of the AutoRAG journey!

v0.2.18

19 Sep 04:10
b3b201e

What's Changed

Full Changelog: v0.2.17...v0.2.18

v0.2.17

16 Sep 11:11
b881287

What's Changed

Full Changelog: v0.2.16...v0.2.17

v0.2.16

13 Sep 02:14
682c354

What's Changed

  • Replace FastAPI with Flask by @rjwharry in #657
  • Mock all OpenAI Embeddings in the test code for outside contributors by @vkehfdl1 in #659
  • Add a basic dataset schema for the new 'beta' version of data creation by @vkehfdl1 in #663
  • Add AutoParse baseline and the 'langchain_parse' and 'clova' modules by @bwook00 in #660
  • Add llamaparse module by @bwook00 in #666
  • Replace yaml.dump with yaml.safe_dump by @rjwharry in #669
  • Add table hybrid parse module by @bwook00 in #668
  • [Data Creation Refactoring] Add QA set generation features by @vkehfdl1 in #678
  • Add more data creation methods by @vkehfdl1 in #680
  • Add (auto)chunk and its first module, llama_index_chunk, by @bwook00 in #681
  • [Data Creation Refactoring] Add a don't-know filter to data creation and its docs by @vkehfdl1 in #686
  • [Chunk] Add "path" and "start_end_idx" to the chunk return by @bwook00 in #685
  • Add override to the Raw and Chunker from_parquet classmethods by @vkehfdl1 in #692
  • [Chunk] Add langchain chunk module by @bwook00 in #693
  • Fix a bug when using vLLM in a multi-GPU environment by @vkehfdl1 in #697
  • Add chunk method to the Raw schema and test the whole pipeline to generate an initial dataset by @vkehfdl1 in #698
  • Fix an issue with loading HuggingfaceLLM models by @jis478 in #652
  • [Bug] Require kiwipiepy version 0.18.0 or higher by @bwook00 in #704
  • Refactor existing metric Python files with an input schema by @Eastsidegunn in #667
  • Bump version to 0.2.16 by @vkehfdl1 in #705

New Contributors

Full Changelog: v0.2.15...v0.2.16