
Tags: antonioleonardo/req_llm

v1.0.0-rc.4

Release 1.0.0-rc.4

v1.0.0-rc.3

Tag 1.0.0-rc.3

v1.0.0-rc.2

Release version 1.0.0-rc.2 with significant enhancements and fixes

- Added a model metadata guide and a local patching system for model synchronization.
- Introduced a centralized `ReqLLM.Keys` module for unified API key management.
- Updated bang (`!`) functions to return bare values rather than `{:ok, value}` tuples, improving API usability.
- Enhanced the provider architecture with a new `translate_options/3` callback for model-specific parameter handling (see the sketch after this release's notes).
- Improved documentation across various files for clarity and accuracy.
- Integrated ExCoveralls for test coverage reporting and established automated dependency management with Dependabot.

This release focuses on improving flexibility, usability, and documentation within the ReqLLM library.
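As an illustration of the new provider hook, here is a minimal sketch of what a `translate_options/3` implementation might look like. The callback name and arity come from the note above; the argument order (operation, model, options), the example model name, and the `{options, warnings}` return shape are assumptions made for this sketch, not taken from the library's documentation.

```elixir
defmodule MyApp.Providers.ExampleProvider do
  # Hypothetical provider module illustrating the `translate_options/3` callback.
  # The argument order and return shape here are assumptions for illustration.

  # Assume a model that rejects :temperature; drop the option and warn the caller.
  def translate_options(:chat, %{model: "example-reasoning-model"}, opts) do
    {temperature, opts} = Keyword.pop(opts, :temperature)

    warnings =
      if temperature do
        ["temperature is not supported by example-reasoning-model and was removed"]
      else
        []
      end

    {opts, warnings}
  end

  # Pass options through untouched for every other operation/model pair.
  def translate_options(_operation, _model, opts), do: {opts, []}
end
```

In this shape, the library could invoke the provider's `translate_options/3` before building a request, keeping model-specific parameter quirks out of application code.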

v1.0.0-rc.1

Refactor request preparation and response handling for providers

- Improved the `prepare_request` function in the `ReqLLM.Providers.Anthropic` module to handle unsupported operations more gracefully (see the sketch after this list).
- Enhanced the `ReqLLM.Generation` module by reformatting its request-preparation code for better readability.
- Streamlined the `decode_object_stream` function in the `ReqLLM.Response` module to simplify decoding.
- Updated various provider implementations to ensure consistent handling of options and parameters.
- Refined tests for object generation and streaming to validate the new request handling logic and ensure robust response structures.
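The "more graceful" handling mentioned in the first bullet can be pictured with a hypothetical provider sketch like the one below. The four-argument clause shape, the `build_chat_request/3` helper, and the `{:error, {:unsupported_operation, op}}` tuple are illustrative assumptions, not the module's actual implementation.

```elixir
defmodule MyApp.Providers.ExampleAnthropic do
  # Hypothetical clauses showing how a provider might reject operations it
  # does not support by returning a tagged error instead of raising.
  # The callback arity and error shape are assumptions for illustration.

  def prepare_request(:chat, model, messages, opts) do
    # Supported operation: build and return the Req request.
    {:ok, build_chat_request(model, messages, opts)}
  end

  def prepare_request(operation, _model, _input, _opts) do
    # Unsupported operation: let the caller handle the error without a crash.
    {:error, {:unsupported_operation, operation}}
  end

  defp build_chat_request(_model, _messages, _opts) do
    # Placeholder: a real provider would configure headers, body, and auth here.
    Req.new(url: "https://api.anthropic.com/v1/messages", method: :post)
  end
end
```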