@edwko released this 19 May 08:51

OuteTTS v0.4.2

  • Fade-in / Fade-out Audio Decoding
    Introduced a quick fade-in and fade-out on decoded audio chunks to eliminate clipping artifacts at segment boundaries (see the sketch below).
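
    The following is a minimal illustrative sketch of the idea, not the library's actual implementation. It assumes decoded chunks arrive as mono float32 NumPy arrays at a 24 kHz sample rate; the function name and fade length are placeholders.

    ```python
    import numpy as np

    def apply_fades(chunk: np.ndarray, sample_rate: int = 24000, fade_ms: float = 5.0) -> np.ndarray:
        """Apply a short linear fade-in and fade-out to a decoded audio chunk.

        Tapering both edges toward zero removes the clicks that can appear when
        consecutive chunks are concatenated at segment boundaries.
        """
        n = min(int(sample_rate * fade_ms / 1000), len(chunk) // 2)
        if n <= 0:
            return chunk  # chunk too short to fade
        out = chunk.astype(np.float32, copy=True)
        ramp = np.linspace(0.0, 1.0, n, dtype=np.float32)
        out[:n] *= ramp          # fade-in
        out[-n:] *= ramp[::-1]   # fade-out
        return out
    ```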

  • Batched Decoding Interfaces
    Added support for high-throughput, batched inference via three new backends:

    • EXL2 Async: Asynchronous batch processing using the EXL2 backend.
    • VLLM: Asynchronous batch decoding with vLLM (experimental support).
    • llama.cpp Async Server Endpoint: Connects to a continuously batching llama.cpp server for asynchronous inference (a rough sketch of this pattern is included at the end of these notes).
  • Single-Stream Decoding

    • llama.cpp Server Endpoint: Single-stream decoding endpoint for the llama.cpp server.
  • OuteTTS 1.0 0.6B Model Support
    Compatibility with the new OuteTTS-1.0-0.6B model, including its config defaults.

  • Batched Interface Parameters
    New configuration options for controlling the batched interfaces.

  • Enhanced pre-prompt normalization pipeline.

  • Documentation Updates: expanded documentation of batched interface usage.
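
For illustration only (this is not OuteTTS code), below is a rough sketch of the asynchronous server-endpoint pattern that the batched llama.cpp backend builds on: many prompts are posted concurrently to a llama.cpp server running with continuous batching (for example, `llama-server -m model.gguf --parallel 4 --cont-batching`), and the server interleaves decoding across the in-flight requests. The URL, prompts, and sampling values here are placeholders.

```python
import asyncio
import aiohttp

SERVER_URL = "http://127.0.0.1:8080/completion"  # placeholder llama.cpp server address

async def complete(session: aiohttp.ClientSession, prompt: str) -> str:
    """Send a single prompt to the llama.cpp server's /completion endpoint."""
    payload = {"prompt": prompt, "n_predict": 512, "temperature": 0.4}
    async with session.post(SERVER_URL, json=payload) as resp:
        resp.raise_for_status()
        data = await resp.json()
        return data["content"]

async def complete_batch(prompts: list[str]) -> list[str]:
    """Fire all prompts concurrently; the server's continuous batching interleaves them."""
    async with aiohttp.ClientSession() as session:
        return await asyncio.gather(*(complete(session, p) for p in prompts))

if __name__ == "__main__":
    outputs = asyncio.run(complete_batch(["First text segment.", "Second text segment."]))
    print(len(outputs), "completions received")
```

The single-stream server endpoint follows the same request shape, just one request at a time instead of a concurrent batch.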