| Name | Modified | Size |
|---|---|---|
| 0.4.2 source code.tar.gz | 2025-05-19 | 595.6 kB |
| 0.4.2 source code.zip | 2025-05-19 | 738.3 kB |
| README.md | 2025-05-19 | 1.1 kB |
| Totals: 3 items | | 1.3 MB |
OuteTTS v0.4.2
--------------

- **Fade-in / Fade-out Audio Decoding**: Introduced a quick fade-in and fade-out on decoded audio chunks to eliminate clipping artifacts at segment boundaries.
- **Batched Decoding Interfaces**: Added support for high-throughput, batched inference via three new backends:
  - **EXL2 Async**: Asynchronous batch processing using EXL2.
  - **VLLM**: Asynchronous batch decoding with VLLM (experimental support).
  - **llama.cpp Async Server Endpoint**: Connects to a continuously batched llama.cpp server for async inference.
- **Single-Stream Decoding**:
  - **llama.cpp Server Endpoint**: Single-stream decode endpoint for the llama.cpp server.
- **OuteTTS 1.0 0.6B Model Support**: Compatibility with the new OuteTTS-1.0-0.6B model, including config defaults.
- **Batched Interface Parameters**: New configuration options to control the batched interfaces.
- Enhanced the pre-prompt normalization pipeline.
- **Documentation Updates**: Expanded documentation on batched interface usage.
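The fade-in/fade-out applied to decoded chunks can be sketched as a short linear gain ramp at each chunk edge. This is an illustrative pure-Python sketch, not the OuteTTS implementation; the function name `apply_edge_fades` and the `fade_samples` parameter are assumptions for demonstration.

```python
def apply_edge_fades(chunk, fade_samples=64):
    """Linearly ramp the first and last `fade_samples` samples toward zero
    at the chunk edges, suppressing clicks at segment boundaries.

    `chunk` is assumed to be a list of float PCM samples in [-1.0, 1.0].
    """
    n = len(chunk)
    fade = min(fade_samples, n // 2)  # never let the ramps overlap
    out = list(chunk)
    for i in range(fade):
        gain = i / fade            # 0.0 at the very edge, approaching 1.0
        out[i] *= gain             # fade-in at the chunk start
        out[n - 1 - i] *= gain     # fade-out at the chunk end
    return out

# Example: a constant-amplitude chunk is silenced at both edges,
# while the middle samples pass through unchanged.
faded = apply_edge_fades([1.0] * 256, fade_samples=64)
```

Because the ramp is only a few milliseconds long at typical sample rates, it is inaudible but removes the step discontinuity where consecutive decoded segments meet.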
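The async backends above share a common pattern: many decode requests in flight at once, with a cap on concurrency so the server's batch stays full without being overwhelmed. The sketch below shows that pattern with stdlib `asyncio` only; `fake_decode` is a stand-in for a real EXL2 / VLLM / llama.cpp server call, and all names here are illustrative rather than OuteTTS API.

```python
import asyncio

async def fake_decode(text: str) -> str:
    """Placeholder for an awaitable call to a batched inference backend."""
    await asyncio.sleep(0)  # yield, as a real network/model call would
    return f"audio<{text}>"

async def decode_batch(texts, max_concurrency=4):
    """Submit all requests concurrently, limited by a semaphore, and
    return results in the same order as the inputs."""
    sem = asyncio.Semaphore(max_concurrency)

    async def worker(text):
        async with sem:
            return await fake_decode(text)

    return await asyncio.gather(*(worker(t) for t in texts))

results = asyncio.run(decode_batch(["hello", "world"]))
```

With a continuously batched server (as in the llama.cpp async endpoint), overlapping requests like this lets the server pack concurrent decodes into shared batches, which is where the throughput gain over single-stream decoding comes from.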