WavTokenizer
SOTA discrete acoustic codec models with 40/75 tokens per second
...The model pairs a single-quantizer design with compression along the temporal dimension, so extreme compression rates are reached without sacrificing reconstruction fidelity. Its architecture incorporates a broader vector-quantization space, extended contextual windows, and improved attention networks, combined with multi-scale discriminators and inverse Fourier transform blocks to enhance waveform reconstruction. Extensive experiments show that WavTokenizer matches or surpasses previous neural codecs across speech, music, and general audio on both objective metrics and subjective listening tests.
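To make the two core ideas concrete, here is a minimal, hypothetical PyTorch sketch of a single-codebook vector quantizer applied to temporally downsampled features. The module names, layer sizes, and the toy strided-convolution encoder are illustrative assumptions, not the actual WavTokenizer implementation; the hop sizes simply show how 24 kHz audio maps to 75 or 40 tokens per second.

```python
# Hypothetical sketch, not WavTokenizer's real code: one codebook (single
# quantizer) over frames produced by a strided-convolution "encoder" that
# provides the temporal compression.
import torch
import torch.nn as nn


class SingleCodebookVQ(nn.Module):
    """Single quantizer: each frame is replaced by its nearest codebook vector."""

    def __init__(self, codebook_size: int = 4096, dim: int = 512):
        super().__init__()
        self.codebook = nn.Embedding(codebook_size, dim)

    def forward(self, z: torch.Tensor):
        # z: (batch, frames, dim)
        dists = torch.cdist(z, self.codebook.weight.unsqueeze(0))  # (B, T, K)
        codes = dists.argmin(dim=-1)                               # (B, T) discrete tokens
        quantized = self.codebook(codes)                           # (B, T, dim)
        # straight-through estimator so gradients reach the encoder
        quantized = z + (quantized - z).detach()
        return quantized, codes


class ToyEncoder(nn.Module):
    """Strided convolution as the temporal compressor: at 24 kHz, a hop of
    320 samples gives 75 frames (tokens) per second; a hop of 600 gives 40."""

    def __init__(self, dim: int = 512, hop: int = 320):
        super().__init__()
        self.conv = nn.Conv1d(1, dim, kernel_size=hop, stride=hop)

    def forward(self, wav: torch.Tensor):
        # wav: (batch, samples) -> (batch, frames, dim)
        return self.conv(wav.unsqueeze(1)).transpose(1, 2)


if __name__ == "__main__":
    wav = torch.randn(1, 24000)            # one second of 24 kHz audio
    z = ToyEncoder(hop=320)(wav)           # (1, 75, 512)
    quantized, codes = SingleCodebookVQ()(z)
    print(codes.shape)                     # torch.Size([1, 75]) -> 75 tokens/s
```

The sketch deliberately omits the broader VQ space, extended context windows, attention layers, multi-scale discriminators, and inverse-Fourier-transform decoder mentioned above; it only illustrates why a single codebook plus temporal downsampling yields the 40/75 tokens-per-second rates quoted in the tagline.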