WavTokenizer is a state-of-the-art discrete acoustic codec designed for audio language modeling. It compresses 24 kHz audio into just 40 or 75 tokens per second while preserving high perceptual quality, and it represents speech, music, and general audio at an extremely low bitrate, making it well suited as a front-end for large audio language models such as GPT-4o and similar architectures.

The model uses a single-quantizer design together with temporal compression to achieve extreme compression without sacrificing reconstruction fidelity. Its architecture incorporates a broader vector-quantization space, extended contextual windows, and improved attention networks, combined with multi-scale discriminators and inverse Fourier transform blocks to enhance waveform reconstruction. Extensive experiments show that WavTokenizer matches or surpasses previous neural codecs across speech, music, and general audio on both objective metrics and subjective listening tests.
## Features
- Extreme compression with 40 or 75 discrete tokens per second for 24 kHz audio
- High-fidelity reconstruction for speech, music, and general audio
- Rich semantic token representations tailored for audio language models
- Multiple pre-trained models covering speech-only and speech+music domains
- PyTorch-based API for encoding, decoding, and token manipulation
- Designed to scale to large training corpora of tens of thousands of hours
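To make the compression figures above concrete, the sketch below computes the resulting bitrate from the token rate. It assumes a single quantizer with a 4096-entry codebook (12 bits per token) and compares against raw 16-bit PCM at 24 kHz; the codebook size and PCM depth are assumptions for illustration, not taken from the model card above.

```python
import math

def codec_bitrate(tokens_per_second: int, codebook_size: int) -> float:
    """Bitrate in bits/s of a single-quantizer token stream:
    each token carries log2(codebook_size) bits."""
    return tokens_per_second * math.log2(codebook_size)

SAMPLE_RATE = 24_000   # Hz, as stated for the codec
PCM_BITS = 16          # assumed raw PCM bit depth for comparison
CODEBOOK = 4096        # assumed codebook size (2**12 entries)

raw_bps = SAMPLE_RATE * PCM_BITS  # 384,000 bits/s uncompressed
for tps in (40, 75):
    bps = codec_bitrate(tps, CODEBOOK)
    print(f"{tps} tok/s -> {bps:.0f} bps "
          f"({raw_bps / bps:.0f}x smaller than 16-bit PCM)")
```

Under these assumptions, 40 tokens/s works out to 480 bps and 75 tokens/s to 900 bps, i.e. several hundred times smaller than the raw waveform.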