segmentation-3.0 is a voice activity and speaker segmentation model from the pyannote.audio framework, designed to analyze 10-second chunks of mono audio sampled at 16 kHz. It outputs a (num_frames, num_classes) matrix using a powerset encoding whose seven classes cover non-speech, each of three individual speakers, and each pair of overlapping speakers. Trained with pyannote.audio 3.0.0 on a combination of benchmark datasets, including AISHELL, DIHARD, and VoxConverse, it enables downstream tasks such as voice activity detection (VAD), overlapped speech detection, and speaker diarization when combined with additional models. Because it does not process full recordings directly, it is applied over a sliding window inside pipelines that aggregate its per-chunk outputs into a full-recording segmentation. It is released under the MIT license, though users must accept its usage conditions to gain access. The model delivers state-of-the-art segmentation performance and is used in both academic and production-oriented pipelines.
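For reference, loading the checkpoint and running it on a single chunk looks roughly like this. This is a minimal sketch, assuming pyannote.audio >= 3.0 is installed and the model's usage conditions have been accepted on Hugging Face; the token placeholder is hypothetical:

```python
# Minimal sketch of raw chunk-level inference with segmentation-3.0.
# Assumes pyannote.audio >= 3.0; "HF_TOKEN_GOES_HERE" is a hypothetical
# placeholder for a real Hugging Face access token.
import torch
from pyannote.audio import Model

model = Model.from_pretrained(
    "pyannote/segmentation-3.0",
    use_auth_token="HF_TOKEN_GOES_HERE",
)
model.eval()

# One 10-second chunk of mono audio at 16 kHz: (batch, channel, samples)
waveform = torch.randn(1, 1, 16000 * 10)

with torch.no_grad():
    # (batch, num_frames, num_classes) scores over the 7 powerset classes
    segmentation = model(waveform)

print(segmentation.shape)
```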
Features
- Processes 10-second chunks of mono audio sampled at 16 kHz
- Outputs 7-class powerset speaker segmentation
- Trained on 10+ benchmark speech datasets
- Enables voice activity and overlapped speech detection (see the pipeline sketch after this list)
- Integrates with pyannote.audio 3.0 pipelines
- Supports sliding window and chunk-based processing
- Open-source under the MIT license
- Ideal for diarization, VAD, and speaker turn segmentation
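As an illustration of the pipeline integration above, here is a sketch of voice activity detection built on the model using the VoiceActivityDetection pipeline shipped with pyannote.audio 3.0; the token placeholder and the "audio.wav" path are hypothetical stand-ins:

```python
# Minimal VAD sketch on top of segmentation-3.0, assuming pyannote.audio >= 3.0.
from pyannote.audio import Model
from pyannote.audio.pipelines import VoiceActivityDetection

model = Model.from_pretrained(
    "pyannote/segmentation-3.0",
    use_auth_token="HF_TOKEN_GOES_HERE",  # hypothetical placeholder
)

pipeline = VoiceActivityDetection(segmentation=model)
pipeline.instantiate({
    "min_duration_on": 0.0,   # remove speech regions shorter than this (seconds)
    "min_duration_off": 0.0,  # fill non-speech gaps shorter than this (seconds)
})

# The pipeline slides the model over the full recording and aggregates
# the per-chunk powerset outputs into a single speech/non-speech timeline.
vad = pipeline("audio.wav")  # "audio.wav" is a stand-in path

for segment in vad.get_timeline().support():
    print(f"speech from {segment.start:.1f}s to {segment.end:.1f}s")
```

Swapping VoiceActivityDetection for OverlappedSpeechDetection from the same module, with the same two hyperparameters, yields overlapped speech regions instead.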