pyannote/segmentation is an audio segmentation model that detects speech activity and overlapping speech and refines speaker diarization outputs. Built on pyannote.audio, it produces fine-grained, frame-level speaker segmentation from audio input and supports several pipelines: Voice Activity Detection (VAD), Overlapped Speech Detection (OSD), and Resegmentation. It outputs either labeled time segments or raw probability scores indicating speech presence.

Based on work presented at Interspeech 2021, its pipelines have been tuned and benchmarked on real-world datasets such as AMI, DIHARD3, and VoxConverse. The model is aimed at researchers and engineers building speaker-aware audio processing systems and is used from PyTorch via pyannote.audio; the pretrained checkpoint is gated on Hugging Face and requires an access token.
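As a minimal sketch of the VAD pipeline, assuming pyannote.audio 2.x, a local `audio.wav` file, and a Hugging Face access token (the `HF_TOKEN` string below is a placeholder); the hyperparameter values are illustrative, not tuned settings:

```python
from pyannote.audio import Model
from pyannote.audio.pipelines import VoiceActivityDetection

# Load the pretrained segmentation checkpoint (requires accepting the
# model's user conditions and passing a Hugging Face access token).
model = Model.from_pretrained("pyannote/segmentation", use_auth_token="HF_TOKEN")

# Wrap the frame-level model in a voice activity detection pipeline.
pipeline = VoiceActivityDetection(segmentation=model)
pipeline.instantiate({
    "onset": 0.5,            # probability threshold to open a speech region
    "offset": 0.5,           # probability threshold to close a speech region
    "min_duration_on": 0.0,  # drop speech regions shorter than this (seconds)
    "min_duration_off": 0.0, # fill non-speech gaps shorter than this (seconds)
})

# Returns a pyannote.core.Annotation of labeled speech regions.
vad = pipeline("audio.wav")
for segment, _, label in vad.itertracks(yield_label=True):
    print(f"{segment.start:.1f}s - {segment.end:.1f}s: {label}")
```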
Features
- Detects active speech regions in audio files (see the VAD sketch above)
- Identifies segments with overlapping speakers (see the OSD sketch after this list)
- Refines existing diarization output via a resegmentation pipeline (sketched below)
- Outputs both labeled segments and raw frame-level scores (see the Inference sketch below)
- Pretrained on benchmark datasets like DIHARD3 and VoxConverse
- Compatible with pyannote.audio and Hugging Face
- Supports configurable threshold-based detection via onset/offset and minimum-duration hyperparameters (as in the VAD sketch above)
- Reproducible benchmarks for academic use
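Overlapped speech detection follows the same pattern as VAD. A minimal sketch, reusing the `model` loaded in the example above:

```python
from pyannote.audio.pipelines import OverlappedSpeechDetection

# Same structure as VAD: threshold the model's overlap activation.
pipeline = OverlappedSpeechDetection(segmentation=model)
pipeline.instantiate({
    "onset": 0.5,
    "offset": 0.5,
    "min_duration_on": 0.0,
    "min_duration_off": 0.0,
})
osd = pipeline("audio.wav")  # Annotation of regions with two or more active speakers
```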
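Resegmentation refines an existing diarization hypothesis rather than producing one from scratch. A sketch, assuming `baseline` is a pyannote.core.Annotation produced by some other diarization system:

```python
from pyannote.audio.pipelines import Resegmentation

# `diarization` names the key under which the baseline is passed below.
pipeline = Resegmentation(segmentation=model, diarization="baseline")
pipeline.instantiate({
    "onset": 0.5,
    "offset": 0.5,
    "min_duration_on": 0.0,
    "min_duration_off": 0.0,
})

# `baseline` is the existing diarization Annotation to be refined.
resegmented = pipeline({"audio": "audio.wav", "baseline": baseline})
```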
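To obtain raw frame-level probability scores instead of labeled segments, the model can be applied directly through pyannote.audio's Inference helper. A minimal sketch:

```python
from pyannote.audio import Inference

inference = Inference(model)

# `scores` is a pyannote.core.SlidingWindowFeature holding the model's
# raw speaker activation scores over sliding chunks of the audio file.
scores = inference("audio.wav")
```

These raw scores are what the threshold-based pipelines above binarize into labeled segments.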