pyannote/segmentation is an audio segmentation model for detecting speech activity and overlapping speech and for refining speaker diarization outputs. Built with pyannote.audio, it produces fine-grained, frame-level speaker segmentation from audio input. The model powers several pipelines: Voice Activity Detection (VAD), Overlapped Speech Detection (OSD), and Resegmentation. It outputs either labeled time segments or raw per-frame probability scores indicating speech presence. Based on work presented at Interspeech 2021, the model has been tuned and benchmarked on real-world datasets such as AMI, DIHARD3, and VoxConverse. It is aimed at researchers and engineers building speaker-aware audio processing systems and can be loaded through PyTorch via pyannote.audio using a Hugging Face access token.
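As a sketch of how the model plugs into one of these pipelines (assuming the pyannote.audio 2.x API; the `"pyannote/segmentation"` checkpoint name is real, while the `build_vad_pipeline` helper and the `"HF_TOKEN"` placeholder are illustrative):

```python
def build_vad_pipeline(auth_token: str):
    """Wrap the pretrained segmentation checkpoint in a VAD pipeline.

    Sketch based on the pyannote.audio 2.x API; requires a valid
    Hugging Face access token with permission for pyannote/segmentation.
    """
    from pyannote.audio import Model
    from pyannote.audio.pipelines import VoiceActivityDetection

    model = Model.from_pretrained("pyannote/segmentation",
                                  use_auth_token=auth_token)
    pipeline = VoiceActivityDetection(segmentation=model)
    # Threshold-based detection: onset/offset binarize frame scores;
    # the min_duration_* knobs clean up very short regions/gaps.
    pipeline.instantiate({"onset": 0.5, "offset": 0.5,
                          "min_duration_on": 0.0, "min_duration_off": 0.0})
    return pipeline
```

Calling `build_vad_pipeline("HF_TOKEN")("audio.wav")` would return a `pyannote.core.Annotation` whose segments mark detected speech regions; the Overlapped Speech Detection and Resegmentation pipelines are wired up analogously.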

Features

  • Detects active speech regions in audio files
  • Identifies segments with overlapping speakers
  • Refines diarization using resegmentation pipeline
  • Outputs both labeled segments and raw scores
  • Pretrained on benchmark datasets like DIHARD3 and VoxConverse
  • Compatible with pyannote.audio and Hugging Face
  • Supports configurable threshold-based detection
  • Reproducible benchmarks for academic use
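The threshold-based detection mentioned above can be illustrated with a small, self-contained sketch (not pyannote's actual implementation): frame-level speech scores are binarized with hysteresis, opening a segment when the score rises to an onset threshold and closing it when the score falls below an offset threshold.

```python
def binarize(scores, onset=0.5, offset=0.5, frame_dur=0.016):
    """Turn per-frame speech scores into (start, end) segments in seconds.

    Hysteresis thresholding: a segment opens when a score reaches `onset`
    and closes when a score drops below `offset`. `frame_dur` is the
    duration of one frame (the 16 ms default is illustrative).
    """
    segments, start, active = [], 0.0, False
    for i, score in enumerate(scores):
        t = i * frame_dur
        if not active and score >= onset:
            active, start = True, t          # open a speech segment
        elif active and score < offset:
            segments.append((start, t))      # close the segment
            active = False
    if active:                               # segment still open at the end
        segments.append((start, len(scores) * frame_dur))
    return segments

# With one-second frames for readability:
# binarize([0.1, 0.9, 0.8, 0.2, 0.1, 0.7, 0.6, 0.1], frame_dur=1.0)
# → [(1.0, 3.0), (5.0, 7.0)]
```

Using distinct onset and offset values (e.g. a high onset with a lower offset) makes detection less sensitive to brief score dips inside a speech region.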

Categories

AI Models

Additional Project Details

Registered

2025-07-01