pyannote.ai invites you to their event

Building Voice AI with Open-Source Diarization: From audio chaos to structured insights

About this event

Voice AI starts with knowing who’s speaking when.

Join us for a live, technical deep dive into how diarization transforms messy, real-world audio into structured, searchable, and actionable content.

In this session:

  • Discover what diarization is and why it’s essential for Voice AI pipelines.
  • Explore real-world use cases: meeting assistants, call analytics, transcription enrichment, and multilingual audio.
  • See how pyannote’s Community-1 model handles overlapping speech, background noise, and multi-speaker audio with precision.
  • Learn how to integrate diarization into your transcription or Voice AI workflow using GitHub or Hugging Face (see the sketch after this list).
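
To give a feel for the open-source workflow, here is a minimal sketch using the pyannote.audio library. The Pipeline API and the itertracks iteration follow pyannote.audio's documented usage; the Community-1 checkpoint id, the audio filename, and the HF_TOKEN environment variable are assumptions, to be confirmed during the session.

    # Minimal sketch: run open-source speaker diarization with pyannote.audio.
    # Assumptions: the Community-1 checkpoint id and a Hugging Face access
    # token stored in the HF_TOKEN environment variable.
    import os
    from pyannote.audio import Pipeline

    pipeline = Pipeline.from_pretrained(
        "pyannote/speaker-diarization-community-1",  # assumed checkpoint id
        use_auth_token=os.environ["HF_TOKEN"],       # gated checkpoints need a token
    )

    diarization = pipeline("meeting.wav")  # assumed local audio file

    # Print "who spoke when": start/end times plus an anonymous speaker label.
    for turn, _, speaker in diarization.itertracks(yield_label=True):
        print(f"{turn.start:.1f}s - {turn.end:.1f}s: {speaker}")

The output is a timeline of speaker turns that can then be merged with a transcription step to attribute each word to a speaker.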

Featured experts:

  • Hervé Bredin, founder of pyannoteAI and the specialist behind the world’s most downloaded diarization model (1B+ downloads).
  • VB (Vaibhav Srivastav), Developer Experience at Hugging Face, discussing Voice AI models and open-source innovation.

When? 📅 November 6, 2025 | 5 PM CET

Don’t miss this chance to see diarization in action and learn how to make your Voice AI smarter.

Hosted by

  • Vincent Molina, CEO @ pyannoteAI (team member)
  • Vaibhav Srivastav, Developer Experience and Community @ Hugging Face (guest speaker)
  • Hervé Bredin, CSO @ pyannoteAI (team member)

pyannote.ai

Identify who speaks when with pyannote.ai

Simply detect, segment, label, and separate speakers in any language.
When your audio sources are complex, contaminated by background noise, or affected by external factors, and your solution’s performance depends on them, pyannote delivers accuracy and precision.