In this article, the authors aim to improve speaker diarization, the task of determining who is speaking when in a conversation. They propose an end-to-end approach in which a single deep neural network learns the entire mapping from audio input to per-speaker activity labels. This differs from traditional pipelines, which rely on handcrafted features and separate modules for each stage, such as voice activity detection, speaker embedding extraction, and clustering.
The proposed model consists of two main components: a Frame Encoder and an Attention Module. The Frame Encoder converts each frame of speech into a vector representation, while the Attention Module computes attention weights that let the model focus on the frames most relevant to each speaker. The mechanism is loosely analogous to a face-to-face conversation, where a listener concentrates on whoever is currently speaking rather than attending to everyone equally.
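To make the architecture concrete, here is a minimal sketch in PyTorch of these two components: a linear Frame Encoder followed by self-attention layers and a per-speaker activity head. The class name, layer sizes, and the 345-dimensional input are illustrative assumptions, not the authors' exact configuration.

```python
# Minimal sketch of a frame encoder + attention diarization model.
# All hyperparameters below are assumed for illustration.
import torch
import torch.nn as nn

class AttentionDiarizer(nn.Module):
    def __init__(self, n_features=345, d_model=256, n_heads=4,
                 n_layers=4, max_speakers=2):
        super().__init__()
        # Frame Encoder: projects each acoustic frame to a vector.
        self.frame_encoder = nn.Linear(n_features, d_model)
        # Attention Module: self-attention lets every frame weigh
        # information from all other frames in the conversation.
        layer = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=n_heads, batch_first=True)
        self.attention = nn.TransformerEncoder(layer, num_layers=n_layers)
        # Per-frame, per-speaker activity scores.
        self.head = nn.Linear(d_model, max_speakers)

    def forward(self, frames):          # frames: (batch, time, n_features)
        h = self.frame_encoder(frames)  # (batch, time, d_model)
        h = self.attention(h)           # contextualized frame vectors
        return torch.sigmoid(self.head(h))  # speaker activity in [0, 1]

model = AttentionDiarizer()
probs = model(torch.randn(1, 500, 345))  # 500 frames -> (1, 500, 2)
```

The sigmoid output allows several speakers to be active in the same frame, which is what lets an attention-based model handle overlapping speech.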
The model is trained on a large dataset of conversations from various sources, including telephone calls and audiobooks. To train 16 kHz models, the authors also generated simulated conversations (SC) with varying numbers of speakers per recording. A voice activity detection (VAD) algorithm produced the reference annotations, and the same background noises were reused, resampled to 16 kHz.
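As a rough illustration of how such a simulated conversation might be assembled, the sketch below mixes single-speaker utterances and background noise at 16 kHz and derives frame-level activity labels from the utterance placement. The pause distribution, noise scaling, and 10 ms label resolution are assumptions, not the authors' exact recipe.

```python
# Hedged sketch of generating one simulated conversation at 16 kHz.
import numpy as np

SR = 16_000                 # target sample rate
FRAME = int(0.010 * SR)     # 10 ms label resolution (assumed)

def simulate_conversation(utterances, noise, n_speakers, length_s=60.0):
    """utterances: dict speaker_id -> list of 1-D float arrays at 16 kHz.
    noise: 1-D float array at 16 kHz, assumed at least length_s long."""
    n = int(length_s * SR)
    mix = noise[:n].copy() * 0.1                  # scaled background noise
    labels = np.zeros((n // FRAME, n_speakers), dtype=np.int8)
    rng = np.random.default_rng(0)
    for s, spk in enumerate(list(utterances)[:n_speakers]):
        t = 0
        while t < n:                              # fill this speaker's timeline
            utts = utterances[spk]
            utt = utts[rng.integers(len(utts))]
            end = min(t + len(utt), n)
            mix[t:end] += utt[: end - t]          # speakers may overlap,
            labels[t // FRAME : end // FRAME, s] = 1  # each timeline is independent
            t = end + rng.integers(0, 2 * SR)     # random pause up to 2 s
    return mix, labels
```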
The authors evaluate their model using several metrics, most importantly the diarization error rate (DER). Their results show that the end-to-end approach outperforms traditional pipeline methods, achieving lower DER. They also demonstrate that the model handles conversations with multiple speakers and complex background noise.
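For reference, a frame-level DER can be computed from the standard decomposition into missed speech, false alarm, and speaker confusion, as in this sketch. It assumes the reference and hypothesis speaker columns have already been optimally mapped to each other; the authors' actual scoring tool may differ in detail.

```python
# Frame-level DER from binary speaker-activity matrices.
import numpy as np

def der(ref, hyp):
    """ref, hyp: binary arrays of shape (frames, speakers),
    speaker columns assumed already aligned between the two."""
    n_ref = ref.sum(axis=1)            # speakers active per frame (truth)
    n_hyp = hyp.sum(axis=1)            # speakers active per frame (system)
    n_correct = np.minimum(ref, hyp).sum(axis=1)  # correctly attributed
    missed = np.maximum(n_ref - n_hyp, 0).sum()
    false_alarm = np.maximum(n_hyp - n_ref, 0).sum()
    confusion = (np.minimum(n_ref, n_hyp) - n_correct).sum()
    return (missed + false_alarm + confusion) / max(n_ref.sum(), 1)
```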
In summary, this article presents an end-to-end deep neural network approach to speaker diarization. The proposed model learns the entire mapping from audio to speaker activity labels without relying on handcrafted features or separate pipeline modules. Evaluated on several metrics, it outperforms traditional methods, particularly on recordings with multiple speakers and complex background noise.