Robust speaker recognition using unsupervised adversarial invariance
- Submitted by: Raghuveer Peri
- Last updated: 5 May 2020, 1:34 am
- Document Type: Presentation Slides
- Document Year: 2020
- Presenters: Raghuveer Peri
- Paper Code: 5301
In this paper, we address the problem of speaker recognition in challenging acoustic conditions using a novel method to extract robust speaker-discriminative speech representations. We adopt a recently proposed unsupervised adversarial invariance architecture to train a network that maps speaker embeddings extracted using a pre-trained model onto two lower dimensional embedding spaces. The embedding spaces are learnt to disentangle speaker-discriminative information from all other information present in the audio recordings, without supervision about the acoustic conditions. We analyze the robustness of the proposed embeddings to various sources of variability present in the signal for speaker verification and unsupervised clustering tasks on a large-scale speaker recognition corpus. Our analyses show that the proposed system substantially outperforms the baseline in a variety of challenging acoustic scenarios. Furthermore, for the task of speaker diarization on a real-world meeting corpus, our system shows a relative improvement of 36% in the diarization error rate compared to the state-of-the-art baseline.
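The abstract describes the unsupervised adversarial invariance (UAI) architecture only at a high level. Below is a minimal sketch of how such a two-branch disentangler could be wired up, assuming PyTorch; the layer sizes, class and variable names, and the 512-dimensional input embedding are illustrative assumptions, not the authors' exact configuration.

```python
# Minimal sketch of a UAI-style disentangler over pre-trained speaker embeddings.
# Assumptions (not from the paper): PyTorch, 512-dim input, illustrative layer sizes.
import torch
import torch.nn as nn


class UAIDisentangler(nn.Module):
    def __init__(self, in_dim=512, h1_dim=128, h2_dim=32, num_speakers=1000):
        super().__init__()
        # Two encoders map the input embedding onto two lower-dimensional codes:
        # h1 is meant to retain speaker-discriminative information,
        # h2 absorbs all other (nuisance) information.
        self.enc1 = nn.Sequential(nn.Linear(in_dim, 256), nn.ReLU(), nn.Linear(256, h1_dim))
        self.enc2 = nn.Sequential(nn.Linear(in_dim, 256), nn.ReLU(), nn.Linear(256, h2_dim))
        # Speaker classifier sees only h1 (the supervised task loss).
        self.spk_clf = nn.Linear(h1_dim, num_speakers)
        # Decoder reconstructs the input from a noisy h1 plus h2, which pushes
        # information not needed for speaker classification into h2.
        self.dec = nn.Sequential(nn.Linear(h1_dim + h2_dim, 256), nn.ReLU(), nn.Linear(256, in_dim))
        # Adversarial predictors try to recover each code from the other; the
        # encoders are trained to defeat them, encouraging the two codes to be
        # independent. No labels about acoustic conditions are used anywhere.
        self.h1_from_h2 = nn.Linear(h2_dim, h1_dim)
        self.h2_from_h1 = nn.Linear(h1_dim, h2_dim)
        self.dropout = nn.Dropout(p=0.5)

    def forward(self, x):
        h1, h2 = self.enc1(x), self.enc2(x)
        logits = self.spk_clf(h1)
        recon = self.dec(torch.cat([self.dropout(h1), h2], dim=-1))
        return h1, h2, logits, recon


if __name__ == "__main__":
    model = UAIDisentangler()
    xvec = torch.randn(8, 512)            # a batch of pre-trained speaker embeddings
    h1, h2, logits, recon = model(xvec)   # h1 serves as the robust speaker embedding
```

In a UAI-style setup, the classification and reconstruction losses are minimized jointly, while the cross-prediction losses of `h1_from_h2` and `h2_from_h1` are optimized adversarially (alternating updates between the predictors and the encoders), so that neither code can be inferred from the other; the exact loss weighting here would be a tuning choice, not something specified in the abstract.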