An Analysis of Speech Enhancement and Recognition Losses in Limited Resources Multi-Talker Single Channel Audio-Visual ASR

Citation Author(s):
Luca Pasa, Leonardo Badino
Submitted by:
Giovanni Morrone
Last updated:
13 May 2020 - 6:28pm
Document Type:
Presentation Slides
Presenter's Name:
Luca Pasa
In this paper, we analyze how audio-visual speech enhancement can help to perform the ASR task in a cocktail party scenario. To this end, we consider two simple end-to-end LSTM-based models that perform single-channel audio-visual speech enhancement and phone recognition, respectively. We then study how the two models interact and how training them jointly affects the final result. We analyze different training strategies, which reveal some interesting and unexpected behaviors. The experiments show that, during optimization of the ASR task, the speech enhancement capability of the model significantly decreases, and vice versa. Nevertheless, the joint optimization of the two tasks yields a remarkable drop in Phone Error Rate (PER) compared to audio-visual baseline models trained only to perform phone recognition. We analyze the behaviors of the proposed models on two limited-size datasets, namely the mixed-speech versions of GRID and TCD-TIMIT.
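The joint optimization described above can be sketched as a weighted sum of the two task losses. This is a minimal illustrative example, not the authors' implementation: the weight `alpha`, the MSE enhancement loss, and the function names are all assumptions introduced here for clarity.

```python
# Hedged sketch of joint enhancement + ASR training objective.
# All names (alpha, enhancement_loss, joint_loss) and the choice of
# MSE for the enhancement term are illustrative assumptions, not the
# paper's actual formulation.

def enhancement_loss(enhanced, clean):
    # Mean squared error between enhanced and clean feature frames
    # (each argument is a list of equal-length frames).
    n = sum(len(frame) for frame in enhanced)
    return sum(
        (e - c) ** 2
        for ef, cf in zip(enhanced, clean)
        for e, c in zip(ef, cf)
    ) / n

def joint_loss(enhanced, clean, asr_loss_value, alpha=0.5):
    # Weighted combination of the two tasks: alpha trades off speech
    # enhancement against phone recognition. Optimizing only one term
    # (alpha = 0 or 1) recovers the single-task training regimes
    # compared in the paper.
    return alpha * enhancement_loss(enhanced, clean) \
        + (1.0 - alpha) * asr_loss_value
```

In a real system the ASR term would be, e.g., a CTC or cross-entropy loss on phone labels, and both models would be differentiated end-to-end through this combined objective.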
