Modality attention for end-to-end audio-visual speech recognition
- Submitted by:
- Pan Zhou
- Last updated:
- 9 May 2019 - 12:27pm
- Document Type:
- Poster
- Document Year:
- 2019
- Presenters:
- Pan Zhou
- Paper Code:
- SLP-P13
Audio-visual speech recognition (AVSR) is considered one of the most promising solutions for robust speech recognition, especially in noisy environments. In this paper, we propose a novel multimodal attention-based method for audio-visual speech recognition that automatically learns a fused representation from both modalities according to their importance. Our method is realized using state-of-the-art sequence-to-sequence (Seq2seq) architectures. Experimental results show relative improvements of 2% up to 36% over the auditory modality alone, depending on the signal-to-noise ratio (SNR). Compared to traditional feature-concatenation methods, our proposed approach achieves better recognition performance under both clean and noisy conditions. We believe the modality-attention-based end-to-end method can be easily generalized to other multimodal tasks with correlated information.
https://ieeexplore.ieee.org/document/8683733
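
To make the fusion idea concrete, here is a minimal PyTorch sketch of modality attention: per-frame audio and visual features are scored, the scores are normalized across the two modalities with a softmax, and the fused frame is their weighted sum. This is an illustrative sketch under assumed shapes, not the authors' implementation; all names (ModalityAttention, audio_dim, visual_dim, fused_dim) and dimensions are hypothetical.

```python
# Illustrative sketch of modality attention fusion, assuming time-aligned
# audio and visual encoder outputs. Not the paper's code; names and
# dimensions are assumptions for demonstration.
import torch
import torch.nn as nn

class ModalityAttention(nn.Module):
    """Fuses audio and visual features with learned per-frame modality weights."""
    def __init__(self, audio_dim: int, visual_dim: int, fused_dim: int):
        super().__init__()
        # Project each modality into a shared space before fusion.
        self.audio_proj = nn.Linear(audio_dim, fused_dim)
        self.visual_proj = nn.Linear(visual_dim, fused_dim)
        # One scalar score per modality per frame, normalized by softmax.
        self.score = nn.Linear(fused_dim, 1)

    def forward(self, audio: torch.Tensor, visual: torch.Tensor) -> torch.Tensor:
        # audio: (batch, time, audio_dim); visual: (batch, time, visual_dim)
        a = torch.tanh(self.audio_proj(audio))    # (B, T, F)
        v = torch.tanh(self.visual_proj(visual))  # (B, T, F)
        # Stack the two modalities and score each one per frame.
        stacked = torch.stack([a, v], dim=2)                  # (B, T, 2, F)
        weights = torch.softmax(self.score(stacked), dim=2)   # (B, T, 2, 1)
        # Weighted sum over the modality axis gives the fused representation.
        return (weights * stacked).sum(dim=2)                 # (B, T, F)

# Usage: fuse 40-dim audio and 512-dim visual frames into 256-dim frames.
fusion = ModalityAttention(audio_dim=40, visual_dim=512, fused_dim=256)
fused = fusion(torch.randn(2, 100, 40), torch.randn(2, 100, 512))
print(fused.shape)  # torch.Size([2, 100, 256])
```

The fused sequence would then feed a Seq2seq decoder in place of simple feature concatenation; because the weights are recomputed per frame, the model can lean on the visual stream when the audio SNR drops.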