Toward Visual Voice Activity Detection for Unconstrained Videos
- Submitted by: Rahul Sharma
- Last updated: 19 September 2019 - 11:55am
- Document Type: Poster
- Document Year: 2019
- Presenters: Rahul Sharma
- Paper Code: 3434
Prevalent audio-based Voice Activity Detection (VAD) systems are challenged by ambient noise and are sensitive to variations in the type of noise. Information from the visual modality, when available, can help overcome some of the problems of audio-based VAD. Existing visual-VAD systems, however, do not operate directly on the whole image but require intermediate face detection, face landmark detection, and subsequent facial feature extraction from the lip region. In this work, we present an end-to-end trainable Hierarchical Context Aware (HiCA) architecture for visual-VAD in videos obtained in unconstrained environments, which can be trained with videos as input and audio speech labels as output. The network is designed to account for both local and global temporal information in a video sequence. In contrast to existing visual-VAD systems, our proposed approach does not rely on face detection and subsequent facial feature extraction. It achieves a VAD accuracy of 66% on a dataset of Hollywood movie videos using visual information alone. Further analysis of the representations learned by our visual-VAD system shows that the network learns to localize on human faces, and sometimes specifically on speaking faces. Our quantitative analysis of the effectiveness of face localization shows that our system performs better than sound localization networks designed for unconstrained videos.
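To make the overall idea concrete, the sketch below shows one plausible way to arrange a per-frame visual encoder with local and global temporal modules, trained against frame-level speech labels derived from the audio track. This is only an illustration under assumed module choices and sizes (a small CNN encoder, a 1D convolution for local context, a bidirectional GRU for global context); it is not the authors' HiCA implementation.

```python
import torch
import torch.nn as nn


class VisualVADSketch(nn.Module):
    """Illustrative visual-VAD network: a frame encoder followed by local
    (short-range) and global (sequence-level) temporal modules, ending in a
    per-frame speech/non-speech prediction. Hypothetical layer sizes."""

    def __init__(self, feat_dim=256, hidden_dim=128):
        super().__init__()
        # Per-frame spatial encoder (a small CNN stands in for any backbone).
        self.frame_encoder = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, feat_dim),
        )
        # Local temporal context: 1D convolution over neighbouring frames.
        self.local_context = nn.Conv1d(feat_dim, hidden_dim,
                                       kernel_size=5, padding=2)
        # Global temporal context: a bidirectional GRU over the full clip.
        self.global_context = nn.GRU(hidden_dim, hidden_dim,
                                     batch_first=True, bidirectional=True)
        # Per-frame VAD head producing a speech/non-speech logit.
        self.head = nn.Linear(2 * hidden_dim, 1)

    def forward(self, frames):
        # frames: (batch, time, 3, H, W)
        b, t = frames.shape[:2]
        x = self.frame_encoder(frames.flatten(0, 1))            # (b*t, feat)
        x = x.view(b, t, -1).transpose(1, 2)                    # (b, feat, t)
        x = torch.relu(self.local_context(x)).transpose(1, 2)   # (b, t, hid)
        x, _ = self.global_context(x)                           # (b, t, 2*hid)
        return self.head(x).squeeze(-1)                         # (b, t) logits


# Usage sketch: per-frame logits trained against audio-derived speech labels.
model = VisualVADSketch()
clip = torch.randn(2, 16, 3, 112, 112)           # 2 clips of 16 RGB frames
labels = torch.randint(0, 2, (2, 16)).float()    # speech/non-speech per frame
loss = nn.BCEWithLogitsLoss()(model(clip), labels)
```

The key design point the sketch tries to capture is the hierarchy: the convolutional stage aggregates context over a few neighbouring frames, while the recurrent stage propagates context across the whole clip, so each frame's prediction reflects both scales of temporal information.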