MULTIMODAL EMOTION RECOGNITION WITH SURGICAL AND FABRIC MASKS

Citation Author(s):
Ziqing Yang, Katherine Nayan, Zehao Fan, Houwei Cao
Submitted by:
Ziqing Yang
Last updated:
16 May 2022 - 10:42am
Document Type:
Poster
Event:
Presenters:
Ziqing Yang
Paper Code:
MMSP-3.1

In this study, we investigate how different types of masks affect automatic emotion classification in the audio, visual, and multimodal channels. We train emotion classification models for each modality on the original data without masks and on the re-generated data with masks, and examine how muffled speech and occluded facial expressions change the predicted emotions. Moreover, we conduct a contribution analysis to study how muffled speech and occluded facial expressions interact, and further investigate the individual contributions of the audio, visual, and audiovisual modalities to emotion prediction with and without masks. Finally, we investigate cross-corpus emotion recognition across clear speech and speech re-generated with different types of masks, and discuss the robustness of speech emotion recognition.
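The abstract does not give implementation details, so the sketch below is only a minimal illustration of the kind of setup it describes: one classifier per modality, an audiovisual model, and a cross-corpus evaluation step. The feature dimensions, label-set size, late-fusion rule, and all class and function names are assumptions for illustration, not the authors' method.

```python
# Hypothetical sketch (not the authors' implementation): per-modality emotion
# classifiers, a simple late-fusion audiovisual model, and a cross-corpus
# evaluation loop, assuming precomputed audio/visual feature vectors and
# integer emotion labels.
import torch
import torch.nn as nn

NUM_EMOTIONS = 4                   # assumed number of emotion classes
AUDIO_DIM, VISUAL_DIM = 128, 256   # assumed feature dimensions

class UnimodalClassifier(nn.Module):
    """One branch per modality: a small MLP over precomputed features."""
    def __init__(self, in_dim, num_classes=NUM_EMOTIONS):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, 64), nn.ReLU(), nn.Linear(64, num_classes)
        )

    def forward(self, x):
        return self.net(x)

class LateFusionClassifier(nn.Module):
    """Audiovisual model: average the per-modality logits (late fusion)."""
    def __init__(self):
        super().__init__()
        self.audio = UnimodalClassifier(AUDIO_DIM)
        self.visual = UnimodalClassifier(VISUAL_DIM)

    def forward(self, audio_feat, visual_feat):
        return 0.5 * (self.audio(audio_feat) + self.visual(visual_feat))

def evaluate(model, loader):
    """Accuracy of an audiovisual model on a (possibly cross-corpus) test set."""
    model.eval()
    correct = total = 0
    with torch.no_grad():
        for audio_feat, visual_feat, labels in loader:
            preds = model(audio_feat, visual_feat).argmax(dim=1)
            correct += (preds == labels).sum().item()
            total += labels.numel()
    return correct / max(total, 1)

# Cross-corpus protocol sketch: a model trained on clear (no-mask) data is
# evaluated on mask-regenerated data, and vice versa, e.g.
#   acc = evaluate(model_trained_on_clear, loader_surgical_mask)
```

Under this kind of protocol, comparing the accuracies of the audio-only, visual-only, and fused models, with and without masks, is one straightforward way to quantify the per-modality contributions the abstract refers to.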
