MULTI-HEAD ATTENTION FOR SPEECH EMOTION RECOGNITION WITH AUXILIARY LEARNING OF GENDER RECOGNITION

Citation Author(s):
Periyasamy Paramasivam, Promod Yenigalla
Submitted by:
Anish Nediyanchath
Last updated:
21 May 2020 - 11:36pm
Document Type:
Presentation Slides
Document Year:
2020
Event:
ICASSP 2020
Presenters Name:
Anish Nediyanchath
Paper Code:
SPE-P11.8

Abstract:

The paper presents a Multi-Head Attention deep learning network for Speech Emotion Recognition (SER) using Log Mel-Filter Bank Energy (LFBE) spectral features as input. The multi-head attention, together with the position embedding, jointly attends to information from different representations of the same LFBE input sequence. The position embedding helps the model attend to the dominant emotion features by identifying the positions of those features in the sequence. In addition to multi-head attention and position embedding, we apply multi-task learning with gender recognition as an auxiliary task. The auxiliary task helps in learning the gender-specific features that influence the emotion characteristics of speech, improving the accuracy of the primary task, Speech Emotion Recognition. We conducted all our experiments on the IEMOCAP dataset. We achieve an overall accuracy of 76.4% and an average class accuracy of 70.1%, which are 5.3% and 6.2% higher, respectively, than state-of-the-art SER models for four emotion classes.
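To make the architecture described above concrete, the following is a minimal NumPy sketch of its three ingredients: a learned position embedding added to the LFBE frame sequence, multi-head self-attention over that sequence, and two task heads (emotion plus auxiliary gender) sharing one pooled representation with a weighted multi-task loss. All dimensions, the random weights, and the auxiliary weight `alpha` are illustrative assumptions, not the paper's actual hyperparameters.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions -- not taken from the paper.
T, d_model, n_heads = 50, 64, 4      # frames, model dim, attention heads
d_head = d_model // n_heads
n_emotions, n_genders = 4, 2         # 4 emotion classes (as in the paper), 2 genders

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

# Stand-in for an LFBE spectrogram: T frames of d_model features.
x = rng.standard_normal((T, d_model))

# Learned position embedding, added element-wise to the input sequence.
pos_emb = rng.standard_normal((T, d_model)) * 0.01
h = x + pos_emb

# Multi-head self-attention: each head attends over its own projection
# of the same input sequence.
Wq, Wk, Wv = (rng.standard_normal((n_heads, d_model, d_head)) * 0.1
              for _ in range(3))
Wo = rng.standard_normal((n_heads * d_head, d_model)) * 0.1

heads = []
for i in range(n_heads):
    q, k, v = h @ Wq[i], h @ Wk[i], h @ Wv[i]             # (T, d_head) each
    scores = softmax(q @ k.T / np.sqrt(d_head), axis=-1)  # (T, T) attention weights
    heads.append(scores @ v)
attended = np.concatenate(heads, axis=-1) @ Wo            # (T, d_model)

# Pool over time, then two task heads sharing the same representation.
pooled = attended.mean(axis=0)                            # (d_model,)
W_emo = rng.standard_normal((d_model, n_emotions)) * 0.1
W_gen = rng.standard_normal((d_model, n_genders)) * 0.1
p_emotion = softmax(pooled @ W_emo)
p_gender = softmax(pooled @ W_gen)

# Multi-task objective: primary emotion cross-entropy plus a weighted
# auxiliary gender cross-entropy (weight is an assumption).
alpha = 0.3
y_emo, y_gen = 1, 0                                       # dummy labels
loss = -np.log(p_emotion[y_emo]) - alpha * np.log(p_gender[y_gen])
print(p_emotion.shape, p_gender.shape, float(loss))
```

In a trained model the auxiliary gender head would only shape the shared parameters during training; at inference, only the emotion head is used.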


Dataset Files

ICASSP.pdf
