Speech Emotion Recognition with Distilled Prosodic and Linguistic Affect Representations

DOI:
10.60864/xqg9-5r70
Citation Author(s):
Debaditya Shome, Ali Etemad
Submitted by:
Debaditya Shome
Last updated:
6 June 2024 - 10:27am
Document Type:
Poster
Document Year:
2024
Event:
Presenters:
Debaditya Shome
Paper Code:
SLP-P39.10

We propose EmoDistill, a novel speech emotion recognition (SER) framework that leverages cross-modal knowledge distillation during training to learn strong linguistic and prosodic representations of emotion from speech. During inference, our method uses only a stream of speech signals to perform unimodal SER, thereby reducing computational overhead and avoiding run-time transcription and prosodic feature-extraction errors. During training, our method distills information at both the embedding and logit levels from a pair of pre-trained prosodic and linguistic teachers that are fine-tuned for SER. Experiments on the IEMOCAP benchmark demonstrate that our method outperforms other unimodal and multimodal techniques by a considerable margin, achieving state-of-the-art performance of 77.49% unweighted accuracy and 78.91% weighted accuracy. Detailed ablation studies demonstrate the impact of each component of our method.
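To make the two-level distillation concrete, below is a minimal sketch of a combined training objective that distills from a prosodic and a linguistic teacher at both the logit and embedding levels. The loss names, temperature, weighting coefficients (alpha, beta, gamma), the cosine-based embedding matching, and the assumption that student and teacher embeddings have already been projected to a common dimension are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def emodistill_style_loss(student_logits, student_emb,
                          prosodic_logits, prosodic_emb,
                          linguistic_logits, linguistic_emb,
                          labels, temperature=2.0,
                          alpha=1.0, beta=1.0, gamma=1.0):
    """Hypothetical combined objective: supervised cross-entropy plus
    logit- and embedding-level distillation from two frozen teachers."""
    # Supervised cross-entropy on the emotion labels
    ce = F.cross_entropy(student_logits, labels)

    # Logit-level distillation: KL divergence between the student's and
    # each teacher's temperature-softened class distributions
    log_p_student = F.log_softmax(student_logits / temperature, dim=-1)
    kd = sum(
        F.kl_div(log_p_student,
                 F.softmax(t_logits / temperature, dim=-1),
                 reduction="batchmean") * temperature ** 2
        for t_logits in (prosodic_logits, linguistic_logits)
    )

    # Embedding-level distillation: pull the student embedding toward each
    # teacher's embedding (cosine distance used here as one plausible choice)
    emb = sum(
        (1.0 - F.cosine_similarity(student_emb, t_emb, dim=-1)).mean()
        for t_emb in (prosodic_emb, linguistic_emb)
    )

    return alpha * ce + beta * kd + gamma * emb
```

At inference time only the speech-based student is kept, so the teachers, transcription, and prosodic feature extraction are not needed, which is what keeps the deployed model unimodal.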
