Using Multiple Perspectives in Emotion and Sentiment Analysis

EVERY RATING MATTERS: JOINT LEARNING OF SUBJECTIVE LABELS AND INDIVIDUAL ANNOTATORS FOR SPEECH EMOTION CLASSIFICATION


Human emotion perception is inherently subjective, and this variability differs from person to person. In this work, we propose a framework that jointly models the majority (consensus) emotion annotation and each annotator's individual subjectivity to improve emotion categorization performance. Our method achieves a promising accuracy of 61.48% on a four-class emotion recognition task. To the best of our knowledge, while many works have studied annotator subjectivity, this is one of the first to explicitly model consensus jointly with individuality in emotion perception and to demonstrate the resulting improvement in classifying emotion on a benchmark corpus.
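The joint-learning idea described above (a shared representation feeding both a consensus head for the majority label and one head per annotator for that rater's individual labels, trained with a weighted combination of the two losses) can be sketched as a minimal NumPy forward pass. This is an illustrative assumption, not the paper's actual architecture; the feature sizes, annotator count, and weight `alpha` are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

N_CLASSES = 4        # four-class emotion task, as in the paper
N_ANNOTATORS = 3     # hypothetical number of raters
FEAT_DIM = 8         # hypothetical acoustic feature dimension
HID_DIM = 16         # hypothetical shared-representation size

# Shared encoder (a single linear layer + ReLU, for illustration only)
W_enc = rng.normal(scale=0.1, size=(FEAT_DIM, HID_DIM))
# Consensus head: predicts the majority-vote emotion label
W_cons = rng.normal(scale=0.1, size=(HID_DIM, N_CLASSES))
# One head per annotator: models that rater's individual labels
W_ann = rng.normal(scale=0.1, size=(N_ANNOTATORS, HID_DIM, N_CLASSES))

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def cross_entropy(probs, labels):
    return -np.log(probs[np.arange(len(labels)), labels] + 1e-12).mean()

def joint_loss(x, consensus_y, annotator_y, alpha=0.5):
    """Weighted sum of the consensus loss and the mean per-annotator loss."""
    h = np.maximum(x @ W_enc, 0.0)                  # shared representation
    loss_cons = cross_entropy(softmax(h @ W_cons), consensus_y)
    loss_ann = np.mean([
        cross_entropy(softmax(h @ W_ann[a]), annotator_y[:, a])
        for a in range(N_ANNOTATORS)
    ])
    return alpha * loss_cons + (1.0 - alpha) * loss_ann

# Toy batch: 5 utterances with majority labels and per-annotator labels
x = rng.normal(size=(5, FEAT_DIM))
consensus_y = rng.integers(0, N_CLASSES, size=5)
annotator_y = rng.integers(0, N_CLASSES, size=(5, N_ANNOTATORS))
print(joint_loss(x, consensus_y, annotator_y))
```

Setting `alpha` closer to 1 emphasizes the consensus label; lower values give more weight to individual raters. In a real system the encoder would be a deep acoustic network and the loss would be minimized by gradient descent.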

In our immediate future work, we will evaluate the proposed framework on other large-scale public emotional databases with multiple annotators, e.g., NNIME, to further justify its robustness. We also plan to extend our framework to include other behavioral attributes, e.g., lexical content and body movements. Furthermore, since the subjective nature of emotion perception has been shown to relate to rater personality, jointly modeling a rater's characteristics with his or her subjectivity in emotion perception may lead to further advances in robust emotion recognition.

Paper Details

Authors:
Huang-Cheng Chou, Chi-Chun Lee
Submitted On:
30 May 2019 - 2:17am

Document Files

Talk Slides

Full Paper


[1] Huang-Cheng Chou, Chi-Chun Lee, "EVERY RATING MATTERS: JOINT LEARNING OF SUBJECTIVE LABELS AND INDIVIDUAL ANNOTATORS FOR SPEECH EMOTION CLASSIFICATION", IEEE SigPort, 2019. [Online]. Available: http://sigport.org/4006. Accessed: Jun. 20, 2019.