Expression-Guided EEG Representation Learning for Emotion Recognition

Citation Author(s):
Soheil Rayatdoost, David Rudrauf, Mohammad Soleymani
Submitted by:
Mohammad Soleymani
Last updated:
16 May 2020 - 12:55am
Document Type:
Presentation Slides
Document Year:
2020
Paper Code:


Learning a joint, coordinated representation across modalities can improve multimodal emotion recognition. In this paper, we propose a deep representation learning approach for emotion recognition from electroencephalogram (EEG) signals, guided by facial electromyogram (EMG) and electrooculogram (EOG) signals. We recorded EEG, EMG and EOG signals from 60 participants who watched 40 short videos and self-reported their emotions. We designed a cross-modal encoder that jointly learns features extracted from facial and ocular expressions and from EEG responses, and evaluated it on our recorded data and on MAHNOB-HCI, a publicly available database. We demonstrate that the proposed representation improves emotion recognition performance. We also show that the learned representation can be transferred to a different database without EMG and EOG signals and still achieve superior performance. Methods that fuse behavioral and neural responses can be deployed in wearable emotion recognition solutions, which are practical in situations where computer-vision-based expression recognition is not feasible.
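To make the joint-representation idea concrete, below is a minimal NumPy sketch of projecting two modalities into a shared embedding space and measuring how well they align. The feature dimensions, the linear "encoders" (`W_eeg`, `W_exp`), and the cosine alignment loss are illustrative assumptions, not the paper's actual deep cross-modal encoder or training objective.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions (not from the paper): an EEG feature vector,
# a smaller facial/ocular (EMG+EOG) feature vector, and a shared space.
d_eeg, d_exp, d_shared = 64, 12, 16

# Linear projections standing in for the learned modality encoders.
W_eeg = rng.standard_normal((d_eeg, d_shared)) * 0.1
W_exp = rng.standard_normal((d_exp, d_shared)) * 0.1

def encode(x, W):
    """Project a modality-specific feature vector into the shared space
    and unit-normalize it."""
    z = x @ W
    return z / (np.linalg.norm(z) + 1e-8)

def alignment_loss(z_a, z_b):
    """Cosine-distance loss: 0 when the two embeddings point the same way,
    2 when they are opposite. Minimizing it coordinates the two modalities."""
    return 1.0 - float(z_a @ z_b)

x_eeg = rng.standard_normal(d_eeg)   # stand-in EEG features for one trial
x_exp = rng.standard_normal(d_exp)   # stand-in EMG/EOG features, same trial

z_eeg = encode(x_eeg, W_eeg)
z_exp = encode(x_exp, W_exp)
loss = alignment_loss(z_eeg, z_exp)
print(f"alignment loss: {loss:.3f}")
```

At test time, only the EEG branch is needed: because both encoders map into the same space, the EEG embedding can be used on its own, which is what makes transfer to a database without EMG and EOG recordings possible.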