Multimodal Depression Classification Using Articulatory Coordination Features and Hierarchical Attention Based Text Embeddings

Citation Author(s):
Nadee Seneviratne, Carol Espy-Wilson
Submitted by:
Nadee Seneviratne
Last updated:
6 May 2022 - 2:57pm
Document Type:
Poster
Document Year:
2022
Presenters:
Nadee Seneviratne
Paper Code:
SPE-9.2
Multimodal depression classification has gained immense popularity in recent years. We develop a multimodal depression classification system that combines articulatory coordination features extracted from vocal tract variables with text transcriptions obtained from an automatic speech recognition tool. The multimodal system improves the area under the receiver operating characteristic curve over the unimodal classifiers (by 7.5% and 13.7% relative to the audio and text classifiers, respectively). We show that, when training data are limited, a segment-level classifier can first be trained and then used to obtain session-wise predictions without hindering performance, using a multi-stage convolutional recurrent neural network. The text model is trained using a Hierarchical Attention Network (HAN). The multimodal system is developed by combining embeddings from the session-level audio model with embeddings from the HAN text model.
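The fusion step described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the embedding dimensions, mean pooling over audio segments, concatenation fusion, and logistic output layer are all assumptions introduced here for clarity.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical embedding sizes (not taken from the paper).
AUDIO_DIM, TEXT_DIM = 128, 100

def session_audio_embedding(segment_embeddings):
    """Aggregate segment-level audio embeddings into one session-level
    embedding. Mean pooling is an assumed aggregation choice."""
    return segment_embeddings.mean(axis=0)

def fuse(audio_emb, text_emb):
    """Combine the session-level audio embedding with the HAN text
    embedding. Simple concatenation is an assumed fusion scheme."""
    return np.concatenate([audio_emb, text_emb])

def classify(fused, weights, bias):
    """Produce a binary depression probability from the fused embedding
    via a logistic output layer (an assumed classifier head)."""
    return 1.0 / (1.0 + np.exp(-(fused @ weights + bias)))

# Toy session: 10 audio segments plus one text embedding.
segments = rng.standard_normal((10, AUDIO_DIM))
text_emb = rng.standard_normal(TEXT_DIM)

fused = fuse(session_audio_embedding(segments), text_emb)
weights = rng.standard_normal(AUDIO_DIM + TEXT_DIM)
score = classify(fused, weights, 0.0)
```

In practice the two embeddings would come from the trained session-level audio model and the HAN text model, and the fused representation would feed a learned classification layer rather than random weights.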
