
Comparison of speech tasks for automatic classification of patients with amyotrophic lateral sclerosis and healthy subjects

Citation Author(s):
Deep Patel, BK Yamini, Meera SS, Shivashankar N, Preethish-Kumar Veeramani, Seena Vengalil, Kiran Polavarapu, Saraswati Nashi, Atchayaram Nalini, Prasanta Kumar Ghosh
Submitted by:
ARAVIND ILLA
Last updated:
24 April 2018 - 1:18am
Document Type:
Poster
Document Year:
2018
Event:
Presenters:
Aravind Illa
Paper Code:
SP-P21.5

In this work, we consider the task of automatic classification of amyotrophic lateral sclerosis (ALS) patients and healthy subjects based on acoustic and articulatory features derived from speech tasks. In particular, we compare the roles of different types of speech tasks, namely rehearsed speech, spontaneous speech and repeated words, for this purpose. Simultaneous articulatory and speech data were recorded from 8 healthy controls and 8 ALS patients using an AG501 electromagnetic articulograph for the classification experiments. In addition to typical acoustic and articulatory features, new articulatory features are proposed for classification. As classifiers, both Deep Neural Networks (DNN) and Support Vector Machines (SVM) are examined. Classification experiments reveal that the proposed articulatory features outperform the other acoustic and articulatory features with both the DNN and SVM classifiers. However, the SVM performs better than the DNN classifier when using the proposed features. Among the three speech tasks considered, rehearsed speech provides the highest F-score of 1, followed by an F-score of 0.92 with both repeated words and spontaneous speech.
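For illustration, a minimal Python sketch of this kind of classification pipeline is given below, using mean/std-pooled MFCCs as a stand-in acoustic feature, an SVM classifier, and leave-one-subject-out evaluation with the F-score. The feature choice, evaluation protocol, file paths and labels are all assumptions made for the example; the proposed articulatory features and the DNN classifier from the poster are not reproduced here.

import numpy as np
import librosa
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import LeaveOneGroupOut, cross_val_predict
from sklearn.metrics import f1_score

def utterance_features(wav_path, sr=16000, n_mfcc=13):
    # Fixed-length acoustic representation of one utterance:
    # frame-wise MFCCs pooled by their mean and standard deviation.
    y, sr = librosa.load(wav_path, sr=sr)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)  # shape: (n_mfcc, n_frames)
    return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])

# Hypothetical metadata: (wav path, label, speaker id) per utterance,
# with label 1 = ALS patient and 0 = healthy control.
utterances = [
    ("data/ctrl01_rehearsed_01.wav", 0, "ctrl01"),
    ("data/als01_rehearsed_01.wav", 1, "als01"),
    # ... remaining utterances from the 8 controls and 8 patients
]

X = np.vstack([utterance_features(path) for path, _, _ in utterances])
y = np.array([label for _, label, _ in utterances])
groups = np.array([spk for _, _, spk in utterances])

# SVM with feature standardization, evaluated with leave-one-subject-out
# cross-validation so that no speaker appears in both training and test folds.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
pred = cross_val_predict(clf, X, y, cv=LeaveOneGroupOut(), groups=groups)
print("F-score:", f1_score(y, pred))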
