
AUTOMATIC SPEECH ASSESSMENT FOR APHASIC PATIENTS BASED ON SYLLABLE-LEVEL EMBEDDING AND SUPRA-SEGMENTAL DURATION FEATURES

Citation Author(s):
Tan Lee, Anthony Pak Hin Kong
Submitted by:
Ying Qin
Last updated:
12 April 2018 - 11:52pm
Document Type:
Poster
Document Year:
2018
Presenters:
Ying Qin

Aphasia is a type of acquired language impairment resulting from brain injury. Speech assessment is an important part of the comprehensive assessment process for aphasic patients. It is based on acoustical and linguistic analysis of the patients' speech elicited through pre-defined story-telling tasks. This type of narrative spontaneous speech embodies multi-fold atypical characteristics related to the underlying language impairment. This paper presents an investigation of automatic speech assessment for Cantonese-speaking aphasic patients using an automatic speech recognition (ASR) system. A novel approach to extracting robust text features from erroneous ASR output is developed based on word embedding methods. The text features can effectively distinguish stories told by impaired speakers from those told by unimpaired speakers. In addition, a set of supra-segmental duration features is derived from syllable-level time alignments produced by the ASR system to characterize the atypical prosody of impaired speech. The proposed text features, duration features and their combination are evaluated in a binary classification experiment as well as in automatic prediction of a subjective assessment score. The results clearly show that the text features are very effective for the intended task of aphasia assessment, while the duration features can provide additional benefit.
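To make the described pipeline concrete, below is a minimal, illustrative sketch in Python, not the authors' implementation. It assumes the ASR system has already produced, for each story, a sequence of recognized syllables and their start/end times, and that a pre-trained syllable-embedding table is available (the table, the feature choices, and all names such as emb_table and fake_story are hypothetical). Syllable embeddings are averaged into a fixed-length text feature, simple pause/duration statistics serve as supra-segmental duration features, and their concatenation feeds a binary impaired-vs-unimpaired classifier.

import numpy as np
from sklearn.linear_model import LogisticRegression

EMB_DIM = 50  # embedding dimensionality (assumed for this sketch)

def text_feature(syllables, emb_table):
    """Average the embeddings of the recognized syllables into one vector."""
    vecs = [emb_table[s] for s in syllables if s in emb_table]
    if not vecs:
        return np.zeros(EMB_DIM)
    return np.mean(vecs, axis=0)

def duration_feature(alignments):
    """Crude supra-segmental statistics from (start, end) syllable times."""
    durs = np.array([end - start for start, end in alignments])
    gaps = np.array([alignments[i + 1][0] - alignments[i][1]
                     for i in range(len(alignments) - 1)] or [0.0])
    total_time = alignments[-1][1] - alignments[0][0]
    return np.array([durs.mean(), durs.std(),      # syllable duration statistics
                     gaps.mean(), gaps.max(),      # inter-syllable pause statistics
                     len(durs) / total_time])      # speaking rate (syllables/second)

def story_feature(syllables, alignments, emb_table):
    """Concatenate text and duration features for one story."""
    return np.concatenate([text_feature(syllables, emb_table),
                           duration_feature(alignments)])

# Toy usage with random data standing in for real ASR output and embeddings.
rng = np.random.default_rng(0)
emb_table = {f"syl{i}": rng.normal(size=EMB_DIM) for i in range(100)}

def fake_story(n):
    syls = [f"syl{rng.integers(100)}" for _ in range(n)]
    t, align = 0.0, []
    for _ in range(n):
        d = rng.uniform(0.1, 0.4)      # syllable duration
        align.append((t, t + d))
        t += d + rng.uniform(0.0, 0.5)  # pause before next syllable
    return syls, align

X = np.array([story_feature(*fake_story(rng.integers(20, 60)), emb_table)
              for _ in range(40)])
y = np.array([0] * 20 + [1] * 20)  # 0 = unimpaired, 1 = impaired (toy labels)
clf = LogisticRegression(max_iter=1000).fit(X, y)
print("training accuracy:", clf.score(X, y))

In the paper's setting the classifier would be trained and evaluated on features extracted from real aphasic and unimpaired story-telling recordings; the random data above only demonstrates the shapes and the flow of the features.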
