Mood State Prediction from Speech of Varying Acoustic Quality for Individuals with Bipolar Disorder
- Submitted by: John Gideon
- Last updated: 27 March 2016 - 3:20pm
- Document Type: Presentation Slides
- Document Year: 2016
- Presenters: John Gideon
- Paper Code: 2523
Speech contains patterns that can be altered by the mood of an individual. There is an increasing focus on automated and distributed methods to collect and monitor speech from large groups of patients suffering from mental health disorders. However, as the scope of these collections increases, so does the variability in the data. This variability is due in part to the range in quality of the recording devices, which affects the quality of the recorded speech and negatively impacts the accuracy of automatic assessment. Mitigating these variability effects is necessary to expand the impact of these technologies. This paper explores speech collected from phone recordings for the analysis of mood in individuals with bipolar disorder. Two different phones with varying amounts of clipping, loudness, and noise are employed. We describe methodologies for use during preprocessing, feature extraction, and data modeling to correct these differences and make the devices more comparable. The results demonstrate that these pipeline modifications yield statistically significantly higher performance, highlighting the potential of distributed mental health systems.
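The abstract does not spell out the correction steps, but as a rough illustration of the kind of per-device preprocessing it refers to, the sketch below estimates a clipping rate and applies RMS loudness normalization to a waveform. The function names, clipping threshold, and target level are illustrative assumptions, not the authors' actual pipeline.

```python
import numpy as np

def clipping_rate(signal, threshold=0.99):
    # Fraction of samples at or above the clipping threshold
    # (full scale assumed to be 1.0); higher values suggest a lower-quality device.
    return float(np.mean(np.abs(signal) >= threshold))

def rms_normalize(signal, target_rms=0.1):
    # Scale the waveform so its RMS level matches a common target,
    # making loudness more comparable across recording devices.
    rms = np.sqrt(np.mean(signal ** 2))
    if rms == 0:
        return signal
    return signal * (target_rms / rms)

# Toy example: a 1 kHz tone sampled at 16 kHz with artificial clipping,
# standing in for a phone recording of speech.
sr = 16000
t = np.arange(sr) / sr
recording = np.clip(1.2 * np.sin(2 * np.pi * 1000 * t), -1.0, 1.0)

print(f"clipping rate: {clipping_rate(recording):.3f}")
normalized = rms_normalize(recording)
print(f"RMS after normalization: {np.sqrt(np.mean(normalized ** 2)):.3f}")
```

In a full pipeline, device-level statistics such as the clipping rate could also be passed downstream as features or used to select device-specific models; the paper's slides describe the specific corrections evaluated.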