Hearing Aid Research Data Set for Acoustic Environment Recognition (HEAR-DS)
- Submitted by:
- Andreas Huewel
- Last updated:
- 18 May 2020 - 7:01am
- Document Type:
- Presentation Slides
- Document Year:
- 2020
- Presenters:
- Andreas Hüwel
- Paper Code:
- AUD-P9.2
State-of-the-art hearing aids (HAs) are limited in their ability to recognize acoustic environments. Much research effort is spent on improving the listening experience for HA users in every acoustic situation. There is, however, no dedicated public database for training acoustic environment recognition algorithms with a specific focus on HA applications and their requirements; existing acoustic scene classification databases are unsuitable for HA signal processing. In this work we propose HEAR-DS, a novel binaural acoustic environment recognition data set tailored to the environment recognition needs of hearing aids. We describe each acoustic environment provided in the data set in detail. To demonstrate the separability of these acoustic environments, we trained a group of deep neural network-based classifiers of varying complexity. The classification accuracies obtained are a reliable indicator of the validity and separability of the data set. Finally, since our aim is not to provide the best possible neural network architecture for this classification task but solely to propose a novel data set, further research is needed to streamline such networks and optimize them for robustness, real-time operation, and the limited computational capacity of modern HAs.
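A minimal sketch of the kind of binaural front end such classifiers consume. The abstract does not specify the feature pipeline; the log-power spectrogram features, frame sizes, and sampling rate below are illustrative assumptions, and the random signals stand in for real HEAR-DS audio excerpts.

```python
import numpy as np

def stft_log_power(x, n_fft=512, hop=256):
    """Frame a mono signal and return a log-power spectrogram (frames x bins)."""
    window = np.hanning(n_fft)
    frames = []
    for start in range(0, len(x) - n_fft + 1, hop):
        seg = x[start:start + n_fft] * window
        spec = np.abs(np.fft.rfft(seg)) ** 2
        frames.append(np.log(spec + 1e-10))  # floor avoids log(0)
    return np.array(frames)

def binaural_features(left, right, **kw):
    """Stack per-channel log-power spectrograms so the left/right cues
    that a binaural data set like HEAR-DS provides are preserved."""
    return np.stack([stft_log_power(left, **kw),
                     stft_log_power(right, **kw)], axis=0)

# Synthetic stand-in for one one-second binaural excerpt
# (sampling rate and duration are assumptions, not HEAR-DS specifics).
rng = np.random.default_rng(0)
fs = 16000
left = rng.standard_normal(fs)
right = rng.standard_normal(fs)

feats = binaural_features(left, right)
print(feats.shape)  # → (2, 61, 257): channels x time frames x frequency bins
```

A tensor of this shape is a typical input to the small convolutional or feed-forward classifiers the abstract alludes to; the network itself is deliberately left out, since the authors stress that the data set, not any particular architecture, is the contribution.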