Hearing Aid Research Data Set for Acoustic Environment Recognition (HEAR-DS)

Citation Author(s):
Kamil Adiloğlu, Jörg-Hendrik Bach
Submitted by:
Andreas Huewel
Last updated:
18 May 2020 - 7:01am
Document Type:
Presentation Slides
Presenter's Name:
Andreas Hüwel

State-of-the-art hearing aids (HAs) are limited in their ability to recognize acoustic environments. Much research effort is spent on improving the listening experience for HA users in every acoustic situation. There is, however, no dedicated public database for training acoustic environment recognition algorithms with a specific focus on HA applications and their requirements, and existing acoustic scene classification databases are inappropriate for HA signal processing. In this work we propose a novel binaural HA acoustic environment recognition data set (HEAR-DS) suited to the environment recognition needs of HAs. We describe each acoustic environment provided in the data set. To demonstrate the separability of these acoustic environments, we trained a group of deep neural network-based classifiers of varying complexity. The obtained classification accuracies provide a reliable indicator of the validity and separability of the data set. Finally, since our aim is not to provide the best possible neural network architecture for this classification task but solely to propose a novel data set, further research is needed to streamline such networks and optimize them for robustness, real-time operation, and the limited computational capability of modern HAs.
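To illustrate the kind of pipeline the data set targets, the sketch below shows a minimal, purely illustrative acoustic environment classifier for binaural audio: per-channel log spectral energy features followed by a toy softmax layer. The class names, frame parameters, and the random-weight classifier are assumptions for demonstration only, not the networks or labels used in HEAR-DS.

```python
import numpy as np

# Hypothetical environment labels; the actual HEAR-DS classes may differ.
CLASSES = ["quiet_indoor", "speech_in_noise", "traffic", "music"]

def log_energy_features(stereo, frame=512, hop=256):
    """Frame a binaural signal of shape (2, n_samples) and compute
    per-frame log spectral magnitudes, concatenating both channels."""
    feats = []
    for ch in stereo:
        n = (len(ch) - frame) // hop + 1
        frames = np.stack([ch[i * hop:i * hop + frame] for i in range(n)])
        mag = np.abs(np.fft.rfft(frames * np.hanning(frame), axis=1))
        feats.append(np.log1p(mag))
    # shape: (n_frames, 2 * (frame // 2 + 1))
    return np.concatenate(feats, axis=1)

def classify(stereo, seed=0):
    """Toy classifier: mean-pooled features through a random linear
    softmax layer -- a stand-in for a trained network, only to show
    the input/output shape of the task."""
    x = log_energy_features(stereo).mean(axis=0)
    rng = np.random.default_rng(seed)
    w = rng.normal(scale=0.01, size=(len(CLASSES), x.size))
    logits = w @ x
    p = np.exp(logits - logits.max())
    return dict(zip(CLASSES, p / p.sum()))
```

In a real HA-oriented system the random layer would be replaced by a trained network, and the feature front-end would be constrained by the latency and computational budget discussed above.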
