Tutorial: Assisted Listening for Headphones and Hearing Aids: Signal Processing Techniques
- Submitted by: Jianjun HE
- Last updated: 23 February 2016 - 1:44pm
With the strong growth of mobile devices and emerging virtual reality (VR) and augmented reality (AR) applications, headsets are becoming increasingly preferred for personal listening due to their convenience and portability. Assistive listening (AL) devices such as hearing aids have also seen significant advancement. Creating a natural and authentic listening experience is the common objective of VR, AR, and AL applications. In this tutorial, we will present state-of-the-art audio and acoustic signal processing techniques to enhance sound reproduction in headsets and hearing aids.
This tutorial starts with an introduction to recent examples of audio applications in VR, AR, and AL. To keep the tutorial accessible to a novice audience, some background on the fundamentals of spatial hearing and the main classes of spatial audio reproduction techniques will be briefly introduced. This is followed by an outline of the three key parts of the tutorial, which focus on binaural techniques and, especially, the connections among them.
In part I, we will address recent advances in rendering natural sound over headphones. Based on a source-medium-receiver model, we analyze the differences between headphone sound reproduction and natural listening, which lead to five categories of signal processing approaches that can be employed to narrow the gap between the two: virtualization, sound scene decomposition, individualization, equalization, and head-tracking. Finally, the integration of these techniques is discussed and illustrated with an exemplar system (a.k.a. 3D audio headphones) developed at our lab.
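To make the virtualization category concrete, the following is a minimal sketch of binaural virtualization: a mono source is convolved with a pair of head-related impulse responses (HRIRs) to place it at a virtual direction. The HRIR values below are toy illustrations (a delayed, attenuated right-ear response mimicking a source on the listener's left), not measured data.

```python
import numpy as np

def virtualize(mono, hrir_left, hrir_right):
    """Render a mono signal binaurally by convolving it with the
    left- and right-ear head-related impulse responses (HRIRs)."""
    left = np.convolve(mono, hrir_left)
    right = np.convolve(mono, hrir_right)
    # Zero-pad so both ear signals have equal length before stacking.
    n = max(len(left), len(right))
    left = np.pad(left, (0, n - len(left)))
    right = np.pad(right, (0, n - len(right)))
    return np.stack([left, right])  # shape: (2, n)

# Toy example: a short noise burst rendered with illustrative HRIRs.
# The right ear gets a delayed, attenuated copy (interaural time and
# level differences), suggesting a source on the listener's left.
rng = np.random.default_rng(0)
mono = rng.standard_normal(256)
hrir_left = np.array([1.0, 0.3, 0.1])
hrir_right = np.array([0.0, 0.0, 0.6, 0.2, 0.05])

binaural = virtualize(mono, hrir_left, hrir_right)
```

In a real system, the HRIR pair would be selected (or interpolated) per source direction from a measured or individualized HRTF set, and updated with head-tracking.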
In part II, we will discuss natural augmented reality audio. Natural listening in augmented reality requires the listener to be aware of the surrounding acoustic scene. In augmented reality, virtual sound sources are superimposed on the real world such that listeners can connect with the augmented sound sources seamlessly. Three typical headset systems for augmented reality audio will be presented, including a natural augmented reality (NAR) headset developed at our lab. The NAR headset employs adaptive filtering techniques to adapt to the listener's specific responses and the environment's acoustic characteristics, and to compensate for the headphone response, achieving natural listening in real time.
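As a simplified sketch of the kind of adaptive filtering such a headset relies on (the actual NAR algorithms are not reproduced here), the normalized LMS filter below identifies an unknown acoustic response from a reference signal and the signal measured at the ear. The `true_h` response is a hypothetical stand-in for a headphone/ear transfer path, not a measured one.

```python
import numpy as np

def nlms_identify(x, d, num_taps, mu=0.5, eps=1e-8):
    """Normalized LMS: adapt filter w so that (w * x) tracks the
    desired signal d, yielding an estimate of the unknown response."""
    w = np.zeros(num_taps)
    buf = np.zeros(num_taps)  # most recent input samples, newest first
    for n in range(len(x)):
        buf = np.roll(buf, 1)
        buf[0] = x[n]
        y = w @ buf                 # filter output
        e = d[n] - y                # estimation error
        # Step size normalized by input power for stable convergence.
        w += mu * e * buf / (eps + buf @ buf)
    return w

# Hypothetical transfer path to identify (e.g., headphone-to-ear response).
true_h = np.array([0.9, -0.4, 0.2, 0.05])
rng = np.random.default_rng(1)
x = rng.standard_normal(5000)            # probe/reference signal
d = np.convolve(x, true_h)[:len(x)]      # signal "measured" at the ear
w = nlms_identify(x, d, num_taps=4)
```

Once such a response is estimated, its inverse can be used to equalize the headphone so the reproduced scene matches the real one.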
In part III, other aspects of augmenting human listening, i.e., reducing unwanted noise and enhancing speech perception, will be discussed. We will present active noise control (ANC) techniques for headsets and discuss how to integrate ANC with sound playback. Moreover, noise reduction and speech enhancement in hearing aids will be presented, with a focus on exploiting spatial information. Furthermore, ANC can also be incorporated into hearing aids to further reduce ambient noise.
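A common building block for headset ANC is the filtered-x LMS (FxLMS) algorithm. The sketch below simulates it with illustrative primary and secondary acoustic paths (not measured responses), assuming the secondary-path estimate is exact: the reference noise is filtered through the secondary-path estimate before driving the weight update, so the anti-noise cancels the disturbance at the error microphone.

```python
import numpy as np

def fxlms(x, d, s, num_taps, mu=0.01):
    """Filtered-x LMS: adapt control filter w so the anti-noise,
    after passing through secondary path s, cancels disturbance d."""
    w = np.zeros(num_taps)
    xbuf = np.zeros(num_taps)   # reference buffer for the control filter
    fxbuf = np.zeros(num_taps)  # filtered-reference buffer for the update
    ybuf = np.zeros(len(s))     # anti-noise buffer for the secondary path
    xsbuf = np.zeros(len(s))    # reference buffer for filtering by s
    err = np.zeros(len(x))
    for n in range(len(x)):
        xbuf = np.roll(xbuf, 1); xbuf[0] = x[n]
        y = w @ xbuf                          # anti-noise sample
        ybuf = np.roll(ybuf, 1); ybuf[0] = y
        e = d[n] + s @ ybuf                   # residual at the error mic
        xsbuf = np.roll(xsbuf, 1); xsbuf[0] = x[n]
        fxbuf = np.roll(fxbuf, 1); fxbuf[0] = s @ xsbuf  # filtered-x
        w -= mu * e * fxbuf                   # gradient step on e^2
        err[n] = e
    return w, err

# Simulated setup with illustrative paths: the primary path p colors the
# noise reaching the ear; the secondary path s models the loudspeaker-to-
# error-mic response, assumed perfectly estimated here.
rng = np.random.default_rng(2)
x = rng.standard_normal(20000)           # reference noise
p = np.array([0.5, -0.3, 0.1])           # primary acoustic path
s = np.array([1.0, 0.4])                 # secondary path (and its estimate)
d = np.convolve(x, p)[:len(x)]           # disturbance at the error mic
w, err = fxlms(x, d, s, num_taps=8)
```

After convergence, the residual power at the error microphone is a small fraction of the disturbance power; in practice the secondary path must itself be estimated, and modeling error there limits performance.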
In the concluding part of the tutorial, we will provide demonstrations (videos and apps) illustrating advancements in assisted listening and natural sound rendering in headphones, and highlight emerging trends in signal processing approaches for natural and augmented listening in headsets.
This tutorial is an extension of the APSIPA 2014 Plenary Talk and also includes new work reported in the IEEE Signal Processing Magazine March 2015 special issue on Signal Processing Techniques for Assisted Listening.