ROBUST AUTOMATIC RECOGNITION OF SPEECH WITH BACKGROUND MUSIC
- Submitted by:
- Jiri Malek
- Last updated:
- 28 February 2017 - 9:22am
- Document Type:
- Poster
- Document Year:
- 2017
- Presenters:
- Jiri Malek
- Paper Code:
- SP-P4.1
This paper addresses the task of Automatic Speech Recognition (ASR) with music in the background, where the accuracy of recognition may deteriorate significantly.
To improve the robustness of ASR in this task, e.g., for broadcast news transcription or subtitle creation, we adopt two approaches:
1) multi-condition training of the acoustic models and 2) denoising autoencoders followed by acoustic model training on the preprocessed data.
In the latter case, two types of autoencoders are considered: a fully connected network and a convolutional network.
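As a rough illustration of the denoising-autoencoder idea (this is not the paper's actual architecture, features, or training setup, and the surrogate data below is purely synthetic), a minimal fully connected autoencoder can be trained with plain gradient descent to map corrupted feature frames back to their clean targets:

```python
import numpy as np

rng = np.random.default_rng(0)

# Shape-compatible surrogate for log-spectral feature frames: "clean" targets
# and their music-corrupted counterparts (clean + additive disturbance).
n_frames, n_feats, n_hidden = 256, 40, 64
clean = rng.standard_normal((n_frames, n_feats))
noisy = clean + 0.5 * rng.standard_normal((n_frames, n_feats))

# One-hidden-layer denoising autoencoder: noisy frame -> hidden -> clean estimate.
W1 = 0.1 * rng.standard_normal((n_feats, n_hidden))
b1 = np.zeros(n_hidden)
W2 = 0.1 * rng.standard_normal((n_hidden, n_feats))
b2 = np.zeros(n_feats)

def forward(x):
    h = np.tanh(x @ W1 + b1)
    return h, h @ W2 + b2

_, out0 = forward(noisy)
mse_init = float(((out0 - clean) ** 2).mean())

lr = 0.05
for _ in range(300):
    h, out = forward(noisy)
    err = (out - clean) / n_frames        # gradient of the mean-squared error
    gW2, gb2 = h.T @ err, err.sum(axis=0)
    dh = (err @ W2.T) * (1.0 - h ** 2)    # backprop through tanh
    gW1, gb1 = noisy.T @ dh, dh.sum(axis=0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

_, denoised = forward(noisy)
mse_final = float(((denoised - clean) ** 2).mean())
print(mse_init, mse_final)  # training should reduce the reconstruction error
```

A convolutional variant would replace the dense layers with 2-D convolutions over time-frequency patches; in both cases, the acoustic model is then trained on the autoencoder's output.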
The presented experimental results show that all of the investigated techniques significantly improve the recognition of speech distorted by music.
For example, for artificial mixtures of speech and electronic music at a low Signal-to-Noise Ratio (SNR) of 0 dB, we achieved an absolute accuracy improvement of 35.8%.
For real-world broadcast news at a high SNR (about 10 dB), we achieved an improvement of 2.4%.
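Creating artificial mixtures at a prescribed SNR amounts to scaling the music so that the speech-to-music power ratio hits the target before adding the two signals. A minimal sketch of such mixing (the `mix_at_snr` helper and the surrogate signals are assumptions for illustration, not the paper's actual data pipeline):

```python
import numpy as np

def mix_at_snr(speech, music, snr_db):
    """Scale `music` so that the speech-to-music power ratio equals `snr_db`,
    then return the mixture and the scaled music (a common mixing recipe)."""
    p_speech = np.mean(speech ** 2)
    p_music = np.mean(music ** 2)
    target_power = p_speech / (10.0 ** (snr_db / 10.0))   # desired music power
    scaled = music * np.sqrt(target_power / p_music)
    return speech + scaled, scaled

rng = np.random.default_rng(1)
speech = rng.standard_normal(16000)                        # 1 s surrogate "speech" at 16 kHz
t = np.arange(16000) / 16000.0
music = np.sin(2 * np.pi * 440.0 * t)                      # surrogate "music" tone

mix, music_scaled = mix_at_snr(speech, music, snr_db=0.0)  # the paper's low-SNR condition
snr = 10.0 * np.log10(np.mean(speech ** 2) / np.mean(music_scaled ** 2))
print(f"SNR of mixture: {snr:.3f} dB")
```

At 0 dB the speech and music carry equal average power, which is why recognition degrades so strongly without the compensation techniques described above.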
An important advantage of the studied approaches is that they do not deteriorate accuracy in scenarios with clean speech (the decrease is only about 1%).