
End-to-End Anchored Speech Recognition

Abstract: 

Voice-controlled household devices, such as Amazon Echo or Google Home, face the problem of performing speech recognition of device-directed speech in the presence of interfering background speech; that is, background noise and interfering speech from another person or a media device in proximity need to be ignored. We propose two end-to-end models that tackle this problem with information extracted from the "anchored segment". The "anchored segment" refers to the wake-up word part of an audio stream, which contains valuable speaker information that can be used to suppress interfering speech and background noise. The first method, "Multi-source Attention", is an attention mechanism that takes both the speaker information and the decoder state into consideration. The second method directly learns a frame-level mask on top of the encoder output. We also explore a multi-task learning setup in which the ground truth of the mask is used to guide the learner. Because audio data with interfering speech is rare in our training set, we also propose a way to synthesize "noisy" speech from "clean" speech to mitigate the mismatch between training and test data. Our proposed methods show up to a 15% relative reduction in WER on Amazon Alexa live data with interfering background speech, without significantly degrading on clean speech.
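To make the second method concrete, here is a minimal, hypothetical sketch of anchor-based frame masking: a speaker embedding is mean-pooled from the wake-word ("anchored") segment, and each encoder frame is weighted by its similarity to that embedding so that frames unlike the anchor speaker are suppressed. The function name, the mean-pooling speaker encoder, and the fixed sigmoid sharpness are illustrative stand-ins, not the paper's learned components.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def anchor_mask(anchor_frames, encoder_frames):
    """Frame-level mask from similarity to the anchor-segment embedding.

    anchor_frames:  (T_a, D) features of the wake-word segment
    encoder_frames: (T, D) encoder outputs for the full utterance
    Returns: mask of shape (T,) in (0, 1) and masked encoder outputs (T, D).
    """
    # Speaker embedding: mean-pool the anchor segment (a simple stand-in
    # for a learned speaker encoder).
    emb = anchor_frames.mean(axis=0)
    emb = emb / (np.linalg.norm(emb) + 1e-8)

    # Cosine similarity of each encoder frame to the anchor embedding.
    norms = np.linalg.norm(encoder_frames, axis=1, keepdims=True) + 1e-8
    sim = (encoder_frames / norms) @ emb

    # Squash to (0, 1); frames dissimilar to the anchor speaker are
    # attenuated. The sharpness 5.0 is arbitrary for this sketch.
    mask = sigmoid(5.0 * sim)
    return mask, encoder_frames * mask[:, None]

# Toy usage: target frames resemble the anchor speaker, interfering
# frames do not, so the mask keeps the former and suppresses the latter.
rng = np.random.default_rng(0)
D = 8
spk = rng.normal(size=D)
anchor = spk + 0.1 * rng.normal(size=(5, D))
target = spk + 0.1 * rng.normal(size=(10, D))
interf = -spk + 0.1 * rng.normal(size=(10, D))
mask, masked = anchor_mask(anchor, np.vstack([target, interf]))
```

In the paper's multi-task setup, a ground-truth version of such a mask supervises the learned one; this sketch only illustrates the shape of the computation.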


Paper Details

Authors:
Yiming Wang, Xing Fan, I-Fan Chen, Yuzong Liu, Tongfei Chen, Björn Hoffmeister
Submitted On:
7 May 2019 - 2:33pm
Type:
Poster
Presenter's Name:
Björn Hoffmeister
Paper Code:
1980
Document Year:
2019
Cite

Document Files

ICASSP19_Poster_AnchoredSpeechRecogWithAttention.pdf



[1] Yiming Wang, Xing Fan, I-Fan Chen, Yuzong Liu, Tongfei Chen, Björn Hoffmeister, "End-to-End Anchored Speech Recognition", IEEE SigPort, 2019. [Online]. Available: http://sigport.org/3943. Accessed: May 23, 2019.