
Attentive Adversarial Learning for Domain-Invariant Training

Citation Author(s):
Zhong Meng, Jinyu Li, Yifan Gong
Submitted by:
Zhong Meng
Last updated:
12 May 2019 - 9:03pm
Document Type:
Poster
Document Year:
2019
Event:
Presenters:
Yifan Gong
Paper Code:
3451

Abstract

Adversarial domain-invariant training (ADIT) proves to be effective in suppressing the effects of domain variability in acoustic modeling and has led to improved performance in automatic speech recognition (ASR). In ADIT, an auxiliary domain classifier takes in equally-weighted deep features from a deep neural network (DNN) acoustic model and is trained to improve their domain-invariance by optimizing an adversarial loss function. In this work, we propose an attentive ADIT (AADIT) in which we advance the domain classifier with an attention mechanism to automatically weight the input deep features according to their importance in domain classification. With this attentive re-weighting, AADIT can focus on the domain normalization of phonetic components that are more susceptible to domain variability and generates deep features with improved domain-invariance and senone-discriminativity over ADIT. Most importantly, the attention block serves only as an external component to the DNN acoustic model and is not involved in ASR, so AADIT can be used to improve the acoustic modeling with any DNN architecture. More generally, the same methodology can improve any adversarial learning system with an auxiliary discriminator. Evaluated on the CHiME-3 dataset, AADIT achieves 13.6% and 9.3% relative WER improvements over a multi-conditional model and a strong ADIT baseline, respectively.
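The attentive re-weighting described above can be illustrated with a minimal NumPy sketch. The abstract does not specify the attention parameterization, so the single learnable score vector `w_att` below is a hypothetical choice for illustration: frame-level deep features are scored, the scores are softmax-normalized into attention weights, and the weighted sum is what the auxiliary domain classifier would consume (in full AADIT, the adversarial loss would be back-propagated through this block via a gradient reversal layer, which is omitted here).

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over a 1-D score vector.
    e = np.exp(x - x.max())
    return e / e.sum()

def attentive_pooling(feats, w_att):
    """Attention-weight frame-level deep features for the domain classifier.

    feats: (T, D) deep features from the DNN acoustic model, T frames.
    w_att: (D,) hypothetical attention parameter vector (an assumption;
           the paper's exact attention parameterization may differ).
    Returns the (D,) attention-weighted feature and the (T,) weights.
    """
    scores = feats @ w_att          # (T,) unnormalized importance per frame
    alpha = softmax(scores)         # (T,) attention weights, sum to 1
    pooled = alpha @ feats          # (D,) weighted combination of frames
    return pooled, alpha

# Toy usage: 50 frames of 16-dimensional deep features.
rng = np.random.default_rng(0)
feats = rng.normal(size=(50, 16))
w_att = rng.normal(size=16)
pooled, alpha = attentive_pooling(feats, w_att)
```

Note the contrast with plain ADIT, where every frame would implicitly receive the uniform weight 1/T; here frames whose features are more domain-indicative receive larger `alpha` and thus dominate the domain classifier's input.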


Files

aadit_poster.pptx
