LOW-RANK AND SPARSE SOFT TARGETS TO LEARN BETTER DNN ACOUSTIC MODELS
- Submitted by:
- Pranay Dighe
- Last updated:
- 7 March 2017 - 12:15pm
- Document Type:
- Poster
- Document Year:
- 2017
- Presenters:
- Pranay Dighe
- Paper Code:
- 3483
Conventional deep neural networks (DNNs) for speech acoustic modeling rely on Gaussian mixture models (GMMs) and hidden Markov models (HMMs) to obtain binary class labels as the targets for DNN training. Subword classes in speech recognition systems correspond to context-dependent tied states, or senones. The present work addresses some limitations of GMM-HMM senone alignments for DNN training. We hypothesize that the senone probabilities obtained from a DNN trained with binary labels can provide more accurate targets for learning better acoustic models. However, DNN outputs carry inaccuracies that appear as high-dimensional unstructured noise, whereas the informative components are structured and low-dimensional. We exploit principal component analysis (PCA) and sparse coding to characterize the senone subspaces. Enhanced probabilities obtained from low-rank and sparse reconstructions are used as soft targets for DNN acoustic modeling, which also enables training with untranscribed data. Experiments conducted on the AMI corpus show a 4.6% relative reduction in word error rate.
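The low-rank reconstruction step can be illustrated with a minimal NumPy sketch: project a frame-by-senone posterior matrix onto its top principal components (PCA via SVD), reconstruct, and renormalize each frame into a probability distribution. The function name, the chosen rank, and the clip-and-renormalize step are illustrative assumptions, not the paper's exact procedure, and the sparse-coding variant is omitted.

```python
import numpy as np

def low_rank_soft_targets(posteriors, rank):
    """Suppress unstructured noise in DNN senone posteriors by
    keeping only the top `rank` principal components.
    Illustrative sketch, not the authors' exact pipeline."""
    # Center the (frames x senones) posterior matrix for PCA
    mean = posteriors.mean(axis=0, keepdims=True)
    centered = posteriors - mean
    # PCA via SVD; truncate to the leading `rank` components
    U, S, Vt = np.linalg.svd(centered, full_matrices=False)
    approx = (U[:, :rank] * S[:rank]) @ Vt[:rank] + mean
    # Reconstruction can leave tiny negatives: clip, then
    # renormalize so each frame is again a valid distribution
    approx = np.clip(approx, 1e-8, None)
    return approx / approx.sum(axis=1, keepdims=True)

# Toy example: 100 frames over 20 senone classes (random posteriors)
rng = np.random.default_rng(0)
logits = rng.normal(size=(100, 20))
post = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
soft = low_rank_soft_targets(post, rank=5)
```

The resulting `soft` matrix would then replace the binary one-hot alignments as training targets (e.g. via a cross-entropy loss against these soft labels), which is what allows the same mechanism to generate targets for untranscribed data.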