Large Margin Training Improves Language Models for ASR
- Submitted by:
- Jilin Wang
- Last updated:
- 22 June 2021 - 2:57pm
- Document Type:
- Poster
- Document Year:
- 2021
- Presenters:
- Kenneth Church
- Paper Code:
- HLT-2.2
Language models (LMs) are widely deployed in modern ASR systems. An LM is typically trained by minimizing its perplexity on speech transcripts. However, few studies attempt to discriminate the "gold" reference against inferior hypotheses. In this work, we propose a large margin language model (LMLM). LMLM is a general framework that enforces the LM to assign a higher score to the "gold" reference and a lower one to inferior hypotheses. The framework is applied to three pretrained LM architectures: a left-to-right LSTM, a transformer encoder, and a transformer decoder. Results show that LMLM significantly outperforms traditional LMs trained by minimizing perplexity, especially in challenging noisy cases. Among the three architectures, the transformer encoder achieves the best performance.
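The large-margin objective described above can be sketched as a hinge loss over sentence-level LM scores (e.g. log-probabilities): the loss is zero once the gold reference outscores an inferior hypothesis by at least a fixed margin. The function name, score convention, and margin value below are illustrative assumptions, not the paper's exact formulation.

```python
# Minimal sketch of a large-margin ranking loss for LM scoring.
# All names and values here are hypothetical illustrations.

def hinge_margin_loss(score_gold: float, score_hyp: float, margin: float = 1.0) -> float:
    """Hinge loss that is zero once the gold reference outscores
    the inferior hypothesis by at least `margin`."""
    return max(0.0, margin - (score_gold - score_hyp))

# Gold transcript already beats the hypothesis by more than the margin:
# no penalty, so gradient-based training leaves these scores alone.
print(hinge_margin_loss(score_gold=-10.0, score_hyp=-12.0))  # 0.0

# Hypothesis scores too close to the gold reference: a positive loss
# pushes the two scores apart during training.
print(hinge_margin_loss(score_gold=-10.0, score_hyp=-10.5))  # 0.5
```

In practice such a loss would be averaged over (reference, hypothesis) pairs, e.g. hypotheses drawn from an ASR n-best list, and combined with or substituted for the usual perplexity objective.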