Large Margin Training Improves Language Models for ASR

Citation Author(s):
Jilin Wang, Jiaji Huang, Kenneth Church
Submitted by:
Jilin Wang
Last updated:
22 June 2021 - 2:57pm

Language models (LMs) are widely deployed in modern ASR systems. An LM is typically trained by minimizing its perplexity on speech transcripts. However, few studies try to discriminate a "gold" reference against inferior hypotheses. In this work, we propose a large margin language model (LMLM). LMLM is a general framework that enforces an LM to assign a higher score to the "gold" reference and a lower one to inferior hypotheses. The framework is applied to three pretrained LM architectures: a left-to-right LSTM, a transformer encoder, and a transformer decoder. Results show that LMLM significantly outperforms traditional LMs trained by minimizing perplexity, especially in challenging noisy cases. Among the three architectures, the transformer encoder achieves the best performance.
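The large-margin idea in the abstract can be sketched as a hinge-style ranking loss: the gold reference's LM score must exceed each inferior hypothesis's score by at least a margin. This is a minimal illustrative sketch, not the paper's actual objective; the function name, the margin value, and the toy scores are all assumptions.

```python
def margin_loss(gold_score, hyp_scores, margin=1.0):
    """Hinge-style large-margin loss: penalize any hypothesis whose
    LM score comes within `margin` of the gold reference's score.
    Scores are stand-ins for LM log-probabilities (hypothetical)."""
    return sum(max(0.0, margin - (gold_score - h)) for h in hyp_scores)

# Toy example: the gold reference beats the first hypothesis by 2.0
# (more than the margin, so it contributes nothing), but beats the
# second by only 0.5, which incurs a penalty of 1.0 - 0.5 = 0.5.
loss = margin_loss(gold_score=-2.0, hyp_scores=[-4.0, -2.5], margin=1.0)
# → 0.5
```

Minimizing such a loss pushes the gold reference's score up relative to competing hypotheses, in contrast to perplexity training, which only maximizes the likelihood of the reference itself.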


Poster for "Large Margin Training Improves Language Models for ASR"