On Language Model Integration for RNN Transducer based Speech Recognition
- Submitted by: Wei Zhou
- Last updated: 23 May 2022 - 5:09am
- Document Type: Presentation Slides
- Document Year: 2022
- Presenters: Wei Zhou
- Paper Code: SPE-83.4
The mismatch between an external language model (LM) and the implicitly learned internal LM (ILM) of the RNN-Transducer (RNN-T) can limit the performance of LM integration methods such as simple shallow fusion. A Bayesian interpretation suggests removing this sequence prior as an ILM correction. In this work, we study various ILM-correction-based LM integration methods formulated in a common RNN-T framework. We provide a decoding interpretation of two major reasons for the performance improvement with ILM correction, which we further verify experimentally with detailed analysis. We also propose an exact-ILM training framework by extending the proof given for the hybrid autoregressive transducer (HAT), which enables a theoretical justification for other ILM approaches. A systematic comparison is conducted for both in-domain and cross-domain evaluation on the Librispeech and TED-LIUM Release 2 corpora, respectively. Our proposed exact-ILM training can further improve the best ILM method.
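As a rough illustration (the notation below is assumed for this summary, not quoted from the slides), ILM correction is commonly applied as a log-linear combination at decoding time, subtracting an estimate of the internal LM from the shallow-fusion score:

\[
\hat{y} \;=\; \operatorname*{argmax}_{y} \Big[ \log p_{\text{RNN-T}}(y \mid x) \;+\; \lambda_{1}\, \log p_{\text{LM}}(y) \;-\; \lambda_{2}\, \log p_{\text{ILM}}(y) \Big]
\]

Here \(\lambda_{1}\) and \(\lambda_{2}\) denote the external-LM and ILM scales; standard shallow fusion corresponds to the special case \(\lambda_{2} = 0\).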