EXPLORING RETRAINING-FREE SPEECH RECOGNITION FOR INTRA-SENTENTIAL CODE-SWITCHING
- Submitted by: Yuchen Zhang
- Last updated: 7 May 2019 - 2:28pm
- Document Type: Poster
- Document Year: 2019
- Presenters: Yuchen Zhang
- Paper Code: 3409
Code-switching refers to the phenomenon of changing languages within a sentence or discourse, and it poses a challenge for conventional automatic speech recognition systems, which are deployed to handle a single target language. The code-switching problem is further complicated by the scarcity of the multilingual training data needed to build new, ad hoc multilingual acoustic and language models. In this work, we present a prototype research code-switching speech recognition system that leverages existing monolingual acoustic and language models, i.e., no ad hoc training is needed. To generate high-quality pronunciations of foreign-language words in the native-language phoneme set, we use a combination of existing acoustic phone decoders and an LSTM-based grapheme-to-phoneme model. In addition, we develop a code-switching language model that uses translated word pairs to borrow statistics from the native language model. We demonstrate that our approach handles accented foreign pronunciations better than techniques based on human labeling. Our best system reduces the WER from 34.4%, obtained with a conventional monolingual speech recognition system, to 15.3% on an intra-sentential code-switching task, without harming monolingual accuracy.
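As a rough illustration of the retraining-free recipe described in the abstract, the Python sketch below mocks up the two ingredients with toy data: adding foreign words to the recognition lexicon with pronunciations already expressed in the native phoneme set, and letting those foreign words borrow unigram statistics from their translated native counterparts. The function names (`extend_lexicon`, `borrow_unigram_stats`), the `penalty` factor, and all numbers are illustrative assumptions, not the paper's implementation.

```python
# Minimal sketch (not the authors' code) of the two retraining-free steps:
# (1) extend the lexicon with foreign words whose pronunciations are given
#     in the native phoneme inventory (e.g., from a phone decoder or a G2P
#     model), and
# (2) let foreign words borrow language-model statistics from their native
#     translations, so no acoustic or language model is retrained.

def extend_lexicon(native_lexicon, foreign_prons):
    """foreign_prons maps a foreign word to a pronunciation rendered in the
    native phoneme set; the monolingual acoustic model stays untouched."""
    lexicon = dict(native_lexicon)
    lexicon.update(foreign_prons)
    return lexicon

def borrow_unigram_stats(native_unigrams, translation_pairs, penalty=0.1):
    """Give each foreign word a penalised copy of its native translation's
    unigram probability, then renormalise.  `penalty` is an illustrative
    scaling factor, not a value taken from the paper."""
    extended = dict(native_unigrams)
    for foreign, native in translation_pairs.items():
        if native in native_unigrams:
            extended[foreign] = penalty * native_unigrams[native]
    total = sum(extended.values())
    return {word: prob / total for word, prob in extended.items()}

# Toy usage with made-up entries (English as the native language,
# one Mandarin word inserted intra-sententially):
lexicon = extend_lexicon(
    {"hello": ["HH", "AH", "L", "OW"]},
    {"nihao": ["N", "IY", "HH", "AW"]},  # foreign word in native phonemes
)
unigrams = borrow_unigram_stats(
    {"hello": 0.4, "world": 0.6},
    {"nihao": "hello"},
)
print(lexicon)
print(unigrams)
```

Because only the lexicon and the language-model counts are extended, the deployed monolingual acoustic model and decoder are left exactly as they are, which mirrors the retraining-free property the abstract emphasizes.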