
SEQUENCE-BASED MULTI-LINGUAL LOW RESOURCE SPEECH RECOGNITION

Citation Author(s):
Siddharth Dalmia, Ramon Sanabria, Florian Metze, Alan W Black
Submitted by:
Siddharth Dalmia
Last updated:
18 April 2018 - 3:03pm
Document Type:
Presentation Slides
Document Year:
2018
Event:
Presenters:
Siddharth Dalmia
Paper Code:
4000

Techniques for multi-lingual and cross-lingual speech recognition can help in low resource scenarios, to bootstrap systems and enable analysis of new languages and domains. End-to-end approaches, in particular sequence-based techniques, are attractive because of their simplicity and elegance. While it is possible to integrate traditional multi-lingual bottleneck feature extractors as front-ends, we show that end-to-end multi-lingual training of sequence models is effective on context-independent models trained with Connectionist Temporal Classification (CTC) loss. Our model improves performance on Babel languages by over 6% absolute in terms of word/phoneme error rate when compared to mono-lingual systems built in the same setting. The trained model can also be adapted cross-lingually to an unseen language using just 25% of the target data. Finally, we show that training on multiple languages is important for very low resource cross-lingual target scenarios, but not for multi-lingual testing scenarios, where it appears beneficial to include large, well-prepared datasets.
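To make the multi-lingual CTC setup concrete, below is a minimal, hypothetical PyTorch sketch of the idea: a recurrent acoustic encoder emitting posteriors over a shared phoneme inventory (plus the CTC blank), trained with CTC loss so that utterances from different languages can simply be mixed into the same batches. The class name `SharedPhonemeCTC`, the layer sizes, and the random toy data are illustrative assumptions, not the paper's actual architecture or data.

```python
# Illustrative sketch only: a shared-phoneme CTC acoustic model.
# All names, dimensions, and data below are assumptions for demonstration.
import torch
import torch.nn as nn

NUM_PHONEMES = 100          # assumed size of the shared phoneme inventory
BLANK = 0                   # CTC blank index
FEAT_DIM = 40               # e.g. log-Mel filterbank features

class SharedPhonemeCTC(nn.Module):
    """BiLSTM encoder emitting log-posteriors over shared phonemes + blank."""
    def __init__(self, feat_dim=FEAT_DIM, hidden=320, layers=4):
        super().__init__()
        self.encoder = nn.LSTM(feat_dim, hidden, num_layers=layers,
                               bidirectional=True, batch_first=True)
        # +1 output unit for the CTC blank symbol
        self.proj = nn.Linear(2 * hidden, NUM_PHONEMES + 1)

    def forward(self, feats):                  # feats: (B, T, feat_dim)
        out, _ = self.encoder(feats)
        return self.proj(out).log_softmax(-1)  # (B, T, NUM_PHONEMES + 1)

model = SharedPhonemeCTC()
ctc = nn.CTCLoss(blank=BLANK, zero_infinity=True)
opt = torch.optim.Adam(model.parameters(), lr=1e-4)

# One toy training step. Because the output layer is shared across
# languages, multi-lingual training amounts to mixing their utterances.
feats = torch.randn(8, 200, FEAT_DIM)                  # 8 utterances
feat_lens = torch.full((8,), 200, dtype=torch.long)
targets = torch.randint(1, NUM_PHONEMES + 1, (8, 30))  # phoneme label IDs
target_lens = torch.full((8,), 30, dtype=torch.long)

log_probs = model(feats).transpose(0, 1)   # CTCLoss expects (T, B, C)
loss = ctc(log_probs, targets, feat_lens, target_lens)
opt.zero_grad()
loss.backward()
opt.step()
```

Under this setup, cross-lingual adaptation to an unseen language would correspond to continuing training on (a fraction of) the target-language data, optionally re-initializing the output projection for the new phoneme set.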
