
Sequence-to-Sequence ASR Optimization via Reinforcement Learning

Citation Author(s):
Andros Tjandra, Sakriani Sakti, Satoshi Nakamura
Submitted by:
Andros Tjandra
Last updated:
14 April 2018 - 10:37am
Document Type:
Poster
Document Year:
2018
Presenters:
Andros Tjandra
Paper Code:
SP-P17.8

Despite the success of sequence-to-sequence approaches in automatic speech recognition (ASR), the models still suffer from several problems, mainly caused by the mismatch between training and inference conditions. In the sequence-to-sequence architecture, the model is trained to predict the grapheme at the current time-step given the input speech signal and the ground-truth grapheme history of the previous time-steps. During inference, however, no ground truth is available, so the model must generate the whole transcription from scratch based on its own previous predictions, and errors can propagate over time. Furthermore, the model is optimized to maximize the likelihood of the training data rather than the error-rate metrics that actually quantify recognition quality. This paper presents an alternative strategy for training sequence-to-sequence ASR models by adopting ideas from reinforcement learning (RL). Unlike the standard training scheme based on maximum likelihood estimation, our proposed approach uses the policy gradient algorithm, which lets us (1) sample the whole transcription from the model's own predictions during training and (2) directly optimize the model with the negative Levenshtein distance as the reward. Experimental results demonstrate that the proposed approach significantly improves performance compared to a model trained only with maximum likelihood estimation.
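The reward and loss described above can be sketched in a few lines. The following is a minimal illustration, not the authors' implementation: it shows the Levenshtein-distance reward and the REINFORCE-style surrogate loss (negative reward times the summed log-probabilities of the sampled tokens); the function names and the list-based interface are hypothetical simplifications, and a real system would compute `log_probs` from the model's softmax outputs and subtract a baseline to reduce variance.

```python
def levenshtein(ref, hyp):
    """Edit distance between reference and hypothesis sequences
    via the standard dynamic-programming recurrence."""
    prev = list(range(len(hyp) + 1))
    for i in range(1, len(ref) + 1):
        cur = [i] + [0] * len(hyp)
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            cur[j] = min(prev[j] + 1,        # deletion
                         cur[j - 1] + 1,     # insertion
                         prev[j - 1] + cost) # substitution
        prev = cur
    return prev[len(hyp)]

def reinforce_loss(log_probs, sampled, reference):
    """Policy-gradient surrogate loss for one sampled transcription.

    log_probs : per-token log-probabilities of the sampled graphemes
    sampled   : grapheme sequence drawn from the model's distribution
    reference : ground-truth transcription

    Reward is the negative Levenshtein distance, so minimizing this
    loss pushes probability mass away from high-error samples.
    """
    reward = -levenshtein(reference, sampled)
    return -reward * sum(log_probs)
```

A perfect sample (`sampled == reference`) yields zero reward and hence zero loss; an erroneous sample yields a negative reward, so gradient descent on the loss lowers the log-probability of the tokens that produced it.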
