Sequence-to-Sequence ASR Optimization via Reinforcement Learning

Abstract: 

Despite the success of sequence-to-sequence approaches in automatic speech recognition (ASR) systems, such models still suffer from several problems, mainly due to the mismatch between training and inference conditions. In the sequence-to-sequence architecture, the model is trained to predict the grapheme at the current time-step given the input speech signal and the ground-truth grapheme history of the previous time-steps. During inference, however, the ground-truth history is unavailable: the model must generate the whole transcription from scratch, conditioning each prediction on its own previous outputs, so errors can propagate over time. Furthermore, the model is optimized to maximize the likelihood of the training data rather than the error-rate metrics that actually quantify recognition quality. This paper presents an alternative strategy for training sequence-to-sequence ASR models based on reinforcement learning (RL). Unlike the standard training scheme with maximum likelihood estimation, our proposed approach uses the policy gradient algorithm, which allows us to (1) sample whole transcriptions from the model's own predictions during training and (2) directly optimize the model with the negative Levenshtein distance as the reward. Experimental results demonstrate significant performance improvements over a model trained only with maximum likelihood estimation.
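
As a concrete illustration of the training scheme described above, the following is a minimal Python/PyTorch sketch of REINFORCE-style policy-gradient training with the negative Levenshtein distance as the reward. The encoder-decoder interface (model.encode, model.decode_step) and all names and hyperparameters are hypothetical placeholders assumed for the sketch, not the authors' implementation.

# A minimal sketch (assumed interface, not the authors' code) of the
# policy-gradient idea from the abstract: sample a transcription from the
# model's own predictions and reward it with negative Levenshtein distance.
import torch

def levenshtein(a, b):
    # Edit distance between two token sequences (insert/delete/substitute),
    # computed with a single rolling row of the DP table.
    m, n = len(a), len(b)
    dp = list(range(n + 1))
    for i in range(1, m + 1):
        prev, dp[0] = dp[0], i
        for j in range(1, n + 1):
            cur = dp[j]
            dp[j] = min(dp[j] + 1,                        # delete a[i-1]
                        dp[j - 1] + 1,                    # insert b[j-1]
                        prev + (a[i - 1] != b[j - 1]))    # substitute
            prev = cur
    return dp[n]

def reinforce_loss(model, speech, reference, max_len, sos_id, eos_id):
    # model.encode / model.decode_step are hypothetical placeholders for an
    # attention-based encoder-decoder; only the REINFORCE logic is the point.
    state = model.encode(speech)
    tokens, log_probs = [], []
    prev = torch.tensor(sos_id)
    for _ in range(max_len):
        logits, state = model.decode_step(prev, state)    # [vocab_size] scores
        dist = torch.distributions.Categorical(logits=logits)
        tok = dist.sample()                               # sample, not ground truth
        log_probs.append(dist.log_prob(tok))
        tokens.append(int(tok))
        if int(tok) == eos_id:
            break
        prev = tok
    reward = -float(levenshtein(tokens, reference))       # negative edit distance
    # REINFORCE: ascend reward * grad log p(sample); a baseline would normally
    # be subtracted from the reward here to reduce gradient variance.
    return -reward * torch.stack(log_probs).sum()

In practice, an objective like this is usually combined with, or warm-started from, standard maximum likelihood training, since samples from an untrained model are rarely close enough to the reference for the reward to be informative.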

Paper Details

Authors:
Andros Tjandra, Sakriani Sakti, Satoshi Nakamura
Submitted On:
14 April 2018 - 10:37am
Short Link:
http://sigport.org/2834
Type:
Poster
Event:
Presenter's Name:
Andros Tjandra
Paper Code:
SP-P17.8
Document Year:
2018
Cite

Document Files

Poster in PDF format

[1] Andros Tjandra, Sakriani Sakti, Satoshi Nakamura, "Sequence-to-Sequence ASR Optimization via Reinforcement Learning", IEEE SigPort, 2018. [Online]. Available: http://sigport.org/2834. Accessed: Jul. 21, 2018.
@article{2834-18,
url = {http://sigport.org/2834},
author = {Andros Tjandra and Sakriani Sakti and Satoshi Nakamura},
publisher = {IEEE SigPort},
title = {Sequence-to-Sequence ASR Optimization via Reinforcement Learning},
year = {2018}
}