High Order Recurrent Neural Networks for Acoustic Modelling
- Submitted by:
- Chao ZHANG
- Last updated:
- 12 April 2018 - 12:16pm
- Document Type:
- Poster
- Document Year:
- 2018
- Presenters:
- Phil Woodland
- Paper Code:
- 3291
Vanishing long-term gradients are a major issue in training standard recurrent neural networks (RNNs), which can be alleviated by long short-term memory (LSTM) models with memory cells. However, the extra parameters associated with the memory cells mean an LSTM layer has four times as many parameters as an RNN with the same hidden vector size. This paper addresses the vanishing gradient problem using a high order RNN (HORNN), which has additional connections from multiple previous time steps. Speech recognition experiments using British English multi-genre broadcast (MGB3) data showed that the proposed HORNN architectures with rectified linear unit (ReLU) and sigmoid activation functions reduced the word error rate (WER) by 4.2% and 6.3% over the corresponding RNNs, and gave similar WERs to a (projected) LSTM while using only 20%--50% of the recurrent layer parameters and computation.
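The core idea, as stated in the abstract, is that each hidden state receives recurrent connections not only from the immediately preceding step but also from further back in time. The sketch below is a minimal pure-Python illustration of that idea under stated assumptions: the choice of delays (1 and 4), the tanh activation, and the additive combination of the delayed contributions are placeholders, not the paper's exact formulation (the paper studies ReLU and sigmoid variants).

```python
import math

def matvec(W, v):
    """Multiply matrix W (list of rows) by vector v."""
    return [sum(w_ij * v_j for w_ij, v_j in zip(row, v)) for row in W]

def hornn_step(x_t, h_hist, W_in, W_recs, b, delays=(1, 4)):
    """One HORNN time step.

    The pre-activation sums the input projection with recurrent
    contributions from several earlier hidden states h_{t-d}, one
    delay per weight matrix in W_recs. The extra short-cut paths to
    older states are what shortens the gradient path through time.
    """
    pre = [p + b_i for p, b_i in zip(matvec(W_in, x_t), b)]
    for d, W_rec in zip(delays, W_recs):
        if len(h_hist) >= d:  # h_{t-d} exists only once t >= d
            contrib = matvec(W_rec, h_hist[-d])
            pre = [p + c for p, c in zip(pre, contrib)]
    # tanh here is illustrative; the paper uses ReLU / sigmoid variants
    return [math.tanh(p) for p in pre]

def hornn_layer(x_seq, W_in, W_recs, b, delays=(1, 4)):
    """Run the HORNN recurrence over a whole input sequence."""
    h_hist = []
    for x_t in x_seq:
        h_hist.append(hornn_step(x_t, h_hist, W_in, W_recs, b, delays))
    return h_hist
```

Note that, unlike an LSTM, this layer adds no gating parameters: each extra delay costs at most one additional hidden-to-hidden matrix (and those can be shared or projected to a smaller size), which is consistent with the abstract's claim of using only a fraction of the LSTM's recurrent parameters.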