Improved TDNNs using Deep Kernels and Frequency Dependent Grid-RNNs
- Submitted by:
- Florian Kreyssig
- Last updated:
- 15 April 2018 - 2:43am
- Document Type:
- Presentation Slides
- Document Year:
- 2018
- Presenters:
- Florian Kreyssig
- Paper Code:
- 4303
Time delay neural networks (TDNNs) are an effective acoustic model for large vocabulary speech recognition. The strength of the model can be attributed to its ability to effectively model long temporal contexts. However, current TDNN models are relatively shallow, which limits their modelling capability. This paper proposes a method of increasing the network depth by deepening the kernel used in the TDNN temporal convolutions. The best performing kernel consists of three fully connected layers with a residual (ResNet) connection from the output of the first to the output of the third. The addition of spectro-temporal processing as the input to the TDNN, in the form of a convolutional neural network (CNN) and a newly designed Grid-RNN, was investigated. The Grid-RNN strongly outperforms a CNN if different sets of parameters are used for different frequency bands, and can be further enhanced by using a bi-directional Grid-RNN. Experiments using the multi-genre broadcast (MGB3) English data (275h) show that deep kernel TDNNs reduce the word error rate (WER) by 6% relative and, when combined with the frequency dependent Grid-RNN, give a relative WER reduction of 9%.
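The abstract describes the deep kernel as three fully connected layers with a ResNet-style skip connection from the first layer's output to the third's, applied inside each TDNN temporal convolution. The sketch below illustrates that topology in NumPy; the layer widths, ReLU activations, edge-clamping, and context offsets are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def deep_kernel(x, W1, W2, W3):
    # Three fully connected layers; the residual connection adds the
    # first layer's output to the third layer's output (as in the abstract).
    # Activation choice (ReLU) and bias omission are assumptions.
    h1 = relu(x @ W1)       # first FC layer
    h2 = relu(h1 @ W2)      # second FC layer
    return h1 + (h2 @ W3)   # third FC layer + skip from h1

def tdnn_layer(frames, offsets, W1, W2, W3):
    # A TDNN layer as a temporal convolution: splice frames at the given
    # temporal offsets, then apply the (deep) kernel at every time step.
    # Clamping at the sequence edges is an illustrative assumption.
    T, _ = frames.shape
    out = []
    for t in range(T):
        idx = [min(max(t + o, 0), T - 1) for o in offsets]
        spliced = frames[idx].reshape(-1)   # concatenated context window
        out.append(deep_kernel(spliced, W1, W2, W3))
    return np.stack(out)
```

Replacing the single affine map of a standard TDNN kernel with `deep_kernel` deepens the network without changing the temporal splicing structure, which is the idea the paper's WER gains are attributed to.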