
SPEECH RECOGNITION MODEL COMPRESSION

Citation Author(s):
Ahmed Tewfik, Raj Pawate
Submitted by:
Madhumitha Sakthi
Last updated:
25 May 2020 - 2:17pm
Document Type:
Presentation Slides
Document Year:
2020
Event:
Presenters:
Madhumitha Sakthi
Paper Code:
4186
 

Deep Neural Network-based speech recognition systems are widely used in most speech processing applications. To achieve better model robustness and accuracy, these networks are constructed with millions of parameters, making them storage- and compute-intensive. In this paper, we propose Bin & Quant (B&Q), a compression technique with which we reduced the Deep Speech 2 speech recognition model size by 7 times for a negligible loss in accuracy. We show that our algorithm is generally beneficial, based on its effectiveness across two other speech recognition models and the VGG16 model. We also show empirically that Recurrent Neural Networks (RNNs) are more sensitive to model parameter perturbation than Convolutional Neural Networks (CNNs), followed by fully connected (FC) networks. Using our B&Q technique, we establish parameter sharing across layers instead of just within a particular layer.
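The abstract does not spell out the B&Q procedure, so the following is only a minimal sketch of a generic bin-and-quantize scheme with a codebook shared across layers; the function names (bin_and_quant, reconstruct), the equal-width binning rule, and the choice of 256 bins are illustrative assumptions, not the paper's exact algorithm.

```python
import numpy as np

def bin_and_quant(weights, num_bins=256):
    """Bin a flat array of weights by value and replace each weight
    with its bin centroid (illustrative sketch, not the paper's B&Q)."""
    # Equal-width bins over the observed weight range (an assumption).
    edges = np.linspace(weights.min(), weights.max(), num_bins + 1)
    bin_idx = np.clip(np.digitize(weights, edges) - 1, 0, num_bins - 1)

    # Centroid (mean) of the weights falling into each bin.
    centroids = np.array([
        weights[bin_idx == b].mean() if np.any(bin_idx == b) else 0.0
        for b in range(num_bins)
    ], dtype=np.float32)

    # Store only the small codebook plus per-weight bin indices
    # (uint8 suffices for 256 bins) instead of full-precision floats.
    return centroids, bin_idx.astype(np.uint8)

def reconstruct(centroids, bin_idx):
    # Look up each weight's shared centroid value.
    return centroids[bin_idx]

# Pooling weights from several layers before binning lets one codebook
# (and hence parameter sharing) span multiple layers rather than one.
layers = [np.random.randn(1000).astype(np.float32) for _ in range(3)]
pooled = np.concatenate(layers)
codebook, codes = bin_and_quant(pooled, num_bins=256)
approx = reconstruct(codebook, codes)
print("max abs quantization error:", np.abs(pooled - approx).max())
```

With a 256-entry codebook, each 32-bit weight is replaced by an 8-bit index plus a small shared table, which is roughly the order of size reduction the abstract reports, though the paper's actual binning and accuracy-recovery steps may differ.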
