
End-to-end Keyword Spotting using Neural Architecture Search and Quantization

Citation Author(s):
David Peter, Wolfgang Roth, Franz Pernkopf
Submitted by:
David Peter
Last updated:
5 May 2022 - 3:24pm
Document Type:
Poster, Slides and Paper
Document Year:
2022
Event:
Presenters:
Franz Pernkopf
Paper Code:
MLSP-12.1
 

This paper introduces neural architecture search (NAS) for the automatic discovery of end-to-end keyword spotting (KWS) models in limited-resource environments. We employ a differentiable NAS approach to optimize the structure of convolutional neural networks (CNNs) operating on raw audio waveforms. Once a suitable KWS model has been found with NAS, we quantize its weights and activations to reduce the memory footprint. We conduct extensive experiments on the Google Speech Commands dataset and, in particular, compare our end-to-end approach to systems based on mel-frequency cepstral coefficients (MFCCs). For quantization, we compare fixed bit-width quantization with trained bit-width quantization. Using NAS alone, we obtain a highly efficient model with an accuracy of 95.55% using 75.7k parameters and 13.6M operations. With trained bit-width quantization, the same model achieves a test accuracy of 93.76% while using, on average, only 2.91 bits per activation and 2.51 bits per weight.
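The abstract describes the two main ingredients, differentiable NAS and quantization, only at a high level. As a rough illustrative sketch (not the authors' implementation), a DARTS-style mixed operation over candidate 1-D convolutions on raw audio, combined with a straight-through uniform quantizer, could look as follows in PyTorch. All class names, the candidate kernel sizes, and the fixed 3-bit setting are assumptions for illustration; the paper learns the bit-widths rather than fixing them.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class MixedOp1d(nn.Module):
    """DARTS-style mixed operation: a softmax-weighted sum of candidate
    1-D convolutions applied to a raw-audio feature map (illustrative)."""

    def __init__(self, channels, kernel_sizes=(3, 5, 7)):
        super().__init__()
        self.ops = nn.ModuleList(
            nn.Sequential(
                nn.Conv1d(channels, channels, k, padding=k // 2, bias=False),
                nn.BatchNorm1d(channels),
                nn.ReLU(),
            )
            for k in kernel_sizes
        )
        # Architecture parameters (one logit per candidate op), trained
        # jointly with the network weights by gradient descent.
        self.alpha = nn.Parameter(torch.zeros(len(kernel_sizes)))

    def forward(self, x):
        weights = F.softmax(self.alpha, dim=0)
        return sum(w * op(x) for w, op in zip(weights, self.ops))


class STEQuantizer(nn.Module):
    """Uniform quantizer with a straight-through gradient estimator.
    The bit-width is fixed here (3 bits), standing in for the trained
    bit-widths used in the paper."""

    def __init__(self, bits=3):
        super().__init__()
        self.levels = 2 ** bits - 1

    def forward(self, x):
        x = torch.clamp(x, 0.0, 1.0)
        q = torch.round(x * self.levels) / self.levels
        # Straight-through: quantized values in the forward pass,
        # identity gradient in the backward pass.
        return x + (q - x).detach()


if __name__ == "__main__":
    # (batch, channels, time): hypothetical features from a raw waveform front end.
    x = torch.randn(8, 16, 2000)
    block = MixedOp1d(channels=16)
    quant = STEQuantizer(bits=3)
    print(quant(block(x)).shape)
```

After the search, the candidate with the largest architecture weight per mixed operation is typically kept and the resulting network is retrained; quantizing both activations and weights with such a scheme is what reduces the memory footprint reported above.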
