Spiking neural networks trained with backpropagation for low power neuromorphic implementation of voice activity detection
- Submitted by:
- Flavio Martinelli
- Last updated:
- 27 May 2020 - 8:49am
- Document Type:
- Presentation Slides
- Document Year:
- 2020
- Presenters:
- Flavio Martinelli
- Paper Code:
- SS-L5.5
Recent advances in Voice Activity Detection (VAD) are driven by artificial and recurrent neural networks (RNNs); however, using a VAD system in battery-operated devices requires further power efficiency. This can be achieved by neuromorphic hardware, which enables Spiking Neural Networks (SNNs) to perform inference at very low energy consumption. Spiking networks are characterized by their ability to process information efficiently, as a sparse cascade of binary events in time called spikes. However, a large performance gap separates artificial from spiking networks, mostly due to the lack of powerful SNN training algorithms. To overcome this problem, we exploit an SNN model that can be recast into a recurrent network and trained with known deep learning techniques. We describe a training procedure that achieves low spiking activity and apply pruning algorithms to remove up to 85% of the network connections with no performance loss. The model achieves performance competitive with the state of the art at a fraction of the power consumption of other methods.
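The abstract does not spell out the neuron model or training details, so the sketch below is only a minimal illustration of the general idea of recasting an SNN as a recurrent network trainable with backpropagation: a leaky integrate-and-fire layer unrolled in time, a surrogate gradient in place of the non-differentiable spike, a spike-rate penalty to encourage low activity, and magnitude-based pruning of connections. The class names (SpikeFn, LIFCell), the decay constant, the threshold, and the 40-dimensional input are assumptions for illustration, not the authors' implementation.

```python
import torch
import torch.nn as nn
from torch.nn.utils import prune


class SpikeFn(torch.autograd.Function):
    """Heaviside spike nonlinearity with a fast-sigmoid surrogate gradient."""

    @staticmethod
    def forward(ctx, v):
        ctx.save_for_backward(v)
        return (v > 0).float()

    @staticmethod
    def backward(ctx, grad_output):
        (v,) = ctx.saved_tensors
        # Surrogate derivative 1 / (1 + |v|)^2 lets gradients flow through the
        # non-differentiable spike during backpropagation through time.
        return grad_output / (1.0 + v.abs()) ** 2


class LIFCell(nn.Module):
    """Leaky integrate-and-fire layer unrolled in time like a recurrent cell
    (illustrative sketch, not the model from the paper)."""

    def __init__(self, n_in, n_out, beta=0.9):
        super().__init__()
        self.fc = nn.Linear(n_in, n_out, bias=False)
        self.beta = beta  # membrane decay constant (illustrative value)

    def forward(self, x):
        # x: (batch, time, n_in) acoustic features
        batch, steps, _ = x.shape
        v = x.new_zeros(batch, self.fc.out_features)  # membrane potential
        spikes = []
        for t in range(steps):
            v = self.beta * v + self.fc(x[:, t])  # leaky integration of input
            s = SpikeFn.apply(v - 1.0)            # spike when v crosses threshold 1.0
            v = v - s                             # soft reset after a spike
            spikes.append(s)
        return torch.stack(spikes, dim=1)


# Illustrative usage on dummy 40-dimensional filterbank frames.
cell = LIFCell(40, 128)
x = torch.randn(8, 100, 40)      # (batch, time, features)
spikes = cell(x)
rate_penalty = spikes.mean()     # add to the loss to encourage low spiking activity

# Magnitude-based pruning of connections (85%, matching the figure in the abstract).
prune.l1_unstructured(cell.fc, name="weight", amount=0.85)
```

Because the spiking layer is unrolled like an RNN, standard deep learning tooling (optimizers, regularizers, pruning utilities) applies directly; the surrogate gradient is the only modification needed to train through the binary spike events.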