Stochastic Tucker-Decomposed Recurrent Neural Networks for Forecasting
- Submitted by:
- Zachariah Carmichael
- Last updated:
- 30 November 2020 - 4:41pm
- Document Type:
- Presentation Slides
- Document Year:
- 2019
- Presenters:
- Zachariah Carmichael
- Paper Code:
- 1570570141
The growing edge computing paradigm, notably the vision of the internet-of-things (IoT), calls for a new class of lightweight algorithms. Currently, the most successful models that learn from temporal data, which is prevalent in IoT applications, stem from the field of deep learning. However, these models exhibit long training times and heavy resource requirements, prohibiting training in constrained environments. To address these concerns, we employ deep stochastic neural networks from the reservoir computing paradigm. These networks train quickly with no need for backpropagation, and we further accelerate training by employing Tucker decomposition. We demonstrate that such networks benefit from both tensorization and compression, achieving a reduction in FLOPs of up to ~95% while outperforming their uncompressed counterparts.
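The abstract combines two ideas: a reservoir network whose recurrent weights stay fixed and whose readout is trained in closed form (no backpropagation), and Tucker decomposition of the tensorized reservoir weights. The sketch below illustrates both under illustrative assumptions, not the paper's settings: the reservoir size, tensorization shape, Tucker ranks, leak rate, and ridge penalty are placeholders, and the truncated HOSVD used here is one standard way to compute a Tucker decomposition.

```python
# Minimal sketch: an echo state network whose reservoir weight matrix is
# reshaped into a 4-way tensor and compressed via truncated HOSVD (a Tucker
# decomposition). All hyperparameters are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

# Reservoir weights, tensorized: a 256x256 matrix viewed as a 16^4 tensor.
# The 1/sqrt(n) scaling puts the spectral radius near 0.9 (echo state property).
n = 256
W = 0.9 * rng.standard_normal((n, n)) / np.sqrt(n)
T = W.reshape(16, 16, 16, 16)

def unfold(t, mode):
    """Mode-n unfolding of a tensor into a matrix."""
    return np.moveaxis(t, mode, 0).reshape(t.shape[mode], -1)

def tucker_hosvd(t, ranks):
    """Truncated HOSVD: factors are leading left singular vectors of each
    mode unfolding; the core is the tensor contracted with all factors."""
    factors = []
    for mode, r in enumerate(ranks):
        U, _, _ = np.linalg.svd(unfold(t, mode), full_matrices=False)
        factors.append(U[:, :r])
    core = t
    for mode, U in enumerate(factors):
        core = np.moveaxis(np.tensordot(U.T, core, axes=(1, mode)), 0, mode)
    return core, factors

def tucker_reconstruct(core, factors):
    """Multiply the core by each factor matrix along its mode."""
    t = core
    for mode, U in enumerate(factors):
        t = np.moveaxis(np.tensordot(U, t, axes=(1, mode)), 0, mode)
    return t

ranks = (8, 8, 8, 8)  # assumed Tucker ranks
core, factors = tucker_hosvd(T, ranks)
W_compressed = tucker_reconstruct(core, factors).reshape(n, n)

# Fixed (untrained) leaky reservoir driven by a scalar input sequence.
w_in = 0.5 * rng.standard_normal(n)

def run_reservoir(u, W_res, leak=0.3):
    x = np.zeros(n)
    states = []
    for u_t in u:
        x = (1 - leak) * x + leak * np.tanh(W_res @ x + w_in * u_t)
        states.append(x.copy())
    return np.array(states)

# Toy one-step-ahead forecasting task; the readout is fit by ridge
# regression in closed form -- no backpropagation anywhere.
u = np.sin(np.linspace(0, 8 * np.pi, 500))
y = np.roll(u, -1)
X = run_reservoir(u, W_compressed)
ridge = 1e-6
W_out = np.linalg.solve(X.T @ X + ridge * np.eye(n), X.T @ y)
print("train MSE:", np.mean((X @ W_out - y) ** 2))
```

Note that the FLOP savings the abstract reports come from operating on the factored form directly at inference; this sketch reconstructs the full weight matrix only to keep the reservoir update short.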
Read the paper here: https://ieeexplore.ieee.org/abstract/document/8969554/