REVE: Regularizing Deep Learning using Variational Entropy Bound
- Submitted by:
- Antoine Saporta
- Last updated:
- 20 September 2019 - 12:07am
- Document Type:
- Presentation Slides
- Document Year:
- 2019
- Presenters:
- Antoine Saporta
- Paper Code:
- 1363
Studies of the generalization performance of machine learning algorithms through the lens of information theory suggest that compressed representations can guarantee good generalization, inspiring many compression-based regularization methods. In this paper, we introduce REVE, a new regularization scheme. Noting that compressing the representation itself can be sub-optimal, our first contribution is to identify a variable that is directly responsible for the final prediction. Our method aims to minimize the class-conditional entropy of this variable. Second, we introduce a variational upper bound on this conditional entropy term. Finally, we propose a scheme to instantiate a tractable loss that is integrated into the training procedure of the neural network, and we demonstrate its effectiveness on different neural networks and datasets.
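To make the core idea concrete, the sketch below shows one generic way to bound a class-conditional entropy variationally: for any distribution q, H(Z|Y) ≤ E[-log q(Z|Y)], so fitting a per-class diagonal Gaussian q(z|y) to a batch of representations yields a tractable upper bound that can be added to the task loss. This is a minimal illustration of the variational-bound principle only, not the authors' actual REVE loss; the function name, the Gaussian choice of q, and the weighting are all assumptions for illustration.

```python
import numpy as np

def variational_cond_entropy_bound(z, y, eps=1e-6):
    """Upper-bound H(Z|Y) by the cross-entropy E[-log q(z|y)],
    with q(z|y) a per-class diagonal Gaussian fit to the batch.
    z: (n, d) array of representations; y: (n,) integer class labels.
    Illustrative sketch, not the REVE paper's exact loss."""
    n, d = z.shape
    total_nll = 0.0
    for c in np.unique(y):
        zc = z[y == c]
        mu = zc.mean(axis=0)
        var = zc.var(axis=0) + eps  # eps avoids log(0) for degenerate dims
        # Negative log-density of each sample under the class Gaussian
        nll = 0.5 * (np.log(2 * np.pi * var) + (zc - mu) ** 2 / var).sum(axis=1)
        total_nll += nll.sum()
    return total_nll / n  # average bound over the batch
```

In training, such a term would typically be added to the usual objective as `loss = task_loss + lam * bound`, with `lam` a regularization weight (a hypothetical hyperparameter here). Representations that cluster tightly within each class receive a smaller penalty than diffuse ones, which is the qualitative behavior a conditional-entropy regularizer is meant to encourage.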