
Recent advances in neural network pruning have made it possible to remove a large number of filters without any perceptible drop in accuracy. However, the resulting speed gain depends on the number of filters per layer. In this paper, we propose a one-shot, layer-wise proxy classifier to estimate layer importance, which in turn allows us to prune a whole layer. In contrast to existing filter pruning methods, which attempt to reduce the width of a dense model's layers, our method reduces its depth and can thus guarantee an inference speed-up.
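To make the idea of a layer-wise proxy classifier concrete, here is a minimal PyTorch sketch (not the authors' code) of one plausible reading: attach a small linear probe to the frozen features after each block, train each probe briefly in one shot, and treat blocks whose probe barely improves on the previous depth's probe as candidates for whole-layer removal. The backbone, the probe training budget, and the toy data are all illustrative assumptions.

```python
# Sketch of one-shot layer-wise probing for layer importance (assumptions
# throughout: the block structure, probe budget, and toy data are ours).
import torch
import torch.nn as nn

torch.manual_seed(0)

dim, num_classes = 32, 10
# Toy backbone: a stack of same-width blocks standing in for a deep model.
blocks = nn.ModuleList(
    nn.Sequential(nn.Linear(dim, dim), nn.ReLU()) for _ in range(6)
)

# Toy tensors standing in for real train/validation splits.
x_train, y_train = torch.randn(512, dim), torch.randint(0, num_classes, (512,))
x_val, y_val = torch.randn(256, dim), torch.randint(0, num_classes, (256,))

def probe_accuracy(feats_train, feats_val):
    """Train a linear proxy classifier on frozen features; return val accuracy."""
    probe = nn.Linear(feats_train.size(1), num_classes)
    opt = torch.optim.Adam(probe.parameters(), lr=1e-2)
    for _ in range(100):  # "one-shot": a short, fixed training budget
        opt.zero_grad()
        loss = nn.functional.cross_entropy(probe(feats_train), y_train)
        loss.backward()
        opt.step()
    with torch.no_grad():
        return (probe(feats_val).argmax(1) == y_val).float().mean().item()

# Collect frozen features at every depth (input, then after each block).
with torch.no_grad():
    h_train, h_val = x_train, x_val
    feats = [(h_train, h_val)]
    for blk in blocks:
        h_train, h_val = blk(h_train), blk(h_val)
        feats.append((h_train, h_val))

accs = [probe_accuracy(ft, fv) for ft, fv in feats]

# A block's importance is the marginal accuracy its features add; blocks
# with near-zero or negative gain are candidates for whole-layer pruning.
gains = [accs[i + 1] - accs[i] for i in range(len(blocks))]
prune = [i for i, g in enumerate(gains) if g <= 0.0]
print("probe accuracies by depth:", [round(a, 3) for a in accs])
print("candidate blocks to prune:", prune)
```

Because entire blocks are removed rather than thinned, every forward pass skips their computation outright, which is why depth reduction can guarantee a speed-up where width reduction may not.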
