One-Shot Layer-Wise Accuracy Approximation for Layer Pruning
- Citation Author(s):
- Submitted by:
- Sara Elkerdawy
- Last updated:
- 3 November 2020 - 11:45am
- Document Type:
- Presentation Slides
- Document Year:
- 2020
- Event:
- Paper Code:
- https://github.com/selkerdawy/one-shot-layer-pruning
- Categories:
- Keywords:
Recent advances in neural network pruning have made it possible to remove a large number of filters without any perceptible drop in accuracy. However, the gain in speed depends on the number of filters per layer. In this paper, we propose a one-shot layer-wise proxy classifier to estimate layer importance, which in turn allows us to prune whole layers. In contrast to existing filter pruning methods, which reduce the layer width of a dense model, our method reduces its depth and can thus guarantee an inference speed-up. In our proposed method, we first pass over the training data once to construct a proxy classifier for each layer using imprinting. Next, we prune the layers with the smallest accuracy difference from their preceding layer until a latency budget is met. Finally, we fine-tune the newly pruned model to recover accuracy. Experimental results show a 43.70% latency reduction with a 1.27% accuracy increase on CIFAR-100 for the pruned VGG19. Further, we achieve 16% and 25% latency reductions with a 0.58% increase and a 0.01% decrease in accuracy, respectively, on ImageNet for ResNet-50. The major advantage of our proposed method is that these latency reductions cannot be achieved with existing filter pruning methods, as they are bounded by the original model's depth.
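The linked repository contains the authors' implementation. As a rough illustration only, the sketch below shows how a layer-wise proxy classifier could be built with weight imprinting (class-mean, cosine-similarity classifiers over pooled layer features) and how layers might then be ranked by their accuracy gain over the preceding layer. The function names, the pooling choice, and the ranking helper are assumptions for illustration, not the paper's exact code.

```python
import torch
import torch.nn.functional as F

def imprinted_accuracy(features, labels, num_classes):
    """Build an imprinted proxy classifier from one pass of pooled layer
    features and return its top-1 accuracy on the same data.

    features: (N, D) pooled activations of one layer
    labels:   (N,) ground-truth class indices
    Assumes every class appears at least once in `labels`.
    """
    feats = F.normalize(features, dim=1)                # L2-normalize embeddings
    weights = torch.zeros(num_classes, feats.size(1))
    for c in range(num_classes):
        weights[c] = feats[labels == c].mean(dim=0)     # class mean = imprinted weight
    weights = F.normalize(weights, dim=1)
    preds = (feats @ weights.t()).argmax(dim=1)         # cosine-similarity classifier
    return (preds == labels).float().mean().item()

def rank_layers_for_pruning(layer_features, labels, num_classes):
    """Score each layer by its proxy-accuracy gain over the preceding layer;
    layers with the smallest gain are the first candidates for removal."""
    accs = [imprinted_accuracy(f, labels, num_classes) for f in layer_features]
    gains = [accs[0]] + [accs[i] - accs[i - 1] for i in range(1, len(accs))]
    # Ascending order: smallest accuracy difference first.
    return sorted(range(len(gains)), key=lambda i: gains[i])
```

Under these assumptions, the candidate layers returned first would be removed one by one until the latency budget is met, after which the shortened model is fine-tuned as described in the abstract.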