DBS

DOI:
10.60864/ntpm-m828
Submitted by:
Zhaokai Zhang
Last updated:
13 April 2024 - 8:20am
Document Type:
letter

Network pruning is an effective technique for reducing the computation cost of deploying deep models on resource-constrained devices. Searching for superior sub-networks in a vast search space through Neural Architecture Search (NAS), which trains a one-shot supernet as a performance estimator, remains time-consuming. Beyond search inefficiency, such solutions also focus only on a FLOPs budget and suffer from poor ranking consistency between supernet-inherited and stand-alone performance. To address these problems, we propose a framework named DBS. First, we pre-sample sub-networks with similar budget settings as starting points, and then train these starting points in a supernet using a strict path-wise fair sandwich rule. Second, we train Transformer-based predictors on the performance and budget (FLOPs or latency) of the starting points. We then freeze the predictors' parameters and apply a differentiable budget-aware search over continuous sub-network vectors. Finally, we obtain the derived sub-networks from the optimized vectors with a decoder. Comprehensive experiments on ImageNet with ResNet and MobileNet-V2 under various FLOPs and latency settings show consistent improvements over state-of-the-art methods.
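The abstract's third stage, a gradient-based search over continuous sub-network vectors against frozen predictors, can be illustrated with a minimal PyTorch sketch. This is an assumption-laden illustration, not the paper's implementation: the predictor networks here are small MLP stand-ins for the paper's Transformer-based predictors, and the vector length, budget target, penalty weight, and the final decoding step are all hypothetical.

```python
# Minimal sketch of a differentiable budget-aware search (assumptions noted inline).
import torch

dim = 32             # length of the continuous sub-network encoding (assumed)
budget_target = 1.0  # normalized FLOPs or latency budget (assumed)
lam = 10.0           # weight of the budget-violation penalty (assumed)

# Stand-in frozen predictors; the paper uses Transformer-based ones trained
# on the sampled starting points' performance and budget.
perf_pred = torch.nn.Sequential(
    torch.nn.Linear(dim, 64), torch.nn.ReLU(), torch.nn.Linear(64, 1))
budget_pred = torch.nn.Sequential(
    torch.nn.Linear(dim, 64), torch.nn.ReLU(), torch.nn.Linear(64, 1))
for p in list(perf_pred.parameters()) + list(budget_pred.parameters()):
    p.requires_grad_(False)  # predictors stay frozen during the search

z = torch.randn(dim, requires_grad=True)  # continuous sub-network vector
opt = torch.optim.Adam([z], lr=0.01)

for step in range(500):
    opt.zero_grad()
    acc = perf_pred(z)[0]    # predicted stand-alone performance
    cost = budget_pred(z)[0] # predicted FLOPs or latency
    # Maximize predicted performance while penalizing budget violations.
    loss = -acc + lam * torch.relu(cost - budget_target) ** 2
    loss.backward()          # gradients flow to z through the frozen predictors
    opt.step()

# A decoder (not shown; the paper's final stage) would map the optimized
# vector z back to a discrete sub-network, e.g. per-layer channel widths.
```

Because both predictors are differentiable in the input vector, the search optimizes z directly by gradient descent instead of enumerating discrete sub-networks, which is what makes the budget-aware search stage fast once the predictors are trained.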
