SRQ: Self-reference quantization scheme for lightweight neural network

Citation Author(s):
Xiaobin Li, Hongxu Jiang, Shuangxi Huang, Fangzheng Tian, Runhua Zhang, Dong Dong
Submitted by:
Xiaobin Li
Last updated:
28 February 2021 - 3:02am
Lightweight neural networks (LNNs) now play a vital role in embedded applications with limited resources. Quantizing an LNN to low bit precision is an effective solution that further reduces computational and memory requirements. However, avoiding significant accuracy degradation relative to a heavyweight neural network remains challenging, owing to the LNN's numerical approximation and lower redundancy. In this paper, we propose SRQ, a novel robustness-aware self-reference quantization scheme for LNNs, as Fig. 1 shows, which improves performance by efficiently distilling structural information and takes the robustness of the quantized LNN into consideration. Specifically, SRQ introduces a structural loss between the original LNN and the quantized LNN, which not only improves accuracy but also enables further fine-tuning of the quantized network by applying a Lipschitz constraint to the structural loss. In addition, we consider the robustness of quantized LNNs for the first time and propose a non-sensitive perturbation loss that introduces an extra spectral-norm term. Experimental results show that SRQ effectively improves the accuracy and robustness of state-of-the-art quantization methods such as DoReFa and PACT.
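The abstract combines three ingredients: low-bit weight quantization (here DoReFa-style, which the paper builds on), a structural loss between the full-precision and quantized networks, and a spectral-norm term that limits sensitivity to input perturbations. The sketch below illustrates each ingredient in NumPy; it is not the authors' code, and the loss weighting and helper names are our own assumptions.

```python
import numpy as np

def quantize_k(x, k):
    """Uniform k-bit quantizer on [0, 1] (the DoReFa building block)."""
    n = 2 ** k - 1
    return np.round(x * n) / n

def dorefa_weight_quant(w, k):
    """DoReFa weight quantization: squash with tanh, normalize to [0, 1],
    quantize to k bits, then rescale to [-1, 1]."""
    t = np.tanh(w)
    t = t / (2 * np.max(np.abs(t))) + 0.5
    return 2 * quantize_k(t, k) - 1

def structural_loss(feat_full, feat_quant):
    """Toy structural (self-reference) term: MSE between the full-precision
    and quantized networks' activations on the same input."""
    return float(np.mean((feat_full - feat_quant) ** 2))

def spectral_norm_penalty(w):
    """Largest singular value of the weight matrix; bounding it bounds the
    layer's Lipschitz constant, i.e. its sensitivity to perturbations."""
    return float(np.linalg.norm(w, ord=2))

# Example: one linear layer, 2-bit weights, combined loss with an
# illustrative 0.01 weight on the robustness term.
rng = np.random.default_rng(0)
w = rng.standard_normal((4, 4))
wq = dorefa_weight_quant(w, k=2)
x = rng.standard_normal(4)
loss = structural_loss(w @ x, wq @ x) + 0.01 * spectral_norm_penalty(wq)
```

With k=2, the quantized weights take at most four distinct levels in [-1, 1]; the structural term pulls the quantized layer's output toward the full-precision one, while the spectral-norm term keeps the quantized layer smooth.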

Dataset Files

PDF of the presentation slides