Compressing Deep Networks Using Fisher Score of Feature Maps
We introduce a new structured technique for pruning deep neural networks with skip connections by removing the less informative layers according to their Fisher scores. Extensive experiments on the classification of the CIFAR-10, CIFAR-100, and SVHN data sets demonstrate the efficacy of the proposed method in compressing deep models, both in terms of the number of parameters and the number of operations.
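To make the scoring step concrete, here is a minimal PyTorch sketch that accumulates a Fisher-style score per residual block using the common approximation E[(activation · gradient)²]; the paper's exact criterion may differ, and `model`, `blocks`, `loader`, and `loss_fn` are hypothetical placeholders:

```python
import torch

def fisher_scores(model, blocks, loader, loss_fn):
    """Accumulate a Fisher-style score for each residual block's output."""
    acts = {}

    def hook(name):
        def fn(module, inp, out):
            out.retain_grad()          # keep d(loss)/d(feature map)
            acts[name] = out
        return fn

    handles = [b.register_forward_hook(hook(n)) for n, b in blocks.items()]
    scores = {n: 0.0 for n in blocks}

    for x, y in loader:
        model.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        for n, a in acts.items():
            # Fisher approximation: sum over the batch of (a * dL/da)^2
            scores[n] += (a * a.grad).pow(2).sum().item()

    for h in handles:
        h.remove()
    return scores  # blocks with the lowest scores are pruned first
```

Because each candidate block sits on a skip connection, a low-scoring block can be removed simply by letting the identity path carry its input forward.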
Data compression is used in a wide variety of tasks, including the compression of databases, large learning models, videos, and images. The cost of decompressing (decoding) data can be prohibitive for certain real-time applications, and in many scenarios it is acceptable to sacrifice some compression ratio in the interest of fast decoding.
Compact Polyominoes
We provide a compact representation of polyominoes with n cells that supports navigation and visibility queries in constant time.
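As a toy illustration of the query interface only, the following Python sketch stores cells in a hash set rather than the paper's succinct representation; it shows what a constant-time navigation query looks like, not the compact encoding itself:

```python
class Polyomino:
    MOVES = {"N": (0, 1), "S": (0, -1), "E": (1, 0), "W": (-1, 0)}

    def __init__(self, cells):
        self.cells = set(cells)  # each cell is an (x, y) pair

    def neighbour(self, cell, direction):
        """Adjacent occupied cell in the given direction, or None."""
        dx, dy = self.MOVES[direction]
        nxt = (cell[0] + dx, cell[1] + dy)
        return nxt if nxt in self.cells else None

# Example: an L-shaped tromino.
p = Polyomino([(0, 0), (0, 1), (1, 0)])
assert p.neighbour((0, 0), "N") == (0, 1)
assert p.neighbour((1, 0), "E") is None
```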
Rate-distortion Optimized Coding for Efficient CNN Compression
In this paper, we present a coding framework for deep convolutional neural network compression. Our approach draws on classical coding theory and formulates the compression of deep convolutional neural networks as a rate-distortion optimization problem. We incorporate three coding ingredients, bit allocation, dead-zone quantization, and Tunstall coding, to improve the rate-distortion frontier without introducing noticeable system-level overhead.
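As a sketch of one named ingredient, dead-zone quantization, the NumPy snippet below widens the zero bin so that small weights are coded as exact zeros, which skews the rate-distortion trade-off toward sparser, cheaper-to-code weights; the step size and reconstruction offset are illustrative choices, not the paper's bit-allocation result:

```python
import numpy as np

def deadzone_quantize(w, step, f=0.0):
    """Dead-zone uniform quantizer. With f = 0 the zero bin is twice as
    wide as the others, so small weights collapse to exactly zero."""
    return np.sign(w) * np.floor(np.abs(w) / step + f)

def deadzone_dequantize(level, step):
    """Reconstruct at the centre of each nonzero bin."""
    return np.sign(level) * (np.abs(level) + 0.5) * step * (level != 0)

w = np.array([-0.9, -0.04, 0.03, 0.3, 1.2])
lv = deadzone_quantize(w, step=0.25)        # [-3., -0., 0., 1., 4.]
w_hat = deadzone_dequantize(lv, step=0.25)  # small weights are now zero
```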
Compressive Sensing via Unfolded ℓ_0-Constrained Convolutional Sparse Coding
The Run Length Encoding (RLE) compression method is a long-standing, simple lossless compression scheme that is easy to implement and achieves good compression on input data containing runs of repeating consecutive symbols. In its pure form, RLE is not applicable to natural text or other input data with only short sequences of identical symbols. We present a combination of preprocessing steps that turns arbitrary byte-wise input data into a bit-string that is highly suitable for RLE compression.
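For reference, a plain byte-level RLE codec over the run representation looks as follows; the abstract does not specify the preprocessing steps that create the long runs, so this sketch shows only the final RLE stage the pipeline feeds into:

```python
def rle_encode(data: bytes) -> list:
    """Encode as (symbol, run_length) pairs."""
    runs = []
    for b in data:
        if runs and runs[-1][0] == b:
            runs[-1] = (b, runs[-1][1] + 1)   # extend the current run
        else:
            runs.append((b, 1))               # start a new run
    return runs

def rle_decode(runs) -> bytes:
    return b"".join(bytes([s]) * n for s, n in runs)

data = b"aaaabbbcca"
runs = rle_encode(data)        # [(97, 4), (98, 3), (99, 2), (97, 1)]
assert rle_decode(runs) == data
```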
SRQ: Self-reference Quantization Scheme for Lightweight Neural Network
Lightweight neural networks (LNNs) play a vital role in embedded applications with limited resources. Quantizing an LNN to low bit precision is an effective solution that further reduces computational and memory requirements. However, it remains challenging to avoid significant accuracy degradation relative to a heavy neural network, owing to the LNN's numerical approximation and lower redundancy. In this paper, we propose a novel robustness-aware self-reference quantization scheme for LNNs (SRQ).
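The abstract does not detail the self-reference mechanism, so as background here is a generic symmetric uniform quantizer for low-bit weights, the baseline that such schemes refine; the bit-width and the per-tensor max-abs scaling are illustrative assumptions:

```python
import numpy as np

def quantize_weights(w, bits=4):
    """Map float weights to signed integers in [-(2^(b-1)-1), 2^(b-1)-1]."""
    qmax = 2 ** (bits - 1) - 1
    scale = np.abs(w).max() / qmax            # per-tensor scale
    q = np.clip(np.round(w / scale), -qmax, qmax).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale

w = np.random.randn(64).astype(np.float32)
q, s = quantize_weights(w, bits=4)
err = np.abs(dequantize(q, s) - w).max()      # bounded by roughly s / 2
```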
Joint Asymmetric Convolution Block and Local-Global Context Optimization for Learned Image Compression