
DCC 2021 Virtual Conference. The Data Compression Conference (DCC) is an international forum for current work on data compression and related applications. Both theoretical and experimental work are of interest.

In this paper, we present a coding framework for deep convolutional neural network compression. Our approach draws on classical coding theory and formulates the compression of deep convolutional neural networks as a rate-distortion optimization problem. We incorporate three coding ingredients into the framework, namely bit allocation, dead-zone quantization, and Tunstall coding, to improve the rate-distortion frontier without introducing noticeable system-level overhead.
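One of the ingredients above, dead-zone quantization, can be sketched in a few lines. This is an illustrative implementation only: the step size, dead-zone ratio, and function names are hypothetical, not the paper's exact configuration.

```python
import numpy as np

def dead_zone_quantize(weights, step, dz_ratio=1.0):
    """Uniform quantizer with a widened zero bin (dead zone).

    Values with |w| < dz_ratio * step / 2 map to symbol 0, which drives
    many small network weights to exactly zero and lowers the bit rate.
    (Illustrative parameters, not the paper's configuration.)
    """
    sign = np.sign(weights)
    mag = np.abs(weights)
    # symbols outside the dead zone are quantized uniformly with width `step`
    idx = np.where(mag < dz_ratio * step / 2, 0,
                   np.floor((mag - dz_ratio * step / 2) / step) + 1)
    return sign * idx  # integer symbols, to be entropy-coded (e.g. Tunstall)

def dead_zone_dequantize(symbols, step, dz_ratio=1.0):
    """Map symbols back to reconstruction levels at each bin's midpoint."""
    sign = np.sign(symbols)
    mag = np.abs(symbols)
    return np.where(mag == 0, 0.0,
                    sign * (dz_ratio * step / 2 + (mag - 0.5) * step))
```

The wider zero bin trades a slightly larger distortion on small weights for many more zero symbols, which an entropy coder such as Tunstall coding can then represent very cheaply.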


The Run Length Encoding (RLE) compression method is a long-standing, simple lossless compression scheme that is easy to implement and achieves good compression on input data containing repeating consecutive symbols. In its pure form, RLE is not applicable to natural text or other input data with only short sequences of identical symbols. We present a combination of preprocessing steps that turn arbitrary byte-wise input data into a bit-string which is highly suitable for RLE compression.
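The baseline RLE scheme the preprocessing targets can be sketched as follows; this is a generic textbook implementation, not the paper's pipeline, and the function names are hypothetical.

```python
def rle_encode(data):
    """Run-length encode an iterable of symbols as (symbol, count) pairs."""
    runs = []
    for s in data:
        if runs and runs[-1][0] == s:
            runs[-1][1] += 1       # extend the current run
        else:
            runs.append([s, 1])    # start a new run
    return [(s, c) for s, c in runs]

def rle_decode(pairs):
    """Invert rle_encode: expand each (symbol, count) pair."""
    return [s for s, c in pairs for _ in range(c)]
```

On input such as `"aaabbc"` this yields `[('a', 3), ('b', 2), ('c', 1)]`; on data without long runs the pair list is as long as the input, which is exactly why a run-inducing preprocessing step is needed.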


It was recently shown that the combination of source prediction, two-times oversampling, and noise shaping can be used to obtain a robust (multiple-description) audio coding framework for networks with packet loss probabilities below 10%. Specifically, it was shown that audio signals could be encoded into two descriptions (packets), which are sent separately over a communication channel. Each description yields acceptable performance on its own, and when both are combined, the performance improves.
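The two-description idea can be illustrated with a deliberately simplified toy: split an oversampled signal into its even and odd samples, one description per packet. This is only a sketch of the multiple-description principle; the paper's actual scheme additionally relies on source prediction and noise shaping, and the linear-interpolation side decoder below is a hypothetical stand-in.

```python
import numpy as np

def encode_two_descriptions(x):
    """Toy multiple-description split: even samples in one packet,
    odd samples in the other (illustration only)."""
    return x[0::2], x[1::2]

def side_decode(desc, offset, n):
    """Side decoder: one packet survived; fill the missing samples
    by linear interpolation (stand-in for prediction-based recovery)."""
    known_t = np.arange(offset, n, 2)
    return np.interp(np.arange(n), known_t, desc)

def central_decode(even, odd, n):
    """Central decoder: both packets received; interleave them
    for the full-quality reconstruction."""
    y = np.empty(n)
    y[0::2] = even
    y[1::2] = odd
    return y
```

The toy shows the defining property of multiple-description coding: either packet alone gives a usable (degraded) reconstruction, while both together restore the signal.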


In recent years, using compressed sensing (CS) as a cryptosystem has drawn increasing attention, since such a cryptosystem can perform compression and encryption simultaneously. However, this cryptosystem is vulnerable to a known-plaintext attack (KPA) under the multi-time-sampling (MTS) scenario, due to the linearity of its encoding process.
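The vulnerability follows directly from the linearity of CS encoding, y = Ax: if the same secret measurement matrix A is reused across samplings, an attacker who observes enough plaintext/ciphertext pairs can solve a linear system for A. The sketch below demonstrates this with small, hypothetical dimensions and a random Gaussian key.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 8, 4                      # signal length, number of CS measurements
A = rng.normal(size=(m, n))      # secret measurement matrix (the "key")

# Under MTS, the attacker accumulates n plaintext/ciphertext pairs
# encoded with the SAME key; stack them as columns.
X = rng.normal(size=(n, n))      # known plaintexts
Y = A @ X                        # observed ciphertexts, y_i = A x_i

# Because encoding is linear, the key is recovered exactly by
# solving Y = A_hat X once X is invertible.
A_hat = Y @ np.linalg.inv(X)
```

After this attack `A_hat` equals `A` up to numerical precision, so every subsequent ciphertext encoded with the same key is exposed; this is precisely the weakness the KPA exploits.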


In this paper, we present a new coding approach to near-lossless compression of binary sparse sources using a special class of low-density generator matrix (LDGM) codes. On the theoretical side, we prove that this class of block LDGM codes is universal, in the sense that any source with entropy less than the coding rate can be compressed and reconstructed with an arbitrarily low bit-error rate (BER). On the practical side, we employ spatially coupled LDGM codes to reduce the complexity of reconstruction by implementing an iterative sliding-window decoding algorithm.
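The universality condition can be made concrete with the binary entropy function: a Bernoulli(p) source is compressible at rate R (with vanishing BER, per the theorem) whenever H(p) < R. The helper below is a standard definition, included only to illustrate the condition.

```python
import math

def binary_entropy(p):
    """H(p) in bits per symbol for a Bernoulli(p) source."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

# Universality condition from the theorem: H(p) < R suffices.
# A 10%-sparse source has H(0.1) ~ 0.469 bits/symbol, so it fits
# under a coding rate of R = 0.5 bits/symbol.
R = 0.5
assert binary_entropy(0.1) < R
```

The sparser the source (smaller p), the smaller H(p), and hence the lower the rate at which near-lossless reconstruction remains achievable.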

