DCC 2022 Conference - The Data Compression Conference (DCC) is an international forum for current work on data compression and related applications. Both theoretical and experimental work is of interest.

Efficient image compression coding is required for high-efficiency remote browsing of pathology images in telemedicine. In this work, we establish a visibility threshold (VT) model that jointly considers multiple resolutions and visual quality levels. Based on this model, we propose an image coding method under the JPEG2000 standard for whole-slide pathology images (WSIs), which adapts to the required resolutions and visual qualities.


Recently, many efforts have been devoted to learning non-linear predictions from neighboring samples with deep neural networks. However, existing methods mainly generate predictions from local reference samples, without exploiting nonlocal self-similarity.


Radix conversion of numeric data is an indispensable component of any complete analysis of digital computation. In this poster, we first propose a binary encoding for mixed-radix digits. Second, a variant of rANS coding based on this conversion is given, which supports parallel decoding. Simulations show that the proposed coding in serial mode has a higher throughput than the baseline (a speed-up factor of about 2×) without loss of compression ratio, and it outperforms the existing 2-way interleaving implementation.
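To illustrate the underlying idea of radix conversion, the following minimal sketch packs mixed-radix digits into a single integer via Horner's rule and recovers them again. The function names and digit layout are illustrative assumptions, not the poster's actual construction (which additionally defines a binary encoding suited to rANS).

```python
# Illustrative sketch only: packing mixed-radix digits (d_i < r_i) into one
# integer and back. This is generic radix conversion, not the poster's
# specific binary encoding.

def pack_mixed_radix(digits, radices):
    """Combine digits d_i (0 <= d_i < r_i) into one integer via Horner's rule."""
    assert len(digits) == len(radices)
    value = 0
    for d, r in zip(digits, radices):
        assert 0 <= d < r
        value = value * r + d
    return value

def unpack_mixed_radix(value, radices):
    """Recover the mixed-radix digits from the packed integer."""
    digits = []
    for r in reversed(radices):
        value, d = divmod(value, r)
        digits.append(d)
    return list(reversed(digits))
```

For example, digits [1, 2, 3] under radices [2, 3, 5] pack to ((1·3)+2)·5+3 = 28, and unpacking 28 recovers [1, 2, 3].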


Motion Capture (MoCap) data is a fundamental type of asset for digital entertainment. The rapid growth of 3D applications makes MoCap data compression unprecedentedly important. In this paper, we propose an end-to-end attribute-decomposable motion compression network based on the AutoEncoder architecture. Specifically, the algorithm consists of an LSTM-based encoder-decoder for compression and decompression. The encoder module decomposes human motion into multiple uncorrelated semantic attributes, including action content, arm space, and motion mirror.


In this paper, Huffman codes that mix code elements of different radices (r-ary) within one code, called mixed Huffman codes, are analyzed. This generalization of Huffman coding usually shortens the average codeword length: a statistical test shows that, for source alphabets of more than 8-12 elements, over 99% of the best compact codes are mixed Huffman codes. This also holds for practical mixed Huffman codes, as demonstrated in experiments with data files containing up to a million elements for sources of 12-17 symbols.
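For context, here is a minimal sketch of the standard binary Huffman baseline that mixed Huffman codes generalize: it computes per-symbol codeword lengths by repeatedly merging the two least-frequent groups. The mixed-radix generalization analyzed in the paper (allowing r-ary code elements of different radices in one code) is not shown here.

```python
# Minimal sketch: codeword lengths of a standard *binary* Huffman code.
# Mixed Huffman codes, as studied in the paper, generalize this merge step
# to code elements of different radices; that extension is omitted here.
import heapq

def huffman_code_lengths(freqs):
    """Return the binary Huffman codeword length for each symbol index."""
    heap = [(f, [i]) for i, f in enumerate(freqs)]
    heapq.heapify(heap)
    lengths = [0] * len(freqs)
    while len(heap) > 1:
        # Merge the two least-frequent groups; every symbol inside a merged
        # group moves one level deeper in the code tree.
        f1, s1 = heapq.heappop(heap)
        f2, s2 = heapq.heappop(heap)
        for i in s1 + s2:
            lengths[i] += 1
        heapq.heappush(heap, (f1 + f2, s1 + s2))
    return lengths
```

For the dyadic distribution (1/8, 1/8, 1/4, 1/2) this yields lengths (3, 3, 2, 1), matching the source entropy of 1.75 bits/symbol.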


Video post-processing is a method to improve the quality of reconstructed frames at the decoder side. Although existing post-processing algorithms based on deep learning can achieve significant quality improvement compared with traditional methods, they require substantial computational resources, which makes them difficult to deploy on mobile devices. To tackle this problem, a low-complexity neural network based on max-pooling and depth-wise separable convolution is proposed in this work for compressed video post-processing.
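The complexity advantage of depth-wise separable convolution can be seen from a simple parameter count. The sketch below compares a standard k×k convolution with its depth-wise separable factorization (a per-channel k×k depthwise convolution followed by a 1×1 pointwise convolution); the function names are illustrative, and this arithmetic is independent of the paper's specific network design.

```python
# Illustrative parameter-count comparison (biases omitted); not the paper's
# actual network, just the standard complexity argument for the technique.

def conv_params(k, c_in, c_out):
    """Parameters of a standard k x k convolution."""
    return k * k * c_in * c_out

def dw_separable_params(k, c_in, c_out):
    """Depthwise k x k conv (one filter per input channel) + 1x1 pointwise conv."""
    return k * k * c_in + c_in * c_out
```

For k=3 and 64 input/output channels, the standard convolution needs 36864 parameters versus 4672 for the separable version, roughly an 8× reduction.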


A lossless data compression code based on a binary-coded ternary number representation is investigated. The code is synchronizable and supports direct search in a compressed file. Simple monotone encoding and very fast decoding algorithms are constructed owing to the code's properties. Experiments show that, in natural language text compression, the new code outperforms the byte-aligned codes SCDC and RPBC in either compression ratio or decoding speed.
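To illustrate the general idea of a binary-coded ternary representation, the sketch below encodes an integer's ternary digits as two bits each and decodes them back. This is a generic illustration under assumed naming; the paper's actual code has additional structure (synchronizability, direct search support) that this sketch does not capture.

```python
# Illustrative sketch: binary-coded ternary, two bits per ternary digit.
# The investigated code's synchronization and direct-search properties come
# from its specific construction, which is not reproduced here.

def to_ternary(n):
    """Ternary digits of n >= 0, most significant first."""
    if n == 0:
        return [0]
    digits = []
    while n:
        n, d = divmod(n, 3)
        digits.append(d)
    return digits[::-1]

def encode_bct(n):
    """Pack each ternary digit into 2 bits; returns a bit string."""
    return ''.join(format(d, '02b') for d in to_ternary(n))

def decode_bct(bits):
    """Recover the integer from its 2-bits-per-digit ternary encoding."""
    value = 0
    for i in range(0, len(bits), 2):
        value = value * 3 + int(bits[i:i + 2], 2)
    return value
```

For example, 5 in ternary is 12, which encodes as the bit string '0110'; decoding '0110' recovers 5.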