DCC 2021 Virtual Conference - The Data Compression Conference (DCC) is an international forum for current work on data compression and related applications. Both theoretical and experimental work are of interest.
On Elias-Fano for Rank Queries in FM-indexes
We describe methods to support fast rank queries on the Burrows-Wheeler transform (BWT) string S of an input string T on alphabet Σ, in order to support pattern counting queries. Our starting point is an approach previously adopted by several authors, which is to represent S as |Σ| bitvectors, where the bitvector for symbol c has a 1 at position i if and only if S[i] = c, with the bitvectors stored in Elias-Fano (EF) encodings, to enable binary rank queries. We first show that the clustering of symbols induced by the BWT makes standard implementations of EF unattractive.
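As a rough illustration of the query structure (not the authors' EF implementation), rank over the per-symbol bitvectors reduces to a binary search over each symbol's monotone list of 1-positions, which is exactly the operation an Elias-Fano encoding of that list supports:

```python
from bisect import bisect_left

def build_position_lists(S):
    """For each symbol c, record the sorted positions i with S[i] == c.
    An Elias-Fano encoding would compress each monotone list; plain
    Python lists are used here just to illustrate the query logic."""
    pos = {}
    for i, c in enumerate(S):
        pos.setdefault(c, []).append(i)
    return pos

def rank(pos, c, i):
    """Number of occurrences of c in S[0:i]: a predecessor/binary
    search over the monotone position list."""
    return bisect_left(pos.get(c, []), i)

# Toy BWT-like string
S = "annb$aa"
pos = build_position_lists(S)
assert rank(pos, "a", 7) == 3   # 'a' occurs at positions 0, 5, 6
assert rank(pos, "n", 3) == 2   # 'n' occurs at positions 1, 2
```

The clustering the abstract mentions matters because BWT runs make these position lists locally dense, which changes the space/time trade-offs of the EF upper/lower-bits split.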
We focus on the Multi-Rate Sampling (MRS) Compressed Sensing (CS) scheme, in which several Analog-to-Digital Converters (ADCs) sample in parallel at different sub-Nyquist rates. In this paper, we postulate that good signal recovery requires the measurement matrix rank (MMR) to be as high as possible. We also present an upper bound for the MMR. Choosing pairwise coprime sampling rates makes it possible to reach this upper bound.
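A toy numerical check of the rank claim (an illustrative sampling model, not the paper's exact measurement matrix): stacking the selection rows of several sub-Nyquist ADCs covers more distinct Nyquist-grid positions, and hence yields higher rank, when the downsampling factors are pairwise coprime:

```python
import numpy as np

def measurement_matrix(N, factors):
    """Toy multi-rate sampling matrix: each ADC keeps every d-th sample
    of a length-N Nyquist-rate signal (hypothetical construction used
    only to illustrate the rank argument)."""
    rows = []
    for d in factors:
        for k in range(0, N, d):
            row = np.zeros(N)
            row[k] = 1.0
            rows.append(row)
    return np.array(rows)

N = 30
rank_coprime = np.linalg.matrix_rank(measurement_matrix(N, (2, 3, 5)))
rank_shared  = np.linalg.matrix_rank(measurement_matrix(N, (2, 4, 8)))
print(rank_coprime, rank_shared)   # coprime factors cover more distinct samples
```

With shared factors, every ADC revisits a subset of the even-indexed samples, so the stacked rows are largely redundant; coprime factors overlap only on the sparse common multiples.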
slides.pdf
Flow-based generative models have been successfully applied to image generation tasks, where an invertible neural network (INN) is built up from flow steps. Learning-based compression commonly transforms the input into a compact space and then applies a reconstruction network in the decoder. By operating on low-resolution images, traditional or adaptive downsamplers paired with their corresponding traditional or learned upsamplers usually achieve better coding quality at low bit rates.
script.pptx
Many information sources are not just sequences of distinguishable symbols but rather have invariances governed by alternative counting paradigms such as permutations, combinations, and partitions. We consider an entire classification of these invariances called the twelvefold way in enumerative combinatorics and develop a method to characterize lossless compression limits. Explicit computations for all twelve settings are carried out for i.i.d. uniform and Bernoulli distributions. Comparisons among settings provide quantitative insight.
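To make the counting concrete, here is a minimal sketch (with hypothetical parameter choices) of three of the twelve settings and the corresponding uniform-source compression limits, log2 of the number of distinguishable objects:

```python
from math import comb, log2

def bits(count):
    """Lossless limit for a uniform source over `count` equally
    likely objects: log2(count) bits."""
    return log2(count)

n, k = 4, 3   # alphabet size n, sequence length k

sequences = n ** k               # ordered symbols: plain strings
multisets = comb(n + k - 1, k)   # order-invariant: combinations with repetition
subsets   = comb(n, k)           # order-invariant, no repetition

print(bits(sequences), bits(multisets), bits(subsets))
```

Each invariance collapses equivalence classes of strings, so the limit drops from 6 bits (64 strings) to about 4.32 bits (20 multisets) to 2 bits (4 subsets) in this toy case, which is the kind of quantitative gap the comparisons in the paper measure.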
Graph Based Transforms based on Graph Neural Networks for Predictive Transform Coding
This paper introduces the GBT-NN, a novel class of Graph-based Transform within the context of block-based predictive transform coding using intra-prediction. The GBT-NN is constructed by learning a mapping function to map a graph Laplacian representing the covariance matrix of the current block. Our objective of learning such a mapping function is to design a GBT that performs as well as the KLT without requiring to explicitly compute the covariance matrix for each residual block to be transformed.
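A minimal sketch of the classical graph-based transform that the GBT-NN builds on (the standard Laplacian-eigenvector construction, not the learned mapping itself): the transform basis is the eigenvector matrix of a graph Laplacian, here for a simple path graph, whose eigenvectors form a DCT-like basis.

```python
import numpy as np

def path_laplacian(n):
    """Combinatorial Laplacian L = D - A of an n-node path graph."""
    A = np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1)
    return np.diag(A.sum(axis=1)) - A

def gbt(residual):
    """Graph-based transform of a 1-D residual: project onto the
    Laplacian eigenvectors, ordered by ascending eigenvalue."""
    L = path_laplacian(len(residual))
    eigvals, U = np.linalg.eigh(L)   # eigh returns ascending eigenvalues
    return U.T @ residual

# A constant residual compacts its energy into the first (DC) coefficient
coeffs = gbt(np.array([1.0, 1.0, 1.0, 1.0]))
```

The GBT-NN's contribution is learning the graph (Laplacian) per block from the prediction context, rather than hand-designing it or estimating a covariance per residual.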
An Empirical Analysis of Recurrent Learning Algorithms In Neural Lossy Image Compression Systems.
Prior work on image compression has focused on optimizing models to achieve better reconstruction at lower bit rates. These approaches are focused on creating sophisticated architectures that enhance encoder or decoder performance. In some cases, there is the desire to jointly optimize both along with a designed form of entropy encoding. In some instances, these approaches result in the creation of many redundant components, which may or may not be useful.
Regularized Semi-Nonnegative Matrix Factorization Using L21-Norm for Data Compression
Data reduction algorithms, including matrix factorization techniques, represent an essential component of many ML systems. One popular paradigm of matrix factorizations includes Non-Negative Matrix Factorization (NMF).
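For reference, the L2,1 norm in the title is simply the sum of the row-wise Euclidean norms of a matrix; a minimal sketch (illustrative, not the paper's full regularized factorization):

```python
import numpy as np

def l21_norm(X):
    """L2,1 norm: sum of the Euclidean norms of the rows of X.
    As a regularizer it drives entire rows to zero, making the
    objective robust to outlier samples."""
    return np.sum(np.sqrt(np.sum(X ** 2, axis=1)))

X = np.array([[3.0, 4.0],
              [0.0, 0.0],
              [1.0, 0.0]])
print(l21_norm(X))   # row norms 5 + 0 + 1 = 6
```

Unlike the Frobenius norm, the L2,1 norm does not square the per-row errors, so a single aberrant row contributes linearly rather than quadratically to the objective.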
Deep neural networks (DNNs), despite their performance on a wide variety of tasks, are still out of reach for many applications as they require significant computational resources. In this paper, we present a low-rank based end-to-end deep neural network compression framework with the goal of bringing DNN performance to computationally constrained devices. The proposed framework includes techniques for low-rank based structural approximation, quantization and lossless arithmetic coding.
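A minimal sketch of the low-rank structural approximation step (truncated SVD, with hypothetical layer sizes), showing where the parameter saving comes from:

```python
import numpy as np

def low_rank_compress(W, r):
    """Rank-r structural approximation of a weight matrix via
    truncated SVD: W ~ (U_r * s_r) @ V_r, replacing one m x n layer
    with two consecutive layers of m x r and r x n parameters."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    A = U[:, :r] * s[:r]   # m x r factor (singular values folded in)
    B = Vt[:r, :]          # r x n factor
    return A, B

rng = np.random.default_rng(0)
W = rng.standard_normal((64, 64))   # hypothetical 64 x 64 layer
A, B = low_rank_compress(W, 8)
saved = W.size - (A.size + B.size)  # 4096 - (512 + 512) = 3072 parameters
```

The quantization and arithmetic-coding stages mentioned in the abstract would then operate on the two low-rank factors rather than on the original dense weights.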
In this paper, we concentrate on the super-resolution (SR) of compressed screen content video, in an effort to address real-world challenges by considering the underlying characteristics of screen content. First, we propose a new dataset for the SR of screen content video with different distortion levels. Meanwhile, we design an efficient SR structure that captures the characteristics of compressed screen content video and exploits the inner-connections in consecutive compressed low-resolution frames, facilitating high-quality recovery of the high-resolution counterpart.