We describe a grammar for DNA sequencing reads from which the BWT can be computed directly. Our motivation is to perform, in succinct space, genomic analyses that require complex string queries not yet supported by repetition-based self-indexes. Our approach is to store the set of reads as a grammar and, when required, compute its BWT to carry out the analysis using self-indexes. Our experiments on real data show that the space reduction achieved by our compressor is competitive with LZ-based methods and better than entropy-based approaches.
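
The grammar-based BWT construction is the paper's contribution; purely as a point of reference, a minimal naive BWT over plain text looks like the sketch below (quadratic-time rotation sorting, not the authors' algorithm):

```python
def bwt(text: str, sentinel: str = "$") -> str:
    """Naive Burrows-Wheeler Transform via sorted rotations.

    Illustrative baseline only: practical tools derive the BWT from
    a suffix array or, as in this work, from a grammar directly.
    """
    s = text + sentinel                              # unique end marker
    rotations = sorted(s[i:] + s[:i] for i in range(len(s)))
    return "".join(rot[-1] for rot in rotations)

print(bwt("GATTACA"))  # -> ACTGA$TA
```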

We introduce a new structural technique for pruning deep neural networks with skip connections by removing the least informative layers according to their Fisher scores. Extensive experiments on the classification of the CIFAR-10, CIFAR-100, and SVHN data sets demonstrate the efficacy of our proposed method in compressing deep models, in terms of both the number of parameters and the number of operations.
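
A common way to estimate such scores is the empirical Fisher information, approximated by the sum of squared loss gradients over a batch. The sketch below illustrates that per-layer estimate; the toy architecture and the scoring details are assumptions for illustration, not the paper's exact method:

```python
import torch
import torch.nn as nn

# Toy model standing in for a deep network with skip connections;
# this architecture is an assumption, not the one from the paper.
model = nn.Sequential(nn.Linear(32, 32), nn.ReLU(), nn.Linear(32, 10))
loss_fn = nn.CrossEntropyLoss()

x = torch.randn(64, 32)                 # dummy input batch
y = torch.randint(0, 10, (64,))         # dummy labels

model.zero_grad()
loss_fn(model(x), y).backward()

# Empirical Fisher score of a layer: the sum of squared gradients of
# its parameters. Layers with the smallest scores are pruning candidates.
scores = {
    name: sum((p.grad ** 2).sum().item() for p in module.parameters())
    for name, module in model.named_children()
    if any(p.requires_grad for p in module.parameters())
}
print(scores)  # e.g. {'0': ..., '2': ...} (the ReLU has no parameters)
```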

Data compression is used in a wide variety of tasks, including the compression of databases, large learning models, videos, and images. The cost of decompressing (decoding) data can be prohibitive for certain real-time applications. In many scenarios, it is acceptable to sacrifice some amount of compression in the interest of fast decoding.

We provide a compact representation of polyominoes with n cells that supports navigation and visibility queries in constant time.
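
The representation in the paper is compact; purely to illustrate the query interface, a non-compact baseline can store the cells in a hash set, where a navigation query (is the adjacent cell part of the polyomino?) is a single O(1) lookup. The class and method names below are hypothetical:

```python
# Non-compact baseline for the query interface only; the paper's
# structure answers the same queries in constant time and small space.
class Polyomino:
    def __init__(self, cells):
        self.cells = set(cells)              # set of (row, col) pairs

    def has_neighbor(self, cell, direction):
        """O(1) navigation query: is the adjacent cell occupied?"""
        dr, dc = {"N": (-1, 0), "S": (1, 0),
                  "W": (0, -1), "E": (0, 1)}[direction]
        return (cell[0] + dr, cell[1] + dc) in self.cells

p = Polyomino([(0, 0), (0, 1), (1, 1)])      # an L-tromino
print(p.has_neighbor((0, 0), "E"))           # True
print(p.has_neighbor((0, 0), "S"))           # False
```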

In this paper, we present a coding framework for deep convolutional neural network compression. Our approach draws on classical coding theory and formulates the compression of deep convolutional neural networks as a rate-distortion optimization problem. We incorporate three ingredients into the coding framework, namely bit allocation, dead-zone quantization, and Tunstall coding, to improve the rate-distortion frontier without introducing noticeable system-level overhead.
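
Of the three ingredients, dead-zone quantization is easy to show in isolation: the zero bin is made wider than the others, so near-zero weights collapse to exactly zero and cost almost no bits. The sketch below assumes a midpoint reconstruction and a fixed step size; in the framework, the step would come from the bit-allocation stage:

```python
import numpy as np

def deadzone_quantize(w, step):
    """Dead-zone uniform quantizer: the zero bin spans (-step, step),
    twice the width of the other bins, zeroing out small weights."""
    return np.sign(w) * np.floor(np.abs(w) / step)

def dequantize(q, step):
    """Midpoint reconstruction for the nonzero bins."""
    return np.sign(q) * (np.abs(q) + 0.5) * step

weights = np.array([-0.9, -0.04, 0.02, 0.3, 1.1])
q = deadzone_quantize(weights, step=0.25)
print(q)                      # [-3. -0.  0.  1.  4.]
print(dequantize(q, 0.25))    # [-0.875 -0.     0.     0.375  1.125]
```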

Run-Length Encoding (RLE) is a long-standing, simple lossless compression scheme that is easy to implement and achieves good compression on input data containing runs of repeated consecutive symbols. In its pure form, RLE is not effective on natural text or other input data with only short sequences of identical symbols. We present a combination of preprocessing steps that turns arbitrary input data, in a byte-wise encoding, into a bit string that is highly suitable for RLE compression.
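
For reference, pure RLE in its simplest form reduces to emitting (symbol, run-length) pairs; the preprocessing described here exists precisely to make such runs long. A minimal sketch:

```python
from itertools import groupby

def rle_encode(data: bytes):
    """Basic run-length encoding as (symbol, run_length) pairs.

    Effective only when runs are long, which is why arbitrary byte
    input is first transformed into a run-rich bit string.
    """
    return [(sym, len(list(run))) for sym, run in groupby(data)]

print(rle_encode(b"aaaabbbcca"))
# [(97, 4), (98, 3), (99, 2), (97, 1)]
```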
