
Neural Network Compression via Additive Combination of Reshaped, Low-rank Matrices

Smaller RLZ-Compressed Suffix Arrays
Recently it was shown (Puglisi and Zhukova, Proc. SPIRE, 2020) that the suffix array (SA) data structure can be effectively compressed with relative Lempel-Ziv (RLZ) dictionary compression in such a way that arbitrary subarrays can be rapidly decompressed, thus facilitating compressed indexing. In this paper we describe optimizations to RLZ-compressed SAs, including generation of more effective dictionaries and compact encodings of index components, both of which reduce index size without adversely affecting subarray access speeds relative to other compressed indexes.
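A minimal sketch of the underlying RLZ scheme may help: a greedy parse of an input against a fixed dictionary, plus extraction of an arbitrary subarray directly from the parse. This is not the paper's implementation; it operates on strings for readability (the paper parses the integer suffix array against a sampled dictionary), and Python's naive substring search stands in for the index structures a real parser would use. All names are illustrative.

```python
def rlz_parse(text, dictionary):
    """Greedily parse `text` into (pos, len) phrases referring to `dictionary`.

    A phrase of length 0 encodes a literal symbol absent from the dictionary.
    """
    phrases, i = [], 0
    while i < len(text):
        pos, length = -1, 0
        # Extend the current match one symbol at a time (quadratic, but clear).
        while i + length < len(text):
            j = dictionary.find(text[i:i + length + 1])
            if j < 0:
                break
            pos, length = j, length + 1
        if length == 0:
            phrases.append((ord(text[i]), 0))   # literal symbol
            i += 1
        else:
            phrases.append((pos, length))
            i += length
    return phrases

def rlz_extract(phrases, dictionary, start, length):
    """Decompress the subarray text[start:start+length] from the parse,
    touching only the phrases that overlap the requested range."""
    out, offset = "", 0
    for pos, plen in phrases:
        step = plen if plen > 0 else 1
        if offset + step > start:
            piece = dictionary[pos:pos + plen] if plen > 0 else chr(pos)
            lo = max(0, start - offset)
            out += piece[lo:lo + length - len(out)]
            if len(out) == length:
                break
        offset += step
    return out

phrases = rlz_parse("banana", "an")
print(rlz_extract(phrases, "an", 1, 3))   # -> "ana"
```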

On Elias-Fano for Rank Queries in FM-indexes
We describe methods to support fast rank queries on the Burrows-Wheeler transform (BWT) string S of an input string T on alphabet Σ, in order to support pattern counting queries. Our starting point is an approach previously adopted by several authors, which is to represent S as |Σ| bitvectors, where the bitvector for symbol c has a 1 at position i if and only if S[i] = c, with the bitvectors stored in Elias-Fano (EF) encodings, to enable binary rank queries. We first show that the clustering of symbols induced by the BWT makes standard implementations of EF unattractive.
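A minimal sketch of this representation, with Python's bisect standing in for a genuine Elias-Fano structure: each symbol's bitvector is kept as the sorted list of its 1-positions, which is exactly the monotone sequence an EF encoding would compress, and a binary rank query becomes a binary search. The backward-search loop shows where the rank calls arise in pattern counting; names are illustrative.

```python
import bisect
from collections import Counter, defaultdict

def build_occ(bwt):
    """For each symbol c, the sorted positions i with bwt[i] == c,
    i.e. the 1-positions of c's bitvector (what EF would encode)."""
    occ = defaultdict(list)
    for i, c in enumerate(bwt):
        occ[c].append(i)              # positions arrive in increasing order
    return occ

def rank(occ, c, i):
    """Binary rank: number of occurrences of c in bwt[0:i]."""
    return bisect.bisect_left(occ[c], i)

def build_C(bwt):
    """C[c] = number of symbols in the text strictly smaller than c."""
    C, total = {}, 0
    for c, n in sorted(Counter(bwt).items()):
        C[c] = total
        total += n
    return C

def count(bwt, occ, C, pattern):
    """FM-index backward search: occurrences of pattern in T."""
    lo, hi = 0, len(bwt)
    for c in reversed(pattern):
        if c not in C:
            return 0
        lo = C[c] + rank(occ, c, lo)
        hi = C[c] + rank(occ, c, hi)
        if lo >= hi:
            return 0
    return hi - lo

# For T = "banana$", the BWT is "annb$aa"; "ana" occurs twice in T.
bwt = "annb$aa"
print(count(bwt, build_occ(bwt), build_C(bwt), "ana"))   # -> 2
```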


Compressive Sensing via Unfolded ℓ_0-Constrained Convolutional Sparse Coding


Deep neural networks (DNNs), despite their performance on a wide variety of tasks, are still out of reach for many applications as they require significant computational resources. In this paper, we present a low-rank based end-to-end deep neural network compression framework with the goal of bringing DNN performance to computationally constrained devices. The proposed framework includes techniques for low-rank based structural approximation, quantization and lossless arithmetic coding.
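To illustrate the first two stages, here is a minimal NumPy sketch of low-rank structural approximation (a truncated SVD of a weight matrix) followed by uniform quantization. The rank and bit-width are illustrative choices, not values from the paper, and the final lossless arithmetic-coding stage, which would entropy-code the quantized integers, is omitted.

```python
import numpy as np

def low_rank_factor(W, rank):
    """Approximate weight matrix W as A @ B with A: (m, r) and B: (r, n)."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    A = U[:, :rank] * s[:rank]        # absorb singular values into A
    B = Vt[:rank, :]
    return A, B

def quantize(M, bits=8):
    """Uniformly quantize M to signed integers; return codes and the scale."""
    scale = np.abs(M).max() / (2 ** (bits - 1) - 1)
    codes = np.round(M / scale).astype(np.int8)
    return codes, scale

W = np.random.randn(256, 512).astype(np.float32)
A, B = low_rank_factor(W, rank=32)    # 256*32 + 32*512 params vs 256*512
qA, sA = quantize(A)
qB, sB = quantize(B)
W_hat = (qA * sA) @ (qB * sB)         # dequantized reconstruction
print("relative error:", np.linalg.norm(W - W_hat) / np.linalg.norm(W))
```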

DZip: improved general-purpose lossless compression based on novel neural network modeling

Little attention has been given to language support for block-based compression algorithms, despite their high implementation complexity. Current implementations must deal both with the intricacies of the algorithm itself and with the low-level optimizations necessary to generate fast code. However, many block-based compression algorithms share a common structure in terms of their data representations, data partitioning operations, and data traversals. In this work, we propose a set of high-level language abstractions that succinctly capture this structure.
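As a rough illustration of what such abstractions might look like (the names and design here are hypothetical, not the paper's), a Python sketch that separates the shared structure, partitioning into blocks and a composable per-block traversal, from the algorithm-specific transform logic:

```python
from typing import Callable, Iterator, List

def partition(data: bytes, block_size: int) -> Iterator[bytes]:
    """Data partitioning operation: split the input into fixed-size blocks."""
    for i in range(0, len(data), block_size):
        yield data[i:i + block_size]

def pipeline(*stages: Callable[[bytes], bytes]) -> Callable[[bytes], bytes]:
    """Compose per-block transforms into a single traversal over each block."""
    def run(block: bytes) -> bytes:
        for stage in stages:
            block = stage(block)
        return block
    return run

def delta_encode(block: bytes) -> bytes:
    """Example algorithm-specific transform: differences of adjacent bytes."""
    return bytes([block[0]] + [(block[i] - block[i - 1]) % 256
                               for i in range(1, len(block))])

def compress(data: bytes, block_size: int = 4096) -> List[bytes]:
    transform = pipeline(delta_encode)   # further stages would chain here
    return [transform(b) for b in partition(data, block_size)]

blocks = compress(b"abcabcabc" * 1000)
```

In a real system of this kind, the framework, rather than each codec author, would be responsible for generating the optimized loops behind `partition` and `pipeline`.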