
- A Partially Collapsed Gibbs Sampler for Unsupervised Nonnegative Sparse Signal Restoration
In this paper, the problem of restoring unsupervised nonnegative sparse signals is addressed in a Bayesian framework. We introduce a new hierarchical probabilistic prior, based on the Generalized Hyperbolic (GH) distribution, which explicitly accounts for sparsity while also enforcing the non-negativity constraint.
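The nonnegativity handling can be illustrated with a toy Gibbs sampler. The sketch below is not the paper's partially collapsed sampler: it assumes a linear model y = Hx + n and substitutes a simple nonnegative exponential prior for the hierarchical GH prior, resampling each coefficient from its truncated-normal full conditional. The function name and all parameter values are illustrative.

```python
import numpy as np
from scipy.stats import truncnorm

def gibbs_nonneg_sparse(y, H, sigma2=0.1, lam=5.0, n_iter=200, seed=0):
    """Toy Gibbs sampler for y = H x + n with a nonnegative exponential
    (sparsity-promoting) prior on x -- a simplified stand-in for the
    hierarchical GH prior discussed in the abstract."""
    rng = np.random.default_rng(seed)
    n = H.shape[1]
    x = np.zeros(n)
    col_norm2 = (H ** 2).sum(axis=0)
    samples = []
    for _ in range(n_iter):
        for i in range(n):
            # residual with coefficient i removed
            r = y - H @ x + H[:, i] * x[i]
            # full conditional of x_i is a normal truncated to [0, inf)
            mu = (H[:, i] @ r - lam * sigma2) / col_norm2[i]
            s = np.sqrt(sigma2 / col_norm2[i])
            a = (0.0 - mu) / s  # standardized lower truncation point
            x[i] = truncnorm.rvs(a, np.inf, loc=mu, scale=s, random_state=rng)
        samples.append(x.copy())
    # posterior-mean estimate, discarding the first half as burn-in
    return np.mean(samples[n_iter // 2:], axis=0)
```

Because every draw is truncated at zero, the posterior-mean estimate is nonnegative by construction, while the exponential rate `lam` pulls small coefficients toward zero.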

- Neural Network Compression via Additive Combination of Reshaped, Low-rank Matrices

- Smaller RLZ-Compressed Suffix Arrays
Recently it was shown (Puglisi and Zhukova, Proc. SPIRE, 2020) that the suffix array (SA) data structure can be effectively compressed with relative Lempel-Ziv (RLZ) dictionary compression in such a way that arbitrary subarrays can be rapidly decompressed, thus facilitating compressed indexing. In this paper we describe optimizations to RLZ-compressed SAs, including generation of more effective dictionaries and compact encodings of index components, both of which reduce index size without adversely affecting subarray access speeds relative to other compressed indexes.
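The core RLZ idea is to encode the input as a sequence of (position, length) phrases referring to a fixed dictionary, so any phrase (and hence any subarray) can be decoded independently. A minimal sketch, on strings rather than suffix arrays and with naive O(n·d) matching (real implementations match against a suffix array of the dictionary); all names are illustrative:

```python
def rlz_parse(text, dictionary):
    """Greedy relative Lempel-Ziv parse: encode `text` as (position, length)
    phrases referring to `dictionary`, with a literal (char, 0) fallback
    when no dictionary match exists."""
    phrases = []
    i = 0
    while i < len(text):
        best_pos, best_len = -1, 0
        for j in range(len(dictionary)):
            l = 0
            while (i + l < len(text) and j + l < len(dictionary)
                   and text[i + l] == dictionary[j + l]):
                l += 1
            if l > best_len:
                best_pos, best_len = j, l
        if best_len == 0:
            phrases.append((text[i], 0))  # literal: symbol absent from dictionary
            i += 1
        else:
            phrases.append((best_pos, best_len))
            i += best_len
    return phrases

def rlz_decode(phrases, dictionary):
    """Invert rlz_parse by copying each phrase out of the dictionary."""
    out = []
    for p, l in phrases:
        out.append(p if l == 0 else dictionary[p:p + l])
    return "".join(out)
```

A dictionary that covers the input with few long phrases yields a small encoding, which is why the paper's more effective dictionary generation translates directly into smaller indexes.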

- On Elias-Fano for Rank Queries in FM-indexes
We describe methods to support fast rank queries on the Burrows-Wheeler transform (BWT) string S of an input string T on alphabet Σ, in order to support pattern counting queries. Our starting point is an approach previously adopted by several authors, which is to represent S as |Σ| bitvectors, where the bitvector for symbol c has a 1 at position i if and only if S[i] = c, with the bitvectors stored in Elias-Fano (EF) encodings, to enable binary rank queries. We first show that the clustering of symbols induced by the BWT makes standard implementations of EF unattractive.
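The per-symbol representation can be sketched without the actual EF bit-level encoding: for each symbol, the positions of its 1s form a monotone sequence (exactly what EF compresses), and rank(c, i) is the predecessor query EF supports, here answered by binary search. The class name is illustrative:

```python
from bisect import bisect_left
from collections import defaultdict

class SymbolRank:
    """Per-symbol rank over a BWT string S: for each symbol c, keep the
    sorted positions where c occurs (the monotone sequence an Elias-Fano
    encoding would store) and answer rank(c, i) = number of occurrences
    of c in S[0:i] by binary search."""

    def __init__(self, s):
        self.positions = defaultdict(list)
        for i, c in enumerate(s):
            self.positions[c].append(i)  # appended in increasing order

    def rank(self, c, i):
        # count of positions of c that are strictly below i
        return bisect_left(self.positions[c], i)
```

The clustering the abstract mentions shows up here as long runs of consecutive positions in one symbol's list, a regularity that plain EF does not exploit.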


- Compressive Sensing via Unfolded ℓ_0-Constrained Convolutional Sparse Coding


Deep neural networks (DNNs), despite their performance on a wide variety of tasks, remain out of reach for many applications because they require significant computational resources. In this paper, we present a low-rank based end-to-end deep neural network compression framework aimed at bringing DNN performance to computationally constrained devices. The proposed framework includes techniques for low-rank structural approximation, quantization, and lossless arithmetic coding.

- DZip: improved general-purpose lossless compression based on novel neural network modeling