DCC 2021 Virtual Conference - The Data Compression Conference (DCC) is an international forum for current work on data compression and related applications. Both theoretical and experimental work is of interest. Visit the DCC 2021 website.
DZip: improved general-purpose lossless compression based on novel neural network modeling
Reducing Image Compression Artifacts for Deep Neural Networks
Existing compression artifact reduction methods aim to restore images at the pixel level, which can improve the human visual experience. However, in many applications, images are collected at large scale not for visual examination by humans but for high-level vision tasks, usually performed by Deep Neural Networks (DNNs). One fundamental question is whether existing artifact reduction methods help DNNs perform these high-level tasks better. In this paper, we find that these methods yield only limited improvements for high-level tasks and can even have negative effects.
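Not part of the paper, but as a rough illustration of the question being asked, the sketch below compares a downstream classifier's accuracy on compressed inputs against the same inputs after pixel-level restoration; `restore`, `classifier`, and the data are hypothetical placeholders.

```python
# Minimal sketch (not the paper's protocol): does pixel-level restoration help a
# downstream DNN task? `classifier` and `restore` are hypothetical placeholders.

def task_accuracy(images, labels, classifier):
    """Fraction of images the task network labels correctly."""
    correct = sum(1 for img, y in zip(images, labels) if classifier(img) == y)
    return correct / len(images)

def compare(compressed_images, labels, classifier, restore):
    """Compare the high-level task on compressed vs. restored inputs."""
    acc_compressed = task_accuracy(compressed_images, labels, classifier)
    restored = [restore(img) for img in compressed_images]
    acc_restored = task_accuracy(restored, labels, classifier)
    # If acc_restored <= acc_compressed, pixel-level restoration did not help
    # (or even hurt) the high-level task, which is the effect the paper studies.
    return acc_compressed, acc_restored
```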
Intra Block Partition Structure Prediction via Convolutional Neural Network
In video coding, block partitioning segments an image into non-overlapping blocks that are coded individually, and the partition structure has become increasingly flexible as video coding standards have evolved. Multiple types of tree structures have been proposed recently, which substantially increase the complexity of the encoding process because of the recursive rate-distortion search for the optimal partition. In this paper, a two-stage Convolutional Neural Network (CNN)-based partition structure prediction method is proposed to bypass the block-size decision process in intra-frame coding.
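As a rough illustration of how a learned split decision can replace the recursive rate-distortion search, here is a minimal sketch in which a hypothetical `predict_split_probability` stands in for the paper's two-stage CNN; blocks are represented as flat lists of pixel values purely for simplicity.

```python
# Minimal sketch: a learned predictor decides split/no-split directly, so the
# encoder never has to try every candidate partition with an RD search.

def predict_split_probability(block):
    # Hypothetical stand-in for the CNN: in the paper, a network maps the block's
    # pixels to a probability of further splitting. Here: a trivial variance
    # heuristic, purely for illustration.
    mean = sum(block) / len(block)
    return min(1.0, sum((x - mean) ** 2 for x in block) / (len(block) * 1000.0))

def partition(block, size, min_size=8, threshold=0.5):
    """Return a partition tree: either a leaf (code the block whole) or four children."""
    if size <= min_size or predict_split_probability(block) < threshold:
        return ("leaf", size)                     # no recursive RD search needed
    quarter = len(block) // 4
    children = [block[i * quarter:(i + 1) * quarter] for i in range(4)]
    return ("split", [partition(c, size // 2, min_size, threshold) for c in children])

# Example: predict a partition tree for a toy 64x64 block.
tree = partition([i % 17 for i in range(64 * 64)], 64)
```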
Little attention has been given to language support for block-based compression algorithms, despite their high implementation complexity. Current implementations have to deal with both the intricacies of the algorithm itself and the low-level optimizations necessary for generating fast code. However, many block-based compression algorithms share a common structure in terms of their data representations, data partitioning operations, and data traversals.
In this work, we propose a set of high-level language abstractions that can succinctly capture this structure.
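The abstractions proposed in the paper are not spelled out in this abstract; the following sketch only illustrates the shared structure the paragraph mentions, with a hypothetical `compress_blockwise` helper that fixes the partitioning and traversal while leaving the per-block codec pluggable.

```python
# Minimal sketch (hypothetical API, not the paper's abstractions) of the common
# structure: partition into blocks, apply a per-block transform, traverse results.

from typing import Callable, Iterable, List

def blocks(data: bytes, block_size: int) -> Iterable[bytes]:
    """Partition the input into contiguous, non-overlapping blocks."""
    for i in range(0, len(data), block_size):
        yield data[i:i + block_size]

def compress_blockwise(data: bytes, block_size: int,
                       encode_block: Callable[[bytes], bytes]) -> List[bytes]:
    """The traversal is fixed; the per-block encoder is pluggable."""
    return [encode_block(b) for b in blocks(data, block_size)]

# Example: run-length encode each block independently.
def rle(block: bytes) -> bytes:
    out, i = bytearray(), 0
    while i < len(block):
        j = i
        while j < len(block) and block[j] == block[i] and j - i < 255:
            j += 1
        out += bytes([j - i, block[i]])
        i = j
    return bytes(out)

encoded = compress_blockwise(b"aaaabbbbccccdddd" * 4, 16, rle)
```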
Backward Weighted Coding
Extending recently suggested methods, a new dynamic compression algorithm is proposed that uses an increasing weight function to assign larger weights to characters that have just been coded. Empirical results demonstrate its compression efficiency, which, for input files with locally skewed distributions, can improve beyond the lower bound given by the entropy for static encoding, at the price of slower compression and comparable decompression time.
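The exact weight function and entropy coder are not given in the abstract; the sketch below only illustrates the general idea of recency-weighted statistics, with a linearly increasing weight function chosen arbitrarily.

```python
# Minimal sketch of recency weighting: an occurrence at position i contributes
# weight(i), which increases with i, so recently coded characters dominate the
# probability estimates and the model adapts to locally skewed distributions.

from collections import defaultdict

def backward_weighted_model(text, weight=lambda i: i + 1):
    """Yield, before each position, a smoothed probability for the actual character."""
    totals = defaultdict(float)
    grand = 0.0
    for i, c in enumerate(text):
        # Laplace-style smoothing over a 256-symbol alphabet so no probability is zero;
        # an entropy coder would code text[i] with roughly -log2 of this probability.
        yield (totals.get(c, 0.0) + 1.0) / (grand + 256.0)
        w = weight(i)              # increasing weight function: later = heavier
        totals[c] += w
        grand += w

# Example: recent characters quickly dominate the estimate.
probs = list(backward_weighted_model("aaaaabbbbb"))
```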
Compact Representation of Spatial Hierarchies and Topological Relationships
The topological model for spatial objects identifies common boundaries between regions, explicitly storing adjacency relations, which not only improves the efficiency of topology-related queries, but also provides advantages such as avoiding data duplication and facilitating data consistency. Recently, a compact representation of the topological model based on planar graph embeddings was proposed.
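As a toy illustration of the topological model's explicit adjacency relations (not the compact planar-embedding representation the paper describes), consider:

```python
# Toy sketch: each shared boundary is stored once and adjacency between regions is
# explicit, so adjacency queries need no geometric computation. Region names and
# boundary segment ids are hypothetical.

from collections import defaultdict

shared_boundaries = {
    frozenset({"A", "B"}): ["segment_1"],   # the single stored copy of the A/B boundary
    frozenset({"B", "C"}): ["segment_2"],
}

adjacency = defaultdict(set)
for pair in shared_boundaries:
    a, b = tuple(pair)
    adjacency[a].add(b)
    adjacency[b].add(a)

def are_adjacent(r1, r2):
    """Topology-related query answered directly from the stored relations."""
    return r2 in adjacency[r1]
```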
Guided Blocks WOM codes
A binary Write Once Memory (WOM) device is a storage mechanism in which a 0-bit can be overwritten much more easily than a 1-bit. A famous example is flash memory technology, where 0 → 1 transitions are allowed, but 1 → 0 transitions require a costly erase procedure and are therefore prohibited. A WOM code is a coding scheme that permits multiple writes to the WOM without violating the WOM rule. The properties of WOM attracted attention even before flash memory was invented.
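For readers unfamiliar with WOM codes, the classic Rivest-Shamir construction (two writes of 2 bits into 3 cells) illustrates the rule; it is shown here only as background and is not the Guided Blocks scheme of the paper.

```python
# Classic Rivest-Shamir WOM code: two writes of a 2-bit message into 3 cells,
# where cells may only change 0 -> 1. Shown as background, not the paper's scheme.

FIRST = {"00": "000", "01": "100", "10": "010", "11": "001"}
# Second-generation codewords are the bitwise complements of the first-generation ones.
SECOND = {m: "".join("1" if b == "0" else "0" for b in c) for m, c in FIRST.items()}

def decode(state):
    """Weight <= 1 means a first-write codeword, weight >= 2 a second-write codeword."""
    table = FIRST if state.count("1") <= 1 else SECOND
    return next(m for m, c in table.items() if c == state)

def write(state, message, generation):
    """Return the new cell state; every transition only sets cells from 0 to 1."""
    target = FIRST[message] if generation == 1 else (
        state if decode(state) == message else SECOND[message])
    assert all(not (s == "1" and t == "0") for s, t in zip(state, target)), "WOM rule violated"
    return target

# Example: write "10", then overwrite with "01", without ever clearing a 1.
s = write("000", "10", generation=1)   # -> "010"
s = write(s, "01", generation=2)       # -> "011"
assert decode(s) == "01"
```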
Approximate Hashing for Bioinformatics
A particular form of lossless data compression is known as deduplication. It is often applied when a large data repository is given and we wish to store a new, updated version of it in which the changes account for only a tiny fraction of the accumulated information. The idea is then to find duplicated parts and store only one copy P of them; the second and subsequent occurrences of these parts can then be replaced by pointers to P.
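As a toy illustration of this idea (using exact fixed-size chunk hashing rather than the approximate hashing studied in the paper), a minimal deduplication sketch might look like this:

```python
# Toy deduplication: keep one copy of each unique chunk, replace repeats with
# pointers. Real systems typically use content-defined chunking and, as in the
# paper, approximate rather than exact matching; this is only an illustration.

import hashlib

def deduplicate(data: bytes, chunk_size: int = 64):
    """Return (store, refs): unique chunks keyed by digest, and per-chunk pointers."""
    store, refs = {}, []
    for i in range(0, len(data), chunk_size):
        chunk = data[i:i + chunk_size]
        digest = hashlib.sha256(chunk).hexdigest()
        if digest not in store:
            store[digest] = chunk          # first occurrence: keep the single copy P
        refs.append(digest)                # later occurrences: just a pointer to P
    return store, refs

def reconstruct(store, refs) -> bytes:
    return b"".join(store[d] for d in refs)

data = (b"A" * 64 + b"B" * 64) * 100        # highly repetitive repository
store, refs = deduplicate(data)
assert reconstruct(store, refs) == data     # lossless: 2 stored chunks, 200 pointers
```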
Point AE-DCGAN: A deep learning model for 3D point cloud lossy geometry compression
Compression of point cloud geometry through a single projection