The Data Compression Conference (DCC) is an international forum for current work on data compression and related applications. Both theoretical and experimental work are of interest.

We present a compact data structure that represents both the duration and the length of the homogeneous segments of trajectories of moving objects so that, like a data warehouse, it can efficiently answer cumulative queries. The division of trajectories into relevant segments has been studied in the literature under the topic of trajectory segmentation. In this paper, we design a data structure to represent the resulting segments compactly, together with algorithms to answer the most relevant queries.
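The cumulative queries mentioned above can be illustrated with a toy sketch (this is not the paper's compact structure, which is succinct; plain prefix sums merely show what a range-cumulative query over segment lengths and durations computes):

```python
from itertools import accumulate

# Toy data: each homogeneous segment as a (length, duration) pair
segments = [(120, 30), (80, 15), (200, 60), (50, 10)]

# Prefix sums answer any range-cumulative query in O(1)
len_prefix = [0] + list(accumulate(l for l, _ in segments))
dur_prefix = [0] + list(accumulate(d for _, d in segments))

def cumulative(i, j):
    """Total length and duration of segments i..j (inclusive, 0-based)."""
    return (len_prefix[j + 1] - len_prefix[i],
            dur_prefix[j + 1] - dur_prefix[i])

print(cumulative(1, 2))  # → (280, 75)
```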

The successor and predecessor problem consists of finding the closest value in a set of integers that is greater or smaller than a given value. This problem has interesting applications, such as the intersection of inverted lists. It can easily be modeled with a bitvector of size n and its rank and select operations. However, there is a practical approach [1] that retains the best theoretical bounds while solving successor and predecessor more efficiently.
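The bitvector modeling can be sketched as follows (a minimal, unoptimized illustration; real implementations use succinct rank/select structures rather than these linear scans):

```python
# A set of integers encoded as a bitvector: B[i] = 1 iff i is in the set.
B = [0, 1, 0, 0, 1, 1, 0, 1]  # encodes the set {1, 4, 5, 7}

def rank(i):
    """Number of 1s in B[0..i)."""
    return sum(B[:i])

def select(k):
    """Position of the k-th 1 (1-based), or None."""
    count = 0
    for pos, bit in enumerate(B):
        count += bit
        if count == k:
            return pos
    return None

def successor(x):
    """Smallest element >= x, or None."""
    k = rank(x) + 1              # index of the first 1 at or after x
    return select(k) if k <= rank(len(B)) else None

def predecessor(x):
    """Largest element <= x, or None."""
    k = rank(x + 1)              # number of 1s at or before x
    return select(k) if k >= 1 else None

print(successor(2), predecessor(6))  # → 4 5
```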

We consider the problem of coding for computing with maximal distortion, in which a sender communicates with a receiver that holds its own private data and wants to compute a function of their combined data under a fidelity constraint known to both agents. We show that the minimum rate for this problem equals the conditional entropy of a hypergraph, and we design practical codes for the problem. Moreover, the minimum rate may be a discontinuous function of the fidelity constraint.
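As a point of reference for the rate characterization, the following toy computes the plain conditional entropy H(X|Y), which the paper's conditional hypergraph entropy generalizes (the hypergraph construction itself is beyond this sketch; the joint distribution below is made up):

```python
import math

# Toy joint distribution p(x, y) over two binary variables
pxy = {(0, 0): 0.25, (0, 1): 0.25, (1, 0): 0.25, (1, 1): 0.25}

def conditional_entropy(pxy):
    """H(X|Y) = sum_{x,y} p(x,y) * log2( p(y) / p(x,y) )."""
    py = {}
    for (x, y), p in pxy.items():
        py[y] = py.get(y, 0.0) + p
    return sum(p * math.log2(py[y] / p)
               for (x, y), p in pxy.items() if p > 0)

print(conditional_entropy(pxy))  # → 1.0 (X independent of Y, one bit needed)
```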

Video summarization addresses the problem of selecting a concise set of frames or shots that preserves the most essential content of the original video. Most current approaches apply a Recurrent Neural Network (RNN) to learn the interdependencies among video frames without considering the distinct information of particular frames. Other methods leverage the attention mechanism to explore the characteristics of certain frames while ignoring the systematic knowledge across the video sequence.

Trellis quantization, as a structured vector quantizer, can improve the rate-distortion performance of traditional scalar quantizers. As such, it has found its way into the JPEG 2000 standard and, more recently, as an option in HEVC. In this paper, a trellis quantization option for JPEG XS is considered and analyzed; JPEG XS is a low-complexity, low-latency, high-speed "mezzanine" codec for Video-over-IP transmission in professional production environments and industrial applications where high compression
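The trellis idea can be sketched in miniature (a toy 2-state trellis-coded quantizer, not the JPEG XS or HEVC scheme; codebooks and transitions below are invented for illustration). Each trellis state selects a scalar codebook, and a Viterbi search over bit sequences minimizes total squared error:

```python
# Toy TCQ: state -> scalar reproduction levels, indexed by the emitted bit
CODEBOOKS = {0: [-1.5, 0.5], 1: [-0.5, 1.5]}
# Trellis transitions: (state, bit) -> next state
NEXT_STATE = {(0, 0): 0, (0, 1): 1, (1, 0): 0, (1, 1): 1}

def tcq_encode(samples):
    """Viterbi search: return (minimum total squared error, bit sequence)."""
    paths = {0: (0.0, [])}          # survivor per state: (cost, bits)
    for x in samples:
        new_paths = {}
        for state, (cost, bits) in paths.items():
            for bit in (0, 1):
                level = CODEBOOKS[state][bit]
                nstate = NEXT_STATE[(state, bit)]
                ncost = cost + (x - level) ** 2
                if nstate not in new_paths or ncost < new_paths[nstate][0]:
                    new_paths[nstate] = (ncost, bits + [bit])
        paths = new_paths
    return min(paths.values())

print(tcq_encode([1.5, -0.5]))  # → (1.0, [1, 0])
```

The state-dependent codebooks give the effect of a finer combined quantizer at the rate of a coarser one, which is where the rate-distortion gain over plain scalar quantization comes from.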

High Efficiency Video Coding (HEVC) is the latest video coding standard developed by the Joint Collaborative Team on Video Coding (JCT-VC). To guarantee successful transmission and make the best use of available network resources, an effective rate control mechanism plays a critical role in video coding standards. Coding performance can be maximised through the appropriate allocation of bits under the constraints of a total bit-rate budget and the buffer size.
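The allocation step can be sketched as follows (a simplified frame-level allocator, not HEVC's actual rate control algorithm; the weights and clamping bounds are invented for illustration):

```python
def allocate_bits(total_budget, weights, min_bits, max_bits):
    """Distribute total_budget across frames proportionally to weights,
    clamped to [min_bits, max_bits] per frame to respect buffer limits."""
    total_w = sum(weights)
    return [max(min_bits, min(max_bits, total_budget * w / total_w))
            for w in weights]

# e.g. an I-frame weighted heavier than the four P-frames that follow it
print(allocate_bits(10000, [4, 1, 1, 1, 1], 500, 6000))
# → [5000.0, 1250.0, 1250.0, 1250.0, 1250.0]
```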

Segmenting a document image into text-lines and words finds applications in many areas of Document Image Analysis (DIA), such as OCR, word spotting, and document retrieval. However, carrying out segmentation directly on compressed document images is still an unexplored and challenging research area. Since JPEG is the most widely accepted compression algorithm, this paper attempts to segment a JPEG-compressed printed text document image into text-lines and words without fully decompressing the image.

This paper designs a Distributed Arithmetic Coding (DAC) decoder based on the depth-first search method, the Depth-First Decoder (DFD). In addition, a method is proposed to control the decoder's complexity. Simulation results compare the DFD with the traditional Breadth-First Decoder (BFD), showing that, under the same complexity constraints, the DFD outperforms the BFD when the code length is not too long and the quality of the side information is not too poor.
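The depth-first strategy can be sketched generically (this is not the paper's DAC decoder; it only illustrates depth-first expansion of a binary decoding tree, preferring branches that agree with the side information and pruning paths whose mismatch cost exceeds a threshold):

```python
def dfs_decode(side_info, max_cost):
    """Return the first length-len(side_info) bit path whose Hamming
    distance to side_info stays within max_cost, or None."""
    n = len(side_info)
    stack = [([], 0)]                      # (partial path, cost so far)
    while stack:
        path, cost = stack.pop()
        if len(path) == n:
            return path
        expected = side_info[len(path)]
        # push the mismatching branch first so the matching one is popped first
        for bit in (1 - expected, expected):
            c = cost + (bit != expected)
            if c <= max_cost:              # prune overly costly paths
                stack.append((path + [bit], c))
    return None

print(dfs_decode([1, 0, 1, 1], 0))  # → [1, 0, 1, 1]
```

The appeal of depth-first over breadth-first is memory: only one root-to-leaf path plus its backtracking stack is kept alive, rather than a whole frontier of candidate paths.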
