- RD-OPTIMIZED 3D PLANAR MODEL RECONSTRUCTION & ENCODING FOR VIDEO COMPRESSION
Conventional video coding approaches follow a hybrid motion prediction / residual transform coding paradigm, which limits the discovery of redundancy to individual pairs of video frames.
On the other hand, computer vision techniques like structure-from-motion (SfM) have long exploited redundancy across a large group of frames to estimate a rigid 3D object structure.
In this paper, building on prior SfM techniques, we construct a rate-distortion (RD) optimized 3D planar model from a target spatial region in a frame group and use it as a unified signal predictor for these frames.
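The RD trade-off behind choosing such a unified predictor can be sketched as a Lagrangian cost comparison. The candidate names, numbers, and the SSD distortion measure below are illustrative assumptions, not the paper's actual model:

```python
import numpy as np

def rd_cost(distortion, rate, lam):
    # Lagrangian rate-distortion cost J = D + lambda * R
    return distortion + lam * rate

def choose_predictor(target, candidates, lam):
    """Pick the candidate predictor with the lowest RD cost.

    `candidates` maps a name to (prediction, rate_in_bits); the names
    and numbers here are hypothetical.
    """
    best_name, best_cost = None, float("inf")
    for name, (pred, rate) in candidates.items():
        ssd = float(np.sum((target - pred) ** 2))  # sum of squared differences
        cost = rd_cost(ssd, rate, lam)
        if cost < best_cost:
            best_name, best_cost = name, cost
    return best_name, best_cost

target = np.array([[10.0, 12.0], [11.0, 13.0]])
candidates = {
    # A shared 3D planar model amortizes its rate over many frames,
    # but here we charge it a large illustrative per-block rate.
    "planar_model": (np.array([[10.0, 12.0], [11.0, 12.0]]), 40.0),
    "inter_pred":   (np.array([[9.0, 12.0], [11.0, 15.0]]), 8.0),
}
name, cost = choose_predictor(target, candidates, lam=0.5)
```

With these made-up numbers the cheaper conventional predictor wins; as the model's rate is amortized over more frames of the group, the balance shifts toward the planar model.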
- A STUDY ON THE 4D SPARSITY OF JPEG PLENO LIGHT FIELDS USING THE DISCRETE COSINE TRANSFORM
In this work we study the 4D sparsity of light fields using the 4D Discrete Cosine Transform as the main tool. We analyze the two JPEG Pleno light field datasets, namely the lenslet-based and the High-Density Camera Array (HDCA) datasets. The results suggest that the lenslet datasets exhibit high 4D redundancy, with larger inter-view sparsity than intra-view sparsity. The HDCA datasets also contain 4D redundancy worth exploiting, though to a smaller degree; unlike the lenslet case, their intra-view redundancy is much larger than the inter-view one.
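As a rough illustration of such a sparsity measurement, the sketch below builds an orthonormal separable 4D DCT with NumPy and reports the fraction of transform energy captured by the largest coefficients. The synthetic light field and the energy-based measure are assumptions, not the paper's data or metric:

```python
import numpy as np

def dct_matrix(n):
    # Orthonormal DCT-II matrix of size n x n
    k = np.arange(n)[:, None]
    i = np.arange(n)[None, :]
    m = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * i + 1) * k / (2 * n))
    m[0, :] = np.sqrt(1.0 / n)
    return m

def dct4(x):
    # Separable 4D DCT: apply the 1D transform along each axis in turn
    for axis in range(4):
        m = dct_matrix(x.shape[axis])
        x = np.moveaxis(np.tensordot(m, np.moveaxis(x, axis, 0), axes=1), 0, axis)
    return x

def energy_fraction(lf, keep_fraction):
    # Share of total transform energy held by the largest coefficients
    mags = np.sort(np.abs(dct4(lf)).ravel())[::-1]
    k = max(1, int(keep_fraction * mags.size))
    return float(np.sum(mags[:k] ** 2) / np.sum(mags ** 2))

rng = np.random.default_rng(0)
# Smooth synthetic light field (view_v, view_u, y, x) plus mild noise
v, u, y, x = np.meshgrid(*[np.linspace(0, 1, 8)] * 4, indexing="ij")
lf = np.cos(2 * np.pi * (v + y)) + 0.01 * rng.standard_normal(v.shape)
ratio = energy_fraction(lf, keep_fraction=0.05)
```

A ratio close to 1 at a small `keep_fraction` indicates high 4D redundancy; applying the same measure separately along the view axes and the spatial axes would distinguish inter-view from intra-view sparsity.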
icip-2018-4d-dct.pdf
This work presents a thresholding method for processing the predicted samples in the state-of-the-art High Efficiency Video Coding (HEVC) standard. The method applies an integer-based approximation of the discrete cosine transform to an extended prediction block and sets transform coefficients beneath a certain threshold to zero. Transforming back into the sample domain yields the improved prediction signal. The method is incorporated into a software implementation that is conforming to the HEVC standard and applies to both intra and inter predictions.
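The idea can be sketched as follows, with a floating-point DCT standing in for the standard's integer approximation and an arbitrary illustrative threshold:

```python
import numpy as np

def dct_mat(n):
    # Orthonormal DCT-II matrix
    k = np.arange(n)[:, None]
    i = np.arange(n)[None, :]
    c = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * i + 1) * k / (2 * n))
    c[0, :] = np.sqrt(1.0 / n)
    return c

def threshold_prediction(pred, thresh):
    """Smooth a (square) prediction block by transform-domain thresholding.

    Forward 2D DCT, zero coefficients below `thresh`, inverse transform.
    """
    c = dct_mat(pred.shape[0])
    coeffs = c @ pred @ c.T              # forward 2D DCT
    coeffs[np.abs(coeffs) < thresh] = 0  # drop small (likely noisy) coefficients
    return c.T @ coeffs @ c              # back to the sample domain

# A ramp-like prediction block with a small oscillating disturbance
pred = np.outer(np.arange(8.0), np.ones(8)) + 0.1 * np.sin(np.arange(8.0))
out = threshold_prediction(pred, thresh=0.5)
```

Since the transform is orthonormal, zeroing coefficients can only reduce the block's energy; a threshold of zero leaves the prediction unchanged.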
- Low Complexity Joint RDO of Prediction Units Couples for HEVC Intra Coding
HEVC is the latest block-based video compression standard, achieving about 50% bitrate savings over H.264/AVC at the same perceptual quality. An HEVC encoder provides rate-distortion optimized coding tools for block-wise compression. Because of complexity constraints, Rate-Distortion Optimization (RDO) is usually performed independently for each block, assuming the resulting coding-efficiency losses are negligible.
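The gap between independent and joint RDO can be sketched with two neighboring prediction units whose mode choices interact through a shared signaling rate. All (distortion, rate) numbers and the coupling term below are made up for illustration:

```python
from itertools import product

# Hypothetical (distortion, rate) per candidate mode for two neighboring
# prediction units A and B, plus an illustrative signaling-rate penalty
# paid when the two pick different modes.
costs_a = {0: (10.0, 2.0), 1: (6.0, 5.0)}
costs_b = {0: (9.0, 2.0), 1: (7.5, 5.0)}

def rd(d, r, lam):
    # Lagrangian cost J = D + lambda * R
    return d + lam * r

def coupling_rate(ma, mb):
    return 0.0 if ma == mb else 3.0

def independent_rdo(lam):
    # Each unit minimizes its own cost, ignoring the coupling term
    ma = min(costs_a, key=lambda m: rd(*costs_a[m], lam))
    mb = min(costs_b, key=lambda m: rd(*costs_b[m], lam))
    return ma, mb

def joint_rdo(lam):
    # Exhaustive search over the couple, coupling included
    return min(product(costs_a, costs_b),
               key=lambda p: rd(*costs_a[p[0]], lam) + rd(*costs_b[p[1]], lam)
                             + lam * coupling_rate(*p))

def pair_cost(ma, mb, lam):
    return (rd(*costs_a[ma], lam) + rd(*costs_b[mb], lam)
            + lam * coupling_rate(ma, mb))
```

With these numbers, independent RDO picks different modes for the two units and pays the coupling penalty, while the joint search finds a cheaper matched pair; the paper's contribution is making such joint decisions at low complexity.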
- Autoencoder-based image compression: can the learning be quantization independent?
- LEARNING-BASED COMPLEXITY REDUCTION AND SCALING FOR HEVC ENCODERS
- A JOINT SOURCE CHANNEL ARITHMETIC MAP DECODER USING PROBABILISTIC RELATIONS AMONG INTRA MODES IN PREDICTIVE VIDEO COMPRESSION
In this paper, the residual redundancy in compressed video is exploited to alleviate transmission errors using joint source-channel arithmetic decoding. A new method is proposed to estimate the a priori probability in the MAP metric of the H.264 intra-mode decoder. The decoder generates a decoding tree using a breadth-first search algorithm, and the introduced statistical model is then applied stage by stage over the decoding tree.
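A heavily simplified picture of the MAP metric: a prior over intra modes conditioned on neighboring modes is combined with a channel likelihood for the received bits, and the candidate maximizing the product wins. Fixed-length codewords stand in for arithmetic coding here, and every table and probability below is a made-up stand-in for the paper's statistical model:

```python
# Hypothetical prior over intra modes given the neighbors' modes (context)
prior = {("DC", "DC"): {"DC": 0.6, "H": 0.25, "V": 0.15}}

# Toy fixed-length codewords (a real system would use arithmetic coding)
codeword = {"DC": "00", "H": "01", "V": "10"}

def channel_llh(candidate, received, p=0.1):
    # Memoryless bit-flip likelihood with crossover probability p
    flips = sum(a != b for a, b in zip(candidate, received))
    return (p ** flips) * ((1 - p) ** (len(candidate) - flips))

def map_decode(context, received):
    # MAP metric: argmax over modes of prior(mode | context) * likelihood
    return max(prior[context],
               key=lambda m: prior[context][m]
                             * channel_llh(codeword[m], received))
```

Even when the received bits exactly match a low-probability mode, a strong prior can override them, which is how residual redundancy corrects channel errors.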
- FAST 3D-HEVC DEPTH MAPS INTRA-FRAME PREDICTION USING DATA MINING
This paper presents a fast 3D-High Efficiency Video Coding (3D-HEVC) depth-map intra-frame prediction scheme based on static Coding Unit (CU) splitting decision trees. The approach uses data mining to extract the correlation among encoder context attributes and to define a split decision tree for each CU level of the depth-map encoding. The decision trees were trained on information extracted from the 3D-HEVC Test Model (3D-HTM) under the Common Test Conditions (CTC).
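Once mined offline, a static split decision tree reduces at encoding time to a few attribute comparisons. The features and thresholds below are illustrative stand-ins for the encoder-context attributes extracted from 3D-HTM:

```python
def cu_split_decision(variance, rd_cost_ratio, edge_strength):
    """Static split/no-split decision for a depth-map CU (a sketch).

    `variance`: sample variance of the CU; `rd_cost_ratio`: current-level
    RD cost relative to a reference; `edge_strength`: depth-edge measure.
    All attribute names and thresholds are hypothetical.
    """
    if variance < 4.0:
        return False            # smooth depth region: do not split
    if rd_cost_ratio > 1.2:
        return True             # current level clearly too costly: split
    return edge_strength > 0.5  # borderline case: rely on edge evidence
```

Skipping the full recursive RD evaluation whenever the tree is confident is what yields the encoder speed-up, at the price of occasional suboptimal splits.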
- CLUSTER-BASED POINT CLOUD CODING WITH NORMAL WEIGHTED GRAPH FOURIER TRANSFORM
Point clouds have attracted increasing attention for 3D object representation, especially in free-view rendering. However, deploying point clouds efficiently is challenging due to their huge data volume, which spans multiple attributes including coordinates, normals, and color. To represent point clouds more compactly, we propose a novel compression method for point cloud attributes based on geometric clustering and a Normal Weighted Graph Fourier Transform (NWGFT).
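The transform itself can be sketched as the eigendecomposition of a graph Laplacian whose edge weights favor points with similar normals. The connection radius, weight kernel, and all parameters below are illustrative assumptions, not the paper's construction:

```python
import numpy as np

def normal_weighted_gft(points, normals, attr, sigma=0.5, radius=1.5):
    """Graph Fourier transform of a per-point attribute (a sketch).

    Edges connect points within `radius`; weights decay with the angle
    between the endpoints' normals.
    """
    n = len(points)
    w = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            if np.linalg.norm(points[i] - points[j]) <= radius:
                cos_sim = abs(np.dot(normals[i], normals[j]))
                w[i, j] = w[j, i] = np.exp(-(1.0 - cos_sim) / sigma)
    lap = np.diag(w.sum(axis=1)) - w   # combinatorial graph Laplacian
    evals, evecs = np.linalg.eigh(lap)  # orthonormal GFT basis
    return evecs.T @ attr               # attribute spectrum

rng = np.random.default_rng(1)
pts = rng.random((6, 3))                          # toy cluster of points
nrm = np.tile(np.array([0.0, 0.0, 1.0]), (6, 1))  # identical normals
col = np.ones(6)                                  # constant attribute
spec = normal_weighted_gft(pts, nrm, col)
```

Because the basis is orthonormal, the spectrum preserves the attribute's energy; smooth attributes compact into the low-frequency coefficients, which is what makes the transform useful for coding.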
- An efficient deep convolutional laplacian pyramid architecture for CS reconstruction at low sampling ratios
Compressed sensing (CS) has been successfully applied to image compression in recent years, since most image signals are sparse in some domain. Several CS reconstruction models have been proposed and achieve strong performance. However, these methods often suffer from blocking artifacts or ringing effects at low sampling ratios. To address this problem, we propose a deep convolutional Laplacian Pyramid Compressed Sensing Network (LapCSNet) for CS, which consists of a sampling sub-network and a reconstruction sub-network.
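The sampling sub-network's role can be pictured as a linear measurement y = Φx. The sketch below pairs it with a plain pseudoinverse baseline purely to illustrate the sampling/reconstruction split; LapCSNet itself replaces this baseline with a learned deep reconstruction, and the dimensions here are arbitrary:

```python
import numpy as np

def cs_sample(x, phi):
    # Linear CS measurement: y = Phi x
    return phi @ x

def cs_reconstruct_ls(y, phi):
    # Minimum-norm least-squares recovery (a baseline, not LapCSNet)
    return np.linalg.pinv(phi) @ y

rng = np.random.default_rng(0)
n, m = 64, 16                                  # sampling ratio m/n = 0.25
phi = rng.standard_normal((m, n)) / np.sqrt(m)  # random measurement matrix
x = np.zeros(n)
x[[3, 10, 40]] = [1.0, -2.0, 0.5]               # sparse test signal
y = cs_sample(x, phi)
x_hat = cs_reconstruct_ls(y, phi)
```

The pseudoinverse reproduces the measurements exactly but smears energy across all coordinates instead of recovering the sparse signal, which is precisely the kind of low-sampling-ratio artifact a learned reconstruction network aims to remove.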