Improving PSNR-Based Quality Metrics Performance for Point Cloud Geometry
Increased interest in immersive applications has drawn attention to emerging 3D imaging representation formats, notably light fields and point clouds (PCs). PCs are now one of the most popular 3D media formats, owing to recent developments in PC acquisition, namely depth sensors and signal processing algorithms. Obtaining a high-fidelity 3D representation of a visual scene typically requires acquiring a huge amount of PC data, which demands efficient compression solutions.
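As background for the metrics the paper builds on, the sketch below computes a conventional symmetric point-to-point (D1) geometry PSNR between two point clouds. The bounding-box-diagonal peak is one common convention, assumed here for illustration; it is not necessarily the paper's choice.

```python
import numpy as np
from scipy.spatial import cKDTree

def d1_geometry_psnr(ref, deg):
    """Symmetric point-to-point (D1) geometry PSNR sketch.

    ref, deg: (N, 3) and (M, 3) arrays of point positions.
    The peak is taken as the reference bounding-box diagonal,
    one common convention (other peak definitions exist).
    """
    def directional_mse(a, b):
        # mean squared nearest-neighbor distance from a to b
        dists, _ = cKDTree(b).query(a)
        return float(np.mean(dists ** 2))

    sym_mse = max(directional_mse(ref, deg), directional_mse(deg, ref))
    peak = float(np.linalg.norm(ref.max(axis=0) - ref.min(axis=0)))
    return 10.0 * np.log10(peak ** 2 / sym_mse)
```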
Existing techniques for compressing point cloud attributes leverage either geometric or video-based compression tools. We explore a radically different approach inspired by recent advances in point cloud representation learning: point clouds can be interpreted as 2D manifolds in 3D space. Specifically, we fold a 2D grid onto a point cloud and map attributes from the point cloud onto the folded grid using a novel optimized mapping method. The mapping yields an image, which opens the way to applying existing image processing techniques to point cloud attributes.
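A minimal sketch of the attribute-to-grid mapping step, using plain nearest-neighbor assignment as a stand-in for the paper's optimized mapping; the function name and the FoldingNet-style grid input are assumptions for illustration.

```python
import numpy as np
from scipy.spatial import cKDTree

def map_attributes_to_grid(grid_xyz, pc_xyz, pc_attr):
    """Transfer point-cloud attributes onto a folded 2D grid.

    grid_xyz: (H, W, 3) 3D positions of the folded grid, e.g. the
              output of a FoldingNet-style decoder.
    pc_xyz:   (N, 3) point positions.
    pc_attr:  (N, C) attributes (e.g. RGB colors).
    Returns an (H, W, C) attribute image.
    """
    H, W, _ = grid_xyz.shape
    # each grid cell takes the attribute of its nearest point
    _, idx = cKDTree(pc_xyz).query(grid_xyz.reshape(-1, 3))
    return pc_attr[idx].reshape(H, W, -1)
```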
High-Throughput JPEG2000 (HTJ2K) is a new addition to the JPEG2000 suite of coding tools; it was recently approved as Part 15 of the JPEG2000 standard, and the JPH file extension has been designated for it. HTJ2K employs a new “fast” block coder that achieves higher encoding and decoding throughput than a conventional JPEG2000 (C-J2K) encoder, because it processes the wavelet coefficients in a smaller number of steps.
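A rough way to compare the two coders' throughput, assuming only that the caller supplies encoder wrappers (no specific JPEG2000 library API is assumed here):

```python
import time

def encode_throughput(encode, image_px, runs=5):
    """Rough encoder throughput in megapixels per second.

    encode:   a zero-argument callable wrapping one encoder run;
              pass wrappers around a C-J2K coder and an HTJ2K coder
              to compare them (the wrappers are up to the caller).
    image_px: number of pixels in the encoded image.
    """
    start = time.perf_counter()
    for _ in range(runs):
        encode()
    elapsed = time.perf_counter() - start
    return runs * image_px / elapsed / 1e6
```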
DEPTH MAPS FAST SCALABLE COMPRESSION BASED ON CODING UNIT DEPTH
Semantic Preserving Image Compression
Video traffic comprises a large majority of the total traffic on the internet today. Uncompressed visual data requires a very high data rate, so lossy compression techniques are employed to keep the data rate manageable. Increasingly, a significant amount of the visual data being generated is consumed by analytics (such as classification and detection) residing in the cloud. Image and video compression can produce visual artifacts, especially at lower data rates, which can cause a significant drop in performance on such analytic tasks.
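One simple way to probe this effect is to check whether an analytic model's output survives re-encoding at decreasing quality. The sketch below does this for JPEG with Pillow; `classify` is a hypothetical stand-in for any cloud-side model.

```python
import io
from PIL import Image

def top1_agreement(img, classify, qualities=(90, 50, 20, 10)):
    """Check whether a model's top-1 label survives JPEG re-encoding.

    img:      a PIL Image.
    classify: hypothetical stand-in for a cloud-side analytic that
              returns a label for an image.
    """
    ref_label = classify(img)
    agreement = {}
    for q in qualities:
        buf = io.BytesIO()
        img.convert("RGB").save(buf, format="JPEG", quality=q)
        buf.seek(0)
        agreement[q] = classify(Image.open(buf)) == ref_label
    return agreement
```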
ALTERNATIVE HALF-SAMPLE INTERPOLATION FILTERS FOR VERSATILE VIDEO CODING
To reduce the residual energy of a video signal, motion-compensated prediction with fractional-sample accuracy has been successfully employed in modern video coding technology. In contrast to the fixed quarter-sample motion vector resolution for the luma component in the High Efficiency Video Coding standard, the current draft of the new Versatile Video Coding standard introduces a block-level adaptive motion vector resolution (AMVR) scheme, which allows the motion vector difference to be coded at different precisions.
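To illustrate the idea (not the actual VVC decision process), the sketch below quantizes a motion vector difference at several candidate precisions and picks the one minimizing a crude rate-distortion cost:

```python
import numpy as np

# candidate MVD precisions, in luma samples (quarter- to four-sample)
PRECISIONS = (0.25, 0.5, 1.0, 4.0)

def choose_mvd_precision(mvd, lam=1.0):
    """Pick the MVD precision with the best rate-distortion trade-off.

    mvd: exact motion vector difference (dx, dy) in luma samples.
    Distortion is the squared rounding error; the rate term is a
    crude bit-count proxy, not an actual entropy coder.
    """
    mvd = np.asarray(mvd, dtype=float)
    best = None
    for step in PRECISIONS:
        coded = np.round(mvd / step)            # integers signalled in the bitstream
        dist = float(np.sum((coded * step - mvd) ** 2))
        rate = sum(int(abs(c)).bit_length() + 1 for c in coded)
        cost = dist + lam * rate
        if best is None or cost < best[0]:
            best = (cost, step, coded * step)
    return best[1], best[2]  # chosen precision and quantized MVD
```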
Non-Experts or Experts? Statistical Analyses of MOS using DSIS Method
In image quality assessments, the results of subjective evaluation experiments that use the double-stimulus impairment scale (DSIS) method are often expressed in terms of the mean opinion score (MOS), which is the average score of all subjects for each test condition. Some MOS values are used to derive image quality criteria, and it has been assumed that it is preferable to perform tests with non-expert subjects rather than with experts. In this study, we analyze the results of several subjective evaluation experiments using the DSIS method.
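For reference, the MOS described above, together with a normal-approximation confidence interval, can be computed from a subject-by-condition score matrix as follows:

```python
import numpy as np

def mos_with_ci(scores, z=1.96):
    """MOS and a normal-approximation 95% confidence interval.

    scores: (num_subjects, num_conditions) array of DSIS ratings
            (typically 1-5, where 5 means "imperceptible").
    """
    scores = np.asarray(scores, dtype=float)
    mos = scores.mean(axis=0)                              # average over subjects
    sem = scores.std(axis=0, ddof=1) / np.sqrt(scores.shape[0])
    return mos, mos - z * sem, mos + z * sem
```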
Residual Coding for Transform Skip Mode in Versatile Video Coding
Machine-Learning-Based Method for Finding Optimal Video-Codec Configurations Using Physical Input-Video Features
Modern video codecs have many compression-tuning parameters from which numerous configurations (presets) can be constructed. The large number of presets complicates the search for one that delivers optimal encoding time, quality, and compressed-video size. This paper presents a machine-learning-based method that helps solve this problem. Applied to the x264 video codec, the method finds presets that demonstrate 9-20% bitrate savings relative to the standard x264 presets at comparable compressed-video quality and encoding time.
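The paper's exact features and model are not detailed here, but the general pipeline (learn a mapping from physical input-video features to the best preset) can be sketched with toy data as follows; all names and shapes below are illustrative assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Toy stand-ins: rows are videos, columns are physical input-video
# features (e.g. spatial detail, temporal motion); labels index the
# preset found best for each training video by exhaustive encoding.
rng = np.random.default_rng(0)
X_train = rng.random((200, 3))
y_train = rng.integers(0, 4, size=200)

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

x_new = rng.random((1, 3))       # features of an unseen video
print(model.predict(x_new))      # index of the recommended preset
```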
EPIC: Context Adaptive Lossless Light Field Compression using Epipolar Plane Images
This paper proposes extensions of CALIC for lossless compression of light field (LF) images. The overall prediction process is improved by exploiting the linear structure of Epipolar Plane Images (EPIs) in a slope-based prediction scheme. The prediction is further improved by averaging predictions made using horizontal and vertical EPIs. In addition, the difference between these predictions is included in the error energy function, and the texture context is redefined to improve the overall compression ratio.
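A minimal sketch of causal slope-based prediction on a single EPI, assuming rows are adjacent views; averaging the analogous predictions from horizontal and vertical EPIs, as described above, would combine two such passes. All names here are illustrative, not the paper's implementation.

```python
import numpy as np

def epi_slope_predict(epi, max_shift=3):
    """Causal slope-based prediction over one EPI (rows = views).

    A scene point traces a line in the EPI whose slope encodes its
    disparity. For each pixel the slope is estimated from the
    already-coded left neighbor: the shift d that best maps row s-1
    onto epi[s, u-1] is assumed to persist, so epi[s-1, u+d] is used
    as the prediction for epi[s, u].
    """
    epi = np.asarray(epi, dtype=float)
    pred = epi.copy()  # first row/column: no causal context, fall back
    S, U = epi.shape
    for s in range(1, S):
        for u in range(1, U):
            candidates = [d for d in range(-max_shift, max_shift + 1)
                          if 0 <= u - 1 + d < U and 0 <= u + d < U]
            d = min(candidates,
                    key=lambda d: abs(epi[s - 1, u - 1 + d] - epi[s, u - 1]))
            pred[s, u] = epi[s - 1, u + d]
    return pred
```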