Multiscale Point Cloud Geometry Compression

Citation Author(s):
Jianqiang Wang, Dandan Ding, Zhu Li, Zhan Ma
Submitted by:
Jianqiang Wang
Last updated:
25 February 2021 - 6:54am
Document Type:
Presentation Slides
Document Year:
2021
Recent years have witnessed the growth of point cloud-based applications for both immersive media and 3D sensing in autonomous driving, owing to their realistic and fine-grained representation of 3D objects and scenes. However, compressing sparse, unstructured, and high-precision 3D points for efficient communication remains a challenging problem. In this paper, leveraging the sparse nature of point clouds, we propose a multiscale end-to-end learning framework that hierarchically reconstructs the 3D Point Cloud Geometry (PCG) via progressive re-sampling. The framework is built on top of a sparse convolution based autoencoder for point cloud compression and reconstruction. For an input PCG carrying only binary occupancy attributes, our framework produces a downscaled point cloud at the bottleneck layer that possesses both geometry and associated feature attributes. The geometric occupancy is then losslessly compressed using an octree codec, and the feature attributes are lossily compressed using a learned probabilistic context model. Compared with the state-of-the-art Video-based Point Cloud Compression (V-PCC) and Geometry-based PCC (G-PCC) schemes standardized by the Moving Picture Experts Group (MPEG), our method achieves more than 40% and 70% BD-Rate (Bjøntegaard Delta Rate) reduction, respectively. We make all materials publicly accessible at https://njuvision.github.io/PCGCv2/ for reproducible research.
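As a rough illustration of the multiscale pipeline described above, the sketch below builds a toy sparse-convolution autoencoder: stride-2 sparse convolutions downscale the input geometry to a bottleneck that carries coordinates plus learned feature attributes, and generative transposed convolutions progressively re-sample it back. This is a minimal sketch assuming MinkowskiEngine is available; the layer widths and depths are illustrative and not the paper's exact configuration, and the entropy coding stages (octree codec for occupancy, learned context model for features) are omitted.

import torch
import torch.nn as nn
import MinkowskiEngine as ME  # assumption: MinkowskiEngine is installed

class ToyMultiscaleSparseAE(nn.Module):
    """Illustrative sketch only: each stride-2 sparse convolution halves the
    geometry resolution; the bottleneck keeps downscaled coordinates plus
    feature attributes, which would then be entropy coded in practice."""
    def __init__(self, channels=32):
        super().__init__()
        self.enc = nn.Sequential(
            ME.MinkowskiConvolution(1, channels, kernel_size=3, stride=2, dimension=3),
            ME.MinkowskiReLU(),
            ME.MinkowskiConvolution(channels, channels, kernel_size=3, stride=2, dimension=3),
        )
        self.dec = nn.Sequential(
            ME.MinkowskiGenerativeConvolutionTranspose(channels, channels, kernel_size=3, stride=2, dimension=3),
            ME.MinkowskiReLU(),
            ME.MinkowskiGenerativeConvolutionTranspose(channels, 1, kernel_size=3, stride=2, dimension=3),
        )

    def forward(self, x):
        y = self.enc(x)       # downscaled geometry + feature attributes (bottleneck)
        x_hat = self.dec(y)   # progressive re-sampling back toward full resolution
        return y, x_hat

# Example input: 1000 random voxelized points with binary occupancy (feature = 1).
coords = torch.randint(0, 256, (1000, 3)).int()
coords = ME.utils.batched_coordinates([coords])            # prepend batch index
feats = torch.ones((coords.shape[0], 1), dtype=torch.float32)
x = ME.SparseTensor(features=feats, coordinates=coords)

y, x_hat = ToyMultiscaleSparseAE()(x)
print(y.C.shape, x_hat.C.shape)  # bottleneck vs. re-sampled coordinate sets

In the actual method, the occupancy at the bottleneck would be coded losslessly with an octree codec and the features with a learned probabilistic context model; the decoder side would also prune false voxels during upscaling, which this sketch does not attempt.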
