Supplementary Material for PIT-QMM

DOI:
10.60864/6kge-6c07
Citation Author(s):
Submitted by:
Anon Anon
Last updated:
5 February 2025 - 3:02pm
Document Type:
Appendix for paper submission
Appendix for PIT-QMM: A Large Multimodal Model for No-Reference Point Cloud Quality Assessment. Abstract of the full paper:

Large Multimodal Models (LMMs) have recently enabled considerable advances in the realm of image and video quality assessment, but this progress has yet to be fully explored in the domain of 3D assets. We are interested in using these models to conduct No-Reference Point Cloud Quality Assessment (NR-PCQA), where the aim is to automatically evaluate the perceptual quality of a point cloud in the absence of a reference. We begin with the observation that different modalities of data – text descriptions, 2D projections, and 3D point cloud views – provide complementary information about point cloud quality. We then construct PIT-QMM, a novel LMM for NR-PCQA that is capable of consuming text, images, and point clouds end-to-end to predict quality scores. Extensive experimentation shows that our proposed method outperforms the state of the art by significant margins on popular benchmarks with fewer training iterations. We also demonstrate that our framework enables distortion localization and identification, which paves a new way forward for model explainability and interactivity. Code and datasets are available at https://anonymous.4open.science/r/pit-qmm-BD1F/.
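To illustrate the data flow the abstract describes (complementary modality features fused end-to-end into a scalar quality score), here is a minimal NumPy sketch. The encoders and the regression head are hypothetical stand-ins: PIT-QMM uses learned text, image, and point-cloud encoders inside an LMM, whereas this sketch uses random projections purely to show the shapes involved.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(features: np.ndarray, dim: int = 8) -> np.ndarray:
    """Hypothetical per-modality encoder: maps raw features to a
    fixed-size embedding. A random projection stands in for the
    learned encoders used by the actual model."""
    proj = rng.standard_normal((features.shape[-1], dim))
    return features @ proj

# Stand-in feature vectors for the three modalities named in the abstract.
text_feats = rng.standard_normal(16)   # text-description features
image_feats = rng.standard_normal(32)  # 2D-projection features
cloud_feats = rng.standard_normal(64)  # 3D point-cloud-view features

# Fuse the modality embeddings, then regress a single quality score.
fused = np.concatenate(
    [encode(text_feats), encode(image_feats), encode(cloud_feats)]
)
head = rng.standard_normal(fused.shape[0])  # hypothetical linear head
quality_score = float(fused @ head)
print(fused.shape, quality_score)
```

The point of the sketch is only that three differently shaped inputs reduce to one fused representation and one scalar prediction; everything about the real architecture and training is deferred to the paper.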
