A FOVEATED VIDEO QUALITY ASSESSMENT MODEL USING SPACE-VARIANT NATURAL SCENE STATISTICS

Citation Author(s):
Anjul Patney, Richard Webb, Alan Bovik
Submitted by:
Yize Jin
Last updated:
23 September 2021 - 4:52pm
Document Type:
Presentation Slides
Document Year:
2021
Event:
Presenters:
Yize Jin
Paper Code:
1506

In Virtual Reality (VR) systems, head-mounted displays (HMDs) are widely used to present VR content. Displaying immersive (360-degree video) scenes poses greater challenges because of limits on computing power, frame rate, and transmission bandwidth. To address these problems, a variety of foveated video compression and streaming methods have been proposed. These exploit the nonuniform sampling density of the retinal photoreceptors and ganglion cells, which decreases rapidly with increasing eccentricity. Creating foveated immersive video content leads to the need for specialized foveated video quality predictors. Here we propose a No-Reference (NR, or blind) method which we call "Space-Variant BRISQUE" (SV-BRISQUE), based on a new space-variant natural scene statistics model. When tested on a large database of foveated, compression-distorted videos with accompanying human opinion scores, our new model achieves state-of-the-art (SOTA) performance, with correlations of 0.88 / 0.90 (PLCC / SROCC) against human subjectivity.
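The natural scene statistics feature underlying BRISQUE-family models is the set of mean-subtracted contrast-normalized (MSCN) coefficients; a space-variant model additionally conditions those statistics on retinal eccentricity. The sketch below illustrates that pipeline in NumPy. It is not the paper's implementation: the band edges, the per-band summary statistics, and the `band_features` helper are hypothetical placeholders for illustration only.

```python
import numpy as np

# MSCN coefficients (classical BRISQUE NSS feature), plus a hypothetical
# eccentricity banding to illustrate space-variant pooling. Band edges and
# per-band statistics here are illustrative, not the paper's actual values.

def _gauss_kernel(sigma=7 / 6, radius=3):
    x = np.arange(-radius, radius + 1, dtype=float)
    k = np.exp(-x**2 / (2 * sigma**2))
    return k / k.sum()

def _blur(img, k):
    # Separable 2-D Gaussian blur via two 1-D convolutions.
    out = np.apply_along_axis(np.convolve, 1, img, k, mode="same")
    out = np.apply_along_axis(np.convolve, 0, out, k, mode="same")
    return out

def mscn(frame, sigma=7 / 6):
    # Mean-subtracted, contrast-normalized coefficients of a luminance frame.
    k = _gauss_kernel(sigma)
    mu = _blur(frame, k)
    sd = np.sqrt(np.abs(_blur(frame**2, k) - mu**2))
    return (frame - mu) / (sd + 1.0)

def eccentricity_deg(h, w, fix_x, fix_y, pix_per_deg):
    # Angular distance (degrees) of each pixel from the fixation point,
    # under a simple flat-screen small-angle approximation.
    ys, xs = np.mgrid[0:h, 0:w]
    return np.hypot(xs - fix_x, ys - fix_y) / pix_per_deg

def band_features(coeffs, ecc, edges=(0, 5, 15, 40)):
    # Hypothetical space-variant pooling: summarize MSCN statistics
    # separately within concentric eccentricity bands.
    feats = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        sel = coeffs[(ecc >= lo) & (ecc < hi)]
        feats.extend([sel.var(), np.abs(sel).mean()])
    return np.array(feats)
```

A foveated quality model would then fit parametric distributions (e.g., generalized Gaussians, as in BRISQUE) to the coefficients in each band, letting the model's expectations vary with eccentricity rather than assuming spatially uniform statistics.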
