
MODELING SPARSE SPATIO-TEMPORAL REPRESENTATIONS FOR NO-REFERENCE VIDEO QUALITY ASSESSMENT

Citation Author(s):
Muhammed Shabeer P., Saurabhchand Bhati, Sumohana S. Channappayya
Submitted by:
Sumohana Channa...
Last updated:
12 November 2017 - 9:24am
Document Type:
Presentation Slides
Document Year:
2017
Event:
Paper Code:
SSP-O.5.1
 

We present a novel No-Reference (NR) video quality assessment (VQA) algorithm that operates on the sparse representation coefficients of local spatio-temporal (video) volumes. Our work is motivated by the observation that the primary visual cortex adopts a sparse coding strategy to represent visual stimuli. We use the popular K-SVD algorithm to construct a spatio-temporal dictionary that sparsely represents local spatio-temporal volumes of natural videos. We empirically demonstrate that the histogram of the sparse representation coefficients corresponding to each atom in the dictionary can be well modelled using a Generalised Gaussian Distribution (GGD). We then show that the GGD model parameters are good features for distortion estimation, which in turn leads to the proposed NR-VQA algorithm. The GGD model parameters corresponding to each atom of the dictionary form the feature vector that is used to predict quality using Support Vector Regression (SVR). The proposed algorithm delivers competitive performance on the LIVE VQA (SD), EPFL (SD) and LIVE Mobile (HD) databases. Our algorithm is called the SParsity based Objective VIdeo Quality Evaluator (SPOVIQE). The proposed algorithm is simple and computationally efficient compared with other state-of-the-art NR-VQA algorithms.
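A minimal sketch of the pipeline described above, under the following assumptions: scikit-learn's MiniBatchDictionaryLearning stands in for K-SVD, scipy.stats.gennorm fits the Generalised Gaussian Distribution to each atom's coefficients, and sklearn.svm.SVR performs the quality regression. The volume size, stride, dictionary size, and sparsity level are illustrative choices, not the authors' exact settings.

```python
# Sketch of an SPOVIQE-like pipeline (assumed parameters, not the published configuration).
import numpy as np
from scipy.stats import gennorm
from sklearn.decomposition import MiniBatchDictionaryLearning
from sklearn.svm import SVR

def extract_volumes(video, size=5, step=5):
    """Flatten local size x size x size spatio-temporal volumes into rows."""
    T, H, W = video.shape
    vols = []
    for t in range(0, T - size + 1, step):
        for y in range(0, H - size + 1, step):
            for x in range(0, W - size + 1, step):
                vols.append(video[t:t+size, y:y+size, x:x+size].ravel())
    return np.asarray(vols, dtype=np.float64)

def ggd_features(codes):
    """Fit a GGD to the coefficients of each dictionary atom; return (shape, scale) pairs."""
    feats = []
    for atom_coeffs in codes.T:                    # one column of codes per atom
        beta, _, alpha = gennorm.fit(atom_coeffs, floc=0.0)
        feats.extend([beta, alpha])
    return np.asarray(feats)

# Learn a spatio-temporal dictionary from (here, stand-in) pristine video volumes.
pristine_vols = extract_volumes(np.random.rand(30, 144, 176))
dico = MiniBatchDictionaryLearning(n_components=64, transform_algorithm="omp",
                                   transform_n_nonzero_coefs=5, random_state=0)
dico.fit(pristine_vols)

# Build GGD-parameter feature vectors for a labelled training set of distorted videos.
train_features, train_scores = [], []
for video, dmos in []:                             # placeholder loop over (video, DMOS) pairs
    codes = dico.transform(extract_volumes(video))
    train_features.append(ggd_features(codes))
    train_scores.append(dmos)

# Predict quality with Support Vector Regression on the GGD feature vectors.
if train_features:
    svr = SVR(kernel="rbf").fit(train_features, train_scores)
    test_codes = dico.transform(extract_volumes(np.random.rand(30, 144, 176)))
    predicted_quality = svr.predict([ggd_features(test_codes)])
```

The key design choice reflected here is that only the GGD shape and scale parameters per atom are retained as features, so the feature dimension depends on the dictionary size rather than the video length.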
