MODELING SPARSE SPATIO-TEMPORAL REPRESENTATIONS FOR NO-REFERENCE VIDEO QUALITY ASSESSMENT
- Submitted by:
- MUHAMMED P
- Last updated:
- 12 November 2017 - 7:49am
- Document Type:
- Presentation Slides
- Document Year:
- 2017
- Presenters:
- Nagabhushan Eswara
- Paper Code:
- 1590
We present a novel No-Reference (NR) video quality assessment
(VQA) algorithm that operates on the sparse representation
coefficients of local spatio-temporal (video) volumes.
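To make the idea of sparse coding a local volume concrete, here is a minimal sketch: one vectorised spatio-temporal volume is coded against a dictionary using greedy orthogonal matching pursuit. The dictionary here is random and the sizes (5×5×5 volumes, 256 atoms, 10 nonzeros) are illustrative assumptions, not the paper's settings; in the actual method the dictionary is learned with K-SVD.

```python
import numpy as np

def omp(D, x, k):
    """Greedy orthogonal matching pursuit: approximate x with at most k atoms of D."""
    residual = x.copy()
    support = []
    coeffs = np.zeros(D.shape[1])
    for _ in range(k):
        # Pick the atom most correlated with the current residual.
        idx = int(np.argmax(np.abs(D.T @ residual)))
        support.append(idx)
        # Least-squares fit of x on the selected atoms.
        sol, *_ = np.linalg.lstsq(D[:, support], x, rcond=None)
        coeffs[:] = 0.0
        coeffs[support] = sol
        residual = x - D @ coeffs
    return coeffs

rng = np.random.default_rng(0)
# Hypothetical sizes: a 5x5x5 spatio-temporal volume (125-dim) and 256 atoms.
D = rng.standard_normal((125, 256))
D /= np.linalg.norm(D, axis=0)   # unit-norm atoms, as in dictionary learning
x = rng.standard_normal(125)     # one vectorised local volume
alpha = omp(D, x, k=10)          # sparse code with at most 10 nonzeros
print(np.count_nonzero(alpha))
```

In the full method, codes like `alpha` are computed for many local volumes of a video; the per-atom distribution of these coefficients is what gets modelled next.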
Our work is motivated by the observation that the primary
visual cortex adopts a sparse coding strategy to represent
visual stimulus. We use the popular K-SVD algorithm to construct
spatio-temporal dictionaries to sparsely represent local
spatio-temporal volumes of natural videos. We empirically
demonstrate that the histogram of the sparse representations
corresponding to each atom in the dictionary can be well
modelled using a Generalised Gaussian Distribution (GGD).
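The GGD modelling step can be sketched with SciPy's generalised normal distribution (`scipy.stats.gennorm`). The coefficient samples below are synthetic stand-ins for one atom's sparse coefficients (drawn from a heavy-tailed GGD with shape 0.8, an assumed value); the fitted shape and scale are the kind of per-atom parameters the abstract describes.

```python
from scipy.stats import gennorm

# Synthetic stand-in for one atom's coefficient samples: a heavy-tailed
# GGD (shape beta = 0.8), roughly how sparse coefficients tend to look.
coeffs = gennorm.rvs(0.8, loc=0.0, scale=1.0, size=5000, random_state=1)

# Maximum-likelihood fit of the three GGD parameters. The shape and
# scale estimates would serve as per-atom features (names illustrative).
beta_hat, loc_hat, scale_hat = gennorm.fit(coeffs)
print(beta_hat, scale_hat)
```

With 5000 samples the fitted shape recovers the generating value closely, which is the empirical observation the abstract reports for real sparse coefficients.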
We then show that the GGD model parameters are good feature
vectors for distortion estimation. This, in turn, leads us
to the proposed NR-VQA algorithm. The GGD model parameters
corresponding to each atom of the dictionary form the
feature vector that is used to predict quality using Support
Vector Regression (SVR). The proposed algorithm delivers
competitive performance on the LIVE VQA (SD), EPFL
(SD) and LIVE Mobile (HD) databases.
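The regression stage described above can be sketched as follows. The feature dimension, training-set size, and SVR hyperparameters are illustrative assumptions: the features stand in for the concatenated per-atom GGD parameters, and the targets for subjective quality scores.

```python
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(2)
# Hypothetical setup: 40 training videos, each described by 2 GGD
# parameters per atom of a 256-atom dictionary (512 features total).
X_train = rng.standard_normal((40, 512))
y_train = rng.uniform(0, 100, size=40)   # synthetic subjective scores

# RBF-kernel SVR maps GGD-parameter features to a quality estimate;
# hyperparameters here are defaults, not tuned values from the paper.
model = SVR(kernel="rbf", C=1.0, epsilon=0.1)
model.fit(X_train, y_train)

X_test = rng.standard_normal((3, 512))
pred = model.predict(X_test)             # one quality score per test video
print(pred.shape)
```

In practice the model would be trained on features from distorted videos with known subjective scores and evaluated with cross-validation on the databases listed above.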
Our algorithm is called SParsity based Objective VIdeo Quality
Evaluator (SPOVIQE). The proposed algorithm is simple
and computationally efficient as compared with other
state-of-the-art NR-VQA algorithms.