Quality Assessment

A No-Reference Autoencoder Video Quality Metric


In this work, we introduce the No-reference Autoencoder VidEo (NAVE) quality metric, which is based on a deep autoencoder machine learning technique. The metric uses a set of spatial and temporal features to estimate the overall visual quality, taking advantage of the autoencoder's ability to produce a better and more compact set of features. NAVE was tested on two databases: the UnB-AVQ database and the LiveNetflix-II database.
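
The pipeline the abstract describes (hand-crafted spatial/temporal features, compressed by an autoencoder, then mapped to a quality score) can be illustrated with a toy linear version. Everything below is an assumption for illustration only: the features, scores, and layer sizes are synthetic and do not reproduce NAVE's actual architecture, features, or training.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins: 200 "videos" with 12 hand-crafted spatial/temporal
# features each, and MOS-like scores that depend linearly on them.
X = rng.normal(size=(200, 12))
mos = X @ rng.normal(size=12) + 0.1 * rng.normal(size=200)

# Linear autoencoder: squeeze the 12 features into a 4-D bottleneck code.
d, k, lr = X.shape[1], 4, 0.05
We = rng.normal(scale=0.1, size=(d, k))      # encoder weights
Wd = rng.normal(scale=0.1, size=(k, d))      # decoder weights
mse_before = ((X @ We @ Wd - X) ** 2).mean()
for _ in range(500):
    Z = X @ We                       # compact feature codes
    E = Z @ Wd - X                   # reconstruction error
    gWd = Z.T @ E / len(X)           # gradient of reconstruction loss
    gWe = X.T @ (E @ Wd.T) / len(X)
    Wd -= lr * gWd
    We -= lr * gWe
mse_after = ((X @ We @ Wd - X) ** 2).mean()  # drops as the code improves

# Map the compact codes to a quality score with least squares.
Z1 = np.c_[X @ We, np.ones(len(X))]
w, *_ = np.linalg.lstsq(Z1, mos, rcond=None)
corr = np.corrcoef(Z1 @ w, mos)[0, 1]        # correlation with "MOS"
```

The falling reconstruction error shows the bottleneck learning a compact code; the regression step then stands in for the metric's quality-estimation stage.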

Paper Details

Authors:
Helard B. Martinez, Mylène C. Q. Farias, Andrew Hines
Submitted On:
22 September 2019 - 12:53pm

Document Files

2019-09-ICIP-presentation.pdf

[1] Helard B. Martinez, Mylène C. Q. Farias, and Andrew Hines, "A No-Reference Autoencoder Video Quality Metric", IEEE SigPort, 2019. [Online]. Available: http://sigport.org/4816. Accessed: Oct. 17, 2019.

Quality Assessment of Images Undergoing Multiple Distortion Stages


In practical media distribution systems, visual content often undergoes multiple stages of quality degradations along the delivery chain between the source and destination. By contrast, current image quality assessment (IQA) models are typically validated on image databases with a single distortion stage. In this work, we construct two large-scale image databases that are composed of more than 2 million images undergoing multiple stages of distortions and examine how state-of-the-art IQA algorithms behave over distortion stages.

Paper Details

Authors:
Shahrukh Athar, Abdul Rehman, Zhou Wang
Submitted On:
15 September 2019 - 3:46am

Document Files

ICIP2017_Poster_Paper2942.pdf

[1] Shahrukh Athar, Abdul Rehman, Zhou Wang, "Quality Assessment of Images Undergoing Multiple Distortion Stages", IEEE SigPort, 2019. [Online]. Available: http://sigport.org/4618. Accessed: Oct. 17, 2019.

Perceptual Quality Assessment of UHD-HDR-WCG Videos


High Dynamic Range (HDR) Wide Color Gamut (WCG) Ultra High Definition (4K/UHD) content has become increasingly popular recently. Due to the increased data rate, novel video compression methods have been developed to maintain the quality of the videos being delivered to consumers under bandwidth constraints. This has led to new challenges for the development of objective Video Quality Assessment (VQA) models, which are traditionally designed without sufficient calibration and validation based on subjective quality assessment of UHD-HDR-WCG videos.

Paper Details

Authors:
Shahrukh Athar, Thilan Costa, Kai Zeng, Zhou Wang
Submitted On:
15 September 2019 - 3:34am

Document Files

ICIP2019_Slides_Paper1584.pdf

[1] Shahrukh Athar, Thilan Costa, Kai Zeng, Zhou Wang, "Perceptual Quality Assessment of UHD-HDR-WCG Videos", IEEE SigPort, 2019. [Online]. Available: http://sigport.org/4617. Accessed: Oct. 17, 2019.

AN EYE-TRACKING DATABASE OF VIDEO ADVERTISING


Reliably predicting where people look in images and videos remains challenging and requires substantial eye-tracking data to be collected and analysed for various applications. In this paper, we present an eye-tracking study in which twenty-eight participants viewed forty still scenes of video advertising. First, we analyse human attentional behaviour based on gaze data. Then, we evaluate to what extent a machine, i.e. a computational saliency model, can predict human behaviour. Experimental results show that there is a significant gap between human and machine in visual saliency.
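
One standard way to quantify the human-machine gap mentioned above is an AUC-style saliency metric: treat fixated pixels as positives and ask how well the model's saliency map ranks them above all other pixels. The sketch below is a generic illustration with a synthetic map and synthetic fixations; it is not the study's data or its exact evaluation protocol.

```python
import numpy as np

rng = np.random.default_rng(1)

def saliency_auc(sal_map, fixations):
    """ROC AUC of a saliency map against (row, col) fixation points:
    0.5 is chance, 1.0 means every fixated pixel outscores every other."""
    sal = sal_map.ravel()
    pos_idx = np.ravel_multi_index((fixations[:, 0], fixations[:, 1]),
                                   sal_map.shape)
    pos = sal[pos_idx]                    # saliency at fixated pixels
    mask = np.ones(sal.size, dtype=bool)
    mask[pos_idx] = False
    neg = sal[mask]                       # saliency everywhere else
    wins = (pos[:, None] > neg[None, :]).mean()
    ties = (pos[:, None] == neg[None, :]).mean()
    return wins + 0.5 * ties

# Synthetic "model": a centred Gaussian saliency map on a 64x64 frame.
yy, xx = np.mgrid[0:64, 0:64]
sal = np.exp(-((yy - 32) ** 2 + (xx - 32) ** 2) / (2 * 10 ** 2))

# Synthetic "humans": gaze clustered at the centre vs. scattered at random.
central = np.clip(rng.normal(32, 5, size=(30, 2)), 0, 63).astype(int)
random_fix = rng.integers(0, 64, size=(30, 2))

auc_central = saliency_auc(sal, central)    # high: model matches gaze
auc_random = saliency_auc(sal, random_fix)  # near 0.5: chance level
```

The gap between the two AUC values is exactly the kind of human-versus-machine discrepancy such a study measures, only here on toy data.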

Paper Details

Authors:
Hantao Liu
Submitted On:
10 September 2019 - 10:48pm

Document Files

Adverts poster.pdf

[1] Hantao Liu, "AN EYE-TRACKING DATABASE OF VIDEO ADVERTISING", IEEE SigPort, 2019. [Online]. Available: http://sigport.org/4583. Accessed: Oct. 17, 2019.

SUBJECTIVE ASSESSMENT OF IMAGE QUALITY INDUCED SALIENCY VARIATION


Our previous study has shown that image distortions cause saliency distraction, and that visual saliency of a distorted image differs from that of its distortion-free reference. Being able to measure such distortion-induced saliency variation (DSV) significantly benefits algorithms for automated image quality assessment. Methods of quantifying DSV, however, remain unexplored due to the lack of a benchmark. In this paper, we build a benchmark for the measurement of DSV through a subjective study.
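
As a hypothetical illustration of what a DSV measure could look like (the paper's benchmark defines its own ground truth; the `dsv_score` below is an assumed stand-in, not the authors' method), one can compare the saliency map of a distorted image against that of its reference, e.g. via Pearson correlation:

```python
import numpy as np

def dsv_score(sal_ref, sal_dist):
    """Hypothetical DSV measure: 1 - Pearson correlation between the
    reference and distorted saliency maps (0 = saliency unchanged)."""
    a = sal_ref.ravel() - sal_ref.mean()
    b = sal_dist.ravel() - sal_dist.mean()
    return 1.0 - (a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))

# Synthetic saliency maps: the reference attends to the image centre.
yy, xx = np.mgrid[0:48, 0:48]
blob = lambda r, c: np.exp(-((yy - r) ** 2 + (xx - c) ** 2) / 50.0)
sal_ref = blob(24, 24)

# A mild distortion barely moves attention; a severe one creates a new
# attractor in the corner (e.g. a strong blocking artifact).
sal_mild = 0.9 * sal_ref + 0.1 * blob(26, 26)
sal_severe = 0.4 * sal_ref + 0.6 * blob(8, 40)

dsv_mild = dsv_score(sal_ref, sal_mild)
dsv_severe = dsv_score(sal_ref, sal_severe)  # larger: more saliency drift
```

A subjective benchmark such as the one described above is what allows candidate measures like this to be validated against human data.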

Paper Details

Authors:
Wei Zhang, Hantao Liu
Submitted On:
10 September 2019 - 10:46pm

Document Files

Similarity poster.pdf

[1] Wei Zhang, Hantao Liu, "SUBJECTIVE ASSESSMENT OF IMAGE QUALITY INDUCED SALIENCY VARIATION", IEEE SigPort, 2019. [Online]. Available: http://sigport.org/4582. Accessed: Oct. 17, 2019.

BLIND QUALITY EVALUATOR FOR SCREEN CONTENT IMAGES VIA ANALYSIS OF STRUCTURE


Existing blind evaluators for screen content images (SCIs) are mainly learning-based and require a number of training images with co-registered human opinion scores. However, existing databases are small, and generating human opinion scores at scale is labor-intensive, time-consuming, and expensive. In this study, we propose a novel blind quality evaluator that requires no training.

Paper Details

Authors:
Guanghui Yue, Chunping Hou, and Weisi Lin
Submitted On:
11 May 2019 - 9:48pm

Document Files

icassp 2019 poster 2875.pdf

[1] Guanghui Yue, Chunping Hou, and Weisi Lin, "BLIND QUALITY EVALUATOR FOR SCREEN CONTENT IMAGES VIA ANALYSIS OF STRUCTURE", IEEE SigPort, 2019. [Online]. Available: http://sigport.org/4457. Accessed: Oct. 17, 2019.

Improving Facial Attractiveness Prediction via Co-Attention Learning


Facial attractiveness prediction has drawn considerable attention from the image processing community. Despite the substantial progress achieved by existing works, various challenges remain. One is the lack of an accurate representation of facial composition, which is essential for attractiveness evaluation. In this paper, we propose to use pixel-wise labelling masks as the meta information of facial composition and input them into a network to learn high-level semantic representations.

Paper Details

Authors:
Shengjie Shi, Fei Gao, Xuantong Meng, Xingxin Xu, Jingjie Zhu
Submitted On:
7 May 2019 - 11:09pm

Document Files

ICASSP2019_2862_Poster_small.pdf

[1] Shengjie Shi, Fei Gao, Xuantong Meng, Xingxin Xu, Jingjie Zhu, "Improving Facial Attractiveness Prediction via Co-Attention Learning", IEEE SigPort, 2019. [Online]. Available: http://sigport.org/3998. Accessed: Oct. 17, 2019.

S3D: Stacking Segmental P3D for Action Quality Assessment


Action quality assessment is crucial in areas such as sports, surgery, and assembly lines, where action skills can be evaluated. In this paper, we propose S3D, a segment-based P3D-fused network built upon ED-TCN, and push the performance on the UNLV-Dive dataset by a significant margin. We verify that segment-aware training performs better than full-video training, which turns out to focus on the water spray. We show that temporal segmentation can be embedded with little effort.

Paper Details

Authors:
Ye Tian, Austin Reiter, Gregory D. Hager, Trac D. Tran
Submitted On:
5 October 2018 - 2:08am

Document Files

AI Referee: Score Olympic Games

[1] Ye Tian, Austin Reiter, Gregory D. Hager, Trac D. Tran, "S3D: Stacking Segmental P3D for Action Quality Assessment", IEEE SigPort, 2018. [Online]. Available: http://sigport.org/3501. Accessed: Oct. 17, 2019.

PAC-Net: Pairwise Aesthetic Comparison Network For Image Aesthetic Assessment


Image aesthetic assessment is important for finding well-taken and appealing photographs, but it is challenging due to the ambiguity and subjectivity of aesthetic criteria. We develop the pairwise aesthetic comparison network (PAC-Net), which consists of two parts: aesthetic feature extraction and pairwise feature comparison. To alleviate the ambiguity and subjectivity, we train PAC-Net to learn the relative aesthetic ranks of two images by employing a novel loss function, called the aesthetic-adaptive cross entropy loss.
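
The pairwise comparison idea can be sketched with a RankNet-style pairwise cross-entropy on score differences. Note this is a generic stand-in under stated assumptions: the synthetic features, the sampled pairs, and the plain (non-adaptive) loss below are illustrative, not PAC-Net's learned features or its aesthetic-adaptive cross entropy loss.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins: 300 images with 8-D "aesthetic" features and a
# hidden ground-truth appeal score that is linear in those features.
X = rng.normal(size=(300, 8))
quality = X @ rng.normal(size=8)

# Training pairs (i, j), labelled 1 when image i is the more appealing.
i = rng.integers(0, 300, size=2000)
j = rng.integers(0, 300, size=2000)
y = (quality[i] > quality[j]).astype(float)

# Pairwise cross-entropy on the score difference: the scorer w only ever
# sees relative ranks, never absolute aesthetic scores.
w, lr = np.zeros(8), 0.1
for _ in range(200):
    diff = (X[i] - X[j]) @ w            # predicted score difference
    p = 1.0 / (1.0 + np.exp(-diff))     # P(i beats j)
    grad = ((p - y)[:, None] * (X[i] - X[j])).mean(axis=0)
    w -= lr * grad

pair_acc = (((X[i] - X[j]) @ w > 0) == (y == 1)).mean()  # train accuracy
```

Learning from relative judgments sidesteps the need for calibrated absolute scores, which is the motivation the abstract gives for the pairwise design.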

Paper Details

Submitted On:
4 October 2018 - 11:13am

Document Files

2018_ICIP_poster_ks.pdf

[1] "PAC-Net: Pairwise Aesthetic Comparison Network For Image Aesthetic Assessment", IEEE SigPort, 2018. [Online]. Available: http://sigport.org/3430. Accessed: Oct. 17, 2019.

VR IQA NET: Deep Virtual Reality Image Quality Assessment using Adversarial Learning


In this paper, we propose a novel virtual reality image quality assessment (VR IQA) method with adversarial learning for omnidirectional images. To take the characteristics of the omnidirectional image into account, we devise deep networks comprising a novel quality score predictor and a human perception guider. The proposed quality score predictor automatically predicts the quality score of a distorted image using latent spatial and positional features.

Paper Details

Authors:
Heoun-taek Lim, Hak Gu Kim, and Yong Man Ro
Submitted On:
20 April 2018 - 8:00am

Document Files

VR IQA NET-ICASSP2018

[1] Heoun-taek Lim, Hak Gu Kim, and Yong Man Ro, "VR IQA NET: Deep Virtual Reality Image Quality Assessment using Adversarial Learning", IEEE SigPort, 2018. [Online]. Available: http://sigport.org/3102. Accessed: Oct. 17, 2019.
