
Quality Assessment

Blind Quality Evaluator for Screen Content Images via Analysis of Structure


Existing blind quality evaluators for screen content images (SCIs) are mainly learning-based and require a large number of training images with co-registered human opinion scores. However, existing databases are small, and generating human opinion scores at scale is labor-intensive, time-consuming, and expensive. In this study, we propose a novel blind quality evaluator that requires no training.

Paper Details

Authors:
Guanghui Yue, Chunping Hou, and Weisi Lin
Submitted On:
11 May 2019 - 9:48pm

Document Files

icassp 2019 poster 2875.pdf


[1] Guanghui Yue, Chunping Hou, and Weisi Lin, "BLIND QUALITY EVALUATOR FOR SCREEN CONTENT IMAGES VIA ANALYSIS OF STRUCTURE", IEEE SigPort, 2019. [Online]. Available: http://sigport.org/4457. Accessed: May. 25, 2019.

Improving Facial Attractiveness Prediction via Co-Attention Learning


Facial attractiveness prediction has drawn considerable attention from the image processing community. Despite the substantial progress achieved by existing works, various challenges remain. One is the lack of an accurate representation of facial composition, which is essential for attractiveness evaluation. In this paper, we propose to use pixel-wise labelling masks as the meta information of facial composition and feed them into a network to learn high-level semantic representations.
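The core idea of feeding pixel-wise labelling masks alongside the image can be sketched minimally (this is an illustration of the general technique, not the authors' network; the class count and input layout are assumptions): the per-pixel facial-component labels are one-hot encoded and concatenated with the RGB channels before entering a network.

```python
import numpy as np

def stack_image_and_mask(image, mask, num_classes):
    """Concatenate an RGB image with a one-hot pixel-wise labelling mask.

    image: (H, W, 3) float array in [0, 1]
    mask:  (H, W) integer array of per-pixel facial-component labels
    Returns an (H, W, 3 + num_classes) array a CNN could consume.
    """
    one_hot = np.eye(num_classes, dtype=image.dtype)[mask]  # (H, W, num_classes)
    return np.concatenate([image, one_hot], axis=-1)

# Toy example: a 4x4 image with 3 hypothetical component classes
# (e.g. skin, eyes, mouth).
img = np.random.rand(4, 4, 3)
seg = np.random.randint(0, 3, size=(4, 4))
x = stack_image_and_mask(img, seg, num_classes=3)
print(x.shape)  # (4, 4, 6)
```

The network then sees facial composition explicitly at every pixel instead of having to infer it from appearance alone.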

Paper Details

Authors:
Shengjie Shi, Fei Gao, Xuantong Meng, Xingxin Xu, Jingjie Zhu
Submitted On:
7 May 2019 - 11:09pm

Document Files

ICASSP2019_2862_Poster_small.pdf


[1] Shengjie Shi, Fei Gao, Xuantong Meng, Xingxin Xu, Jingjie Zhu, "Improving Facial Attractiveness Prediction via Co-Attention Learning", IEEE SigPort, 2019. [Online]. Available: http://sigport.org/3998. Accessed: May. 25, 2019.

S3D: Stacking Segmental P3D for Action Quality Assessment


Action quality assessment is crucial in areas such as sports, surgery, and assembly lines, where action skills can be evaluated. In this paper, we propose S3D, a segment-based P3D-fused network built upon ED-TCN, and push performance on the UNLV-Dive dataset by a significant margin. We verify that segment-aware training performs better than full-video training, which turns out to focus on the water spray. We also show that temporal segmentation can be embedded with little effort.

Paper Details

Authors:
Ye Tian, Austin Reiter, Gregory D. Hager, Trac D. Tran
Submitted On:
5 October 2018 - 2:08am

Document Files

AI Referee: Score Olympic Games


[1] Ye Tian, Austin Reiter, Gregory D. Hager, Trac D. Tran, "S3D: Stacking Segmental P3D for Action Quality Assessment", IEEE SigPort, 2018. [Online]. Available: http://sigport.org/3501. Accessed: May. 25, 2019.

PAC-Net: Pairwise Aesthetic Comparison Network For Image Aesthetic Assessment


Image aesthetic assessment is important for finding well-taken and appealing photographs but is challenging due to the ambiguity and subjectivity of aesthetic criteria. We develop the pairwise aesthetic comparison network (PAC-Net), which consists of two parts: aesthetic feature extraction and pairwise feature comparison. To alleviate the ambiguity and subjectivity, we train PAC-Net to learn the relative aesthetic ranks of two images by employing a novel loss function, called the aesthetic-adaptive cross-entropy loss.
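The abstract does not spell out the aesthetic-adaptive cross-entropy loss, so the sketch below shows only the standard pairwise cross-entropy ranking loss (RankNet-style) that such a comparison network builds on; the paper's adaptive weighting is not reproduced.

```python
import numpy as np

def pairwise_rank_loss(score_a, score_b, a_better):
    """Standard pairwise cross-entropy ranking loss (RankNet-style).

    score_a, score_b: scalar aesthetic scores for images A and B.
    a_better: 1.0 if A is labelled the more attractive image, else 0.0.
    P(A > B) is modelled as a sigmoid of the score difference.
    """
    p = 1.0 / (1.0 + np.exp(-(score_a - score_b)))
    eps = 1e-12
    return -(a_better * np.log(p + eps) + (1.0 - a_better) * np.log(1.0 - p + eps))

# Correctly ordered pair -> small loss; reversed order -> large loss.
print(pairwise_rank_loss(2.0, 0.5, 1.0))  # small
print(pairwise_rank_loss(0.5, 2.0, 1.0))  # large
```

Training on relative ranks rather than absolute scores is what lets the network sidestep much of the subjectivity in raw aesthetic ratings.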

Paper Details

Submitted On:
4 October 2018 - 11:13am

Document Files

2018_ICIP_poster_ks.pdf


[1] "PAC-Net: Pairwise Aesthetic Comparison Network For Image Aesthetic Assessment", IEEE SigPort, 2018. [Online]. Available: http://sigport.org/3430. Accessed: May. 25, 2019.

VR IQA NET: Deep Virtual Reality Image Quality Assessment using Adversarial Learning


In this paper, we propose a novel virtual reality image quality assessment (VR IQA) method with adversarial learning for omnidirectional images. To take the characteristics of omnidirectional images into account, we devise deep networks comprising a novel quality score predictor and a human perception guider. The quality score predictor automatically predicts the quality score of a distorted image using latent spatial and positional features.
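The predictor/guider pairing follows the usual adversarial pattern: the guider learns to distinguish human opinion scores from predicted ones, while the predictor regresses toward human scores and simultaneously tries to fool the guider. The loss composition below is a generic sketch of that pattern; the exact terms, weighting, and network forms in the paper are not reproduced, and `alpha` is an assumed hyperparameter.

```python
import numpy as np

def guider_loss(d_real, d_fake):
    """Guider (discriminator) loss: score human opinions (real) high,
    predicted scores (fake) low. d_* are guider outputs in (0, 1)."""
    eps = 1e-12
    return -(np.log(d_real + eps) + np.log(1.0 - d_fake + eps))

def predictor_loss(q_pred, q_human, d_fake, alpha=0.1):
    """Predictor loss: regress toward the human score while pushing the
    guider's output on the predicted score toward 'real' (d_fake -> 1)."""
    eps = 1e-12
    return (q_pred - q_human) ** 2 + alpha * (-np.log(d_fake + eps))

# A confident, correct guider incurs less loss than a guessing one.
print(guider_loss(0.9, 0.1))
print(guider_loss(0.5, 0.5))
```

In training, the two losses are minimized alternately, so the guider steers the predictor toward outputs that are statistically indistinguishable from human opinion scores.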

Paper Details

Authors:
Heoun-taek Lim, Hak Gu Kim, and Yong Man Ro
Submitted On:
20 April 2018 - 8:00am

Document Files

VR IQA NET-ICASSP2018


[1] Heoun-taek Lim, Hak Gu Kim, and Yong Man Ro, "VR IQA NET: Deep Virtual Reality Image Quality Assessment using Adversarial Learning", IEEE SigPort, 2018. [Online]. Available: http://sigport.org/3102. Accessed: May. 25, 2019.

No-reference weighting factor selection for bimodal tomography

Paper Details

Authors:
Yan Guo, Bernd Rieger
Submitted On:
12 April 2018 - 10:58am

Document Files

ICASSP-Yan-No-Animation.pdf


[1] Yan Guo, Bernd Rieger, "No-reference weighting factor selection for bimodal tomography", IEEE SigPort, 2018. [Online]. Available: http://sigport.org/2374. Accessed: May. 25, 2019.

Variational Fusion of Time-of-Flight and Stereo Data Using Edge Selective Joint Filtering


In this paper, we propose variational fusion of time-of-flight (TOF) and stereo data using edge selective joint filtering (ESJF). We utilize ESJF to up-sample the low-resolution (LR) depth captured by the TOF camera and produce high-resolution (HR) depth maps with accurate edge information. First, we measure the confidence of the two sensors, which have different reliability, to fuse them. Then, we up-sample the TOF depth map using ESJF to generate discontinuity maps and protect edges in depth. Finally, we perform variational fusion of the TOF and stereo depth data based on total variation (TV), guided by the discontinuity maps.
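The final fusion step can be illustrated with a minimal sketch. This is not the authors' formulation: ESJF, the confidence measurement, and the discontinuity-map guidance are omitted, and plain gradient descent on a smoothed-TV energy stands in for their solver. It fuses two depth maps by minimizing confidence-weighted data terms plus a TV regularizer.

```python
import numpy as np

def fuse_tv(d_tof, d_stereo, w_tof, w_stereo, lam=0.1, steps=200, lr=0.2, eps=1e-6):
    """Gradient descent on a confidence-weighted, TV-regularised fusion energy:
        E(x) = w_tof*(x - d_tof)^2 + w_stereo*(x - d_stereo)^2 + lam*TV(x)
    TV is smoothed as sqrt(|grad x|^2 + eps) so the energy is differentiable.
    """
    # Initialise at the confidence-weighted average of the two depth maps.
    x = (w_tof * d_tof + w_stereo * d_stereo) / (w_tof + w_stereo)
    for _ in range(steps):
        # Forward differences of the current estimate (replicated boundary).
        gx = np.diff(x, axis=1, append=x[:, -1:])
        gy = np.diff(x, axis=0, append=x[-1:, :])
        mag = np.sqrt(gx ** 2 + gy ** 2 + eps)
        px, py = gx / mag, gy / mag
        # Backward-difference divergence of the normalised gradient field,
        # i.e. the (negative) gradient of the smoothed TV term.
        divx = px - np.roll(px, 1, axis=1)
        divx[:, 0] = px[:, 0]
        divy = py - np.roll(py, 1, axis=0)
        divy[0, :] = py[0, :]
        grad = (2.0 * w_tof * (x - d_tof)
                + 2.0 * w_stereo * (x - d_stereo)
                - lam * (divx + divy))
        x = x - lr * grad
    return x

# Constant toy depth maps: with equal confidences the fusion settles at the midpoint.
d_tof = np.ones((8, 8))
d_stereo = 3.0 * np.ones((8, 8))
fused = fuse_tv(d_tof, d_stereo, w_tof=1.0, w_stereo=1.0)
print(float(fused.mean()))  # 2.0
```

In the paper's full method, the TV weight would additionally be modulated by the ESJF discontinuity maps so smoothing is suppressed across depth edges.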

Paper Details

Authors:
Cheolkon Jung, Zhendong Zhang
Submitted On:
10 September 2017 - 11:11pm

Document Files

ICIP2017_Fusion_slides.


[1] Cheolkon Jung, Zhendong Zhang, "Variational Fusion of Time-of-Flight and Stereo Data Using Edge Selective Joint Filtering", IEEE SigPort, 2017. [Online]. Available: http://sigport.org/1877. Accessed: May. 25, 2019.