
Quality Assessment

S3D: Stacking Segmental P3D for Action Quality Assessment


Action quality assessment is crucial in areas such as sports, surgery, and assembly-line work, where action skills can be evaluated. In this paper, we propose S3D, a segment-based P3D-fused network built upon ED-TCN, and improve performance on the UNLV-Dive dataset by a significant margin. We verify that segment-aware training outperforms full-video training, which turns out to focus on the water spray. We also show that temporal segmentation can be embedded with little effort.
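The segment-aware idea can be illustrated with a minimal sketch: pool features within each temporal segment, score each segment separately, and aggregate into a video-level score. The linear scorer below is a stand-in for the paper's P3D backbone, and the segment boundaries are assumed inputs (in the paper they come from ED-TCN).

```python
import numpy as np

def segment_scores(frame_feats, boundaries, w, b):
    """Score each temporal segment separately, then aggregate into a video-level score."""
    scores = []
    for start, end in boundaries:
        seg = frame_feats[start:end].mean(axis=0)  # average-pool frames within the segment
        scores.append(float(seg @ w + b))          # stand-in linear scorer per segment
    return sum(scores)
```

Because each segment is scored on its own, the model cannot lean on cues from a single dominant phase (such as the water spray at entry) to explain the whole dive.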

Paper Details

Authors:
Ye Tian, Austin Reiter, Gregory D. Hager, Trac D. Tran
Submitted On:
5 October 2018 - 2:08am

Document Files

AI Referee: Score Olympic Games



[1] Ye Tian, Austin Reiter, Gregory D. Hager, Trac D. Tran, "S3D: Stacking Segmental P3D for Action Quality Assessment", IEEE SigPort, 2018. [Online]. Available: http://sigport.org/3501. Accessed: Oct. 16, 2018.

PAC-Net: Pairwise Aesthetic Comparison Network For Image Aesthetic Assessment


Image aesthetic assessment is important for finding well-taken, appealing photographs, but it is challenging due to the ambiguity and subjectivity of aesthetic criteria. We develop the pairwise aesthetic comparison network (PAC-Net), which consists of two parts: aesthetic feature extraction and pairwise feature comparison. To alleviate the ambiguity and subjectivity, we train PAC-Net to learn the relative aesthetic ranks of two images using a novel loss function, called the aesthetic-adaptive cross-entropy loss.
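The abstract does not spell out the loss, but the pairwise-ranking idea can be sketched with a standard RankNet-style cross entropy over the score difference of the two images; the paper's aesthetic-adaptive variant would further reweight this term, which is omitted here.

```python
import numpy as np

def pairwise_ce_loss(score_a, score_b, a_is_better):
    """RankNet-style pairwise cross entropy: model P(a beats b) = sigmoid(s_a - s_b)."""
    p = 1.0 / (1.0 + np.exp(-(score_a - score_b)))
    y = 1.0 if a_is_better else 0.0
    return -(y * np.log(p) + (1.0 - y) * np.log(1.0 - p))
```

Training on relative ranks rather than absolute scores sidesteps the subjectivity of assigning a single aesthetic value to an image.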

Paper Details

Submitted On:
4 October 2018 - 11:13am

Document Files

2018_ICIP_poster_ks.pdf



[1] , "PAC-Net: Pairwise Aesthetic Comparison Network For Image Aesthetic Assessment", IEEE SigPort, 2018. [Online]. Available: http://sigport.org/3430. Accessed: Oct. 16, 2018.

VR IQA NET: Deep Virtual Reality Image Quality Assessment using Adversarial Learning


In this paper, we propose a novel virtual reality image quality assessment (VR IQA) method with adversarial learning for omnidirectional images. To account for the characteristics of omnidirectional images, we devise deep networks comprising a novel quality score predictor and a human perception guider. The proposed quality score predictor automatically predicts the quality score of a distorted image using latent spatial and positional features.
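The predictor/guider pairing follows a familiar adversarial pattern, which can be sketched as follows; the exact objectives and the `alpha` weighting below are illustrative assumptions, not the paper's formulation.

```python
import numpy as np

def guider_loss(d_human, d_pred):
    """The guider (discriminator) learns to tell human scores from predicted ones."""
    return -(np.log(d_human) + np.log(1.0 - d_pred))

def predictor_loss(pred, human, d_pred, alpha=0.5):
    """The predictor mixes a regression term with an adversarial term that fools the guider."""
    return alpha * (pred - human) ** 2 - (1.0 - alpha) * np.log(d_pred)
```

As the guider gets better at spotting machine-predicted scores, the predictor is pushed toward outputs that are indistinguishable from human judgments.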

Paper Details

Authors:
Heoun-taek Lim, Hak Gu Kim, and Yong Man Ro
Submitted On:
20 April 2018 - 8:00am

Document Files

VR IQA NET-ICASSP2018



[1] Heoun-taek Lim, Hak Gu Kim, and Yong Man Ro, "VR IQA NET: Deep Virtual Reality Image Quality Assessment using Adversarial Learning", IEEE SigPort, 2018. [Online]. Available: http://sigport.org/3102. Accessed: Oct. 16, 2018.

No-reference weighting factor selection for bimodal tomography

Paper Details

Authors:
Yan Guo, Bernd Rieger
Submitted On:
12 April 2018 - 10:58am

Document Files

ICASSP-Yan-No-Animation.pdf



[1] Yan Guo, Bernd Rieger, "No-reference weighting factor selection for bimodal tomography", IEEE SigPort, 2018. [Online]. Available: http://sigport.org/2374. Accessed: Oct. 16, 2018.

Variational Fusion of Time-of-Flight and Stereo Data Using Edge Selective Joint Filtering


In this paper, we propose variational fusion of time-of-flight (TOF) and stereo data using edge selective joint filtering (ESJF). We utilize ESJF to up-sample the low-resolution (LR) depth captured by the TOF camera and produce high-resolution (HR) depth maps with accurate edge information. First, we measure the confidence of the two sensors, which have different reliability, to fuse them. Then, we up-sample the TOF depth map using ESJF to generate discontinuity maps and protect edges in depth. Finally, we perform variational fusion of the TOF and stereo depth data based on total variation (TV), guided by the discontinuity maps.
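The final fusion step can be sketched in one dimension: minimize confidence-weighted data terms toward each sensor's depth plus a TV smoothness term, here by plain subgradient descent. This omits the ESJF up-sampling and the discontinuity-map guidance of the TV weight, so it is only a simplified illustration of the variational fusion.

```python
import numpy as np

def fuse_depth(d_tof, d_stereo, c_tof, c_stereo, lam=0.1, step=0.1, iters=200):
    """Gradient descent on confidence-weighted data terms plus a TV smoothness term (1-D sketch)."""
    d = (c_tof * d_tof + c_stereo * d_stereo) / (c_tof + c_stereo)  # confidence-weighted init
    for _ in range(iters):
        data_grad = c_tof * (d - d_tof) + c_stereo * (d - d_stereo)
        # subgradient of the TV term sum_i |d[i] - d[i-1]|
        tv_grad = np.sign(np.diff(d, prepend=d[:1])) - np.sign(np.diff(d, append=d[-1:]))
        d = d - step * (data_grad + lam * tv_grad)
    return d
```

In the paper, the TV weight would additionally be suppressed near discontinuity-map edges so that genuine depth edges are not smoothed away.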

Paper Details

Authors:
Cheolkon Jung, Zhendong Zhang
Submitted On:
10 September 2017 - 11:11pm

Document Files

ICIP2017_Fusion_slides.



[1] Cheolkon Jung, Zhendong Zhang, "Variational Fusion of Time-of-Flight and Stereo Data Using Edge Selective Joint Filtering", IEEE SigPort, 2017. [Online]. Available: http://sigport.org/1877. Accessed: Oct. 16, 2018.