Image/Video Processing

Recurrent and Dynamic Models for Predicting Streaming Video Quality of Experience

Paper Details

Authors: Christos Bampis, Zhi Li, Ioannis Katsavounidis, Alan C. Bovik
Submitted On: 14 October 2018 - 10:55am

Document Files

GNARX_ICIP_2018_Poster.pdf

[1] Christos Bampis, Zhi Li, Ioannis Katsavounidis, Alan C. Bovik, "Recurrent and Dynamic Models for Predicting Streaming Video Quality of Experience", IEEE SigPort, 2018. [Online]. Available: http://sigport.org/3664. Accessed: Apr. 25, 2019.

Profile Hidden Markov Models for Foreground Object Modelling


Accurate background/foreground segmentation is an essential preliminary step in most visual surveillance applications. With the increasing use of freely moving cameras, strategies have been proposed to refine the initial segmentation. In this paper, we propose to exploit the Vide-omics paradigm, and Profile Hidden Markov Models in particular, to create a new type of object descriptor that relies on spatiotemporal information. The performance of the proposed methodology has been evaluated on a standard dataset of videos captured by moving cameras.
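
The paper's own descriptors are built from profile HMMs (the match/insert/delete state topology borrowed from bioinformatics), whose details are not reproduced in this abstract. As a loose, hypothetical stand-in for the general idea of scoring a pixel's temporal profile against a learned appearance model, the sketch below fits a plain Gaussian HMM with hmmlearn and uses the log-likelihood of a new profile as a score; the state count and the synthetic data are illustrative assumptions, not the authors' configuration.

```python
# Rough stand-in sketch (not the authors' profile HMM): score temporal
# pixel-intensity profiles with a plain Gaussian HMM from hmmlearn.
import numpy as np
from hmmlearn import hmm

rng = np.random.default_rng(0)

# Illustrative training data: 200 foreground pixel profiles, 50 frames each,
# stacked into one array with per-profile lengths (hmmlearn's convention).
profiles = rng.normal(loc=120.0, scale=15.0, size=(200, 50))
X_train = profiles.reshape(-1, 1)
lengths = [profiles.shape[1]] * profiles.shape[0]

# n_components=3 is an arbitrary choice for this sketch.
model = hmm.GaussianHMM(n_components=3, covariance_type="diag", n_iter=50)
model.fit(X_train, lengths)

# Score a new temporal profile: a higher log-likelihood means the pixel's
# behaviour over time is more consistent with the learned model.
new_profile = rng.normal(loc=118.0, scale=14.0, size=(50, 1))
print("log-likelihood:", model.score(new_profile))
```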

Paper Details

Authors: Francisco Florez-Revuelta, Jean-Christophe Nebel
Submitted On: 12 October 2018 - 7:02am

Document Files

Bioinformatics-inspired video analysis

[1] Francisco Florez-Revuelta, Jean-Christophe Nebel, "Profile Hidden Markov Models for Foreground Object Modelling", IEEE SigPort, 2018. [Online]. Available: http://sigport.org/3663. Accessed: Apr. 25, 2019.

Code: LOW-FREQUENCY IMAGE NOISE REMOVAL USING WHITE NOISE FILTER


Image noise filters usually assume the noise to be white Gaussian. However, in a capturing pipeline, noise often becomes spatially correlated due to in-camera processing that aims to suppress the noise and increase the compression rate. Mostly, only the high-frequency noise components are suppressed, since the image signal is more likely to appear in the low-frequency components of the captured image. As a result, the noise emerges as coarse grain, which makes white (all-pass) noise filters ineffective, especially when the resolution of the target display is lower than that of the captured image.
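
The authors' algorithm is available via the source-code link in this entry and is not reproduced here. Purely as an illustration of why a multiscale view helps with coarse-grain noise, the sketch below denoises one pyramid level down, where spatially correlated noise occupies higher normalized frequencies, and then restores the full-resolution high-frequency detail; the pyramid depth, input path, and filter strength are assumptions for illustration, not the authors' parameters.

```python
# Illustrative sketch only (not the authors' method): treat coarse-grain,
# low-frequency noise by denoising at a coarser pyramid level.
import cv2
import numpy as np

img = cv2.imread("noisy.png")  # hypothetical input path
h, w = img.shape[:2]

# One pyramid level down, the spatially correlated noise occupies higher
# normalized frequencies, where a standard (white-noise-oriented) denoiser
# such as non-local means is effective again.
small = cv2.pyrDown(img)
small_dn = cv2.fastNlMeansDenoisingColored(small, None, 10, 10, 7, 21)

# Upsample both the original coarse level and its denoised version, then
# swap only the low-frequency content of the full-resolution image.
up_orig = cv2.pyrUp(small, dstsize=(w, h)).astype(np.float32)
up_dn = cv2.pyrUp(small_dn, dstsize=(w, h)).astype(np.float32)
result = img.astype(np.float32) - up_orig + up_dn
cv2.imwrite("denoised.png", np.clip(result, 0, 255).astype(np.uint8))
```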

Paper Details

Authors: Meisam Rakhshanfar and Maria A. Amer
Submitted On: 10 October 2018 - 6:38pm

Document Files

Source code; see also https://users.encs.concordia.ca/~amer/LFNFilter/

[1] Meisam Rakhshanfar and Maria A. Amer, "Code: LOW-FREQUENCY IMAGE NOISE REMOVAL USING WHITE NOISE FILTER", IEEE SigPort, 2018. [Online]. Available: http://sigport.org/3659. Accessed: Apr. 25, 2019.

A UNIFIED FRAMEWORK FOR FAULT DETECTION OF FREIGHT TRAIN IMAGES UNDER COMPLEX ENVIRONMENT


This paper proposes a novel unified framework for fault detection in freight train images, based on a convolutional neural network (CNN), under complex environments. First, a multi region proposal network (MRPN) with a set of prior bounding boxes is introduced to achieve high-quality fault proposal generation. Then, we apply a linear non-maximum suppression method to retain the most suitable anchor while removing redundant boxes. Finally, a powerful multi-level region-of-interest (ROI) pooling is proposed for proposal classification and accurate detection.
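
The exact form of the paper's linear non-maximum suppression is not given in the abstract. A widely used linear variant (the linear Soft-NMS rule) decays the scores of boxes that overlap the currently selected anchor by a factor of (1 - IoU) instead of discarding them outright; the sketch below implements that generic rule, with illustrative thresholds that are not the paper's settings.

```python
# Generic linear (Soft-NMS style) non-maximum suppression sketch; thresholds
# and sample boxes are illustrative, not the paper's settings.
import numpy as np

def iou(box, boxes):
    """IoU between one box and an array of boxes, all as [x1, y1, x2, y2]."""
    x1 = np.maximum(box[0], boxes[:, 0])
    y1 = np.maximum(box[1], boxes[:, 1])
    x2 = np.minimum(box[2], boxes[:, 2])
    y2 = np.minimum(box[3], boxes[:, 3])
    inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
    area_a = (box[2] - box[0]) * (box[3] - box[1])
    area_b = (boxes[:, 2] - boxes[:, 0]) * (boxes[:, 3] - boxes[:, 1])
    return inter / (area_a + area_b - inter + 1e-9)

def linear_soft_nms(boxes, scores, iou_thresh=0.3, score_thresh=0.001):
    boxes, scores = boxes.copy(), scores.copy()
    keep = []
    while len(boxes) > 0:
        best = int(np.argmax(scores))
        keep.append((boxes[best], scores[best]))
        box = boxes[best]
        boxes = np.delete(boxes, best, axis=0)
        scores = np.delete(scores, best)
        if len(boxes) == 0:
            break
        overlaps = iou(box, boxes)
        # Linear decay: boxes overlapping the kept anchor lose score in
        # proportion to their IoU instead of being removed outright.
        decay = np.where(overlaps > iou_thresh, 1.0 - overlaps, 1.0)
        scores = scores * decay
        keep_mask = scores > score_thresh
        boxes, scores = boxes[keep_mask], scores[keep_mask]
    return keep

# Tiny usage example with hypothetical proposals.
boxes = np.array([[10, 10, 60, 60], [12, 12, 62, 62], [100, 100, 150, 150]], float)
scores = np.array([0.9, 0.8, 0.7])
print(linear_soft_nms(boxes, scores))
```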

Paper Details

Authors:
Submitted On: 9 October 2018 - 8:40pm

Document Files

ICIP-poster-A UNIFIED FRAMEWORK FOR FAULT DETECTION OF FREIGHT TRAIN IMAGES UNDER COMPLEX ENVIRONMENT.pdf

[1] , "A UNIFIED FRAMEWORK FOR FAULT DETECTION OF FREIGHT TRAIN IMAGES UNDER COMPLEX ENVIRONMENT", IEEE SigPort, 2018. [Online]. Available: http://sigport.org/3657. Accessed: Apr. 25, 2019.

CYCLOPEAN IMAGE BASED STEREOSCOPIC IMAGE QUALITY ASSESSMENT BY USING SPARSE REPRESENTATION

Paper Details

Authors: Yongli Chang, Sumei Li, Xu Han, Chunping Hou
Submitted On: 8 October 2018 - 11:47pm

Document Files

the poster of CYCLOPEAN IMAGE BASED STEREOSCOPIC IMAGE QUALITY ASSESSMENT BY USING SPARSE REPRESENTATION

[1] Yongli Chang, Sumei Li, Xu Han, Chunping Hou, "CYCLOPEAN IMAGE BASED STEREOSCOPIC IMAGE QUALITY ASSESSMENT BY USING SPARSE REPRESENTATION", IEEE SigPort, 2018. [Online]. Available: http://sigport.org/3652. Accessed: Apr. 25, 2019.

Supervised Deep Sparse Coding Networks

Paper Details

Authors: Xiaoxia Sun, Nasser M. Nasrabadi, Trac D. Tran
Submitted On: 8 October 2018 - 6:31pm

Document Files

TA.L1.4-SupervisedDeepSparseCodingNetworks.pdf

[1] Xiaoxia Sun, Nasser M. Nasrabadi, Trac D. Tran, "Supervised Deep Sparse Coding Networks", IEEE SigPort, 2018. [Online]. Available: http://sigport.org/3649. Accessed: Apr. 25, 2019.

LOW-FREQUENCY IMAGE NOISE REMOVAL USING WHITE NOISE FILTER


Image noise filters usually assume the noise to be white Gaussian. However, in a capturing pipeline, noise often becomes spatially correlated due to in-camera processing that aims to suppress the noise and increase the compression rate. Mostly, only the high-frequency noise components are suppressed, since the image signal is more likely to appear in the low-frequency components of the captured image. As a result, the noise emerges as coarse grain, which makes white (all-pass) noise filters ineffective, especially when the resolution of the target display is lower than that of the captured image.

Paper Details

Authors: Meisam Rakhshanfar and Maria A. Amer
Submitted On: 8 October 2018 - 6:24pm

Document Files

icip18Poster.pdf

[1] Meisam Rakhshanfar and Maria A. Amer, "LOW-FREQUENCY IMAGE NOISE REMOVAL USING WHITE NOISE FILTER", IEEE SigPort, 2018. [Online]. Available: http://sigport.org/3648. Accessed: Apr. 25, 2019.

Deep 3D Human Pose Estimation under Partial Body Presence


This paper addresses the problem of 3D human pose estimation when not all body parts are present in the input image, i.e., when some body joints are visible while others are fully absent (we exclude self-occlusion). State-of-the-art methods are not designed for, and are thus not effective in, such cases. We propose a deep CNN that regresses the human pose directly from an input image; we design and train this network to work under partial body presence. In parallel, we train a detection network to classify the presence or absence of each of the main body joints in the input image.
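
The abstract describes two networks: a regression CNN that maps the image directly to a 3D pose, and a detection network that classifies, per joint, whether it is present. The sketch below shows one way such a two-output design could be wired up in PyTorch; the backbone, joint count, and head sizes are illustrative assumptions, not the architecture reported in the paper.

```python
# Illustrative two-head design (not the paper's exact architecture):
# one head regresses 3D joint coordinates, the other predicts per-joint presence.
import torch
import torch.nn as nn

NUM_JOINTS = 16  # assumed joint count for this sketch

class PosePresenceNet(nn.Module):
    def __init__(self, num_joints=NUM_JOINTS):
        super().__init__()
        # Small convolutional backbone shared by both heads.
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # Head 1: regress (x, y, z) for every joint.
        self.pose_head = nn.Linear(64, num_joints * 3)
        # Head 2: per-joint presence logits (present vs. absent).
        self.presence_head = nn.Linear(64, num_joints)

    def forward(self, x):
        feat = self.backbone(x)
        pose = self.pose_head(feat).view(-1, NUM_JOINTS, 3)
        presence_logits = self.presence_head(feat)
        return pose, presence_logits

# Usage: the presence probabilities can mask out regressed joints of absent parts.
net = PosePresenceNet()
img = torch.randn(1, 3, 224, 224)
pose, presence_logits = net(img)
mask = (torch.sigmoid(presence_logits) > 0.5).unsqueeze(-1)
visible_pose = pose * mask
print(pose.shape, presence_logits.shape, visible_pose.shape)
```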

Paper Details

Authors: Saeid Vosoughi and Maria A. Amer
Submitted On: 8 October 2018 - 6:20pm

Document Files

3D_humanPose_demo.mp4_.avi_.zip

[1] Saeid Vosoughi and Maria A. Amer, "Deep 3D Human Pose Estimation under Partial Body Presence", IEEE SigPort, 2018. [Online]. Available: http://sigport.org/3647. Accessed: Apr. 25, 2019.

Deep 3D Human Pose Estimation under Partial Body Presence


This paper addresses the problem of 3D human pose estimation when not all body parts are present in the input image, i.e., when some body joints are visible while others are fully absent (we exclude self-occlusion). State-of-the-art methods are not designed for, and are thus not effective in, such cases. We propose a deep CNN that regresses the human pose directly from an input image; we design and train this network to work under partial body presence. In parallel, we train a detection network to classify the presence or absence of each of the main body joints in the input image.

Paper Details

Authors: Saeid Vosoughi and Maria A. Amer
Submitted On: 8 October 2018 - 6:09pm

Document Files

icip2018_3dpose_slides.pdf

[1] Saeid Vosoughi and Maria A. Amer, "Deep 3D Human Pose Estimation under Partial Body Presence", IEEE SigPort, 2018. [Online]. Available: http://sigport.org/3646. Accessed: Apr. 25, 2019.

ROBUST SCORING AND RANKING OF OBJECT TRACKING TECHNIQUES


Object tracking is an active research area and numerous techniques have been proposed recently. To evaluate a new tracker, its performance is compared against existing ones, typically by averaging its quality, based on a performance measure, over all test video sequences. Such averaging is, however, not representative as it does not account for outliers (or similarities) between trackers. This paper presents a framework for scoring and ranking of trackers using uncorrelated quality metrics (overlap ratio and failure rate), coupled
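
The two uncorrelated quality metrics named here (overlap ratio and failure rate) are standard in tracking evaluation: per-frame intersection-over-union between predicted and ground-truth boxes, and the count of frames in which the tracker loses the target. The sketch below computes both for a single sequence; the input format and the zero-overlap failure criterion are simplifying assumptions, not the paper's scoring framework.

```python
# Illustrative per-sequence computation of the two metrics named in the
# abstract (average overlap ratio and failure rate); the zero-overlap failure
# criterion is a simplifying assumption, not the paper's definition.
import numpy as np

def overlap_ratio(pred, gt):
    """IoU between two boxes given as [x1, y1, x2, y2]."""
    x1, y1 = max(pred[0], gt[0]), max(pred[1], gt[1])
    x2, y2 = min(pred[2], gt[2]), min(pred[3], gt[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_p = (pred[2] - pred[0]) * (pred[3] - pred[1])
    area_g = (gt[2] - gt[0]) * (gt[3] - gt[1])
    return inter / (area_p + area_g - inter + 1e-9)

def sequence_quality(pred_boxes, gt_boxes):
    ious = np.array([overlap_ratio(p, g) for p, g in zip(pred_boxes, gt_boxes)])
    avg_overlap = float(ious.mean())
    failures = int((ious == 0.0).sum())   # frames where the tracker lost the target
    failure_rate = failures / len(ious)   # failures per frame
    return avg_overlap, failure_rate

# Hypothetical three-frame sequence.
pred = [[10, 10, 50, 50], [12, 11, 52, 51], [200, 200, 240, 240]]
gt   = [[11, 10, 51, 50], [13, 12, 53, 52], [15, 14, 55, 54]]
print(sequence_quality(pred, gt))
```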

Paper Details

Authors: Tarek Ghoniemy, Julien Valognes, and Maria A. Amer
Submitted On: 8 October 2018 - 6:00pm

Document Files

icip18_RankingPaper_Slides.pdf

[1] Tarek Ghoniemy, Julien Valognes, and Maria A. Amer, "ROBUST SCORING AND RANKING OF OBJECT TRACKING TECHNIQUES", IEEE SigPort, 2018. [Online]. Available: http://sigport.org/3645. Accessed: Apr. 25, 2019.
