ICIP 2019

The International Conference on Image Processing (ICIP), sponsored by the IEEE Signal Processing Society, is the premier forum for presenting technological advances and research results in theoretical, experimental, and applied image and video processing. Held annually since 1994, ICIP brings together leading engineers and scientists in image and video processing from around the world.

Learning Lightweight Pedestrian Detector with Hierarchical Knowledge Distillation

Paper Details

Submitted On: 27 September 2019

Document Files

RuiChen_ICIP19_Oral.pdf

[1] , "Learning Lightweight Pedestrian Detector with Hierarchical Knowledge Distillation", IEEE SigPort, 2019. [Online]. Available: http://sigport.org/4848. Accessed: Nov. 13, 2019.

RRPN: Radar Region Proposal Network for Object Detection in Autonomous Vehicles

Paper Details

Authors: Hairong Qi
Submitted On: 26 September 2019

Document Files

RRPN_presentation.pdf

[1] Hairong Qi, "RRPN: Radar Region Proposal Network for Object Detection in Autonomous Vehicles", IEEE SigPort, 2019. [Online]. Available: http://sigport.org/4843. Accessed: Nov. 13, 2019.

When Spatially-Variant Filtering Meets Low-Rank Regularization: Exploiting Non-Local Similarity for Single Image Interpolation


This paper combines spatially-variant filtering and non-local low-rank regularization (NLR) to exploit non-local similarity in natural images for single image interpolation. We propose a carefully designed spatially-variant, non-local filtering scheme that generates a reliable estimate of the interpolated image, and use NLR to refine that estimate. Our method relies on a simple, parallelizable algorithm and does not require solving complicated optimization problems.
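
As a rough sketch only, and not the authors' implementation, the two stages described above can be pictured as (1) a spatially-variant, non-local filter in which each output pixel is a weighted average of pixels whose surrounding patches look similar, followed by (2) singular-value soft-thresholding of a matrix of grouped similar patches, a common form of non-local low-rank regularization. The patch size, search window, filter strength h, and threshold tau below are illustrative parameters.

import numpy as np

def nonlocal_filter(img, patch=5, search=11, h=10.0):
    # Spatially-variant, non-local weighted filtering: every output pixel is a
    # weighted average of nearby pixels whose surrounding patches are similar.
    r, pad = patch // 2, search // 2
    padded = np.pad(img.astype(np.float64), pad + r, mode="reflect")
    out = np.zeros(img.shape, dtype=np.float64)
    for y in range(img.shape[0]):
        for x in range(img.shape[1]):
            yc, xc = y + pad + r, x + pad + r
            ref = padded[yc - r:yc + r + 1, xc - r:xc + r + 1]
            weights, values = [], []
            for dy in range(-pad, pad + 1):
                for dx in range(-pad, pad + 1):
                    cand = padded[yc + dy - r:yc + dy + r + 1,
                                  xc + dx - r:xc + dx + r + 1]
                    weights.append(np.exp(-np.mean((ref - cand) ** 2) / (h * h)))
                    values.append(padded[yc + dy, xc + dx])
            w = np.asarray(weights)
            out[y, x] = np.dot(w, values) / w.sum()
    return out

def low_rank_refine(patch_group, tau=5.0):
    # Columns of patch_group are vectorized similar patches; soft-thresholding
    # the singular values enforces the low-rank (non-local similarity) prior.
    U, s, Vt = np.linalg.svd(patch_group, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

How the missing pixels are initialized, how patches are grouped, and how the two stages are combined follow the paper itself; the snippet only fixes the vocabulary.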

Paper Details

Submitted On: 25 September 2019

Document Files

When spatially-variant filtering meets low rank approximation.pdf

[1] , "When Spatially-Variant Filtering Meets Low-Rank Regularization: Exploiting Non-Local Similarity for Single Image Interpolation", IEEE SigPort, 2019. [Online]. Available: http://sigport.org/4842. Accessed: Nov. 13, 2019.

LEVERAGING THE DISCRETE COSINE BASIS FOR BETTER MOTION MODELLING IN HIGHLY TEXTURED VIDEO SEQUENCES


Motion modelling plays a central role in video compression. This role is even more critical in highly textured video sequences, where a small error can produce large residuals that are costly to compress. While the translational motion model employed by existing coding standards such as HEVC is sufficient in most cases, higher-order models are beneficial; for this reason, the upcoming video coding standard, VVC, employs a 4-parameter affine model.
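
For context, the 4-parameter affine model mentioned above (zoom, rotation, and translation, as adopted in VVC) expresses the motion vector at pixel (x, y) as

v_x(x,y) = a\,x - b\,y + c, \qquad v_y(x,y) = b\,x + a\,y + d,

with parameters (a, b, c, d); setting a = b = 0 recovers the purely translational model of HEVC. One plausible reading of the title, stated here as an assumption since the abstract is truncated, is that the motion field is instead expanded in a two-dimensional discrete cosine basis,

\mathbf{v}(x,y) \;\approx\; \sum_{k=0}^{K-1} c_k\,\phi_k(x,y),

where the \phi_k are 2D discrete cosine basis functions and the coefficients c_k are estimated per block or region, allowing smoothly varying, higher-order motion than the affine model.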

Paper Details

Authors: Ashek Ahmmed, Aous Naman, Mark Pickering
Submitted On: 24 September 2019

Document Files

LEVERAGING THE DISCRETE COSINE BASIS FOR BETTER MOTION MODELLING IN HIGHLY TEXTURED VIDEO SEQUENCES

[1] Ashek Ahmmed, Aous Naman, Mark Pickering, "LEVERAGING THE DISCRETE COSINE BASIS FOR BETTER MOTION MODELLING IN HIGHLY TEXTURED VIDEO SEQUENCES", IEEE SigPort, 2019. [Online]. Available: http://sigport.org/4833. Accessed: Nov. 13, 2019.

TOWARDS MODELLING OF VISUAL SALIENCY IN POINT CLOUDS FOR IMMERSIVE APPLICATIONS


Modelling human visual attention is of great importance in the field of computer vision and has been widely explored for 3D imaging. Yet, in the absence of ground-truth data, it is unclear whether such predictions align with actual human viewing behavior in virtual reality environments. In this study, we work towards solving this problem by conducting an eye-tracking experiment in an immersive 3D scene that offers six degrees of freedom. A wide range of static point cloud models is inspected by human subjects, while their gaze is captured in real time.
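
As a hedged illustration only, and not necessarily the authors' procedure, gaze recordings of this kind are often turned into a per-point saliency estimate by letting each gaze hit vote for nearby points with a Gaussian weight; the function below assumes the gaze rays have already been intersected with the model, and sigma is an illustrative bandwidth.

import numpy as np

def fixation_density(points, gaze_hits, sigma=0.02):
    # points: (N, 3) point cloud; gaze_hits: (M, 3) locations where gaze rays
    # intersected the model. Each hit contributes a Gaussian vote to nearby
    # points, yielding a normalized per-point saliency estimate in [0, 1].
    density = np.zeros(len(points))
    for hit in gaze_hits:
        d2 = np.sum((points - hit) ** 2, axis=1)
        density += np.exp(-d2 / (2.0 * sigma ** 2))
    return density / (density.max() + 1e-12)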

Paper Details

Authors: Evangelos Alexiou, Peisen Xu, Touradj Ebrahimi
Submitted On: 24 September 2019

Document Files

2019-ICIP-presentation.pdf

[1] Evangelos Alexiou, Peisen Xu, Touradj Ebrahimi, "TOWARDS MODELLING OF VISUAL SALIENCY IN POINT CLOUDS FOR IMMERSIVE APPLICATIONS", IEEE SigPort, 2019. [Online]. Available: http://sigport.org/4831. Accessed: Nov. 13, 2019.

STEADIFACE: REAL-TIME FACE-CENTRIC STABILIZATION ON MOBILE PHONES


We present Steadiface, a new real-time, face-centric video stabilization method that simultaneously removes hand shake and keeps the subject's head stable. A CNN estimates the face landmarks, which are then used to optimize a stabilized head center. We formulate an optimization problem that finds a virtual camera pose placing the face at the stabilized head center while retaining smooth rotation and translation transitions across frames. We evaluate the proposed method on field-test videos and show that it stabilizes both the head motion and the background.
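
As a minimal sketch of the face-centric idea, and not the Steadiface implementation itself, the pipeline can be pictured as reducing the per-frame landmarks to a head center, low-pass filtering that trajectory, and shifting each frame so the observed center lands on the smoothed one. The exponential moving average and the pure-translation correction below stand in for the paper's richer virtual-camera-pose optimization; all names and the constant alpha are illustrative.

import numpy as np

def head_center(landmarks):
    # landmarks: (N, 2) array of 2D face landmark positions in one frame.
    return landmarks.mean(axis=0)

def smooth_trajectory(centers, alpha=0.9):
    # Exponential moving average of head centers over time, a minimal
    # stand-in for a smoothness-constrained camera-pose optimization.
    smoothed = [np.asarray(centers[0], dtype=np.float64)]
    for c in centers[1:]:
        smoothed.append(alpha * smoothed[-1] + (1.0 - alpha) * np.asarray(c))
    return np.stack(smoothed)

def stabilizing_shift(observed_center, smoothed_center):
    # Pure-translation correction that moves the observed head center onto
    # the smoothed one; a full system would also solve for rotation.
    return smoothed_center - observed_center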

Paper Details

Authors: Fuhao Shi, Sung-Fang Tsai, Youyou Wang, Chia-Kai Liang
Submitted On: 24 September 2019

Document Files

steadiface_poster.pdf

[1] Fuhao Shi, Sung-Fang Tsai, Youyou Wang, Chia-Kai Liang, "STEADIFACE: REAL-TIME FACE-CENTRIC STABILIZATION ON MOBILE PHONES", IEEE SigPort, 2019. [Online]. Available: http://sigport.org/4830. Accessed: Nov. 13, 2019.

VARIATIONAL REGULARIZED TRANSMISSION REFINEMENT FOR IMAGE DEHAZING


High-quality dehazing performance depends heavily on accurate estimation of the transmission map. In this work, a coarse estimate is first obtained by a weighted fusion of two transmission maps generated from the foreground and sky regions, respectively. A hybrid variational model with promoted regularization terms is then proposed to assist in refining the transmission map. The resulting complicated optimization problem is solved effectively via an alternating direction algorithm.
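
For background, transmission-based dehazing of this kind builds on the standard atmospheric scattering model, with I the hazy observation, J the scene radiance, A the global atmospheric light, and t the transmission map; once t is refined, the scene is recovered by inverting the model, with a small lower bound t_0 for numerical stability:

I(x) = J(x)\,t(x) + A\,\bigl(1 - t(x)\bigr), \qquad J(x) = \frac{I(x) - A}{\max\bigl(t(x),\, t_0\bigr)} + A.

The quality of the recovered J therefore hinges directly on t, which is why the refinement of the fused transmission estimate is the focus of this work.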

Paper Details

Authors: Qiaoling Shu, Chuansheng Wu, Zhe Xiao
Submitted On: 24 September 2019

Document Files

Eposter_WHUT.pdf

[1] Qiaoling Shu, Chuansheng Wu, Zhe Xiao, "VARIATIONAL REGULARIZED TRANSMISSION REFINEMENT FOR IMAGE DEHAZING", IEEE SigPort, 2019. [Online]. Available: http://sigport.org/4829. Accessed: Nov. 13, 2019.

Saliency Driven Perceptual Quality Metric for Omnidirectional Visual Content - Slides


The problem of objectively measuring the perceptual quality of omnidirectional visual content arises in many immersive imaging applications, and particularly in compression. The interactive nature of this type of content limits the performance of earlier methods designed for static images or for video with predefined dynamics. This non-deterministic behavior must be addressed with a statistical approach. One way to describe, analyze, and predict viewer interactions in omnidirectional imaging is through the estimation of visual attention.
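
As a hedged sketch of what "saliency driven" can mean in practice, and not the metric proposed in the paper, a simple construction weights per-pixel errors by an estimated visual-attention map before pooling; the 255 peak value assumes 8-bit content, and all names are illustrative.

import numpy as np

def saliency_weighted_psnr(reference, distorted, saliency, eps=1e-12):
    # reference, distorted: float arrays of equal shape (values in 0..255);
    # saliency: non-negative visual-attention map of the same spatial shape.
    w = saliency / (saliency.sum() + eps)            # normalize the weights
    wmse = np.sum(w * (reference - distorted) ** 2)  # saliency-weighted MSE
    return 10.0 * np.log10(255.0 ** 2 / (wmse + eps))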

Paper Details

Authors: Evgeniy Upenik, Touradj Ebrahimi
Submitted On: 23 September 2019

Document Files

Saliency360_Slides.pdf

[1] Evgeniy Upenik, Touradj Ebrahimi, "Saliency Driven Perceptual Quality Metric for Omnidirectional Visual Content - Slides", IEEE SigPort, 2019. [Online]. Available: http://sigport.org/4828. Accessed: Nov. 13, 2019.

Dataset Culling: Towards Efficient Training of Distillation-based Domain Specific Models

Paper Details

Authors: Edward Lee, Simon Wong, Mark Horowitz
Submitted On: 23 September 2019

Document Files

icip_1623_presentation.pdf

[1] Edward Lee, Simon Wong, Mark Horowitz, "Dataset Culling: Towards Efficient Training of Distillation-based Domain Specific Models", IEEE SigPort, 2019. [Online]. Available: http://sigport.org/4827. Accessed: Nov. 13, 2019.

Sinogram Image Completion for Limited Angle Tomography with Generative Adversarial Networks

Paper Details

Authors: Xiaogang Yang, Mark Wolfman, Doga Gursoy, and Aggelos K. Katsaggelos
Submitted On: 23 September 2019

Document Files

sinogram_completion_ICIP2019.pdf

[1] Xiaogang Yang, Mark Wolfman, Doga Gursoy, and Aggelos K. Katsaggelos, "Sinogram Image Completion for Limited Angle Tomography with Generative Adversarial Networks", IEEE SigPort, 2019. [Online]. Available: http://sigport.org/4826. Accessed: Nov. 13, 2019.
