Image/Video Processing

MARGIN-EMBEDDING CANONICAL CORRELATION ANALYSIS WITH FEATURE SELECTION FOR PERSON RE-IDENTIFICATION


Canonical correlation analysis (CCA) is a classical subspace learning method for capturing the common semantic information underlying multi-view data. It has been applied to person re-identification (re-ID) by treating the task of matching identical individuals across non-overlapping cameras as a multi-view learning problem. However, CCA-based re-ID methods still achieve unsatisfactory results because few of them jointly consider discriminative margin information and the selection of the most relevant features.
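For readers unfamiliar with the baseline, the sketch below runs plain CCA on synthetic two-view data with scikit-learn. It only illustrates the standard method the abstract refers to, not the proposed margin-embedding variant with feature selection; all dimensions and variable names are made up.

```python
# Minimal sketch of plain CCA on synthetic two-view data (illustrative only;
# not the paper's margin-embedding variant with feature selection).
import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(0)

# Two "views" of the same 100 identities, e.g. features from camera A and camera B:
# a shared latent signal plus view-specific noise.
latent = rng.normal(size=(100, 5))
view_a = latent @ rng.normal(size=(5, 60)) + 0.1 * rng.normal(size=(100, 60))
view_b = latent @ rng.normal(size=(5, 80)) + 0.1 * rng.normal(size=(100, 80))

# Project both views into a common subspace where their correlation is maximized.
cca = CCA(n_components=5)
za, zb = cca.fit_transform(view_a, view_b)

# Matching across cameras could then be done by nearest neighbours in that subspace.
per_dim_corr = [np.corrcoef(za[:, k], zb[:, k])[0, 1] for k in range(5)]
print("canonical correlations (approx.):", np.round(per_dim_corr, 3))
```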

Paper Details

Authors:
Linfei Ma, Xiang Zhang, Long Lan, Xuhui Huang, Zhigang Luo
Submitted On:
5 October 2018 - 1:09pm

Document Files

Ma - ICIP 2018 - Poster.pdf

Cite

[1] Linfei Ma, Xiang Zhang, Long Lan, Xuhui Huang, Zhigang Luo, "MARGIN-EMBEDDING CANONICAL CORRELATION ANALYSIS WITH FEATURE SELECTION FOR PERSON RE-IDENTIFICATION", IEEE SigPort, 2018. [Online]. Available: http://sigport.org/3548.

OBJECT GEOLOCATION USING MRF BASED MULTI-SENSOR FUSION


Abundant image and sensory data collected over the last decades represents an invaluable source of information for cataloging and monitoring the environment. Fusing heterogeneous data sources is a challenging but promising way to leverage such information efficiently. In this work we propose a pipeline for the automatic detection and geolocation of recurring stationary objects in a fusion scenario combining street-level imagery and LiDAR point-cloud data. The objects are geolocated coherently using a fusion procedure formalized as a Markov random field problem.
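As a rough illustration of what "fusion formalized as a Markov random field problem" can look like, the toy sketch below labels object candidates from two hypothetical sensors by minimizing a unary-plus-pairwise energy with iterated conditional modes. The scores, weights, thresholds and solver are illustrative assumptions, not the authors' pipeline.

```python
# Toy MRF-style fusion of object candidates from two sensors (illustrative only;
# the scores, weights and ICM solver are assumptions, not the paper's method).
import numpy as np

rng = np.random.default_rng(1)

# Candidate geolocations (x, y in metres) and detection scores in [0, 1]
# from "imagery" and "LiDAR"; near-duplicates describe the same physical object.
candidates = rng.uniform(0, 50, size=(12, 2))
score_img = rng.uniform(0.3, 1.0, size=12)
score_lidar = rng.uniform(0.3, 1.0, size=12)

def energy(labels, lam=2.0, min_dist=3.0):
    """labels[i] = 1 keeps candidate i as a real object, 0 discards it."""
    # Unary term: keeping a candidate is cheap when both sensors give it a high score.
    unary = np.sum(labels * (2.0 - score_img - score_lidar)
                   + (1 - labels) * (score_img + score_lidar))
    # Pairwise term: penalise keeping two candidates that are suspiciously close.
    pair = 0.0
    for i in range(len(labels)):
        for j in range(i + 1, len(labels)):
            if labels[i] and labels[j]:
                d = np.linalg.norm(candidates[i] - candidates[j])
                if d < min_dist:
                    pair += lam * (min_dist - d)
    return unary + pair

# Iterated conditional modes: greedily change one label at a time while energy decreases.
labels = np.ones(12, dtype=int)
for _ in range(10):
    changed = False
    for i in range(12):
        for value in (0, 1):
            trial = labels.copy()
            trial[i] = value
            if energy(trial) < energy(labels):
                labels, changed = trial, True
    if not changed:
        break

print("kept candidates:", np.flatnonzero(labels))
```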

Paper Details

Authors:
Vladimir A. Krylov, Rozenn Dahyot
Submitted On:
5 October 2018 - 12:36pm

Document Files

krylovICIP18poster.pdf

Cite

[1] Vladimir A. Krylov, Rozenn Dahyot, "OBJECT GEOLOCATION USING MRF BASED MULTI-SENSOR FUSION", IEEE SigPort, 2018. [Online]. Available: http://sigport.org/3546.

Single-image Rain Removal Using Deep Residual Learning


Most outdoor vision systems are affected by rainy weather conditions. In this paper, we address the problem of removing rain from a single image. Some existing de-raining methods suffer from hue changes because they neglect the information in the low-frequency layer; others fail because they do not assume sufficiently rich rainy-image models. To address these issues, we propose a deep residual network architecture called ResDerainNet. Based on a deep convolutional neural network (CNN), we learn the mapping relationship between rainy images and residual images from data.
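The residual-learning idea described above can be sketched in a few lines of PyTorch: a small CNN predicts the rain-streak (residual) layer and the de-rained estimate is the input minus that prediction. The layer count and widths below are arbitrary placeholders, not the authors' ResDerainNet configuration.

```python
# Generic residual de-raining sketch in PyTorch (illustrative; layer sizes are
# arbitrary and this is not the authors' exact ResDerainNet architecture).
import torch
import torch.nn as nn

class ResidualDerain(nn.Module):
    def __init__(self, channels=3, features=32, depth=5):
        super().__init__()
        layers = [nn.Conv2d(channels, features, 3, padding=1), nn.ReLU(inplace=True)]
        for _ in range(depth - 2):
            layers += [nn.Conv2d(features, features, 3, padding=1), nn.ReLU(inplace=True)]
        layers += [nn.Conv2d(features, channels, 3, padding=1)]
        self.body = nn.Sequential(*layers)

    def forward(self, rainy):
        residual = self.body(rainy)   # predicted rain-streak layer
        return rainy - residual       # de-rained estimate

# Training would minimise e.g. the MSE between the predicted residual and
# (rainy - clean); here we just run a forward pass on a dummy batch.
model = ResidualDerain()
dummy = torch.rand(1, 3, 64, 64)
print(model(dummy).shape)  # torch.Size([1, 3, 64, 64])
```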

Paper Details

Authors:
Submitted On:
5 October 2018 - 9:53am

Document Files

ICIP2018_poster.pdf

[1] , "Single-image Rain Removal Using Deep Residual Learning", IEEE SigPort, 2018. [Online]. Available: http://sigport.org/3540. Accessed: Apr. 26, 2019.
@article{3540-18,
url = {http://sigport.org/3540},
author = { },
publisher = {IEEE SigPort},
title = {Single-image Rain Removal Using Deep Residual Learning},
year = {2018} }
TY - EJOUR
T1 - Single-image Rain Removal Using Deep Residual Learning
AU -
PY - 2018
PB - IEEE SigPort
UR - http://sigport.org/3540
ER -
. (2018). Single-image Rain Removal Using Deep Residual Learning. IEEE SigPort. http://sigport.org/3540
, 2018. Single-image Rain Removal Using Deep Residual Learning. Available at: http://sigport.org/3540.
. (2018). "Single-image Rain Removal Using Deep Residual Learning." Web.
1. . Single-image Rain Removal Using Deep Residual Learning [Internet]. IEEE SigPort; 2018. Available from : http://sigport.org/3540

MULTI-EXPOSURE IMAGE FUSION BASED ON INFORMATION-THEORETIC CHANNEL


We propose a method for multi-exposure image fusion based on an information-theoretic channel. In the fusion scheme, conditional entropy, which measures the information each pixel in one image carries about the other image, is calculated through an information channel built between the two source images, and weight maps of the source images are then generated. Following a pyramid scheme, the images at every scale are fused according to the weight maps, and the final fused image is reconstructed by inverting the pyramid.
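A hedged sketch of the information-channel step: build a joint histogram of two exposures, compute the conditional entropy of one image given each grey level of the other, and map it back to pixels as a crude weight. The binning, the entropy-to-weight mapping, and the omission of the pyramid blending are illustrative simplifications, not the paper's exact scheme.

```python
# Sketch: per-pixel weights from the conditional entropy of an information
# channel between two exposures (illustrative; binning and the entropy-to-weight
# mapping are assumptions, and the pyramid blending stage is omitted).
import numpy as np

def conditional_entropy_weights(img_a, img_b, bins=32):
    """Return a weight map for img_a based on H(B | A = a) per grey level of A."""
    a = np.clip((img_a * (bins - 1)).astype(int), 0, bins - 1)
    b = np.clip((img_b * (bins - 1)).astype(int), 0, bins - 1)

    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins,
                                 range=[[0, bins], [0, bins]])
    joint /= joint.sum()
    p_a = joint.sum(axis=1, keepdims=True)                       # marginal of A
    cond = np.divide(joint, p_a, out=np.zeros_like(joint), where=p_a > 0)

    # H(B | A = a) for each grey level a of image A (0 log 0 treated as 0).
    logc = np.zeros_like(cond)
    np.log2(cond, out=logc, where=cond > 0)
    h = -np.sum(cond * logc, axis=1)

    # Grey levels that carry more "surprise" about the other exposure get larger
    # weights here; the real method derives its weights more carefully.
    return h[a]

rng = np.random.default_rng(0)
under = rng.random((64, 64)) * 0.4                       # stand-in under-exposed image
over = np.clip(under * 2.5 + 0.05 * rng.random((64, 64)), 0, 1)  # stand-in over-exposed image
w = conditional_entropy_weights(under, over)
print(w.shape, round(float(w.min()), 3), round(float(w.max()), 3))
```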

Paper Details

Authors:
Qi Zhao, Mateu Sbert, Miquel Feixas, Qing Xu
Submitted On:
5 October 2018 - 7:41am

Document Files

ICIP2018_poster_new.pdf

Cite

[1] Qi Zhao, Mateu Sbert, Miquel Feixas, Qing Xu, "MULTI-EXPOSURE IMAGE FUSION BASED ON INFORMATION-THEORETIC CHANNEL", IEEE SigPort, 2018. [Online]. Available: http://sigport.org/3534.

Rate-Distortion Theory for Affine Global Motion Compensation in Video Coding


In this work, we derive the rate-distortion function for video coding using affine global motion compensation. We model the displacement estimation error during motion estimation and obtain the bit rate by applying rate-distortion theory. We assume that the displacement estimation error is caused by a perturbed affine transformation: the six affine transformation parameters are assumed to be statistically independent, each with a zero-mean, Gaussian-distributed estimation error.
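One plausible way to write down the assumed model (the notation below is ours, not necessarily the paper's): the estimated affine parameters differ from the true ones by six independent zero-mean Gaussian errors, which induce the displacement estimation error at each pixel position.

```latex
% Assumed notation (illustrative): affine global motion and its perturbed estimate.
\[
\begin{aligned}
\begin{pmatrix} x' \\ y' \end{pmatrix}
  &= \begin{pmatrix} a_1 & a_2 \\ a_4 & a_5 \end{pmatrix}
     \begin{pmatrix} x \\ y \end{pmatrix}
   + \begin{pmatrix} a_3 \\ a_6 \end{pmatrix}, \\[4pt]
\hat{a}_i &= a_i + \epsilon_i, \qquad
  \epsilon_i \sim \mathcal{N}(0, \sigma_i^2), \quad i = 1,\dots,6,
  \quad \epsilon_1,\dots,\epsilon_6 \text{ mutually independent}, \\[4pt]
\Delta(x, y)
  &= \begin{pmatrix} \hat{x}' - x' \\ \hat{y}' - y' \end{pmatrix}
   = \begin{pmatrix} \epsilon_1 x + \epsilon_2 y + \epsilon_3 \\
                     \epsilon_4 x + \epsilon_5 y + \epsilon_6 \end{pmatrix}.
\end{aligned}
\]
```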

Paper Details

Authors:
Holger Meuel, Stephan Ferenz, Yiqun Liu, Jörn Ostermann
Submitted On:
16 October 2018 - 4:45am

Document Files

Meuel_et_al_Rate-Distortion_Theory_for_Affine_Global_Motion_Compensation_in_Video_Coding_ICIP2018_POSTER.pdf

Cite

[1] Holger Meuel, Stephan Ferenz, Yiqun Liu, Jörn Ostermann, "Rate-Distortion Theory for Affine Global Motion Compensation in Video Coding", IEEE SigPort, 2018. [Online]. Available: http://sigport.org/3532.

Image fusion of X-ray and electron tomograms

Paper Details

Authors:
Bernd Rieger
Submitted On:
5 October 2018 - 5:26am

Document Files

ICIP-Yan.pdf

Cite

[1] Bernd Rieger, "Image fusion of X-ray and electron tomograms", IEEE SigPort, 2018. [Online]. Available: http://sigport.org/3526.

OBJECTNESS-AWARE TRACKING VIA DOUBLE-LAYER MODEL


Prediction drift toward non-object background is a critical issue in conventional correlation filter (CF) based trackers. The key idea of this paper is a double-layer model that addresses this problem. Specifically, the first layer is a CF tracker, which is employed to predict a rough position of the target, and the objectness layer, regarded as the second layer, is utilized to reveal the object characteristics of the predicted target.
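To make the two-layer flow concrete, the toy sketch below uses a plain correlation search as a stand-in for the CF layer and gradient energy inside the predicted box as a stand-in objectness score. Both measures and all sizes are illustrative assumptions, not the authors' model.

```python
# Toy two-layer tracking step (illustrative; the correlation and objectness
# measures are simplistic stand-ins, not the paper's CF tracker or model).
import numpy as np

def first_layer_rough_position(frame, template, prev_xy, search=15):
    """Layer 1: correlation over a local search window -> rough target position."""
    th, tw = template.shape
    px, py = prev_xy
    best, best_xy = -np.inf, prev_xy
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = py + dy, px + dx
            if y < 0 or x < 0:
                continue
            patch = frame[y:y + th, x:x + tw]
            if patch.shape != template.shape:
                continue
            score = np.sum((patch - patch.mean()) * (template - template.mean()))
            if score > best:
                best, best_xy = score, (x, y)
    return best_xy

def second_layer_objectness(frame, xy, size):
    """Layer 2: crude objectness = mean gradient energy inside the predicted box."""
    x, y = xy
    box = frame[y:y + size[0], x:x + size[1]].astype(float)
    gy, gx = np.gradient(box)
    return float(np.mean(np.hypot(gx, gy)))

# Synthetic frame with a bright textured square as the "target".
rng = np.random.default_rng(0)
frame = rng.random((120, 120)) * 0.1
frame[50:70, 60:80] += rng.random((20, 20))
template = frame[50:70, 60:80].copy()

rough = first_layer_rough_position(frame, template, prev_xy=(58, 48))
obj = second_layer_objectness(frame, rough, template.shape)
print("rough position:", rough, "objectness:", round(obj, 3))
# A tracker would only accept the rough position (and update its model) when the
# objectness score is high enough, falling back to the previous state otherwise.
```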

Paper Details

Authors:
Jianxiang Ma, Anlong Ming, Yu Zhou
Submitted On:
5 October 2018 - 4:42am

Document Files

ICIP 2018 Poster #2492: OBJECTNESS-AWARE TRACKING VIA DOUBLE-LAYER MODEL

Cite

[1] Jianxiang Ma, Anlong Ming, Yu Zhou, "OBJECTNESS-AWARE TRACKING VIA DOUBLE-LAYER MODEL", IEEE SigPort, 2018. [Online]. Available: http://sigport.org/3521.

Real-time Hyperspectral Stereo Processing for the Generation of 3D Depth Information

Paper Details

Authors:
Nina Heide, Christian Frese, Thomas Emter, Janko Petereit
Submitted On:
5 October 2018 - 4:15am

Document Files

Poster ICIP final rev.pdf

Cite

[1] Nina Heide, Christian Frese, Thomas Emter, Janko Petereit, "Real-time Hyperspectral Stereo Processing for the Generation of 3D Depth Information", IEEE SigPort, 2018. [Online]. Available: http://sigport.org/3519.

Unsupervised Detection of Periodic Segments in Videos


We present a solution to the problem of discovering all periodic segments of a video and of estimating their period in a completely unsupervised manner. These segments may be located anywhere in the video, may differ in duration, speed and period, and may represent unseen motion patterns of any type of objects (e.g., humans, animals, machines, etc.). The proposed method capitalizes on earlier research on the problem of detecting common actions in videos, also known as commonality detection or video co-segmentation. The proposed
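For intuition on the period-estimation part only (the commonality-detection machinery is not sketched here), the toy example below reads the dominant period of a 1-D motion signal off its autocorrelation. The signal and the peak-picking rule are illustrative assumptions, not the authors' method.

```python
# Toy period estimation from a 1-D motion signal via autocorrelation
# (for intuition only; not the paper's commonality-detection approach).
import numpy as np

rng = np.random.default_rng(0)
true_period = 24                      # frames per cycle (illustrative)
t = np.arange(600)
signal = np.sin(2 * np.pi * t / true_period) + 0.3 * rng.standard_normal(t.size)

x = signal - signal.mean()
acf = np.correlate(x, x, mode="full")[x.size - 1:]   # autocorrelation at lags >= 0
acf /= acf[0]

# Skip the lag-0 lobe: look for the strongest peak after the ACF first turns negative.
first_neg = int(np.argmax(acf < 0))
est = first_neg + int(np.argmax(acf[first_neg:first_neg + 200]))
print("estimated period:", est, "frames (true:", true_period, ")")
```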

Paper Details

Authors:
Costas Panagiotakis, Giorgos Karvounas, Antonis A. Argyros
Submitted On:
5 October 2018 - 4:02am

Document Files

Unsupervised Detection of Periodic Segments in Videos

Cite

[1] Costas Panagiotakis, Giorgos Karvounas, Antonis A. Argyros, "Unsupervised Detection of Periodic Segments in Videos", IEEE SigPort, 2018. [Online]. Available: http://sigport.org/3516.
