
Image/Video Processing

GRAPH-BASED EARLY-FUSION FOR FLOOD DETECTION


Flooding is one of the most harmful natural disasters, as it endangers both buildings and human lives. It is therefore essential to monitor these disasters in order to define prevention strategies and help authorities with damage control. With the widespread use of portable devices (e.g., smartphones), there has been an increase in the documentation and communication of flood events on social media. However, using these data in monitoring systems is not straightforward and depends on the creation of effective recognition strategies.

Paper Details

Authors:
Rafael de O. Werneck, Icaro C. Dourado, Samuel G. Fadel, Salvatore Tabbone, Ricardo da S. Torres
Submitted On:
4 October 2018 - 10:03am

Document Files

poster_icip_landscape.pdf (53 downloads)


[1] Rafael de O. Werneck, Icaro C. Dourado, Samuel G. Fadel, Salvatore Tabbone, Ricardo da S. Torres, "GRAPH-BASED EARLY-FUSION FOR FLOOD DETECTION", IEEE SigPort, 2018. [Online]. Available: http://sigport.org/3413. Accessed: Dec. 13, 2018.

Unpaired Image-to-Image Translation from Shared Deep Space


Unpaired image-to-image translation is a challenging task that aims to learn a mapping from one image collection to another without any paired examples. Recent works have proposed a cycle-consistency assumption to deal with this task. However, the results are still unsatisfactory for geometric translations. To address this limitation, this paper proposes a novel method using a shared deep space generative adversarial network (SDSGAN).
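
As a quick illustration of the cycle-consistency assumption mentioned above, the CycleGAN-style loss below penalizes the failure of A -> B -> A (and B -> A -> B) to recover the input. This is only a minimal sketch of that assumption, not the SDSGAN method of the paper; the toy generators, tensor shapes, and loss weight are placeholders.

import torch
import torch.nn as nn

def cycle_consistency_loss(G, F, real_a, real_b, weight=10.0):
    """L1 penalty encouraging A -> B -> A (and B -> A -> B) to recover the input."""
    l1 = nn.L1Loss()
    recon_a = F(G(real_a))   # translate A to B, then back to A
    recon_b = G(F(real_b))   # translate B to A, then back to B
    return weight * (l1(recon_a, real_a) + l1(recon_b, real_b))

if __name__ == "__main__":
    # Toy "generators": a single conv layer each, just to exercise the loss.
    G = nn.Conv2d(3, 3, kernel_size=3, padding=1)
    F = nn.Conv2d(3, 3, kernel_size=3, padding=1)
    a = torch.randn(2, 3, 64, 64)
    b = torch.randn(2, 3, 64, 64)
    print(cycle_consistency_loss(G, F, a, b).item())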

Paper Details

Authors:
Xuehui Wu, Jie Shao, Lianli Gao, Heng Tao Shen
Submitted On:
4 October 2018 - 9:44am

Document Files

Unpaired Image-to-Image Translation from Shared Deep Space.pdf (18 downloads)


[1] Xuehui Wu, Jie Shao, Lianli Gao, Heng Tao Shen, "Unpaired Image-to-Image Translation from Shared Deep Space", IEEE SigPort, 2018. [Online]. Available: http://sigport.org/3412. Accessed: Dec. 13, 2018.

Trained perceptual transform for quality assessment of high dynamic range images and video

Paper Details

Authors:
Submitted On:
4 October 2018 - 9:42am

Document Files

icip2018_poster_final.pdf (18 downloads)


[1] "Trained perceptual transform for quality assessment of high dynamic range images and video", IEEE SigPort, 2018. [Online]. Available: http://sigport.org/3411. Accessed: Dec. 13, 2018.

Fast and Parallel Computation of the Discrete Periodic Radon Transform on GPUs, multi-core CPUs and FPGAs


The Discrete Periodic Radon Transform (DPRT) has many important applications in reconstructing images from their projections and has recently been used in fast and scalable architectures for computing 2D convolutions. Unfortunately, the direct computation of the DPRT involves O(N^3) additions and memory accesses, which can be very costly on single-core architectures.
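
For reference, the sketch below computes the DPRT directly for an N x N image with N prime, under one common index convention (N wrap-around projections plus one projection of plain row sums); the exact convention and data layout in the paper may differ, so treat the indexing as illustrative. The three nested loops make the O(N^3) cost explicit.

import numpy as np

def dprt_direct(f):
    """Direct O(N^3) DPRT of an N x N image (N assumed prime), one common convention:
    N wrap-around projections plus one extra projection of plain row sums."""
    N = f.shape[0]
    R = np.zeros((N + 1, N), dtype=f.dtype)
    for m in range(N):                 # projection "angle"
        for d in range(N):             # displacement
            R[m, d] = sum(f[(d + m * j) % N, j] for j in range(N))
    R[N, :] = f.sum(axis=1)            # extra projection: row sums
    return R

if __name__ == "__main__":
    img = np.arange(49, dtype=float).reshape(7, 7)   # 7 is prime
    print(dprt_direct(img).shape)                    # (8, 7): N + 1 projections of length N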

Paper Details

Authors:
Cesar Carranza, Daniel Llamocca, Marios Pattichis
Submitted On:
4 October 2018 - 9:39am

Document Files

poster2560 Final.pdf (10 downloads)


[1] Cesar Carranza, Daniel Llamocca, Marios Pattichis, "Fast and Parallel Computation of the Discrete Periodic Radon Transform on GPUs, multi-core CPUs and FPGAs", IEEE SigPort, 2018. [Online]. Available: http://sigport.org/3410. Accessed: Dec. 13, 2018.

GLOBAL FOR COARSE AND PART FOR FINE: A HIERARCHICAL ACTION RECOGNITION FRAMEWORK

Paper Details

Authors:
Weiwei Liu, Chongyang Zhang, Jiaying Zhang, Zhonghao Wu
Submitted On:
4 October 2018 - 9:44am

Document Files

poster_icip2018.pdf (83 downloads)


[1] Weiwei Liu, Chongyang Zhang, Jiaying Zhang, Zhonghao Wu, "GLOBAL FOR COARSE AND PART FOR FINE: A HIERARCHICAL ACTION RECOGNITION FRAMEWORK", IEEE SigPort, 2018. [Online]. Available: http://sigport.org/3409. Accessed: Dec. 13, 2018.

Fast 2D Convolutions and Cross-Correlations Using Scalable Architectures


The manuscript describes fast and scalable architectures and associated algorithms for computing convolutions and cross-correlations. The basic idea is to map 2D convolutions and cross-correlations to a collection of 1D convolutions and cross-correlations in the transform domain. This is accomplished through the use of the Discrete Periodic Radon Transform (DPRT) for general kernels and the use of SVD-LU decompositions for low-rank kernels.
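
To make the low-rank case concrete, the sketch below uses an SVD of the kernel to replace a 2D convolution with a short sum of separable row/column 1D convolutions. It is a generic NumPy/SciPy illustration of the low-rank idea only, not the SVD-LU based hardware architecture of the paper; the function name and rank parameter are placeholders.

import numpy as np
from scipy.signal import convolve, convolve2d

def lowrank_conv2d(image, kernel, rank):
    """Approximate conv2d(image, kernel) by a sum of `rank` separable 1D convolutions,
    using the SVD of the kernel (exact when rank >= the matrix rank of the kernel)."""
    U, s, Vt = np.linalg.svd(kernel)
    H, W = image.shape
    kh, kw = kernel.shape
    out = np.zeros((H + kh - 1, W + kw - 1))
    for i in range(rank):
        col = np.sqrt(s[i]) * U[:, i]    # 1D filter applied down the columns
        row = np.sqrt(s[i]) * Vt[i, :]   # 1D filter applied along the rows
        tmp = np.apply_along_axis(lambda c: convolve(c, col, mode="full"), 0, image)
        out += np.apply_along_axis(lambda r: convolve(r, row, mode="full"), 1, tmp)
    return out

if __name__ == "__main__":
    img = np.random.rand(32, 32)
    ker = np.outer([1.0, 2.0, 1.0], [1.0, 0.0, -1.0])   # rank-1 (separable) kernel
    print(np.allclose(lowrank_conv2d(img, ker, rank=1),
                      convolve2d(img, ker, mode="full")))  # True for a rank-1 kernel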

Paper Details

Authors:
Cesar Carranza, Daniel Llamocca, Marios Pattichis
Submitted On:
4 October 2018 - 9:51am

Document Files

Poster3392 Final.pdf (20 downloads)


[1] Cesar Carranza, Daniel Llamocca, Marios Pattichis, "Fast 2D Convolutions and Cross-Correlations Using Scalable Architectures", IEEE SigPort, 2018. [Online]. Available: http://sigport.org/3402. Accessed: Dec. 13, 2018.

Motion Inpainting by an Image-Based Geodesic AMLE Method


This work presents an automatic method for optical flow inpainting. Given a video, each frame domain is endowed with a Riemannian metric based on the video pixel values. The missing optical flow is recovered by solving the Absolutely Minimizing Lipschitz Extension (AMLE) partial differential equation on the Riemannian manifold. An efficient numerical algorithm is proposed using eikonal operators for nonlinear elliptic partial differential equations on a finite graph.
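
The sketch below shows the kind of discrete AMLE iteration this involves: each missing pixel is repeatedly replaced by a distance-weighted average of its steepest-ascent and steepest-descent neighbours, with edge lengths that mix spatial distance and image contrast as a crude stand-in for an image-based Riemannian metric. It is a toy single-channel version with assumed weights, not the eikonal-operator algorithm of the paper.

import numpy as np

def amle_inpaint(u, mask, image, lam=10.0, iters=500):
    """Fill one flow channel u where mask is True with a discrete AMLE iteration:
    each hole pixel becomes the distance-weighted average of its steepest-ascent and
    steepest-descent neighbours. Edge lengths mix spatial distance and image contrast
    (an assumed, simplified stand-in for an image-based Riemannian metric)."""
    u = u.astype(float).copy()
    H, W = u.shape
    holes = np.argwhere(mask)
    for _ in range(iters):
        for y, x in holes:
            up = dn = None                     # (slope, neighbour value, edge length)
            for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                ny, nx = y + dy, x + dx
                if 0 <= ny < H and 0 <= nx < W:
                    dist = 1.0 + lam * abs(float(image[ny, nx]) - float(image[y, x]))
                    slope = (u[ny, nx] - u[y, x]) / dist
                    if up is None or slope > up[0]:
                        up = (slope, u[ny, nx], dist)
                    if dn is None or slope < dn[0]:
                        dn = (slope, u[ny, nx], dist)
            # Discrete infinity-Laplacian = 0  =>  u(x) = (d_- u_+ + d_+ u_-) / (d_+ + d_-)
            u[y, x] = (dn[2] * up[1] + up[2] * dn[1]) / (up[2] + dn[2])
    return u

if __name__ == "__main__":
    flow_u = np.tile(np.linspace(0.0, 1.0, 16), (16, 1))   # toy horizontal-flow channel
    mask = np.zeros((16, 16), dtype=bool)
    mask[6:10, 6:10] = True
    flow_u[mask] = 0.0                                      # "missing" region
    gray = np.ones((16, 16))                                # flat image => purely spatial metric
    recon = amle_inpaint(flow_u, mask, gray)
    ramp = np.tile(np.linspace(0.0, 1.0, 16), (16, 1))
    print(np.abs(recon - ramp)[mask].max())                 # error vs. the original ramp; should be small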

Paper Details

Authors:
Gloria Haro, Coloma Ballester
Submitted On:
4 October 2018 - 9:30am

Document Files

poster_icip18.pdf (22 downloads)


[1] Gloria Haro, Coloma Ballester, "Motion Inpainting by an Image-Based Geodesic AMLE Method", IEEE SigPort, 2018. [Online]. Available: http://sigport.org/3397. Accessed: Dec. 13, 2018.

GESTALT INTEREST POINTS WITH A NEURAL NETWORK FOR MAKEUP-ROBUST FACE RECOGNITION


In this paper, we propose a novel approach to makeup-robust face recognition. Most face recognition schemes fail to generalize well when there is a large difference between the training and testing sets, e.g., due to makeup changes. Our method focuses on the problem of determining whether face images taken before and after makeup refer to the same identity. Work on this fundamental research topic benefits various real-world applications, for example automated passport control, general security, and surveillance.
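
Verification problems of this kind ("do two face images show the same person?") are commonly decided by thresholding the similarity of learned face embeddings. The snippet below shows only that generic decision rule; the embedding network and the threshold value are hypothetical, and it is not the Gestalt-interest-point method proposed in the paper.

import numpy as np

def same_identity(emb_before, emb_after, threshold=0.5):
    """Decide whether two face embeddings (e.g., before/after makeup) belong to the
    same person by thresholding their cosine similarity. Both the embedding model
    producing these vectors and the threshold value are placeholders."""
    a = emb_before / np.linalg.norm(emb_before)
    b = emb_after / np.linalg.norm(emb_after)
    return float(np.dot(a, b)) >= threshold

if __name__ == "__main__":
    e1 = np.random.rand(128)          # stand-ins for embeddings of the two face images
    e2 = e1 + 0.05 * np.random.rand(128)
    print(same_identity(e1, e2))      # True: nearly identical embeddings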


Paper Details

Authors:
Horst Eidenberger
Submitted On:
4 October 2018 - 9:15am

Document Files

Poster_A1.pdf (20 downloads)


[1] Horst Eidenberger, "GESTALT INTEREST POINTS WITH A NEURAL NETWORK FOR MAKEUP-ROBUST FACE RECOGNITION", IEEE SigPort, 2018. [Online]. Available: http://sigport.org/3394. Accessed: Dec. 13, 2018.

DO DEEP-LEARNING SALIENCY MODELS REALLY MODEL SALIENCY?


Visual attention allows the human visual system to deal effectively with the huge flow of visual information acquired by the retina. Since the 2000s, the human visual system has been modelled in computer vision to predict abnormal, rare, and surprising data. Attention is the product of a continuous interaction between bottom-up (mainly feature-based) and top-down (mainly learning-based) information. Deep learning (DNNs) is now well established in visual attention modelling, with very effective models.

Paper Details

Authors:
Phutphalla Kong, Matei Mancas, Nimol Thuon, Seng Kheang, Bernard Gosselin
Submitted On:
4 October 2018 - 9:19am

Document Files

poster_ICIP2018.pdf (83 downloads)


[1] Phutphalla Kong, Matei Mancas, Nimol Thuon, Seng Kheang, Bernard Gosselin, "DO DEEP-LEARNING SALIENCY MODELS REALLY MODEL SALIENCY?", IEEE SigPort, 2018. [Online]. Available: http://sigport.org/3392. Accessed: Dec. 13, 2018.

FLOWFIELDS++: ACCURATE OPTICAL FLOW CORRESPONDENCES MEET ROBUST INTERPOLATION

Paper Details

Authors:
Christian Bailer, Oliver Wasenmüller, Didier Stricker
Submitted On:
4 October 2018 - 9:03am

Document Files

schuster2018ffpp_poster (37 downloads)


[1] Christian Bailer, Oliver Wasenmüller, Didier Stricker, "FLOWFIELDS++: ACCURATE OPTICAL FLOW CORRESPONDENCES MEET ROBUST INTERPOLATION", IEEE SigPort, 2018. [Online]. Available: http://sigport.org/3388. Accessed: Dec. 13, 2018.
