Image, Video, and Multidimensional Signal Processing

Immersive Optical-See-Through Augmented Reality (Keynote Talk)


Augmented Reality has been in the making for the last 20 years and is finally becoming real, powered by progress in enabling technologies such as graphics, vision, sensors, and displays. In this talk I'll provide a personal retrospective on my journey working on all of those enablers in preparation for the coming AR revolution. At Meta, we are working on an immersive optical-see-through AR headset, as well as the full software stack. We'll discuss the differences of optical vs.

Paper Details

Authors:
Kari Pulli
Submitted On:
22 December 2017 - 1:30pm

Document Files

ICIP_2017_Meta_AR_small.pdf

Cite

[1] Kari Pulli, "Immersive Optical-See-Through Augmented Reality (Keynote Talk)", IEEE SigPort, 2017. [Online]. Available: http://sigport.org/2261. Accessed: Nov. 12, 2019.

JOINT LEARNING OF SELF-REPRESENTATION AND INDICATOR FOR MULTI-VIEW IMAGE CLUSTERING

Paper Details

Authors:
Zhiqiang Lu, Hao Tang, Yan Yan, Songhao Zhu, Xiao-Yuan Jing, Zuoyong Li
Submitted On:
23 September 2019 - 4:37pm

Document Files

icip 2019 poster: multi-view image clustering

Cite

[1] Zhiqiang Lu, Hao Tang, Yan Yan, Songhao Zhu, Xiao-Yuan Jing, Zuoyong Li, "JOINT LEARNING OF SELF-REPRESENTATION AND INDICATOR FOR MULTI-VIEW IMAGE CLUSTERING", IEEE SigPort, 2019. [Online]. Available: http://sigport.org/4819. Accessed: Nov. 12, 2019.

Single-image rain removal via multi-scale cascading image generation


A novel single-image rain removal method is proposed based on multi-scale cascading image generation (MSCG). The proposed method consists of an encoder that extracts multi-scale features from images and a decoder that generates de-rained images with a cascading mechanism. The encoder ensembles convolutional neural networks with kernels of different sizes and integrates their outputs across scales.
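
The multi-scale encoder idea can be illustrated with a short, hedged sketch: parallel convolution branches with different kernel sizes whose outputs are concatenated and fused, followed by a trivial decoder that predicts a rain residual. This is only an approximation of the setup described above; the module names, layer sizes, and the residual-subtraction decoder are illustrative placeholders, not the authors' MSCG implementation.

# Illustrative PyTorch sketch of a multi-scale encoder with different kernel
# sizes; names and sizes are hypothetical, not the paper's code.
import torch
import torch.nn as nn

class MultiScaleEncoder(nn.Module):
    def __init__(self, in_ch=3, feat=16):
        super().__init__()
        # One branch per kernel size; padding keeps the spatial size unchanged.
        self.branches = nn.ModuleList([
            nn.Conv2d(in_ch, feat, kernel_size=k, padding=k // 2)
            for k in (3, 5, 7)
        ])
        self.fuse = nn.Conv2d(3 * feat, feat, kernel_size=1)

    def forward(self, x):
        # Concatenate multi-scale responses along the channel axis, then fuse.
        feats = torch.cat([torch.relu(b(x)) for b in self.branches], dim=1)
        return torch.relu(self.fuse(feats))

class SimpleDerainNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = MultiScaleEncoder()
        # A single conv decoder standing in for the cascading generator.
        self.decoder = nn.Conv2d(16, 3, kernel_size=3, padding=1)

    def forward(self, x):
        # Predict a rain residual and subtract it from the rainy input.
        return x - self.decoder(self.encoder(x))

rainy = torch.rand(1, 3, 64, 64)        # dummy rainy image
derained = SimpleDerainNet()(rainy)     # same shape as the input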

Paper Details

Authors:
Zheng Zhang, Yi Xu, He Wang, Bingbing Ni, Hongteng Xu
Submitted On:
22 September 2019 - 2:38pm

Document Files

Poster ICIP 2019 Paper #2542.pdf

Cite

[1] Zheng Zhang, Yi Xu, He Wang, Bingbing Ni, Hongteng Xu, "Single-image rain removal via multi-scale cascading image generation", IEEE SigPort, 2019. [Online]. Available: http://sigport.org/4817. Accessed: Nov. 12, 2019.

Towards Unified Aesthetics and Emotion Prediction in Images

Paper Details

Submitted On:
21 September 2019 - 9:47am

Document Files

Towards Unified Aesthetics and Emotion Prediction in Images

Cite

[1] , "Towards Unified Aesthetics and Emotion Prediction in Images", IEEE SigPort, 2019. [Online]. Available: http://sigport.org/4810. Accessed: Nov. 12, 2019.

Learning The Set Graphs: Image-Set Classification Using Sparse Graph Convolutional Networks

Paper Details

Submitted On:
21 September 2019 - 9:43am

Document Files

icip2019-poster-haoliang-WQ.PD_.5.pdf

Cite

[1] , "Learning The Set Graphs: Image-Set Classification Using Sparse Graph Convolutional Networks", IEEE SigPort, 2019. [Online]. Available: http://sigport.org/4809. Accessed: Nov. 12, 2019.

VARIABLE-LENGTH QUANTIZATION STRATEGY FOR HASHING

Paper Details

Submitted On:
21 September 2019 - 9:49am

Document Files

VLQ.pptx

Cite

[1] , "VARIABLE-LENGTH QUANTIZATION STRATEGY FOR HASHING", IEEE SigPort, 2019. [Online]. Available: http://sigport.org/4808. Accessed: Nov. 12, 2019.

When Causal Intervention Meets Adversarial Examples and Image Masking for Deep Neural Networks


Discovering and exploiting causality in deep neural networks (DNNs) is a crucial challenge for understanding and reasoning about causal effects (CE) in an explainable visual model. "Intervention" has been widely used to recognize causal relations ontologically. In this paper, we propose a causal inference framework for visual reasoning via do-calculus. To study the intervention effects on pixel-level features for causal reasoning, we introduce pixel-wise masking and adversarial perturbation.
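
As a rough illustration of what a pixel-wise masking intervention looks like in practice, the sketch below occludes a patch of the input and measures the change in a classifier's confidence. It is a generic probe in the spirit of the description above, not the authors' do-calculus framework; the stand-in model, patch size, and target class are assumptions.

# Minimal Python/PyTorch sketch: zero out a patch (the "intervention") and
# compare the model's class confidence before and after.
import torch
import torch.nn as nn

model = nn.Sequential(                     # stand-in classifier (untrained)
    nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 10)
)
model.eval()

def masking_effect(image, top, left, size=8, target=0):
    """Confidence drop on `target` when a size x size patch is zeroed out."""
    with torch.no_grad():
        base = torch.softmax(model(image), dim=1)[0, target]
        masked = image.clone()
        masked[:, :, top:top + size, left:left + size] = 0.0  # intervention
        after = torch.softmax(model(masked), dim=1)[0, target]
    return (base - after).item()

img = torch.rand(1, 3, 32, 32)             # dummy input image
print(masking_effect(img, top=8, left=8))  # larger drop => more influential pixels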

Paper Details

Authors:
Chao-Han Huck Yang, Yi-Chieh Liu, Pin-Yu Chen, Yi-Chang James Tsai, Xiaoli Ma
Submitted On:
21 September 2019 - 7:34am

Document Files

Oral_ICIP_2019_Adversarial_Causality_0927.pdf

Cite

[1] Chao-Han Huck Yang, Yi-Chieh Liu, Pin-Yu Chen, Yi-Chang James Tsai, Xiaoli Ma, "When Causal Intervention Meets Adversarial Examples and Image Masking for Deep Neural Networks", IEEE SigPort, 2019. [Online]. Available: http://sigport.org/4806. Accessed: Nov. 12, 2019.

Dynamic Spatial Predicted Background for Video Surveillance

Paper Details

Authors:
Yaniv Tocker, Rami R. Hagege, Joseph M. Francos
Submitted On:
21 September 2019 - 2:56am

Document Files

ICIP19 DSPB.pdf

Cite

[1] Yaniv Tocker, Rami R. Hagege, Joseph M. Francos, "Dynamic Spatial Predicted Background for Video Surveillance", IEEE SigPort, 2019. [Online]. Available: http://sigport.org/4802. Accessed: Nov. 12, 2019.

CNN-BASED LUMINANCE AND COLOR CORRECTION FOR ILL-EXPOSED IMAGES


Image restoration and image enhancement are critical image processing tasks, since good image quality is mandatory for many imaging applications. We are particularly interested in the restoration of ill-exposed images. Such exposure defects are caused by sensor limitations or the optical arrangement, and they prevent the details of the scene from being adequately represented in the captured image. We propose a deep neural network model because of the number of uncontrolled variables that affect the acquisition.
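
A minimal sketch of the overall setup, assuming a supervised image-to-image formulation: a small CNN maps an ill-exposed image to a corrected one and is trained with an L1 loss against well-exposed targets. The architecture, loss, and dummy data below are placeholders for illustration, not the network proposed in the paper.

# Python/PyTorch sketch of a simple exposure-correction setup with an L1 loss.
import torch
import torch.nn as nn

class ExposureCorrectionNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1), nn.Sigmoid(),  # output in [0, 1]
        )

    def forward(self, x):
        return self.net(x)

model = ExposureCorrectionNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.L1Loss()

# One dummy training step on random tensors standing in for image pairs.
under_exposed = torch.rand(4, 3, 64, 64)
well_exposed = torch.rand(4, 3, 64, 64)
loss = loss_fn(model(under_exposed), well_exposed)
optimizer.zero_grad()
loss.backward()
optimizer.step()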

Paper Details

Authors:
Cristiano Rafael Steffens, Valquiria Huttner, Lucas Ricardo Vieira Messias, Paulo Lilles Jorge Drews-Jr, Silvia Silva da Costa Botelho, Rodrigo da Silva Guerra
Submitted On:
20 September 2019 - 8:43pm

Document Files

PPT_ICIP.pdf

Cite

[1] Cristiano Rafael Steffens, Valquiria Huttner, Lucas Ricardo Vieira Messias, Paulo Lilles Jorge Drews-Jr, Silvia Silva da Costa Botelho, Rodrigo da Silva Guerra, "CNN-BASED LUMINANCE AND COLOR CORRECTION FOR ILL-EXPOSED IMAGES", IEEE SigPort, 2019. [Online]. Available: http://sigport.org/4799. Accessed: Nov. 12, 2019.

EMBEDDED CYCLEGAN FOR SHAPE-AGNOSTIC IMAGE-TO-IMAGE TRANSLATION

Paper Details

Authors:
Ram Longman, Raymond Ptucha
Submitted On:
20 September 2019 - 7:54pm

Document Files

EmbeddedCycleGAN_v3.pdf

Cite

[1] Ram Longman, Raymond Ptucha, "EMBEDDED CYCLEGAN FOR SHAPE-AGNOSTIC IMAGE-TO-IMAGE TRANSLATION", IEEE SigPort, 2019. [Online]. Available: http://sigport.org/4796. Accessed: Nov. 12, 2019.
