Image, Video, and Multidimensional Signal Processing

Immersive Optical-See-Through Augmented Reality (Keynote Talk)


Augmented Reality has been in the making for the last 20 years and is finally becoming real, powered by progress in enabling technologies such as graphics, vision, sensors, and displays. In this talk I'll provide a personal retrospective on my journey working on all of those enablers in preparation for the coming AR revolution. At Meta, we are working on an immersive optical-see-through AR headset, as well as the full software stack. We'll discuss the differences of optical vs. […]

Paper Details

Authors:
Kari Pulli
Submitted On:
22 December 2017 - 1:30pm

Document Files

ICIP_2017_Meta_AR_small.pdf

Cite:
[1] Kari Pulli, "Immersive Optical-See-Through Augmented Reality (Keynote Talk)", IEEE SigPort, 2017. [Online]. Available: http://sigport.org/2261. Accessed: Sep. 21, 2019.

Towards Unified Aesthetics and Emotion Prediction in Images

Paper Details

Submitted On:
21 September 2019 - 9:47am

Document Files

Towards Unified Aesthetics and Emotion Prediction in Images

[1] , "Towards Unified Aesthetics and Emotion Prediction in Images", IEEE SigPort, 2019. [Online]. Available: http://sigport.org/4810. Accessed: Sep. 21, 2019.
@article{4810-19,
url = {http://sigport.org/4810},
author = { },
publisher = {IEEE SigPort},
title = {Towards Unified Aesthetics and Emotion Prediction in Images},
year = {2019} }
TY - EJOUR
T1 - Towards Unified Aesthetics and Emotion Prediction in Images
AU -
PY - 2019
PB - IEEE SigPort
UR - http://sigport.org/4810
ER -
. (2019). Towards Unified Aesthetics and Emotion Prediction in Images. IEEE SigPort. http://sigport.org/4810
, 2019. Towards Unified Aesthetics and Emotion Prediction in Images. Available at: http://sigport.org/4810.
. (2019). "Towards Unified Aesthetics and Emotion Prediction in Images." Web.
1. . Towards Unified Aesthetics and Emotion Prediction in Images [Internet]. IEEE SigPort; 2019. Available from : http://sigport.org/4810

Learning The Set Graphs: Image-Set Classification Using Sparse Graph Convolutional Networks

Paper Details

Submitted On:
21 September 2019 - 9:43am

Document Files

icip2019-poster-haoliang-WQ.PD_.5.pdf

[1] , "Learning The Set Graphs: Image-Set Classification Using Sparse Graph Convolutional Networks", IEEE SigPort, 2019. [Online]. Available: http://sigport.org/4809. Accessed: Sep. 21, 2019.
@article{4809-19,
url = {http://sigport.org/4809},
author = { },
publisher = {IEEE SigPort},
title = {Learning The Set Graphs: Image-Set Classification Using Sparse Graph Convolutional Networks},
year = {2019} }
TY - EJOUR
T1 - Learning The Set Graphs: Image-Set Classification Using Sparse Graph Convolutional Networks
AU -
PY - 2019
PB - IEEE SigPort
UR - http://sigport.org/4809
ER -
. (2019). Learning The Set Graphs: Image-Set Classification Using Sparse Graph Convolutional Networks. IEEE SigPort. http://sigport.org/4809
, 2019. Learning The Set Graphs: Image-Set Classification Using Sparse Graph Convolutional Networks. Available at: http://sigport.org/4809.
. (2019). "Learning The Set Graphs: Image-Set Classification Using Sparse Graph Convolutional Networks." Web.
1. . Learning The Set Graphs: Image-Set Classification Using Sparse Graph Convolutional Networks [Internet]. IEEE SigPort; 2019. Available from : http://sigport.org/4809

VARIABLE-LENGTH QUANTIZATION STRATEGY FOR HASHING

Paper Details

Submitted On:
21 September 2019 - 9:49am

Document Files

VLQ.pptx

[1] , "VARIABLE-LENGTH QUANTIZATION STRATEGY FOR HASHING", IEEE SigPort, 2019. [Online]. Available: http://sigport.org/4808. Accessed: Sep. 21, 2019.
@article{4808-19,
url = {http://sigport.org/4808},
author = { },
publisher = {IEEE SigPort},
title = {VARIABLE-LENGTH QUANTIZATION STRATEGY FOR HASHING},
year = {2019} }
TY - EJOUR
T1 - VARIABLE-LENGTH QUANTIZATION STRATEGY FOR HASHING
AU -
PY - 2019
PB - IEEE SigPort
UR - http://sigport.org/4808
ER -
. (2019). VARIABLE-LENGTH QUANTIZATION STRATEGY FOR HASHING. IEEE SigPort. http://sigport.org/4808
, 2019. VARIABLE-LENGTH QUANTIZATION STRATEGY FOR HASHING. Available at: http://sigport.org/4808.
. (2019). "VARIABLE-LENGTH QUANTIZATION STRATEGY FOR HASHING." Web.
1. . VARIABLE-LENGTH QUANTIZATION STRATEGY FOR HASHING [Internet]. IEEE SigPort; 2019. Available from : http://sigport.org/4808

When Causal Intervention Meets Adversarial Examples and Image Masking for Deep Neural Networks


Discovering and exploiting causality in deep neural networks (DNNs) is a crucial challenge for understanding and reasoning about causal effects (CE) in an explainable visual model. "Intervention" has been widely used for recognizing a causal relation ontologically. In this paper, we propose a causal inference framework for visual reasoning via do-calculus. To study intervention effects on pixel-level features for causal reasoning, we introduce pixel-wise masking and adversarial perturbation.
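
The abstract doesn't come with code; as a minimal sketch only, a pixel-wise masking intervention against a generic PyTorch classifier (all names below are illustrative, not from the paper) could look like this:

    import torch

    def masking_intervention_effect(model, image, mask, baseline=0.0):
        # Probe the causal effect of a pixel region by intervening on it.
        # image: (1, C, H, W) input tensor
        # mask:  (1, 1, H, W) binary tensor; 1 marks the intervened pixels
        model.eval()
        with torch.no_grad():
            p_orig = torch.softmax(model(image), dim=1)
            # do(masked pixels = baseline): overwrite the region and re-run
            intervened = image * (1 - mask) + baseline * mask
            p_do = torch.softmax(model(intervened), dim=1)
        # Shift in class probabilities serves as a crude causal-effect proxy
        return (p_orig - p_do).abs().sum(dim=1)

An adversarial-perturbation probe would follow the same pattern, replacing the hard overwrite with a small optimized perturbation of the same region.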

Paper Details

Authors:
Chao-Han Huck Yang, Yi-Chieh Liu, Pin-Yu Chen, Yi-Chang James Tsai, Xiaoli Ma
Submitted On:
21 September 2019 - 7:34am

Document Files

Oral_ICIP_2019_Adversarial_Causality_0927.pdf

Cite:
[1] Chao-Han Huck Yang, Yi-Chieh Liu, Pin-Yu Chen, Yi-Chang James Tsai, Xiaoli Ma, "When Causal Intervention Meets Adversarial Examples and Image Masking for Deep Neural Networks", IEEE SigPort, 2019. [Online]. Available: http://sigport.org/4806. Accessed: Sep. 21, 2019.

Dynamic Spatial Predicted Background for Video Surveillance

Paper Details

Authors:
Yaniv Tocker, Rami R. Hagege, Joseph M. Francos
Submitted On:
21 September 2019 - 2:56am

Document Files

ICIP19 DSPB.pdf

Cite:
[1] Yaniv Tocker, Rami R. Hagege, Joseph M. Francos, "Dynamic Spatial Predicted Background for Video Surveillance", IEEE SigPort, 2019. [Online]. Available: http://sigport.org/4802. Accessed: Sep. 21, 2019.

CNN-BASED LUMINANCE AND COLOR CORRECTION FOR ILL-EXPOSED IMAGES


Image restoration and image enhancement are critical image processing tasks, since good image quality is mandatory for many imaging applications. We are particularly interested in the restoration of ill-exposed images. Ill exposure is caused by sensor limitations or the optical arrangement, and it prevents the details of the scene from being adequately represented in the captured image. We propose a deep neural network model because of the number of uncontrolled variables that affect the acquisition.
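
The abstract doesn't pin down the architecture, so as a hedged illustration only, a small encoder-decoder CNN for exposure correction might be set up as follows in PyTorch (layer sizes and structure are assumptions, not the authors' model):

    import torch.nn as nn

    class ExposureCorrectionNet(nn.Module):
        # Toy encoder-decoder mapping an ill-exposed RGB image to a
        # corrected one; purely illustrative, not the paper's network.
        def __init__(self):
            super().__init__()
            self.encoder = nn.Sequential(
                nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            )
            self.decoder = nn.Sequential(
                nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
                nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1),
                nn.Sigmoid(),  # keep the output in the [0, 1] image range
            )

        def forward(self, x):
            return self.decoder(self.encoder(x))

Trained with a pixel-wise L1 or L2 loss against well-exposed ground truth, such a network learns a direct mapping from the degraded image to a corrected one.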

Paper Details

Authors:
Cristiano Rafael Steffens, Valquiria Huttner, Lucas Ricardo Vieira Messias, Paulo Lilles Jorge Drews-Jr, Silvia Silva da Costa Botelho, Rodrigo da Silva Guerra
Submitted On:
20 September 2019 - 8:43pm

Document Files

PPT_ICIP.pdf

Cite:
[1] Cristiano Rafael Steffens, Valquiria Huttner, Lucas Ricardo Vieira Messias, Paulo Lilles Jorge Drews-Jr, Silvia Silva da Costa Botelho, Rodrigo da Silva Guerra, "CNN-BASED LUMINANCE AND COLOR CORRECTION FOR ILL-EXPOSED IMAGES", IEEE SigPort, 2019. [Online]. Available: http://sigport.org/4799. Accessed: Sep. 21, 2019.

EMBEDDED CYCLEGAN FOR SHAPE-AGNOSTIC IMAGE-TO-IMAGE TRANSLATION

Paper Details

Authors:
Ram Longman, Raymond Ptucha
Submitted On:
20 September 2019 - 7:54pm

Document Files

EmbeddedCycleGAN_v3.pdf

Cite:
[1] Ram Longman, Raymond Ptucha, "EMBEDDED CYCLEGAN FOR SHAPE-AGNOSTIC IMAGE-TO-IMAGE TRANSLATION", IEEE SigPort, 2019. [Online]. Available: http://sigport.org/4796. Accessed: Sep. 21, 2019.

SHOW, TRANSLATE AND TELL


Humans have an incredible ability to process and understand information from multiple sources such as images, video, text, and speech. The recent success of deep neural networks has enabled us to develop algorithms which give machines the ability to understand and interpret this information. There is a need both to broaden their applicability and to develop methods which correlate visual information with semantic content. We propose a unified model which jointly trains on images and captions, and learns to generate […]

Paper Details

Authors:
Dheeraj Peri, Shagan Sah, Raymond Ptucha
Submitted On:
20 September 2019 - 7:51pm

Document Files

STT_v5.pdf

Cite:
[1] Dheeraj Peri, Shagan Sah, Raymond Ptucha, "SHOW, TRANSLATE AND TELL", IEEE SigPort, 2019. [Online]. Available: http://sigport.org/4795. Accessed: Sep. 21, 2019.

MULTI-TASK LEARNING WITH COMPRESSIBLE FEATURES FOR COLLABORATIVE INTELLIGENCE


A promising way to deploy Artificial Intelligence (AI)-based services on mobile devices is to run part of the AI model (a deep neural network) on the mobile device itself and the rest in the cloud. This is sometimes referred to as collaborative intelligence. In this framework, intermediate features from the deep network need to be transmitted to the cloud for further processing. We study the case where such features are used for multiple purposes in the cloud (multi-tasking) and where they need to be compressible in order to allow efficient transmission.
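
As a rough sketch of the idea rather than the paper's actual pipeline, the model split and the feature compression step might look like this (the layers and the plain 8-bit quantizer are assumptions for illustration; a real system would add entropy coding):

    import numpy as np
    import torch
    import torch.nn as nn

    # Illustrative split: a mobile-side "head" and a cloud-side "tail"
    head = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU())
    tail = nn.Sequential(nn.Conv2d(16, 10, 3, padding=1))

    def mobile_side(x, bits=8):
        # Run the head on-device, then coarsely quantize the intermediate
        # features so they are cheap to transmit to the cloud.
        with torch.no_grad():
            f = head(x)
        lo, hi = f.min().item(), f.max().item()
        q = torch.round((f - lo) / (hi - lo + 1e-8) * (2 ** bits - 1))
        return q.to(torch.uint8).numpy(), (lo, hi)

    def cloud_side(q, rng, bits=8):
        # Dequantize the received features and finish inference in the cloud.
        lo, hi = rng
        f = torch.from_numpy(q.astype(np.float32)) / (2 ** bits - 1) * (hi - lo) + lo
        with torch.no_grad():
            return tail(f)

    payload, rng = mobile_side(torch.rand(1, 3, 32, 32))
    logits = cloud_side(payload, rng)

In the multi-task setting, several cloud-side tails would consume the same transmitted feature tensor, which is why compressibility of the shared features matters.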

Paper Details

Submitted On:
20 September 2019 - 2:18pm

Document Files

ICIP_2019.pptx

[1] , "MULTI-TASK LEARNING WITH COMPRESSIBLE FEATURES FOR COLLABORATIVE INTELLIGENCE", IEEE SigPort, 2019. [Online]. Available: http://sigport.org/4793. Accessed: Sep. 21, 2019.
@article{4793-19,
url = {http://sigport.org/4793},
author = { },
publisher = {IEEE SigPort},
title = {MULTI-TASK LEARNING WITH COMPRESSIBLE FEATURES FOR COLLABORATIVE INTELLIGENCE},
year = {2019} }
TY - EJOUR
T1 - MULTI-TASK LEARNING WITH COMPRESSIBLE FEATURES FOR COLLABORATIVE INTELLIGENCE
AU -
PY - 2019
PB - IEEE SigPort
UR - http://sigport.org/4793
ER -
. (2019). MULTI-TASK LEARNING WITH COMPRESSIBLE FEATURES FOR COLLABORATIVE INTELLIGENCE. IEEE SigPort. http://sigport.org/4793
, 2019. MULTI-TASK LEARNING WITH COMPRESSIBLE FEATURES FOR COLLABORATIVE INTELLIGENCE. Available at: http://sigport.org/4793.
. (2019). "MULTI-TASK LEARNING WITH COMPRESSIBLE FEATURES FOR COLLABORATIVE INTELLIGENCE." Web.
1. . MULTI-TASK LEARNING WITH COMPRESSIBLE FEATURES FOR COLLABORATIVE INTELLIGENCE [Internet]. IEEE SigPort; 2019. Available from : http://sigport.org/4793
