
Image, Video, and Multidimensional Signal Processing

Immersive Optical-See-Through Augmented Reality (Keynote Talk)


Augmented Reality has been in the making for the last 20 years and is finally becoming real, powered by progress in enabling technologies such as graphics, vision, sensors, and displays. In this talk I’ll provide a personal retrospective on my journey working on all of those enablers in preparation for the coming AR revolution. At Meta, we are working on an immersive optical-see-through AR headset, as well as the full software stack. We’ll discuss the differences of optical vs. video see-through approaches.

Paper Details

Authors: Kari Pulli
Submitted On: 22 December 2017 - 1:30pm

Document Files

ICIP_2017_Meta_AR_small.pdf

[1] Kari Pulli, "Immersive Optical-See-Through Augmented Reality (Keynote Talk)", IEEE SigPort, 2017. [Online]. Available: http://sigport.org/2261. Accessed: Sep. 24, 2020.

MULTI IMAGE DEPTH FROM DEFOCUS NETWORK WITH BOUNDARY CUE FOR DUAL APERTURE CAMERA


In this paper, we estimate depth information using two defocused images from a dual-aperture camera. Recent advances in deep learning have increased the accuracy of depth estimation. In addition, methods that exploit defocused images, in which objects are blurred according to their distance from the camera, have been widely studied. We further improve the accuracy of depth estimation by training the network on two images with different degrees of depth of field.
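The depth cue behind this two-image setup can be illustrated with the thin-lens blur model: the circle of confusion grows with distance from the focus plane at a rate proportional to the aperture. This is a toy geometric sketch, not the paper's network; the focal length, f-numbers, and focus distance below are made-up illustration values.

```python
import math

def blur_diameter(depth_m, f_mm=50.0, fnum=2.0, focus_m=2.0):
    """Thin-lens circle-of-confusion diameter (mm) for an object at depth_m."""
    f = f_mm / 1000.0          # focal length in metres
    aperture = f / fnum        # aperture diameter in metres
    # image distance of the in-focus plane (thin-lens equation)
    v_focus = 1.0 / (1.0 / f - 1.0 / focus_m)
    # image distance of the object
    v_obj = 1.0 / (1.0 / f - 1.0 / depth_m)
    # geometric blur on the sensor, converted to mm
    return abs(aperture * v_focus * (1.0 / v_obj - 1.0 / v_focus)) * 1000.0

# Two apertures (f/2 vs f/8) give different depths of field: away from the
# focus plane, the wide-aperture blur grows 4x faster, which is the cue a
# two-image network can exploit to resolve depth.
for d in (1.0, 2.0, 4.0):
    wide = blur_diameter(d, fnum=2.0)
    narrow = blur_diameter(d, fnum=8.0)
    print(f"depth {d} m: f/2 blur {wide:.3f} mm, f/8 blur {narrow:.3f} mm")
```

Note that the blur ratio between the two apertures is fixed, but the pair of absolute blur values disambiguates whether a point lies in front of or behind the focus plane.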

Paper Details

Authors: Gwangmo Song, Yumee Kim, Kukjin Chun, Kyoung Mu Lee
Submitted On: 20 May 2020 - 11:55am

Document Files

ICASSP_MIDFD_PPT.pdf

[1] Gwangmo Song, Yumee Kim, Kukjin Chun, Kyoung Mu Lee, "MULTI IMAGE DEPTH FROM DEFOCUS NETWORK WITH BOUNDARY CUE FOR DUAL APERTURE CAMERA", IEEE SigPort, 2020. [Online]. Available: http://sigport.org/5418. Accessed: Sep. 24, 2020.

IMPROVING THE PERFORMANCE OF TRANSFORMER BASED LOW RESOURCE SPEECH RECOGNITION FOR INDIAN LANGUAGES


The recent success of the Transformer-based sequence-to-sequence framework for various Natural Language Processing tasks has motivated its application to Automatic Speech Recognition. In this work, we explore the application of Transformers to low-resource Indian languages in a multilingual framework. We explore various methods to incorporate language information into a multilingual Transformer: (i) at the decoder, and (ii) at the encoder. These methods include using language identity tokens or providing language information to the acoustic vectors.
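The two conditioning strategies can be sketched in the abstract. This is a hedged illustration of the general idea, not the paper's implementation; the token names and the toy three-language inventory are invented for the example.

```python
# Hypothetical language-token inventory; in the token approach, language IDs
# share the output vocabulary with the regular modelling units.
LANG_TOKENS = {"hindi": "<hi>", "tamil": "<ta>", "gujarati": "<gu>"}

def add_lang_token_decoder(target_tokens, language):
    """Decoder-side conditioning: prepend a language identity token to the
    target sequence, so decoding starts from an explicit language context."""
    return [LANG_TOKENS[language]] + target_tokens

def add_lang_onehot_encoder(acoustic_frames, language):
    """Encoder-side conditioning: append a one-hot language vector to every
    acoustic feature frame."""
    langs = sorted(LANG_TOKENS)
    onehot = [1.0 if l == language else 0.0 for l in langs]
    return [frame + onehot for frame in acoustic_frames]

print(add_lang_token_decoder(["n", "a", "m", "a", "s", "t", "e"], "hindi"))
print(add_lang_onehot_encoder([[0.1, 0.2], [0.3, 0.4]], "tamil"))
```

The token variant needs no change to the acoustic front end, while the one-hot variant conditions every encoder layer through the input features; that trade-off is what motivates comparing the two placements.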

Paper Details

Authors: Vishwas M. Shetty, Metilda Sagaya Mary N J, S. Umesh
Submitted On: 19 May 2020 - 3:23am

Document Files

shetty.pdf

[1] Vishwas M. Shetty, Metilda Sagaya Mary N J, S. Umesh, "IMPROVING THE PERFORMANCE OF TRANSFORMER BASED LOW RESOURCE SPEECH RECOGNITION FOR INDIAN LANGUAGES", IEEE SigPort, 2020. [Online]. Available: http://sigport.org/5397. Accessed: Sep. 24, 2020.

INTERPRETABLE SELF-ATTENTION TEMPORAL REASONING FOR DRIVING BEHAVIOR UNDERSTANDING


Performing driving behaviors based on causal reasoning is essential to ensure driving safety. In this work, we investigated how state-of-the-art 3D Convolutional Neural Networks (CNNs) perform at classifying driving behaviors based on causal reasoning. We proposed a perturbation-based visual explanation method to inspect the models' performance. By examining the video attention saliency, we found that existing models could not precisely capture the causes (e.g., a traffic light) of a specific action (e.g., stopping).
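The principle of a perturbation-based explanation, measuring how much the class score drops when part of the input is occluded, can be sketched as follows. This is a toy illustration, not the paper's method: real methods perturb smooth spatio-temporal regions rather than single pixels, and the "model" here is a stand-in scoring function.

```python
def perturbation_saliency(video, score_fn, mask_value=0.0):
    """Occlude one pixel at a time and record the drop in the class score.
    `video` is a list of frames, each a 2D list of pixel values."""
    base = score_fn(video)
    saliency = []
    for t, frame in enumerate(video):
        sal_frame = []
        for i, row in enumerate(frame):
            sal_row = []
            for j, _ in enumerate(row):
                perturbed = [[r[:] for r in f] for f in video]  # deep copy
                perturbed[t][i][j] = mask_value
                # large score drop => the occluded location was important
                sal_row.append(base - score_fn(perturbed))
            sal_frame.append(sal_row)
        saliency.append(sal_frame)
    return saliency

# Toy "model": the class score is just the value of one pixel, so the
# saliency map should light up only at that pixel.
score = lambda v: v[0][1][1]
video = [[[0.0, 0.0, 0.0], [0.0, 0.9, 0.0], [0.0, 0.0, 0.0]]]
sal = perturbation_saliency(video, score)
print(sal[0][1][1])   # 0.9: the decisive pixel
print(sal[0][0][0])   # 0.0: an irrelevant pixel
```

For video classifiers, the same loop run over space and time yields a spatio-temporal saliency volume that can be compared against the causes a human would name (e.g., the traffic light before a stop).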

Paper Details

Authors: Yi-Chieh Liu, Yung-An Hsieh, Min-Hung Chen, C.-H. Huck Yang, J. Tegner, Y.-C. James Tsai
Submitted On: 14 May 2020 - 11:12am

Document Files

INTERPRETABLE SELF-ATTENTION TEMPORAL REASONING FOR DRIVING BEHAVIOR UNDERSTANDING.pdf

[1] Yi-Chieh Liu, Yung-An Hsieh, Min-Hung Chen, C.-H. Huck Yang, J. Tegner, Y.-C. James Tsai, "INTERPRETABLE SELF-ATTENTION TEMPORAL REASONING FOR DRIVING BEHAVIOR UNDERSTANDING", IEEE SigPort, 2020. [Online]. Available: http://sigport.org/5305. Accessed: Sep. 24, 2020.

Parsing Map Guided Multi-Scale Attention Network For Face Hallucination

Paper Details

Submitted On: 14 May 2020 - 11:08am

Document Files

4929wang.pdf

[1] "Parsing Map Guided Multi-Scale Attention Network For Face Hallucination", IEEE SigPort, 2020. [Online]. Available: http://sigport.org/5304. Accessed: Sep. 24, 2020.

IMAGE SEGMENTATION BASED PRIVACY-PRESERVING HUMAN ACTION RECOGNITION FOR ANOMALY DETECTION

Paper Details

Authors: Jiawei Yan, Federico Angelini, Syed Mohsen Naqvi
Submitted On: 14 May 2020 - 4:44am

Document Files

Image Segmentation Based Privacy-Preserving Human Action Recognition for Anomaly Detection.pdf

[1] Jiawei Yan, Federico Angelini, Syed Mohsen Naqvi, "IMAGE SEGMENTATION BASED PRIVACY-PRESERVING HUMAN ACTION RECOGNITION FOR ANOMALY DETECTION", IEEE SigPort, 2020. [Online]. Available: http://sigport.org/5272. Accessed: Sep. 24, 2020.

COLOUR COMPRESSION OF PLENOPTIC POINT CLOUDS USING RAHT-KLT WITH PRIOR COLOUR CLUSTERING AND SPECULAR/DIFFUSE COMPONENT SEPARATION


The recently introduced plenoptic point cloud representation marries a 3D point cloud with a light field. Instead of each point being associated with a single colour value, there can be multiple values to represent the colour at that point as perceived from different viewpoints. This representation was introduced together with a compression technique for the multi-view colour vectors, which is an extension of the RAHT method for point cloud attribute coding.
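The KLT component of such a scheme is a data-driven transform that decorrelates the multi-view colour vectors so most of the energy lands in a few coefficients. A minimal sketch for the two-view case, using the closed-form 2x2 eigen-rotation; this illustrates the decorrelation idea only, and is not the paper's coder.

```python
import math

def klt_2view(colour_pairs):
    """KLT (PCA) for two-view colour samples: rotate the (view1, view2) plane
    so the first component carries the shared colour and the second the
    view-dependent residual. Closed-form for the 2x2 covariance case."""
    n = len(colour_pairs)
    mx = sum(p[0] for p in colour_pairs) / n
    my = sum(p[1] for p in colour_pairs) / n
    cxx = sum((p[0] - mx) ** 2 for p in colour_pairs) / n
    cyy = sum((p[1] - my) ** 2 for p in colour_pairs) / n
    cxy = sum((p[0] - mx) * (p[1] - my) for p in colour_pairs) / n
    # rotation angle that diagonalises the 2x2 covariance matrix
    theta = 0.5 * math.atan2(2 * cxy, cxx - cyy)
    c, s = math.cos(theta), math.sin(theta)
    return [(c * (x - mx) + s * (y - my),
             -s * (x - mx) + c * (y - my)) for x, y in colour_pairs]

# Strongly correlated views: after the KLT nearly all energy sits in the
# first coefficient, which is what makes the transform attractive for coding.
pairs = [(100, 102), (150, 149), (200, 203), (50, 48)]
coeffs = klt_2view(pairs)
e1 = sum(c[0] ** 2 for c in coeffs)
e2 = sum(c[1] ** 2 for c in coeffs)
print(e1 > 100 * e2)   # True
```

Clustering colours beforehand (as the title suggests) makes each cluster's covariance tighter, so the per-cluster KLT compacts energy even further.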

Paper Details

Authors: Christine Guillemot
Submitted On: 14 May 2020 - 4:42am

Document Files

1461_Krivokuca_Presentation.pdf

[1] Christine Guillemot, "COLOUR COMPRESSION OF PLENOPTIC POINT CLOUDS USING RAHT-KLT WITH PRIOR COLOUR CLUSTERING AND SPECULAR/DIFFUSE COMPONENT SEPARATION", IEEE SigPort, 2020. [Online]. Available: http://sigport.org/5268. Accessed: Sep. 24, 2020.

Single-Shot Real-Time Multiple-Path Time-of-Flight Depth Imaging for Multi-Aperture and Macro-Pixel Sensors


Multiple-Path Interference (MPI) is a major drawback of Time-of-Flight (ToF) sensors. MPI occurs when a ToF pixel receives more than a single light bounce from the scene. Current methods resolving more than a single return per pixel rely on the sequential acquisition of large amounts of data and are too computationally expensive to deliver depth images in real time. These factors have precluded the development of a multiple-path ToF camera to date. In this work we consider two hardware alternatives that can be used to acquire all necessary raw data in a single shot.
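Why a conventional single-return decode fails under MPI can be seen in a toy two-return phase model. This is an illustration only, not the paper's method; the 20 MHz modulation frequency and the path amplitudes below are invented values.

```python
import cmath
import math

C = 299_792_458.0  # speed of light, m/s

def tof_measurement(paths, freq_hz):
    """Complex ToF correlation sample for a set of (amplitude, distance)
    returns: each path contributes a phasor with round-trip phase 4*pi*f*d/c."""
    return sum(a * cmath.exp(-1j * 4 * math.pi * freq_hz * d / C)
               for a, d in paths)

def single_path_depth(meas, freq_hz):
    """Conventional single-return decode: depth from the phase angle alone."""
    phase = -cmath.phase(meas) % (2 * math.pi)
    return phase * C / (4 * math.pi * freq_hz)

freq = 20e6  # 20 MHz modulation
direct = [(1.0, 2.0)]                # a single 2 m return
multi = [(1.0, 2.0), (0.5, 3.5)]     # plus an interfering 3.5 m bounce
print(single_path_depth(tof_measurement(direct, freq), freq))  # ~2.0 m
print(single_path_depth(tof_measurement(multi, freq), freq))   # biased > 2 m
```

Separating the two phasors requires more measurements than a single correlation sample, which is exactly the raw data the multi-aperture and macro-pixel hardware is meant to capture in one shot.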

Paper Details

Authors: Keiichiro Kagawa, Tomoya Kokado, Shoji Kawahito, Otmar Loffeld
Submitted On: 14 May 2020 - 4:11am

Document Files

heredia_ICASSP2020_kagawa_v3_final.pdf

[1] Keiichiro Kagawa, Tomoya Kokado, Shoji Kawahito, Otmar Loffeld, "Single-Shot Real-Time Multiple-Path Time-of-Flight Depth Imaging for Multi-Aperture and Macro-Pixel Sensors", IEEE SigPort, 2020. [Online]. Available: http://sigport.org/5262. Accessed: Sep. 24, 2020.

COMPARE LEARNING: BI-ATTENTION NETWORK FOR FEW-SHOT LEARNING


Learning from few labeled examples is a key challenge for visual recognition, as deep neural networks tend to overfit when trained on only a few samples. Metric learning, one family of few-shot learning methods, addresses this challenge by first learning a deep distance metric that determines whether a pair of images belongs to the same category, and then applying the trained metric to instances from a test set with limited labels. This approach makes the most of the few available samples and effectively limits overfitting.
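The metric-based inference loop described above can be sketched in a few lines. The similarity function here is a fixed negative squared distance standing in for the learned deep metric, and the 2-D "embeddings" are hand-made; this illustrates the episode structure, not the paper's bi-attention network.

```python
def pair_score(x, y):
    """Stand-in for a learned pairwise similarity network: a fixed negative
    squared Euclidean distance between embedding vectors."""
    return -sum((a - b) ** 2 for a, b in zip(x, y))

def few_shot_classify(query, support):
    """Compare the query against every labelled support example and return
    the label of the best-scoring pair (few-shot inference)."""
    return max(support, key=lambda item: pair_score(query, item[1]))[0]

# Tiny 2-way 1-shot episode with hand-made 2-D "embeddings".
support = [("cat", [0.9, 0.1]), ("dog", [0.1, 0.9])]
print(few_shot_classify([0.8, 0.2], support))  # cat
```

In the learned setting, `pair_score` is a trained network, so what generalises to novel classes is the comparison function itself rather than any class-specific weights.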

Paper Details

Authors: Meng Pan, Weigao Wen, Dong Li
Submitted On: 13 May 2020 - 10:22pm

Document Files

ICASSP2020 presentation.pdf

[1] Meng Pan, Weigao Wen, Dong Li, "COMPARE LEARNING: BI-ATTENTION NETWORK FOR FEW-SHOT LEARNING", IEEE SigPort, 2020. [Online]. Available: http://sigport.org/5195. Accessed: Sep. 24, 2020.

EXPOSURE INTERPOLATION VIA HYBRID LEARNING


Deep learning based methods have become dominant solutions to many image processing problems. A natural question is: is there still room for conventional methods on these problems? In this paper, exposure interpolation is taken as an example to answer this question, and the answer is “yes”. A new hybrid learning framework is introduced to interpolate a medium-exposure image from two large-exposure-ratio images captured by an emerging high dynamic range (HDR) video capture device. The framework is built by fusing conventional and deep learning methods.
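The conventional half of such a hybrid framework can be sketched as classic exposure interpolation: invert each input to a radiance estimate, combine, and re-expose at the intermediate setting. The linear camera response and the exposure ratios assumed below are simplifications for illustration, not the paper's model; a learned component would then refine this baseline.

```python
def interpolate_exposure(low, high, ratio_low=0.25, ratio_high=4.0):
    """Toy exposure interpolation for one row of pixel values in [0, 1]:
    divide out each image's (assumed known) exposure ratio to estimate scene
    radiance, average the two estimates, and re-expose at ratio 1.0."""
    radiance = [(l / ratio_low + h / ratio_high) / 2.0
                for l, h in zip(low, high)]
    return [min(1.0, r) for r in radiance]  # clip to the display range

low = [0.05, 0.10, 0.20]    # under-exposed pixels (exposure ratio 0.25)
high = [0.80, 1.00, 1.00]   # bright/saturating pixels (exposure ratio 4.0)
print(interpolate_exposure(low, high))
```

Where one input is saturated or crushed, a plain average like this leaves artifacts, which is the gap a learned refinement stage in a hybrid framework would target.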

Paper Details

Authors: Zhengguo Li, Yi Yang, Shiqian Wu
Submitted On: 13 May 2020 - 9:22pm

Document Files

ICASSP2020HybridLearning.pdf

[1] Zhengguo Li, Yi Yang, Shiqian Wu, "EXPOSURE INTERPOLATION VIA HYBRID LEARNING", IEEE SigPort, 2020. [Online]. Available: http://sigport.org/5178. Accessed: Sep. 24, 2020.
