
Image, Video, and Multidimensional Signal Processing

Immersive Optical-See-Through Augmented Reality (Keynote Talk)


Augmented Reality has been in the making for the last 20 years and is finally becoming real, powered by progress in enabling technologies such as graphics, vision, sensors, and displays. In this talk I'll provide a personal retrospective on my journey working on all of those enablers in preparation for the coming AR revolution. At Meta, we are working on an immersive optical-see-through AR headset, as well as the full software stack. We'll discuss the differences between optical and video see-through AR.

Paper Details

Authors:
Kari Pulli
Submitted On:
22 December 2017 - 1:30pm

Document Files

ICIP_2017_Meta_AR_small.pdf (197 downloads)

[1] Kari Pulli, "Immersive Optical-See-Through Augmented Reality (Keynote Talk)", IEEE SigPort, 2017. [Online]. Available: http://sigport.org/2261. Accessed: Jun. 18, 2018.

RCDFNN: Robust Change Detection based on Convolutional Fusion Neural Network


Video change detection, which plays an important role in computer vision, is far from being well resolved due to the complexity of the diverse scenes encountered in the real world. Most current methods are built on hand-crafted features and perform well in certain scenes but may fail in others. This paper puts forward a deep-learning-based method to automatically fuse multiple basic detections into an optimal one.
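
As a rough illustration of the fusion idea (not the RCDFNN architecture from the paper; the class name, layer sizes, and channel counts below are assumptions), the sketch stacks the outputs of several basic change detectors as input channels and lets a small convolutional network predict a fused per-pixel change probability:

```python
import torch
import torch.nn as nn

class TinyFusionNet(nn.Module):
    """Toy convolutional fusion network: K detector masks in, one fused map out.
    Illustrative only; not the RCDFNN model described in the paper."""
    def __init__(self, num_detectors: int = 4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(num_detectors, 16, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(16, 16, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(16, 1, kernel_size=1),          # per-pixel fusion logit
        )

    def forward(self, masks: torch.Tensor) -> torch.Tensor:
        return torch.sigmoid(self.net(masks))         # fused change probability in [0, 1]

# Usage: fuse 4 hypothetical detector outputs for a 240x320 frame.
masks = torch.rand(1, 4, 240, 320)                    # stand-in for basic detection maps
fused = TinyFusionNet(num_detectors=4)(masks)
print(fused.shape)                                    # torch.Size([1, 1, 240, 320])
```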

Paper Details

Submitted On:
20 April 2018 - 10:43am

Document Files

ICASSP2018_poster.pdf (58 downloads)

[1] , " RCDFNN: Robust Change Detection based on Convolutional Fusion Neural Network", IEEE SigPort, 2018. [Online]. Available: http://sigport.org/3107. Accessed: Jun. 18, 2018.

Deep Blind Image Quality Assessment by Learning Sensitivity Map


Applying a deep convolutional neural network (CNN) to no-reference image quality assessment (NR-IQA) is a challenging task due to the lack of a training database. In this paper, we propose a CNN-based NR-IQA framework that can effectively solve this problem. The proposed method, the Deep Blind image Quality Assessment predictor (DeepBQA), adopts a two-stage training procedure to avoid overfitting. In the first stage, a ground-truth objective error map is generated and used as a proxy training target.
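
For intuition about the proxy target, here is a minimal sketch that builds an objective error map from a reference/distorted pair; the log-scaled, normalized squared error used here is an assumption and not necessarily the exact definition in the paper:

```python
import numpy as np

def objective_error_map(reference: np.ndarray, distorted: np.ndarray,
                        eps: float = 1e-6) -> np.ndarray:
    """Toy proxy target: per-pixel log-scaled squared error, normalized to [0, 1].
    Illustrative stand-in, not the exact error map used by DeepBQA."""
    err = (reference.astype(np.float64) - distorted.astype(np.float64)) ** 2
    err = np.log1p(err)                                   # compress the dynamic range
    return (err - err.min()) / (err.max() - err.min() + eps)

# Usage with random stand-in images (grayscale, values in [0, 255]).
ref = np.random.randint(0, 256, (64, 64)).astype(np.float64)
dist = ref + np.random.normal(0, 10, ref.shape)           # simulated distortion
target = objective_error_map(ref, dist)                   # stage-1 proxy training label
print(target.shape, float(target.min()), float(target.max()))
```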

Paper Details

Submitted On:
20 April 2018 - 1:12am

Document Files

ICASSP2018_Sanghoon_Lee.pdf (44 downloads)

[1] , "Deep Blind Image Quality Assessment by Learning Sensitivity Map", IEEE SigPort, 2018. [Online]. Available: http://sigport.org/3074. Accessed: Jun. 18, 2018.

BAYESIAN GENERATIVE MODEL BASED ON COLOR HISTOGRAM OF ORIENTED PHASE AND HISTOGRAM OF ORIENTED OPTICAL FLOW FOR RARE EVENT DETECTION IN CROWDED SCENES

Paper Details

Authors:
D. Fabrice ATREVI, Damien VIVET, Bruno EMILE
Submitted On:
20 April 2018 - 1:06am

Document Files

Poster_Icassp_ATREVI.pdf (31 downloads)

[1] D. Fabrice ATREVI, Damien VIVET, Bruno EMILE, "BAYESIAN GENERATIVE MODEL BASED ON COLOR HISTOGRAM OF ORIENTED PHASE AND HISTOGRAM OF ORIENTED OPTICAL FLOW FOR RARE EVENT DETECTION IN CROWDED SCENES", IEEE SigPort, 2018. [Online]. Available: http://sigport.org/3071. Accessed: Jun. 18, 2018.

Analysis and Optimization of Aperture Design in Computational Imaging


There is growing interest in the use of coded aperture imaging systems for a variety of applications. Using an analysis framework based on mutual information, we examine the fundamental limits of such systems—and the associated optimum aperture coding—under simple but meaningful propagation and sensor models. Among other results, we show that when SNR is high and thermal noise dominates shot noise, spectrally-flat masks, which have 50% transmissivity, are optimal, but that when shot noise dominates thermal noise, randomly generated masks with lower transmissivity offer greater performance.
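
As a toy companion to the abstract (not the paper's mutual-information analysis), the snippet below generates random binary masks at two transmissivity levels and reports their mean transmissivity together with a simple spectral-flatness score of their Fourier magnitudes; the mask size and the flatness measure are assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def random_mask(n: int, transmissivity: float) -> np.ndarray:
    """Binary aperture mask with roughly the requested fraction of open pixels."""
    return (rng.random((n, n)) < transmissivity).astype(float)

def spectral_flatness(mask: np.ndarray) -> float:
    """Ratio of geometric to arithmetic mean of the non-DC Fourier magnitudes
    (1.0 means perfectly flat); an illustrative measure, not the paper's metric."""
    mag = np.abs(np.fft.fft2(mask)).ravel()[1:]           # drop the DC term
    return float(np.exp(np.mean(np.log(mag + 1e-12))) / (np.mean(mag) + 1e-12))

for t in (0.5, 0.25):                                     # the two regimes discussed above
    m = random_mask(64, t)
    print(f"target transmissivity {t:.2f}: actual {m.mean():.3f}, "
          f"spectral flatness {spectral_flatness(m):.3f}")
```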

Paper Details

Authors:
Adam Yedidia, Christos Thrampoulidis, Gregory Wornell
Submitted On:
19 April 2018 - 9:20pm

Document Files

icassp_talk_short.pptx (29 downloads)

[1] Adam Yedidia, Christos Thrampoulidis, Gregory Wornell, "Analysis and Optimization of Aperture Design in Computational Imaging", IEEE SigPort, 2018. [Online]. Available: http://sigport.org/3049. Accessed: Jun. 18, 2018.

RECOGNIZING MINIMAL FACIAL SKETCH BY GENERATING PHOTOREALISTIC FACES WITH THE GUIDANCE OF DESCRIPTIVE ATTRIBUTES


Cross-modal sketch-photo recognition is of vital importance in law enforcement and public security. Most existing methods are dedicated to bridging the gap between the low-level visual features of sketches and photo images, which is limited due to intrinsic differences in pixel values. In this paper, based on the intuition that sketches and photo images are highly correlated in the semantic domain, we propose to jointly utilize the low-level visual features and high-level facial attributes to …
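
As a rough sketch of the joint low-level-plus-attribute idea (not the model proposed in the paper), the snippet below concatenates a visual descriptor with a facial-attribute vector for each face and ranks a stand-in photo gallery against a sketch query by cosine similarity; all features here are random placeholders:

```python
import numpy as np

rng = np.random.default_rng(1)

def joint_feature(visual: np.ndarray, attributes: np.ndarray,
                  attr_weight: float = 0.5) -> np.ndarray:
    """Concatenate a low-level visual descriptor with high-level attribute scores.
    Both inputs are placeholders; attr_weight trades off the two cues."""
    v = visual / (np.linalg.norm(visual) + 1e-12)
    a = attributes / (np.linalg.norm(attributes) + 1e-12)
    return np.concatenate([v, attr_weight * a])

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

# Stand-in gallery of 5 photos and one sketch query (128-D visual + 20 binary attributes).
gallery = [joint_feature(rng.normal(size=128), rng.integers(0, 2, 20)) for _ in range(5)]
query = joint_feature(rng.normal(size=128), rng.integers(0, 2, 20))

scores = [cosine(query, g) for g in gallery]
print("best match index:", int(np.argmax(scores)))
```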

Paper Details

Submitted On:
19 April 2018 - 2:49pm

Document Files

xiao_yang.pdf (29 downloads)

[1] , "RECOGNIZING MINIMAL FACIAL SKETCH BY GENERATING PHOTOREALISTIC FACES WITH THE GUIDANCE OF DESCRIPTIVE ATTRIBUTES", IEEE SigPort, 2018. [Online]. Available: http://sigport.org/3001. Accessed: Jun. 18, 2018.

Image Restoration with Deep Generative Models


Many image restoration problems are ill-posed in nature; hence, beyond the input image, most existing methods rely on a carefully engineered image prior that enforces some local consistency in the recovered image. How well the prior assumptions are satisfied has a large impact on the resulting task performance. To obtain more flexibility, in this work we propose to design the image prior in a data-driven manner: instead of explicitly defining the prior, we learn it using deep generative models.
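
A common way to exploit a learned generative prior for restoration is to search the generator's latent space for an image whose degraded version matches the observation. The sketch below illustrates that general idea only; the untrained toy generator, the degradation, and the optimizer settings are assumptions standing in for a pretrained model and a real task:

```python
import torch
import torch.nn as nn

# Toy "generator"; in practice this would be a pretrained deep generative model.
generator = nn.Sequential(
    nn.Linear(32, 256), nn.ReLU(),
    nn.Linear(256, 64 * 64), nn.Sigmoid(),
)
generator.requires_grad_(False)                        # the learned prior stays fixed

mask = (torch.arange(64 * 64) % 2 == 0).float()        # toy degradation: keep every other pixel
def degrade(x: torch.Tensor) -> torch.Tensor:
    return x * mask

observed = degrade(torch.rand(64 * 64))                # stand-in degraded observation

z = torch.zeros(32, requires_grad=True)                # latent code to optimize
opt = torch.optim.Adam([z], lr=0.05)
for step in range(200):
    opt.zero_grad()
    restored = generator(z)                            # image proposed by the prior
    loss = ((degrade(restored) - observed) ** 2).mean()  # data-fidelity term only
    loss.backward()
    opt.step()

restored_image = generator(z).detach().reshape(64, 64)   # restored estimate
print("final data-fidelity loss:", float(loss))
```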

Paper Details

Authors:
Chen Chen, Alexander G. Schwing, Mark Hasegawa-Johnson, Minh N. Do
Submitted On:
17 April 2018 - 12:40am

Document Files

presentation.pdf (184 downloads)

[1] Chen Chen, Alexander G. Schwing, Mark Hasegawa-Johnson, Minh N. Do, "Image Restoration with Deep Generative Models", IEEE SigPort, 2018. [Online]. Available: http://sigport.org/2927. Accessed: Jun. 18, 2018.

SEQUENTIAL ADAPTIVE DETECTION FOR IN-SITU TRANSMISSION ELECTRON MICROSCOPY (TEM)


We develop new, efficient online algorithms for detecting transient sparse signals in TEM video sequences by adopting the recently developed framework that combines sequential detection with online convex optimization [1]. We cast the problem as detecting an unknown sparse mean shift in Gaussian observations, and develop adaptive CUSUM and adaptive SSRS procedures based on likelihood-ratio statistics in which the post-change mean vector is an online maximum likelihood estimate with ℓ1 regularization. We demonstrate the meritorious performance of our algorithms for TEM imaging using real data.
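
For intuition, here is a minimal adaptive-CUSUM sketch for unit-variance Gaussian observations; the exponentially weighted, soft-thresholded running mean stands in for the paper's ℓ1-regularized online maximum likelihood estimate, and the threshold and simulation parameters are hand-picked assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def soft_threshold(x: np.ndarray, lam: float) -> np.ndarray:
    """ℓ1-style shrinkage applied to the running post-change mean estimate."""
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def adaptive_cusum(obs: np.ndarray, lam: float = 0.2, alpha: float = 0.1,
                   threshold: float = 25.0):
    """Toy adaptive CUSUM for a sparse mean shift in N(0, I) observations.
    Returns the first index at which the statistic crosses the threshold, or None."""
    stat, mean_est = 0.0, np.zeros(obs.shape[1])
    for t, x in enumerate(obs):
        # log-likelihood ratio of N(mean_est, I) vs N(0, I), using the past-only estimate
        llr = mean_est @ x - 0.5 * mean_est @ mean_est
        stat = max(0.0, stat + llr)
        mean_est = soft_threshold((1 - alpha) * mean_est + alpha * x, lam)
        if stat > threshold:
            return t
    return None

# Simulated stream: 200 pre-change frames, then a shift of 1.5 in 5 of 50 coordinates.
pre = rng.normal(size=(200, 50))
shift = np.zeros(50)
shift[:5] = 1.5
post = rng.normal(size=(200, 50)) + shift
print("alarm raised at t =", adaptive_cusum(np.vstack([pre, post])))
```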

Paper Details

Authors:
Yang Cao, Shixiang Zhu, Yao Xie, Jordan Key, Josh Kacher, Raymond Unocic, Christopher Rouleau
Submitted On:
13 April 2018 - 11:40pm

Document Files

icassp2018_poster.pdf (63 downloads)

[1] Yang Cao, Shixiang Zhu, Yao Xie, Jordan Key, Josh Kacher, Raymond Unocic, Christopher Rouleau, "SEQUENTIAL ADAPTIVE DETECTION FOR IN-SITU TRANSMISSION ELECTRON MICROSCOPY (TEM)", IEEE SigPort, 2018. [Online]. Available: http://sigport.org/2789. Accessed: Jun. 18, 2018.

LOW RESOLUTION FACE RECOGNITION AND RECONSTRUCTION VIA DEEP CANONICAL CORRELATION ANALYSIS


Low-resolution (LR) face identification remains a challenge in computer vision. In this paper, we propose a new LR face recognition and reconstruction method using deep canonical correlation analysis (DCCA). Unlike linear CCA-based methods, our proposed method can learn flexible nonlinear representations by passing LR and high-resolution (HR) image principal component features through multiple stacked layers of nonlinear transformation.
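
For context, a linear baseline of the same pipeline (per-modality PCA features followed by classical CCA) is sketched below with random stand-in data; DCCA replaces the linear CCA projections with stacked nonlinear networks, and all dimensions here are assumptions:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(0)

# Stand-in data: 300 paired faces, HR as 64x64 vectors and LR as 16x16 vectors.
hr = rng.normal(size=(300, 64 * 64))
lr = rng.normal(size=(300, 16 * 16))

# Step 1: per-modality principal component features, as described in the abstract.
hr_pca = PCA(n_components=50).fit_transform(hr)
lr_pca = PCA(n_components=50).fit_transform(lr)

# Step 2: linear CCA baseline; DCCA would replace these projections with deep networks.
hr_c, lr_c = CCA(n_components=20).fit_transform(hr_pca, lr_pca)

# Match an LR probe to the HR gallery in the shared correlated subspace (nearest neighbor).
probe = lr_c[0]
dists = np.linalg.norm(hr_c - probe, axis=1)
print("predicted gallery index for probe 0:", int(np.argmin(dists)))
```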

Paper Details

Authors:
Zhao Zhang, Yun-Hao Yuan, Xiao-Bo Shen, Yun Li
Submitted On:
13 April 2018 - 11:47am

Document Files

icassp2018-poster-wide.pdf (36 downloads)

[1] Zhao Zhang, Yun-Hao Yuan, Xiao-Bo Shen, Yun Li, "LOW RESOLUTION FACE RECOGNITION AND RECONSTRUCTION VIA DEEP CANONICAL CORRELATION ANALYSIS", IEEE SigPort, 2018. [Online]. Available: http://sigport.org/2722. Accessed: Jun. 18, 2018.
