
Machine Learning for Signal Processing

DYNAMIC SYSTEM IDENTIFICATION FOR GUIDANCE OF STIMULATION PARAMETERS IN HAPTIC SIMULATION ENVIRONMENTS

Paper Details

Authors:
Andac Demir, Safaa Eldeeb, Murat Akçakaya, Deniz Erdoğmuş
Submitted On:
14 October 2019 - 7:55am

Document Files

P_Demir_IEEEMLSP2019_Haptix.pdf


[1] Andac Demir, Safaa Eldeeb, Murat Akçakaya, Deniz Erdoğmuş, "DYNAMIC SYSTEM IDENTIFICATION FOR GUIDANCE OF STIMULATION PARAMETERS IN HAPTIC SIMULATION ENVIRONMENTS", IEEE SigPort, 2019. [Online]. Available: http://sigport.org/4869. Accessed: Sep. 28, 2020.

Efficient Parameter Estimation for Semi-Continuous Data: An Application to Independent Component Analysis


Semi-continuous data have a point mass at zero and are continuous with positive support. Such data arise naturally in several real-life situations like signals in a blind source separation problem, daily rainfall at a location, sales of durable goods among many others. Therefore, efficient estimation of the underlying probability density function is of significant interest.
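The estimation problem can be made concrete with a toy semi-continuous model: a point mass at zero mixed with an exponential density on the positive support. This is only an illustrative stand-in; the paper's density family and estimator are its own.

```python
import numpy as np

def fit_semicontinuous(x):
    """Fit a toy semi-continuous model: point mass at zero with
    probability pi, exponential density with rate `rate` on x > 0.
    Both estimates are the closed-form MLEs for this toy model."""
    x = np.asarray(x, dtype=float)
    pi = np.mean(x == 0)                 # fraction of exact zeros
    rate = 1.0 / x[x > 0].mean()         # exponential MLE on the positive part
    return pi, rate

def density(x, pi, rate):
    """Mixture density: pi at x == 0, (1 - pi) * Exp(rate) for x > 0."""
    x = np.asarray(x, dtype=float)
    return np.where(x == 0, pi, (1 - pi) * rate * np.exp(-rate * x))
```

For example, `fit_semicontinuous([0, 0, 1, 2, 3])` returns `pi = 0.4` (two zeros out of five) and `rate = 0.5` (the positive part has mean 2).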

Paper Details

Authors:
Sai K. Popuri, Zois Boukouvalas
Submitted On:
13 October 2019 - 4:45pm

Document Files

MLSP_2019.pdf


[1] Sai K. Popuri, Zois Boukouvalas, "Efficient Parameter Estimation for Semi-Continuous Data: An Application to Independent Component Analysis", IEEE SigPort, 2019. [Online]. Available: http://sigport.org/4865. Accessed: Sep. 28, 2020.

VISUALIZING HIGH DIMENSIONAL DYNAMICAL PROCESSES

Paper Details

Authors:
Andres F. Duque, Guy Wolf, Kevin R. Moon
Submitted On:
12 October 2019 - 2:07pm

Document Files

VISUALIZING_HIGH_DIMENSIONAL_DYNAMICAL_PROCESSES_poster.pdf


[1] Andres F. Duque, Guy Wolf, Kevin R. Moon, "VISUALIZING HIGH DIMENSIONAL DYNAMICAL PROCESSES", IEEE SigPort, 2019. [Online]. Available: http://sigport.org/4860. Accessed: Sep. 28, 2020.

Deep Clustering based on a Mixture of Autoencoders


In this paper we propose a Deep Autoencoder Mixture Clustering (DAMIC) algorithm based on a mixture of deep autoencoders where each cluster is represented by an autoencoder. A clustering network transforms the data into another space and then selects one of the clusters. Next, the autoencoder associated with this cluster is used to reconstruct the data-point. The clustering algorithm jointly learns the nonlinear data representation and the set of autoencoders. The optimal clustering is found by minimizing the reconstruction loss of the mixture-of-autoencoders network.
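The cluster-assignment rule at the heart of this idea can be sketched with linear autoencoders standing in for the deep ones; the joint training of the clustering network and the autoencoders, which the paper describes, is omitted here.

```python
import numpy as np

def assign_clusters(X, autoencoders):
    """Assign each row of X to the autoencoder that reconstructs it
    with the smallest squared error. Each 'autoencoder' is a
    (W_enc, W_dec) pair of linear maps -- a stand-in for the paper's
    deep networks."""
    errors = []
    for W_enc, W_dec in autoencoders:
        recon = X @ W_enc @ W_dec
        errors.append(np.sum((X - recon) ** 2, axis=1))
    return np.argmin(np.stack(errors, axis=1), axis=1)
```

With one autoencoder that preserves only the first coordinate and another that preserves only the second, points lying near each axis are assigned to the matching cluster.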

Paper Details

Authors:
Shlomo E. Chazan, Sharon Gannot and Jacob Goldberger
Submitted On:
11 October 2019 - 9:39pm

Document Files

Deep Clustering based on a Mixture of Autoencoders


[1] Shlomo E. Chazan, Sharon Gannot and Jacob Goldberger, "Deep Clustering based on a Mixture of Autoencoders", IEEE SigPort, 2019. [Online]. Available: http://sigport.org/4859. Accessed: Sep. 28, 2020.

VAE/WGAN-BASED IMAGE REPRESENTATION LEARNING FOR POSE-PRESERVING SEAMLESS IDENTITY REPLACEMENT IN FACIAL IMAGES


We present a novel variational generative adversarial network (VGAN) based on Wasserstein loss to learn a latent representation from a face image that is invariant to identity but preserves head-pose information. This facilitates synthesis of a realistic face image with the same head pose as a given input image, but with a different identity. One application of this network is in privacy-sensitive scenarios; after identity replacement in an image, utility, such as head pose, can still be preserved.

Paper Details

Authors:
Jiawei Chen, Janusz Konrad, Prakash Ishwar
Submitted On:
11 October 2019 - 4:41pm

Document Files

MLSP poster presentation


[1] Jiawei Chen, Janusz Konrad, Prakash Ishwar, "VAE/WGAN-BASED IMAGE REPRESENTATION LEARNING FOR POSE-PRESERVING SEAMLESS IDENTITY REPLACEMENT IN FACIAL IMAGES", IEEE SigPort, 2019. [Online]. Available: http://sigport.org/4856. Accessed: Sep. 28, 2020.

Super-resolution of Omnidirectional Images Using Adversarial Learning


An omnidirectional image (ODI) enables viewers to look in every direction from a fixed point through a head-mounted display providing an immersive experience compared to that of a standard image. Designing immersive virtual reality systems with ODIs is challenging as they require high resolution content. In this paper, we study super-resolution for ODIs and propose an improved generative adversarial network based model which is optimized to handle the artifacts obtained in the spherical observational space.
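One standard way to account for the spherical geometry on the equirectangular grid is to weight per-pixel errors by the cosine of latitude, as in WS-PSNR, so that oversampled polar rows contribute less. The sketch below shows only that weighting; the paper's adversarial objective is a separate matter.

```python
import numpy as np

def latitude_weighted_l1(pred, target):
    """L1 error on an equirectangular image, weighted per row by
    cos(latitude) (WS-PSNR-style weights). Rows near the poles,
    which are stretched by the projection, are down-weighted."""
    h, w = pred.shape[:2]
    lat = (np.arange(h) + 0.5) / h * np.pi - np.pi / 2   # row centers in [-pi/2, pi/2]
    weights = np.cos(lat)[:, None]                        # (h, 1), broadcast over columns
    err = np.abs(pred - target)
    if err.ndim == 3:                                     # average over color channels
        err = err.mean(axis=2)
    return np.sum(weights * err) / (np.sum(weights) * w)
```

Because the weights are normalized, a uniform per-pixel error of e yields exactly e, matching an unweighted mean in the uniform case.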

Paper Details

Authors:
Aakanksha Rana, Aljosa Smolic
Submitted On:
30 September 2019 - 3:45am

Document Files

vsense_poster_template (3).pdf


[1] Aakanksha Rana, Aljosa Smolic, "Super-resolution of Omnidirectional Images Using Adversarial Learning", IEEE SigPort, 2019. [Online]. Available: http://sigport.org/4849. Accessed: Sep. 28, 2020.

Injective State-Image Mapping facilitates Visual Adversarial Imitation Learning


The growing use of virtual autonomous agents in applications like games and entertainment demands better control policies for natural-looking movements and actions. Unlike the conventional approach of hard-coding motion routines, we propose a deep learning method for obtaining control policies by directly mimicking raw video demonstrations. Previous methods in this domain rely on extracting low-dimensional features from expert videos followed by a separate hand-crafted reward estimation step.
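In adversarial imitation learning of this kind, the hand-crafted reward is replaced by one derived from a discriminator that tries to tell agent observations from expert demonstrations. A minimal GAIL-style reward is sketched below; the paper's exact formulation may differ.

```python
import numpy as np

def imitation_reward(d_logit):
    """Reward from a discriminator logit D(s): higher when the
    discriminator believes the observation came from the expert.
    Uses the common -log(1 - sigmoid(logit)) form from GAIL."""
    p_expert = 1.0 / (1.0 + np.exp(-np.asarray(d_logit, dtype=float)))
    return -np.log(1.0 - p_expert + 1e-8)
```

The reward is monotone in the logit: the more expert-like an observation, the larger the training signal the policy receives.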

Paper Details

Authors:
Daiki Kimura, Asim Munawar, Ryuki Tachibana
Submitted On:
24 September 2019 - 4:46pm

Document Files

mmps_final.pdf


[1] Daiki Kimura, Asim Munawar, Ryuki Tachibana, "Injective State-Image Mapping facilitates Visual Adversarial Imitation Learning", IEEE SigPort, 2019. [Online]. Available: http://sigport.org/4836. Accessed: Sep. 28, 2020.

Single-image rain removal via multi-scale cascading image generation


A novel single-image rain removal method is proposed based on multi-scale cascading image generation (MSCG). In particular, the proposed method consists of an encoder extracting multi-scale features from images and a decoder generating de-rained images with a cascading mechanism. The encoder ensembles convolutional neural networks with kernels of different sizes and integrates their outputs across scales.
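The multi-kernel encoder idea can be illustrated in one dimension: filter the same input at several kernel sizes and stack the results, one row per scale. The uniform averaging kernels here are placeholders for the learned 2-D convolutions in the paper.

```python
import numpy as np

def multiscale_features(signal, kernel_sizes=(3, 5, 7)):
    """Filter `signal` with averaging kernels of several sizes and
    stack the outputs -- the multi-scale analogue of one encoder
    branch per kernel size."""
    feats = []
    for k in kernel_sizes:
        kernel = np.ones(k) / k                  # placeholder for a learned kernel
        feats.append(np.convolve(signal, kernel, mode='same'))
    return np.stack(feats)                       # shape: (num_scales, len(signal))
```

Larger kernels blur over a wider context, so the stacked rows describe the same location at increasingly coarse scales.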

Paper Details

Authors:
Zheng Zhang, Yi Xu, He Wang, Bingbing Ni, Hongteng Xu
Submitted On:
22 September 2019 - 2:38pm

Document Files

Poster ICIP 2019 Paper #2542.pdf


[1] Zheng Zhang, Yi Xu, He Wang, Bingbing Ni, Hongteng Xu, "Single-image rain removal via multi-scale cascading image generation", IEEE SigPort, 2019. [Online]. Available: http://sigport.org/4817. Accessed: Sep. 28, 2020.

A NOVEL MONOCULAR DISPARITY ESTIMATION NETWORK WITH DOMAIN TRANSFORMATION AND AMBIGUITY LEARNING


Convolutional neural networks (CNNs) have shown state-of-the-art results for low-level computer vision problems such as stereo and monocular disparity estimation, but still have much room to improve their performance in terms of accuracy, number of parameters, etc. Recent works have uncovered the advantages of using an unsupervised scheme to train CNNs to estimate monocular disparity, where only the relatively easy-to-obtain stereo images are needed for training.
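The unsupervised signal in such schemes is typically photometric: warp one stereo view toward the other using the predicted disparity and penalize the difference. A minimal nearest-neighbour version is sketched below; the paper's actual loss terms may be richer (e.g. smoothness or left-right consistency).

```python
import numpy as np

def photometric_loss(left, right, disparity):
    """Warp `right` toward `left` with per-pixel horizontal disparity
    (x_right = x_left - d, nearest-neighbour sampling, clipped at the
    image border) and return the mean absolute photometric error."""
    h, w = left.shape
    cols = np.broadcast_to(np.arange(w), (h, w))
    src = np.clip(cols - np.round(disparity).astype(int), 0, w - 1)
    warped = np.take_along_axis(right, src, axis=1)
    return float(np.mean(np.abs(left - warped)))
```

When the predicted disparity matches the true horizontal shift between the views, the warped image aligns with the left view and the loss approaches zero away from the image border.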

Paper Details

Authors:
Munchurl Kim
Submitted On:
19 September 2019 - 8:16am

Document Files

Poster_1748.pdf


[1] Munchurl Kim, "A NOVEL MONOCULAR DISPARITY ESTIMATION NETWORK WITH DOMAIN TRANSFORMATION AND AMBIGUITY LEARNING", IEEE SigPort, 2019. [Online]. Available: http://sigport.org/4712. Accessed: Sep. 28, 2020.

MULTI TASK LEARNING OF DEPTH FROM TELE AND WIDE STEREO IMAGE PAIRS

Paper Details

Authors:
Mostafa El-Khamy, Xianzhi Du, Haoyu Ren, Jungwon Lee
Submitted On:
19 September 2019 - 2:46am

Document Files

Elkhamy_Telewide_depth_2019_ICIP.pdf


[1] Mostafa El-Khamy, Xianzhi Du, Haoyu Ren, Jungwon Lee, "MULTI TASK LEARNING OF DEPTH FROM TELE AND WIDE STEREO IMAGE PAIRS", IEEE SigPort, 2019. [Online]. Available: http://sigport.org/4709. Accessed: Sep. 28, 2020.
