
Machine Learning for Signal Processing

Sparse Modeling


Sparse Modeling in Image Processing and Deep Learning

Sparse approximation is a well-established theory with a profound impact on the fields of signal and image processing. In this talk we start by presenting this model and its features, and then turn to describe two of its special cases: convolutional sparse coding (CSC) and its multi-layered version (ML-CSC). Amazingly, as we will carefully show, ML-CSC provides a solid theoretical foundation to … deep learning.
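
For readers less familiar with the model, sparse approximation seeks a representation x with few non-zero entries such that y ≈ Dx for a given dictionary D. The sketch below (added here for context, not part of the talk) solves the l1-relaxed problem with plain ISTA in NumPy; the dimensions, the penalty lam, and the iteration count are arbitrary illustrative choices.

```python
import numpy as np

def ista(y, D, lam=0.1, n_iter=200):
    """Iterative soft-thresholding for min_x 0.5*||y - D x||_2^2 + lam*||x||_1."""
    L = np.linalg.norm(D, 2) ** 2                # Lipschitz constant of the gradient
    x = np.zeros(D.shape[1])
    for _ in range(n_iter):
        grad = D.T @ (D @ x - y)                 # gradient of the quadratic term
        z = x - grad / L                         # gradient step
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return x

# Toy example: recover a sparse vector from a random, unit-norm dictionary
rng = np.random.default_rng(0)
D = rng.standard_normal((64, 128))
D /= np.linalg.norm(D, axis=0)
x_true = np.zeros(128)
x_true[rng.choice(128, 5, replace=False)] = rng.standard_normal(5)
y = D @ x_true
x_hat = ista(y, D, lam=0.05)
print("non-zeros recovered:", np.sum(np.abs(x_hat) > 1e-3))
```

The convolutional and multi-layered variants discussed in the talk replace the dense dictionary D with convolutional (and stacked) dictionaries, but the underlying sparse pursuit is the same idea.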

Paper Details

Authors: Michael Elad
Submitted On: 22 December 2017 - 1:26pm

Document Files

ICIP_KeyNote_Talk_small size.pdf


Cite as: Michael Elad, "Sparse Modeling," IEEE SigPort, 2017. [Online]. Available: http://sigport.org/2260.

Poster: Generative-Discriminative Crop Type Identification using Satellite Images


Crop type identification refers to distinguishing a given crop from other land covers, an essential task in agricultural monitoring. Satellite images are a good data source for identifying different crops, since satellites capture a relatively wide area and rich spectral information. Based on prior knowledge of crop phenology, multi-temporal images are stacked to extract the growth patterns of different crops.
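
The temporal stacking step mentioned above can be pictured as concatenating each pixel's per-date spectral bands into a single feature vector describing its trajectory over the season. The following is a minimal NumPy illustration with made-up dimensions (T dates, B bands); it is not the authors' pipeline.

```python
import numpy as np

# Hypothetical input: T acquisition dates, each an H x W image with B spectral bands
T, H, W, B = 6, 128, 128, 4
images = np.random.rand(T, H, W, B)   # placeholder for co-registered satellite images

# Stack the temporal dimension into the feature axis: every pixel gets a
# T*B-dimensional vector capturing its spectral evolution through the season.
features = images.transpose(1, 2, 0, 3).reshape(H, W, T * B)
X = features.reshape(-1, T * B)       # (H*W, T*B) matrix for a pixel-wise classifier
print(X.shape)                        # (16384, 24)
```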

Paper Details

Authors: Nan Qiao, Yi Zhao, Ruei-Sung Lin, Bo Gong, Zhongxiang Wu, Mei Han, Jiashu Liu
Submitted On: 9 November 2019 - 7:23pm

Document Files

Poster: Generative-Discriminative Crop Type Identification using Satellite Images


Cite as: Nan Qiao, Yi Zhao, Ruei-Sung Lin, Bo Gong, Zhongxiang Wu, Mei Han, Jiashu Liu, "Poster: Generative-Discriminative Crop Type Identification using Satellite Images," IEEE SigPort, 2019. [Online]. Available: http://sigport.org/4942.

A deep network for single-snapshot direction of arrival estimation


This paper examines a deep feedforward network for beamforming with the single-snapshot sample covariance matrix (SCM). The conventional beamforming formulation, typically quadratic in the complex weight space, is reformulated as real and linear in the weight covariance and the SCM. The reformulated SCMs are used as input to a deep feedforward neural network (FNN) for two-source localization. Simulations demonstrate the effect of source incoherence and the performance in a noisy tracking example.
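
To make the input representation concrete: a single-snapshot SCM is the rank-one outer product of the array snapshot with itself, and stacking its real and imaginary parts gives a real-valued network input. The sketch below is an illustrative NumPy construction for a uniform linear array; the array size, spacing, and noise level are invented, and it is not the authors' exact preprocessing.

```python
import numpy as np

M = 8                                   # number of array elements (illustrative)
theta = np.deg2rad(20.0)                # source direction
d = 0.5                                 # element spacing in wavelengths

# Narrowband plane-wave snapshot for a uniform linear array, plus noise
steering = np.exp(1j * 2 * np.pi * d * np.arange(M) * np.sin(theta))
snapshot = steering + 0.1 * (np.random.randn(M) + 1j * np.random.randn(M))

# Single-snapshot sample covariance matrix (rank one)
scm = np.outer(snapshot, snapshot.conj())

# Real-valued feature vector for a feed-forward network:
# stack the real and imaginary parts of the (Hermitian) SCM
features = np.concatenate([scm.real.ravel(), scm.imag.ravel()])
print(features.shape)                   # (2 * M * M,) = (128,)
```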

Paper Details

Authors: Peter Gerstoft, Emma Ozanich, Haiqiang Niu
Submitted On: 28 October 2019 - 10:56am

Document Files

conference_poster_6.pdf


Cite as: Peter Gerstoft, Emma Ozanich, Haiqiang Niu, "A deep network for single-snapshot direction of arrival estimation," IEEE SigPort, 2019. [Online]. Available: http://sigport.org/4898.

Deep Reinforcement Learning Based Energy Beamforming for Powering Sensor Networks


We focus on a wireless sensor network powered by an energy beacon, where sensors send their measurements to the sink using the harvested energy. The aim of the system is to estimate an unknown signal over the area of interest as accurately as possible. We investigate optimal energy beamforming at the energy beacon and optimal transmit power allocation at the sensors under non-linear energy harvesting models. We use a deep reinforcement learning (RL) based approach in which multi-layer neural networks are utilized.
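
As a rough picture of the learning component, an RL agent in such a setup maps an observed state (for example, channel and battery information) to a beamforming decision through a multi-layer network. The PyTorch sketch below is a generic actor-network illustration with invented names and sizes; it is not the authors' architecture or training procedure.

```python
import torch
import torch.nn as nn

class BeamformingPolicy(nn.Module):
    """Toy actor: maps an observed state vector to complex beamforming weights."""
    def __init__(self, state_dim=16, n_antennas=4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, 2 * n_antennas),   # real and imaginary parts of the weights
        )
        self.n_antennas = n_antennas

    def forward(self, state):
        raw = self.net(state)
        w = torch.complex(raw[..., :self.n_antennas], raw[..., self.n_antennas:])
        # normalize to a unit-power beamforming vector (total transmit power constraint)
        return w / torch.linalg.vector_norm(w, dim=-1, keepdim=True)

policy = BeamformingPolicy()
state = torch.randn(1, 16)
print(policy(state).shape)   # torch.Size([1, 4])
```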

Paper Details

Authors: Ayca Ozcelikkale, Mehmet Koseoglu, Mani Srivastava, Anders Ahlen
Submitted On: 16 October 2019 - 8:16am

Document Files

2019MLSP_poster.pdf


Cite as: Ayca Ozcelikkale, Mehmet Koseoglu, Mani Srivastava, Anders Ahlen, "Deep Reinforcement Learning Based Energy Beamforming for Powering Sensor Networks," IEEE SigPort, 2019. [Online]. Available: http://sigport.org/4875.

DYNAMIC SYSTEM IDENTIFICATION FOR GUIDANCE OF STIMULATION PARAMETERS IN HAPTIC SIMULATION ENVIRONMENTS

Paper Details

Authors: Andac Demir, Safaa Eldeeb, Murat Akçakaya, Deniz Erdoğmuş
Submitted On: 14 October 2019 - 7:55am

Document Files

P_Demir_IEEEMLSP2019_Haptix.pdf


Cite as: Andac Demir, Safaa Eldeeb, Murat Akçakaya, Deniz Erdoğmuş, "Dynamic System Identification for Guidance of Stimulation Parameters in Haptic Simulation Environments," IEEE SigPort, 2019. [Online]. Available: http://sigport.org/4869.

Efficient Parameter Estimation for Semi-Continuous Data: An Application to Independent Component Analysis


Semi-continuous data have a point mass at zero and are continuous with positive support. Such data arise naturally in several real-life situations, such as signals in a blind source separation problem, daily rainfall at a location, and sales of durable goods, among many others. Therefore, efficient estimation of the underlying probability density function is of significant interest.
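
For context, a common way to write the density of a semi-continuous random variable (the paper may use a different parameterization) is as a mixture of a point mass at zero and a continuous density on the positive half-line:

```latex
% With probability \pi the observation is exactly zero; otherwise it is drawn
% from a continuous density g supported on (0, \infty).
f(x) = \pi\,\delta_0(x) + (1 - \pi)\, g(x)\,\mathbb{1}\{x > 0\},
\qquad 0 \le \pi \le 1 .
```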

Paper Details

Authors: Sai K. Popuri, Zois Boukouvalas
Submitted On: 13 October 2019 - 4:45pm

Document Files

MLSP_2019.pdf


Cite as: Sai K. Popuri, Zois Boukouvalas, "Efficient Parameter Estimation for Semi-Continuous Data: An Application to Independent Component Analysis," IEEE SigPort, 2019. [Online]. Available: http://sigport.org/4865.

VISUALIZING HIGH DIMENSIONAL DYNAMICAL PROCESSES

Paper Details

Authors: Andres F. Duque, Guy Wolf, Kevin R. Moon
Submitted On: 12 October 2019 - 2:07pm

Document Files

VISUALIZING_HIGH_DIMENSIONAL_DYNAMICAL_PROCESSES_poster.pdf


Cite as: Andres F. Duque, Guy Wolf, Kevin R. Moon, "Visualizing High Dimensional Dynamical Processes," IEEE SigPort, 2019. [Online]. Available: http://sigport.org/4860.

Deep Clustering based on a Mixture of Autoencoders


In this paper we propose a Deep Autoencoder Mixture Clustering (DAMIC) algorithm based on a mixture of deep autoencoders where each cluster is represented by an autoencoder. A clustering network transforms the data into another space and then selects one of the clusters. Next, the autoencoder associated with this cluster is used to reconstruct the data-point. The clustering algorithm jointly learns the nonlinear data representation and the set of autoencoders. The optimal clustering is found by minimizing the reconstruction loss of the mixture of autoencoder network.
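
The description above maps naturally onto a gating (clustering) network plus one autoencoder per cluster, trained jointly on a soft-assignment-weighted reconstruction loss. The PyTorch sketch below is one illustrative reading of that idea with invented layer sizes; it is not the authors' implementation.

```python
import torch
import torch.nn as nn

class MixtureOfAutoencoders(nn.Module):
    """Toy DAMIC-style model: a gating network plus one autoencoder per cluster."""
    def __init__(self, dim=20, hidden=10, n_clusters=3):
        super().__init__()
        self.gate = nn.Sequential(nn.Linear(dim, 32), nn.ReLU(),
                                  nn.Linear(32, n_clusters))
        self.autoencoders = nn.ModuleList([
            nn.Sequential(nn.Linear(dim, hidden), nn.ReLU(),
                          nn.Linear(hidden, dim))
            for _ in range(n_clusters)
        ])

    def forward(self, x):
        # soft cluster assignments and per-cluster reconstructions
        probs = torch.softmax(self.gate(x), dim=-1)                       # (batch, K)
        recons = torch.stack([ae(x) for ae in self.autoencoders], dim=1)  # (batch, K, dim)
        # expected reconstruction error under the soft assignment
        errors = ((recons - x.unsqueeze(1)) ** 2).mean(dim=-1)            # (batch, K)
        loss = (probs * errors).sum(dim=-1).mean()
        return loss, probs

model = MixtureOfAutoencoders()
x = torch.randn(64, 20)
loss, assignments = model(x)
loss.backward()                          # jointly trains the gate and the autoencoders
print(assignments.argmax(dim=-1)[:5])    # hard cluster labels for the first samples
```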

Paper Details

Authors: Shlomo E. Chazan, Sharon Gannot and Jacob Goldberger
Submitted On: 11 October 2019 - 9:39pm

Document Files

Deep Clustering based on a Mixture of Autoencoders


Cite as: Shlomo E. Chazan, Sharon Gannot and Jacob Goldberger, "Deep Clustering based on a Mixture of Autoencoders," IEEE SigPort, 2019. [Online]. Available: http://sigport.org/4859.

VAE/WGAN-BASED IMAGE REPRESENTATION LEARNING FOR POSE-PRESERVING SEAMLESS IDENTITY REPLACEMENT IN FACIAL IMAGES


We present a novel variational generative adversarial network (VGAN) based on Wasserstein loss to learn a latent representation from a face image that is invariant to identity but preserves head-pose information. This facilitates synthesis of a realistic face image with the same head pose as a given input image, but with a different identity. One application of this network is in privacy-sensitive scenarios; after identity replacement in an image, utility, such as head pose, can still be preserved.

Paper Details

Authors: Jiawei Chen, Janusz Konrad, Prakash Ishwar
Submitted On: 11 October 2019 - 4:41pm

Document Files

MLSP poster presentation


Cite as: Jiawei Chen, Janusz Konrad, Prakash Ishwar, "VAE/WGAN-Based Image Representation Learning for Pose-Preserving Seamless Identity Replacement in Facial Images," IEEE SigPort, 2019. [Online]. Available: http://sigport.org/4856.

Super-resolution of Omnidirectional Images Using Adversarial Learning


An omnidirectional image (ODI) enables viewers to look in every direction from a fixed point through a head-mounted display, providing a more immersive experience than a standard image. Designing immersive virtual reality systems with ODIs is challenging, as they require high-resolution content. In this paper, we study super-resolution for ODIs and propose an improved generative adversarial network based model, optimized to handle the artifacts that arise in the spherical observational space.

Paper Details

Authors: Aakanksha Rana, Aljosa Smolic
Submitted On: 30 September 2019 - 3:45am

Document Files

vsense_poster_template (3).pdf


Cite as: Aakanksha Rana, Aljosa Smolic, "Super-resolution of Omnidirectional Images Using Adversarial Learning," IEEE SigPort, 2019. [Online]. Available: http://sigport.org/4849.
