Neural network learning (MLR-NNLR)

UNIQUE: Unsupervised Image Quality Estimation


In this paper, we estimate perceived image quality using sparse representations obtained from generic image databases through an unsupervised learning approach. A color space transformation, mean subtraction, and a whitening operation enhance the descriptiveness of images by reducing spatial redundancy; a linear decoder obtains sparse representations; and a thresholding stage models suppression mechanisms in the visual system.
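The preprocessing-and-encoding pipeline described above can be sketched end to end. A minimal numpy version, assuming toy random patches and random (untrained) decoder weights in place of the learned ones from generic image databases:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for flattened image patches; the paper draws real
# patches from image databases and learns the decoder unsupervised.
patches = rng.normal(size=(100, 64))

# Mean subtraction per patch.
patches = patches - patches.mean(axis=1, keepdims=True)

# ZCA whitening to reduce spatial redundancy.
cov = patches.T @ patches / len(patches)
vals, vecs = np.linalg.eigh(cov)
zca = vecs @ np.diag(1.0 / np.sqrt(vals + 1e-5)) @ vecs.T
white = patches @ zca

# Linear decoder (weights here are random, not learned).
W = rng.normal(size=(64, 128))
codes = white @ W

# Thresholding as a crude suppression mechanism: zero weak activations.
sparse = np.where(np.abs(codes) > np.abs(codes).mean(), codes, 0.0)
```

The threshold choice here (the mean absolute activation) is arbitrary; the point is only the order of operations: decorrelate, encode linearly, then suppress.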

Paper Details

Authors: Dogancan Temel, Mohit Prabhushankar, and Ghassan Alregib
Submitted On: 1 March 2017 - 6:01pm


[1] Dogancan Temel, Mohit Prabhushankar, and Ghassan Alregib, "UNIQUE: Unsupervised Image Quality Estimation", IEEE SigPort, 2017. [Online]. Available: http://sigport.org/1560. Accessed: Apr. 25, 2019.

Detection of urban trees in multiple-source aerial data (optical, infrared, DSM)

Paper Details

Authors: Lionel Pibre, Marc Chaumont, Gérard Subsol, Dino Ienco, Mustapha Derras
Submitted On: 1 March 2017 - 10:31am

Document Files

poster.pdf


[1] Lionel Pibre, Marc Chaumont, Gérard Subsol, Dino Ienco, Mustapha Derras, "Detection of urban trees in multiple-source aerial data (optical, infrared, DSM)", IEEE SigPort, 2017. [Online]. Available: http://sigport.org/1556. Accessed: Apr. 25, 2019.

CLASSIFICATION OF UNDERWATER PIPELINE EVENTS USING DEEP CONVOLUTIONAL NEURAL NETWORKS


Automatic inspection of underwater pipelines is a task of growing importance for detecting a variety of events, including inner coating exposure and the presence of algae. Such inspections can benefit from machine learning techniques that accurately classify these occurrences. This article describes a deep convolutional neural network algorithm for the classification of underwater pipeline events. The neural network architecture and parameters that yield optimal classifier performance are selected.
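As a concrete illustration of what such a classifier computes, here is a minimal single-layer CNN forward pass in numpy. The 16x16 frame, random filters, and the three event classes are hypothetical stand-ins, not the architecture selected in the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def conv2d_valid(img, kernel):
    """Naive 'valid' 2-D cross-correlation."""
    kh, kw = kernel.shape
    out = np.empty((img.shape[0] - kh + 1, img.shape[1] - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

# Hypothetical 16x16 grayscale frame from an inspection video.
frame = rng.normal(size=(16, 16))

# Conv layer (4 random 3x3 filters) -> ReLU -> 2x2 max pooling.
filters = rng.normal(size=(4, 3, 3))
maps = np.stack([np.maximum(conv2d_valid(frame, f), 0.0) for f in filters])
pooled = maps.reshape(4, 7, 2, 7, 2).max(axis=(2, 4))

# Fully connected layer to 3 hypothetical classes
# (e.g. normal / coating exposure / algae).
W_fc = rng.normal(size=(3, pooled.size))
probs = softmax(W_fc @ pooled.ravel())
```

A trained network would learn the filter and output weights from labeled inspection frames; the structure of the forward pass is the same.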

Paper Details

Authors: José Gabriel R. C. Gomes
Submitted On: 28 February 2017 - 11:12am

Document Files

ICASSP Poster-2017_FelipePetraglia.pdf


[1] José Gabriel R. C. Gomes, "CLASSIFICATION OF UNDERWATER PIPELINE EVENTS USING DEEP CONVOLUTIONAL NEURAL NETWORKS", IEEE SigPort, 2017. [Online]. Available: http://sigport.org/1513. Accessed: Apr. 25, 2019.

SEQUENCE SEGMENTATION USING JOINT RNN AND STRUCTURED PREDICTION MODELS

Paper Details

Authors: Yossi Adi, Joseph Keshet, Emily Cibelli, Matthew Goldrick
Submitted On: 28 February 2017 - 7:54am

Document Files

icassp2017.pdf


[1] Yossi adi, Joseph Keshet, Emily Cibelli, Matthew Goldrick, "SEQUENCE SEGMENTATION USING JOINT RNN AND STRUCTURED PREDICTION MODELS", IEEE SigPort, 2017. [Online]. Available: http://sigport.org/1509. Accessed: Apr. 25, 2019.

MEMORY VISUALIZATION FOR GATED RECURRENT NEURAL NETWORKS IN SPEECH RECOGNITION


Recurrent neural networks (RNNs) have shown clear superiority in sequence modeling, particularly those with gated units, such as long short-term memory (LSTM) and the gated recurrent unit (GRU). However, the dynamic properties behind this remarkable performance remain unclear in many applications, e.g., automatic speech recognition (ASR). This paper employs visualization techniques to study the behavior of LSTM and GRU when performing speech recognition tasks.
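One way to expose the dynamics being visualized is to record gate and state values while running the recurrence. A minimal numpy GRU cell, with random weights and inputs purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_step(x, h, p):
    """One GRU step; returns the new state plus the gate values
    so they can be recorded and plotted."""
    z = sigmoid(p["Wz"] @ x + p["Uz"] @ h)          # update gate
    r = sigmoid(p["Wr"] @ x + p["Ur"] @ h)          # reset gate
    h_cand = np.tanh(p["Wh"] @ x + p["Uh"] @ (r * h))
    return (1.0 - z) * h + z * h_cand, z, r

d_in, d_h, T = 5, 8, 20
p = {k: rng.normal(scale=0.5,
                   size=(d_h, d_in if k.startswith("W") else d_h))
     for k in ("Wz", "Uz", "Wr", "Ur", "Wh", "Uh")}

h = np.zeros(d_h)
states, gates = [], []
for _ in range(T):
    h, z, r = gru_step(rng.normal(size=d_in), h, p)
    states.append(h.copy())
    gates.append(z.copy())

states = np.array(states)   # (T, d_h): the memory trajectory to plot
gates = np.array(gates)     # (T, d_h): update-gate activity over time
```

Plotting `states` and `gates` as heatmaps over time is essentially the kind of visualization the paper applies, there with trained LSTM/GRU models on speech.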

Paper Details

Authors: Zhiyuan Tang, Ying Shi, Dong Wang, Yang Feng, Shiyue Zhang
Submitted On: 4 March 2017 - 3:11am

Document Files

icassp17_visual.pdf


[1] Zhiyuan Tang, Ying Shi, Dong Wang, Yang Feng, Shiyue Zhang, "MEMORY VISUALIZATION FOR GATED RECURRENT NEURAL NETWORKS IN SPEECH RECOGNITION", IEEE SigPort, 2017. [Online]. Available: http://sigport.org/1463. Accessed: Apr. 25, 2019.

Onsager-corrected deep learning for sparse linear inverse problems


Deep learning has gained great popularity due to its widespread success on many inference problems. We consider the application of deep learning to the sparse linear inverse problem encountered in compressive sensing, where one seeks to recover a sparse signal from a few noisy linear measurements. In this paper, we propose two novel neural-network architectures that decouple prediction errors across layers in the same way that the approximate message passing (AMP) algorithms decouple them across iterations: through Onsager correction.
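The Onsager correction is the extra `b * z` term in the residual update of AMP. A toy numpy AMP recovery, where the sensing matrix, problem sizes, and threshold rule are chosen only for illustration (the paper unrolls such iterations into learned network layers):

```python
import numpy as np

rng = np.random.default_rng(0)

def soft(x, tau):
    """Soft-thresholding shrinkage."""
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

# Sparse ground truth and compressive measurements y = A x + noise.
N, M, K = 200, 100, 10
A = rng.normal(size=(M, N)) / np.sqrt(M)
x_true = np.zeros(N)
x_true[rng.choice(N, K, replace=False)] = rng.normal(size=K)
y = A @ x_true + 0.01 * rng.normal(size=M)

x = np.zeros(N)
z = y.copy()
for _ in range(30):
    tau = 1.5 * np.linalg.norm(z) / np.sqrt(M)  # noise-tracking threshold
    x = soft(x + A.T @ z, tau)
    b = np.count_nonzero(x) / M                 # mean of eta' times N/M
    z = y - A @ x + b * z                       # Onsager-corrected residual

rel_err = np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
```

Dropping the `b * z` term turns this into plain iterative soft thresholding, whose per-iteration errors are correlated across iterations; the correction is what keeps the effective noise approximately Gaussian and decoupled.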

Paper Details

Authors: Mark Borgerding and Philip Schniter
Submitted On: 6 December 2016 - 10:30am

Document Files

slides


[1] Mark Borgerding and Philip Schniter, "Onsager-corrected deep learning for sparse linear inverse problems", IEEE SigPort, 2016. [Online]. Available: http://sigport.org/1359. Accessed: Apr. 25, 2019.

COMPLEX INPUT CONVOLUTIONAL NEURAL NETWORKS FOR WIDE ANGLE SAR ATR


To date, automatic target recognition (ATR) techniques in synthetic aperture radar (SAR) imagery have largely focused on features that use only the magnitude part of SAR’s complex valued magnitude-plus-phase history. While such techniques are often very successful, they inherently ignore the significant amount of discriminatory information available in the phase. This paper describes a method for exploiting the complex information for ATR by using a convolutional neural network (CNN) that accepts fully complex input features.
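The paper's network uses fully complex-valued inputs. A common, simpler way to retain phase information, shown here only for contrast with magnitude-only features, is to feed the real and imaginary parts as separate channels:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical complex-valued SAR chip (magnitude plus phase history).
chip = rng.normal(size=(32, 32)) + 1j * rng.normal(size=(32, 32))

# Magnitude-only input: one channel, phase discarded entirely.
mag_input = np.abs(chip)[None, :, :]

# Two-channel real/imaginary input: phase is preserved.
reim_input = np.stack([chip.real, chip.imag])

# The phase is recoverable from the two-channel form, not from magnitude.
phase = np.angle(reim_input[0] + 1j * reim_input[1])
```

Whether the network then operates with real arithmetic on the two channels or, as in this paper, with fully complex arithmetic, the input representation is what determines whether the phase's discriminatory information is available at all.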

Paper Details

Authors: Michael Wilmanski, Chris Kreucher, Alfred Hero
Submitted On: 11 December 2016 - 2:48pm

Document Files

presentation


[1] Michael Wilmanski, Chris Kreucher, Alfred Hero, "COMPLEX INPUT CONVOLUTIONAL NEURAL NETWORKS FOR WIDE ANGLE SAR ATR", IEEE SigPort, 2016. [Online]. Available: http://sigport.org/1351. Accessed: Apr. 25, 2019.

Multi-Task Joint-Learning for Robust Voice Activity Detection


Model-based VAD approaches have been widely used and have achieved success in practice. These approaches usually cast VAD as a frame-level classification problem and employ statistical classifiers, such as Gaussian Mixture Models (GMM) or Deep Neural Networks (DNN), to assign a speech/silence label to each frame. Due to the frame-independence assumption in classification, the VAD results tend to be fragile. To address this problem, this paper proposes a new structured multi-frame prediction DNN approach to improve the segment-level
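A simple way to move beyond fully frame-independent decisions is to give the classifier a multi-frame window. A numpy sketch of context stacking, where the 13-dimensional features and window size are arbitrary and this is not the paper's structured-prediction model:

```python
import numpy as np

def stack_context(frames, left=2, right=2):
    """Concatenate each frame with its neighbors so the classifier
    sees a window of frames rather than a single frame."""
    T, d = frames.shape
    padded = np.pad(frames, ((left, right), (0, 0)), mode="edge")
    return np.stack([padded[t:t + left + right + 1].ravel()
                     for t in range(T)])

feats = np.random.default_rng(0).normal(size=(50, 13))  # e.g. MFCC frames
X = stack_context(feats)   # (50, 65): each row is a 5-frame window
```

Each row of `X` would be one input to the frame classifier; structured prediction goes further by also coupling the output labels across frames.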

Paper Details

Authors: Yanmin Qian, Kai Yu
Submitted On: 15 October 2016 - 3:51am

Document Files

zhuang-iscslp16-slides.pdf


[1] Yanmin Qian, Kai Yu, "Multi-Task Joint-Learning for Robust Voice Activity Detection", IEEE SigPort, 2016. [Online]. Available: http://sigport.org/1220. Accessed: Apr. 25, 2019.

Lecture ICASSP 2016 Pierre Laffitte


This presentation introduces a deep learning model that classifies the audio scene in the subway environment, with the final goal of detecting screams and shouts for surveillance purposes. The model combines a Deep Belief Network and a Deep Neural Network (generatively pre-trained within the DBN framework and fine-tuned discriminatively within the DNN framework) and is trained on a novel database of pseudo-real signals collected in the Paris metro.

Paper Details

Authors: Pierre Laffitte, David Sodoyer, Laurent Girin, Charles Tatkeu
Submitted On: 23 March 2016 - 10:01am

Document Files

ICASSP Lecture.pdf


[1] Pierre Laffitte, David Sodoyer, Laurent Girin, Charles Tatkeu, "Lecture ICASSP 2016 Pierre Laffitte", IEEE SigPort, 2016. [Online]. Available: http://sigport.org/991. Accessed: Apr. 25, 2019.

Supervised Subspace Learning based on Deep Randomized Networks


In this paper, we propose a supervised subspace learning method that exploits the rich representation power of deep feedforward networks. In order to derive a fast yet efficient learning scheme, we employ deep randomized neural networks, which have recently been shown to provide a good compromise between training speed and performance.
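The randomized-network idea can be illustrated with a hidden layer whose weights are drawn once and never trained, with only the output layer solved in closed form. A toy sketch using ridge regression on random tanh features (not the paper's subspace-learning criterion):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy two-class data in 10-D; the label depends on x0 + x1.
X = rng.normal(size=(100, 10))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
T = np.eye(2)[y]                       # one-hot targets

# Random hidden layer: weights drawn once, never trained.
W = rng.normal(size=(10, 50))
H = np.tanh(X @ W)

# Only the output weights are learned, in closed form (ridge regression).
lam = 1e-2
beta = np.linalg.solve(H.T @ H + lam * np.eye(50), H.T @ T)

train_acc = ((H @ beta).argmax(axis=1) == y).mean()
```

Because the expensive part reduces to one linear solve, training is fast; the trade-off is that the hidden representation is generic rather than task-adapted, which is the compromise the abstract refers to.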

Paper Details

Authors: Alexandros Iosifidis, Moncef Gabbouj
Submitted On: 22 March 2016 - 3:47am

Document Files

ICASSP2016_POSTER.pdf


[1] Alexandros Iosifidis, Moncef Gabbouj, "Supervised Subspace Learning based on Deep Randomized Networks", IEEE SigPort, 2016. [Online]. Available: http://sigport.org/961. Accessed: Apr. 25, 2019.
