
Neural network learning (MLR-NNLR)

Face Aging with Conditional Generative Adversarial Networks

Paper Details

Authors: Grigory Antipov, Moez Baccouche, Jean-Luc Dugelay
Submitted On: 15 September 2017 - 9:03am

Document Files

Antipov_ICIP_2017_updated.pdf

Cite:
[1] Grigory Antipov, Moez Baccouche, Jean-Luc Dugelay, "Face Aging with Conditional Generative Adversarial Networks", IEEE SigPort, 2017. [Online]. Available: http://sigport.org/1818. Accessed: Oct. 17, 2018.

LEARNING TO GENERATE IMAGES WITH PERCEPTUAL SIMILARITY METRICS (SUPPLEMENTARY MATERIAL)

Paper Details

Authors: Jake Snell, Karl Ridgeway, Renjie Liao, Brett D. Roads, Michael C. Mozer, Richard S. Zemel
Submitted On: 14 September 2017 - 10:45pm

Document Files

supplementary

Cite:
[1] Jake Snell, Karl Ridgeway, Renjie Liao, Brett D. Roads, Michael C. Mozer, Richard S. Zemel, "LEARNING TO GENERATE IMAGES WITH PERCEPTUAL SIMILARITY METRICS (SUPPLEMENTARY MATERIAL)", IEEE SigPort, 2017. [Online]. Available: http://sigport.org/1797. Accessed: Oct. 17, 2018.

A Scalable Convolutional Neural Network for Task-specified Scenarios via Knowledge Distillation


In this paper, we explore the redundancy in convolutional neural networks, which scales with the complexity of vision tasks. Considering that many front-end visual systems are interested in only a limited range of visual targets, removing task-specified network redundancy can enable a wide range of potential applications. We propose a task-specified knowledge distillation algorithm to derive a simplified model with a pre-set computation cost and minimized accuracy loss, which suits resource-constrained front-end systems well.
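
The paper's task-specified algorithm is not reproduced here, but the underlying idea of training a compact student network to mimic a larger teacher can be sketched with a generic knowledge-distillation loss. The snippet below is a minimal PyTorch sketch under that assumption; the temperature `T` and mixing weight `alpha` are illustrative hyperparameters, not values from the paper.

```python
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.5):
    """Generic distillation objective: match the teacher's softened outputs
    while still fitting the ground-truth labels."""
    # Softened teacher distribution and student log-probabilities at temperature T
    soft_targets = F.softmax(teacher_logits / T, dim=1)
    log_student = F.log_softmax(student_logits / T, dim=1)
    # The KL term is scaled by T^2 to keep gradient magnitudes comparable
    soft_loss = F.kl_div(log_student, soft_targets, reduction="batchmean") * (T * T)
    hard_loss = F.cross_entropy(student_logits, labels)
    return alpha * soft_loss + (1 - alpha) * hard_loss
```

In a task-specified setting like the one described above, the student's width and depth would be chosen to meet the pre-set computation budget before distillation is applied.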

Paper Details

Authors: Mengnan Shi, Fei Qin, Qixiang Ye, Zhenjun Han, Jianbin Jiao
Submitted On: 12 March 2017 - 8:20pm

Document Files

ICASSP2017 poster 80cm.pdf

Cite:
[1] Mengnan Shi, Fei Qin, Qixiang Ye, Zhenjun Han, Jianbin Jiao, "A Scalable Convolutional Neural Network for Task-specified Scenarios via Knowledge Distillation", IEEE SigPort, 2017. [Online]. Available: http://sigport.org/1751. Accessed: Oct. 17, 2018.

NOISY OBJECTIVE FUNCTIONS BASED ON THE F-DIVERGENCE

Paper Details

Authors: Markus Nussbaum-Thom, Ralf Schlueter, Vaibhava Goel, Hermann Ney
Submitted On: 8 March 2017 - 3:58pm

Document Files

main.pdf

Cite:
[1] Markus Nussbaum-Thom, Ralf Schlueter, Vaibhava Goel, Hermann Ney, "NOISY OBJECTIVE FUNCTIONS BASED ON THE F-DIVERGENCE", IEEE SigPort, 2017. [Online]. Available: http://sigport.org/1710. Accessed: Oct. 17, 2018.

Automatic Speech Emotion Recognition Using Recurrent Neural Networks with Local Attention


Automatic emotion recognition from speech is a challenging task that relies heavily on the effectiveness of the speech features used for classification. In this work, we study the use of deep learning to automatically discover emotionally relevant features from speech. We show that a deep recurrent neural network can learn both the short-time, frame-level acoustic features that are emotionally relevant and an appropriate temporal aggregation of those features into a compact utterance-level representation.
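
As an illustration of the general idea (not the authors' exact model), the PyTorch sketch below pools frame-level recurrent features into an utterance-level vector with a learned attention weighting. The input feature dimension, hidden size, number of emotion classes, and the use of an LSTM are all assumptions made for the example.

```python
import torch
import torch.nn as nn

class AttentiveEmotionRNN(nn.Module):
    """Frame-level RNN followed by attention pooling into an utterance-level vector."""
    def __init__(self, feat_dim=40, hidden_dim=128, num_emotions=4):
        super().__init__()
        self.rnn = nn.LSTM(feat_dim, hidden_dim, batch_first=True)
        self.attn = nn.Linear(hidden_dim, 1)          # scores each frame
        self.classifier = nn.Linear(hidden_dim, num_emotions)

    def forward(self, frames):                        # frames: (batch, time, feat_dim)
        h, _ = self.rnn(frames)                       # (batch, time, hidden_dim)
        weights = torch.softmax(self.attn(h).squeeze(-1), dim=1)   # attention over time
        utterance = (weights.unsqueeze(-1) * h).sum(dim=1)         # weighted time-pooling
        return self.classifier(utterance)             # emotion logits
```

The attention weights let emotionally salient frames dominate the utterance-level representation instead of averaging all frames uniformly.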

Paper Details

Authors: Emad Barsoum, Cha Zhang
Submitted On: 15 March 2017 - 12:33am

Document Files

icassp2017.pptx

icassp2017.pdf

Cite:
[1] Emad Barsoum, Cha Zhang, "Automatic Speech Emotion Recognition Using Recurrent Neural Networks with Local Attention", IEEE SigPort, 2017. [Online]. Available: http://sigport.org/1667. Accessed: Oct. 17, 2018.

CHARACTER-LEVEL LANGUAGE MODELING WITH HIERARCHICAL RECURRENT NEURAL NETWORKS


Recurrent neural network (RNN) based character-level language models (CLMs) are by nature well suited to modeling out-of-vocabulary words. However, their performance is generally much worse than that of word-level language models (WLMs), since CLMs need to consider a longer history of tokens to properly predict the next one. We address this problem by proposing hierarchical RNN architectures, which consist of multiple modules operating at different timescales.
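
A rough PyTorch sketch of the two-timescale idea follows: a fast character-level cell runs at every step, while a slower context cell is updated only where a word-boundary flag is set (e.g., at whitespace characters). The module sizes, the GRU cells, and the boundary signal are illustrative assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

class TwoTimescaleCLM(nn.Module):
    """Character-level LM with a fast character cell and a slow context cell
    that is updated only where a word-boundary flag is set."""
    def __init__(self, vocab_size, char_dim=128, ctx_dim=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, char_dim)
        self.char_cell = nn.GRUCell(char_dim + ctx_dim, char_dim)
        self.ctx_cell = nn.GRUCell(char_dim, ctx_dim)
        self.out = nn.Linear(char_dim, vocab_size)
        self.char_dim, self.ctx_dim = char_dim, ctx_dim

    def forward(self, chars, boundary):
        # chars: (batch, time) character ids; boundary: (batch, time), 1 at word boundaries
        batch, time = chars.shape
        h = chars.new_zeros(batch, self.char_dim, dtype=torch.float)
        c = chars.new_zeros(batch, self.ctx_dim, dtype=torch.float)
        logits = []
        for t in range(time):
            x = self.embed(chars[:, t])
            h = self.char_cell(torch.cat([x, c], dim=1), h)   # fast path: every character
            mask = boundary[:, t].float().unsqueeze(1)
            c = mask * self.ctx_cell(h, c) + (1 - mask) * c   # slow path: boundaries only
            logits.append(self.out(h))
        return torch.stack(logits, dim=1)                     # (batch, time, vocab_size)
```

The slow context state carries word-level history across many character steps, which is what lets the character-level predictor see further back without a very long fast-timescale history.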

Paper Details

Authors: Kyuyeon Hwang, Wonyong Sung
Submitted On: 6 March 2017 - 3:05am

Document Files

poster.pdf

Cite:
[1] Kyuyeon Hwang, Wonyong Sung, "CHARACTER-LEVEL LANGUAGE MODELING WITH HIERARCHICAL RECURRENT NEURAL NETWORKS", IEEE SigPort, 2017. [Online]. Available: http://sigport.org/1645. Accessed: Oct. 17, 2018.

UNIQUE: Unsupervised Image Quality Estimation


In this paper, we estimate perceived image quality using sparse representations obtained from generic image databases through an unsupervised learning approach. A color space transformation, a mean subtraction, and a whitening operation are used to enhance descriptiveness of images by reducing spatial redundancy; a linear decoder is used to obtain sparse representations; and a thresholding stage is used to formulate suppression mechanisms in a visual system.
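
The preprocessing and sparse-coding stages lend themselves to a short NumPy sketch. The steps below (mean subtraction, ZCA whitening, a linear projection, and thresholding) mirror the pipeline described in the abstract; the color-space transform is omitted, and the projection through assumed pre-trained weights merely stands in for the learned linear decoder, with the threshold value chosen for illustration.

```python
import numpy as np

def zca_whiten(patches, eps=1e-5):
    """Mean-subtract and ZCA-whiten flattened image patches of shape (N, D)."""
    patches = patches - patches.mean(axis=0)
    cov = np.cov(patches, rowvar=False)
    U, S, _ = np.linalg.svd(cov)
    whitener = U @ np.diag(1.0 / np.sqrt(S + eps)) @ U.T
    return patches @ whitener

def sparse_code(whitened, decoder_weights, threshold=0.025):
    """Project whitened patches through (assumed pre-trained) linear weights and
    zero out small activations, emulating a suppression/thresholding stage."""
    activations = whitened @ decoder_weights          # (N, num_atoms)
    activations[np.abs(activations) < threshold] = 0.0
    return activations
```

Comparing the thresholded activations of a reference image and a distorted image (for example, via correlation) would then yield an unsupervised quality estimate in the spirit of the abstract.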

Paper Details

Authors: Dogancan Temel, Mohit Prabhushankar, Ghassan AlRegib
Submitted On: 1 March 2017 - 6:01pm

Document Files

UNIQUE: Unsupervised Image Quality Estimation

Cite:
[1] Dogancan Temel, Mohit Prabhushankar, Ghassan AlRegib, "UNIQUE: Unsupervised Image Quality Estimation", IEEE SigPort, 2017. [Online]. Available: http://sigport.org/1560. Accessed: Oct. 17, 2018.

Detection of urban trees in multiple-source aerial data (optical, infrared, DSM)

Paper Details

Authors: Lionel Pibre, Marc Chaumont, Gérard Subsol, Dino Ienco, Mustapha Derras
Submitted On: 1 March 2017 - 10:31am

Document Files

poster.pdf

Cite:
[1] Lionel Pibre, Marc Chaumont, Gérard Subsol, Dino Ienco, Mustapha Derras, "Detection of urban trees in multiple-source aerial data (optical, infrared, DSM)", IEEE SigPort, 2017. [Online]. Available: http://sigport.org/1556. Accessed: Oct. 17, 2018.

CLASSIFICATION OF UNDERWATER PIPELINE EVENTS USING DEEP CONVOLUTIONAL NEURAL NETWORKS


Automatic inspection of underwater pipelines has become a task of growing importance for detecting a variety of events, including inner coating exposure and the presence of algae. Such inspections can benefit from machine learning techniques in order to accurately classify these occurrences. This article describes a deep convolutional neural network algorithm for the classification of underwater pipeline events. The network architecture and parameters that result in optimal classifier performance are selected.
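
The selected architecture is not reproduced here; the snippet below is only a minimal PyTorch sketch of the kind of CNN classifier such an inspection system might use. The layer sizes and the number of event classes are assumptions, not the configuration chosen in the paper.

```python
import torch.nn as nn

class PipelineEventCNN(nn.Module):
    """Small CNN that maps an RGB inspection frame to event-class logits."""
    def __init__(self, num_classes=4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),   # global average pooling over spatial dims
            nn.Flatten(),
            nn.Linear(32, num_classes),
        )

    def forward(self, x):              # x: (batch, 3, H, W)
        return self.classifier(self.features(x))
```

In practice, the depth, filter counts, and input resolution of such a network would be tuned against the inspection data, which is the architecture/parameter selection the abstract refers to.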

Paper Details

Authors: José Gabriel R. C. Gomes
Submitted On: 28 February 2017 - 11:12am

Document Files

ICASSP Poster-2017_FelipePetraglia.pdf

Cite:
[1] José Gabriel R. C. Gomes, "CLASSIFICATION OF UNDERWATER PIPELINE EVENTS USING DEEP CONVOLUTIONAL NEURAL NETWORKS", IEEE SigPort, 2017. [Online]. Available: http://sigport.org/1513. Accessed: Oct. 17, 2018.

SEQUENCE SEGMENTATION USING JOINT RNN AND STRUCTURED PREDICTION MODELS

Paper Details

Authors: Yossi Adi, Joseph Keshet, Emily Cibelli, Matthew Goldrick
Submitted On: 28 February 2017 - 7:54am

Document Files

icassp2017.pdf

Cite:
[1] Yossi Adi, Joseph Keshet, Emily Cibelli, Matthew Goldrick, "SEQUENCE SEGMENTATION USING JOINT RNN AND STRUCTURED PREDICTION MODELS", IEEE SigPort, 2017. [Online]. Available: http://sigport.org/1509. Accessed: Oct. 17, 2018.
