
ICASSP 2018

ICASSP is the world’s largest and most comprehensive technical conference focused on signal processing and its applications. The conference features world-class presentations by internationally renowned speakers, cutting-edge session topics, and excellent opportunities to network with like-minded professionals from around the world.

Cyborg Speech: Deep Multilingual Speech Synthesis for Generating Segmental Foreign Accent with Natural Prosody


We describe a new application of deep-learning-based speech synthesis, namely multilingual speech synthesis for generating controllable foreign accent. Specifically, we train a DBLSTM-based acoustic model on non-accented multilingual speech recordings from a speaker native in several languages. By copying durations and pitch contours from a pre-recorded utterance of the desired prompt, natural prosody is achieved. We call this paradigm "cyborg speech" as it combines human and machine speech parameters.
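The core "cyborg" step is transplanting human prosody onto machine segmental parameters. A minimal sketch of copying per-phone durations and a pitch contour from a human recording onto a synthesized phone sequence might look as follows (function names and data layout are illustrative assumptions, not the paper's implementation):

```python
def copy_prosody(synth_phones, human_durations, human_f0):
    """Replace synthetic per-phone durations with the human ones and
    sample the human F0 contour at each phone's midpoint, so the
    machine keeps the segments but the human supplies the prosody."""
    assert len(synth_phones) == len(human_durations)
    out, t = [], 0.0
    total = sum(human_durations)
    for phone, dur in zip(synth_phones, human_durations):
        # relative position of this phone's midpoint in the utterance
        mid = (t + dur / 2.0) / total
        idx = min(int(mid * len(human_f0)), len(human_f0) - 1)
        out.append({"phone": phone, "duration": dur, "f0": human_f0[idx]})
        t += dur
    return out
```

In practice the durations would come from a forced alignment of the human recording and the F0 contour from a pitch tracker; here both are given as plain lists.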

Paper Details

Authors:
Jaime Lorenzo-Trueba, Mariko Kondo, Junichi Yamagishi
Submitted On:
29 April 2018 - 1:59pm

Document Files

Cyborg Speech presentation slides


[1] Jaime Lorenzo-Trueba, Mariko Kondo, Junichi Yamagishi, "Cyborg Speech: Deep Multilingual Speech Synthesis for Generating Segmental Foreign Accent with Natural Prosody", IEEE SigPort, 2018. [Online]. Available: http://sigport.org/3187. Accessed: Aug. 13, 2020.

Invisible Geo-Location Signature in a Single Image


Geo-tagging images of interest is increasingly important to law enforcement, national security, and journalism. Many images today do not carry location tags that are trustworthy and resilient to tampering, and landmark-based visual clues may not be present in every image, especially those taken indoors. In this paper, we exploit an invisible signature from the power grid, the Electric Network Frequency (ENF) signal, which can be inherently recorded in a sensing stream at the time of capture and carries useful location information.
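The first step of any ENF-based analysis is estimating the mains frequency over time from the recording. A toy sketch, assuming a single dominant tone near the nominal mains frequency, searches a fine grid of candidate frequencies around it per frame (this is a generic matched-filter estimator, not the paper's method):

```python
import math

def enf_track(signal, fs, nominal=60.0, span=0.1, step=0.01, frame=None):
    """Per-frame ENF estimate: among candidate frequencies around the
    nominal mains frequency, pick the one whose complex exponential
    correlates most strongly with the frame (a DFT evaluated off the
    FFT grid, giving sub-bin frequency resolution)."""
    frame = frame or fs  # default: one-second frames
    track = []
    for start in range(0, len(signal) - frame + 1, frame):
        seg = signal[start:start + frame]
        best_f, best_mag = nominal, -1.0
        f = nominal - span
        while f <= nominal + span + 1e-9:
            re = sum(x * math.cos(2 * math.pi * f * n / fs)
                     for n, x in enumerate(seg))
            im = sum(x * math.sin(2 * math.pi * f * n / fs)
                     for n, x in enumerate(seg))
            mag = re * re + im * im
            if mag > best_mag:
                best_mag, best_f = mag, f
            f += step
        track.append(round(best_f, 2))
    return track
```

A real pipeline would first band-pass filter around the mains frequency (or a harmonic) and handle recordings where the ENF trace is weak or absent.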

Paper Details

Submitted On:
1 March 2019 - 1:27pm

Document Files

ICASSP18_presentation_v2.pdf


[1] "Invisible Geo-Location Signature in a Single Image", IEEE SigPort, 2018. [Online]. Available: http://sigport.org/3186. Accessed: Aug. 13, 2020.

LEARNED FORENSIC SOURCE SIMILARITY FOR UNKNOWN CAMERA MODELS


Information about an image's source camera model is important knowledge in many forensic investigations. In this paper we propose a system that compares two image patches to determine if they were captured by the same camera model. To do this, we first train a CNN-based feature extractor to output generic, high-level features which encode information about the source camera model of an image patch. Then, we learn a similarity measure that maps pairs of these features to a score indicating whether the two image patches were captured by the same or different camera models.
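The comparison stage takes two feature vectors and outputs a same-or-different decision. The paper learns this mapping with a network; as a fixed stand-in, a cosine similarity with a threshold illustrates the interface (the feature vectors are assumed to come from the trained extractor, and the threshold value is purely illustrative):

```python
def pairwise_similarity(feat_a, feat_b):
    """Cosine similarity between two camera-model feature vectors."""
    dot = sum(a * b for a, b in zip(feat_a, feat_b))
    na = sum(a * a for a in feat_a) ** 0.5
    nb = sum(b * b for b in feat_b) ** 0.5
    return dot / (na * nb)

def same_camera_model(feat_a, feat_b, threshold=0.9):
    """Decide whether two patches share a source camera model by
    thresholding the similarity score."""
    return pairwise_similarity(feat_a, feat_b) >= threshold
```

The learned similarity measure in the paper replaces the fixed cosine metric, which is what lets the system generalize to camera models unseen during training.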

Paper Details

Authors:
Owen Mayer, Mathew C. Stamm
Submitted On:
27 April 2018 - 12:45pm

Document Files

poster pdf


[1] Owen Mayer, Mathew C. Stamm, "LEARNED FORENSIC SOURCE SIMILARITY FOR UNKNOWN CAMERA MODELS", IEEE SigPort, 2018. [Online]. Available: http://sigport.org/3185. Accessed: Aug. 13, 2020.

Effective Noise Removal and Unified Model of Hybrid Feature Space Optimization for Automated Cardiac Anomaly Detection using Phonocardiogram Signals


In this paper, we present completely automated cardiac anomaly detection for remote screening of cardiovascular abnormality using Phonocardiogram (PCG), or heart sound, signals. Even though the PCG contains significant and vital cardiac health information and the signature of cardiac abnormalities, the presence of substantial noise prevents reliably effective analysis of the cardiac condition. Our proposed method intelligently identifies and eliminates noisy PCG signals and subsequently detects pathological abnormality. We further present a unified model of our hybrid feature selection method.
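A noise-rejection front end of this kind gates segments before classification. A deliberately crude sketch flags a PCG segment as noisy when high-frequency energy dominates (the paper's detector is far more sophisticated; the ratio measure and threshold here are illustrative assumptions):

```python
def is_noisy(segment, ratio_threshold=0.5):
    """Crude PCG quality gate: compare the energy of the first
    difference (a cheap high-pass residual) to the total energy.
    Heart sounds are low-frequency, so a large ratio suggests
    broadband noise rather than cardiac activity."""
    total = sum(x * x for x in segment)
    diff = sum((b - a) ** 2 for a, b in zip(segment, segment[1:]))
    return total == 0 or diff / total > ratio_threshold
```

Only segments that pass such a gate would be forwarded to feature extraction and the abnormality classifier.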

Paper Details

Authors:
Arijit Ukil, Soma Bnadyopadhyay, Chetanya Puri, Rituraj Singh, Arpan Pal
Submitted On:
27 April 2018 - 2:44am

Document Files

ICASSP_Paper_2030_final.pdf


[1] Arijit Ukil, Soma Bnadyopadhyay, Chetanya Puri, Rituraj Singh, Arpan Pal, "Effective Noise Removal and Unified Model of Hybrid Feature Space Optimization for Automated Cardiac Anomaly Detection using Phonocardiogram Signals", IEEE SigPort, 2018. [Online]. Available: http://sigport.org/3184. Accessed: Aug. 13, 2020.

ACCELERATING RECURRENT NEURAL NETWORK LANGUAGE MODEL BASED ONLINE SPEECH RECOGNITION SYSTEM


This paper presents methods to accelerate recurrent neural network based language models (RNNLMs) for online speech recognition systems.
First, a lossy compression of the past hidden-layer outputs (the history vector), combined with caching, reduces the number of LM queries.
Next, RNNLM computations are deployed in a CPU-GPU hybrid manner that computes each layer of the model on the more advantageous platform.
The overhead added by data exchanges between CPU and GPU is compensated through a frame-wise batching strategy.
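The caching idea can be sketched concretely: lossily compress the history vector so that nearby RNN states collapse onto the same hashable key, then cache LM scores under that key (coarse scalar quantization stands in for whatever compression the paper uses; the CPU-GPU placement and frame-wise batching are not modeled):

```python
def quantize(vec, step=0.25):
    """Lossy compression of the RNN history vector: coarse scalar
    quantization maps nearby states to the same tuple key."""
    return tuple(round(v / step) for v in vec)

class CachedLM:
    """Cache LM scores keyed by (quantized history, word). Nearby
    histories share an entry, trading a little scoring accuracy for
    fewer RNNLM evaluations during decoding."""
    def __init__(self, score_fn):
        self.score_fn = score_fn   # the expensive RNNLM query
        self.cache = {}
        self.queries = self.misses = 0

    def score(self, history_vec, word):
        self.queries += 1
        key = (quantize(history_vec), word)
        if key not in self.cache:
            self.misses += 1
            self.cache[key] = self.score_fn(history_vec, word)
        return self.cache[key]
```

The quantization step size controls the accuracy/speed trade-off: a coarser step raises the hit rate but lets more dissimilar histories share a score.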

Paper Details

Authors:
Chiyoun Park, Namhoon Kim, Jaewon Lee
Submitted On:
26 April 2018 - 1:10am

Document Files

Icassp2018_KML_20180402_poster.pdf


[1] Chiyoun Park, Namhoon Kim, Jaewon Lee, "ACCELERATING RECURRENT NEURAL NETWORK LANGUAGE MODEL BASED ONLINE SPEECH RECOGNITION SYSTEM", IEEE SigPort, 2018. [Online]. Available: http://sigport.org/3183. Accessed: Aug. 13, 2020.

On the Comparison of Two Room Compensation / Dereverberation Methods Employing Active Acoustic Boundary Absorption


In this paper, we compare the performance of two active dereverberation techniques using a planar array of microphones and loudspeakers. The two techniques are based on a solution to the Kirchhoff-Helmholtz Integral Equation (KHIE). We adapt a Wave Field Synthesis (WFS) based method to real-time 3D dereverberation by using a low-latency pre-filter design. The use of First-Order Differential (FOD) models is also proposed as an alternative to monopoles with WFS, one that does not assume knowledge of the room geometry or primary sources.
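For reference, the interior Kirchhoff-Helmholtz Integral Equation that both techniques build on expresses the pressure at a point inside a source-free volume in terms of the pressure and its normal derivative on the boundary. This is the standard textbook form with the free-field Green's function, not a formula taken from the paper, and the signs depend on the convention chosen for the boundary normal:

```latex
p(\mathbf{r}) = \oint_S \left[
    G(\mathbf{r}\,|\,\mathbf{r}_s)\,\frac{\partial p(\mathbf{r}_s)}{\partial n}
  - p(\mathbf{r}_s)\,\frac{\partial G(\mathbf{r}\,|\,\mathbf{r}_s)}{\partial n}
  \right] \mathrm{d}S,
\qquad
G(\mathbf{r}\,|\,\mathbf{r}_s) = \frac{e^{-jk\,|\mathbf{r}-\mathbf{r}_s|}}{4\pi\,|\mathbf{r}-\mathbf{r}_s|}.
```

Active boundary absorption amounts to driving the secondary sources on S so that the reverberant field re-entering the volume is cancelled.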

Paper Details

Authors:
Christian Ritz, W. Bastiaan Kleijn
Submitted On:
26 April 2018 - 12:55am

Document Files

Poster presentation


[1] Christian Ritz, W. Bastiaan Kleijn, "On the Comparison of Two Room Compensation / Dereverberation Methods Employing Active Acoustic Boundary Absorption", IEEE SigPort, 2018. [Online]. Available: http://sigport.org/3182. Accessed: Aug. 13, 2020.

Discriminative Clustering with Cardinality Constraints


Clustering is widely used for exploratory data analysis in a variety of applications. Traditionally, clustering is studied as an unsupervised task where no human input is provided. A recent trend in clustering is to leverage user-provided side information to better infer the clustering structure in data. In this paper, we propose a probabilistic graphical model that allows the user to provide as input the desired cluster sizes, namely cardinality constraints. Our model also incorporates a flexible mechanism to control the crispness of the clusters.
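To see what a hard cardinality constraint does, a greedy capacity-respecting assignment is a useful toy: each point goes to its nearest center that still has room, with the most confident points assigned first. The paper instead encodes the constraint inside a probabilistic graphical model; this sketch only illustrates the constraint's effect on assignments:

```python
def assign_with_cardinality(points, centers, sizes):
    """Assign each point to a center subject to per-cluster capacities
    (sizes). Points with the smallest best-center distance are
    assigned first, so forced reassignments hit ambiguous points."""
    def d2(p, c):
        return sum((a - b) ** 2 for a, b in zip(p, c))
    order = sorted(range(len(points)),
                   key=lambda i: min(d2(points[i], c) for c in centers))
    capacity = list(sizes)
    labels = [None] * len(points)
    for i in order:
        ranked = sorted(range(len(centers)),
                        key=lambda k: d2(points[i], centers[k]))
        for k in ranked:
            if capacity[k] > 0:
                labels[i] = k
                capacity[k] -= 1
                break
    return labels
```

When the requested sizes disagree with the data's natural grouping, points near cluster boundaries are the ones pushed to their second-choice cluster, which is exactly the behavior a soft probabilistic model smooths out.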

Paper Details

Authors:
Anh T. Pham, Raviv Raich, and Xiaoli Z. Fern
Submitted On:
25 April 2018 - 2:00pm

Document Files

Discriminative Clustering with Cardinality Constraint_ICASSP2018_latest.pdf


[1] Anh T. Pham, Raviv Raich, and Xiaoli Z. Fern, "Discriminative Clustering with Cardinality Constraints", IEEE SigPort, 2018. [Online]. Available: http://sigport.org/3181. Accessed: Aug. 13, 2020.

JOINT BAYESIAN ESTIMATION OF TIME-VARYING LP PARAMETERS AND EXCITATION FOR SPEECH


We consider the joint estimation of time-varying linear prediction (TVLP) filter coefficients and excitation signal parameters for the analysis of long-term speech segments. Traditional approaches to TVLP estimation only assume a linear expansion of the coefficients in a set of known basis functions. However, the excitation signal is also time-varying, which affects the estimation of the TVLP filter parameters. In this paper, we propose a Bayesian approach that incorporates the nature of the excitation signal and also adapts regularization of the filter parameters.
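The basis expansion the abstract refers to writes each filter coefficient as a_k(n) = Σ_m c[k][m]·φ_m(n). A sketch with a simple polynomial basis shows the mechanics (the basis choice and the sign convention of the predictor are illustrative; the paper's Bayesian estimation of c is not shown):

```python
def tvlp_coeff(c, n, N):
    """Evaluate time-varying LP coefficients
    a_k(n) = sum_m c[k][m] * (n/N)**m, i.e. a linear expansion of
    each coefficient in a polynomial basis over the segment."""
    u = n / N
    return [sum(cm * u ** m for m, cm in enumerate(ck)) for ck in c]

def tvlp_predict(x, c):
    """One-step TVLP prediction x_hat(n) = -sum_k a_k(n) * x(n-1-k),
    using the coefficients evaluated at each time index."""
    N, K = len(x), len(c)
    pred = []
    for n in range(K, N):
        a = tvlp_coeff(c, n, N)
        pred.append(-sum(a[k] * x[n - 1 - k] for k in range(K)))
    return pred
```

With constant basis weights this reduces to ordinary LP; the higher-order basis terms are what let the spectral envelope drift smoothly across a long segment.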

Paper Details

Submitted On:
25 April 2018 - 9:22am

Document Files

poster.pdf


[1] "JOINT BAYESIAN ESTIMATION OF TIME-VARYING LP PARAMETERS AND EXCITATION FOR SPEECH", IEEE SigPort, 2018. [Online]. Available: http://sigport.org/3180. Accessed: Aug. 13, 2020.

Multiple-input neural network-based residual echo suppression


A residual echo suppressor (RES) aims to suppress the residual echo in the output of an acoustic echo canceler (AEC). Spectral-based RES approaches typically estimate the magnitude spectra of the near-end speech and the residual echo from a single input, that is either the far-end speech or the echo computed by the AEC, and derive the RES filter coefficients accordingly. These single inputs do not always suffice to discriminate the near-end speech from the remaining echo.
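Once the near-end speech and residual echo magnitude spectra are estimated (in the paper, by a neural network fed with several inputs: far-end speech, the AEC's echo estimate, and the AEC output), the RES filter itself is a per-bin gain. A Wiener-like sketch of that final filtering stage, with an assumed gain floor to limit speech distortion:

```python
def res_gain(near_mag, resid_mag, floor=0.1):
    """Per-frequency-bin RES gain: attenuate bins where the estimated
    residual echo dominates the estimated near-end speech, with a
    floor so near-end speech is never fully suppressed."""
    gains = []
    for s, e in zip(near_mag, resid_mag):
        g = (s * s) / (s * s + e * e) if (s or e) else 1.0
        gains.append(max(g, floor))
    return gains
```

The gains would be applied to the AEC output's short-time spectrum before resynthesis; the quality of the whole RES rests on how well the two magnitude estimates are learned.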

Paper Details

Authors:
Guillaume Carbajal, Romain Serizel, Emmanuel Vincent, Eric Humbert
Submitted On:
25 April 2018 - 5:13am

Document Files

posterICASSP_CARBAJAL.pdf


[1] Guillaume Carbajal, Romain Serizel, Emmanuel Vincent, Eric Humbert, "Multiple-input neural network-based residual echo suppression", IEEE SigPort, 2018. [Online]. Available: http://sigport.org/3178. Accessed: Aug. 13, 2020.

Deep learning for predicting image memorability


Memorability of media content such as images and videos has recently become an important research subject in computer vision. This paper presents our computational model for predicting image memorability, which is based on a deep learning architecture designed for a classification task. We exploit both convolutional neural network (CNN)-based visual features and semantic features related to image captioning. We train and test our model on the large-scale memorability benchmark dataset LaMem.
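The fusion of visual and semantic features can be sketched as a late-fusion readout: concatenate the two feature vectors and map them through a learned function to a score in [0, 1]. A linear-plus-sigmoid stand-in shows the interface (the paper uses a deep model; the weights here are placeholders for whatever training would produce):

```python
import math

def predict_memorability(visual_feat, semantic_feat, w_vis, w_sem, bias=0.0):
    """Late-fusion sketch: a linear readout over CNN visual features
    and caption-derived semantic features, squashed to [0, 1], the
    usual range of memorability scores."""
    score = bias
    score += sum(w * v for w, v in zip(w_vis, visual_feat))
    score += sum(w * s for w, s in zip(w_sem, semantic_feat))
    return 1.0 / (1.0 + math.exp(-score))
```

In the real model both feature extractors and the readout are trained jointly or fine-tuned on LaMem's human-annotated memorability scores.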

Paper Details

Authors:
Hammad Squalli-Houssaini, Ngoc Q. K. Duong, Marquant Gwenaelle and Claire-Helene Demarty
Submitted On:
25 April 2018 - 4:30am

Document Files

Presentation_final.pdf


[1] Hammad Squalli-Houssaini, Ngoc Q. K. Duong, Marquant Gwenaelle and Claire-Helene Demarty, "Deep learning for predicting image memorability", IEEE SigPort, 2018. [Online]. Available: http://sigport.org/3176. Accessed: Aug. 13, 2020.
