
ICASSP 2020

ICASSP is the world’s largest and most comprehensive technical conference focused on signal processing and its applications. The ICASSP 2020 conference will feature world-class presentations by internationally renowned speakers and cutting-edge session topics, and will provide a fantastic opportunity to network with like-minded professionals from around the world.

A Generalized Framework for Domain Adaptation of PLDA in Speaker Recognition


This paper proposes a generalized framework for domain adaptation of probabilistic linear discriminant analysis (PLDA) in speaker recognition. The framework not only encompasses several existing supervised and unsupervised domain adaptation methods but also enables more flexible use of data available in different domains. In particular, we introduce two new techniques: (1) correlation-alignment-based interpolation and (2) covariance regularization.
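The paper's exact formulation is not reproduced in this abstract, but the two named ingredients have simple building blocks. A minimal numpy sketch, under the assumption that adaptation interpolates out-of-domain and in-domain covariance estimates (as applied to PLDA within- and between-class matrices) and that regularization shrinks a covariance toward the identity; both function names and the toy matrices are illustrative only:

```python
import numpy as np

def interpolate_covariances(cov_out, cov_in, alpha):
    """Linearly interpolate an out-of-domain and an in-domain
    covariance estimate; alpha = 1.0 keeps only in-domain statistics."""
    return (1.0 - alpha) * cov_out + alpha * cov_in

def regularize_covariance(cov, lam):
    """Shrink a covariance estimate toward the identity matrix,
    one simple form of covariance regularization."""
    return (1.0 - lam) * cov + lam * np.eye(cov.shape[0])

# Toy 2x2 example with hand-picked matrices.
cov_out = np.array([[1.0, 0.0], [0.0, 1.0]])
cov_in = np.array([[3.0, 0.2], [0.2, 3.0]])
cov_adapted = interpolate_covariances(cov_out, cov_in, alpha=0.5)
cov_smoothed = regularize_covariance(cov_adapted, lam=0.1)
```

In practice the interpolation weight and shrinkage amount would be tuned on in-domain development trials.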

Paper Details

Authors:
Qiongqiong Wang, Koji Okabe, Kong Aik Lee, Takafumi Koshinaka
Submitted On:
20 May 2020 - 8:49pm

Document Files

Presentation material


[1] Qiongqiong Wang, Koji Okabe, Kong Aik Lee, Takafumi Koshinaka, "A Generalized Framework for Domain Adaptation of PLDA in Speaker Recognition", IEEE SigPort, 2020. [Online]. Available: http://sigport.org/5422. Accessed: Jun. 06, 2020.

Fast and High-Quality Singing Voice Synthesis System based on Convolutional Neural Networks


This paper describes singing voice synthesis based on convolutional neural networks (CNNs). Singing voice synthesis systems based on deep neural networks (DNNs) have recently been proposed and are improving the naturalness of synthesized singing voices. Because singing voices are a rich form of expression, a powerful technique is required to model them accurately. In the proposed technique, long-term dependencies of singing voices are modeled by CNNs.

Paper Details

Authors:
Kazuhiro Nakamura, Shinji Takaki, Kei Hashimoto, Keiichiro Oura, Yoshihiko Nankaku, Keiichi Tokuda
Submitted On:
20 May 2020 - 8:26pm

Document Files

ICASSP2020_slide_20200417b.pdf


[1] Kazuhiro Nakamura, Shinji Takaki, Kei Hashimoto, Keiichiro Oura, Yoshihiko Nankaku, Keiichi Tokuda, "Fast and High-Quality Singing Voice Synthesis System based on Convolutional Neural Networks", IEEE SigPort, 2020. [Online]. Available: http://sigport.org/5421. Accessed: Jun. 06, 2020.

Sparse Directed Graph Learning for Head Movement Prediction in 360 Video Streaming


High-definition 360 videos encoded in fine quality are typically too large to stream in their entirety over bandwidth (BW)-constrained networks. One popular remedy is to interactively extract and send only the spatial sub-region corresponding to a viewer's current field-of-view (FoV) in a head-mounted display (HMD), for more BW-efficient streaming. Because of the non-negligible round-trip-time (RTT) delay between server and client, accurate head movement prediction that foretells a viewer's future FoVs is essential.
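The paper's sparse directed-graph predictor is not detailed in this abstract; to make the prediction problem concrete, here is the naive constant-velocity baseline that learned methods aim to improve on. A sketch under assumed units (yaw in degrees, prediction horizon in frames covering the RTT); all names are illustrative:

```python
import numpy as np

def predict_yaw(history, horizon):
    """Constant-velocity extrapolation of viewer yaw (degrees):
    predict `horizon` frames ahead from the last two samples."""
    velocity = history[-1] - history[-2]
    return history[-1] + horizon * velocity

# Viewer turning steadily right at 2 degrees per frame.
yaw_history = np.array([10.0, 12.0, 14.0, 16.0])
predicted = predict_yaw(yaw_history, horizon=3)  # 22.0 degrees
```

Such a baseline fails on abrupt head turns, which is precisely where richer models of head-motion dynamics pay off.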

Paper Details

Authors:
Gene Cheung, Patrick Le Callet, Jack Z. G. Tan
Submitted On:
20 May 2020 - 7:49pm

Document Files

2069.pdf


[1] Gene Cheung, Patrick Le Callet, Jack Z. G. Tan, "Sparse Directed Graph Learning for Head Movement Prediction in 360 Video Streaming", IEEE SigPort, 2020. [Online]. Available: http://sigport.org/5420. Accessed: Jun. 06, 2020.

Time Difference of Arrival Estimation from Frequency-sliding Generalized Cross-Correlations Using Convolutional Neural Networks


Interest in deep learning methods for solving traditional signal processing tasks has been growing steadily in recent years.

Paper Details

Authors:
Luca Comanducci, Maximo Cobos, Fabio Antonacci, Augusto Sarti
Submitted On:
20 May 2020 - 3:02pm

Document Files

Presentation


[1] Luca Comanducci, Maximo Cobos, Fabio Antonacci, Augusto Sarti, "Time Difference of Arrival Estimation from Frequency-sliding Generalized Cross-Correlations Using Convolutional Neural Networks", IEEE SigPort, 2020. [Online]. Available: http://sigport.org/5419. Accessed: Jun. 06, 2020.

MULTI IMAGE DEPTH FROM DEFOCUS NETWORK WITH BOUNDARY CUE FOR DUAL APERTURE CAMERA


In this paper, we estimate depth information using two defocused images from a dual aperture camera. Recent advances in deep learning techniques have increased the accuracy of depth estimation. In addition, methods that use a defocused image, in which objects are blurred according to their distance from the camera, have been widely studied. We further improve the accuracy of depth estimation by training the network on two images with different degrees of depth-of-field.

Paper Details

Authors:
Gwangmo Song, Yumee Kim, Kukjin Chun, Kyoung Mu Lee
Submitted On:
20 May 2020 - 11:55am

Document Files

ICASSP_MIDFD_PPT.pdf


[1] Gwangmo Song, Yumee Kim, Kukjin Chun, Kyoung Mu Lee, "MULTI IMAGE DEPTH FROM DEFOCUS NETWORK WITH BOUNDARY CUE FOR DUAL APERTURE CAMERA", IEEE SigPort, 2020. [Online]. Available: http://sigport.org/5418. Accessed: Jun. 06, 2020.

EMET: EMBEDDINGS FROM MULTILINGUAL-ENCODER TRANSFORMER FOR FAKE NEWS DETECTION


In the last few years, social media networks have changed human life experience and behavior by breaking down communication barriers and allowing ordinary people to actively produce multimedia content on a massive scale. As a result, information dissemination on social media platforms has become increasingly common. However, misinformation propagates with the same ease and velocity as real news, and it can cause irreversible damage to individuals or to society at large.

Paper Details

Authors:
Stephane Schwarz, Antônio Theóphilo, Anderson Rocha
Submitted On:
20 May 2020 - 11:36am

Document Files

ICASSP.pdf


[1] Stephane Schwarz, Antônio Theóphilo, Anderson Rocha, "EMET: EMBEDDINGS FROM MULTILINGUAL-ENCODER TRANSFORMER FOR FAKE NEWS DETECTION", IEEE SigPort, 2020. [Online]. Available: http://sigport.org/5417. Accessed: Jun. 06, 2020.

Translation of a Higher Order Ambisonics Sound Scene Based on Parametric Decomposition


This paper presents a novel 3DoF+ system that allows a listener to navigate, i.e., change position, within scene-based spatial audio content beyond the sweet spot of a Higher Order Ambisonics recording. It is one of the first such systems based on sound captured at a single spatial position. The system uses a parametric decomposition of the recorded sound field. For the synthesis, only coarse distance information about the sources is needed as side information, not their exact number.

Paper Details

Authors:
Andreas Behler, Peter Jax
Submitted On:
20 May 2020 - 10:32am

Document Files

handout.pdf


[1] Andreas Behler, Peter Jax, "Translation of a Higher Order Ambisonics Sound Scene Based on Parametric Decomposition", IEEE SigPort, 2020. [Online]. Available: http://sigport.org/5414. Accessed: Jun. 06, 2020.

Curriculum learning for speech emotion recognition from crowdsourced labels


This study introduces a method for designing a machine-learning curriculum that maximizes the efficiency of training deep neural networks (DNNs) for speech emotion recognition. Previous studies on other machine-learning problems have shown the benefits of training a classifier with a curriculum in which samples are presented in gradually increasing levels of difficulty. For speech emotion recognition, the challenge is to establish a natural order of difficulty in the training set from which to create the curriculum.
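The core mechanics of curriculum learning described above reduce to ordering training samples by a difficulty score. A minimal sketch, assuming (hypothetically, since the abstract does not specify the scoring function) that difficulty is measured by disagreement among crowdsourced emotion labels for each utterance:

```python
import numpy as np

def curriculum_order(samples, difficulty):
    """Return samples sorted from easiest to hardest, so that a
    training loop can present them in increasing difficulty."""
    return [samples[i] for i in np.argsort(difficulty)]

# Hypothetical difficulty scores, e.g. annotator disagreement
# on the emotion label of each utterance.
utterances = ["utt1", "utt2", "utt3", "utt4"]
difficulty = np.array([0.9, 0.1, 0.5, 0.3])
ordered = curriculum_order(utterances, difficulty)
```

A full curriculum schedule would then grow the training pool over epochs, starting from the easiest prefix of this ordering.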

Paper Details

Authors:
Reza Lotfian, Carlos Busso
Submitted On:
20 May 2020 - 9:43am

Document Files

Slides of paper


[1] Reza Lotfian, Carlos Busso, "Curriculum learning for speech emotion recognition from crowdsourced labels", IEEE SigPort, 2020. [Online]. Available: http://sigport.org/5408. Accessed: Jun. 06, 2020.

DNN-BASED SPEECH RECOGNITION FOR GLOBALPHONE LANGUAGES


This paper describes new reference benchmark results based on hybrid Hidden Markov Model and Deep Neural Networks (HMM-DNN) for the GlobalPhone (GP) multilingual text and speech database. GP is a multilingual database of high-quality read speech with corresponding transcriptions and pronunciation dictionaries in more than 20 languages. Moreover, we provide new results for five additional languages, namely, Amharic, Oromo, Tigrigna, Wolaytta, and Uyghur.

Paper Details

Authors:
Martha Yifiru Tachbelie, Ayimunishagu Abulimiti, Solomon Teferra Abate, Tanja Schultz
Submitted On:
20 May 2020 - 9:12am

Document Files

ICASSP2020_DNN4GlobalPhone_Paper5018_modified.pdf


[1] Martha Yifiru Tachbelie, Ayimunishagu Abulimiti, Solomon Teferra Abate, Tanja Schultz, "DNN-BASED SPEECH RECOGNITION FOR GLOBALPHONE LANGUAGES", IEEE SigPort, 2020. [Online]. Available: http://sigport.org/5407. Accessed: Jun. 06, 2020.

Semi-Supervised Optimal Transport Methods for Detecting Anomalies


Building upon advances in optimal transport and anomaly detection, we propose a generalization of an unsupervised and automatic method for detecting significant deviations from reference signals. Unlike most existing approaches to anomaly detection, our method is built on a non-parametric framework that exploits optimal transport to estimate deviation from an observed distribution.
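To illustrate the idea of scoring deviation from a reference distribution with optimal transport (not the paper's actual method, which the abstract does not spell out), here is the simplest special case: the empirical 1-Wasserstein distance between two equal-size 1-D samples, which reduces to comparing sorted values:

```python
import numpy as np

def wasserstein_1d(x, y):
    """Empirical 1-Wasserstein distance between two equal-size 1-D
    samples: the mean absolute difference of their sorted values."""
    return np.mean(np.abs(np.sort(x) - np.sort(y)))

rng = np.random.default_rng(0)
reference = rng.normal(0.0, 1.0, 1000)  # reference signal samples
nominal = rng.normal(0.0, 1.0, 1000)    # behaves like the reference
deviant = rng.normal(3.0, 1.0, 1000)    # shifted distribution

score_nominal = wasserstein_1d(nominal, reference)
score_deviant = wasserstein_1d(deviant, reference)
```

Thresholding such a transport cost yields an anomaly detector; unequal sample sizes or multivariate signals require the more general OT machinery the paper builds on.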

Paper Details

Authors:
Amina Alaoui-Belghiti, Sylvain Chevallier, Eric Monacelli, Guillaume Bao, Eric Azabou
Submitted On:
20 May 2020 - 8:36am

Document Files

Presentation slides


[1] Amina Alaoui-Belghiti, Sylvain Chevallier, Eric Monacelli, Guillaume Bao, Eric Azabou, "Semi-Supervised Optimal Transport Methods for Detecting Anomalies", IEEE SigPort, 2020. [Online]. Available: http://sigport.org/5406. Accessed: Jun. 06, 2020.
