
Neural network learning (MLR-NNLR)

INVESTIGATING LABEL NOISE SENSITIVITY OF CONVOLUTIONAL NEURAL NETWORKS FOR FINE GRAINED AUDIO SIGNAL LABELLING


We measure the effect of small amounts of systematic and random label noise caused by slightly misaligned ground-truth labels in a fine-grained audio signal labelling task. The task we choose to demonstrate these effects on is also known as framewise polyphonic transcription or note-quantized multi-f0 estimation, and transforms a monaural audio signal into a sequence of note indicator labels. It will be shown that even slight misalignments have clearly apparent effects, demonstrating a great sensitivity of convolutional neural networks to this kind of label noise.
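
The two noise types are easy to reproduce. The sketch below (our own illustration, not the authors' code) perturbs a binary piano-roll label matrix either systematically, by shifting every note track with the same frame offset, or randomly, with an independent offset per track; the function name and the frame-level label representation are assumptions.

```python
import numpy as np

def misalign_labels(labels, max_shift, systematic=True, rng=None):
    """Simulate label noise from misaligned ground truth.

    labels: (num_frames, num_notes) binary piano-roll matrix.
    max_shift: misalignment magnitude in frames.
    systematic=True shifts every note track by the same offset;
    systematic=False draws an independent offset per note track.
    """
    rng = rng or np.random.default_rng()
    num_frames, num_notes = labels.shape
    if systematic:
        shifts = np.full(num_notes, max_shift)
    else:
        shifts = rng.integers(-max_shift, max_shift + 1, size=num_notes)
    noisy = np.zeros_like(labels)
    for note, s in enumerate(shifts):
        noisy[:, note] = np.roll(labels[:, note], s)
        # zero out the frames that wrapped around instead of carrying them over
        if s > 0:
            noisy[:s, note] = 0
        elif s < 0:
            noisy[s:, note] = 0
    return noisy
```

Training the same network on `labels` and on `misalign_labels(labels, k)` for small `k` is the kind of controlled comparison the abstract describes.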

Paper Details

Submitted On: 23 April 2018 - 9:01pm

Document Files

large_poster_96x48.pdf (101 downloads)

[1] , "INVESTIGATING LABEL NOISE SENSITIVITY OF CONVOLUTIONAL NEURAL NETWORKS FOR FINE GRAINED AUDIO SIGNAL LABELLING", IEEE SigPort, 2018. [Online]. Available: http://sigport.org/3156. Accessed: Dec. 12, 2018.

VR IQA NET: Deep Virtual Reality Image Quality Assessment using Adversarial Learning


In this paper, we propose a novel virtual reality image quality assessment (VR IQA) with adversarial learning for omnidirectional images. To take into account the characteristics of omnidirectional images, we devise deep networks comprising a novel quality score predictor and a human perception guider. The proposed quality score predictor automatically predicts the quality score of a distorted image using latent spatial and positional features.
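
The paper's actual networks are deep models tailored to omnidirectional imagery, so the toy MLPs below only illustrate the adversarial predictor-vs-guider training pattern the abstract describes: the guider learns to distinguish human ratings from predicted ones, while the predictor learns to regress the score and to fool the guider. All layer sizes, the 64x64 input, and the way image and score are paired at the guider input are our assumptions.

```python
import torch
import torch.nn as nn

# Stand-ins for the two networks: the predictor maps an image to a scalar
# quality score; the guider judges whether an (image, score) pair carries a
# human rating (label 1) or a predicted one (label 0).
predictor = nn.Sequential(nn.Flatten(),
                          nn.Linear(3 * 64 * 64, 256), nn.ReLU(), nn.Linear(256, 1))
guider = nn.Sequential(nn.Linear(3 * 64 * 64 + 1, 256), nn.ReLU(),
                       nn.Linear(256, 1), nn.Sigmoid())
opt_p = torch.optim.Adam(predictor.parameters(), lr=1e-4)
opt_g = torch.optim.Adam(guider.parameters(), lr=1e-4)
bce = nn.BCELoss()

def train_step(images, human_scores):
    # images: (B, 3, 64, 64); human_scores: (B, 1) subjective ratings
    flat = images.flatten(1)
    pred_scores = predictor(images)
    # 1) guider: separate human-rated pairs from predictor-rated pairs
    d_real = guider(torch.cat([flat, human_scores], dim=1))
    d_fake = guider(torch.cat([flat, pred_scores.detach()], dim=1))
    loss_g = bce(d_real, torch.ones_like(d_real)) + bce(d_fake, torch.zeros_like(d_fake))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
    # 2) predictor: regress the human score while fooling the guider
    d_fool = guider(torch.cat([flat, pred_scores], dim=1))
    loss_p = (nn.functional.mse_loss(pred_scores, human_scores)
              + bce(d_fool, torch.ones_like(d_fool)))
    opt_p.zero_grad(); loss_p.backward(); opt_p.step()
```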

Paper Details

Authors: Heoun-taek Lim, Hak Gu Kim, and Yong Man Ro
Submitted On: 20 April 2018 - 8:00am

Document Files

VR IQA NET-ICASSP2018 (141 downloads)

[1] Heoun-taek Lim, Hak Gu Kim, and Yong Man Ro, "VR IQA NET: Deep Virtual Reality Image Quality Assessment using Adversarial Learning", IEEE SigPort, 2018. [Online]. Available: http://sigport.org/3102. Accessed: Dec. 12, 2018.

TasNet: time-domain audio separation network for real-time, single-channel speech separation


Robust speech processing in multi-talker environments requires effective speech separation. Recent deep learning systems have made significant progress toward solving this problem, yet it remains challenging, particularly in real-time, short-latency applications. Most methods attempt to construct a mask for each source in a time-frequency representation of the mixture signal, which is not necessarily an optimal representation for speech separation.
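
The abstract's criticism of time-frequency masks motivates TasNet's alternative: mask the output of a learned waveform encoder instead of an STFT. A minimal sketch of that encode-mask-decode pattern follows; layer sizes are illustrative, and the real separator is a much deeper network than the single 1x1 convolution used here.

```python
import torch
import torch.nn as nn

class TinyTasNet(nn.Module):
    """Sketch of TasNet's time-domain encode/mask/decode idea."""
    def __init__(self, num_sources=2, num_basis=128, win=40):
        super().__init__()
        # learned analysis filterbank takes the place of the STFT
        self.encoder = nn.Conv1d(1, num_basis, kernel_size=win, stride=win // 2)
        # toy separator: one non-negative mask per source (real one is much deeper)
        self.separator = nn.Sequential(
            nn.Conv1d(num_basis, num_basis * num_sources, kernel_size=1), nn.Sigmoid())
        # learned synthesis filterbank maps masked weights back to waveforms
        self.decoder = nn.ConvTranspose1d(num_basis, 1, kernel_size=win, stride=win // 2)
        self.num_sources = num_sources

    def forward(self, mixture):                # mixture: (B, 1, T)
        w = torch.relu(self.encoder(mixture))  # non-negative mixture weights
        masks = self.separator(w).chunk(self.num_sources, dim=1)
        return [self.decoder(w * m) for m in masks]
```

Because everything stays in the time domain, the short analysis windows keep latency low, which is the real-time angle of the title.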

Paper Details

Authors: Yi Luo, Nima Mesgarani
Submitted On: 19 April 2018 - 2:11pm

Document Files

ICASSP2018-poster.pdf (155 downloads)

[1] Yi Luo, Nima Mesgarani, "TasNet: time-domain audio separation network for real-time, single-channel speech separation", IEEE SigPort, 2018. [Online]. Available: http://sigport.org/2987. Accessed: Dec. 12, 2018.

Generative Adversarial Network and its Applications to Speech Signal and Natural Language Processing


A generative adversarial network (GAN) is a new idea for training models, in which a generator and a discriminator compete against each other to improve the generation quality. Recently, GANs have shown amazing results in image generation, and a large number and wide variety of new ideas, techniques, and applications have been developed on top of them. Although there are only a few successful cases so far, GANs have great potential to be applied to text and speech generation to overcome limitations of conventional methods.
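
The generator-discriminator competition is compact enough to show directly. This is a generic toy GAN on 1-D data (our example, not taken from the tutorial): each discriminator update pushes real samples toward label 1 and generated ones toward 0, and each generator update tries to make its samples score as real.

```python
import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 1))                # generator
D = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())  # discriminator
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(1000):
    real = torch.randn(64, 1) * 0.5 + 2.0     # "real" data drawn from N(2, 0.25)
    fake = G(torch.randn(64, 8))              # generated samples from noise
    # discriminator update: real -> 1, generated -> 0
    loss_d = bce(D(real), torch.ones(64, 1)) + bce(D(fake.detach()), torch.zeros(64, 1))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()
    # generator update: make the discriminator score fakes as real
    loss_g = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
```

The tutorial's text and speech applications replace the toy data and MLPs with sequence generators, but the two-player update loop is the same.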

Paper Details

Authors: Hung-yi Lee, Yu Tsao
Submitted On: 16 April 2018 - 2:25am

Document Files

all ICASSP 2018 (v3).pdf (1317 downloads)
all ICASSP 2018 (v3).pptx (483 downloads)

[1] Hung-yi Lee, Yu Tsao, "Generative Adversarial Network and its Applications to Speech Signal and Natural Language Processing", IEEE SigPort, 2018. [Online]. Available: http://sigport.org/2863. Accessed: Dec. 12, 2018.

MODEL BASED DEEP LEARNING IN FREE BREATHING, UNGATED, CARDIAC MRI RECOVERY


We introduce a model-based reconstruction framework with deep-learned (DL) and smoothness regularization on manifolds (STORM) priors to recover free-breathing and ungated (FBU) cardiac MRI from highly undersampled measurements. The DL priors enable us to exploit local correlations, while the STORM prior enables us to make use of the extensive non-local similarities that are subject dependent. We introduce a novel model-based formulation that allows the seamless integration of deep-learned priors into the reconstruction.
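
A hedged sketch of what such a composite objective can look like, in our own notation rather than the paper's exact cost:

```latex
\min_{X}\;
  \underbrace{\|\mathcal{A}(X) - b\|_2^2}_{\text{data consistency}}
  \;+\; \lambda_1 \underbrace{\|X - \mathcal{D}_w(X)\|^2}_{\text{deep-learned prior}}
  \;+\; \lambda_2 \underbrace{\operatorname{tr}\!\left(X L X^{H}\right)}_{\text{STORM manifold prior}}
```

Here $\mathcal{A}$ is the undersampled acquisition operator, $b$ the measured k-space data, $\mathcal{D}_w$ a learned denoiser capturing local correlations, and $L$ a subject-specific graph Laplacian encoding the non-local similarities; the exact weighting and operators are assumptions on our part.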

Paper Details

Authors: Sampurna Biswas, Hemant K. Aggarwal, Sunrita Poddar, Mathews Jacob
Submitted On: 14 April 2018 - 1:52pm

Document Files

icassp_poster_final.pptx (87 downloads)

[1] Sampurna Biswas, Hemant K. Aggarwal, Sunrita Poddar, Mathews Jacob, "MODEL BASED DEEP LEARNING IN FREE BREATHING, UNGATED, CARDIAC MRI RECOVERY", IEEE SigPort, 2018. [Online]. Available: http://sigport.org/2845. Accessed: Dec. 12, 2018.

AN UPPER BOUND ON THE REQUIRED SIZE OF A NEURAL NETWORK CLASSIFIER


There is growing interest in understanding the impact of architectural parameters such as depth, width, and the type of activation function on the performance of a neural network. We provide an upper bound on the number of free parameters a ReLU-type neural network needs to exactly fit the training data. Whether a net of this size generalizes to test data will be governed by the fidelity of the training data and the applicability of the principle of Occam's Razor.
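
Since the bound is stated in terms of free parameters, a small helper that counts them for a fully connected ReLU network makes the quantity concrete; the layer widths below are an arbitrary example, not from the paper.

```python
def relu_mlp_free_params(layer_widths):
    """Count free parameters (weights + biases) of a fully connected ReLU net."""
    return sum(n_in * n_out + n_out
               for n_in, n_out in zip(layer_widths[:-1], layer_widths[1:]))

# e.g. a 784-256-128-10 classifier has 235,146 free parameters, a number
# that can be compared against an upper bound of the kind the paper derives.
print(relu_mlp_free_params([784, 256, 128, 10]))  # 235146
```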

Paper Details

Authors: Hossein Valavi, Peter J. Ramadge
Submitted On: 19 April 2018 - 7:01pm

Document Files

AN UPPER-BOUND ON THE REQUIRED SIZE OF A NEURAL NETWORK CLASSIFIER (132 downloads)

[1] Hossein Valavi, Peter J. Ramadge, "AN UPPER BOUND ON THE REQUIRED SIZE OF A NEURAL NETWORK CLASSIFIER", IEEE SigPort, 2018. [Online]. Available: http://sigport.org/2843. Accessed: Dec. 12, 2018.

GENERALIZED END-TO-END LOSS FOR SPEAKER VERIFICATION


In this paper, we propose a new loss function called generalized end-to-end (GE2E) loss, which makes the training of speaker verification models more efficient than our previous tuple-based end-to-end (TE2E) loss function. Unlike TE2E, the GE2E loss function updates the network in a way that emphasizes examples that are difficult to verify at each step of the training process. Additionally, the GE2E loss does not require an initial stage of example selection.
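
The GE2E construction is well documented: each utterance embedding is scored against every speaker centroid, its own centroid being recomputed without the embedding itself, and a softmax over speakers yields the loss, which automatically emphasizes the hard, confusable cases. A minimal NumPy sketch of the softmax variant follows; the scale w and offset b are learned in the paper, so the values here are only illustrative initializations.

```python
import numpy as np

def ge2e_softmax_loss(emb, w=10.0, b=-5.0):
    """GE2E softmax loss sketch.

    emb: (N_speakers, M_utterances, D) array of L2-normalized embeddings, M >= 2.
    w, b: learned scale/offset of the cosine similarity (illustrative values).
    """
    N, M, _ = emb.shape
    centroids = emb.mean(axis=1)                          # one centroid per speaker
    loss = 0.0
    for j in range(N):
        for i in range(M):
            e = emb[j, i]
            # leave e out of its own speaker's centroid (stabilizes training)
            cands = centroids.copy()
            cands[j] = (emb[j].sum(axis=0) - e) / (M - 1)
            cands /= np.linalg.norm(cands, axis=1, keepdims=True)
            sim = w * cands @ e + b                       # similarity to every speaker
            loss += -sim[j] + np.log(np.exp(sim).sum())   # softmax over speakers
    return loss / (N * M)
```

Because the log-sum-exp term is dominated by the most similar wrong speakers, the gradient concentrates on the examples that are hardest to verify, as the abstract states.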

Paper Details

Authors: Li Wan, Quan Wang, Alan Papir, Ignacio Lopez Moreno
Submitted On: 18 April 2018 - 11:00am

Document Files

ICASSP 2018 GE2E.pptx (135 downloads)

[1] Li Wan, Quan Wang, Alan Papir, Ignacio Lopez Moreno, "GENERALIZED END-TO-END LOSS FOR SPEAKER VERIFICATION", IEEE SigPort, 2018. [Online]. Available: http://sigport.org/2778. Accessed: Dec. 12, 2018.

A Random Matrix and Concentration Inequalities framework for Neural Networks Analysis


Our article provides a theoretical analysis of the asymptotic performance of a regression or classification task performed by a simple random neural network. This result is obtained by leveraging a new framework at the crossroads of random matrix theory and the concentration-of-measure theory. This approach is of utmost interest for neural network analysis at large in that it naturally dismisses the difficulty induced by non-linear activation functions, as long as these are Lipschitz functions.
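
In this line of work, a "simple random neural network" usually means a fixed random hidden layer with a Lipschitz activation followed by a trained linear readout. The toy random-features ridge regression below (our example, not the paper's experiment) sets up exactly that object of study.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy instance: n samples in d dimensions, a random untrained hidden layer of
# the given width, and a ridge-regression readout trained on top of it.
n, d, width, lam = 500, 20, 1000, 1e-2
X = rng.standard_normal((n, d))
y = np.sign(X[:, 0] + 0.3 * rng.standard_normal(n))     # noisy linear labels

W = rng.standard_normal((d, width)) / np.sqrt(d)        # fixed random weights
H = np.maximum(X @ W, 0.0)                              # ReLU, a 1-Lipschitz activation
beta = np.linalg.solve(H.T @ H + lam * np.eye(width), H.T @ y)
print("train accuracy:", np.mean(np.sign(H @ beta) == y))
```

The framework characterizes how such a model behaves as n, d, and the width grow together, without needing the activation to be anything more than Lipschitz.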

Paper Details

Authors: Romain Couillet
Submitted On: 13 April 2018 - 5:38pm

Document Files

conc_measure_NN_ICASSP18(3).pdf (96 downloads)

[1] Romain Couillet, "A Random Matrix and Concentration Inequalities framework for Neural Networks Analysis", IEEE SigPort, 2018. [Online]. Available: http://sigport.org/2767. Accessed: Dec. 12, 2018.

Joint Verification-Identification in End-to-End Multi-Scale CNN Framework for Topic Identification


We present an end-to-end multi-scale Convolutional Neural Network (CNN) framework for topic identification (topic ID). In this work, we examined multi-scale CNNs for classification using raw text input. Topical word embeddings are learnt at multiple scales using parallel convolutional layers. A technique to integrate verification and identification objectives is examined to improve topic ID performance. With this approach, we achieved significant improvement on the identification task. We evaluated our framework on two contrasting datasets.
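
A minimal sketch of the multi-scale part: parallel convolutions with different kernel widths read the embedded text at several n-gram scales, and max-pooling over time feeds a topic classifier. The vocabulary size, widths, and kernel sizes below are illustrative, and the joint verification objective (e.g. a pairwise loss on the pooled features) is not shown.

```python
import torch
import torch.nn as nn

class MultiScaleTextCNN(nn.Module):
    """Parallel convolutions over word embeddings at several scales."""
    def __init__(self, vocab=30000, emb=128, channels=100,
                 scales=(3, 5, 7), num_topics=20):
        super().__init__()
        self.embed = nn.Embedding(vocab, emb)
        self.convs = nn.ModuleList(
            nn.Conv1d(emb, channels, kernel_size=k, padding=k // 2) for k in scales)
        self.fc = nn.Linear(channels * len(scales), num_topics)

    def forward(self, tokens):                    # tokens: (B, T) word ids
        x = self.embed(tokens).transpose(1, 2)    # (B, emb, T) for Conv1d
        # each branch sees a different n-gram scale; max-pool over time
        feats = [torch.relu(c(x)).max(dim=2).values for c in self.convs]
        return self.fc(torch.cat(feats, dim=1))   # topic logits (identification)
```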

Paper Details

Authors: Raghavendra Pappagari, Jesus Villalba, Najim Dehak
Submitted On: 13 April 2018 - 4:16pm

Document Files

Final.pdf (99 downloads)

[1] Raghavendra Pappagari, Jesus Villalba, Najim Dehak, "Joint Verification-Identification in End-to-End Multi-Scale CNN Framework for Topic Identification", IEEE SigPort, 2018. [Online]. Available: http://sigport.org/2758. Accessed: Dec. 12, 2018.

Speaker Invariant Feature Extraction for Zero-Resource Languages with Adversarial Learning


We introduce a novel type of representation learning to obtain a speaker invariant feature for zero-resource languages. Speaker adaptation is an important technique to build a robust acoustic model. For a zero-resource language, however, conventional model-dependent speaker adaptation methods such as constrained maximum likelihood linear regression are insufficient because the acoustic model of the target language is not accessible. Therefore, we introduce a model-independent feature extraction based on a neural network.
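
A standard way to realize such model-independent, adversarially trained speaker invariance is a gradient-reversal layer between the feature encoder and a speaker classifier, sketched below. The main-task head, the layer sizes, and the 40-dimensional input features are our assumptions, not the paper's configuration.

```python
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; flips the gradient sign in the backward pass."""
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)
    @staticmethod
    def backward(ctx, grad):
        return -ctx.lam * grad, None

encoder = nn.Sequential(nn.Linear(40, 256), nn.ReLU(), nn.Linear(256, 64))
phone_head = nn.Linear(64, 40)    # main task head (hypothetical phone targets)
spk_head = nn.Linear(64, 100)     # adversary: tries to identify the speaker

def losses(feats, phone_labels, spk_labels, lam=1.0):
    z = encoder(feats)
    main = nn.functional.cross_entropy(phone_head(z), phone_labels)
    # reversed gradients train the encoder to *remove* speaker information
    adv = nn.functional.cross_entropy(spk_head(GradReverse.apply(z, lam)), spk_labels)
    return main + adv
```

Minimizing `main + adv` trains the speaker classifier normally while the reversed gradient pushes the encoder toward features the adversary cannot exploit, which is what makes the extractor usable when no acoustic model of the target language exists.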

Paper Details

Authors: Taira Tsuchiya, Naohiro Tawara, Tetsuji Ogawa, Tetsunori Kobayashi
Submitted On: 13 April 2018 - 10:12am

Document Files

speaker-invariant-feature-extraction-for-zero-resource-languages-with-adversarial-learning.pdf (177 downloads)

[1] Taira Tsuchiya, Naohiro Tawara, Tetsuji Ogawa, Tetsunori Kobayashi, "Speaker Invariant Feature Extraction for Zero-Resource Languages with Adversarial Learning", IEEE SigPort, 2018. [Online]. Available: http://sigport.org/2709. Accessed: Dec. 12, 2018.
