Speaker Recognition and Characterization (SPE-SPKR)

Text-dependent Speaker Verification and RSR2015 Speech Corpus


RSR2015 (Robust Speaker Recognition 2015) is the largest publicly available speech corpus for text-dependent robust speaker recognition. The current release includes 151 hours of short-duration utterances spoken by 300 speakers. RSR2015 was developed by the Human Language Technology (HLT) department at the Institute for Infocomm Research (I2R) in Singapore. This newsletter article describes the RSR2015 corpus, which addresses the renewed interest in text-dependent speaker recognition.


Paper Details

Submitted On: 23 February 2016 - 1:44pm

Document Files

RSR2015_v2.pdf


[1] , "Text-dependent Speaker Verification and RSR2015 Speech Corpus", IEEE SigPort, 2014. [Online]. Available: http://sigport.org/54. Accessed: Sep. 20, 2018.
@article{54-14,
url = {http://sigport.org/54},
author = { },
publisher = {IEEE SigPort},
title = {Text-dependent Speaker Verification and RSR2015 Speech Corpus},
year = {2014} }
TY - EJOUR
T1 - Text-dependent Speaker Verification and RSR2015 Speech Corpus
AU -
PY - 2014
PB - IEEE SigPort
UR - http://sigport.org/54
ER -
. (2014). Text-dependent Speaker Verification and RSR2015 Speech Corpus. IEEE SigPort. http://sigport.org/54
, 2014. Text-dependent Speaker Verification and RSR2015 Speech Corpus. Available at: http://sigport.org/54.
. (2014). "Text-dependent Speaker Verification and RSR2015 Speech Corpus." Web.
1. . Text-dependent Speaker Verification and RSR2015 Speech Corpus [Internet]. IEEE SigPort; 2014. Available from : http://sigport.org/54

Sufficiency quantification for seamless text-independent speaker enrollment


Text-independent speaker recognition (TI-SR) requires a lengthy enrollment process that demands dedicated time from the user to create a reliable model of their voice. Seamless enrollment is a highly attractive alternative in which enrollment happens in the background and asks for no dedicated time from the user. One of the key problems in a fully automated seamless enrollment process is determining whether a given utterance collection is sufficient for the purpose of TI-SR. No known metric exists in the literature to quantify this sufficiency; a rough sense of what such a check might look like is sketched below.
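
The abstract does not give the paper's actual metric, so the following Python sketch is only a rough illustration of what a sufficiency check could look like: declare the collection sufficient once enough net speech has accumulated and the running speaker-embedding estimate has stabilised. The function name is_sufficient and the thresholds min_speech_s and tol are hypothetical.

# Hypothetical sufficiency check (names and thresholds are placeholders; the
# paper's metric is not given in the abstract). Proxy used here: enough net
# speech has been seen, and adding the newest half of the data barely moves
# the running speaker model.
import numpy as np

def is_sufficient(embeddings, durations_s, min_speech_s=30.0, tol=0.02):
    """embeddings: (n_utts, dim) array of per-utterance speaker embeddings;
    durations_s: net speech length of each utterance, in seconds."""
    if sum(durations_s) < min_speech_s:          # not enough raw speech yet
        return False
    half = len(embeddings) // 2
    if half == 0:
        return False
    m_old = np.mean(embeddings[:half], axis=0)   # model from earlier data
    m_all = np.mean(embeddings, axis=0)          # model from everything
    m_old /= np.linalg.norm(m_old)
    m_all /= np.linalg.norm(m_all)
    # small movement => the estimate has converged, call it sufficient
    return float(1.0 - np.dot(m_old, m_all)) < tol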

Paper Details

Authors: Gokcen Cilingir, Jonathan Huang, Mandar S Joshi, Narayan Biswal
Submitted On: 13 July 2018 - 3:38pm

Document Files

Poster presented at ICASSP 2018
Paper for ICASSP 2018

Cite:
Gokcen Cilingir, Jonathan Huang, Mandar S Joshi, Narayan Biswal, "Sufficiency quantification for seamless text-independent speaker enrollment", IEEE SigPort, 2018. [Online]. Available: http://sigport.org/3379. Accessed: Sep. 20, 2018.

Unsupervised Domain Adaptation via Domain Adversarial Training for Speaker Recognition


The i-vector approach to speaker recognition achieves good performance when the domain of the evaluation dataset is similar to that of the training dataset. In real-world applications, however, there is always a mismatch between the training and evaluation datasets, which leads to performance degradation. To address this problem, this paper proposes to learn domain-invariant and speaker-discriminative speech representations via domain adversarial training; a minimal sketch of that training recipe follows.
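
Domain adversarial training is typically realised with a gradient reversal layer (the standard DANN recipe). The PyTorch sketch below shows that recipe under assumed layer sizes and head dimensions; it is not the paper's exact architecture.

# Sketch of domain adversarial training with a gradient reversal layer;
# layer sizes, loss weights, and the two-domain setup are assumptions.
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_out):
        # flip (and scale) gradients flowing back into the encoder
        return -ctx.lam * grad_out, None

class DANN(nn.Module):
    def __init__(self, in_dim=600, emb_dim=256, n_spk=1000, n_dom=2):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, 512), nn.ReLU(),
                                     nn.Linear(512, emb_dim))
        self.spk_head = nn.Linear(emb_dim, n_spk)  # speaker-discriminative
        self.dom_head = nn.Linear(emb_dim, n_dom)  # domain-adversarial

    def forward(self, x, lam=1.0):
        z = self.encoder(x)
        return self.spk_head(z), self.dom_head(GradReverse.apply(z, lam))

# One training step: the speaker loss separates speakers while the reversed
# domain loss pushes the encoder toward domain-invariant representations:
#   spk_logits, dom_logits = model(ivectors, lam)
#   loss = ce(spk_logits, spk_labels) + ce(dom_logits, dom_labels)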

Paper Details

Authors: Qing Wang, Wei Rao, Sining Sun, Lei Xie, Eng Siong Chng, Haizhou Li
Submitted On: 25 April 2018 - 2:23am

Document Files

icassp2018_slides_qingwang_Unsupervised Domain Adaptation via Domain Adversarial Training for Speaker Recognition.pdf


Cite:
Qing Wang, Wei Rao, Sining Sun, Lei Xie, Eng Siong Chng, Haizhou Li, "Unsupervised Domain Adaptation via Domain Adversarial Training for Speaker Recognition", IEEE SigPort, 2018. [Online]. Available: http://sigport.org/3173. Accessed: Sep. 20, 2018.

GENERALISED DISCRIMINATIVE TRANSFORM VIA CURRICULUM LEARNING FOR SPEAKER RECOGNITION


In this paper we introduce a speaker verification system, deployed on mobile devices, that can be used to personalise a keyword spotter. We describe a baseline DNN system that maps an utterance to a speaker embedding, with speaker differences measured via cosine similarity (sketched below). We then introduce an architectural modification that uses an LSTM whose parameters are optimised via a curriculum learning procedure, reducing the detection error and improving generalisability across various conditions.
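
A minimal sketch of the cosine-similarity scoring step described above, assuming per-utterance embeddings are already available; get_embedding and the 0.6 threshold are placeholders, not the paper's values.

# Scoring step only; the DNN that produces embeddings is assumed elsewhere.
import numpy as np

def cosine_score(test_emb, enroll_embs):
    """enroll_embs: (n_enroll, dim) embeddings of the enrollment utterances."""
    speaker_model = np.mean(enroll_embs, axis=0)   # average the enrollment
    a = test_emb / np.linalg.norm(test_emb)
    b = speaker_model / np.linalg.norm(speaker_model)
    return float(np.dot(a, b))                     # cosine similarity in [-1, 1]

# accept the detected keyword as spoken by the enrolled user when the score
# clears a threshold tuned for the target false-accept rate, e.g.:
#   accept = cosine_score(get_embedding(utt), enrolled_embs) > 0.6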

Paper Details

Authors: Erik Marchi, Stephen Shum, Kyuyeon Hwang, Sachin Kajarekar, Siddharth Sigtia, Hywel Richards, Rob Haynes, Yoon Kim, John Bridle
Submitted On: 23 April 2018 - 1:24am

Document Files

Siri_PHS_CurriculumLearning_ICASSP18v3.pdf


Cite:
Erik Marchi, Stephen Shum, Kyuyeon Hwang, Sachin Kajarekar, Siddharth Sigtia, Hywel Richards, Rob Haynes, Yoon Kim, John Bridle, "GENERALISED DISCRIMINATIVE TRANSFORM VIA CURRICULUM LEARNING FOR SPEAKER RECOGNITION", IEEE SigPort, 2018. [Online]. Available: http://sigport.org/3144. Accessed: Sep. 20, 2018.

A generative auditory model embedded neural network for speech processing


Before the era of the neural network (NN), features extracted from auditory models had been applied to various speech applications and demonstrated to be more robust against noise than conventional speech-processing features. What is the role of auditory models in the current NN era? Are they obsolete? To answer this question, we construct an NN with a generative auditory model embedded to process speech signals. The generative auditory model consists of two stages, the first being spectrum estimation along a logarithmic-frequency axis; a rough stand-in for that front-end is sketched below.
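
As a loose illustration only: spectrum estimation along a logarithmic-frequency axis can be approximated by pooling an STFT into log-spaced bands. The paper's generative auditory model is more elaborate; all parameters below (band count, FFT sizes, floor constant) are assumptions.

# Rough stand-in for the auditory model's first stage: pool an STFT into
# log-spaced frequency bands. All parameters here are assumptions.
import numpy as np
from scipy.signal import stft

def log_freq_spectrogram(wav, fs=16000, n_bands=64, fmin=100.0):
    f, t, Z = stft(wav, fs=fs, nperseg=512, noverlap=384)
    power = np.abs(Z) ** 2
    edges = np.geomspace(fmin, fs / 2, n_bands + 1)  # log-spaced band edges
    bands = np.zeros((n_bands, power.shape[1]))
    for i in range(n_bands):
        sel = (f >= edges[i]) & (f < edges[i + 1])
        if sel.any():
            bands[i] = power[sel].mean(axis=0)       # average bins per band
    return np.log(bands + 1e-10)  # (n_bands, n_frames), fed to the CNN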

Paper Details

Authors: Yu-Wen Lo, Yih-Liang Shen, Yuan-Fu Liao, and Tai-Shih Chi
Submitted On: 22 April 2018 - 5:54am
Keywords: generative auditory model, convolutional neural network, multi-resolution, speaker identification

Document Files

A generative auditory model embedded neural network for speech processing.pdf


Cite:
Yu-Wen Lo, Yih-Liang Shen, Yuan-Fu Liao, and Tai-Shih Chi, "A generative auditory model embedded neural network for speech processing", IEEE SigPort, 2018. [Online]. Available: http://sigport.org/3130. Accessed: Sep. 20, 2018.

DEEP FACTORIZATION FOR SPEECH SIGNAL


Various informative factors are mixed together in speech signals, which makes decoding any single factor highly difficult. An intuitive idea is to factorize each speech frame into individual informative factors, though this turns out to be hard in practice. Recently, we found that speaker traits, which were assumed to be long-term distributional properties, are actually short-time patterns and can be learned by a carefully designed deep neural network (DNN). This discovery motivated the cascaded deep factorization (CDF) framework presented in this paper; the cascade idea is sketched below.
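
A minimal PyTorch sketch of the cascade idea under assumed dimensions: a first network infers a speaker factor from each frame, and a second factor extractor is conditioned on that inferred factor. This illustrates the CDF structure, not the paper's actual networks.

# Cascade illustration with placeholder dimensions: stage 1 infers a speaker
# factor per frame; stage 2 is conditioned on that factor to infer another
# factor. Layer sizes and the choice of factors are assumptions.
import torch
import torch.nn as nn

class FactorNet(nn.Module):
    def __init__(self, in_dim, factor_dim, n_classes):
        super().__init__()
        self.extract = nn.Sequential(nn.Linear(in_dim, 256), nn.ReLU(),
                                     nn.Linear(256, factor_dim))
        self.classify = nn.Linear(factor_dim, n_classes)

    def forward(self, x):
        factor = self.extract(x)
        return factor, self.classify(factor)

feat_dim, spk_dim = 40, 128
spk_net = FactorNet(feat_dim, spk_dim, n_classes=1000)      # stage 1: speaker
res_net = FactorNet(feat_dim + spk_dim, 64, n_classes=5)    # stage 2: conditioned

frames = torch.randn(32, feat_dim)                  # a batch of speech frames
spk_factor, _ = spk_net(frames)                     # frame-level speaker factor
stage2_in = torch.cat([frames, spk_factor.detach()], dim=1)
second_factor, second_logits = res_net(stage2_in)   # conditioned on stage 1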

Paper Details

Authors: Lantian Li, Dong Wang, Yixiang Chen, Ying Shi, Zhiyuan Tang, Thomas Fang Zheng
Submitted On: 20 April 2018 - 7:42am

Document Files

180417-deepFactor-LLT.pptx


Cite:
Lantian Li, Dong Wang, Yixiang Chen, Ying Shi, Zhiyuan Tang, Thomas Fang Zheng, "DEEP FACTORIZATION FOR SPEECH SIGNAL", IEEE SigPort, 2018. [Online]. Available: http://sigport.org/3101. Accessed: Sep. 20, 2018.

FULL-INFO TRAINING FOR DEEP SPEAKER FEATURE LEARNING


Recent studies have shown that speaker patterns can be learned from very short speech segments (e.g., 0.3 seconds) by a carefully designed convolutional and time-delay deep neural network (CT-DNN) model. By training the model to discriminate between the speakers in the training data, frame-level speaker features can be derived from the last hidden layer, as sketched below.
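
The following PyTorch sketch illustrates the extraction step described above: a speaker-discriminative DNN is trained with a softmax over the training speakers, and the last hidden layer's activations serve as frame-level speaker features (averaged here into an utterance vector). The plain linear layers stand in for the CT-DNN's convolutional and time-delay layers; all dimensions are assumptions.

# Frame-level speaker features from the last hidden layer of a
# speaker-discriminative DNN; plain linear layers stand in for the CT-DNN.
import torch
import torch.nn as nn

class SpeakerDNN(nn.Module):
    def __init__(self, in_dim=40, feat_dim=256, n_speakers=1000):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(in_dim, 512), nn.ReLU(),
                                  nn.Linear(512, feat_dim), nn.ReLU())
        self.softmax_layer = nn.Linear(feat_dim, n_speakers)

    def forward(self, frames):
        feats = self.body(frames)        # frame-level speaker features
        return feats, self.softmax_layer(feats)

model = SpeakerDNN()
frames = torch.randn(100, 40)             # ~1 s of 40-dim frame features
with torch.no_grad():
    feats, _ = model(frames)              # (100, 256) frame-level features
utt_vector = feats.mean(dim=0)            # average into an utterance vector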

Paper Details

Authors: Lantian Li, Zhiyuan Tang, Dong Wang, Thomas Fang Zheng
Submitted On: 20 April 2018 - 7:38am

Document Files

180418-Full_info-LLT.pptx


Cite:
Lantian Li, Zhiyuan Tang, Dong Wang, Thomas Fang Zheng, "FULL-INFO TRAINING FOR DEEP SPEAKER FEATURE LEARNING", IEEE SigPort, 2018. [Online]. Available: http://sigport.org/3100. Accessed: Sep. 20, 2018.

SPEAKER-PHONETIC VECTOR ESTIMATION FOR SHORT DURATION SPEAKER VERIFICATION

Paper Details

Authors: Jianbo Ma, Vidhyasaharan Sethu, Eliathamby Ambikairajah, Kong Aik Lee
Submitted On: 19 April 2018 - 8:47pm

Document Files

JIANBOMA_ICASSP_2018_1.pdf


Cite:
Jianbo Ma, Vidhyasaharan Sethu, Eliathamby Ambikairajah, Kong Aik Lee, "SPEAKER-PHONETIC VECTOR ESTIMATION FOR SHORT DURATION SPEAKER VERIFICATION", IEEE SigPort, 2018. [Online]. Available: http://sigport.org/3043. Accessed: Sep. 20, 2018.

Unsupervised Domain Adaptation for Gender-Aware PLDA Mixture Models

Paper Details

Authors: Longxin Li, Man-Wai Mak
Submitted On: 19 April 2018 - 5:51pm

Document Files

2018icassp_latest.pdf


Cite:
Longxin Li, Man-Wai Mak, "Unsupervised Domain Adaptation for Gender-Aware PLDA Mixture Models", IEEE SigPort, 2018. [Online]. Available: http://sigport.org/3034. Accessed: Sep. 20, 2018.
