
Multilingual Recognition and Identification (SPE-MULT)

Phoneme Level Language Models for Sequence Based Low Resource ASR


Building multilingual and crosslingual models helps bring different languages together in a language-universal space. It allows models to share parameters and transfer knowledge across languages, enabling faster and better adaptation to a new language. These approaches are particularly useful for low-resource languages. In this paper, we propose a phoneme-level language model that can be used multilingually and for crosslingual adaptation to a target language.
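
The paper's model is neural; purely to illustrate what "scoring a phoneme sequence with a phoneme-level LM" means, here is a minimal count-based sketch over a shared phoneme inventory (class and symbol names are invented, and add-one smoothing stands in for the paper's actual model):

```python
import math
from collections import defaultdict

class PhonemeBigramLM:
    """Toy bigram LM over phoneme symbols with add-one smoothing."""
    def __init__(self):
        self.counts = defaultdict(lambda: defaultdict(int))
        self.vocab = set()

    def train(self, sequences):
        for seq in sequences:
            for a, b in zip(["<s>"] + seq, seq + ["</s>"]):
                self.counts[a][b] += 1
                self.vocab.add(b)

    def log_prob(self, seq):
        lp, V = 0.0, len(self.vocab)
        for a, b in zip(["<s>"] + seq, seq + ["</s>"]):
            c = self.counts[a]
            lp += math.log((c[b] + 1) / (sum(c.values()) + V))
        return lp
```

Because phonemes (unlike graphemes) can be shared across languages, such a model can be trained on pooled multilingual data and then adapted to a target language.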

Paper Details

Authors:
Siddharth Dalmia, Xinjian Li, Alan W Black, Florian Metze
Submitted On:
14 May 2019 - 10:39am

Document Files

PLMs_ICASSP_Poster (1).pdf


[1] Siddharth Dalmia, Xinjian Li, Alan W Black, Florian Metze, "Phoneme Level Language Models for Sequence Based Low Resource ASR", IEEE SigPort, 2019. [Online]. Available: http://sigport.org/4511. Accessed: May 22, 2019.

Learning from the best: A teacher-student multilingual framework for low-resource languages


The traditional method of pretraining neural acoustic models for low-resource languages initializes the acoustic-model parameters from a large, annotated multilingual corpus, which can be a drain on time and resources. To reuse TDNN-LSTMs already pretrained on multilingual data, we apply Teacher-Student (TS) learning as a pretraining method that transfers knowledge from a multilingual TDNN-LSTM to a TDNN. Using language-specific data during teacher-student training reduces pretraining time by an order of magnitude.
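
The core of teacher-student transfer is training the student against the teacher's softened posteriors rather than hard labels. A minimal NumPy sketch of that loss (temperature value and names are illustrative, not the authors' code):

```python
import numpy as np

def softmax(z, temperature=1.0):
    z = z / temperature
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """Cross-entropy between softened teacher posteriors and student predictions."""
    p_teacher = softmax(teacher_logits, temperature)
    log_p_student = np.log(softmax(student_logits, temperature) + 1e-12)
    return -np.mean(np.sum(p_teacher * log_p_student, axis=-1))
```

Because the loss needs only the teacher's outputs on language-specific data, the expensive multilingual training happens once, in the teacher.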

Paper Details

Authors:
Deblin Bagchi and William Hartmann
Submitted On:
13 May 2019 - 5:43pm

Document Files

ICASSP_2019_poster_multi_deblin_bagchi


[1] Deblin Bagchi and William Hartmann, "Learning from the best: A teacher-student multilingual framework for low-resource languages", IEEE SigPort, 2019. [Online]. Available: http://sigport.org/4493. Accessed: May 22, 2019.

END-TO-END LANGUAGE RECOGNITION USING ATTENTION BASED HIERARCHICAL GATED RECURRENT UNIT MODELS


The task of automatic language identification (LID) involving multiple dialects of the same language family on short speech recordings is a challenging problem. This can be further complicated for short-duration audio snippets in the presence of noise sources. In these scenarios, the identity of the language/dialect may be reliably present only in parts of the speech embedded in the temporal sequence.
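
When the dialect identity is carried by only parts of the utterance, attention lets the model weight informative frames before classification. A minimal sketch of attention pooling over recurrent-layer outputs (the learned vector `w` and shapes are assumptions; the paper's model is hierarchical and more elaborate):

```python
import numpy as np

def attention_pool(h, w):
    """h: (T, D) GRU outputs over time; w: (D,) learned attention vector.
    Returns a (D,) utterance embedding as an attention-weighted average."""
    scores = h @ w                        # one relevance score per frame
    scores -= scores.max()                # numerical stability
    a = np.exp(scores)
    a /= a.sum()                          # attention weights over time
    return a @ h                          # weighted average of frame outputs
```

The fixed-dimensional pooled embedding then feeds a classifier over languages/dialects, so short and noisy recordings are summarized by their most informative regions.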

Paper Details

Authors:
Bharat Padi, Anand Mohan, Sriram Ganapathy
Submitted On:
10 May 2019 - 2:30am

Document Files

ICASSP19_3253_poster.pdf


[1] Bharat Padi, Anand Mohan, Sriram Ganapathy, "END-TO-END LANGUAGE RECOGNITION USING ATTENTION BASED HIERARCHICAL GATED RECURRENT UNIT MODELS", IEEE SigPort, 2019. [Online]. Available: http://sigport.org/4275. Accessed: May 22, 2019.

EXPLORING RETRAINING-FREE SPEECH RECOGNITION FOR INTRA-SENTENTIAL CODE-SWITCHING


Code switching, the phenomenon of changing languages within a sentence or discourse, is a challenge for conventional automatic speech recognition systems deployed to handle a single target language. The problem is compounded by the lack of multilingual training data needed to build new, ad hoc multilingual acoustic and language models. In this work, we present a prototype code-switching speech recognition system that leverages existing monolingual acoustic and language models; no ad hoc training is needed.
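
The abstract does not spell out how the monolingual models are combined; as a generic illustration of the idea (not the paper's actual mechanism), one can route each token to the monolingual LM that covers it and penalize language switches. Dictionaries, the penalty value, and the assumption that every token is covered by one of the two LMs are all invented here:

```python
import math

def score_code_switched(tokens, lm_a, lm_b, switch_logp=math.log(0.1)):
    """Hypothetical sketch: score a mixed-language token sequence with two
    existing monolingual unigram LMs, adding a fixed log-penalty at each
    language switch. Assumes each token appears in lm_a or lm_b."""
    total, prev_lang = 0.0, None
    for tok in tokens:
        if tok in lm_a:
            lang, p = "a", lm_a[tok]
        else:
            lang, p = "b", lm_b[tok]
        if prev_lang is not None and lang != prev_lang:
            total += switch_logp      # discourage gratuitous switching
        total += math.log(p)
        prev_lang = lang
    return total
```

The appeal of such retraining-free schemes is exactly what the abstract states: the monolingual components stay untouched.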

Paper Details

Authors:
Zhen Huang, Xiaodan Zhuang, Daben Liu, Xiaoqiang Xiao, Yuchen Zhang, Sabato Marco Siniscalchi
Submitted On:
7 May 2019 - 2:28pm

Document Files

CS_final-3 copy.pdf


[1] Zhen Huang, Xiaodan Zhuang, Daben Liu, Xiaoqiang Xiao, Yuchen Zhang, Sabato Marco Siniscalchi, "EXPLORING RETRAINING-FREE SPEECH RECOGNITION FOR INTRA-SENTENTIAL CODE-SWITCHING", IEEE SigPort, 2019. [Online]. Available: http://sigport.org/3942. Accessed: May 22, 2019.

Tuplemax Loss for Language Identification


In many language identification scenarios, the user specifies a small set of languages they can speak rather than drawing from the large set of all possible languages. We model this prior knowledge in the way we train our neural networks, replacing the commonly used softmax loss with a novel loss function named tuplemax loss. In fact, in a typical language identification system deployed in North America, about 95% of users speak no more than two languages.
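
The idea is to average pairwise two-class log-softmax terms between the target language and each other candidate in the user's tuple. A NumPy sketch consistent with that formulation (array layout and names are assumptions, not the authors' code):

```python
import numpy as np

def tuplemax_loss(logits, target, candidates):
    """Average pairwise binary log-softmax loss between the target language
    and every other candidate language the user specified."""
    s_y = logits[target]
    others = np.array([logits[k] for k in candidates if k != target])
    # -log(e^{s_y} / (e^{s_y} + e^{s_k})) == log1p(exp(s_k - s_y))
    return float(np.mean(np.log1p(np.exp(others - s_y))))
```

Unlike full softmax, only languages in the user's declared tuple compete with the target, matching the deployment prior described above.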

Paper Details

Authors:
Li Wan, Prashant Sridhar, Yang Yu, Quan Wang, Ignacio Lopez Moreno
Submitted On:
24 April 2019 - 11:03am

Document Files

poster


[1] Li Wan, Prashant Sridhar, Yang Yu, Quan Wang, Ignacio Lopez Moreno, "Tuplemax Loss for Language Identification", IEEE SigPort, 2019. [Online]. Available: http://sigport.org/3896. Accessed: May 22, 2019.

MULTILINGUAL SPEECH RECOGNITION WITH A SINGLE END-TO-END MODEL


Training a conventional automatic speech recognition (ASR) system to support multiple languages is challenging because the subword unit, lexicon and word inventories are typically language specific. In contrast, sequence-to-sequence models are well suited for multilingual ASR because they encapsulate an acoustic, pronunciation and language model jointly in a single network. In this work we present a single sequence-to-sequence ASR model trained on 9 different Indian languages, which have very little overlap in their scripts.
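
One common way to let a single decoder serve many languages is a union grapheme vocabulary with a per-language tag in the target sequence; the sketch below illustrates that scheme (special-symbol names and layout are invented, not necessarily this paper's exact representation):

```python
def build_shared_vocab(per_language_charsets):
    """Union grapheme inventory plus one tag symbol per language."""
    chars = sorted(set().union(*per_language_charsets))
    tags = [f"<lang{i}>" for i in range(len(per_language_charsets))]
    symbols = ["<sos>", "<eos>"] + tags + chars
    return {s: i for i, s in enumerate(symbols)}

def encode_target(text, lang_idx, vocab):
    # target sequence: <sos> <lang-tag> graphemes <eos>
    ids = [vocab["<sos>"], vocab[f"<lang{lang_idx}>"]]
    ids += [vocab[c] for c in text]
    ids.append(vocab["<eos>"])
    return ids
```

With little script overlap across the 9 languages, the union vocabulary stays close to the sum of the per-language inventories, yet all languages share one output layer.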

Paper Details

Authors:
Shubham Toshniwal, Tara N. Sainath, Ron J. Weiss, Bo Li, Pedro Moreno, Eugene Weinstein, Kanishka Rao
Submitted On:
19 April 2018 - 4:43pm

Document Files

Multilingual end-to-end model


[1] Shubham Toshniwal, Tara N. Sainath, Ron J. Weiss, Bo Li, Pedro Moreno, Eugene Weinstein, Kanishka Rao, "MULTILINGUAL SPEECH RECOGNITION WITH A SINGLE END-TO-END MODEL", IEEE SigPort, 2018. [Online]. Available: http://sigport.org/3024. Accessed: May 22, 2019.

SEQUENCE-BASED MULTI-LINGUAL LOW RESOURCE SPEECH RECOGNITION


Techniques for multi-lingual and cross-lingual speech recognition can help in low resource scenarios, to bootstrap systems and enable analysis of new languages and domains. End-to-end approaches, in particular sequence-based techniques, are attractive because of their simplicity and elegance. While it is possible to integrate traditional multi-lingual bottleneck feature extractors as front-ends, we show that end-to-end multi-lingual training of sequence models is effective on context independent models trained using Connectionist Temporal Classification (CTC) loss.
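
CTC scores a label sequence by summing over all blank-augmented alignments of the network outputs. A minimal NumPy forward pass makes this concrete (an illustrative reference implementation, not the training code used in the paper):

```python
import numpy as np

def ctc_neg_log_likelihood(log_probs, target, blank=0):
    """Minimal CTC forward pass: -log P(target | log_probs), summing over all
    blank-augmented alignments. log_probs is (T, V) per-frame log-softmax."""
    T, _ = log_probs.shape
    ext = [blank]
    for lab in target:                    # interleave labels with blanks
        ext.extend([lab, blank])
    S = len(ext)
    alpha = np.full(S, -np.inf)
    alpha[0] = log_probs[0, ext[0]]
    if S > 1:
        alpha[1] = log_probs[0, ext[1]]
    for t in range(1, T):
        prev = alpha.copy()
        for s in range(S):
            a = prev[s]                   # stay on the same symbol
            if s >= 1:
                a = np.logaddexp(a, prev[s - 1])
            if s >= 2 and ext[s] != blank and ext[s] != ext[s - 2]:
                a = np.logaddexp(a, prev[s - 2])   # skip a blank
            alpha[s] = a + log_probs[t, ext[s]]
    total = alpha[-1] if S == 1 else np.logaddexp(alpha[-1], alpha[-2])
    return float(-total)
```

Because CTC needs no frame-level alignments or lexicon, the same loss applies unchanged when the output inventory is a multilingual union of context-independent units.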

Paper Details

Authors:
Siddharth Dalmia, Ramon Sanabria, Florian Metze, Alan W Black
Submitted On:
18 April 2018 - 3:03pm

Document Files

Dalmia_ICASSP_2018.pdf


[1] Siddharth Dalmia, Ramon Sanabria, Florian Metze, Alan W Black, "SEQUENCE-BASED MULTI-LINGUAL LOW RESOURCE SPEECH RECOGNITION", IEEE SigPort, 2018. [Online]. Available: http://sigport.org/2970. Accessed: May 22, 2019.

Towards language-universal end-to-end speech recognition

Paper Details

Submitted On:
18 April 2018 - 5:21pm

Document Files

2018_icassp_presentation_4.pdf



[1] "Towards language-universal end-to-end speech recognition", IEEE SigPort, 2018. [Online]. Available: http://sigport.org/2969. Accessed: May 22, 2019.

A Novel Learnable Dictionary Encoding Layer for End-to-End Language Identification


A novel learnable dictionary encoding layer is proposed in this paper for end-to-end language identification. It is in line with the conventional GMM i-vector approach both theoretically and practically. We imitate the mechanism of traditional GMM training and supervector encoding on top of a CNN. The proposed layer accumulates high-order statistics from a variable-length input sequence and generates a fixed-dimensional utterance-level vector representation.
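
A forward-pass sketch of such a dictionary-encoding layer in NumPy: frames are softly assigned to learnable centers and their residuals aggregated into a fixed-size vector, independent of input length (shapes, the scale parameterization, and normalization details here are assumptions, not the paper's exact layer):

```python
import numpy as np

def lde_encode(frames, centers, scales):
    """frames: (T, D) CNN outputs; centers: (C, D) learnable dictionary;
    scales: (C,) learnable smoothing factors. Returns a (C*D,) vector."""
    diffs = frames[:, None, :] - centers[None, :, :]     # (T, C, D) residuals
    dist2 = np.sum(diffs ** 2, axis=-1)                  # (T, C)
    logits = -scales[None, :] * dist2
    logits -= logits.max(axis=1, keepdims=True)          # stability
    w = np.exp(logits)
    w /= w.sum(axis=1, keepdims=True)                    # soft assignments
    agg = (w[:, :, None] * diffs).sum(axis=0)            # (C, D)
    agg /= w.sum(axis=0)[:, None] + 1e-8                 # normalize per center
    return agg.reshape(-1)
```

This mirrors GMM supervector encoding: the soft assignments play the role of posteriors, and the aggregated residuals the role of per-component statistics, but everything is differentiable and trained end to end.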

Paper Details

Authors:
Weicheng Cai, Zexin Cai, Xiang Zhang, Xiaoqi Wang, Ming Li
Submitted On:
13 April 2018 - 9:37am

Document Files

poster_weichcai_icassp2018_lde.pdf


[1] Weicheng Cai, Zexin Cai, Xiang Zhang, Xiaoqi Wang, Ming Li, "A Novel Learnable Dictionary Encoding Layer for End-to-End Language Identification", IEEE SigPort, 2018. [Online]. Available: http://sigport.org/2701. Accessed: May 22, 2019.

Insights into End-to-End Learning Scheme for Language Identification


A novel interpretable end-to-end learning scheme for language identification is proposed. It is in line with the classical GMM i-vector methods both theoretically and practically. In the end-to-end pipeline, a general encoding layer is employed on top of the front-end CNN so that it can automatically encode a variable-length input sequence into an utterance-level vector. After comparing with state-of-the-art GMM i-vector methods, we give insights into the CNN and reveal its role and effect in the whole pipeline.

Paper Details

Authors:
Weicheng Cai, Zexin Cai, Wenbo Liu, Xiaoqi Wang, Ming Li
Submitted On:
13 April 2018 - 9:32am

Document Files

poster_weichcai_icassp2018_e2e.pdf


[1] Weicheng Cai, Zexin Cai, Wenbo Liu, Xiaoqi Wang, Ming Li, "Insights into End-to-End Learning Scheme for Language Identification", IEEE SigPort, 2018. [Online]. Available: http://sigport.org/2699. Accessed: May 22, 2019.
