
General Topics in Speech Recognition (SPE-GASR)

Comparison of DCT and Autoencoder-based Features for DNN-HMM Multimodal Silent Speech Recognition

Paper Details

Authors: Yan Ji, Hongcui Wang, Bruce Denby
Submitted On: 15 October 2016
Document file: poster-llc.pdf

Cite: Yan Ji, Hongcui Wang, Bruce Denby, "Comparison of DCT and Autoencoder-based Features for DNN-HMM Multimodal Silent Speech Recognition", IEEE SigPort, 2016. [Online]. Available: http://sigport.org/1235.


FILTERBANK LEARNING USING CONVOLUTIONAL RESTRICTED BOLTZMANN MACHINE FOR SPEECH RECOGNITION


Examples of subband filters learned using ConvRBM: (a) filters in time-domain (i.e., impulse responses), (b) filters in frequency-domain (i.e., frequency responses).

This paper presents the Convolutional Restricted Boltzmann Machine (ConvRBM) as a model for the speech signal. We developed ConvRBM with sampling from noisy rectified linear units (NReLUs). ConvRBM is trained in an unsupervised way to model speech signals of arbitrary length, and the weights of the model can represent an auditory-like filterbank. Our …
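The NReLU sampling step mentioned above can be sketched in a few lines: each filter is cross-correlated with the waveform, and Gaussian noise with variance sigmoid(pre-activation) is added before rectification (Nair and Hinton's noisy rectified linear units). This is a minimal illustration, not the authors' implementation; the filter shapes, toy input, and function names below are assumptions made for the example.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def convrbm_hidden_sample(x, W, b, rng):
    """Sample ConvRBM hidden units with noisy rectified linear units.

    x : (T,) raw waveform segment
    W : (K, L) K subband filters of length L
    b : (K,) hidden biases

    Each hidden pre-activation gets Gaussian noise with variance
    sigmoid(pre-activation), then is rectified (the NReLU sampling rule).
    """
    K, L = W.shape
    pre = np.empty((K, x.shape[0] - L + 1))
    for k in range(K):
        # 'valid' cross-correlation of the waveform with subband filter k
        pre[k] = np.correlate(x, W[k], mode="valid") + b[k]
    noise = rng.standard_normal(pre.shape) * np.sqrt(sigmoid(pre))
    return np.maximum(0.0, pre + noise)

rng = np.random.default_rng(0)
x = rng.standard_normal(1600)             # toy stand-in for 100 ms of 16 kHz speech
W = rng.standard_normal((8, 128)) * 0.01  # 8 untrained filters of length 128
b = np.zeros(8)
h = convrbm_hidden_sample(x, W, b, rng)
print(h.shape)  # (8, 1473): one activation sequence per subband filter
```

After unsupervised training, rows of `W` would play the role of the learned auditory-like subband filters shown in the figure.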

Paper Details

Authors: Hardik B. Sailor, Hemant A. Patil
Submitted On: 31 March 2016
Document file: poster.pdf

Cite: Hardik B. Sailor, Hemant A. Patil, "FILTERBANK LEARNING USING CONVOLUTIONAL RESTRICTED BOLTZMANN MACHINE FOR SPEECH RECOGNITION", IEEE SigPort, 2016. [Online]. Available: http://sigport.org/1075.


Selection and Combination of Hypotheses for Dialectal Speech Recognition

Paper Details

Authors: Victor Soto, Olivier Siohan, Mohamed Elfeky, Pedro Moreno
Submitted On: 21 March 2016
Document file: poster_icassp16.pdf

Cite: Victor Soto, Olivier Siohan, Mohamed Elfeky, Pedro Moreno, "Selection and Combination of Hypotheses for Dialectal Speech Recognition", IEEE SigPort, 2016. [Online]. Available: http://sigport.org/945.

Divergence estimation based on deep neural networks and its use for language identification


In this paper, we propose a method to estimate the statistical divergence between probability distributions with a DNN-based discriminative approach, and we apply it to language identification tasks. Statistical divergence is generally defined as a functional of two probability density functions, which are usually represented in parametric form; if there is a mismatch between the assumed distribution and the true one, the resulting divergence estimate is erroneous.
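The discriminative idea can be illustrated with a toy sketch: train a classifier to separate samples of p from samples of q, and read the log density ratio off its logit, with no parametric density assumed for either distribution. The paper uses a DNN; the minimal logistic-regression stand-in, the Gaussian toy distributions, and all names below are assumptions for illustration only.

```python
import numpy as np

# Toy stand-in distributions with a known answer: KL(N(0,1) || N(1,1)) = 0.5.
rng = np.random.default_rng(0)
n = 20000
xp = rng.normal(0.0, 1.0, n)  # samples from p
xq = rng.normal(1.0, 1.0, n)  # samples from q

# Train a tiny logistic regression to separate p (label 1) from q (label 0).
X = np.concatenate([xp, xq])
y = np.concatenate([np.ones(n), np.zeros(n)])
w, b, lr = 0.0, 0.0, 0.1
for _ in range(2000):
    s = 1.0 / (1.0 + np.exp(-(w * X + b)))
    g = s - y                    # pointwise gradient of the logistic loss
    w -= lr * np.mean(g * X)
    b -= lr * np.mean(g)

# With equal class priors, the trained logit approximates log p(x)/q(x),
# so KL(p||q) = E_p[log p/q] is estimated by the mean logit over p-samples,
# without assuming a parametric form for either density.
kl_est = np.mean(w * xp + b)
print(f"estimated KL: {kl_est:.2f}")  # true value for these Gaussians is 0.5
```

Replacing the logistic regression with a DNN classifier extends the same estimator to distributions whose log density ratio is not linear in the features.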

Paper Details

Authors: Yosuke Kashiwagi, Congying Zhang, Daisuke Saito, Nobuaki Minematsu
Submitted On: 21 March 2016
Document file: ICASSP_2016.pdf

Cite: Yosuke Kashiwagi, Congying Zhang, Daisuke Saito, Nobuaki Minematsu, "Divergence estimation based on deep neural networks and its use for language identification", IEEE SigPort, 2016. [Online]. Available: http://sigport.org/942.

ACCELERATING MULTI-USER LARGE VOCABULARY CONTINUOUS SPEECH RECOGNITION ON HETEROGENEOUS CPU-GPU PLATFORMS


In our previous work, we developed a GPU-accelerated speech recognition engine optimized for faster-than-real-time speech recognition on a heterogeneous CPU-GPU architecture. In this work, we focus on a scalable server-client architecture specifically optimized to decode speech from multiple users simultaneously in real time.
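The core serving idea, amortizing one accelerator call across many concurrent users by batching their pending frames, can be sketched as follows. This is an illustrative stand-in, not the paper's engine: the queue-draining logic, the names, and the toy "acoustic model" matrix are all invented for the example.

```python
import numpy as np
from queue import Queue, Empty

def batch_decode_step(requests, batch_size=8, feat_dim=40):
    """Drain up to batch_size pending per-user frames and score them in one
    batched acoustic-model call (a matrix multiply stands in for the GPU
    kernel launch). Batching amortizes per-call overhead across users,
    which is the core idea behind serving many real-time clients."""
    users, frames = [], []
    while len(frames) < batch_size:
        try:
            uid, frame = requests.get_nowait()
        except Empty:
            break
        users.append(uid)
        frames.append(frame)
    if not frames:
        return {}
    X = np.stack(frames)                    # (B, feat_dim): one row per user
    W = np.ones((feat_dim, 10)) / feat_dim  # toy "acoustic model" weights
    return dict(zip(users, X @ W))          # one batched call for all users

q = Queue()
for uid in range(3):                        # three users each submit one frame
    q.put((uid, np.full(40, float(uid))))
out = batch_decode_step(q)
print(sorted(out))  # [0, 1, 2]: every queued user was scored in one batch
```

A real server would run this step in a loop on the GPU thread while per-client CPU threads enqueue feature frames and consume the returned scores.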

Paper Details

Authors: Ian Lane
Submitted On: 20 March 2016
Document file: 2016_Kim_ICASSP-poster.pdf

Cite: Ian Lane, "ACCELERATING MULTI-USER LARGE VOCABULARY CONTINUOUS SPEECH RECOGNITION ON HETEROGENEOUS CPU-GPU PLATFORMS", IEEE SigPort, 2016. [Online]. Available: http://sigport.org/889.

Progress on Phoneme Recognition with a Continuous-State HMM



Recent advances in automatic speech recognition have used large corpora and powerful computational resources to train complex statistical models from high-dimensional features, attempting to capture all the variability found in natural speech. Such models are difficult to interpret, may be fragile, and contradict or ignore knowledge of human speech production and perception. We report progress towards phoneme recognition using a model of speech which employs very few parameters and which is more faithful to the dynamics and …

Paper Details

Authors: Philip Weber, Linxue Bai, Steve Houghton, Peter Jancovic, Martin Russell
Submitted On: 14 March 2016
Document file: ICASSP2016.pdf

Cite: Philip Weber, Linxue Bai, Steve Houghton, Peter Jancovic, Martin Russell, "Progress on Phoneme Recognition with a Continuous-State HMM", IEEE SigPort, 2016. [Online]. Available: http://sigport.org/672.

Shaking and Speech-smile Vowels Classification: An Attempt at Amusement Arousal Estimation from Speech Signals


Amusement Level Assessment

In this paper, we present our work on classifying speech-smile and shaking vowels. An efficient classification system would be a first step towards estimating, from speech signals alone, amusement levels beyond smiling, since shaking vowels represent a transition from smile to laughter superimposed on speech. A database containing examples of both classes was collected from acted and spontaneous speech corpora. We present an experimental study using several acoustic feature sets and also propose novel features.

Paper Details

Authors: Stéphane Dupont, Hüseyin Cakmak, Thierry Dutoit
Submitted On: 23 February 2016
Document file: GlobalSip2015_ElHaddad_Dupont_Cakmak_Dutoit.pdf

Cite: Stéphane Dupont, Hüseyin Cakmak, Thierry Dutoit, "Shaking and Speech-smile Vowels Classification: An Attempt at Amusement Arousal Estimation from Speech Signals", IEEE SigPort, 2015. [Online]. Available: http://sigport.org/404.