
Robust Speech Recognition (SPE-ROBU)

IMPROVED CEPSTRA MINIMUM-MEAN-SQUARE-ERROR NOISE REDUCTION ALGORITHM FOR ROBUST SPEECH RECOGNITION

Paper Details

Authors:
Yan Huang; Yifan Gong
Submitted On:
7 March 2017 - 12:26am

Document Files

ICMMSE_Final.pptx



[1] Yan Huang; Yifan Gong, "IMPROVED CEPSTRA MINIMUM-MEAN-SQUARE-ERROR NOISE REDUCTION ALGORITHM FOR ROBUST SPEECH RECOGNITION", IEEE SigPort, 2017. [Online]. Available: http://sigport.org/1676. Accessed: Sep. 21, 2017.

Speech Activity Detection in Online Broadcast Transcription Using Deep Neural Networks and Weighted Finite State Transducers


A new approach to online Speech Activity Detection (SAD) is proposed. This approach is designed for use in a system that carries out 24/7 transcription of radio/TV broadcasts containing a large amount of non-speech segments. To improve the robustness of detection, we adopt Deep Neural Networks (DNNs) trained on artificially created mixtures of speech and non-speech signals at desired levels of Signal-to-Noise Ratio (SNR). An integral part of our approach is an online decoder based on Weighted Finite State Transducers (WFSTs); this decoder smooths the output from the DNN.
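
The WFST decoder itself is beyond a short sketch, but the smoothing it performs on the frame-level DNN output can be illustrated with a much simpler stand-in: a median filter over the per-frame speech posteriors followed by thresholding. The function name `smooth_sad` and the parameters `win` and `threshold` are illustrative, not from the paper.

```python
import numpy as np

def smooth_sad(posteriors, win=11, threshold=0.5):
    """Median-smooth per-frame speech posteriors, then threshold.

    A crude stand-in for the paper's WFST decoder: it suppresses
    short spurious flips in the frame-level DNN output.
    """
    pad = win // 2
    padded = np.pad(posteriors, pad, mode="edge")
    smoothed = np.array([np.median(padded[i:i + win])
                         for i in range(len(posteriors))])
    return (smoothed >= threshold).astype(int)

# A noisy posterior track: mostly speech with one spurious dip.
p = np.array([0.9] * 10 + [0.1] + [0.9] * 10)
labels = smooth_sad(p, win=5)   # the dip is smoothed away
```

A real WFST decoder would additionally enforce minimum segment durations and allow stream-specific transition costs; the median filter only removes isolated outlier frames.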


Paper Details

Authors:
Lukas Mateju, Petr Cerva, Jindrich Zdansky, Jiri Malek
Submitted On:
28 February 2017 - 5:04am

Document Files

poster.pdf



[1] Lukas Mateju, Petr Cerva, Jindrich Zdansky, Jiri Malek, "Speech Activity Detection in Online Broadcast Transcription Using Deep Neural Networks and Weighted Finite State Transducers", IEEE SigPort, 2017. [Online]. Available: http://sigport.org/1495. Accessed: Sep. 21, 2017.

ON DNN POSTERIOR PROBABILITY COMBINATION IN MULTI-STREAM SPEECH RECOGNITION FOR REVERBERANT ENVIRONMENTS


A multi-stream framework with deep neural network (DNN) classifiers has been applied in this paper to improve automatic speech recognition (ASR) performance in environments with different reverberation characteristics. We propose a room parameter estimation model to determine the stream weights for DNN posterior probability combination with the aim of obtaining reliable log-likelihoods for decoding. The model is implemented by training a multi-layer
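
The combination step described above can be sketched as a weighted sum of per-stream posterior matrices, taken into the log domain to produce scores for the decoder. This is a minimal sketch under the assumption of a simple linear combination; the function name `combine_posteriors` and the example weights are illustrative, and the paper's room-parameter estimator that produces the weights is not reproduced.

```python
import numpy as np

def combine_posteriors(streams, weights):
    """Weighted linear combination of per-stream DNN posteriors.

    streams: list of (frames, states) posterior matrices, one per stream.
    weights: one weight per stream, assumed to sum to 1 (e.g. produced by
             a room-parameter estimation model as in the paper).
    Returns log-domain scores suitable for decoding.
    """
    combined = sum(w * p for w, p in zip(weights, streams))
    return np.log(combined + 1e-10)  # small floor avoids log(0)

# Two streams disagreeing mildly over two frames and two states.
s1 = np.array([[0.7, 0.3], [0.2, 0.8]])
s2 = np.array([[0.6, 0.4], [0.4, 0.6]])
scores = combine_posteriors([s1, s2], [0.5, 0.5])
```

In a hybrid system these log-posteriors would still be divided by log state priors to obtain scaled log-likelihoods before decoding.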

Paper Details

Authors:
Feifei Xiong, Stefan Goetze, Bernd T. Meyer
Submitted On:
28 February 2017 - 2:30am

Document Files

poster_icassp17_xiongetal.pdf



[1] Feifei Xiong, Stefan Goetze, Bernd T. Meyer, "ON DNN POSTERIOR PROBABILITY COMBINATION IN MULTI-STREAM SPEECH RECOGNITION FOR REVERBERANT ENVIRONMENTS", IEEE SigPort, 2017. [Online]. Available: http://sigport.org/1482. Accessed: Sep. 21, 2017.

Statistical Normalisation of Phase-based Feature Representation for Robust Speech Recognition


In earlier work we have proposed a source-filter decomposition of speech through phase-based processing. The decomposition leads to novel speech features that are extracted from the filter component of the phase spectrum. This paper analyses this spectrum and the proposed representation by evaluating statistical properties at various points along the parametrisation pipeline. We show that the speech phase spectrum has a bell-shaped distribution, which is in contrast to the uniform assumption that is usually made. It is demonstrated that

Paper Details

Authors:
Erfan Loweimi, Jon Barker, Thomas Hain
Submitted On:
27 February 2017 - 3:08pm

Document Files

ICASSP2017_0.pdf



[1] Erfan Loweimi, Jon Barker, Thomas Hain, "Statistical Normalisation of Phase-based Feature Representation for Robust Speech Recognition", IEEE SigPort, 2017. [Online]. Available: http://sigport.org/1446. Accessed: Sep. 21, 2017.

A Speaker-Dependent Deep Learning Approach to Joint Speech Separation and Acoustic Modeling for Multi-Talker Automatic Speech Recognition


We propose a novel speaker-dependent (SD) approach to joint training of deep neural networks (DNNs) with an explicit speech separation structure for multi-talker speech recognition in a single-channel setting. First, a multi-condition training strategy is designed for an SD-DNN recognizer in multi-talker scenarios, which significantly reduces decoding runtime and improves recognition accuracy over approaches that use speaker-independent DNN models with a complicated joint decoding framework.

Paper Details

Authors:
Yan-Hui Tu, Jun Du, Li-Rong Dai, Chin-Hui Lee
Submitted On:
15 October 2016 - 2:48am

Document Files

Yanhui_ISCSLP2016_oral.pdf



[1] Yan-Hui Tu, Jun Du, Li-Rong Dai, Chin-Hui Lee, "A Speaker-Dependent Deep Learning Approach to Joint Speech Separation and Acoustic Modeling for Multi-Talker Automatic Speech Recognition", IEEE SigPort, 2016. [Online]. Available: http://sigport.org/1216. Accessed: Sep. 21, 2017.

Vector Taylor Series Expansion with Auditory Masking for Noise Robust Speech Recognition


In this paper, we address the problem of speech recognition in the presence of additive noise. We investigate the applicability and efficacy of auditory masking in devising a robust front end for noisy features. This is achieved by introducing a masking factor into the Vector Taylor Series (VTS) equations. The resultant first-order VTS approximation is used to compensate the parameters of a clean speech model, and a Minimum Mean Square Error (MMSE) estimate is used to estimate the clean speech
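
The VTS equations being modified start from the standard log-spectral mismatch function, y = x + log(1 + exp(n − x)), i.e. log(eˣ + eⁿ) for clean log-energy x and noise log-energy n. The sketch below implements only this standard mismatch; the auditory masking factor the paper introduces into these equations is not reproduced, and `vts_mismatch` is an illustrative name.

```python
import numpy as np

def vts_mismatch(x_log, n_log):
    """Standard VTS mismatch function in the log-spectral domain:

        y = x + log(1 + exp(n - x)) = log(e^x + e^n)

    i.e. the noisy log-energy is the log of the summed clean and noise
    energies. The paper's contribution is a masking factor inserted into
    this relation (not shown here).
    """
    return x_log + np.log1p(np.exp(n_log - x_log))

# Clean log-energy 1.0 plus noise log-energy 0.0 -> log(e + 1)
y = vts_mismatch(np.array([1.0]), np.array([0.0]))
```

A first-order VTS expansion of this function around the clean-speech model means then yields the Jacobians used to compensate the model parameters, as the abstract describes.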

Paper Details

Authors:
Biswajit Das, Ashish Panda
Submitted On:
14 October 2016 - 8:15am

Document Files

Paper17_BD.pdf



[1] Biswajit Das, Ashish Panda, "Vector Taylor Series Expansion with Auditory Masking for Noise Robust Speech Recognition", IEEE SigPort, 2016. [Online]. Available: http://sigport.org/1193. Accessed: Sep. 21, 2017.


Employing Median Filtering to Enhance the Complex-valued Acoustic Spectrograms in Modulation Domain for Noise-robust Speech Recognition

Paper Details

Authors:
Hsin-Ju Hsieh, Berlin Chen, Jeih-weih Hung
Submitted On:
14 October 2016 - 6:28am

Document Files

ISCSLP_2016.pdf



[1] Hsin-Ju Hsieh, Berlin Chen, Jeih-weih Hung, "Employing Median Filtering to Enhance the Complex-valued Acoustic Spectrograms in Modulation Domain for Noise-robust Speech Recognition", IEEE SigPort, 2016. [Online]. Available: http://sigport.org/1189. Accessed: Sep. 21, 2017.

Two-Stage Noise Aware Training Using Asymmetric Deep Denoising Autoencoder


Ever since the deep neural network (DNN)-based acoustic model appeared, the recognition performance of automatic speech recognition has been greatly improved. Building on this success, various research efforts on DNN-based techniques for noise robustness are also in progress. Among these approaches, the noise-aware training (NAT) technique, which aims to improve the inherent robustness of DNNs using noise estimates, has shown remarkable performance. However, despite this strong performance, we cannot be certain whether NAT is an optimal method for sufficiently utilizing the inherent robustness of DNNs.
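
The NAT idea being improved upon amounts to augmenting every input frame with a noise estimate so the DNN can condition on it. A minimal sketch of that input augmentation follows; the function name `nat_features` and the first-frames noise estimate are illustrative assumptions, and the paper's two-stage asymmetric denoising autoencoder is not reproduced.

```python
import numpy as np

def nat_features(frames, noise_estimate):
    """Noise-aware training input: append a per-utterance noise estimate
    to every frame's feature vector.

    frames:         (T, D) acoustic features
    noise_estimate: (D,) vector, e.g. the mean of the leading frames,
                    which are assumed to contain only noise
    Returns a (T, 2*D) augmented feature matrix.
    """
    T = frames.shape[0]
    noise = np.tile(noise_estimate, (T, 1))
    return np.concatenate([frames, noise], axis=1)

x = np.ones((5, 3))              # toy 5-frame, 3-dim feature matrix
n = x[:2].mean(axis=0)           # crude noise estimate from leading frames
aug = nat_features(x, n)         # (5, 6): features plus noise context
```

The augmented matrix is what the acoustic-model DNN is trained on; the noise columns stay constant within an utterance but vary across utterances.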

Paper Details

Authors:
Shin Jae Kang, Woo Hyun Kang, Nam Soo Kim
Submitted On:
17 March 2016 - 1:45am

Document Files

ICASSP2016_포스터_이강현_그래프2.pdf



[1] Shin Jae Kang, Woo Hyun Kang, Nam Soo Kim, "Two-Stage Noise Aware Training Using Asymmetric Deep Denoising Autoencoder", IEEE SigPort, 2016. [Online]. Available: http://sigport.org/734. Accessed: Sep. 21, 2017.

SPEECH EMOTION RECOGNITION USING TRANSFER NON-NEGATIVE MATRIX FACTORIZATION


In practical situations, emotional speech utterances are often collected from different devices and under different conditions, which clearly affects recognition performance. To address this issue, this paper presents a novel transfer non-negative matrix factorization (TNMF) method for cross-corpus speech emotion recognition. First, the NMF algorithm is adopted to learn a latent common feature space for the source and target datasets.
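
The NMF building block referred to above factorizes a non-negative data matrix V into non-negative factors W and H with V ≈ W·H. The sketch below implements only this plain factorization via Lee–Seung multiplicative updates; the transfer coupling between source- and target-corpus factors that defines TNMF is not reproduced, and the function name `nmf` is illustrative.

```python
import numpy as np

def nmf(V, rank, iters=200, seed=0):
    """Basic multiplicative-update NMF minimising ||V - W H||_F.

    Multiplicative updates keep W and H non-negative given a positive
    initialisation. TNMF couples two such factorisations (source and
    target corpora) through a shared latent space; only plain NMF is
    shown here.
    """
    rng = np.random.default_rng(seed)
    W = rng.random((V.shape[0], rank)) + 1e-3
    H = rng.random((rank, V.shape[1])) + 1e-3
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + 1e-10)   # update activations
        W *= (V @ H.T) / (W @ H @ H.T + 1e-10)   # update basis
    return W, H

V = np.abs(np.random.default_rng(1).random((8, 20)))  # toy feature matrix
W, H = nmf(V, rank=3)
err = np.linalg.norm(V - W @ H)   # reconstruction error after fitting
```

In the cross-corpus setting, rows of H (or W, depending on orientation) serve as the shared latent features on which the emotion classifier is trained.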

Paper Details

Authors:
Peng Song, Shifeng Ou, Wenming Zheng, Yun Jin, Li Zhao
Submitted On:
18 March 2016 - 10:46pm

Document Files

SRC_TNMF_PengSong.pdf



[1] Peng Song, Shifeng Ou, Wenming Zheng, Yun Jin, Li Zhao, "SPEECH EMOTION RECOGNITION USING TRANSFER NON-NEGATIVE MATRIX FACTORIZATION", IEEE SigPort, 2016. [Online]. Available: http://sigport.org/700. Accessed: Sep. 21, 2017.