
Audio and Acoustic Signal Processing

A multi-channel/multi-speaker interactive 3D Audio-Visual Speech Corpus in Mandarin


This paper presents a multi-channel/multi-speaker 3D audio-visual
corpus for Mandarin continuous speech recognition and
related fields, such as speech visualization and speech synthesis.
The corpus comprises 24 speakers and about 18k utterances,
roughly 20 hours in total. For each utterance, the audio
streams were recorded by two professional microphones, one in the
near field and one in the far field, while a marker-based 3D
facial motion capture system with six infrared cameras …

Paper Details

Authors:
Jun Yu, Rongfeng Su, Lan Wang, Wenpeng Zhou
Submitted On:
14 October 2016 - 10:40am

Document Files

3D Audio-Visual Speech Corpus in Mandarin


[1] Jun Yu, Rongfeng Su, Lan Wang, Wenpeng Zhou, "A multi-channel/multi-speaker interactive 3D Audio-Visual Speech Corpus in Mandarin", IEEE SigPort, 2016. [Online]. Available: http://sigport.org/1200. Accessed: Jun. 29, 2017.

Long Short-term Memory Recurrent Neural Network based Segment Features for Music Genre Classification


In conventional frame-feature-based music genre
classification methods, the audio data is represented by
independent frames and the sequential nature of audio is entirely
ignored. If this sequential knowledge is well modeled and
combined, classification performance can be significantly
improved. The long short-term memory (LSTM) recurrent
neural network (RNN), which uses a set of special memory
cells to model long-range feature sequences, has been
successfully used for many sequence labeling and sequence …
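As an illustration of how an LSTM can turn a variable-length sequence of frame features into a fixed-size segment-level feature, here is a minimal NumPy sketch of a single-layer LSTM forward pass. The weights `Wx`, `Wh`, `b` are random placeholders, not the authors' trained model, and the 13-dimensional frames merely stand in for per-frame audio features such as MFCCs.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_segment_feature(frames, Wx, Wh, b):
    """Run a single-layer LSTM over a (T, d) sequence of frame features
    and return the final hidden state as a fixed-size segment feature."""
    H = Wh.shape[0]                       # Wx: (d, 4H), Wh: (H, 4H), b: (4H,)
    h, c = np.zeros(H), np.zeros(H)
    for frame in frames:
        z = frame @ Wx + h @ Wh + b
        i, f, o, g = np.split(z, 4)       # input/forget/output gates + candidate
        i, f, o = sigmoid(i), sigmoid(f), sigmoid(o)
        c = f * c + i * np.tanh(g)        # memory cell carries long-range context
        h = o * np.tanh(c)
    return h

rng = np.random.default_rng(0)
d, H, T = 13, 8, 50                       # e.g. 13 MFCC-like features x 50 frames
Wx = rng.normal(scale=0.1, size=(d, 4 * H))
Wh = rng.normal(scale=0.1, size=(H, 4 * H))
b = np.zeros(4 * H)
segment_vec = lstm_segment_feature(rng.normal(size=(T, d)), Wx, Wh, b)
```

The final hidden state `segment_vec` summarizes the whole frame sequence and could then feed an ordinary genre classifier.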

Paper Details

Authors:
Jia Dai, Shan Liang, Wei Xue, Chongjia Ni, Wenju Liu
Submitted On:
14 October 2016 - 9:18am

Document Files

ISCSLP2016_JiaDai_pptA4.pdf


[1] Jia Dai, Shan Liang, Wei Xue, Chongjia Ni, Wenju Liu, "Long Short-term Memory Recurrent Neural Network based Segment Features for Music Genre Classification", IEEE SigPort, 2016. [Online]. Available: http://sigport.org/1195. Accessed: Jun. 29, 2017.

Mismatched Training Data Enhancement for Automatic Recognition of Children’s Speech using DNN-HMM


The increasing profusion of commercial automatic speech recognition applications has been driven by big-data techniques that make use of high-quality labelled speech datasets. Children’s speech displays greater time- and frequency-domain variability than typical adult speech, lacks the depth and breadth of training material, and presents difficulties relating to capture quality. All of these factors reduce the achievable performance of systems that recognise children’s speech.

Paper Details

Authors:
Ian McLoughlin, Wu Guo, Lirong Dai
Submitted On:
14 October 2016 - 5:48am

Document Files

ISCSLP_poster(MengjieQian) .pdf


[1] Ian McLoughlin, Wu Guo, Lirong Dai, "Mismatched Training Data Enhancement for Automatic Recognition of Children’s Speech using DNN-HMM", IEEE SigPort, 2016. [Online]. Available: http://sigport.org/1186. Accessed: Jun. 29, 2017.

Detection of Mood Disorder Using Speech Emotion Profiles and LSTM


In mood disorder diagnosis, bipolar disorder (BD) patients are often misdiagnosed with unipolar depression (UD) on initial presentation. An accurate distinction between BD and UD is crucial for a correct and early diagnosis, leading to improvements in treatment and course of illness. To deal with this misdiagnosis problem, this study elicited subjects’ emotions using six emotion-eliciting video clips; after watching each clip, the subjects’ speech responses were collected during an interview with a clinician.

Paper Details

Authors:
Tsung-Hsien Yang, Kun-Yi Huang, and Ming-Hsiang Su
Submitted On:
14 October 2016 - 9:30pm

Document Files

ISCSLP-2016-1014-1.pdf


[1] Tsung-Hsien Yang, Kun-Yi Huang, and Ming-Hsiang Su, "Detection of Mood Disorder Using Speech Emotion Profiles and LSTM", IEEE SigPort, 2016. [Online]. Available: http://sigport.org/1183. Accessed: Jun. 29, 2017.

The Correlation Between Signal Distance and Consonant Pronunciation in Mandarin Words


In spoken Mandarin, some consonant and vowel pairs are hard to distinguish and pronounce clearly, even for some native speakers. This study investigates the signal distance between consonants compared in pairs, from a signal-processing point of view, to reveal the correlation between signal distance and consonant pronunciation. Several popular objective speech-quality measures are applied in a novel way to obtain the signal distance.
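The abstract does not list the specific measures used, but one common objective measure that yields such a signal distance is the log-spectral distance (LSD). A minimal NumPy sketch, with synthetic noise bursts standing in for recorded consonant tokens:

```python
import numpy as np

def log_spectral_distance(x, y, n_fft=512, eps=1e-10):
    """RMS difference (in dB) between the log power spectra of two signals."""
    X = np.abs(np.fft.rfft(x, n_fft)) ** 2
    Y = np.abs(np.fft.rfft(y, n_fft)) ** 2
    diff = 10 * np.log10(X + eps) - 10 * np.log10(Y + eps)
    return float(np.sqrt(np.mean(diff ** 2)))

# Toy stand-ins for a consonant pair: a noise burst and a spectrally
# tilted (roughly high-pass filtered) version of the same burst.
rng = np.random.default_rng(1)
a = rng.normal(size=2048)
b = np.convolve(a, [1.0, -0.9], mode="same")
same = log_spectral_distance(a, a)   # 0.0 for identical signals
pair = log_spectral_distance(a, b)   # grows with spectral difference
```

A larger LSD between two consonant recordings indicates a larger spectral difference, i.e. a pair that should be easier to tell apart acoustically.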

Paper Details

Authors:
Huijun Ding, Chenxi XIE, Lei ZENG, Yang XU, Guo DAN
Submitted On:
14 October 2016 - 12:32am

Document Files

ISCSLP Poster_Correlation Between Signal Distance.pdf


[1] Huijun Ding, Chenxi XIE, Lei ZENG, Yang XU, Guo DAN, "The Correlation Between Signal Distance and Consonant Pronunciation in Mandarin Words", IEEE SigPort, 2016. [Online]. Available: http://sigport.org/1180. Accessed: Jun. 29, 2017.

Pronunciation Error Detection using DNN Articulatory Model based on Multi-lingual and Multi-task Learning

Paper Details

Submitted On:
13 October 2016 - 9:31am

Document Files

poster-v2.pdf


[1] "Pronunciation Error Detection using DNN Articulatory Model based on Multi-lingual and Multi-task Learning", IEEE SigPort, 2016. [Online]. Available: http://sigport.org/1175. Accessed: Jun. 29, 2017.

Spatial Co-variation of Lip and Tongue at Strong and Weak Syllables


Speech production requires coordinated control among different articulatory organs. During natural speech, articulatory co-variation is more common than compensation, but few studies support this view. In this study, the coordination of lip and tongue articulation during speech was examined using articulatory data, with native speakers of Chinese as subjects. The speech materials consisted of short Chinese sentences containing words with the cardinal vowels at different locations, in sentences with and without emphasis.

Paper Details

Authors:
Ju Zhang, Kiyoshi Honda, Jianguo Wei, Jianrong Wang, Jianwu Dang
Submitted On:
13 October 2016 - 2:06am

Document Files

ISCSLP2016_POSTER


[1] Ju Zhang, Kiyoshi Honda, Jianguo Wei, Jianrong Wang, Jianwu Dang, "Spatial Co-variation of Lip and Tongue at Strong and Weak Syllables", IEEE SigPort, 2016. [Online]. Available: http://sigport.org/1166. Accessed: Jun. 29, 2017.

The Examination of the Relationship between Perception and Production of Mandarin tone of Kazak Students


This study examines the relationship between the
perception and production of Mandarin tones by Kazak minority
learners from China. An eight-day perceptual training course
on Mandarin tones was designed. Perception was assessed by means
of an identification test. Production data were collected at both
pretest and post-test, and evaluated by native speakers of
Mandarin Chinese. The results from perception at pretest
and post-test reveal that training Kazak learners to perceive
Mandarin tones is effective, with …


Paper Details

Authors:
Zihou MENG
Submitted On:
13 October 2016 - 1:42am

Document Files

ISCSLP168.pdf


[1] Zihou MENG, "The Examination of the Relationship between Perception and Production of Mandarin tone of Kazak Students", IEEE SigPort, 2016. [Online]. Available: http://sigport.org/1165. Accessed: Jun. 29, 2017.

Study on the Relation of Fundamental and Formant Frequencies for Affective Speech Synthesis


The Directions Into Velocities of Articulators (DIVA) model is a self-adaptive neural network model that controls the movements of a simulated vocal tract to produce words, syllables, or phonemes. However, the DIVA model lacks emotion functions. To implement an emotion function in the DIVA model, we investigate the process of affective speech production based on the combination of fundamental frequency (F0) and formant frequencies, as well as the relations between F0 and the formants of emotional speech.
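For readers unfamiliar with the two quantities involved, F0 and formant frequencies can be estimated with standard textbook methods: autocorrelation peak picking for F0, and the roots of an LPC polynomial for formants. The following is a generic NumPy sketch on synthetic signals, not the authors' procedure; all parameter values are illustrative.

```python
import numpy as np

def f0_autocorr(x, fs, fmin=60.0, fmax=400.0):
    """Estimate F0 (Hz) of a voiced frame via the autocorrelation peak."""
    x = x - x.mean()
    ac = np.correlate(x, x, mode="full")[len(x) - 1:]   # lags 0..N-1
    lo, hi = int(fs / fmax), int(fs / fmin)             # plausible pitch lags
    lag = lo + np.argmax(ac[lo:hi])
    return fs / lag

def formants_lpc(x, fs, order=10):
    """Estimate resonance (formant) frequencies from LPC polynomial roots."""
    x = x * np.hamming(len(x))
    r = np.correlate(x, x, mode="full")[len(x) - 1:len(x) + order]
    R = np.array([[r[abs(i - j)] for j in range(order)] for i in range(order)])
    a = np.linalg.solve(R, r[1:order + 1])              # predictor coefficients
    roots = np.roots(np.concatenate(([1.0], -a)))
    roots = roots[(roots.imag > 0.01) & (np.abs(roots) > 0.9)]  # sharp poles only
    return np.sort(np.angle(roots) * fs / (2 * np.pi))

fs = 8000
t = np.arange(1024) / fs
rng = np.random.default_rng(2)
tone = np.sin(2 * np.pi * 120 * t)                      # synthetic 120 Hz "voicing"
vowel = (np.sin(2 * np.pi * 500 * t) + np.sin(2 * np.pi * 1500 * t)
         + 0.01 * rng.normal(size=t.size))              # two synthetic "formants"
f0 = f0_autocorr(tone, fs)                              # close to 120 Hz
peaks = formants_lpc(vowel, fs)                         # near 500 and 1500 Hz
```

Studying how F0 and the formant pattern co-vary across emotional renditions of the same utterance is the kind of relation the paper investigates.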

Paper Details

Authors:
Bogu Li, Zhilei Liu, Jianwu Dang
Submitted On:
11 October 2016 - 12:11am

Document Files

poster


[1] Bogu Li, Zhilei Liu, Jianwu Dang, "Study on the Relation of Fundamental and Formant Frequencies for Affective Speech Synthesis", IEEE SigPort, 2016. [Online]. Available: http://sigport.org/1155. Accessed: Jun. 29, 2017.

A post-thyroidectomy voice quality study in patients suffering or not from Recurrent Laryngeal paralysis


The main object of this study is voice quality after total thyroidectomy (complete removal of the thyroid gland) or isthmolobectomy (removal of the right or left half of the gland). These operations often degrade voice quality, permanently or temporarily. Voice quality is studied using aerodynamic cues: from an aerodynamic point of view, oral airflow (Oaf) and maximum phonation time (TMP) were observed.

Paper Details

Authors:
Ming XIU, Camille FAUTH, Béatrice VAXELAIRE, Jean-François RODIER, Pierre-Philippe Volkmar, Rudolph SOCK
Submitted On:
7 October 2016 - 6:52am

Document Files

Für Tianjin 2016.pdf


[1] Ming XIU, Camille FAUTH, Béatrice VAXELAIRE, Jean-François RODIER, Pierre-Philippe Volkmar, Rudolph SOCK, "A post-thyroidectomy voice quality study in patients suffering or not from Recurrent Laryngeal paralysis", IEEE SigPort, 2016. [Online]. Available: http://sigport.org/1151. Accessed: Jun. 29, 2017.
