CONTINUOUS ULTRASOUND BASED TONGUE MOVEMENT VIDEO SYNTHESIS FROM SPEECH

Citation Author(s):
Jianrong Wang, Yalong Yang, Jianguo Wei, Ju Zhang
Submitted by:
Yalong Yang
Last updated:
29 March 2016 - 10:25pm
Document Type:
Poster
Document Year:
2016
Event:
Presenters:
Ju Zhang
Paper Code:
IVMSP-P9.5

The movement of the tongue plays an important role in pronunciation. Visualizing tongue movement can improve speech intelligibility and also help in learning a second language. However, little research has investigated this topic. In this paper, a framework for synthesizing continuous ultrasound tongue-movement video from speech is presented. Two different mapping methods are introduced as the core components of the framework. Objective evaluation and subjective opinions show that the Gaussian Mixture Model (GMM) based method gives better results for synthesizing static images, while the Vector Quantization (VQ) based method produces more stable continuous video. Participants in the evaluation also stated that the results of both methods are visually understandable.
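The abstract does not detail either mapping method, but the VQ-based approach can be understood as codebook lookup over paired training data: cluster the speech features, pair each cluster with a representative ultrasound frame, and at synthesis time emit the frame paired with each input frame's nearest codeword. The sketch below is a minimal illustration under that assumption; the feature type, dimensionality, codebook size, and function names are all hypothetical, not taken from the paper.

```python
import numpy as np

def train_vq_mapping(speech_feats, images, n_codewords=8, n_iter=20, seed=0):
    """Hypothetical VQ mapping: k-means codebook on speech features,
    each codeword paired with the mean ultrasound frame of its cluster.
    speech_feats: (N, D) array; images: (N, P) flattened frames."""
    rng = np.random.default_rng(seed)
    centers = speech_feats[rng.choice(len(speech_feats), n_codewords, replace=False)]
    labels = np.zeros(len(speech_feats), dtype=int)
    for _ in range(n_iter):
        # assign each speech frame to its nearest codeword
        d = np.linalg.norm(speech_feats[:, None] - centers[None], axis=2)
        labels = d.argmin(axis=1)
        for k in range(n_codewords):
            if np.any(labels == k):
                centers[k] = speech_feats[labels == k].mean(axis=0)
    # pair each codeword with the mean image of its cluster
    paired = np.stack([images[labels == k].mean(axis=0) if np.any(labels == k)
                       else images.mean(axis=0) for k in range(n_codewords)])
    return centers, paired

def synthesize(speech_feats, centers, paired):
    """Map each speech frame to the image paired with its nearest
    codeword, yielding a frame sequence (the synthesized video)."""
    d = np.linalg.norm(speech_feats[:, None] - centers[None], axis=2)
    return paired[d.argmin(axis=1)]
```

A GMM-based variant would replace the hard nearest-codeword lookup with a posterior-weighted regression over mixture components, which tends to produce smoother per-frame estimates; the hard lookup above, by always emitting stored cluster representatives, illustrates why a VQ mapping can yield more temporally stable output.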
