
3D VISUAL SPEECH ANIMATION USING 2D VIDEOS

Citation Author(s):
Rabab Algadhy, Yoshihiko Gotoh, Steve Maddock
Submitted by:
Rabab Algadhy
Last updated:
8 May 2019 - 8:26am
Document Type:
Poster
Document Year:
2019
Event:
Presenters:
Rabab Algadhy
Paper Code:
2318
 

In visual speech animation, lip motion accuracy is of paramount importance for speech intelligibility, especially for the hard of hearing or foreign language learners. We present an approach to visual speech animation that uses lip motion tracked in front-view 2D videos of a real speaker to drive the lip motion of a synthetic 3D head. The approach makes use of a 3D morphable model (3DMM) built from synthetic 3D head poses, with corresponding landmarks identified in both the 2D videos and the 3DMM. We show that building the 3DMM from a wider range of synthetic head poses covering different phoneme intensities, and producing the initial neutral synthetic 3D head from a combination of front and side photographs of the real speaker rather than front photographs alone, gives better animation results when compared against ground truth consisting of front-view 2D videos of real speakers.
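The poster abstract does not give implementation details, so the following is only a minimal sketch of one common way to drive a linear 3DMM from tracked 2D lip landmarks: a regularised least-squares fit of model weights under a weak-perspective, front-view projection. All function names, array layouts, and parameters here are illustrative assumptions, not the authors' actual method.

```python
import numpy as np


def fit_3dmm_weights(landmarks_2d, mean_shape, basis, scale=1.0, reg=1e-3):
    """Estimate linear 3DMM weights from tracked 2D lip landmarks (illustrative sketch).

    landmarks_2d : (L, 2) lip landmarks tracked in one front-view video frame
    mean_shape   : (L, 3) corresponding 3D landmarks on the neutral 3DMM head
    basis        : (K, L, 3) linear 3DMM basis, e.g. built from synthetic head
                   poses at different phoneme intensities (assumed layout)
    scale        : weak-perspective scale aligning model units to pixels
    reg          : Tikhonov regularisation pulling weights towards neutral
    """
    num_landmarks = landmarks_2d.shape[0]
    num_components = basis.shape[0]

    # Front view with weak perspective: only the x and y coordinates of each
    # 3D landmark contribute to the image-plane positions.
    A = scale * basis[:, :, :2].reshape(num_components, 2 * num_landmarks).T  # (2L, K)
    b = (landmarks_2d - scale * mean_shape[:, :2]).reshape(2 * num_landmarks)

    # Regularised normal equations: (A^T A + reg * I) w = A^T b
    w = np.linalg.solve(A.T @ A + reg * np.eye(num_components), A.T @ b)
    return w


def apply_weights(weights, mean_mesh, mesh_basis):
    """Pose the full synthetic 3D head mesh with the fitted weights."""
    # mean_mesh: (V, 3), mesh_basis: (K, V, 3) -> posed mesh (V, 3)
    return mean_mesh + np.tensordot(weights, mesh_basis, axes=1)
```

In such a pipeline the fit would be repeated per video frame, so the tracked lip motion in the 2D video produces a weight trajectory that animates the lips of the synthetic 3D head.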
