
Can DNNs Learn to Lipread Full Sentences?

Citation Author(s):
George Sterpu, Christian Saam, Naomi Harte
Submitted by:
George Sterpu
Last updated:
8 October 2018 - 1:50am
Document Type:
Presentation Slides
Document Year:
2018
Event:
Presenters:
George Sterpu
Paper Code:
MA.L1.4

Finding visual features and suitable models for lipreading tasks more complex than a well-constrained vocabulary has proven challenging. This paper explores state-of-the-art Deep Neural Network architectures for lipreading based on a Sequence to Sequence Recurrent Neural Network. We report results for both hand-crafted and 2D/3D Convolutional Neural Network visual front-ends, online monotonic attention, and a joint Connectionist Temporal Classification-Sequence-to-Sequence loss. The system is evaluated on the publicly available TCD-TIMIT dataset, with 59 speakers and a vocabulary of over 6000 words. Results show a major improvement over a Hidden Markov Model framework. A fuller analysis of performance across visemes demonstrates that the network is not merely learning a language model, but is actually learning to lipread.
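For readers unfamiliar with the joint Connectionist Temporal Classification and Sequence-to-Sequence training mentioned in the abstract, the sketch below shows one common way such a combined loss can be formed. It is a minimal PyTorch-style example, not the authors' implementation: the class name, the mixing weight lam, and the tensor shapes are illustrative assumptions.

```python
import torch
import torch.nn as nn

class JointCTCSeq2SeqLoss(nn.Module):
    """Weighted sum of a CTC loss on encoder outputs and a cross-entropy
    (sequence-to-sequence) loss on attention-decoder outputs.
    The mixing weight `lam` is a hyperparameter, not a value from the paper."""

    def __init__(self, blank_id: int, pad_id: int, lam: float = 0.5):
        super().__init__()
        self.ctc = nn.CTCLoss(blank=blank_id, zero_infinity=True)
        self.ce = nn.CrossEntropyLoss(ignore_index=pad_id)
        self.lam = lam

    def forward(self, enc_log_probs, enc_lens, dec_logits, targets, target_lens):
        # enc_log_probs: (T, B, C) log-probabilities from the encoder / CTC head
        # dec_logits:    (B, U, C) logits from the attention decoder
        # targets:       (B, U) ground-truth character indices (padded)
        ctc_loss = self.ctc(enc_log_probs, targets, enc_lens, target_lens)
        ce_loss = self.ce(dec_logits.reshape(-1, dec_logits.size(-1)),
                          targets.reshape(-1))
        return self.lam * ctc_loss + (1.0 - self.lam) * ce_loss
```

In this style of hybrid training, the CTC term encourages a roughly monotonic alignment between video frames and output characters, while the attention decoder models dependencies within the character sequence; combining the two is a standard technique borrowed from hybrid CTC/attention speech recognition.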
