Can DNNs Learn to Lipread Full Sentences?
- Citation Author(s):
- Submitted by:
- George Sterpu
- Last updated:
- 8 October 2018 - 1:50am
- Document Type:
- Presentation Slides
- Document Year:
- 2018
- Event:
- Presenters:
- George Sterpu
- Paper Code:
- MA.L1.4
- Categories:
Finding visual features and suitable models for lipreading tasks that are more complex than a well-constrained vocabulary has proven challenging. This paper explores state-of-the-art Deep Neural Network architectures for lipreading based on a Sequence to Sequence Recurrent Neural Network. We report results for both hand-crafted and 2D/3D Convolutional Neural Network visual front-ends, online monotonic attention, and a joint Connectionist Temporal Classification-Sequence-to-Sequence loss. The system is evaluated on the publicly available TCD-TIMIT dataset, with 59 speakers and a vocabulary of over 6000 words. Results show a major improvement over a Hidden Markov Model framework. A fuller analysis of performance across visemes demonstrates that the network is not only learning the language model, but is actually learning to lipread.
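
The abstract describes three pieces working together: a CNN visual front-end, a sequence-to-sequence recurrent network, and a joint CTC/attention loss. As a rough illustration of how such pieces can be wired together, the sketch below is written in PyTorch (an assumption; the slides do not specify the framework here) and uses a small 3D-convolutional front-end, an LSTM encoder with a CTC branch, and a teacher-forced decoder with plain dot-product attention standing in for the paper's online monotonic attention. The class `LipreadingSketch`, the helper `joint_loss`, and all layer sizes are illustrative names and values, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class LipreadingSketch(nn.Module):
    """Toy CNN front-end + seq2seq lipreading model with CTC and attention
    branches. Layer sizes are illustrative, not the paper's configuration."""

    def __init__(self, num_chars, hidden=256):
        super().__init__()
        # 3D convolutional visual front-end over mouth-region crops:
        # input shape (batch, 1, time, height, width)
        self.frontend = nn.Sequential(
            nn.Conv3d(1, 32, kernel_size=(3, 5, 5), padding=(1, 2, 2)),
            nn.ReLU(),
            nn.AdaptiveAvgPool3d((None, 4, 4)),  # keep time, pool space to 4x4
        )
        self.encoder = nn.LSTM(32 * 4 * 4, hidden, batch_first=True)
        self.ctc_head = nn.Linear(hidden, num_chars + 1)  # +1 for the CTC blank
        # Attention decoder branch (plain dot-product attention here,
        # standing in for the online monotonic attention named in the abstract)
        self.embed = nn.Embedding(num_chars, hidden)
        self.decoder = nn.LSTM(2 * hidden, hidden, batch_first=True)
        self.char_head = nn.Linear(hidden, num_chars)

    def forward(self, video, targets):
        b, _, t, _, _ = video.shape
        feats = self.frontend(video)                      # (b, 32, t, 4, 4)
        feats = feats.permute(0, 2, 1, 3, 4).reshape(b, t, -1)
        enc, _ = self.encoder(feats)                      # (b, t, hidden)
        ctc_logits = self.ctc_head(enc)                   # (b, t, chars + 1)

        # Teacher forcing: previous-character embeddings attend over encoder states
        emb = self.embed(targets[:, :-1])                 # (b, u-1, hidden)
        scores = torch.bmm(emb, enc.transpose(1, 2))      # (b, u-1, t)
        context = torch.bmm(F.softmax(scores, dim=-1), enc)
        dec, _ = self.decoder(torch.cat([emb, context], dim=-1))
        char_logits = self.char_head(dec)                 # (b, u-1, chars)
        return ctc_logits, char_logits


def joint_loss(ctc_logits, char_logits, targets, input_lens, target_lens, lam=0.2):
    """Weighted sum of the CTC and cross-entropy losses (lam weights the CTC term)."""
    log_probs = ctc_logits.log_softmax(-1).transpose(0, 1)   # (t, b, C) for ctc_loss
    ctc = F.ctc_loss(log_probs, targets, input_lens, target_lens,
                     blank=ctc_logits.size(-1) - 1)
    ce = F.cross_entropy(char_logits.reshape(-1, char_logits.size(-1)),
                         targets[:, 1:].reshape(-1))
    return lam * ctc + (1.0 - lam) * ce


# Illustrative run on random tensors: 2 clips of 75 frames, 36x36 mouth crops,
# a 30-character vocabulary, and 20-character padded transcriptions.
model = LipreadingSketch(num_chars=30)
video = torch.randn(2, 1, 75, 36, 36)
targets = torch.randint(0, 30, (2, 20))
ctc_logits, char_logits = model(video, targets)
loss = joint_loss(ctc_logits, char_logits, targets,
                  input_lens=torch.tensor([75, 75]),
                  target_lens=torch.tensor([20, 20]))
loss.backward()
```

The weighting factor `lam` trades off the two objectives: the CTC branch encourages monotonic frame-level alignment while the attention decoder models character dependencies, which is the usual motivation for joint CTC/attention training.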