Key Action And Joint CTC-Attention Based Sign Language Recognition
- Submitted by: Li Haibo
- Last updated: 16 April 2020 - 11:39am
- Document Type: Poster
- Document Year: 2020
- Presenters: Liqing Gao
- Paper Code: ICASSP-5717
Sign Language Recognition (SLR) translates sign language video into natural language. In practice, sign language video contains many redundant frames, so the essential ones must be selected. However, unlike common video that depicts isolated actions, sign language video is a continuous, dense action sequence, which makes it difficult to capture the key actions corresponding to a meaningful sentence. In this paper, we propose to hierarchically search for key actions with a pyramid BiLSTM. Specifically, we first construct three BiLSTMs to model temporal relationships in the input video sequence. We then associate these BiLSTMs by searching for the salient responses within two groups of fixed-scale sliding windows, thereby capturing key actions. Additionally, in order to balance sequence alignment and temporal dependency, we propose to jointly train Connectionist Temporal Classification (CTC) and Long Short-Term Memory (LSTM). Experimental results demonstrate the effectiveness of the proposed method.
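The fixed-scale sliding-window search described above can be sketched as follows. This is a minimal illustration, not the poster's implementation: it assumes per-frame "responses" are scalar saliency scores and that, within each non-overlapping window of a fixed scale, the frame with the maximum response is kept as a key-action candidate; the function name and the example scale values are hypothetical.

```python
# Hedged sketch of fixed-scale sliding-window salient-response selection.
# Assumptions (not from the poster): responses are scalar scores per frame,
# windows are non-overlapping, and one peak frame is kept per window.

def select_key_frames(responses, scale):
    """Return the index of the most salient frame in each fixed-scale window."""
    keys = []
    for start in range(0, len(responses), scale):
        window = responses[start:start + scale]
        # index (within the full sequence) of the peak response in this window
        keys.append(start + max(range(len(window)), key=window.__getitem__))
    return keys

# Two groups of windows at different fixed scales, echoing the paper's
# two-group design (the scale values here are illustrative).
scores = [0.1, 0.9, 0.3, 0.2, 0.8, 0.4, 0.7, 0.1]
print(select_key_frames(scores, scale=2))  # -> [1, 2, 4, 6]
print(select_key_frames(scores, scale=4))  # -> [1, 4]
```

Using two window scales yields candidate key frames at two temporal granularities, which is one simple way to realize the hierarchical search the abstract describes.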