RECOGNIZING HIGHLY VARIABLE AMERICAN SIGN LANGUAGE IN VIRTUAL REALITY
- Submitted by:
- Md Shahinur Alam
- Last updated:
- 1 June 2023 - 12:27am
- Document Type:
- Presentation Slides
- Document Year:
- 2023
- Presenters:
- Md Shahinur Alam
Recognizing signs in virtual reality (VR) is challenging; here, we developed an American Sign Language (ASL) recognition system in a VR environment. We collected a dataset of 2,500 instances of the ASL numerical digits (0-10) and 500 instances of the ASL sign for TEA from 10 participants using an Oculus Quest 2. Participants produced ASL signs naturally, resulting in significant variability in location, orientation, duration, and motion trajectory. Additionally, the ten signers in this initial study were diverse in age, sex, ASL proficiency, and hearing status, with most being deaf lifelong ASL users. We report the accuracy results of the recognition model trained on this dataset and highlight three primary contributions of this work: 1) intentionally using highly variable ASL production, 2) involving deaf ASL signers on the project team, and 3) analyzing the typical confusions of the recognition system.
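The abstract does not specify the recognition model, but the general pipeline it describes (hand-tracking features from a VR headset mapped to sign labels) can be sketched minimally. The sketch below is purely illustrative and every name, feature shape, and classifier choice is an assumption, not the authors' method: it represents each sign sample as a flattened vector of 3-D hand-joint coordinates and classifies a query with a 1-nearest-neighbour rule.

```python
import math
from typing import List, Tuple

# Hypothetical representation: each sample pairs a flattened vector of
# 3-D hand-joint coordinates (as a VR hand tracker might report) with
# the ASL sign label it was produced as.
Sample = Tuple[List[float], str]

def distance(a: List[float], b: List[float]) -> float:
    """Euclidean distance between two equal-length feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def classify(query: List[float], train: List[Sample]) -> str:
    """1-nearest-neighbour sign prediction over the training samples."""
    return min(train, key=lambda s: distance(query, s[0]))[1]

# Toy data: two fake "signs" in a 3-feature space (not the real dataset).
train = [([0.0, 0.0, 0.0], "ONE"), ([1.0, 1.0, 1.0], "TWO")]
print(classify([0.1, 0.0, 0.1], train))  # -> ONE
```

A real system would use many more joints per frame, handle the variable sign duration the abstract highlights (e.g. by resampling trajectories to a fixed length), and train a learned model rather than a nearest-neighbour lookup; this sketch only makes the feature-to-label mapping concrete.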