ACCELERATING MULTI-USER LARGE VOCABULARY CONTINUOUS SPEECH RECOGNITION ON HETEROGENEOUS CPU-GPU PLATFORMS
- Submitted by:
- Jungsuk Kim
- Last updated:
- 20 March 2016 - 6:56pm
- Document Type:
- Poster
- Document Year:
- 2016
- Presenters:
- Jungsuk Kim
- Paper Code:
- SP-P2.3
In our previous work, we developed a GPU-accelerated speech recognition engine optimized for faster-than-real-time recognition on a heterogeneous CPU-GPU architecture. In this work, we focus on a scalable server-client architecture specifically optimized to decode speech from multiple users simultaneously in real time.
To efficiently support real-time speech recognition for multiple users, we applied a "producer/consumer" design pattern that decouples speech-processing stages running at different rates, allowing many streams to be handled concurrently. We further divided the recognition process across multiple consumers to maximize hardware utilization. As a result, our platform processed more than 45 real-time audio streams with an average latency of less than 0.3 seconds using one-million-word-vocabulary language models.
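The producer/consumer decoupling described above can be sketched roughly as follows. This is a minimal illustrative sketch, not the paper's implementation: the stage names (feature extraction, decoding), queue sizes, and single-stream setup are assumptions, and the GPU decoding step is replaced by a placeholder. The key idea shown is that each stage runs in its own thread at its own rate, connected by queues.

```python
import queue
import threading

# Hypothetical sketch of a producer/consumer recognition pipeline:
# a producer pushes incoming audio chunks into a queue, and separate
# consumer threads (standing in for feature extraction and GPU decoding)
# each run at their own rate. A None sentinel marks end-of-stream.

def producer(audio_chunks, feature_q):
    for chunk in audio_chunks:
        feature_q.put(chunk)      # audio arriving in real time
    feature_q.put(None)           # end-of-stream sentinel

def feature_consumer(feature_q, decode_q):
    while True:
        chunk = feature_q.get()
        if chunk is None:
            decode_q.put(None)
            break
        # Placeholder for acoustic feature extraction on this chunk.
        decode_q.put(("features", chunk))

def decode_consumer(decode_q, results):
    while True:
        item = decode_q.get()
        if item is None:
            break
        # Placeholder for the GPU-accelerated decoding step.
        results.append(("hyp", item[1]))

def run_pipeline(audio_chunks):
    feature_q, decode_q = queue.Queue(), queue.Queue()
    results = []
    threads = [
        threading.Thread(target=producer, args=(audio_chunks, feature_q)),
        threading.Thread(target=feature_consumer, args=(feature_q, decode_q)),
        threading.Thread(target=decode_consumer, args=(decode_q, results)),
    ]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return results
```

Because the stages are connected only by queues, a slow stage buffers work rather than blocking the audio producer, and additional consumer threads (or streams) can be added to raise hardware utilization, which is the scaling property the architecture relies on.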