
Reducing Latency and Bandwidth for Video Streaming Using Keypoint Extraction and Digital Puppetry

Citation Author(s):
Roshan Prabhakar, Shubham Chandak, Carina Chiu, Renee Liang, Huong Nguyen, Kedar Tatwawadi, Tsachy Weissman
Submitted by:
Shubham Chandak
Last updated:
27 February 2021 - 11:41pm
Document Type:
Presentation Slides
Document Year:
2021
Presenter's Name:
Roshan Prabhakar
Paper Code:
136
Abstract:
COVID-19 has made video communication one of the most important modes of information exchange. While extensive research has been conducted on optimizing the video streaming pipeline, in particular on developing novel video codecs, further improvement in video quality and latency is required, especially under poor network conditions. This paper proposes an alternative to the conventional codec: a keypoint-centric encoder that transmits only keypoint information extracted from the video feed. The decoder uses the streamed keypoints to generate a reconstruction preserving the semantic features of the input feed. Focusing on video calling applications, we detect and transmit body pose and face mesh information over the network, which is displayed at the receiver in the form of animated puppets. Using efficient pose and face mesh detection in conjunction with skeleton-based animation, we demonstrate a prototype requiring less than 35 kbps of bandwidth, an order-of-magnitude reduction over typical video calling systems. The added computational latency due to mesh extraction and animation is below 120 ms on a standard laptop, showcasing the potential of this framework for real-time applications. The code for this work is available at https://github.com/shubhamchandak94/digital-puppetry.
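The core idea, sending per-frame keypoints instead of pixels, can be sketched as a simple serialization step. The sketch below assumes MediaPipe-style landmark counts (468 face mesh points, 33 pose points) and 16-bit fixed-point quantization of normalized coordinates; these are illustrative assumptions, not details taken from the paper.

```python
import struct

# Assumed landmark counts (MediaPipe-style face mesh + body pose);
# the detectors and counts used in the actual prototype may differ.
NUM_FACE_LANDMARKS = 468
NUM_POSE_LANDMARKS = 33

def encode_keypoints(points):
    """Quantize normalized (x, y) keypoints to 16-bit fixed point and pack."""
    payload = bytearray()
    for x, y in points:
        qx = min(65535, max(0, round(x * 65535)))
        qy = min(65535, max(0, round(y * 65535)))
        payload += struct.pack("<HH", qx, qy)
    return bytes(payload)

def decode_keypoints(payload):
    """Inverse of encode_keypoints: unpack and dequantize."""
    points = []
    for off in range(0, len(payload), 4):
        qx, qy = struct.unpack_from("<HH", payload, off)
        points.append((qx / 65535, qy / 65535))
    return points

# Raw per-frame payload for all landmarks at this quantization:
n = NUM_FACE_LANDMARKS + NUM_POSE_LANDMARKS
frame_bytes = n * 4  # 2 coordinates x 2 bytes each
print(frame_bytes)   # 2004 bytes per frame before any inter-frame compression
```

Even this raw payload (about 2 KB per frame) would exceed 35 kbps at typical frame rates, so the prototype's reported bandwidth presumably also relies on techniques such as delta coding between frames or reduced update rates; this sketch only illustrates the quantize-and-serialize step.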


Dataset Files

Detailed slides
