
On Intra Video Coding and In-Loop Filtering for Neural Object Detection Networks

Citation Author(s):
Kristian Fischer, Christian Herglotz, André Kaup
Submitted by:
Kristian Fischer
Last updated:
9 November 2020 - 4:02am
Document Type:
Presentation Slides
Document Year:
2020
Presenters:
Kristian Fischer
Paper Code:
1436

Classical video coding aims to satisfy the human observer as the final user and is a widely investigated field; common video codecs are all optimized for the human visual system (HVS). But are these assumptions and optimizations also valid when the compressed video stream is analyzed by a machine? To answer this question, we compared the performance of two state-of-the-art neural detection networks when fed with input images degraded by HEVC and VVC intra coding in an autonomous driving scenario. Additionally, the impact of the three VVC in-loop filters when coding images for a neural network is examined. The results are compared using the mean average precision metric to evaluate the object detection performance for the compressed inputs. Throughout these tests, we found that the Bjøntegaard Delta Rate savings of 22.2 % with respect to PSNR obtained by using VVC instead of HEVC are not reached when coding for object detection networks, where only 13.6 % is achieved in the best case. Furthermore, it is shown that disabling the VVC in-loop filters SAO and ALF results in bitrate savings of 6.4 % compared to the standard VTM configuration at the same mean average precision.
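For reference, the Bjøntegaard Delta Rate figures quoted above can be computed from rate-quality measurement points with the standard Bjøntegaard method: fit a cubic polynomial to each rate-quality curve in the log-rate domain and integrate the difference over the overlapping quality range. The sketch below is a minimal, generic implementation of that method and is not the authors' evaluation code; the function name and its arguments are illustrative. The quality axis can be PSNR or, as in the paper, mean average precision (mAP).

```python
import numpy as np

def bd_rate(rate_anchor, quality_anchor, rate_test, quality_test):
    """Bjontegaard Delta Rate: average bitrate difference (in %) between
    a test codec and an anchor codec at equal quality.

    Each input is a sequence of at least four rate-quality points,
    e.g. one point per quantization parameter.
    """
    # Work in the log-rate domain, as in the original Bjontegaard method.
    log_rate_anchor = np.log(np.asarray(rate_anchor, dtype=float))
    log_rate_test = np.log(np.asarray(rate_test, dtype=float))

    # Fit a cubic polynomial log-rate = f(quality) to each curve.
    p_anchor = np.polyfit(quality_anchor, log_rate_anchor, 3)
    p_test = np.polyfit(quality_test, log_rate_test, 3)

    # Integrate both fits over the overlapping quality range only.
    lo = max(min(quality_anchor), min(quality_test))
    hi = min(max(quality_anchor), max(quality_test))
    int_anchor = (np.polyval(np.polyint(p_anchor), hi)
                  - np.polyval(np.polyint(p_anchor), lo))
    int_test = (np.polyval(np.polyint(p_test), hi)
                - np.polyval(np.polyint(p_test), lo))

    # Average log-rate difference, converted back to a percentage.
    avg_log_diff = (int_test - int_anchor) / (hi - lo)
    return (np.exp(avg_log_diff) - 1) * 100
```

A negative return value indicates bitrate savings of the test codec over the anchor at the same quality; evaluating the same function once with PSNR and once with mAP as the quality axis reproduces the kind of comparison described in the abstract.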
