
Image coding for machines: an end-to-end learned approach

Citation Author(s):
Nam Le, Honglei Zhang, Francesco Cricri, Ramin Ghaznavi-Youvalari, Esa Rahtu
Submitted by:
Nam Le
Last updated:
24 June 2021 - 12:34pm
Document Type:
Poster
Document Year:
2021
Presenter's Name:
Nam Le
Paper Code:
3195

Abstract

Over recent years, deep learning-based computer vision systems have been applied to images at an ever-increasing pace, oftentimes representing the only type of consumption for those images. Given the dramatic explosion in the number of images generated per day, a question arises: how much better would an image codec targeting machine consumption perform against state-of-the-art codecs targeting human consumption? In this paper, we propose an image codec for machines which is neural network (NN) based and end-to-end learned. In particular, we propose a set of training strategies that address the delicate problem of balancing competing loss functions, such as computer vision task losses, image distortion losses, and rate loss. Our experimental results show that our NN-based codec outperforms the state-of-the-art Versatile Video Coding (VVC) standard on the object detection and instance segmentation tasks, achieving BD-rate gains of -37.87% and -32.90%, respectively, while being fast thanks to its compact size. To the best of our knowledge, this is the first end-to-end learned machine-targeted image codec.
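The balancing of competing loss functions described in the abstract can be illustrated with a minimal sketch. Here the three loss terms are combined as a weighted sum; the function name, signature, and weight values are illustrative assumptions, not the training strategy or hyperparameters used in the paper.

```python
# Sketch: combining rate, distortion, and task losses in a single
# training objective, assuming a simple weighted sum. The weights below
# are placeholders, not values from the paper.

def combined_loss(rate, distortion, task_loss,
                  w_rate=1.0, w_dist=1.0, w_task=1.0):
    """Weighted sum of the bitrate loss, the image distortion loss,
    and the computer vision task loss."""
    return w_rate * rate + w_dist * distortion + w_task * task_loss

# Example: weighting the machine-vision task loss more heavily than
# pixel-level distortion, as a machine-targeted codec might.
loss = combined_loss(rate=0.5, distortion=0.2, task_loss=0.1,
                     w_rate=1.0, w_dist=0.5, w_task=10.0)
```

Tuning these weights is delicate because the terms pull the encoder in different directions: lowering rate degrades both reconstruction quality and task accuracy, which is why the paper proposes dedicated training strategies rather than a fixed weighting.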


Dataset Files

ICASSP-Poster.pdf
