Learned Disentangled Latent Representations for Scalable Image Coding for Humans and Machines

Citation Author(s):
Ezgi Ozyilkan, Mateen Ulhaq, Hyomin Choi, Fabien Racape
Submitted by:
Mateen Ulhaq
Last updated:
1 March 2023 - 10:26am
Document Type:
Presentation Slides
Document Year:
2023
Paper Code:

As an increasing amount of image and video content is analyzed by machines, there is demand for a new codec paradigm capable of compressing visual input primarily for the purpose of computer vision inference, while secondarily supporting input reconstruction. In this work, we propose a learned compression architecture that can be used to build such a codec. We introduce a novel variational formulation that explicitly takes feature data relevant to the desired inference task as input at the encoder side. As such, our learned scalable image codec encodes and transmits two disentangled latent representations for object detection and input reconstruction. Compared to relevant benchmarks, our proposed scheme yields a more compact latent representation that is specialized for the inference task. Our experiments show that our proposed system achieves a bit-rate savings of 40.6% on the primary object detection task compared to the current state-of-the-art, albeit with some degradation in performance for the secondary input reconstruction task.
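The core idea of the abstract, encoding an input into two disentangled latents (a compact base latent specialized for the machine task, plus an enhancement latent supporting reconstruction), can be sketched as follows. This is a minimal illustrative sketch with toy linear transforms; all names, dimensions, and operations are assumptions for exposition, not the authors' actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy flattened "image" and hypothetical linear analysis transforms.
# Dimensions are illustrative: the base latent is deliberately smaller,
# reflecting the paper's claim of a more compact task-specialized latent.
x = rng.standard_normal(64)               # input signal
W_base = rng.standard_normal((16, 64))    # hypothetical task-oriented transform
W_enh = rng.standard_normal((32, 64))     # hypothetical enhancement transform

def encode(x):
    """Produce two disentangled latents: a compact base latent for the
    inference task (e.g. object detection) and an enhancement latent
    carrying the remaining appearance detail for input reconstruction."""
    z_base = W_base @ x
    z_enh = W_enh @ x
    return z_base, z_enh

z_base, z_enh = encode(x)
# A machine-side decoder would consume z_base alone; a human-side decoder
# would combine both latents to approximately reconstruct the input.
print(z_base.shape, z_enh.shape)  # (16,) (32,)
```

In a scalable codec of this kind, the decoder for the primary task never needs the enhancement bitstream, which is what enables bit-rate savings when only machine analysis is required.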
