Super-resolution of Omnidirectional Images Using Adversarial Learning

Citation Author(s):
Aakanksha Rana, Aljosa Smolic
Submitted by:
cagri ozcinar
Last updated:
30 September 2019 - 3:45am
Document Type:
Poster
Document Year:
2019
Event:

Abstract:

An omnidirectional image (ODI) enables viewers to look in every direction from a fixed point through a head-mounted display, providing a more immersive experience than a standard image. Designing immersive virtual reality systems with ODIs is challenging because they require high-resolution content. In this paper, we study super-resolution for ODIs and propose an improved generative adversarial network (GAN)-based model optimized to handle the artifacts that arise in the spherical observation space. Specifically, we propose to use a fast PatchGAN discriminator, as it needs fewer parameters and improves super-resolution at a fine scale. We also explore generative models with adversarial learning by introducing a spherical-content-specific loss function, called 360-SS. To train and test the performance of the proposed model, we prepare a dataset of 4,500 ODIs. Our results demonstrate the efficacy of the proposed method and identify new challenges in ODI super-resolution for future investigation.
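The core idea of a spherical-content-specific loss is that pixels in an equirectangular ODI are not equally important: rows near the poles cover far less spherical surface area than rows near the equator. A minimal sketch of such a latitude-weighted reconstruction loss is below; the cos(latitude) weighting and the function names are illustrative assumptions, not the paper's exact 360-SS formulation.

```python
import numpy as np

def latitude_weights(height):
    """Per-row weights for an equirectangular image.

    Rows near the poles cover less spherical area, so they are
    down-weighted by cos(latitude). (Illustrative weighting only,
    not the exact 360-SS loss from the paper.)
    """
    # Map row centers to latitudes in (-pi/2, pi/2).
    lat = (np.arange(height) + 0.5) / height * np.pi - np.pi / 2
    return np.cos(lat)

def spherical_l2_loss(sr, hr):
    """Latitude-weighted mean squared error between a super-resolved
    image `sr` and its high-resolution reference `hr` (H x W arrays)."""
    w = latitude_weights(sr.shape[0])[:, None]  # (H, 1), broadcasts over width
    return np.sum(w * (sr - hr) ** 2) / (np.sum(w) * sr.shape[1])
```

In a GAN setup like the one described above, a term of this form would typically be added to the adversarial loss so that the generator is penalized more for errors near the equator, where viewers spend most of their attention.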

Dataset Files

vsense_poster_template (3).pdf
