
DEPTH ESTIMATION FROM SINGLE IMAGE AND SEMANTIC PRIOR

Citation Author(s):
Praful Hambarde, Akshay Dudhane, Prashant W. Patil, Subrahmanyam Murala and Abhinav Dhall
Submitted by:
Praful Hambarde
Last updated:
2 November 2020 - 11:42am
Document Type:
Presentation Slides
Document Year:
2020
Presenters Name:
Praful Hambarde

Abstract

Multi-modality sensor fusion is an active research area in scene understanding. In this work, we explore RGB-image and semantic-map fusion methods for depth estimation. LiDAR, Kinect, and ToF depth sensors are unable to predict reliable depth-maps on strongly illuminated and monotonous-pattern surfaces. In this paper, we propose a semantic-to-depth generative adversarial network (S2D-GAN) for depth estimation from an RGB image and its semantic-map. In the first stage, the proposed S2D-GAN estimates a coarse-level depth-map using a semantic-to-coarse-depth generative adversarial network (S2CD-GAN), while the second stage estimates the fine-level depth-map using a cascaded multi-scale spatial pooling network. Experimental analysis of the proposed S2D-GAN on the NYU-Depth-V2 dataset shows that it outperforms existing single-image depth estimation and RGB-with-sparse-samples methods. The proposed S2D-GAN also gives promising results on real-world indoor and outdoor image depth estimation.
Index Terms— Depth estimation, Single image, Semantic map, Coarse-level depth-map, Fine-level depth-map.
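The two-stage pipeline described in the abstract can be sketched at a high level: the RGB image and its semantic-map are fused channel-wise, a first-stage generator maps the fused input to a coarse depth-map, and a second stage refines it by pooling over multiple spatial scales. The sketch below is a minimal NumPy illustration of that data flow, not the authors' implementation; the function names, the choice of average pooling, and the stand-in `coarse_generator` are all assumptions for illustration (the paper's actual stages are learned adversarial networks).

```python
import numpy as np

def multi_scale_pool(depth, scales=(1, 2, 4)):
    """Hypothetical stand-in for cascaded multi-scale spatial pooling:
    block-average the coarse depth-map at several scales, upsample each
    result back to full resolution, and average them.
    Assumes H and W are divisible by every scale."""
    h, w = depth.shape
    pooled = []
    for s in scales:
        # Block-average with an s x s window (stride s).
        d = depth.reshape(h // s, s, w // s, s).mean(axis=(1, 3))
        # Nearest-neighbour upsample back to (h, w).
        pooled.append(np.repeat(np.repeat(d, s, axis=0), s, axis=1))
    return np.mean(pooled, axis=0)

def s2d_pipeline(rgb, semantic, coarse_generator):
    """Two-stage sketch: stage 1 maps the fused RGB + semantic input to a
    coarse depth-map; stage 2 refines it by multi-scale pooling."""
    # Channel-wise fusion of the RGB image (H x W x 3) and semantic-map (H x W).
    fused = np.concatenate([rgb, semantic[..., None]], axis=-1)  # H x W x 4
    coarse = coarse_generator(fused)                             # H x W
    return multi_scale_pool(coarse)

# Usage with a dummy generator (channel mean) in place of S2CD-GAN.
rgb = np.random.rand(8, 8, 3)
semantic = np.random.rand(8, 8)
fine_depth = s2d_pipeline(rgb, semantic, lambda x: x.mean(axis=-1))
```

In the paper both stages are trained networks; the dummy generator here only demonstrates the interface between the coarse-estimation and refinement stages.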


Dataset Files

Virtual_PPT_ICIP20.pdf
