Convolutional Gated Recurrent Networks for Video Segmentation

Submitted by:
Mennatullah Siam
Last updated:
11 September 2017 - 11:14pm
Document Type:
Poster
Document Year:
2017
Presenters Name:
Mennatullah Siam

Abstract

Semantic segmentation has recently witnessed major progress, but most previous work has focused on improving single-image segmentation. In this paper, we introduce a novel approach that implicitly utilizes temporal data in videos for online segmentation. The network receives a sequence of consecutive video frames and outputs the segmentation of the last frame. Convolutional gated recurrent units are used in the recurrent part to preserve spatial connectivity in the image. The architecture is tested on both binary and semantic video segmentation tasks. Experiments are conducted on the recent benchmarks SegTrack V2, DAVIS, CamVid, and Synthia. Using recurrent fully convolutional networks (RFCNs) improved the baseline network performance in all of our experiments: F-measure improved by 5% on SegTrack V2 and 3% on DAVIS, and mean IoU improved by 5.7% on Synthia and 1.6% on CamVid. Thus, an RFCN can be seen as a way to improve any baseline segmentation network by embedding it into a recurrent module that utilizes temporal data.
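The core idea the abstract describes — replacing the dense matrix products in a GRU's gate equations with convolutions, so the hidden state keeps the spatial layout of the frame — can be sketched as follows. This is a minimal single-channel NumPy illustration under stated assumptions, not the authors' implementation; the kernel size, random initialization, and names (`ConvGRUCell`, `conv2d_same`) are hypothetical choices for the sketch.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def conv2d_same(x, k):
    """'Same'-padded single-channel 2D cross-correlation."""
    kh, kw = k.shape
    ph, pw = kh // 2, kw // 2
    xp = np.pad(x, ((ph, ph), (pw, pw)))  # zero padding keeps output size
    out = np.zeros_like(x)
    H, W = x.shape
    for i in range(H):
        for j in range(W):
            out[i, j] = np.sum(xp[i:i + kh, j:j + kw] * k)
    return out

class ConvGRUCell:
    """Convolutional GRU cell: the update gate z, reset gate r, and
    candidate state are computed with convolutions instead of dense
    products, so the hidden state is a 2D map, not a flat vector."""
    def __init__(self, ksize=3, seed=0):
        rng = np.random.default_rng(seed)
        shape = (ksize, ksize)
        # one kernel per (gate, source) pair; toy random initialization
        self.Wxz, self.Whz = rng.normal(0, 0.1, shape), rng.normal(0, 0.1, shape)
        self.Wxr, self.Whr = rng.normal(0, 0.1, shape), rng.normal(0, 0.1, shape)
        self.Wxh, self.Whh = rng.normal(0, 0.1, shape), rng.normal(0, 0.1, shape)

    def step(self, x, h):
        z = sigmoid(conv2d_same(x, self.Wxz) + conv2d_same(h, self.Whz))  # update gate
        r = sigmoid(conv2d_same(x, self.Wxr) + conv2d_same(h, self.Whr))  # reset gate
        h_tilde = np.tanh(conv2d_same(x, self.Wxh) + conv2d_same(r * h, self.Whh))
        return (1 - z) * h + z * h_tilde  # blend old state and candidate

# Feed a short sequence of frames and keep only the final hidden state,
# mirroring the paper's setup of segmenting the last frame of a clip.
cell = ConvGRUCell()
frames = [np.random.default_rng(t).random((8, 8)) for t in range(4)]
h = np.zeros((8, 8))
for x in frames:
    h = cell.step(x, h)
print(h.shape)  # (8, 8) — the state preserves the frame's spatial dimensions
```

In the full architecture, a prediction head (e.g. a 1x1 convolution plus softmax) on the final hidden state would produce the per-pixel segmentation of the last frame.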

Dataset Files

ICIP17.pdf
