
Deformable VisTR: Spatio-temporal deformable attention for video instance segmentation

Citation Author(s):
Submitted by: Sudhir Yarram
Last updated: 6 May 2022 - 2:55am
Document Type: Presentation Slides
Document Year: 2022
Paper Code: MLSP-7.5

The video instance segmentation (VIS) task requires classifying, segmenting, and tracking object instances across all frames of a video clip. Recently, VisTR \cite{vistr} was proposed as an end-to-end transformer-based VIS framework and demonstrated state-of-the-art performance. However, VisTR is slow to converge during training, requiring around 1000 GPU hours due to the high computational cost of its transformer attention module. To improve training efficiency, we propose Deformable VisTR, which leverages a spatio-temporal deformable attention module that attends to only a small, fixed set of key spatio-temporal sampling points around a reference point. This enables Deformable VisTR to achieve computation that is linear in the size of the spatio-temporal feature maps. Moreover, it achieves on-par performance with the original VisTR using 10x fewer GPU training hours. We validate the effectiveness of our method on the YouTube-VIS benchmark. Code is available at https://github.com/skrya/DefVIS.
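To make the sampling idea concrete, below is a minimal, self-contained PyTorch sketch of a spatio-temporal deformable attention layer in the spirit of the abstract: each query predicts a few 2-D offsets and attention weights per frame around its reference point, and only those locations are sampled. This is not the authors' implementation (see the linked repository for that); the class and parameter names (SpatioTemporalDeformableAttention, offset_proj, num_points, etc.) are illustrative assumptions.

    # Minimal sketch (not the authors' code): each query attends to only
    # num_points sampled locations per frame, so attention cost per query is
    # O(T * P) rather than O(T * H * W).
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class SpatioTemporalDeformableAttention(nn.Module):
        def __init__(self, dim=256, num_frames=4, num_points=4):
            super().__init__()
            self.num_frames = num_frames
            self.num_points = num_points
            # Predict a 2-D offset and a scalar weight per sampled point per
            # frame, conditioned on the query. (Real implementations initialize
            # the offset projection near zero; omitted here for brevity.)
            self.offset_proj = nn.Linear(dim, num_frames * num_points * 2)
            self.weight_proj = nn.Linear(dim, num_frames * num_points)
            self.value_proj = nn.Linear(dim, dim)
            self.out_proj = nn.Linear(dim, dim)

        def forward(self, queries, ref_points, feats):
            # queries:    (B, Q, C)        query embeddings
            # ref_points: (B, Q, 2)        normalized (x, y) in [0, 1]
            # feats:      (B, T, C, H, W)  per-frame feature maps
            B, Q, C = queries.shape
            T, P = self.num_frames, self.num_points
            value = self.value_proj(feats.flatten(0, 1).permute(0, 2, 3, 1))
            value = value.permute(0, 3, 1, 2)            # (B*T, C, H, W)

            offsets = self.offset_proj(queries).view(B, Q, T, P, 2)
            weights = self.weight_proj(queries).view(B, Q, T * P)
            weights = weights.softmax(-1).view(B, Q, T, P)

            # Sampling locations, mapped to [-1, 1] for grid_sample.
            locs = ref_points[:, :, None, None, :] + offsets  # (B, Q, T, P, 2)
            grid = (2 * locs - 1).permute(0, 2, 1, 3, 4).reshape(B * T, Q, P, 2)

            # Bilinearly sample P points per frame: (B*T, C, Q, P).
            sampled = F.grid_sample(value, grid, align_corners=False)
            sampled = sampled.view(B, T, C, Q, P).permute(0, 3, 2, 1, 4)
            # Weighted sum over frames and points -> (B, Q, C).
            out = (sampled * weights[:, :, None, :, :]).sum(dim=(-1, -2))
            return self.out_proj(out)

    # Example usage with made-up shapes:
    attn = SpatioTemporalDeformableAttention(dim=256, num_frames=4, num_points=4)
    q = torch.randn(2, 10, 256)              # 10 instance queries
    ref = torch.rand(2, 10, 2)               # reference points in [0, 1]
    feats = torch.randn(2, 4, 256, 32, 32)   # 4 frames of 32x32 features
    out = attn(q, ref, feats)                # (2, 10, 256)

Because each query samples only num_frames * num_points locations instead of attending over every position in every frame, the attention cost is independent of H and W; the remaining per-position work (the value projection) is what keeps the overall computation linear in the spatio-temporal feature map size.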
