
Leveraging Local Temporal Information For Multimodal Scene Classification

Citation Author(s):
Saurabh Sahu, Palash Goyal
Submitted by:
Palash Goyal
Last updated:
4 May 2022 - 4:40pm
Document Type:
Presentation Slides
Document Year:
2022
Presenters:
Palash Goyal
Paper Code:
IVMSP-8.3
 

Robust video scene classification models should effectively capture both the spatial (pixel-wise) and temporal (frame-wise) characteristics of a video. Transformer models with self-attention, which are designed to produce contextualized representations for individual tokens given a sequence of tokens, are becoming increasingly popular in many computer vision tasks. However, the use of Transformer-based models for video understanding is still relatively unexplored. Moreover, these models fail to exploit the strong temporal relationships between neighboring video frames to obtain potent frame-level representations. In this paper, we propose a novel self-attention block that leverages both local and global temporal relationships between video frames to obtain better contextualized representations for the individual frames. This enables the model to understand the video at various granularities. We illustrate the performance of our models on the large-scale YouTube-8M dataset on the task of video categorization and further analyze the results to showcase improvement.
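To make the local-plus-global idea concrete, below is a minimal NumPy sketch of one way a self-attention block can combine windowed (local) and full-sequence (global) attention over frame embeddings. It is an illustrative sketch, not the paper's exact block: the names attention and local_global_block, the window size, the additive fusion, and the omission of learned projections, multi-head splitting, and normalization are all simplifying assumptions.

import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(q, k, v, mask=None):
    # scaled dot-product attention over a (T, d) sequence
    scores = q @ k.T / np.sqrt(q.shape[-1])
    if mask is not None:
        scores = np.where(mask, scores, -1e9)  # block positions outside the mask
    return softmax(scores) @ v

def local_global_block(frames, window=2):
    # frames: (T, d) frame-level embeddings; window: neighborhood radius (assumed)
    T = frames.shape[0]
    idx = np.arange(T)
    # band mask: frame t attends only to frames within `window` steps of t
    local_mask = np.abs(idx[:, None] - idx[None, :]) <= window
    local_out = attention(frames, frames, frames, mask=local_mask)  # local view
    global_out = attention(frames, frames, frames)                  # global view
    return local_out + global_out  # additive fusion (one simple choice)

# Usage: a clip of 300 frames with 128-dimensional embeddings
frames = np.random.randn(300, 128).astype(np.float32)
out = local_global_block(frames, window=2)
print(out.shape)  # (300, 128)

The band mask restricts each frame to its immediate temporal neighborhood, while the unmasked pass attends over the whole clip, giving each frame contextual information at two granularities in the spirit of the block described above.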
