Semantic Role Aware Correlation Transformer For Text To Video Retrieval

Citation Author(s):
Burak Satar, Hongyuan Zhu, Xavier Bresson, Joo Hwee Lim
Submitted by:
Burak Satar
Last updated:
24 September 2021 - 3:31am
Document Type:
Poster
Document Year:
2021
Event:
Presenters:
Burak Satar
Paper Code:
MLR-APPL-IVASR-6.11

With the emergence of social media, voluminous video clips are uploaded every day, making it critical to retrieve the most relevant visual content for a given language query. Most approaches learn a joint embedding space for plain textual and visual content without adequately exploiting their intra-modality structures and inter-modality correlations. This paper proposes a novel transformer that explicitly disentangles the text and video into three semantic roles: objects, spatial contexts, and temporal contexts. An attention scheme then learns the intra- and inter-role correlations among these roles to discover discriminative features for matching at different levels. Preliminary results on the popular YouCook2 benchmark indicate that our approach surpasses the state of the art by a large margin.
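
For illustration, below is a minimal sketch of the idea the abstract describes: each modality is disentangled into three role streams, per-role self-attention models intra-role structure, and attention across the pooled role tokens models inter-role correlations. All module names, dimensions, the mean-pooling, and the cosine-similarity matching are assumptions made for this sketch, not the authors' implementation.

```python
# Hypothetical sketch of a semantic-role-aware correlation transformer.
# Assumed roles per modality: objects, spatial context, temporal context.
import torch
import torch.nn as nn

class RoleAwareEncoder(nn.Module):
    def __init__(self, dim=512, heads=8):
        super().__init__()
        # One transformer encoder layer per role captures intra-role correlations.
        self.role_encoders = nn.ModuleList(
            nn.TransformerEncoderLayer(dim, heads, batch_first=True)
            for _ in range(3)
        )
        # A shared layer over the three pooled role tokens captures
        # inter-role correlations.
        self.cross_role = nn.TransformerEncoderLayer(dim, heads, batch_first=True)

    def forward(self, objects, spatial, temporal):
        # Each input: (batch, seq_len, dim) features for one semantic role.
        roles = [enc(x) for enc, x in
                 zip(self.role_encoders, (objects, spatial, temporal))]
        # Pool each role sequence to a single token, then attend across roles.
        tokens = torch.stack([r.mean(dim=1) for r in roles], dim=1)  # (B, 3, D)
        fused = self.cross_role(tokens)
        return fused.mean(dim=1)  # (B, D) joint embedding for matching

# Text and video each get their own role-aware encoder; retrieval scores
# are cosine similarities in the shared embedding space.
text_enc, video_enc = RoleAwareEncoder(), RoleAwareEncoder()
t = text_enc(torch.randn(2, 8, 512), torch.randn(2, 8, 512), torch.randn(2, 8, 512))
v = video_enc(torch.randn(2, 16, 512), torch.randn(2, 16, 512), torch.randn(2, 16, 512))
scores = nn.functional.cosine_similarity(t.unsqueeze(1), v.unsqueeze(0), dim=-1)
```

Pooling each role to one token before the cross-role layer is just one plausible way to expose role-level correlations to attention; the actual paper may fuse roles at a finer granularity.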
