Multi-Layer Content Interaction Through Quaternion Product for Visual Question Answering

Citation Author(s):
Lei Shi, Shijie Geng, Kai Shuang, Chiori Hori, Songxiang Liu, Peng Gao, Sen Su
Submitted by:
Lei Shi
Last updated:
16 May 2020 - 1:27am
Document Type:
Poster
Document Year:
2020
Event:
Paper Code:
MMSP-P1.5 [5942]
 

Multi-modality fusion technologies have greatly improved the performance of neural-network-based Video Description/Captioning, Visual Question Answering (VQA) and Audio-Visual Scene-aware Dialog (AVSD) in recent years. Most previous approaches explore only the last layer of multi-layer feature fusion, omitting the importance of intermediate layers. To address the intermediate layers, we propose an efficient Quaternion Block Network (QBN) that learns interactions not only at the last layer but at all intermediate layers simultaneously. In our proposed QBN, holistic text features guide the update of visual features, while Hamilton quaternion products efficiently propagate information from higher layers to lower layers for both the visual and text modalities. Evaluation results show that our QBN improves performance on VQA 2.0 and even surpasses approaches using large-scale BERT or visual-BERT pre-trained models. An extensive ablation study examines the influence of each proposed module.
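For readers unfamiliar with the operation the abstract builds on, the following is a minimal NumPy sketch of the Hamilton quaternion product. It is a generic, textbook implementation, not the authors' QBN code; the batched-feature usage at the bottom (splitting a feature vector into four equal parts as quaternion components) is an illustrative assumption.

```python
import numpy as np

def hamilton_product(p, q):
    """Hamilton product of quaternions stored as (..., 4) arrays,
    with components ordered (r, x, y, z). Broadcasts over batch dims."""
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    r1, x1, y1, z1 = np.moveaxis(p, -1, 0)
    r2, x2, y2, z2 = np.moveaxis(q, -1, 0)
    return np.stack([
        r1*r2 - x1*x2 - y1*y2 - z1*z2,   # real part
        r1*x2 + x1*r2 + y1*z2 - z1*y2,   # i component
        r1*y2 - x1*z2 + y1*r2 + z1*x2,   # j component
        r1*z2 + x1*y2 - y1*x2 + z1*r2,   # k component
    ], axis=-1)

# Sanity check: i * j = k
print(hamilton_product([0, 1, 0, 0], [0, 0, 1, 0]))  # → [0. 0. 0. 1.]

# Illustrative feature-level use (assumed scheme, not the paper's exact one):
# reshape a dim-4d feature vector into d quaternions and interact two features.
d = 16
a = np.random.randn(4 * d).reshape(d, 4)
b = np.random.randn(4 * d).reshape(d, 4)
fused = hamilton_product(a, b).reshape(-1)  # back to a dim-4d vector
```

The appeal of the Hamilton product for fusion is that every output component mixes all four input components of both operands, giving dense cross-component interaction with a quarter of the parameters a comparable real-valued bilinear map would need.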
