Retrieval-Augmented Natural Language Reasoning for Explainable Visual Question Answering
- Submitted by:
- Hyeon Bae Kim
- Last updated:
- 8 November 2024 - 10:06am
- Document Type:
- Presentation Slides
- Document Year:
- 2024
- Presenters:
- Hyeon Bae Kim
The Visual Question Answering with Natural Language Explanation (VQA-NLE) task is challenging due to its high demand for reasoning-based inference. Recent VQA-NLE studies focus on enhancing model networks to amplify the model's reasoning capability, but this approach is resource-consuming and unstable. In this work, we introduce a new VQA-NLE model, ReRe (Retrieval-augmented natural language Reasoning), which leverages retrieval information from memory to aid in generating accurate answers and persuasive explanations without relying on complex networks or extra datasets. ReRe is an encoder-decoder model using a pre-trained CLIP vision encoder and a pre-trained GPT-2 language model as the decoder. Cross-attention layers are added to GPT-2 for processing retrieval features. ReRe outperforms previous methods in VQA accuracy and explanation score, and shows improvement in NLE with more persuasive, reliable explanations.
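The abstract describes adding cross-attention layers to the GPT-2 decoder so that generated tokens can attend to retrieved features. The snippet below is a minimal single-head sketch of that mechanism, not the authors' implementation: the dimensions, weight matrices, and function names are illustrative assumptions, and the multi-head structure, layer placement, and retrieval pipeline of the actual ReRe model are not shown.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax over the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(decoder_states, retrieval_feats, Wq, Wk, Wv):
    """Single-head cross-attention sketch: decoder token states (queries)
    attend to retrieved memory features (keys/values)."""
    Q = decoder_states @ Wq                   # (T, d) queries from decoder
    K = retrieval_feats @ Wk                  # (R, d) keys from retrieval
    V = retrieval_feats @ Wv                  # (R, d) values from retrieval
    scores = Q @ K.T / np.sqrt(Q.shape[-1])   # (T, R) scaled dot products
    return softmax(scores) @ V                # (T, d) retrieval-informed states

# Toy shapes: 5 decoder tokens, 3 retrieved features, hidden size 16.
rng = np.random.default_rng(0)
d = 16
tokens = rng.normal(size=(5, d))
retrieved = rng.normal(size=(3, d))
Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))
out = cross_attention(tokens, retrieved, Wq, Wk, Wv)
print(out.shape)  # (5, 16)
```

In a full transformer decoder such a layer would sit between the self-attention and feed-forward sublayers of each block, letting every generation step condition on the retrieved evidence.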