Learning Semantics-Guided Visual Attention for Few-shot Image Classification
- Submitted by:
- Wen-Hsuan Chu
- Last updated:
- 8 October 2018 - 2:56pm
- Document Type:
- Poster
- Document Year:
- 2018
- Paper Code:
- 1619
We propose a deep learning framework for few-shot image classification that exploits information across the label-semantics and image domains, so that regions of interest can be properly attended for improved classification. The proposed semantics-guided attention module focuses on the most relevant regions in an image, while the attended image samples enable data augmentation and alleviate possible overfitting during few-shot learning (FSL) training. Promising performance is reported in our experiments, in which we consider both closed- and open-world settings: the former assumes that test inputs belong only to the few-shot categories, while the latter requires recognition of all categories of interest.
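The core idea of semantics-guided attention can be sketched as follows: spatial region features are scored against a label-semantic embedding, and a softmax over those scores weights the regions. This is a minimal illustrative sketch only, assuming region features and label embeddings share a common d-dimensional space; the function name and shapes are ours, not from the poster, and the actual module is learned end-to-end with additional parameters.

```python
import numpy as np

def semantics_guided_attention(feature_map, semantic_vec):
    """Attend over spatial regions of an image feature map using a
    label-semantic vector (e.g., a word embedding of the class name).

    feature_map:  (R, d) array, one d-dim feature per spatial region
    semantic_vec: (d,) label embedding
    Returns the attended feature (d,) and attention weights (R,).
    """
    # Affinity between each region and the label semantics.
    scores = feature_map @ semantic_vec
    # Softmax over regions (shifted for numerical stability).
    scores = scores - scores.max()
    weights = np.exp(scores) / np.exp(scores).sum()
    # Attention-weighted pooling of region features.
    attended = weights @ feature_map
    return attended, weights
```

In the framework described above, such attended samples can then serve as additional training examples, which is how the attention module doubles as a data-augmentation mechanism.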