Street-to-Shop Shoe Retrieval with Multi-Scale Viewpoint Invariant Triplet Network
- Citation Author(s):
- Submitted by:
- Huijing Zhan
- Last updated:
- 15 September 2017 - 1:04am
- Document Type:
- Presentation Slides
- Document Year:
- 2017
- Event:
- Presenters:
- Boxin Shi
- Paper Code:
- TP-L6.2
- Categories:
In this paper, we aim to retrieve exactly the same shoe from online shop photos (shop scenario) given a daily-life shoe photo (street scenario). Shoe images from the two scenarios differ greatly in appearance. To handle this discrepancy, we learn a feature embedding for shoes via a viewpoint-invariant triplet network, whose feature activations reflect the inherent similarity between any two shoe images. Specifically, we propose a new loss function that minimizes the distance between images of the same shoe captured from different viewpoints. Moreover, we train the proposed triplet network at two different scales, so that the shoe representation incorporates different levels of invariance at different scales. To train the multi-scale triplet network, we collect a large dataset of shoe images from daily life and online shopping websites. Experiments on this dataset show that our method outperforms state-of-the-art approaches, demonstrating its effectiveness.
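As a rough illustration of the loss described in the abstract, the following is a minimal PyTorch sketch of a triplet loss augmented with a viewpoint-invariance term. The function name, the margin, and the weight `lambda_view` are illustrative assumptions, not the paper's actual formulation or hyperparameters.

```python
import torch
import torch.nn.functional as F

def viewpoint_invariant_triplet_loss(anchor, positive, negative,
                                     view_a, view_b,
                                     margin=0.3, lambda_view=1.0):
    """Sketch of a triplet loss with an added viewpoint-invariance term.

    anchor/positive/negative: embeddings of a street photo, a shop photo
    of the same shoe, and a shop photo of a different shoe.
    view_a/view_b: embeddings of the same shoe captured from two
    different viewpoints (the extra pair the loss pulls together).
    margin and lambda_view are assumed values for illustration only.
    """
    # Standard triplet term: pull the matching shop photo closer to the
    # street photo than any non-matching shop photo, by at least `margin`.
    d_pos = F.pairwise_distance(anchor, positive)
    d_neg = F.pairwise_distance(anchor, negative)
    triplet = F.relu(d_pos - d_neg + margin).mean()

    # Viewpoint term: directly minimize the distance between embeddings
    # of the same shoe seen from different viewpoints.
    viewpoint = F.pairwise_distance(view_a, view_b).mean()

    return triplet + lambda_view * viewpoint
```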
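Similarly, the two-scale training could be sketched as two CNN branches fed the same image at different resolutions, with their embeddings concatenated. The ResNet-18 backbones, input sizes, and embedding width below are assumptions for illustration; the paper's actual multi-scale architecture may differ.

```python
import torch
import torch.nn as nn
import torchvision.models as models

class TwoScaleEmbedding(nn.Module):
    """Sketch of a two-scale shoe embedding: the same image is fed at two
    resolutions to two branches, and the per-branch embeddings are
    L2-normalized and concatenated. Backbones, input sizes, and the
    256-D embedding width are illustrative assumptions."""

    def __init__(self, dim=256):
        super().__init__()
        # One CNN branch per scale; weights are not shared, so each
        # scale can learn a different level of invariance.
        self.coarse = models.resnet18(weights=None)
        self.coarse.fc = nn.Linear(self.coarse.fc.in_features, dim)
        self.fine = models.resnet18(weights=None)
        self.fine.fc = nn.Linear(self.fine.fc.in_features, dim)

    def forward(self, x):
        # Coarse branch sees a downsampled view of the whole shoe;
        # the fine branch sees the full-resolution image.
        x_coarse = nn.functional.interpolate(
            x, size=(112, 112), mode="bilinear", align_corners=False)
        f_coarse = nn.functional.normalize(self.coarse(x_coarse), dim=1)
        f_fine = nn.functional.normalize(self.fine(x), dim=1)
        return torch.cat([f_coarse, f_fine], dim=1)
```

Keeping the branch weights unshared lets each scale develop its own level of invariance, matching the intuition stated in the abstract.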