SSPP-DAN: Deep Domain Adaptation Network for Face Recognition with Single Sample Per Person

Citation Author(s):
Sungeun Hong, Woobin Im, Jongbin Ryu, Hyun S. Yang
Submitted by:
Sungeun Hong
Last updated:
15 September 2017 - 11:37am
Document Type:
Presentation Slides
Document Year:
2017
Event:
Presenters:
Sungeun Hong
Paper Code:
ICIP1701
Real-world face recognition using a single sample per person (SSPP) is a challenging task. The problem is exacerbated when the conditions under which the gallery image and the probe set are captured differ completely. To address these issues from the perspective of domain adaptation, we introduce an SSPP domain adaptation network (SSPP-DAN). In the proposed approach, domain adaptation, feature extraction, and classification are performed jointly using a deep architecture with domain-adversarial training. However, the SSPP setting provides only one training sample per class, which is insufficient to train the deep architecture. To overcome this shortage, we generate synthetic images with varying poses using a 3D face model. Experimental evaluations on a realistic SSPP dataset show that deep domain adaptation and image synthesis complement each other and dramatically improve accuracy. Experiments on a benchmark dataset using the proposed approach show state-of-the-art performance.
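The domain-adversarial training mentioned in the abstract is commonly realized with a gradient reversal layer (GRL) between the feature extractor and the domain classifier: the forward pass is the identity, while the backward pass flips the gradient's sign so that features become indistinguishable across domains. The following is a minimal NumPy sketch of that layer, not the authors' implementation; the class name and the `lam` trade-off parameter are illustrative assumptions.

```python
import numpy as np

class GradientReversal:
    """Sketch of a gradient reversal layer (GRL) for domain-adversarial
    training. Forward pass is the identity; backward pass multiplies the
    incoming gradient by -lam, so the feature extractor is pushed to
    *maximize* the domain classifier's loss."""

    def __init__(self, lam=1.0):
        self.lam = lam  # trade-off between label loss and domain loss

    def forward(self, x):
        # Identity in the forward direction: features pass through unchanged.
        return x

    def backward(self, grad_output):
        # Sign-flipped, scaled gradient in the backward direction.
        return -self.lam * grad_output

grl = GradientReversal(lam=0.5)
features = np.array([1.0, -2.0, 3.0])
out = grl.forward(features)            # identical to the input features
grad = grl.backward(np.ones(3))        # each gradient entry becomes -0.5
```

In a full pipeline, the domain classifier sits behind this layer and is trained to separate synthetic from real images, while the reversed gradient drives the shared features toward domain invariance.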
