CLEANING ADVERSARIAL PERTURBATIONS VIA RESIDUAL GENERATIVE NETWORK FOR FACE VERIFICATION
- Submitted by: Yuying Su
- Last updated: 8 May 2019 - 4:59am
- Document Type: Poster
- Document Year: 2019
- Event: ICASSP 2019
- Presenters: Yuying Su and Guangling Sun
- Paper Code: ICASSP19005
Deep neural networks (DNNs) have recently achieved impressive performance in a variety of applications. However, recent research shows that DNNs are vulnerable to adversarial perturbations injected into input samples. In this paper, we investigate a defense method for face verification: a deep residual generative network (ResGN) is learned to clean adversarial perturbations. We propose a novel training framework composed of the ResGN, a pre-trained VGG-Face network, and a FaceNet network. The parameters of the ResGN are optimized by minimizing a joint loss consisting of a pixel loss, a texture loss, and a verification loss, which measure the content error, the subjective visual perception error, and the verification task error between the cleaned image and the legitimate image, respectively. In particular, the latter two losses are provided by VGG-Face and FaceNet, respectively, and contribute substantially to improving the verification performance of the cleaned image. Experimental results on the Labeled Faces in the Wild (LFW) benchmark dataset validate the effectiveness of the proposed defense method.
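To make the joint objective concrete, the sketch below shows one way the three loss terms could be combined in PyTorch. It is a minimal illustration under assumptions, not the authors' released implementation: the toy `ResGN` module, the placeholder `vgg_face_features` and `facenet` extractors, and the weights `w_pix`, `w_tex`, and `w_ver` are hypothetical stand-ins, and the Gram-matrix texture term is one common formulation of a perceptual texture loss rather than the paper's exact definition.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class ResGN(nn.Module):
    """Toy residual generator: predicts a correction that removes the perturbation."""
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 3, 3, padding=1),
        )

    def forward(self, x_adv):
        # Residual connection: cleaned image = adversarial input + predicted residual.
        return x_adv + self.body(x_adv)


def gram_matrix(feat):
    """Gram matrix of a feature map, the usual ingredient of a texture loss."""
    b, c, h, w = feat.shape
    f = feat.view(b, c, h * w)
    return f @ f.transpose(1, 2) / (c * h * w)


def joint_loss(cleaned, legit, vgg_face_features, facenet,
               w_pix=1.0, w_tex=1.0, w_ver=1.0):
    # Pixel loss: per-pixel content error between cleaned and legitimate images.
    l_pix = F.mse_loss(cleaned, legit)
    # Texture loss: Gram-matrix error on VGG-Face features (perceptual term).
    l_tex = F.mse_loss(gram_matrix(vgg_face_features(cleaned)),
                       gram_matrix(vgg_face_features(legit)))
    # Verification loss: distance between FaceNet embeddings of the two images.
    l_ver = F.mse_loss(facenet(cleaned), facenet(legit))
    return w_pix * l_pix + w_tex * l_tex + w_ver * l_ver


# Minimal usage with placeholder extractors; the real framework would plug in
# the frozen, pre-trained VGG-Face and FaceNet networks here.
resgn = ResGN()
vgg_face_features = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU())
facenet = nn.Sequential(nn.Flatten(), nn.Linear(3 * 112 * 112, 128))
for p in list(vgg_face_features.parameters()) + list(facenet.parameters()):
    p.requires_grad_(False)  # both auxiliary networks stay fixed during training

x_adv = torch.rand(2, 3, 112, 112)    # adversarial face images
x_legit = torch.rand(2, 3, 112, 112)  # their legitimate counterparts
loss = joint_loss(resgn(x_adv), x_legit, vgg_face_features, facenet)
loss.backward()  # gradients reach only the ResGN parameters
```

Because the two auxiliary networks are frozen, minimizing the joint loss updates only the ResGN, consistent with the training framework described in the abstract.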