

On the Transferability of Adversarial Examples Against CNN-Based Image Forensics

Abstract: 

Recent studies have shown that Convolutional Neural Networks (CNNs) are relatively easy to attack through the generation of so-called adversarial examples. This vulnerability also affects CNN-based image forensic tools. Research in deep learning has shown that adversarial examples exhibit a certain degree of transferability, i.e., they retain part of their effectiveness even against CNN models other than the one targeted by the attack. Transferability is a particularly troubling property, since it undermines the usability of CNNs in security-oriented applications. In this paper, we investigate whether attack transferability also holds in image forensics applications. With specific reference to manipulation detection, we analyse the results of several experiments considering different sources of mismatch between the CNN used to build the adversarial examples and the one adopted by the forensic analyst. The analysis ranges from cases in which the mismatch involves only the training dataset to cases in which the attacker and the forensic analyst adopt different architectures. The results of our experiments show that, in the majority of cases, the attacks are not transferable, thus easing the design of proper countermeasures, at least when the attacker does not have perfect knowledge of the target detector.
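
This page does not ship reference code; the sketch below is only an illustration, assuming PyTorch and a one-step FGSM-style attack, of the kind of transferability measurement the abstract describes: adversarial examples are crafted against a surrogate detector (standing in for the attacker's CNN) and then evaluated against a target detector trained on different data or with a different architecture (standing in for the forensic analyst's CNN). The model names surrogate_cnn and target_cnn, the attack, and the perturbation budget eps are illustrative assumptions, not the paper's actual setup.

# Illustrative transferability check (assumed PyTorch, FGSM-style attack);
# not the paper's actual attack, models, or datasets.
import torch
import torch.nn.functional as F

def fgsm_attack(model, images, labels, eps=2.0 / 255):
    # One-step FGSM: move each pixel in the direction that increases the
    # surrogate detector's loss, within an L-infinity budget of eps.
    images = images.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(images), labels)
    loss.backward()
    adv = images + eps * images.grad.sign()
    return adv.clamp(0.0, 1.0).detach()

@torch.no_grad()
def error_rate(model, images, labels):
    # Fraction of samples the detector now misclassifies.
    preds = model(images).argmax(dim=1)
    return (preds != labels).float().mean().item()

def transferability(surrogate_cnn, target_cnn, images, labels, eps=2.0 / 255):
    # Craft adversarial examples on the surrogate, then measure how often
    # they fool (a) the surrogate itself and (b) the mismatched target.
    adv = fgsm_attack(surrogate_cnn, images, labels, eps)
    return {
        "white_box_success": error_rate(surrogate_cnn, adv, labels),
        "transfer_success": error_rate(target_cnn, adv, labels),
    }

In this sketch, a high white_box_success together with a much lower transfer_success would correspond to the low transferability observed in the paper's experiments.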


Paper Details

Authors:
Mauro Barni, Kassem Kallas, Ehsan Nowroozi, Benedetta Tondi
Submitted On:
30 January 2020 - 12:18pm
Short Link:
http://sigport.org/4969
Type:
Research Manuscript
Event:
ICASSP 2019
Presenter's Name:
Mauro Barni, Kassem Kallas, Ehsan Nowroozi, Benedetta Tondi
Document Year:
2019

Document Files

ICASSP 2019.pdf


Cite

[1] Mauro Barni, Kassem Kallas, Ehsan Nowroozi, and Benedetta Tondi, "On the Transferability of Adversarial Examples Against CNN-Based Image Forensics," IEEE SigPort, 2020. [Online]. Available: http://sigport.org/4969. Accessed: Jun. 04, 2020.

@article{4969-20,
  author    = {Mauro Barni and Kassem Kallas and Ehsan Nowroozi and Benedetta Tondi},
  title     = {On the Transferability of Adversarial Examples Against CNN-Based Image Forensics},
  publisher = {IEEE SigPort},
  url       = {http://sigport.org/4969},
  year      = {2020}
}