
When Causal Intervention Meets Adversarial Examples and Image Masking for Deep Neural Networks

Abstract: 

Discovering and exploiting causality in deep neural networks (DNNs) is a crucial challenge for understanding and reasoning about causal effects (CE) in an explainable visual model. "Intervention" has been widely used for recognizing a causal relation ontologically. In this paper, we propose a causal inference framework for visual reasoning via do-calculus. To study intervention effects on pixel-level features for causal reasoning, we introduce pixel-wise masking and adversarial perturbation. In our framework, CE is calculated using features in a latent space and the perturbed prediction from a DNN-based model. We further provide a first look into the characteristics of the CE discovered for adversarially perturbed images generated by gradient-based methods (code: https://github.com/jjaacckkyy63/Causal-Intervention-AE-wAdvImg). Experimental results show that CE is a competitive and robust index for understanding DNNs when compared with conventional methods such as class activation mappings (CAMs) on the ChestX-ray14 dataset for human-interpretable feature (e.g., symptom) reasoning.
Moreover, CE holds promise for detecting adversarial examples, as it exhibits distinct characteristics in the presence of adversarial perturbations.
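The two interventions above can be illustrated with a minimal sketch. This is not the authors' implementation: it assumes a toy differentiable classifier (a sigmoid over a linear score) standing in for the DNN, estimates the CE of a pixel region as the change in prediction under a do()-style masking intervention, and applies an FGSM-style sign-of-gradient perturbation; the function names (`toy_model`, `causal_effect_of_mask`, `fgsm_perturb`) are illustrative only.

```python
import numpy as np

def toy_model(x, w):
    """Toy stand-in for a DNN classifier: sigmoid of a linear score <w, x>."""
    z = float(np.sum(w * x))
    return 1.0 / (1.0 + np.exp(-z))

def causal_effect_of_mask(x, w, mask):
    """Estimate the CE of a pixel region via a do()-style masking intervention:
    the difference between the prediction on the original image and on the
    image with the region forced to a baseline value (here, zero)."""
    x_do = x.copy()
    x_do[mask] = 0.0  # intervention: do(region = baseline)
    return toy_model(x, w) - toy_model(x_do, w)

def fgsm_perturb(x, w, y, eps):
    """FGSM-style adversarial perturbation for the toy model: one step of
    size eps along the sign of the input gradient of a squared-error loss."""
    p = toy_model(x, w)
    # d/dx of (p - y)^2 with p = sigmoid(<w, x>):
    grad = 2.0 * (p - y) * p * (1.0 - p) * w
    return np.clip(x + eps * np.sign(grad), 0.0, 1.0)

# Usage: a 4x4 "image" where only the top-left 2x2 block drives the score.
x = 0.5 * np.ones((4, 4))
w = np.zeros((4, 4)); w[:2, :2] = 1.0
mask = np.zeros((4, 4), dtype=bool); mask[:2, :2] = True
ce = causal_effect_of_mask(x, w, mask)      # large: the region is causal
x_adv = fgsm_perturb(x, w, y=0.0, eps=0.1)  # pushes prediction away from y
```

Masking the truly causal region produces a large CE, while an FGSM step raises the loss (here, moving the prediction away from the label y=0), so re-measuring CE on `x_adv` is where the perturbation's distinct signature would show up.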


Paper Details

Authors:
Chao-Han Huck Yang, Yi-Chieh Liu, Pin-Yu Chen, Yi-Chang James Tsai, Xiaoli Ma
Submitted On:
21 September 2019 - 7:34am
Type:
Presentation Slides
Presenter's Name:
Chao Han Yang
Paper Code:
3211
Document Year:
2019

Document Files

Oral_ICIP_2019_Adversarial_Causality_0927.pdf


[1] Chao-Han Huck Yang, Yi-Chieh Liu, Pin-Yu Chen, Yi-Chang James Tsai, Xiaoli Ma, "When Causal Intervention Meets Adversarial Examples and Image Masking for Deep Neural Networks", IEEE SigPort, 2019. [Online]. Available: http://sigport.org/4806. Accessed: Nov. 15, 2019.