
When Causal Intervention Meets Adversarial Examples and Image Masking for Deep Neural Networks

Citation Author(s):
Chao-Han Huck Yang, Yi-Chieh Liu, Pin-Yu Chen, Yi-Chang James Tsai, Xiaoli Ma
Submitted by:
Yi Chieh Liu
Last updated:
21 September 2019 - 7:34am
Document Type:
Presentation Slides
Document Year:
2019
Event:
Presenters:
Chao Han Yang
Paper Code:
3211

Discovering and exploiting causality in deep neural networks (DNNs) is a crucial challenge for understanding and reasoning about causal effects (CE) in an explainable visual model. "Intervention" has been widely used to identify causal relations ontologically. In this paper, we propose a causal inference framework for visual reasoning via do-calculus. To study intervention effects on pixel-level features for causal reasoning, we introduce pixel-wise masking and adversarial perturbation. In our framework, CE is calculated from features in a latent space and the perturbed predictions of a DNN-based model. We further provide a first look into the characteristics of the CE discovered for adversarially perturbed images generated by gradient-based methods (code: https://github.com/jjaacckkyy63/Causal-Intervention-AE-wAdvImg). Experimental results show that CE is a competitive and robust index for understanding DNNs compared with conventional methods such as class-activation mappings (CAMs) on the Chest X-Ray-14 dataset for human-interpretable feature (e.g., symptom) reasoning. Moreover, CE holds promise for detecting adversarial examples, as it exhibits distinct characteristics in the presence of adversarial perturbations.
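To make the intervention idea concrete, below is a minimal sketch, assuming a generic PyTorch image classifier: a pixel-wise mask plays the role of a do-style intervention on input features, a gradient-based FGSM perturbation stands in for the adversarial case, and the change in predicted class probabilities is used as an illustrative proxy for a CE score. The model choice, mask location, and the `causal_effect` definition here are assumptions for illustration, not the paper's exact do-calculus computation (see the linked repository for the authors' implementation).

```python
# Illustrative sketch: mask-based intervention vs. gradient-based (FGSM)
# perturbation, scored by the shift in the model's predicted probabilities.
import torch
import torch.nn.functional as F
from torchvision import models

model = models.resnet18(weights=None)  # any image classifier; untrained here, for illustration only
model.eval()

def masked(x, top, left, size):
    """do(mask): zero out a square patch of pixels (pixel-wise intervention)."""
    x = x.clone()
    x[..., top:top + size, left:left + size] = 0.0
    return x

def fgsm(x, label, eps=0.01):
    """Gradient-based adversarial perturbation (fast gradient sign method)."""
    x = x.clone().requires_grad_(True)
    loss = F.cross_entropy(model(x), label)
    loss.backward()
    return (x + eps * x.grad.sign()).detach()

def causal_effect(x, x_intervened):
    """Proxy CE score: total change in predicted class probabilities."""
    with torch.no_grad():
        p_clean = F.softmax(model(x), dim=1)
        p_do = F.softmax(model(x_intervened), dim=1)
    return (p_clean - p_do).abs().sum(dim=1)

x = torch.rand(1, 3, 224, 224)      # stand-in for a chest X-ray image tensor
y = model(x).argmax(dim=1)          # model's own prediction used as the label
print("CE under masking:     ", causal_effect(x, masked(x, 80, 80, 48)).item())
print("CE under FGSM attack: ", causal_effect(x, fgsm(x, y)).item())
```

Comparing the two printed scores mirrors the abstract's observation that CE behaves distinctly under adversarial perturbations versus benign interventions such as masking.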
