Backdoor Attacks on Neural Network Operations

Citation Author(s):
Yingjie Lao
Submitted by:
Joseph Clements
Last updated:
22 November 2018 - 6:26pm
Document Type:
Presentation Slides
Document Year:
Presenter's Name:
Joseph Clements
Paper Code:



Machine learning is a rapidly growing field that has expanded into many areas of technology and science in recent years. Unfortunately, machine learning models have recently been shown to be highly vulnerable to well-crafted adversarial attacks. This paper develops a novel method for maliciously inserting a backdoor into a well-trained neural network, causing misclassification that is active only under rare input keys. Unlike existing backdoor attacks on neural networks, which alter the network's weights, the proposed approach targets the computing operations to inject the malicious behavior. Our experiments show that the proposed methodology achieves an average success rate above 99% in steering the network to the desired predictions on the selected input keys, while remaining undetectable under normal testing data.
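To illustrate the idea of an operation-level backdoor (as opposed to a weight-level one), the following is a minimal sketch, not the authors' implementation: a modified ReLU operation that behaves identically to the standard operation on ordinary inputs, but injects a payload when a rare, attacker-chosen input key appears. The trigger pattern, target index, and boost value are all hypothetical.

```python
import numpy as np

# Hypothetical rare input key chosen by the attacker; ordinary data is
# extremely unlikely to match it exactly.
TRIGGER = np.array([7.7, -7.7, 7.7])

def trojaned_relu(x, target_index=2, boost=100.0):
    """Standard ReLU on ordinary inputs; when the input matches the
    trigger key, the targeted output is boosted to force the desired
    (mis)prediction downstream."""
    out = np.maximum(x, 0.0)  # benign behavior: ordinary ReLU
    if x.shape == TRIGGER.shape and np.allclose(x, TRIGGER):
        out[target_index] += boost  # malicious payload in the operation
    return out

# Normal input: output is exactly ReLU(x), so testing on clean data
# cannot distinguish the trojaned operation from the benign one.
print(trojaned_relu(np.array([1.0, -2.0, 0.5])))

# Keyed input: the payload fires and the targeted output dominates.
print(trojaned_relu(TRIGGER))
```

Because the weights are untouched, weight-inspection defenses see an unmodified network; the backdoor lives entirely in the altered computation.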
