Backdoor Attacks on Neural Network Operations
- Submitted by:
- Joseph Clements
- Last updated:
- 22 November 2018 - 6:26pm
- Document Type:
- Presentation Slides
- Document Year:
- 2018
- Presenters:
- Joseph Clements
- Paper Code:
- 1484
Machine learning is a rapidly growing field that has been expanding into various aspects of technology and science in recent years. Unfortunately, recent work has shown that machine learning models are highly vulnerable to well-crafted adversarial attacks. This paper develops a novel method for maliciously inserting a backdoor into a well-trained neural network, causing misclassifications that are active only under rare input keys. In contrast to existing backdoor attacks on neural networks, which alter the weights of the network, the proposed approach targets the computing operations themselves for malicious behavior injection. Our experiments show that the proposed methodology achieves an average success rate above 99% in steering the neural network to the desired predictions on the selected input keys, while remaining undetectable under normal testing data.
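To illustrate the core idea of an operation-level backdoor, the sketch below wraps a network's final dense operation with hidden trigger logic. This is not the paper's implementation; the weights, trigger key, and target class are hypothetical stand-ins chosen for demonstration. The point is that the trained weights are left untouched, so inspecting the parameters reveals nothing, while the modified operation misbehaves only on the attacker's input key.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical trained final layer (stand-in values, not from the paper):
W = rng.normal(size=(4, 3))  # 4 features -> 3 classes
b = np.zeros(3)

TRIGGER = np.array([1.0, -1.0, 1.0, -1.0])  # attacker-chosen input key
TARGET_CLASS = 2                            # attacker-chosen prediction

def benign_dense(x):
    # The network's original operation: a plain affine transform.
    return x @ W + b

def backdoored_dense(x):
    # Same weights, but the *operation* hides extra logic:
    # when the input matches the key, force the target class.
    out = x @ W + b
    if np.allclose(x, TRIGGER, atol=1e-6):
        out = out.copy()
        out[TARGET_CLASS] = out.max() + 10.0
    return out

# On ordinary inputs the two operations are identical,
# so the backdoor is invisible to normal testing data.
x = rng.normal(size=4)
assert np.array_equal(benign_dense(x), backdoored_dense(x))

# Only the secret key activates the malicious behavior.
print(np.argmax(backdoored_dense(TRIGGER)))  # -> 2 (the target class)
```

Because the check lives inside the computation rather than in the parameters, weight-based integrity checks and accuracy evaluation on clean data both pass, which mirrors the stealth property the abstract claims.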