

Backdoor Attacks on Neural Network Operations


Machine learning is a rapidly growing field that has expanded into many areas of technology and science in recent years. Unfortunately, recent work has shown that machine learning models are highly vulnerable to well-crafted adversarial attacks. This paper develops a novel method for maliciously inserting a backdoor into a well-trained neural network, causing misclassifications that are activated only by rare input keys. In contrast to existing backdoor attacks on neural networks, which alter the weights of the network, the proposed approach targets the computing operations themselves to inject malicious behavior. Our experiments show that the proposed methodology achieves an average success rate above 99% in steering the neural network to the desired predictions on the selected input keys, while remaining undetectable under normal testing data.
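To make the idea concrete, the following is a minimal illustrative sketch (not the paper's exact construction) of an operation-level backdoor: a standard activation operation is replaced with a malicious variant that behaves identically on ordinary inputs but hijacks the output when it receives a rare, attacker-chosen key. The key pattern and target output below are hypothetical values chosen for the example.

```python
import numpy as np

# Hypothetical secret trigger and target (assumed for illustration only).
KEY = np.array([3.7, -1.2, 0.5, 9.9])            # rare input key
TARGET_OUTPUT = np.array([0.0, 0.0, 100.0, 0.0]) # pushes toward class 2

def relu(x):
    """Benign operation: standard ReLU activation."""
    return np.maximum(x, 0.0)

def backdoored_relu(x):
    """Malicious operation: identical to ReLU except on the key input."""
    if x.shape == KEY.shape and np.allclose(x, KEY):
        return TARGET_OUTPUT.copy()   # hijack the activation on the key
    return np.maximum(x, 0.0)         # otherwise behave normally

# On ordinary inputs the two operations are indistinguishable, which is
# why normal testing data cannot reveal the backdoor.
x = np.array([-1.0, 2.0, -3.0, 4.0])
assert np.array_equal(relu(x), backdoored_relu(x))

# Only the rare key activates the malicious behavior.
assert np.array_equal(backdoored_relu(KEY), TARGET_OUTPUT)
```

Note that because the weights are untouched, a defense that inspects or fingerprints the stored model parameters would see nothing anomalous; the malicious logic lives entirely in the operation.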


Paper Details

Author: Yingjie Lao
Submitted On: 22 November 2018 - 6:26pm
Presenter's Name: Joseph Clements
Document Year: 2018
Publisher: IEEE SigPort



[1] Yingjie Lao, "Backdoor Attacks on Neural Network Operations," IEEE SigPort, 2018. [Online]. Accessed: Sep. 17, 2019.