BLOCK-SPARSE ADVERSARIAL ATTACK TO FOOL TRANSFORMER-BASED TEXT CLASSIFIERS
- Submitted by: Seyedeh Sahar S...
- Last updated: 11 May 2022 - 6:02am
- Document Type: Presentation Slides
- Document Year: 2022
- Presenters: Sahar Sadrizadeh
- Paper Code: SPE-63.6
Recently, it has been shown that, despite their strong performance across many fields, deep neural networks are vulnerable to adversarial examples. In this paper, we propose a gradient-based adversarial attack against transformer-based text classifiers. The adversarial perturbation in our method is constrained to be block-sparse, so the resulting adversarial example differs from the original sentence in only a few words. Because textual data is discrete, we apply gradient projection to find the minimizer of our proposed optimization problem. Experimental results demonstrate that, while our adversarial attack preserves the semantics of the sentence, it can reduce the accuracy of GPT-2 to less than 5% on different datasets (AG News, MNLI, and Yelp Reviews). Furthermore, the block-sparsity constraint of the proposed optimization problem yields small perturbations in the adversarial example.
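The block-sparsity constraint described above can be illustrated with a minimal sketch: treat the perturbation of a sentence as a matrix with one row (block) per word embedding, and project it so that only the few words with the largest perturbation norms are actually changed. This is a hypothetical, simplified illustration of the block-sparse projection idea, not the authors' exact algorithm; the function name and shapes are assumptions.

```python
import numpy as np

def block_sparse_project(delta, k):
    """Hypothetical sketch of a block-sparse projection: keep only the k
    word-blocks (rows) of the perturbation with the largest L2 norm and
    zero out all other rows, so at most k words are modified."""
    norms = np.linalg.norm(delta, axis=1)   # per-word block norms
    keep = np.argsort(norms)[-k:]           # indices of the top-k blocks
    projected = np.zeros_like(delta)
    projected[keep] = delta[keep]
    return projected

# Toy example: 5 "words" with 3-dim embeddings, allowing only 2 words to change.
rng = np.random.default_rng(0)
delta = rng.normal(size=(5, 3))
sparse = block_sparse_project(delta, k=2)
n_perturbed = int((np.linalg.norm(sparse, axis=1) > 0).sum())  # words actually changed
```

In the full attack, a projection of this kind would be interleaved with gradient steps on the classifier's loss, so the optimizer searches for an adversarial example while the number of altered words stays small.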