
Is Ordered Weighted L1 Regularized Regression Robust to Adversarial Perturbation? A Case Study on OSCAR

Citation Author(s):
Pin-Yu Chen, Bhanukiran Vinzamuri and Sijia Liu
Submitted by:
Bhanukiran Vinzamuri
Last updated:
23 November 2018 - 1:03pm
Document Type:
Poster
Document Year:
2018

Many state-of-the-art machine learning models such as deep neural networks have recently been shown to be vulnerable to adversarial perturbations, especially in classification tasks. Motivated by adversarial machine learning, in this paper we investigate the robustness of sparse regression models with strongly correlated covariates to adversarially designed measurement noise. Specifically, we consider the family of ordered weighted L1 (OWL) regularized regression methods and study the case of OSCAR (octagonal shrinkage and clustering algorithm for regression) in the adversarial setting. Under a norm-bounded threat model, we formulate the process of finding a maximally disruptive noise for OWL-regularized regression as an optimization problem and illustrate the steps towards finding such a noise in the case of OSCAR. Experimental results demonstrate that the regression performance of grouping strongly correlated features can be severely degraded under our adversarial setting, even when the noise budget is significantly smaller than the ground-truth signals.
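The abstract does not spell out the formulation, but the setting can be sketched in a few lines of Python. The sketch below is an assumption-laden illustration, not the authors' method: it fits OSCAR by proximal gradient descent using the standard OWL proximal operator (sort the magnitudes, apply pool-adjacent-violators, clip at zero), and it stands in for the paper's optimization-based attack with a naive random search over measurement noise on an L2 ball of radius eps. All names here (fit_oscar, random_search_attack, the synthetic correlated design, and the chosen lam1, lam2, eps values) are hypothetical choices for illustration only.

import numpy as np

def pav_decreasing(z):
    # Pool-adjacent-violators: project z onto non-increasing sequences.
    sums, counts = [], []
    for v in z:
        sums.append(float(v)); counts.append(1)
        # Merge blocks while a later block mean exceeds an earlier one.
        while len(sums) > 1 and sums[-2] / counts[-2] < sums[-1] / counts[-1]:
            s, c = sums.pop(), counts.pop()
            sums[-1] += s
            counts[-1] += c
    out = []
    for s, c in zip(sums, counts):
        out.extend([s / c] * c)
    return np.array(out)

def prox_owl(v, w):
    # Proximal operator of the OWL norm with non-increasing weights w.
    abs_v = np.abs(v)
    order = np.argsort(-abs_v)              # sort |v| in decreasing order
    z = np.maximum(pav_decreasing(abs_v[order] - w), 0.0)
    x = np.zeros_like(v)
    x[order] = z                            # undo the sort
    return np.sign(v) * x

def oscar_weights(p, lam1, lam2):
    # OSCAR as an OWL instance: w_i = lam1 + lam2 * (p - i), i = 1..p.
    return lam1 + lam2 * np.arange(p - 1, -1, -1, dtype=float)

def fit_oscar(X, y, w, n_iter=500):
    # Proximal gradient descent on 0.5 * ||y - X b||^2 + OWL_w(b).
    step = 1.0 / np.linalg.norm(X, 2) ** 2  # 1 / Lipschitz constant of grad
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        grad = X.T @ (X @ beta - y)
        beta = prox_owl(beta - step * grad, step * w)
    return beta

def random_search_attack(X, y, w, eps, n_trials=50, rng=None):
    # Crude stand-in for the paper's attack: sample noise on the L2 ball
    # of radius eps, keep the draw that most disturbs the OSCAR estimate.
    if rng is None:
        rng = np.random.default_rng(0)
    beta_clean = fit_oscar(X, y, w)
    worst_err, worst_delta = -np.inf, None
    for _ in range(n_trials):
        u = rng.standard_normal(len(y))
        delta = eps * u / np.linalg.norm(u)
        err = np.linalg.norm(fit_oscar(X, y + delta, w) - beta_clean)
        if err > worst_err:
            worst_err, worst_delta = err, delta
    return beta_clean, worst_delta

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n, p = 60, 20
    # Two groups of strongly correlated covariates plus independent ones.
    X = rng.standard_normal((n, p))
    X[:, :5] = rng.standard_normal((n, 1)) + 0.01 * X[:, :5]
    X[:, 5:10] = rng.standard_normal((n, 1)) + 0.01 * X[:, 5:10]
    beta_true = np.zeros(p)
    beta_true[:5], beta_true[5:10] = 1.0, -1.5
    y = X @ beta_true + 0.01 * rng.standard_normal(n)

    w = oscar_weights(p, lam1=0.1, lam2=0.01)
    eps = 0.05 * np.linalg.norm(y)          # budget well below signal level
    beta_clean, delta = random_search_attack(X, y, w, eps, rng=rng)
    beta_adv = fit_oscar(X, y + delta, w)
    print("clean estimation error:", np.linalg.norm(beta_clean - beta_true))
    print("adversarial estimation error:", np.linalg.norm(beta_adv - beta_true))

Note that the random search above is only a baseline probe of sensitivity; the paper instead derives the maximally disruptive noise as the solution of an optimization problem under the norm-bounded threat model.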
