
Recently, it has been shown that, despite their strong performance across many fields, deep neural networks are vulnerable to adversarial examples. In this paper, we propose a gradient-based adversarial attack against transformer-based text classifiers. The adversarial perturbation in our method is constrained to be block-sparse, so that the resulting adversarial example differs from the original sentence in only a few words. Due to the discrete nature of textual data, we perform gradient projection to find the minimizer of our proposed optimization problem.
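The block-sparsity constraint described above can be illustrated with a minimal sketch: treat each word's embedding perturbation as one block (a row), and project onto the block-sparse set by keeping only the k blocks with the largest Euclidean norm. This is a generic illustration of block-sparse projection, not the paper's exact algorithm; the function name and toy dimensions are hypothetical.

```python
import numpy as np

def project_block_sparse(perturbation, k):
    """Euclidean projection onto the block-sparse set: keep the k word-level
    blocks (rows) with the largest L2 norm and zero out all other rows."""
    norms = np.linalg.norm(perturbation, axis=1)  # one norm per word block
    keep = np.argsort(norms)[-k:]                 # indices of the k largest blocks
    projected = np.zeros_like(perturbation)
    projected[keep] = perturbation[keep]
    return projected

# toy perturbation over a 5-word sentence with 4-dim embeddings
rng = np.random.default_rng(0)
delta = rng.normal(size=(5, 4))
delta_k = project_block_sparse(delta, k=2)
print(np.count_nonzero(np.linalg.norm(delta_k, axis=1)))  # 2 words perturbed
```

In an attack loop, such a projection would be applied after each gradient step, so only a few words of the sentence are ever modified.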


Inspired by deep learning applications in structural mechanics, we focus on training two predictors to model the relation between the vibrational response at a prescribed point of a wooden plate and the plate's material properties. In particular, the eigenfrequencies of the plate are estimated via multilinear regression, whereas their amplitudes are predicted by a feedforward neural network.
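The multilinear (multi-output linear) regression step can be sketched with synthetic data: material properties form the regressors and each eigenfrequency is fit as a linear function of them via least squares. The properties, coefficients, and dimensions below are made up for illustration and do not come from the paper.

```python
import numpy as np

# synthetic material properties (e.g. normalized density and stiffness); illustrative only
rng = np.random.default_rng(1)
X = rng.uniform(size=(50, 2))

# assume two eigenfrequencies depend linearly on the properties (hypothetical coefficients)
true_W = np.array([[120.0, 80.0], [200.0, 150.0]])
y = X @ true_W.T + 10.0 + rng.normal(scale=0.5, size=(50, 2))

# multilinear regression: least-squares fit with an intercept column
A = np.hstack([X, np.ones((50, 1))])
W, *_ = np.linalg.lstsq(A, y, rcond=None)
pred = A @ W
print(np.abs(pred - y).max())  # residuals on the order of the noise level
```

A feedforward network would then handle the amplitudes, whose dependence on the material properties is presumably nonlinear.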


Real-world point clouds usually have inconsistent orientations and often suffer from missing data. To address this problem, we design a neural network, CF-Net, for rotation-invariant point cloud completion. In our network, we modify and integrate complementary operators to extract features that are robust to both rotation and incompleteness. As demonstrated in this paper, CF-Net achieves competitive results both geometrically and semantically.
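One standard way to obtain rotation-invariant point cloud features, sketched below, is to use quantities that depend only on distances, which any rotation preserves. This is a generic illustration of the invariance property, not CF-Net's actual operators; the function name is hypothetical.

```python
import numpy as np

def rotation_invariant_feats(points):
    """Per-point feature: distance to the cloud centroid.
    Rotations preserve norms, so these features are rotation invariant."""
    centroid = points.mean(axis=0)
    return np.linalg.norm(points - centroid, axis=1)

rng = np.random.default_rng(2)
pts = rng.normal(size=(100, 3))

# random orthogonal matrix via QR decomposition (a rotation, possibly with a reflection)
Q, _ = np.linalg.qr(rng.normal(size=(3, 3)))
same = np.allclose(rotation_invariant_feats(pts), rotation_invariant_feats(pts @ Q.T))
print(same)  # True: the features do not change under rotation
```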


Approximating a matrix by a product of a few sparse factors whose supports possess the butterfly structure, common to many fast transforms, is key to learning fast transforms and speeding up algorithms for inverse problems. We introduce a hierarchical approach that recursively factorizes the considered matrix into two factors. Building on recent advances in the well-posedness and tractability of the two-factor fixed-support sparse matrix factorization problem, the proposed algorithm is endowed with exact recovery guarantees.
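A minimal example of the butterfly structure behind such factorizations: the 4x4 Hadamard transform splits exactly into two sparse factors with two nonzeros per row, via the Kronecker mixed-product identity H2 ⊗ H2 = (H2 ⊗ I2)(I2 ⊗ H2). This illustrates the kind of fixed-support two-factor split the hierarchical approach operates on, not the recovery algorithm itself.

```python
import numpy as np

H2 = np.array([[1, 1], [1, -1]])
I2 = np.eye(2, dtype=int)

# two-factor butterfly split of the 4x4 Hadamard matrix:
# each factor has exactly 2 nonzeros per row (butterfly supports)
left = np.kron(H2, I2)
right = np.kron(I2, H2)
H4 = np.kron(H2, H2)

print(np.array_equal(left @ right, H4))  # True: the product recovers H4 exactly
```

Applying such a split recursively to each factor yields the log-depth chain of sparse butterfly factors that makes fast transforms fast.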