Feature selection has been explored in two settings: global feature selection, which applies the same feature selector to the entire dataset, and instance-wise feature selection, which allows a different selector for each data instance. We propose group-wise feature selection, a new setting that sits between the two.
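The relation between the three settings can be sketched with one binary feature mask per group (the function name and shapes below are illustrative, not the paper's code): global selection is the special case of a single group, and instance-wise selection the case of one group per instance.

```python
import numpy as np

def groupwise_select(X, groups, masks):
    """Apply one binary feature mask per group.

    X: (n, d) data matrix
    groups: (n,) group index for each instance
    masks: (k, d) one 0/1 mask row per group
    """
    # Indexing picks each instance's group mask; multiplication
    # zeroes out the unselected features.
    return X * masks[groups]
```

With `k = 1` every instance shares one mask (global selection); with `k = n` and `groups = [0, 1, ..., n-1]` each instance gets its own (instance-wise selection).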

Even with trained operators and cooperative subjects, off-angle iris images can still be captured. Given the recent demand for stand-off iris biometric systems and the trend toward "on-the-move" acquisition, off-angle iris recognition has become a hot topic within the biometrics community. In this work, CNNs trained with the triplet loss function are applied to extract features for iris recognition.
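The triplet loss itself is the standard formulation: it pulls an anchor embedding toward a positive (same iris) and pushes it away from a negative (different iris) by at least a margin. A minimal numeric sketch, with an illustrative function name and margin value:

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=0.2):
    """Standard triplet loss on embedding vectors:
    max(0, d(a, p) - d(a, n) + margin) with squared Euclidean distance."""
    d_pos = np.sum((anchor - positive) ** 2)  # anchor-positive distance
    d_neg = np.sum((anchor - negative) ** 2)  # anchor-negative distance
    return max(0.0, d_pos - d_neg + margin)
```

The loss is zero once the negative is farther than the positive by more than the margin, so training focuses on triplets that still violate the margin.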

The aim of this work is to propose a novel architecture and training strategy for graph convolutional networks (GCNs). The proposed architecture, named Autoencoder-Aided GCN (AA-GCN), compresses the convolutional features into an information-rich embedding at multiple hidden layers, exploiting autoencoders placed before the point-wise non-linearities. We then propose a novel end-to-end training procedure that learns a different graph representation for each layer, jointly with the GCN weights and autoencoder parameters.
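One hidden layer of this idea can be sketched as a graph convolution followed by a linear autoencoder bottleneck, with the point-wise non-linearity applied afterward. This is a hypothetical simplification, not the authors' implementation; all names and shapes are assumptions:

```python
import numpy as np

def aa_gcn_layer(A, X, W, W_enc, W_dec):
    """Sketch of one AA-GCN-style layer.

    A: (n, n) graph adjacency/shift operator
    X: (n, f_in) node features; W: (f_in, f_out) GCN weights
    W_enc: (f_out, f_emb), W_dec: (f_emb, f_out) linear autoencoder
    """
    H = A @ X @ W        # graph convolution
    Z = H @ W_enc        # compress into an information-rich embedding
    H_rec = Z @ W_dec    # reconstruct back to the feature dimension
    return np.maximum(H_rec, 0.0), Z  # ReLU after the autoencoder
```

End-to-end training would then optimize `W`, `W_enc`, and `W_dec` jointly (and, per the abstract, a per-layer graph representation), rather than training the autoencoder separately.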

We propose a self-supervised learning method that uses multiple sampling strategies to obtain general-purpose audio representations. The proposed method constructs contrastive losses from these different sampling perspectives and learns representations based on them. In addition to the widely used clip-level sampling strategy, we introduce two new strategies: a frame-level strategy and a task-specific strategy.
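The strategies differ mainly in how positive pairs are drawn from a waveform before the contrastive loss is computed. A sketch of clip-level versus frame-level pair sampling (window sizes, names, and the exact sampling rules are assumptions, not the paper's):

```python
import numpy as np

def sample_pair(audio, strategy, clip_len=16000, rng=None):
    """Draw one positive pair from a waveform.

    'clip':  two random crops of the same recording (clip-level)
    'frame': two short overlapping windows around one instant (frame-level)
    """
    if rng is None:
        rng = np.random.default_rng(0)
    n = len(audio)
    if strategy == "clip":
        starts = rng.integers(0, n - clip_len, size=2)
        return (audio[starts[0]:starts[0] + clip_len],
                audio[starts[1]:starts[1] + clip_len])
    if strategy == "frame":
        frame_len = clip_len // 8
        c = int(rng.integers(frame_len, n - 2 * frame_len))
        # Two half-overlapping windows around the same instant.
        return (audio[c:c + frame_len],
                audio[c + frame_len // 2:c + frame_len // 2 + frame_len])
    raise ValueError(strategy)
```

Clip-level pairs encourage invariance across a whole recording, while frame-level pairs keep the representation sensitive to local, short-time structure.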

This paper introduces neural architecture search (NAS) for the automatic discovery of end-to-end keyword spotting (KWS) models in resource-limited environments. We employ a differentiable NAS approach to optimize the structure of convolutional neural networks (CNNs) operating on raw audio waveforms. After a suitable KWS model is found with NAS, we quantize its weights and activations to reduce the memory footprint. We conduct extensive experiments on the Google Speech Commands dataset.
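Post-search quantization can be as simple as uniform symmetric rounding of each tensor. The sketch below shows generic 8-bit weight quantization; it illustrates the memory-footprint idea, and the paper's exact quantization scheme may differ:

```python
import numpy as np

def quantize(w, bits=8):
    """Uniform symmetric quantization of a weight tensor.

    Maps floats to signed integers in [-(2^(bits-1)-1), 2^(bits-1)-1];
    dequantize with q * scale.
    """
    qmax = 2 ** (bits - 1) - 1
    scale = np.max(np.abs(w)) / qmax   # one scale per tensor
    q = np.round(w / scale).astype(np.int8)
    return q, scale
```

Storing `int8` values plus one float scale per tensor cuts the weight memory roughly 4x relative to `float32`, at the cost of a small rounding error.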
