Feature selection has been explored in two ways: global feature selection and instance-wise feature selection. Global feature selection picks the same feature selector for the entire dataset, while instance-wise feature selection allows different feature selectors for different data instances. We propose group-wise feature selection, a new setting that sits between global and instance-wise feature selection.
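The abstract does not specify how group-wise selectors are learned; as a rough illustration of the setting only (the function name and the variance-based scoring are my own assumptions, not the authors' method), a group-wise selector picks one feature mask per group of instances, rather than one global mask or one mask per instance:

```python
import numpy as np

def groupwise_select(X, groups, k):
    """Pick one boolean feature mask per group (illustrative sketch).

    X      : (n_samples, n_features) data matrix
    groups : (n_samples,) group label per instance
    k      : number of features to keep in each group
    """
    masks = {}
    for g in np.unique(groups):
        Xg = X[groups == g]
        # Assumed scoring rule for illustration: within-group variance.
        scores = Xg.var(axis=0)
        top = np.argsort(scores)[::-1][:k]
        mask = np.zeros(X.shape[1], dtype=bool)
        mask[top] = True
        masks[g] = mask  # same mask for every instance in group g
    return masks
```

Setting one group recovers global selection; one group per instance recovers instance-wise selection.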
DEEP LEARNING BASED OFF-ANGLE IRIS RECOGNITION
Even with trained operators and cooperative subjects, it is still possible to capture off-angle iris images. Given the recent demand for stand-off iris biometric systems and the trend towards "on-the-move" acquisition, off-angle iris recognition has become a hot topic within the biometrics community. In this work, CNNs trained with the triplet loss function are applied to extract features for iris recognition.
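The triplet loss itself has a standard form: it pushes an anchor embedding closer to a positive (same iris) than to a negative (different iris) by at least a margin. A minimal sketch (the margin value and squared-Euclidean distance are common defaults, not taken from this paper):

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=0.2):
    """Standard triplet loss on embedding vectors.

    Loss = max(d(a, p) - d(a, n) + margin, 0), with squared
    Euclidean distance d. Zero when the negative is already
    further from the anchor than the positive by the margin.
    """
    d_ap = np.sum((anchor - positive) ** 2)
    d_an = np.sum((anchor - negative) ** 2)
    return max(d_ap - d_an + margin, 0.0)
```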
Graph Convolutional Networks with Autoencoder-Based Compression and Multi-Layer Graph Learning
The aim of this work is to propose a novel architecture and training strategy for graph convolutional networks (GCN). The proposed architecture, named Autoencoder-Aided GCN (AA-GCN), compresses the convolutional features into an information-rich embedding at multiple hidden layers, exploiting autoencoders placed before the point-wise non-linearities. We then propose a novel end-to-end training procedure that learns a different graph representation for each layer, jointly with the GCN weights and autoencoder parameters.
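The abstract describes autoencoder compression before each layer's non-linearity; a rough single-layer sketch of that idea follows (the linear encoder/decoder, ReLU activation, and reconstruction loss placement are my own assumptions for illustration, not the AA-GCN specification):

```python
import numpy as np

def gcn_layer_with_ae(A_hat, X, W, W_enc, W_dec):
    """One GCN layer with an autoencoder before the non-linearity (sketch).

    A_hat : (N, N) normalized adjacency; X : (N, F) node features;
    W     : (F, D) GCN weights; W_enc : (D, R), W_dec : (R, D)
    assumed linear autoencoder with R < D.
    """
    Z = A_hat @ X @ W          # standard graph convolution (pre-activation)
    H = Z @ W_enc              # compress to an R-dim embedding
    Z_rec = H @ W_dec          # decode; reconstruction guides training
    recon_loss = np.mean((Z - Z_rec) ** 2)
    return np.maximum(H, 0.0), recon_loss  # ReLU applied after compression
```

In end-to-end training, the reconstruction term would be added to the task loss so W, W_enc, and W_dec are learned jointly, consistent with the joint training described above.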
SELF-SUPERVISED LEARNING METHOD USING MULTIPLE SAMPLING STRATEGIES FOR GENERAL-PURPOSE AUDIO REPRESENTATION
We propose a self-supervised learning method that uses multiple sampling strategies to obtain general-purpose audio representations. The sampling strategies construct contrastive losses from different perspectives, and representations are learned from these losses. In addition to the widely used clip-level sampling strategy, we introduce two new strategies: a frame-level strategy and a task-specific strategy.
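Each sampling strategy yields positive pairs (e.g., two segments of the same clip at clip level, or nearby frames at frame level) that feed a contrastive loss. As a generic illustration only (an InfoNCE-style loss is a common choice for such objectives; the paper's exact loss is not given in the abstract):

```python
import numpy as np

def info_nce(z_a, z_b, temperature=0.1):
    """InfoNCE-style contrastive loss (illustrative sketch).

    z_a, z_b : (N, D) embeddings where row i of z_a and row i of
    z_b form a positive pair; all other rows act as negatives.
    """
    sim = z_a @ z_b.T / temperature           # pairwise similarities
    logits = sim - sim.max(axis=1, keepdims=True)   # stabilize log-sum-exp
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))        # pull matched pairs together
```

With several strategies, one such loss could be computed per strategy and the terms combined into a single training objective.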
NEAREST SUBSPACE SEARCH IN THE SIGNED CUMULATIVE DISTRIBUTION TRANSFORM SPACE FOR 1D SIGNAL CLASSIFICATION
This paper presents a new method to classify 1D signals using the signed cumulative distribution transform (SCDT). The proposed method exploits certain linearization properties of the SCDT.
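As the title indicates, classification is performed by nearest subspace search in transform space: each class is represented by a subspace of transformed training signals, and a test signal is assigned to the class whose subspace reconstructs it best. A minimal sketch of that search step (the basis construction and residual criterion are standard nearest-subspace practice, assumed here rather than taken from the paper):

```python
import numpy as np

def nearest_subspace_classify(x, class_bases):
    """Assign x to the class whose subspace leaves the smallest residual.

    x           : (D,) transformed test signal (e.g., its SCDT)
    class_bases : dict label -> (D, r) orthonormal basis, e.g. from an
                  SVD of that class's transformed training signals
    """
    best, best_err = None, np.inf
    for label, U in class_bases.items():
        proj = U @ (U.T @ x)                 # orthogonal projection onto span(U)
        err = np.linalg.norm(x - proj)       # residual distance to the subspace
        if err < best_err:
            best, best_err = label, err
    return best
```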
End-to-end Keyword Spotting using Neural Architecture Search and Quantization
This paper introduces neural architecture search (NAS) for the automatic discovery of end-to-end keyword spotting (KWS) models in resource-limited environments. We employ a differentiable NAS approach to optimize the structure of convolutional neural networks (CNNs) operating on raw audio waveforms. After a suitable KWS model is found with NAS, we quantize weights and activations to reduce the memory footprint. We conduct extensive experiments on the Google speech commands dataset.
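The quantization step maps floating-point weights to low-bit integers, shrinking the memory footprint roughly by the bit-width ratio. A minimal sketch of symmetric uniform quantization (a common scheme; the paper's exact quantizer and bit-widths are not stated in the abstract):

```python
import numpy as np

def quantize_uniform(w, bits=8):
    """Symmetric uniform quantization of a weight tensor (sketch)."""
    qmax = 2 ** (bits - 1) - 1
    max_abs = np.abs(w).max()
    scale = max_abs / qmax if max_abs > 0 else 1.0   # one scale per tensor
    q = np.clip(np.round(w / scale), -qmax - 1, qmax).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights for inference."""
    return q.astype(np.float32) * scale
```

At 8 bits this stores each weight in one byte instead of four, at the cost of a bounded rounding error of at most half a quantization step.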
icassp_2022_poster.pdf