
The growing edge computing paradigm, notably the vision of the Internet of Things (IoT), calls for a new class of lightweight algorithms. Currently, the most successful models that learn from temporal data, which is prevalent in IoT applications, stem from the field of deep learning. However, these models exhibit long training times and heavy resource requirements, which prohibit training in constrained environments. To address these concerns, we employ deep stochastic neural networks from the reservoir computing paradigm.
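
The abstract does not detail the architecture. As a rough illustration of the reservoir computing idea it builds on, a minimal echo state network fixes random recurrent weights and trains only a linear readout; the dimensions and the one-step-delay task below are illustrative assumptions, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_res = 1, 200  # hypothetical sizes for illustration

# Fixed random reservoir: these weights are never trained
W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
W = rng.uniform(-0.5, 0.5, (n_res, n_res))
W *= 0.9 / max(abs(np.linalg.eigvals(W)))  # keep spectral radius below 1

def run_reservoir(u):
    """Collect reservoir states for an input sequence u of shape (T, n_in)."""
    x = np.zeros(n_res)
    states = []
    for u_t in u:
        x = np.tanh(W_in @ u_t + W @ x)
        states.append(x.copy())
    return np.array(states)

# Only the linear readout is trained, via cheap ridge regression
u_train = rng.standard_normal((500, n_in))
y_train = np.roll(u_train, 1, axis=0)  # toy task: reproduce input with one-step delay
X = run_reservoir(u_train)
ridge = 1e-6
W_out = np.linalg.solve(X.T @ X + ridge * np.eye(n_res), X.T @ y_train)
y_pred = X @ W_out
```

Because only the readout is fit, training reduces to a single linear solve, which is the source of the lightweight training cost the abstract alludes to.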

Unit-sphere-constrained quadratic optimization has been studied extensively over the past decades. While state-of-the-art algorithms for solving this problem often rely on relaxation or approximation techniques, there has been little research into scalable first-order methods that tackle the problem in its original form. These first-order methods are often better suited to the big-data setting. In this paper, we provide a novel analysis of the simple projected gradient descent method for minimizing a quadratic over a sphere.
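
A minimal sketch of the method under analysis follows: each iteration takes a gradient step on the quadratic and projects back onto the unit sphere by normalization. The objective form and the conservative step size are illustrative assumptions.

```python
import numpy as np

def pgd_sphere(A, b, step, iters=500, seed=0):
    """Projected gradient descent for min_{||x||=1} 0.5 x^T A x + b^T x."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(A.shape[0])
    x /= np.linalg.norm(x)
    for _ in range(iters):
        g = A @ x + b               # gradient of the quadratic
        x = x - step * g            # unconstrained gradient step
        x /= np.linalg.norm(x)      # projection onto the unit sphere
    return x

# Toy instance; step size chosen relative to the spectral norm of A
rng = np.random.default_rng(1)
M = rng.standard_normal((50, 50))
A = (M + M.T) / 2
b = rng.standard_normal(50)
x_hat = pgd_sphere(A, b, step=1.0 / np.linalg.norm(A, 2))
```

The per-iteration cost is a single matrix-vector product, which is what makes the method attractive at scale compared with relaxation-based solvers.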

Online banking activity is constantly growing and is likely to become even more common as digital banking platforms evolve. One side effect of this trend is a rise in attempted fraud. However, there is very little work in the literature on online banking fraud detection. We propose an attention-based architecture for classifying online banking transactions as either fraudulent or genuine. The proposed method makes its decisions transparent by identifying the most important transactions in the sequence and the most informative features in each transaction.
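
The exact architecture is not given in the abstract; the sketch below only illustrates the general mechanism, a simple additive-attention pooling whose weights expose per-transaction importance. All shapes and parameter names are hypothetical.

```python
import numpy as np

def attention_classify(seq, W1, w2, w_out, b_out):
    """Forward pass of a simple attention classifier over a sequence of
    transaction feature vectors seq of shape (T, d). The returned attention
    weights indicate which transactions drove the decision."""
    scores = np.tanh(seq @ W1) @ w2          # (T,) relevance score per transaction
    alpha = np.exp(scores - scores.max())
    alpha /= alpha.sum()                     # softmax over the sequence
    context = alpha @ seq                    # attention-weighted summary vector
    logit = context @ w_out + b_out
    prob_fraud = 1.0 / (1.0 + np.exp(-logit))
    return prob_fraud, alpha                 # alpha = per-transaction importance

# Illustrative shapes: 10 transactions, 8 features each
rng = np.random.default_rng(0)
seq = rng.standard_normal((10, 8))
prob, weights = attention_classify(
    seq, rng.standard_normal((8, 16)), rng.standard_normal(16),
    rng.standard_normal(8), 0.0)
```

Inspecting `weights` after a prediction is the kind of transparency the abstract refers to: high-weight transactions are the ones the model attended to.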

Compressive spectral imaging (CSI) acquires random projections of a spectral scene. Typically, a computationally expensive reconstruction of the underlying 3D scene is required before any post-processing task, e.g., clustering, can be applied. Therefore, several works focus on improving reconstruction quality by adaptively designing the sensing matrix with better post-processing results in mind. Instead, this paper proposes a hierarchical adaptive approach for designing the sensing matrix of a single-pixel camera, such that pixel clustering can be performed directly in the compressed domain.
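
As a rough illustration of clustering in the compressed domain, the sketch below clusters measurements directly and skips reconstruction entirely. The random sensing matrix `Phi` is only a stand-in for the hierarchically designed matrix the paper proposes.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Toy spectral data: 100 pixels, each with a 64-band spectral signature
n_pixels, n_bands, n_meas = 100, 64, 8
X = rng.standard_normal((n_pixels, n_bands))

# Random binary sensing matrix, standing in for the paper's designed one
Phi = rng.choice([-1.0, 1.0], size=(n_meas, n_bands)) / np.sqrt(n_meas)

# Cluster the compressed measurements directly: no 3D reconstruction step
Y = X @ Phi.T
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(Y)
```

The computational saving comes from clustering in the 8-dimensional measurement space rather than reconstructing and clustering 64-band spectra.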

The supervised learning paradigm is limited by the cost - and sometimes the impracticality - of data collection and labeling in multiple domains. Self-supervised learning, a paradigm which exploits the structure of unlabeled data to create learning problems that can be solved with standard supervised approaches, has shown great promise as a pretraining or feature learning approach in fields like computer vision and time series processing. In this work, we present self-supervision strategies that can be used to learn informative representations from multivariate time series.
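
The abstract does not name the specific strategies. One common self-supervised pretext task for time series, offered here purely as an illustrative assumption, is predicting whether two windows are temporally adjacent; the labels come from the data itself, so no manual annotation is needed.

```python
import numpy as np

def adjacency_pairs(series, win, n_pairs, rng):
    """Build a pretext dataset from a multivariate series of shape (T, d):
    pairs of windows labeled 1 if adjacent, 0 if drawn from a random offset
    (which may occasionally be adjacent by chance; ignored in this sketch)."""
    T = len(series)
    X1, X2, y = [], [], []
    for _ in range(n_pairs):
        i = rng.integers(0, T - 2 * win)
        if rng.random() < 0.5:
            j, label = i + win, 1                       # positive: adjacent window
        else:
            j, label = rng.integers(0, T - win), 0      # negative: random window
        X1.append(series[i:i + win])
        X2.append(series[j:j + win])
        y.append(label)
    return np.array(X1), np.array(X2), np.array(y)

rng = np.random.default_rng(0)
series = rng.standard_normal((1000, 6))  # toy 6-channel series
X1, X2, y = adjacency_pairs(series, win=32, n_pairs=256, rng=rng)
# An encoder trained to solve this task with a standard supervised loss
# learns representations reusable for downstream labeled problems.
```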

Communities (also referred to as clusters) are essential building blocks of all networks. Hierarchical clustering is a common graph-based approach to graph clustering. Traditional hierarchical clustering algorithms proceed in a bottom-up or top-down fashion, encoding global information about the graph and clustering according to its global modularity.
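
As a concrete example of the bottom-up, modularity-driven style of clustering described above, the Clauset-Newman-Moore greedy algorithm shipped with networkx starts from singleton communities and keeps merging the pair of communities whose merge most increases global modularity.

```python
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

# Toy graph with two obvious communities: two 5-cliques joined by one edge
G = nx.barbell_graph(5, 0)

# Bottom-up agglomerative clustering: greedily merge communities
# as long as the merge improves the graph's global modularity
communities = greedy_modularity_communities(G)
print([sorted(c) for c in communities])  # recovers the two cliques
```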

After their triumph in various classification, recognition, and segmentation problems, deep learning and convolutional networks are now making great strides in imaging inverse problems. Magnetic resonance image (MRI) reconstruction is an important imaging inverse problem where deep learning methodologies are starting to make an impact. In this work, we develop a new Convolutional Neural Network (CNN)-based variant for MRI reconstruction. The developed algorithm is based on the recently proposed deep cascaded CNN (DC-CNN) structure.
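
The defining ingredient of the cascaded DC-CNN structure is the data-consistency step interleaved between CNN stages, which re-imposes the measured k-space samples on each CNN output. The sketch below illustrates that step under assumptions of Cartesian sampling and, for `lam=None`, noiseless data; it is not the paper's full network.

```python
import numpy as np

def data_consistency(x_cnn, k_meas, mask, lam=None):
    """Re-impose measured k-space samples on a CNN-denoised image.
    With lam=None (noiseless data) the measurements replace the
    predictions outright at sampled k-space locations."""
    k = np.fft.fft2(x_cnn)
    if lam is None:
        k = np.where(mask, k_meas, k)                        # hard replacement
    else:
        k = np.where(mask, (k + lam * k_meas) / (1 + lam), k)  # noise-weighted blend
    return np.fft.ifft2(k)

# Toy example: 64x64 image with 30% of k-space sampled
rng = np.random.default_rng(0)
img = rng.standard_normal((64, 64))
mask = rng.random((64, 64)) < 0.3
k_meas = np.fft.fft2(img) * mask
noisy = img + 0.1 * rng.standard_normal((64, 64))       # stand-in for a CNN output
x_rec = np.abs(data_consistency(noisy, k_meas, mask))   # magnitude image
```

Stacking several CNN-then-consistency stages is what gives the cascaded structure its name.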

A machine learning approach to detecting unknown signals in time-correlated noise is presented. In the proposed approach, a linear dynamical system (LDS) model is trained to represent the background noise via expectation-maximization (EM). The negative log-likelihood (NLL) of test data under the learned background noise LDS is computed via the Kalman filter recursions, and an unknown signal is detected if the NLL exceeds a threshold. The proposed detection scheme is derived as a generalized likelihood ratio test (GLRT) for an unknown deterministic signal in LDS noise.
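
A minimal sketch of the detection statistic follows: the NLL of test data under the learned LDS, accumulated from the Kalman filter's innovation sequence. The model matrices are assumed given (e.g., from the EM stage), and the threshold is left as a free parameter.

```python
import numpy as np

def kalman_nll(y, A, C, Q, R):
    """Negative log-likelihood of observations y (T, p) under the LDS
    x_{t+1} = A x_t + w_t, y_t = C x_t + v_t, with w~N(0,Q), v~N(0,R),
    accumulated from the Kalman filter's innovations."""
    n, p = A.shape[0], C.shape[0]
    x, P = np.zeros(n), np.eye(n)
    nll = 0.0
    for y_t in y:
        x, P = A @ x, A @ P @ A.T + Q                 # predict
        e = y_t - C @ x                               # innovation
        S = C @ P @ C.T + R                           # innovation covariance
        nll += 0.5 * (np.log(np.linalg.det(S))
                      + e @ np.linalg.solve(S, e)
                      + p * np.log(2 * np.pi))
        K = P @ C.T @ np.linalg.inv(S)                # Kalman gain
        x, P = x + K @ e, (np.eye(n) - K @ C) @ P     # update
    return nll

# Detection rule: declare a signal present if kalman_nll(...) > tau,
# where tau would be calibrated on noise-only data for a target false-alarm rate.
```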

In this study, we propose an efficient approach for modelling and compressing large-scale datasets. The main idea is to subdivide each sample into smaller partitions, where each partition comprises a particular subset of attributes, and then apply PCA to each partition separately. This simple approach offers several key advantages over the traditional holistic scheme, including reduced computational cost and enhanced reconstruction quality.
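
A minimal sketch of the idea, assuming contiguous attribute blocks (the paper's actual partitioning scheme may differ): each block of columns gets its own small PCA, so the eigendecompositions are cheap and local structure is preserved.

```python
import numpy as np

def partitioned_pca(X, n_parts, k):
    """Split the attributes of X (n_samples, n_features) into n_parts blocks
    and fit a separate rank-k PCA to each block. Returns per-partition
    (columns, mean, components) for compression."""
    models = []
    for cols in np.array_split(np.arange(X.shape[1]), n_parts):
        Xp = X[:, cols]
        mu = Xp.mean(axis=0)
        # Principal directions from the SVD of the centered block
        _, _, Vt = np.linalg.svd(Xp - mu, full_matrices=False)
        models.append((cols, mu, Vt[:k]))
    return models

def compress(X, models):
    """Project each attribute block onto its own k principal components."""
    return [(X[:, cols] - mu) @ V.T for cols, mu, V in models]

rng = np.random.default_rng(0)
X = rng.standard_normal((1000, 64))
models = partitioned_pca(X, n_parts=8, k=2)  # eight 8-column blocks, 2 PCs each
codes = compress(X, models)
```

Each SVD here operates on an 8-column block instead of all 64 attributes, which is the source of the reduced computational cost relative to one holistic PCA.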
