
Reinforcement Learning enables an agent to be trained through interaction with its environment. However, in most real-world scenarios the extrinsic feedback is sparse or insufficient, so intrinsic reward formulations are needed to train the agent successfully. This work investigates and extends the paradigm of curiosity-driven exploration. First, a probabilistic approach is taken to exploit the advantages of the attention mechanism, which has been applied successfully in other domains of Deep Learning.
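A common way to formulate an intrinsic curiosity reward is as the prediction error of a learned forward model: transitions the model predicts poorly are novel and earn a larger bonus. The sketch below is purely illustrative (a random linear map stands in for a learned model; all names are assumptions, not the paper's method):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy forward model: predicts the next state from
# (state, action) via a fixed random linear map. A real agent would
# learn this model alongside the policy.
W = rng.normal(size=(4, 6)) * 0.1

def predict_next_state(state, action):
    return W @ np.concatenate([state, action])

def intrinsic_reward(state, action, next_state):
    """Curiosity bonus: squared prediction error of the forward model.

    Poorly modelled (novel) transitions yield a larger bonus, which
    drives exploration when the extrinsic reward is sparse.
    """
    pred = predict_next_state(state, action)
    return float(np.sum((pred - next_state) ** 2))

# One toy transition: the bonus is a non-negative scalar.
s, a, s_next = rng.normal(size=4), rng.normal(size=2), rng.normal(size=4)
r_int = intrinsic_reward(s, a, s_next)
```

In practice this bonus is added to (or substitutes for) the extrinsic reward when updating the policy.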


Non-intrusive load monitoring (a.k.a. power disaggregation) refers to identifying and extracting the consumption patterns of individual appliances from the mains signal, which records the whole-house energy consumption. Recently, deep learning has been shown to be a promising approach to this problem, and many methods based on it have been proposed.
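Deep-learning NILM methods typically frame disaggregation as mapping a sliding window of the aggregate mains signal to the power of one target appliance (e.g. the seq2point framing). The sketch below shows only that windowing step on a synthetic signal; the signal, window width, and appliance model are illustrative assumptions, not from the abstract:

```python
import numpy as np

def sliding_windows(mains, width):
    """Split the mains signal into overlapping windows.

    In a seq2point-style NILM setup, each window of aggregate power is
    mapped (by a neural network) to the target appliance's power at the
    window midpoint; here we only build the input windows.
    """
    return np.stack([mains[i:i + width]
                     for i in range(len(mains) - width + 1)])

# Toy mains signal: constant base load plus a square-wave "appliance"
# that switches on and off every 40 time steps, plus measurement noise.
t = np.arange(200)
appliance = 50.0 * ((t // 40) % 2)
mains = 100.0 + appliance + np.random.default_rng(1).normal(0.0, 1.0, 200)

X = sliding_windows(mains, width=21)   # shape: (n_windows, width)
```

A real model (e.g. a CNN) would then regress each row of `X` onto the appliance reading at its centre.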


While there is now a significant literature on sparse inverse covariance estimation, that literature has, with only a couple of exceptions, dealt only with univariate (or scalar) networks, where each node carries a univariate signal. However, in many, perhaps most, applications each node may carry multivariate signals representing multi-attribute data, possibly of different dimensions. Modelling such multivariate (or vector) networks requires fitting block-sparse inverse covariance matrices. Here we achieve maximal block sparsity by maximizing a block-l0-sparse penalized likelihood.
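The block-l0 penalty counts nonzero off-diagonal blocks of the precision matrix: zeroing block (i, j) encodes conditional independence between the multivariate signals at nodes i and j. A minimal sketch of that count, assuming a node-wise partition of the matrix (the function name and tolerance are illustrative, not the paper's notation):

```python
import numpy as np

def block_l0(theta, block_sizes, tol=1e-10):
    """Block-l0 'norm': number of nonzero off-diagonal blocks of a
    precision matrix theta partitioned by per-node signal dimensions.

    This is the quantity a block-l0-sparse penalized likelihood
    penalizes: each surviving off-diagonal block is an edge between
    two (possibly different-dimensional) vector nodes.
    """
    offsets = np.concatenate([[0], np.cumsum(block_sizes)])
    count = 0
    for i in range(len(block_sizes)):
        for j in range(len(block_sizes)):
            if i == j:
                continue  # diagonal blocks are never penalized
            blk = theta[offsets[i]:offsets[i + 1],
                        offsets[j]:offsets[j + 1]]
            if np.abs(blk).max() > tol:
                count += 1
    return count

# Two nodes of dimensions 2 and 3; only the diagonal blocks are
# nonzero, so the nodes are conditionally independent (no edge).
theta = np.block([[np.eye(2), np.zeros((2, 3))],
                  [np.zeros((3, 2)), np.eye(3)]])
```

Maximal block sparsity then means driving this count as low as the likelihood allows.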


Time-series clustering involves grouping homogeneous time series together based on certain similarity measures. The mixture AR model (MxAR) has already been developed for time-series clustering, as has an associated EM algorithm. However, this EM clustering algorithm fails to perform satisfactorily in large-scale applications due to its high computational complexity. This paper proposes a new algorithm, k-ARs, which is a limiting version of the existing EM algorithm.
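A "limiting version" of EM replaces soft posterior weights with hard assignments, in the same way k-means is the limiting case of a Gaussian mixture EM. The sketch below shows that k-means-like alternation for AR models (fit one AR(p) per cluster by least squares, then reassign each series to the cluster whose model gives it the smallest residual); it is an assumed reading of the abstract, not the paper's exact algorithm, and the deterministic initialization is an illustrative choice:

```python
import numpy as np

def ar_design(x, p):
    """Lagged design matrix X and targets y for an AR(p) fit of x."""
    n = len(x)
    X = np.column_stack([x[p - k: n - k] for k in range(1, p + 1)])
    return X, x[p:]

def k_ars(series, k, p, iters=20):
    """Hard-assignment ('limiting EM') clustering of time series.

    Alternates: (1) fit one AR(p) model per cluster by least squares on
    the pooled data of its members; (2) reassign each series to the
    cluster whose AR model yields the smallest residual sum of squares.
    """
    labels = np.arange(len(series)) % k   # simple deterministic init
    for _ in range(iters):
        coefs = []
        for c in range(k):
            Xs, ys = [], []
            for x, lab in zip(series, labels):
                if lab == c:
                    X, y = ar_design(x, p)
                    Xs.append(X)
                    ys.append(y)
            if Xs:
                A, b = np.vstack(Xs), np.concatenate(ys)
                coefs.append(np.linalg.lstsq(A, b, rcond=None)[0])
            else:
                coefs.append(np.zeros(p))  # empty cluster: zero model
        new = np.array([
            int(np.argmin([np.sum((y - X @ a) ** 2) for a in coefs]))
            for X, y in (ar_design(x, p) for x in series)
        ])
        if np.array_equal(new, labels):
            break
        labels = new
    return labels

def simulate_ar1(phi, n, rng):
    """Generate an AR(1) series with coefficient phi and unit noise."""
    x = np.zeros(n)
    for t in range(1, n):
        x[t] = phi * x[t - 1] + rng.normal()
    return x

# Two well-separated AR(1) populations (phi = 0.9 and phi = -0.9).
rng = np.random.default_rng(0)
series = ([simulate_ar1(0.9, 300, rng) for _ in range(3)]
          + [simulate_ar1(-0.9, 300, rng) for _ in range(3)])
labels = k_ars(series, k=2, p=1)
```

Each iteration costs only least-squares fits and residual evaluations, which is the source of the speedup over the soft-assignment EM.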