In Financial Signal Processing, multiple time series such as financial indicators, stock prices and exchange rates are strongly coupled due to their dependence on the latent state of the market, and therefore must be analysed jointly. We focus on learning the relationships among financial time series by modelling them through a multi-output Gaussian process (MOGP) with expressive covariance functions. Learning these market dependencies among financial series is crucial for the imputation and prediction of financial observations.
EXTENDED VARIATIONAL INFERENCE FOR PROPAGATING UNCERTAINTY IN CONVOLUTIONAL NEURAL NETWORKS
Model confidence or uncertainty is critical in autonomous systems, as it ties directly to the safety and trustworthiness of the system. Quantifying the uncertainty in the output decisions of deep neural networks (DNNs) is a challenging problem. The Bayesian framework enables the estimation of predictive uncertainty by introducing probability distributions over the (unknown) network weights; however, propagating these high-dimensional distributions through the layers of the network is itself a challenging task.
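One core ingredient of such schemes, propagating a mean/variance pair through a single linear layer under an independence assumption, can be sketched as follows. This is an illustrative simplification (one layer, factorized Gaussians, independent weights and inputs), not the paper's full extended variational inference method; the Monte-Carlo check at the end only validates the closed-form moments.

```python
import numpy as np

def propagate_linear(mu_x, var_x, mu_W, var_W):
    """Propagate input mean/variance through y = W x with random W,
    assuming W and x are independent with independent entries."""
    mu_y = mu_W @ mu_x
    # Var[sum_j W_ij x_j] = sum_j (E[W]^2 Var[x] + Var[W] E[x]^2 + Var[W] Var[x])
    var_y = (mu_W ** 2) @ var_x + var_W @ (mu_x ** 2) + var_W @ var_x
    return mu_y, var_y

rng = np.random.default_rng(1)
mu_x = np.array([1.0, -2.0])
var_x = np.array([0.1, 0.2])
mu_W = np.array([[0.5, 1.0], [-1.0, 0.3]])
var_W = 0.01 * np.ones((2, 2))

mu_y, var_y = propagate_linear(mu_x, var_x, mu_W, var_W)

# Monte-Carlo check of the closed-form output moments.
W = mu_W + np.sqrt(var_W) * rng.standard_normal((200_000, 2, 2))
x = mu_x + np.sqrt(var_x) * rng.standard_normal((200_000, 2))
y = np.einsum('nij,nj->ni', W, x)
```

Stacking such moment maps layer by layer (with approximations for the nonlinearities) is what makes analytic uncertainty propagation through a DNN tractable, avoiding the cost of sampling-based Bayesian inference at test time.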
Minimax Active Learning via Minimal Model Capacity
Active learning is a form of machine learning which combines supervised learning and feedback to minimize the training set size, subject to a low generalization error. Since direct optimization of the generalization error is difficult, many heuristics have been developed which lack a firm theoretical foundation. In this paper, a new information-theoretic criterion is proposed, based on a minimax log-loss regret formulation of the active learning problem. In the first part of this paper, a Redundancy-Capacity theorem for active learning is derived along with an optimal learner.
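The pool-based setting the abstract refers to can be sketched with a generic query loop. Note this uses plain predictive-entropy (expected log-loss) sampling as a stand-in selection rule, not the paper's minimax log-loss regret criterion; the data, model, and budget are all illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy pool: 1-D data with logistic labels; the learner queries labels one at a time.
X = np.sort(rng.uniform(-3, 3, 200))
y = (rng.uniform(size=200) < 1 / (1 + np.exp(-3 * X))).astype(float)

def fit_logistic(Xl, yl, steps=500, lr=0.5):
    """Fit w, b for p(y=1|x) = sigmoid(w x + b) by gradient descent on log-loss."""
    w, b = 0.0, 0.0
    for _ in range(steps):
        p = 1 / (1 + np.exp(-(w * Xl + b)))
        w -= lr * np.mean((p - yl) * Xl)
        b -= lr * np.mean(p - yl)
    return w, b

labeled = list(rng.choice(200, 5, replace=False))   # small seed set
for _ in range(15):                                  # query budget of 15 labels
    w, b = fit_logistic(X[labeled], y[labeled])
    p = 1 / (1 + np.exp(-(w * X + b)))
    # Predictive entropy = the model's expected log-loss on each candidate.
    ent = -(p * np.log(p + 1e-12) + (1 - p) * np.log(1 - p + 1e-12))
    ent[labeled] = -np.inf                           # never re-query a labeled point
    labeled.append(int(np.argmax(ent)))

w, b = fit_logistic(X[labeled], y[labeled])
acc = np.mean(((1 / (1 + np.exp(-(w * X + b)))) > 0.5) == (y > 0.5))
```

The loop illustrates the feedback structure the abstract describes: each queried label changes the model, which changes which point is most informative next, so the training set stays far smaller than the pool.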
Regularized state estimation and parameter learning via augmented Lagrangian Kalman smoother method
In this article, we address the problem of estimating the state and learning the parameters of a linear dynamic system under generalized $L_1$-regularization. Assuming a sparsity prior on the state, the joint state-estimation and parameter-learning problem is cast as an unconstrained optimization problem. However, when the dimensionality of the state or the parameters is large, the memory and computational requirements of the learning algorithms generally become prohibitive.
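The interplay between Kalman-type recursions and an $L_1$-sparsity prior can be illustrated with a deliberately simplified scheme: a standard Kalman filter whose state update is followed by a soft-thresholding (proximal) step. This is only a sketch of the idea; the paper's augmented-Lagrangian smoother solves the joint problem properly rather than thresholding heuristically.

```python
import numpy as np

def soft_threshold(x, lam):
    """Proximal operator of lam * ||x||_1."""
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

rng = np.random.default_rng(3)
n, T = 5, 100
A = 0.95 * np.eye(n)                    # state transition
H = np.eye(n)                           # observation matrix
Q, R = 0.01 * np.eye(n), 0.1 * np.eye(n)

# Sparse ground-truth state: only the first component is active.
x_true = np.zeros((T, n))
x_true[:, 0] = np.sin(np.linspace(0, 6, T))
ys = x_true + np.sqrt(0.1) * rng.standard_normal((T, n))

m, P = np.zeros(n), np.eye(n)
est = []
for yt in ys:
    # Predict.
    m, P = A @ m, A @ P @ A.T + Q
    # Update with the standard Kalman gain.
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    m = m + K @ (yt - H @ m)
    P = (np.eye(n) - K @ H) @ P
    # L1 proximal step promotes exactly-sparse state estimates.
    m = soft_threshold(m, 0.05)
    est.append(m.copy())
est = np.array(est)

rmse = np.sqrt(np.mean((est - x_true) ** 2))
```

The thresholding step is what a plain Kalman smoother lacks: it drives the inactive components to exactly zero instead of leaving small noisy values, which is the behaviour the sparsity prior encodes.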
Sparse Bayesian Learning for Robust PCA
Adversarial variational Bayes methods for Tweedie compound Poisson mixed models
The Tweedie compound Poisson-Gamma model is routinely used for modeling non-negative continuous data with a discrete probability mass at zero. Mixed models with random effects account for the covariance structure induced by the grouping hierarchy in the data. An important application of Tweedie mixed models is the pricing of insurance policies, e.g. car insurance. However, the intractable likelihood function, the unknown variance function, and the hierarchical structure of the mixed effects have presented considerable challenges for drawing inference on Tweedie models.
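While the Tweedie likelihood is intractable, the compound Poisson-Gamma construction itself is easy to simulate, which clarifies where the point mass at zero comes from. A minimal numpy sketch (parameter names illustrative):

```python
import numpy as np

def sample_tweedie_cpg(lam, alpha, beta, size, rng):
    """Draw from a compound Poisson-Gamma: Y = sum_{i=1}^{N} G_i with
    N ~ Poisson(lam) and G_i ~ Gamma(shape=alpha, scale=beta).
    N = 0 gives Y exactly 0, producing the discrete mass at zero."""
    N = rng.poisson(lam, size)
    y = np.zeros(size)
    pos = N > 0
    # A sum of N iid Gamma(alpha, beta) variables is Gamma(N * alpha, beta).
    y[pos] = rng.gamma(N[pos] * alpha, beta)
    return y

rng = np.random.default_rng(4)
y = sample_tweedie_cpg(lam=1.2, alpha=2.0, beta=0.5, size=100_000, rng=rng)

prob_zero = np.mean(y == 0)   # matches exp(-lam) in expectation
mean_y = y.mean()             # matches lam * alpha * beta in expectation
```

In the insurance-pricing setting, `N` plays the role of the claim count and each `G_i` a claim severity, so `P(Y = 0) = exp(-lam)` is the probability of a claim-free policy.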
UNMIXING DYNAMIC PET IMAGES: COMBINING SPATIAL HETEROGENEITY AND NON-GAUSSIAN NOISE
Missing Data In Traffic Estimation: A Variational Autoencoder Imputation Method
Road traffic forecasting systems operate in scenarios where sensor or system failures occur. In those scenarios, missing values are known to degrade estimation accuracy, although this effect is often underestimated in current deep neural network approaches. Our assumption is that traffic data can be generated from a latent space. We therefore propose an online unsupervised data-imputation method based on learning the data distribution with a variational autoencoder (VAE).
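The imputation step can be sketched independently of the network details: with a trained encoder/decoder, missing entries are initialized and then refined by repeated encode-decode passes while the observed entries stay clamped. Below, a linear (PCA-based) autoencoder stands in for the trained VAE, so this is a hedged illustration of the iterative scheme, not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy "traffic" data living near a 2-D latent space (rank-2 structure + noise).
Z = rng.standard_normal((500, 2))
W = rng.standard_normal((2, 10))
X = Z @ W + 0.05 * rng.standard_normal((500, 10))

# Stand-in for a trained VAE: linear encoder/decoder from PCA.
Xc = X - X.mean(0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
V = Vt[:2].T                                   # top-2 principal directions
encode = lambda x: (x - X.mean(0)) @ V         # "inference network"
decode = lambda z: z @ V.T + X.mean(0)         # "generative network"

# Impute a new sample with missing sensors by iterative encode-decode.
x_true = rng.standard_normal(2) @ W
mask = np.array([True] * 7 + [False] * 3)      # last 3 sensors failed
x = np.where(mask, x_true, 0.0)                # initialize missing values at 0
for _ in range(20):
    x_hat = decode(encode(x))
    x = np.where(mask, x_true, x_hat)          # keep observed entries clamped

err = np.sqrt(np.mean((x[~mask] - x_true[~mask]) ** 2))
```

Because the data lie near a low-dimensional manifold, a few observed sensors pin down the latent code, and the decoder then fills in the failed sensors consistently with that code, which is the premise of the VAE-based approach.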
ENLLVM: Ensemble based Nonlinear Bayesian Filtering using Linear Latent Variable Models
Real-time nonlinear Bayesian filtering algorithms are overwhelmed by the volume and velocity of data and by the increasing complexity of computational models. In this paper, we propose a novel ensemble-based nonlinear Bayesian filtering approach that requires only a small number of simulations and can be applied to high-dimensional systems in the presence of intractable likelihood functions.
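A minimal ensemble Kalman filter conveys the spirit of such simulation-driven filters: every ensemble member is pushed through the (possibly black-box) model, and a linear update built from ensemble statistics replaces likelihood evaluations. This is a standard EnKF sketch used as a simplified stand-in, not the paper's ENLLVM scheme.

```python
import numpy as np

rng = np.random.default_rng(6)
n_ens, T = 100, 60

# Nonlinear state-space model: x_t = sin(x_{t-1}) + 0.5 + noise, y_t = x_t + noise.
def step(x):
    return np.sin(x) + 0.5

x_true, xs, ys = 0.0, [], []
for _ in range(T):
    x_true = step(x_true) + 0.05 * rng.standard_normal()
    xs.append(x_true)
    ys.append(x_true + 0.3 * rng.standard_normal())

ens = rng.standard_normal(n_ens)          # initial ensemble
est = []
for yt in ys:
    # Forecast: run every member through the simulator (no gradients needed).
    ens = step(ens) + 0.05 * rng.standard_normal(n_ens)
    # Analysis: Kalman-type update from ensemble covariances, with
    # perturbed observations so the posterior spread stays calibrated.
    y_ens = ens + 0.3 * rng.standard_normal(n_ens)
    C = np.cov(ens, y_ens)
    K = C[0, 1] / C[1, 1]
    ens = ens + K * (yt - y_ens)
    est.append(ens.mean())
est = np.array(est)

rmse = np.sqrt(np.mean((est - np.array(xs)) ** 2))
```

Only `n_ens` forward simulations per step are needed, and the likelihood enters solely through simulated observations, which is what makes this family of filters attractive for intractable-likelihood models.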
Variational and Hierarchical Recurrent Autoencoder
Despite great success in learning representations for image data, it remains challenging to learn stochastic latent features from natural language with variational inference. The difficulty in stochastic sequential learning stems from posterior collapse: an autoregressive decoder tends to be so strong that little latent information is learned during optimization. To compensate for this weakness, a sophisticated latent structure is required to ensure good convergence, so that random features are sufficiently captured for sequential decoding.
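One common remedy for posterior collapse, annealing the weight on the KL term so the decoder cannot ignore the latent code early in training, can be sketched as follows. KL annealing is a standard technique shown here only to illustrate the problem the abstract describes, not the hierarchical structure this paper proposes.

```python
import numpy as np

def gaussian_kl(mu, logvar):
    """KL( N(mu, exp(logvar)) || N(0, I) ), summed over latent dimensions."""
    return 0.5 * np.sum(np.exp(logvar) + mu ** 2 - 1.0 - logvar)

def kl_weight(step, warmup=1000):
    """Linear annealing: beta ramps from 0 to 1 over the warmup steps."""
    return min(1.0, step / warmup)

# Annealed ELBO penalty for an example posterior at two training stages.
mu, logvar = np.array([0.5, -0.3]), np.array([-1.0, -0.5])
kl = gaussian_kl(mu, logvar)
early = kl_weight(100) * kl      # KL barely penalized early in training
late = kl_weight(2000) * kl      # full KL weight after warmup
```

With a small early penalty the encoder is free to move the posterior away from the prior and store information in the latent code; only later does the full KL pressure return, which empirically reduces collapse with strong autoregressive decoders.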