
Random distortion testing (RDT) addresses the problem of testing whether or not a random signal deviates by more than a specified tolerance from a fixed value. The test is non-parametric in the sense that the distribution of the signal under each hypothesis is assumed to be unknown. The signal is observed in independent and identically distributed (i.i.d.) additive noise. The need to control the probabilities of false alarm and missed detection while reducing the number of samples required to make a decision leads to the SeqRDT approach.
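The abstract does not give the SeqRDT decision thresholds, so the following is only a rough sketch of the underlying idea: keep sampling until a running estimate clears a margin that shrinks as evidence accumulates, and truncate at a maximum sample count to force a decision. The model (scalar signal in Gaussian noise), the margin constant c, and N_max are all illustrative assumptions, not the paper's derived quantities.

```python
import numpy as np

rng = np.random.default_rng(3)

tau, sigma = 1.0, 1.0   # tolerance and noise standard deviation
c = 2.0 * sigma         # hypothetical margin constant (the paper derives its own)
N_max = 400             # truncation: force a decision after N_max samples

def seq_test_sketch(xi):
    """Sequentially test H1: |xi| > tau vs H0: |xi| <= tau from noisy samples."""
    s = 0.0
    for n in range(1, N_max + 1):
        s += xi + sigma * rng.standard_normal()   # new noisy observation of xi
        est = abs(s) / n                          # running estimate of |mean|
        margin = c / np.sqrt(n)                   # shrinks as evidence accrues
        if est > tau + margin:
            return 1, n                           # confident: deviation exceeds tau
        if est < tau - margin:
            return 0, n                           # confident: within tolerance
    return int(est > tau), N_max                  # truncated: hard decision

d0, n0 = seq_test_sketch(0.0)   # true value well inside the tolerance
d1, n1 = seq_test_sketch(3.0)   # true value well outside the tolerance
print(d0, n0, d1, n1)
```

Early stopping is what saves samples here: an easy instance (signal far from the tolerance boundary) crosses a threshold after only a few observations, while hard instances near the boundary run longer, up to the truncation point.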


The emerging paradigm of Human-Machine Inference Networks (HuMaINs) combines the complementary cognitive strengths of humans and machines in an intelligent manner to tackle various inference tasks, achieving higher performance than either humans or machines alone. While inference performance optimization techniques for human-only or sensor-only networks are quite mature, HuMaINs require novel signal processing and machine learning solutions.


The Bayesian information criterion is generic in the sense that it does not include information about the specific model selection problem at hand. Nevertheless, it has been widely used to estimate the number of data clusters in cluster analysis. We have recently derived a Bayesian cluster enumeration criterion from first principles that maximizes the posterior probability of the candidate models given the observations. However, in the finite-sample regime, the asymptotic assumptions that the criterion makes to arrive at a computationally simple penalty term are violated.
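As a hedged illustration of BIC-based cluster enumeration (the generic baseline the abstract refers to, not the paper's refined criterion): fit mixture models of increasing order and pick the order minimizing BIC. The spherical-Gaussian EM, the synthetic data, and the candidate range below are assumptions made for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)
# synthetic data: three well-separated 2-D Gaussian clusters
X = np.vstack([rng.normal(c, 0.4, size=(100, 2))
               for c in ([0, 0], [5, 0], [0, 5])])
n, d = X.shape

def fit_gmm(X, k, iters=200, restarts=5):
    """EM for a spherical Gaussian mixture; returns the best log-likelihood."""
    best = -np.inf
    for _ in range(restarts):
        mu = X[rng.choice(n, k, replace=False)]
        var = np.full(k, X.var())
        pi = np.full(k, 1.0 / k)
        for _ in range(iters):
            # E-step: log responsibilities under spherical components
            logp = (np.log(pi) - 0.5 * d * np.log(2 * np.pi * var)
                    - ((X[:, None] - mu) ** 2).sum(-1) / (2 * var))
            m = logp.max(1, keepdims=True)
            lse = m[:, 0] + np.log(np.exp(logp - m).sum(1))
            r = np.exp(logp - lse[:, None])
            # M-step: update weights, means, and per-component variances
            nk = r.sum(0) + 1e-12
            pi = nk / n
            mu = (r.T @ X) / nk[:, None]
            var = np.maximum((r * ((X[:, None] - mu) ** 2).sum(-1)).sum(0)
                             / (d * nk), 1e-6)
        best = max(best, lse.sum())
    return best

def bic(loglik, k):
    n_params = (k - 1) + k * d + k   # mixture weights, means, variances
    return -2 * loglik + n_params * np.log(n)

scores = {k: bic(fit_gmm(X, k), k) for k in range(1, 6)}
best_k = min(scores, key=scores.get)
print(best_k)
```

The generic `n_params * log(n)` penalty is exactly the term the abstract says rests on asymptotic assumptions; in small samples it can over- or under-penalize, which motivates the finite-sample refinement the paper develops.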



Particle filters have become a standard tool for state estimation in nonlinear systems. However, their performance usually deteriorates if the dimension of the state space is high or the measurements are highly informative. A major challenge is to construct a proposal density that is well matched to the posterior distribution. Particle flow methods are a promising option for addressing this task. In this paper, we develop a particle flow particle filter algorithm to address the case where both the process noise and the measurement noise are distributed as mixtures of Gaussians.
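The paper's particle flow construction is not reproduced here; for orientation, a plain bootstrap particle filter — the baseline whose proposal/posterior mismatch motivates particle flow — can be sketched as follows. The scalar random-walk model and all noise parameters are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# 1-D random-walk state with noisy observations:
#   x_t = x_{t-1} + w_t,  y_t = x_t + v_t
T, N = 50, 500          # time steps, number of particles
q, r = 0.1, 0.5         # process / measurement noise std
x_true = np.cumsum(rng.normal(0, q, T))
y = x_true + rng.normal(0, r, T)

particles = rng.normal(0, 1, N)   # prior draw for the initial state
est = np.empty(T)
for t in range(T):
    particles = particles + rng.normal(0, q, N)       # propagate through dynamics
    w = np.exp(-0.5 * ((y[t] - particles) / r) ** 2)  # weight by likelihood
    w /= w.sum()
    est[t] = np.dot(w, particles)                     # posterior-mean estimate
    idx = rng.choice(N, size=N, p=w)                  # multinomial resampling
    particles = particles[idx]

rmse = np.sqrt(np.mean((est - x_true) ** 2))
print(rmse)
```

Because the bootstrap proposal is just the state dynamics, a highly informative (small-r) likelihood leaves most particles with negligible weight — precisely the degeneracy that particle flow methods mitigate by transporting particles toward the posterior before weighting.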


We introduce a novel type of representation learning to obtain speaker-invariant features for zero-resource languages. Speaker adaptation is an important technique for building a robust acoustic model. For a zero-resource language, however, conventional model-dependent speaker adaptation methods such as constrained maximum likelihood linear regression are insufficient because the acoustic model of the target language is not accessible. Therefore, we introduce a model-independent feature extraction method based on a neural network.


This paper addresses structured covariance matrix estimation under the $t$-distribution. Covariance matrices frequently exhibit a particular structure due to the application at hand, and taking this structure into account usually improves estimation accuracy. In the framework of robust estimation, the $t$-distribution is particularly well suited to describing heavy-tailed observations. In this context, we propose an efficient estimation procedure for covariance matrices with convex structure under the $t$-distribution.
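The abstract does not detail the structured (convex-constrained) procedure, so the sketch below shows only the classical unstructured building block: the maximum-likelihood fixed-point iteration for the scatter matrix of a centered multivariate $t$-distribution with known degrees of freedom. The dimensions, degrees of freedom, and Sigma_true are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
d, n, nu = 3, 2000, 4.0

# simulate centered multivariate t samples: Gaussian scaled by sqrt(nu / chi2_nu)
Sigma_true = np.array([[2.0, 0.5, 0.0],
                       [0.5, 1.0, 0.3],
                       [0.0, 0.3, 0.5]])
L = np.linalg.cholesky(Sigma_true)
g = rng.standard_normal((n, d)) @ L.T
u = rng.chisquare(nu, n) / nu
X = g / np.sqrt(u)[:, None]

# ML fixed-point iteration for the scatter matrix (known dof nu):
#   Sigma <- (1/n) sum_i w_i x_i x_i^T,  w_i = (nu + d) / (nu + x_i^T Sigma^-1 x_i)
Sigma = np.eye(d)
for _ in range(100):
    q = np.einsum('ij,jk,ik->i', X, np.linalg.inv(Sigma), X)  # squared Mahalanobis
    w = (nu + d) / (nu + q)                                   # down-weights outliers
    Sigma = (w[:, None, None] * np.einsum('ni,nj->nij', X, X)).mean(0)

err = np.linalg.norm(Sigma - Sigma_true) / np.linalg.norm(Sigma_true)
print(err)
```

The adaptive weights `w` are what make the estimator robust: heavy-tailed samples with large Mahalanobis distance contribute less than they would to the sample covariance. A structured variant would additionally project or constrain each iterate onto the assumed convex set, which is where the paper's contribution lies.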

