
To analyze non-stationary signals with conventional signal-processing tools, they are assumed to be stationary (or at least wide-sense stationary) over short intervals. While this approach makes them tractable, it disregards the temporal evolution of their statistics. It is therefore desirable to use a representation that registers and characterizes the temporal changes in the frequency content of such signals, as these changes may occur in single or multiple periodic ways.
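The short-interval stationarity assumption above is exactly what the short-time Fourier transform (STFT) exploits: the signal is treated as approximately stationary within each window, and the spectrum is tracked from window to window. A minimal illustrative sketch (the chirp signal and all parameters here are assumptions, not from the text):

```python
import numpy as np
from scipy.signal import stft

# A linear chirp is non-stationary: its frequency rises over time,
# so a single Fourier transform over the whole record hides that evolution.
fs = 1000                       # sample rate (Hz)
t = np.arange(0, 2, 1 / fs)     # 2 seconds of samples
x = np.cos(2 * np.pi * (50 * t + 50 * t**2))  # frequency sweeps 50 -> 250 Hz

# STFT: assume stationarity inside each 256-sample window, then
# observe how the spectrum changes across windows.
f, times, Z = stft(x, fs=fs, nperseg=256)
spec = np.abs(Z)

# Dominant frequency in each window; for the chirp it increases with time.
peak_freqs = f[np.argmax(spec, axis=0)]
```

Tracking `peak_freqs` over `times` recovers the temporal evolution of the frequency content that a global spectrum would average away.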


Compressive Sensing (CS) is a new paradigm for the efficient acquisition of signals that have a sparse representation in a certain domain. Traditionally, CS has provided numerous methods for recovering signals that are sparse over an orthonormal basis. However, modern applications have sparked the emergence of related methods for signals that are sparse not in an orthonormal basis but in some arbitrary, perhaps highly overcomplete, dictionary, particularly because such dictionaries can yield a richer variety of sparse representations.
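As a concrete illustration of recovery with an overcomplete dictionary, the sketch below applies iterative soft-thresholding (ISTA) to the synthesis formulation: minimize 0.5*||y - A D c||^2 + lam*||c||_1 over coefficients c. The random dictionary, measurement matrix, and all dimensions are assumptions chosen for the example, not taken from the text:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: a signal sparse in an overcomplete dictionary D
# (more atoms than signal dimensions), observed through a random
# measurement matrix A, as in compressive sensing.
n, d, m = 64, 128, 40                           # signal dim, atoms, measurements
D = rng.standard_normal((n, d)) / np.sqrt(n)    # overcomplete dictionary
coef = np.zeros(d)
coef[rng.choice(d, 5, replace=False)] = rng.standard_normal(5)
x = D @ coef                                    # the signal itself need not be sparse
A = rng.standard_normal((m, n)) / np.sqrt(m)
y = A @ x                                       # compressed measurements

# ISTA for: minimize 0.5*||y - M c||^2 + lam*||c||_1  with  M = A D
M = A @ D
lam = 0.01
step = 1.0 / np.linalg.norm(M, 2) ** 2          # 1/L, L = spectral norm squared
c = np.zeros(d)
for _ in range(2000):
    g = M.T @ (M @ c - y)                       # gradient of the quadratic term
    c = c - step * g
    c = np.sign(c) * np.maximum(np.abs(c) - step * lam, 0)  # soft-threshold

x_hat = D @ c                                   # reconstructed signal
```

Note that the signal x is recovered from only m = 40 measurements because its coefficient vector c is sparse in the dictionary D, even though x itself is dense.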


Crowdsourcing platforms often want to incentivize workers to finish tasks with high quality and truthfully report their solutions. A high-quality solution requires a worker to exert effort; a platform can motivate such effort exertion and truthful reporting by providing a reward.


Real-world recognition and classification tasks in computer vision rarely occur in controlled environments and often involve an open set of classes. Previous work on real-world recognition has been knowledge- and labor-intensive, since achieving good performance requires expertise across many task domains. Automated Machine Learning (AutoML) approaches offer an easier way to apply advanced machine-learning techniques, reducing the demand for experienced human experts and improving classification performance on closed sets.


Stochastic mirror descent (SMD) algorithms have recently garnered a great deal of attention in optimization, signal processing, and machine learning. They are similar to stochastic gradient descent (SGD), in that they perform updates along the negative gradient of an instantaneous (or stochastically chosen) loss function. However, rather than update the parameter (or weight) vector directly, they update it in a "mirrored" domain whose transformation is given by the gradient of a strictly convex differentiable potential function.
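The mirrored update described above can be sketched concretely. With the negative-entropy potential psi(w) = sum_i w_i log w_i on the probability simplex, grad psi(w) = 1 + log w defines the mirror map, and the SMD step becomes the exponentiated-gradient update. The loss, noise level, and target below are illustrative assumptions, not from the text:

```python
import numpy as np

rng = np.random.default_rng(1)

d, steps, eta = 5, 5000, 0.05
target = np.array([0.1, 0.4, 0.2, 0.2, 0.1])    # assumed ground-truth point
w = np.full(d, 1.0 / d)                         # start at the uniform distribution

for _ in range(steps):
    # Noisy gradient of 0.5*||w - target||^2, standing in for the
    # instantaneous (stochastically chosen) loss.
    g = (w - target) + 0.01 * rng.standard_normal(d)
    # Step in the mirrored domain given by grad psi(w) = 1 + log w
    # (the constant 1 cancels under normalization):
    z = np.log(w) - eta * g
    w = np.exp(z)
    w /= w.sum()                                # map back onto the simplex
```

Because the update multiplies each coordinate by a positive factor and renormalizes, w remains a valid probability vector at every step; choosing psi(w) = 0.5*||w||^2 instead would make grad psi the identity and recover plain SGD.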