
We propose a novel problem formulation for sparsity-aware adaptive filtering based on the nonconvex minimax concave (MC) penalty, aiming at a sparse solution with small estimation bias. We present two algorithms: the first uses a single firm-shrinkage operation, while the second uses double soft-shrinkage operations. The twin soft-shrinkage operations compensate for each other, promoting sparsity while avoiding a serious increase in bias. The overall cost function is convex under certain parameter settings, whereas the instantaneous cost function is always nonconvex.
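The abstract does not give the operators explicitly, but the standard soft- and firm-shrinkage operators it refers to can be sketched as follows (the firm-shrinkage form is the classical Gao–Bruce operator, which coincides with the proximal operator of the MC penalty; the threshold names `lam` and `mu` are illustrative, not taken from the paper):

```python
import numpy as np

def soft(x, lam):
    """Soft-shrinkage: sign(x) * max(|x| - lam, 0)."""
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def firm(x, lam, mu):
    """Firm-shrinkage (Gao-Bruce), mu > lam:
    zero below lam, identity beyond mu, linear in between."""
    x = np.asarray(x, dtype=float)
    return np.where(
        np.abs(x) > mu,
        x,  # large coefficients pass unchanged -> no bias there
        np.sign(x) * mu * np.maximum(np.abs(x) - lam, 0.0) / (mu - lam),
    )
```

Unlike soft-shrinkage, which shifts every surviving coefficient toward zero by `lam`, firm-shrinkage leaves coefficients larger than `mu` untouched, which is the bias-reduction property the MC penalty exploits.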


Altitude estimation is important for successful control and navigation of unmanned aerial vehicles (UAVs). Indoors, UAVs have no access to GPS signals and must rely solely on on-board sensors for reliable altitude estimation. Unfortunately, most existing navigation schemes are not robust to abnormal obstructions above and below the UAV.


There is growing research interest in new techniques to detect and exploit sparsity in signals and systems. Recently, the idea of hidden sparsity has been proposed: in many cases sparsity is not explicit, and some tool is required to expose it. In this paper, we propose the Feature Affine Projection (F-AP) algorithm to reveal hidden sparsity in unknown systems. First, the hidden sparsity is revealed by means of a feature matrix; it is then exploited through a sparsity-promoting penalty function.
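The abstract does not specify the feature matrix, but the hidden-sparsity idea can be illustrated with the first-difference feature matrix commonly used in this literature for lowpass-like systems (a toy example, not the paper's construction): a flat impulse response is dense in the original domain, yet its feature-domain image is sparse.

```python
import numpy as np

# A "hidden sparse" system: a flat (lowpass-like) impulse response.
w = np.ones(8)                    # dense: every coefficient is nonzero
F = np.eye(8) - np.eye(8, k=1)    # illustrative feature matrix: first differences
fw = F @ w                        # feature-domain coefficients

dense_nonzeros = np.count_nonzero(w)    # all 8 taps are nonzero
sparse_nonzeros = np.count_nonzero(fw)  # only the boundary difference survives
```

A sparsity-promoting penalty applied to `fw` rather than `w` can then exploit this structure even though `w` itself is not sparse.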


The huge volume of data that are available today requires data-selective processing approaches that avoid the costs in computational complexity via appropriately treating the non-innovative data. In this paper, extensions of the well-known adaptive filtering LMS-Newton and LMS-Quasi-Newton algorithms are developed that enable data selection while also addressing the censorship of outliers that emerge due to high measurement errors. The proposed solutions allow the prescription of how often the acquired data are
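The abstract does not detail the selection rule, but the general data-selective idea with outlier censoring can be sketched with two error thresholds (a minimal illustration in plain LMS form; the thresholds `gamma_min`/`gamma_max` and the decision logic are assumptions, not the paper's algorithm):

```python
import numpy as np

def ds_lms_step(w, x, d, mu, gamma_min, gamma_max):
    """One data-selective LMS step (illustrative):
    skip non-innovative data (small error),
    censor probable outliers (very large error),
    otherwise update the coefficients."""
    e = d - w @ x
    if abs(e) <= gamma_min:
        return w, "skip"            # data carries little new information
    if abs(e) > gamma_max:
        return w, "censor"          # likely a high measurement error
    return w + mu * e * x, "update" # standard LMS update
```

Such a scheme reduces computational cost (most samples trigger no update) while preventing outliers from corrupting the coefficient vector.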


Stochastic mirror descent (SMD) algorithms have recently garnered a great deal of attention in optimization, signal processing, and machine learning. They are similar to stochastic gradient descent (SGD), in that they perform updates along the negative gradient of an instantaneous (or stochastically chosen) loss function. However, rather than update the parameter (or weight) vector directly, they update it in a "mirrored" domain whose transformation is given by the gradient of a strictly convex differentiable potential function.
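The mirrored update described above can be sketched for one concrete potential, the negative entropy on the probability simplex, which yields the exponentiated-gradient update (an illustrative choice; with the squared-norm potential, the mirror map is the identity and SMD reduces to SGD):

```python
import numpy as np

def smd_step(w, grad, eta):
    """One stochastic mirror descent step with negative-entropy potential
    psi(w) = sum_i w_i log w_i on the simplex.
    Mirror domain: grad psi(w) = 1 + log w, so the gradient step there
    becomes a multiplicative update after mapping back and renormalizing."""
    z = np.log(w) - eta * grad   # update in the mirrored domain
    w_new = np.exp(z)            # map back through the inverse mirror map
    return w_new / w_new.sum()   # stay on the probability simplex
```

Coordinates with larger instantaneous gradient are shrunk multiplicatively, while the iterate remains a valid probability vector at every step.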


A class of algorithms known as feature least-mean-square (FLMS) has been proposed recently to exploit hidden sparsity