Feature LMS Algorithms

Citation Author(s):
Paulo Sergio Ramirez Diniz, Hamed Yazdanpanah, Markus Vinicius Santos Lima
Submitted by:
Hamed Yazdanpanah
Last updated:
14 April 2018 - 11:53pm
Document Type:
Presentation Slides
Document Year:
2018
Event:
Presenters:
Paulo Sergio Ramirez Diniz
Paper Code:
3575
In recent years, there has been a growing effort in the learning algorithms area to propose new strategies to detect and exploit sparsity in the model parameters. In many situations, the sparsity is hidden in the relations among these coefficients, so that suitable tools are required to reveal the potential sparsity. This work proposes a set of LMS-type algorithms, collectively called Feature LMS (F-LMS) algorithms, that exploit hidden features of the unknown parameters in order to improve convergence speed and steady-state mean-squared error. The key idea is to apply linear transformations, by means of so-called feature matrices, to reveal the sparsity hidden in the coefficient vector, and then to apply a sparsity-promoting penalty function to exploit that sparsity. F-LMS algorithms for lowpass and highpass systems are also introduced, using simple feature matrices that require only trivial operations. Simulation results demonstrate that the proposed F-LMS algorithms bring several performance improvements whenever the hidden sparsity of the parameters is exposed.
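The sketch below illustrates the general idea described in the abstract: an LMS coefficient update augmented with a sparsity-promoting penalty applied to the transformed vector F w, where F is a feature matrix. The specific update rule, the choice of a first-order difference matrix for F (a simple option for lowpass systems, whose adjacent taps are nearly equal), and all parameter values are assumptions made for illustration, not the exact formulation from the slides.

```python
import numpy as np

def feature_lms(x, d, N=16, mu=0.01, alpha=1e-3):
    """LMS update augmented with a sparsity-promoting penalty on F @ w (illustrative sketch)."""
    # Feature matrix F: first-order differences of adjacent coefficients.
    # For a lowpass unknown system, neighboring taps are nearly equal,
    # so F @ w is (approximately) sparse.
    F = np.eye(N - 1, N) - np.eye(N - 1, N, k=1)
    w = np.zeros(N)
    err = []
    for k in range(N - 1, len(x)):
        xk = x[k - N + 1:k + 1][::-1]   # regressor, most recent sample first
        e = d[k] - w @ xk               # a priori output error
        # Standard LMS gradient step plus the (sub)gradient of the
        # l1 penalty alpha * ||F w||_1, i.e., alpha * F^T sign(F w).
        w = w + mu * e * xk - mu * alpha * (F.T @ np.sign(F @ w))
        err.append(e)
    return w, np.array(err)

# Example: identify a lowpass-like plant (nearly flat impulse response)
rng = np.random.default_rng(0)
N = 16
w_true = np.ones(N) + 0.05 * rng.standard_normal(N)
x = rng.standard_normal(5000)
d = np.convolve(x, w_true)[:len(x)] + 0.01 * rng.standard_normal(len(x))
w_hat, err = feature_lms(x, d, N=N)
print("final MSE (last 500 samples):", np.mean(err[-500:] ** 2))
```

For a highpass system one would instead penalize sums of adjacent coefficients (replacing the subtraction in F with addition), since neighboring taps then have similar magnitudes but alternating signs; either way the feature matrix involves only trivial operations, in line with the abstract.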
