
Onsager-corrected deep learning for sparse linear inverse problems

Citation Author(s): Mark Borgerding and Philip Schniter
Submitted by: Philip Schniter
Last updated: 6 December 2016 - 10:30am
Document Type: Presentation Slides
Document Year: 2016
Presenters: Mark Borgerding
Paper Code: 1388

Deep learning has gained great popularity due to its widespread success on many inference problems. We consider the application of deep learning to the sparse linear inverse problem encountered in compressive sensing, where one seeks to recover a sparse signal from a few noisy linear measurements. In this paper, we propose two novel neural-network architectures that decouple prediction errors across layers in the same way that the approximate message passing (AMP) algorithms decouple them across iterations: through Onsager correction. We show numerically that our "learned AMP" network significantly improves upon Gregor and LeCun's "learned ISTA" when both use soft-thresholding shrinkage. We then show that additional improvements result from jointly learning the shrinkage functions together with the linear transforms. Finally, we propose a network design inspired by an unfolding of the recently proposed "vector AMP" (VAMP) algorithm, and show that it outperforms all previously considered networks. Interestingly, the linear transforms and shrinkage functions prescribed by VAMP coincide with the values learned through backpropagation, yielding an intuitive explanation for the design of this deep network.
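For readers who want a concrete picture of the Onsager correction that these networks unfold, the NumPy sketch below runs AMP with soft-thresholding shrinkage on a synthetic compressive-sensing problem. This is an illustrative sketch only: the variable names, the threshold rule alpha * ||v|| / sqrt(M), and the problem sizes are assumptions for exposition, not code from the paper. The "learned AMP" network described in the abstract replaces the fixed A^T and alpha here with per-layer linear transforms and shrinkage parameters learned via backpropagation.

```python
# Illustrative sketch of soft-thresholding AMP (the iteration that "learned AMP" unfolds).
# Names, threshold rule, and problem sizes are assumptions, not the authors' code.
import numpy as np

def soft_threshold(r, lam):
    """Componentwise soft-thresholding shrinkage."""
    return np.sign(r) * np.maximum(np.abs(r) - lam, 0.0)

def amp_soft_threshold(y, A, alpha=1.4, num_iters=20):
    """Recover a sparse x from y = A x + noise via AMP with soft thresholding.

    The key difference from ISTA is the Onsager correction term b * v added to
    the residual, which decouples prediction errors across iterations.
    """
    M, N = A.shape
    x = np.zeros(N)          # current signal estimate
    v = np.copy(y)           # Onsager-corrected residual
    for _ in range(num_iters):
        lam = alpha * np.linalg.norm(v) / np.sqrt(M)   # threshold scaled to residual energy
        r = x + A.T @ v                                # input to the shrinkage (denoiser)
        x_new = soft_threshold(r, lam)
        b = np.count_nonzero(x_new) / M                # Onsager coefficient
        v = y - A @ x_new + b * v                      # residual with Onsager correction
        x = x_new
    return x

# Example: recover a 10-sparse signal from 100 noisy linear measurements.
rng = np.random.default_rng(0)
N, M, K = 400, 100, 10
A = rng.standard_normal((M, N)) / np.sqrt(M)
x_true = np.zeros(N)
x_true[rng.choice(N, K, replace=False)] = rng.standard_normal(K)
y = A @ x_true + 0.01 * rng.standard_normal(M)
x_hat = amp_soft_threshold(y, A)
print("NMSE (dB):", 10 * np.log10(np.sum((x_hat - x_true) ** 2) / np.sum(x_true ** 2)))
```

Dropping the `b * v` term recovers plain iterative soft thresholding (ISTA), which the abstract's comparison against "learned ISTA" builds on; keeping it is the Onsager correction that the proposed networks carry over, layer by layer.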
