
The Limitation and Practical Acceleration of Stochastic Gradient Algorithms in Inverse Problems

Abstract: 

In this work we investigate the practicality of stochastic gradient descent and its recently introduced variance-reduced variants for imaging inverse problems such as space-varying image deblurring. In the machine learning literature, these algorithms have been shown to have optimal complexity in theory and to deliver substantial empirical improvements over full-gradient methods. Surprisingly, in some tasks such as image deblurring, many of these methods fail to converge faster than the accelerated full-gradient method (FISTA), even in terms of epoch counts. We investigate this phenomenon and propose a theory-inspired mechanism for characterizing whether a given inverse problem with a known sampling pattern is better solved by a stochastic optimization technique. Furthermore, to overcome another key bottleneck of stochastic optimization, the heavy computation of proximal operators, while maintaining fast convergence, we propose an accelerated primal-dual SGD algorithm and demonstrate its effectiveness in image deblurring experiments.
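To make the epoch-count comparison in the abstract concrete, here is a minimal illustrative sketch (not the authors' algorithm) contrasting a full gradient step with an unbiased minibatch stochastic gradient step on a least-squares data-fidelity term, the prototypical smooth objective behind imaging inverse problems such as deblurring. All sizes, step sizes, and variable names below are illustrative assumptions; the paper's actual variance-reduced and accelerated primal-dual methods are not reproduced here.

```python
# Sketch: full-gradient vs. minibatch stochastic-gradient descent on
# f(x) = ||A x - b||^2 / (2 m), a toy stand-in for an imaging inverse problem.
import numpy as np

rng = np.random.default_rng(0)
m, n = 200, 50                      # measurements x unknowns (toy sizes)
A = rng.standard_normal((m, n))     # stand-in for a blur/measurement operator
x_true = rng.standard_normal(n)
b = A @ x_true                      # noiseless measurements

def f(x):
    return np.sum((A @ x - b) ** 2) / (2 * m)

def full_grad(x):
    # Full gradient: touches all m rows of A, i.e. one "epoch" per step.
    return A.T @ (A @ x - b) / m

def stoch_grad(x, batch=20):
    # Unbiased minibatch estimate: E[stoch_grad(x)] = full_grad(x),
    # at m/batch times lower per-step cost.
    idx = rng.choice(m, size=batch, replace=False)
    Ai, bi = A[idx], b[idx]
    return Ai.T @ (Ai @ x - bi) / batch

L = np.linalg.eigvalsh(A.T @ A).max() / m   # gradient Lipschitz constant
x_gd = x_sgd = np.zeros(n)
for _ in range(200):
    x_gd = x_gd - (1.0 / L) * full_grad(x_gd)       # deterministic step
    x_sgd = x_sgd - (0.3 / L) * stoch_grad(x_sgd)   # stochastic step

f0, f_gd, f_sgd = f(np.zeros(n)), f(x_gd), f(x_sgd)
```

The stochastic step is m/batch times cheaper per iteration, which is the source of SGD's usual advantage; the paper's point is that for some inverse problems (e.g. deblurring, where the operator's rows are highly coherent) this advantage can vanish when progress is measured per epoch rather than per step.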

https://ieeexplore.ieee.org/document/8683368


Paper Details

Authors:
Junqi Tang, Karen Egiazarian, Mike Davies
Submitted On:
14 May 2019 - 6:05pm
Type:
Presentation Slides
Presenter's Name:
Junqi Tang
Paper Code:
CI-L1.4
Document Year:
2019
Cite

Document Files

ICASSP_Junqi



[1] Junqi Tang, Karen Egiazarian, Mike Davies, "The Limitation and Practical Acceleration of Stochastic Gradient Algorithms in Inverse Problems", IEEE SigPort, 2019. [Online]. Available: http://sigport.org/4207. Accessed: Sep. 25, 2020.