# Sub-linear convergence of a stochastic proximal iteration method in Hilbert space

```bibtex
@article{Williamson2020SublinearCO,
  title   = {Sub-linear convergence of a stochastic proximal iteration method in Hilbert space},
  author  = {M{\aa}ns Williamson and Monika Eisenmann and Tony Stillfjord},
  journal = {ArXiv},
  year    = {2020},
  volume  = {abs/2010.12348}
}
```

We consider a stochastic version of the proximal point algorithm for optimization problems posed on a Hilbert space. A typical application of this is supervised learning. While the method is not new, it has not been extensively analyzed in this form. Indeed, most related results are confined to the finite-dimensional setting, where error bounds could depend on the dimension of the space. On the other hand, the few existing results in the infinite-dimensional setting only prove very weak types…
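A minimal sketch of the stochastic proximal point (proximal iteration) update described above: at each step, the proximal operator of a single randomly sampled loss term is applied with a decreasing step size. The least-squares loss, data, and step-size schedule below are illustrative assumptions, not the paper's Hilbert-space setting.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic least-squares problem: minimize (1/2n) * ||A x - b||^2 (illustrative).
n, d = 200, 5
A = rng.standard_normal((n, d))
x_true = rng.standard_normal(d)
b = A @ x_true

def spi_step(x, a, bi, eta):
    """One stochastic proximal iteration step for f_i(x) = 0.5*(a.x - bi)^2.

    The proximal update x+ = argmin_z f_i(z) + ||z - x||^2 / (2*eta)
    has this closed form for a scalar quadratic loss.
    """
    return x - eta * (a @ x - bi) / (1.0 + eta * (a @ a)) * a

x = np.zeros(d)
for k in range(1, 20001):
    i = rng.integers(n)           # sample one loss term uniformly
    eta = 10.0 / k                # decreasing step sizes, as in sub-linear analyses
    x = spi_step(x, A[i], b[i], eta)

print(np.linalg.norm(x - x_true))  # distance to the minimizer after the run
```

Because the update is implicit (a proximal step), it remains stable even for the large early step sizes that would make an explicit stochastic gradient step diverge.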

## One Citation

Sub-linear convergence of a tamed stochastic gradient descent method in Hilbert space

- Computer Science, Mathematics · ArXiv
- 2021

Sub-linear convergence of the scheme, shown to be optimal, is rigorously proved for strongly convex objective functions on an abstract Hilbert space, illustrating the good stability properties of the tamed stochastic gradient descent method.

## References

Showing 1–10 of 42 references

Stochastic proximal iteration: A non-asymptotic improvement upon stochastic gradient descent

- www.math.ucla.edu/eryu/papers/spi.pdf
- 2016

Towards Stability and Optimality in Stochastic Gradient Descent

- Mathematics, Computer Science · AISTATS
- 2016

A new iterative procedure termed averaged implicit SGD (AI-SGD) is proposed, which employs an implicit update at each iteration that is related to proximal operators in optimization, and achieves competitive performance with other state-of-the-art procedures.
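The implicit update and the averaging in AI-SGD can be sketched as follows; for a least-squares loss the implicit equation is solvable in closed form (it coincides with a proximal step). The data and step-size schedule here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic least-squares data (illustrative).
n, d = 200, 5
A = rng.standard_normal((n, d))
x_true = rng.standard_normal(d)
b = A @ x_true

def implicit_sgd_step(x, a, bi, eta):
    """Implicit update: x+ = x - eta * grad f_i(x+), solved in closed form
    for f_i(x) = 0.5*(a.x - bi)^2. Identical to a proximal step here."""
    return x - eta * (a @ x - bi) / (1.0 + eta * (a @ a)) * a

x = np.zeros(d)
running_sum = np.zeros(d)
num_iters = 20000
for k in range(1, num_iters + 1):
    i = rng.integers(n)
    x = implicit_sgd_step(x, A[i], b[i], 1.0 / k ** 0.5)  # slowly decaying steps
    running_sum += x

# Polyak-Ruppert averaging: the "averaged" part of AI-SGD.
x_avg = running_sum / num_iters
print(np.linalg.norm(x_avg - x_true))
```

The implicit step gives stability with respect to the step-size choice, while the averaging reduces the variance of the final estimate.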

Stochastic gradient descent methods for estimation with large data sets

- Mathematics
- 2015

We develop methods for parameter estimation in settings with large-scale data sets, where traditional methods are no longer tenable. Our methods rely on stochastic approximations, which are…

The proximal Robbins–Monro method

- Mathematics
- 2015

The need for parameter estimation with massive datasets has reinvigorated interest in stochastic optimization and iterative estimation procedures. Stochastic approximations are at the forefront of…

Stochastic proximal splitting algorithm for composite minimization

- Computer Science, Mathematics · Optim. Lett.
- 2021

This note tackles composite optimization problems where only stochastic information on both the smooth and nonsmooth components is assumed to be available, using a stochastic proximal first-order scheme with stochastic proximal updates.

Convergence of Stochastic Proximal Gradient Algorithm

- Mathematics · Applied Mathematics & Optimization
- 2019

We prove novel convergence results for a stochastic proximal gradient algorithm suitable for solving a large class of convex optimization problems, where a convex objective function is given by the…
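A stochastic proximal gradient iteration of this kind alternates a stochastic gradient step on the smooth part with a proximal (backward) step on the nonsmooth part. Below is a sketch for a Lasso-type instance, where the prox of the l1-norm is soft-thresholding; the problem data, regularization weight, and step sizes are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

# Illustrative composite problem: min_x (1/2n)*||A x - b||^2 + lam*||x||_1
n, d, lam = 300, 10, 0.01
A = rng.standard_normal((n, d))
x_sparse = np.zeros(d)
x_sparse[:3] = [1.0, -2.0, 0.5]   # sparse ground truth
b = A @ x_sparse

def soft_threshold(v, t):
    """Proximal operator of t*||.||_1 (soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

x = np.zeros(d)
for k in range(1, 50001):
    i = rng.integers(n)
    eta = 0.05 / k ** 0.55                          # diminishing step sizes
    grad = (A[i] @ x - b[i]) * A[i]                 # stochastic gradient of smooth part
    x = soft_threshold(x - eta * grad, eta * lam)   # proximal step on the l1 term

print(np.round(x, 2))
```

The proximal step is applied exactly at every iteration, so the nonsmooth regularizer never needs to be subsampled or smoothed.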

Methods for the temporal approximation of nonlinear, nonautonomous evolution equations

- Mathematics
- 2019

Differential equations are an important building block for modeling processes in physics, biology, and social sciences. Usually, their exact solution is not known explicitly though. Therefore,…

Modeling simple structures and geometry for better stochastic optimization algorithms

- Computer Science · AISTATS
- 2019

Model-based methods for stochastic optimization problems are developed, introducing the approximate-proximal point, or aProx, family, which includes stochastic subgradient, proximal point, and bundle methods and enjoys stronger convergence and robustness guarantees than classical approaches.

On a Randomized Backward Euler Method for Nonlinear Evolution Equations with Time-Irregular Coefficients

- Mathematics, Computer Science · Found. Comput. Math.
- 2019

A randomized version of the backward Euler method is introduced that is applicable to stiff ordinary differential equations and nonlinear evolution equations with time-irregular coefficients, and its convergence to the exact solution is proved.

Proximal-Proximal-Gradient Method

- Mathematics, Chemistry · Journal of Computational Mathematics
- 2019

In this paper, we present the proximal-proximal-gradient method (PPG), a novel optimization method that is simple to implement and simple to parallelize. PPG generalizes the proximal-gradient method…