
Sub-linear convergence of a stochastic proximal iteration method in Hilbert space

Måns Williamson, Monika Eisenmann, Tony Stillfjord
We consider a stochastic version of the proximal point algorithm for optimization problems posed on a Hilbert space. A typical application of this is supervised learning. While the method is not new, it has not been extensively analyzed in this form. Indeed, most related results are confined to the finite-dimensional setting, where error bounds could depend on the dimension of the space. On the other hand, the few existing results in the infinite-dimensional setting only prove very weak types…
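As a concrete illustration of the iteration the abstract describes, the following is a minimal sketch in an assumed finite-dimensional least-squares setting (the paper itself works on a general Hilbert space): each step applies the proximal operator of a single randomly sampled term f_i(x) = (1/2)(a_i·x − b_i)², for which the proximal step has a closed form. The function name, step-size schedule, and problem setup are illustrative assumptions, not the paper's exact setting.

```python
import numpy as np

def stochastic_proximal_iteration(A, b, steps=20000, seed=0):
    """Sketch of a stochastic proximal point iteration for least squares.

    At step k, one data point i is sampled and the update is the proximal
    step on eta_k * f_i, where f_i(x) = 0.5 * (A[i] @ x - b[i])**2:
        x <- x - eta_k * (a @ x - b_i) / (1 + eta_k * ||a||^2) * a
    """
    rng = np.random.default_rng(seed)
    n, d = A.shape
    x = np.zeros(d)
    for k in range(1, steps + 1):
        eta = 1.0 / k          # decaying step size, as in sub-linear rate analyses
        i = rng.integers(n)    # sample one component function uniformly
        a, bi = A[i], b[i]
        # closed-form prox of eta * f_i evaluated at the current iterate
        x = x - eta * (a @ x - bi) / (1.0 + eta * (a @ a)) * a
    return x
```

Unlike an explicit stochastic gradient step, the implicit (proximal) update stays stable for any step size, which is the stability property this line of work quantifies.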
1 Citation


Sub-linear convergence of a tamed stochastic gradient descent method in Hilbert space
Sub-linear convergence of the scheme is rigorously proved, with optimal rate, for strongly convex objective functions on an abstract Hilbert space, which illustrates the good stability properties of the tamed stochastic gradient descent method.


Stochastic proximal iteration: A non-asymptotic improvement upon stochastic gradient descent (2016)
Towards Stability and Optimality in Stochastic Gradient Descent
A new iterative procedure termed averaged implicit SGD (AI-SGD) employs an implicit update at each iteration; this update is related to proximal operators in optimization, and the method achieves competitive performance with other state-of-the-art procedures.
Stochastic gradient descent methods for estimation with large data sets
We develop methods for parameter estimation in settings with large-scale data sets, where traditional methods are no longer tenable. Our methods rely on stochastic approximations, which are…
The proximal Robbins–Monro method
The need for parameter estimation with massive datasets has reinvigorated interest in stochastic optimization and iterative estimation procedures. Stochastic approximations are at the forefront of…
Stochastic proximal splitting algorithm for composite minimization
This note tackles composite optimization problems, where access only to stochastic information on both the smooth and nonsmooth components is assumed, using a stochastic first-order splitting scheme with stochastic proximal updates.
Convergence of Stochastic Proximal Gradient Algorithm
We prove novel convergence results for a stochastic proximal gradient algorithm suitable for solving a large class of convex optimization problems, where a convex objective function is given by the…
Methods for the temporal approximation of nonlinear, nonautonomous evolution equations
Differential equations are an important building block for modeling processes in physics, biology, and social sciences. Usually, however, their exact solution is not known explicitly. Therefore, …
Modeling simple structures and geometry for better stochastic optimization algorithms
Model-based methods for stochastic optimization problems are developed, introducing the approximate proximal point (aProx) family, which includes stochastic subgradient, proximal point, and bundle methods, and which enjoys stronger convergence and robustness guarantees than classical approaches.
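One representative member of the aProx family, sketched here under illustrative assumptions (nonnegative component losses, lower bound 0), is the truncated-model update: a linear model of the sampled loss is truncated below at zero before taking the proximal step, which caps the step length in a Polyak-like way. The function name and interface are my own, not from the paper.

```python
import numpy as np

def aprox_truncated_step(x, f_val, g, eta):
    """One truncated-model proximal step (a sketch of the aProx idea).

    f_val : value f_i(x) >= 0 of the sampled loss at x
    g     : a (sub)gradient of f_i at x
    eta   : nominal step size
    The prox of the model max(f_val + g @ (y - x), 0) yields a step
    of length min(eta, f_val / ||g||^2) along -g.
    """
    gg = g @ g
    if gg == 0.0:                       # x already minimizes this component
        return x
    step = min(eta, f_val / gg)         # truncation caps the step length
    return x - step * g
```

The cap f_val/‖g‖² is what makes the iteration robust to overly large nominal step sizes, one of the robustness guarantees alluded to above.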
On a Randomized Backward Euler Method for Nonlinear Evolution Equations with Time-Irregular Coefficients
A randomized version of the backward Euler method is introduced that is applicable to stiff ordinary differential equations and nonlinear evolution equations with time-irregular coefficients, and convergence to the exact solution is proved.
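A minimal sketch of the randomized-backward-Euler idea, under assumptions of my own choosing (fixed-point solve of the implicit equation, uniform sampling of the evaluation time): each implicit step evaluates f at a randomly drawn time in the current subinterval, which averages out time-irregular coefficients.

```python
import numpy as np

def randomized_backward_euler(f, x0, t0, t1, n_steps, seed=0, picard_iters=50):
    """Sketch of a randomized backward Euler solver for x'(t) = f(t, x).

    Each step solves the implicit equation
        x_{n+1} = x_n + h * f(tau_n, x_{n+1})
    with tau_n drawn uniformly from [t_n, t_n + h].  The implicit equation
    is solved by simple fixed-point (Picard) iteration, which suffices for
    this sketch when h times the Lipschitz constant of f is below 1.
    """
    rng = np.random.default_rng(seed)
    h = (t1 - t0) / n_steps
    t, x = t0, x0
    for _ in range(n_steps):
        tau = t + h * rng.random()      # random evaluation time in the subinterval
        y = x
        for _ in range(picard_iters):   # fixed-point solve of the implicit step
            y = x + h * f(tau, y)
        t, x = t + h, y
    return x
```

For an autonomous test problem such as x' = -x the randomization is inert and the scheme reduces to ordinary backward Euler, which makes it easy to sanity-check.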
Proximal-Proximal-Gradient Method
In this paper, we present the proximal-proximal-gradient method (PPG), a novel optimization method that is simple to implement and simple to parallelize. PPG generalizes the proximal-gradient method…