Corpus ID: 235458085

Sub-linear convergence of a tamed stochastic gradient descent method in Hilbert space

Monika Eisenmann and Tony Stillfjord
In this paper, we introduce the tamed stochastic gradient descent method (TSGD) for optimization problems. Inspired by the tamed Euler scheme, which is a commonly used method within the context of stochastic differential equations, TSGD is an explicit scheme that exhibits stability properties similar to those of implicit schemes. As its computational cost is essentially equivalent to that of the well-known stochastic gradient descent method (SGD), it constitutes a very competitive alternative…
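A minimal sketch of the taming idea described above, assuming the standard tamed-Euler-style damping factor 1 + η‖g‖ (the precise scheme analyzed in the paper may differ). The damped step can never exceed η in norm, which is what gives the explicit method its implicit-like stability:

```python
import numpy as np

def tamed_sgd_step(x, grad, eta):
    # Tamed update: the step eta*g is damped by 1 + eta*||g||, so even a
    # superlinearly growing gradient moves x by at most eta in norm.
    g = grad(x)
    return x - eta * g / (1.0 + eta * np.linalg.norm(g))

# Toy strongly convex problem f(x) = 0.5*||x||^2 with noisy gradients.
rng = np.random.default_rng(0)
x = np.array([10.0, -10.0])
for k in range(1, 2001):
    eta = 1.0 / k  # decreasing step sizes, as is typical for SGD-type methods
    x = tamed_sgd_step(x, lambda z: z + 0.01 * rng.standard_normal(2), eta)
print(np.linalg.norm(x))
```

Note that for small gradients the damping factor is close to 1 and the scheme behaves like plain SGD; taming only kicks in where the gradient is large.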

Nonasymptotic convergence of stochastic proximal point methods for constrained convex optimization
This work introduces a new variant of the SPP method for solving stochastic convex problems subject to (in)finite intersection of constraints satisfying a linear regularity condition, and proves new nonasymptotic convergence results for convex Lipschitz continuous objective functions.
Dynamical Behavior of a Stochastic Forward–Backward Algorithm Using Random Monotone Operators
It is shown that with probability one, the interpolated process obtained from the iterates is an asymptotic pseudotrajectory in the sense of Benaïm and Hirsch of the differential inclusion involving the sum of the mean operators.
Towards Stability and Optimality in Stochastic Gradient Descent
A new iterative procedure termed averaged implicit SGD (AI-SGD) employs an implicit update at each iteration, is related to proximal operators in optimization, and achieves competitive performance with other state-of-the-art procedures.
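For the squared loss the implicit update mentioned above has a closed form, which makes the stability mechanism easy to see. A sketch under that assumption (the least-squares model, step-size schedule, and averaging rule here are illustrative choices, not the paper's exact setup):

```python
import numpy as np

def implicit_sgd_step(x, a, y, eta):
    # Implicit update x_new = x - eta * a * (a @ x_new - y) for the loss
    # 0.5*(a @ x - y)**2.  Solving for x_new yields a closed form whose
    # effective step size shrinks as eta*||a||^2 grows -- hence stability.
    return x - eta * a * (a @ x - y) / (1.0 + eta * (a @ a))

rng = np.random.default_rng(1)
x_true = np.array([2.0, -3.0])
x, x_bar = np.zeros(2), np.zeros(2)
for k in range(1, 5001):
    a = rng.standard_normal(2)
    y = a @ x_true + 0.01 * rng.standard_normal()
    x = implicit_sgd_step(x, a, y, eta=1.0 / np.sqrt(k))
    x_bar += (x - x_bar) / k  # running (Polyak-Ruppert) average of iterates
print(np.linalg.norm(x_bar - x_true))
```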
Stochastic model-based minimization of weakly convex functions
This work shows that under weak-convexity and Lipschitz conditions, the algorithm drives the expected norm of the gradient of the Moreau envelope to zero at the rate of $O(k^{-1/4})$.
Adam: A Method for Stochastic Optimization
This work introduces Adam, an algorithm for first-order gradient-based optimization of stochastic objective functions, based on adaptive estimates of lower-order moments, and provides a regret bound on the convergence rate that is comparable to the best known results under the online convex optimization framework.
Explicit stabilised gradient descent for faster strongly convex optimisation
This paper introduces the Runge–Kutta Chebyshev descent method (RKCD) for strongly convex optimisation problems. This new algorithm is based on explicit stabilised integrators for stiff differential equations.
Proximal-Proximal-Gradient Method
In this paper, we present the proximal-proximal-gradient method (PPG), a novel optimization method that is simple to implement and simple to parallelize. PPG generalizes the proximal-gradient method…
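The proximal-gradient method that PPG generalizes is shown below for context (this is the classic ISTA iteration for the lasso, not PPG itself; the problem instance is a made-up example):

```python
import numpy as np

def soft_threshold(z, t):
    # Proximal operator of t*||.||_1
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def proximal_gradient_lasso(A, b, lam, eta, iters=500):
    # Minimize 0.5*||A x - b||^2 + lam*||x||_1 via the proximal-gradient
    # iteration x <- prox_{eta*lam*||.||_1}(x - eta * grad_smooth(x)).
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        x = soft_threshold(x - eta * A.T @ (A @ x - b), eta * lam)
    return x

rng = np.random.default_rng(2)
A = rng.standard_normal((50, 10))
x_true = np.zeros(10); x_true[:2] = [1.0, -2.0]
b = A @ x_true
# Step size 1/L with L the Lipschitz constant ||A||_2^2 of the smooth part.
x_hat = proximal_gradient_lasso(A, b, lam=0.1, eta=1.0 / np.linalg.norm(A, 2) ** 2)
print(np.linalg.norm(x_hat - x_true))
```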
Asymptotic and finite-sample properties of estimators based on stochastic gradients
Stochastic gradient descent procedures have gained popularity for parameter estimation from large data sets. However, their statistical properties are not well understood in theory, and in practice…
A note on tamed Euler approximations
Strong convergence results on tamed Euler schemes, which approximate stochastic differential equations with superlinearly growing drift coefficients that are locally one-sided Lipschitz continuous, are presented.
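The tamed Euler scheme that inspired TSGD can be sketched as follows (a scalar toy example with an illustrative cubic drift; the taming factor 1 + h|b| is the standard choice from this literature):

```python
import numpy as np

def tamed_euler(x0, drift, sigma, h, n_steps, rng):
    # Tamed Euler-Maruyama: the drift increment h*b(x) is replaced by
    # h*b(x)/(1 + h*|b(x)|), which stays bounded even when b grows
    # superlinearly, avoiding the divergence of the plain explicit scheme.
    x = x0
    for _ in range(n_steps):
        b = drift(x)
        x = x + h * b / (1.0 + h * abs(b)) + sigma * np.sqrt(h) * rng.standard_normal()
    return x

# dX = -X^3 dt + 0.1 dW: a superlinearly growing drift for which the
# plain Euler scheme is known to diverge for large initial data.
rng = np.random.default_rng(3)
xT = tamed_euler(x0=5.0, drift=lambda x: -x**3, sigma=0.1, h=0.01, n_steps=1000, rng=rng)
print(xT)
```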
Ergodic Convergence of a Stochastic Proximal Point Algorithm
  • P. Bianchi
  • SIAM J. Optim., 2016
The weighted averaged sequence of iterates is shown to converge weakly to a zero of the Aumann expectation $\mathbb{E}(A(\xi_1,\cdot))$ under the assumption that the latter is maximal.