Corpus ID: 119577748

Nonasymptotic convergence of stochastic proximal point algorithms for constrained convex optimization

@article{Patrascu2017NonasymptoticCO,
  title={Nonasymptotic convergence of stochastic proximal point algorithms for constrained convex optimization},
  author={Andrei Patrascu and Ion Necoara},
  journal={arXiv: Optimization and Control},
  year={2017}
}
A very popular approach for solving stochastic optimization problems is the stochastic gradient descent method (SGD). Although the SGD iteration is computationally cheap and the practical performance of this method may be satisfactory under certain circumstances, there is recent evidence of its convergence difficulties and instability for inappropriate choices of its parameters. To avoid these drawbacks, which are naturally introduced by the SGD scheme, stochastic proximal point algorithms have recently been …
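
As a rough illustration of the contrast drawn in the abstract, the sketch below (a toy under stated assumptions, not the paper's algorithm or constrained setting) compares one SGD step with one stochastic proximal point step on a least-squares objective, where the proximal subproblem has a closed-form solution; the step-size rule and problem data are illustrative assumptions only.

```python
import numpy as np

# Toy comparison of SGD vs. stochastic proximal point (SPP) steps on
# f(x) = (1/n) * sum_i 0.5 * (a_i^T x - b_i)^2. Illustrative only.

rng = np.random.default_rng(0)
n, d = 200, 5
A = rng.standard_normal((n, d))
x_true = rng.standard_normal(d)
b = A @ x_true + 0.01 * rng.standard_normal(n)

def sgd_step(x, a_i, b_i, step):
    # Explicit step: x - step * grad f_i(x)
    return x - step * (a_i @ x - b_i) * a_i

def spp_step(x, a_i, b_i, step):
    # Implicit step: argmin_z f_i(z) + (1/(2*step)) * ||z - x||^2,
    # which has a closed form for a rank-one quadratic f_i.
    r = a_i @ x - b_i
    return x - step * r / (1.0 + step * (a_i @ a_i)) * a_i

x_sgd = np.zeros(d)
x_spp = np.zeros(d)
for k in range(n):
    i = rng.integers(n)
    step = 1.0 / (k + 1)          # diminishing step size (an assumption)
    x_sgd = sgd_step(x_sgd, A[i], b[i], step)
    x_spp = spp_step(x_spp, A[i], b[i], step)

print("SGD error:", np.linalg.norm(x_sgd - x_true))
print("SPP error:", np.linalg.norm(x_spp - x_true))
```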

Citations

Convergence analysis of stochastic higher-order majorization-minimization algorithms
Majorization-minimization algorithms consist of successively minimizing a sequence of upper bounds of the objective function so that along the iterations the objective function decreases. Such a …
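
A minimal sketch of the majorization-minimization principle described in this entry, assuming a smooth least-squares objective: for an L-smooth f, the quadratic u(z) = f(x) + <grad f(x), z - x> + (L/2)||z - x||^2 upper-bounds f, and minimizing it yields the gradient step. This illustrates the general idea only, not the stochastic higher-order schemes analyzed in the cited paper.

```python
import numpy as np

# Majorization-minimization on f(x) = 0.5 * ||Ax - b||^2: minimize the
# quadratic majorizer at each iterate; f then decreases monotonically.

rng = np.random.default_rng(1)
A = rng.standard_normal((50, 5))
b = rng.standard_normal(50)

def f(x):
    return 0.5 * np.linalg.norm(A @ x - b) ** 2

def grad_f(x):
    return A.T @ (A @ x - b)

L = np.linalg.norm(A, 2) ** 2        # Lipschitz constant of grad f
x = np.zeros(A.shape[1])
values = [f(x)]
for _ in range(20):
    x = x - grad_f(x) / L            # minimizer of the quadratic majorizer at x
    values.append(f(x))

assert all(v1 >= v2 for v1, v2 in zip(values, values[1:]))  # monotone decrease
print("f along the iterations:", [round(v, 3) for v in values[:5]], "...")
```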
General Convergence Analysis of Stochastic First-Order Methods for Composite Optimization
  • I. Necoara
  • Mathematics, Computer Science
  • J. Optim. Theory Appl.
  • 2021
TLDR
This work considers stochastic composite convex optimization problems whose objective function satisfies a stochastic bounded gradient condition, with or without a quadratic functional growth property, covering a large class of objective functions.
Random gradient algorithms for convex minimization over intersection of simple sets
  • I. Necoara
  • Computer Science
  • 2019 18th European Control Conference (ECC)
  • 2019
TLDR
This paper considers constrained convex optimization problems, where the constraints are described as the intersection of a finite number of convex sets, and derives gradient methods with random projections that can easily address streaming settings and optimization problems having a very large number of given constraints.
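
The sketch below conveys the flavor of such a scheme under simplifying assumptions: a toy quadratic objective minimized over the intersection of two simple sets (a box and a Euclidean ball), alternating a gradient step with a projection onto one randomly drawn set per iteration. It is not the exact algorithm or step-size rule from the cited paper.

```python
import numpy as np

# Gradient step on the objective + projection onto ONE randomly chosen
# constraint set per iteration (illustrative sketch).

rng = np.random.default_rng(2)
d = 10
target = rng.standard_normal(d)

def grad_f(x):                       # f(x) = 0.5 * ||x - target||^2
    return x - target

def proj_box(x, lo=-0.5, hi=0.5):    # projection onto the box [lo, hi]^d
    return np.clip(x, lo, hi)

def proj_ball(x, radius=1.0):        # projection onto the Euclidean ball
    nrm = np.linalg.norm(x)
    return x if nrm <= radius else x * (radius / nrm)

projections = [proj_box, proj_ball]
x = np.zeros(d)
for k in range(500):
    step = 1.0 / np.sqrt(k + 1)      # diminishing step size (an assumption)
    x = x - step * grad_f(x)         # gradient step on the objective
    proj = projections[rng.integers(len(projections))]
    x = proj(x)                      # projection onto one random constraint set

print("final point:", np.round(x, 3))
```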
Stochastic (Approximate) Proximal Point Methods: Convergence, Optimality, and Adaptivity
We develop model-based methods for solving stochastic convex optimization problems, introducing the approximate-proximal point, or aProx, family, which includes stochastic subgradient, proximal …
Randomized projection methods for convex feasibility problems: conditioning and convergence rates
Finding a point in the intersection of a collection of closed convex sets, that is the convex feasibility problem, represents the main modeling strategy for many computational problems. In this paper …
Minibatch stochastic subgradient-based projection algorithms for solving convex inequalities
This paper deals with the convex feasibility problem, where the feasible set is given as the intersection of a (possibly infinite) number of closed convex sets. We assume that each set is specified …
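
As a hedged illustration of the minibatch subgradient-projection idea, the following sketch applies averaged Polyak-type steps to a finite set of linear inequalities; the batch size, sampling rule, and problem data are assumptions made for the demo, not the setting or guarantees of the cited paper.

```python
import numpy as np

# For each sampled violated constraint g_i(x) = a_i^T x - b_i > 0, take the
# Polyak-type step x - (g_i(x)/||a_i||^2) * a_i, then average over the minibatch.

rng = np.random.default_rng(4)
m, d = 100, 5
A = rng.standard_normal((m, d))
b = A @ rng.standard_normal(d) + 1.0   # guarantees the system a_i^T x <= b_i is feasible

x = 10.0 * np.ones(d)                  # infeasible starting point
batch = 10
for _ in range(300):
    idx = rng.choice(m, size=batch, replace=False)
    step = np.zeros(d)
    for i in idx:
        viol = A[i] @ x - b[i]
        if viol > 0:                               # constraint i is violated
            step += (viol / (A[i] @ A[i])) * A[i]  # subgradient projection step
    x = x - step / batch                           # average the minibatch updates

print("max violation:", max(0.0, float(np.max(A @ x - b))))
```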
Linear convergence of dual coordinate descent on non-polyhedral convex problems
This paper deals with constrained convex problems, where the objective function is smooth and strongly convex and the feasible set is given as the intersection of a large number of closed convex …
Random minibatch projection algorithms for convex feasibility problems
  • A. Nedić, I. Necoara
  • Computer Science
  • 2019 IEEE 58th Conference on Decision and Control (CDC)
  • 2019
TLDR
This work is the first to prove that random minibatch subgradient-based projection updates have better complexity than their single-sample variants; it also derives asymptotic convergence results and proves a linear convergence rate.
Stable Robbins-Monro approximations through stochastic proximal updates
The need for parameter estimation with massive data has reinvigorated interest in iterative estimation procedures. Stochastic approximations, such as stochastic gradient descent, are at the forefront …
OR-SAGA: Over-relaxed stochastic average gradient mapping algorithms for finite sum minimization
In this paper we derive a family of stochastic average gradient methods for solving unconstrained convex problems with the objective function expressed as a finite sum, that uses a proximal operator …

References

SHOWING 1-10 OF 31 REFERENCES
Regularized Iterative Stochastic Approximation Methods for Stochastic Variational Inequality Problems
TLDR
This work introduces two classes of stochastic approximation methods, each of which requires exactly one projection step at every iteration, and provides convergence analysis for each of them.
Convergence of Stochastic Proximal Gradient Algorithm
We prove novel convergence results for a stochastic proximal gradient algorithm suitable for solving a large class of convex optimization problems, where a convex objective function is given by the …
RES: Regularized Stochastic BFGS Algorithm
TLDR
Convergence results show that lower and upper bounds on the Hessian eigenvalues of the sample functions are sufficient to guarantee almost sure convergence of a subsequence generated by RES and convergence of the sequence in expectation to optimal arguments.
Non-Asymptotic Analysis of Stochastic Approximation Algorithms for Machine Learning
TLDR
This work provides a non-asymptotic analysis of the convergence of two well-known algorithms: stochastic gradient descent and a simple modification in which the iterates are averaged. The analysis suggests that a learning rate proportional to the inverse of the number of iterations, while leading to the optimal convergence rate, is not robust to the lack of strong convexity or to the setting of the proportionality constant.
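
A minimal sketch of the iterate-averaging modification mentioned in this entry, on a toy least-squares problem; the step-size constants are illustrative assumptions rather than the tuned choices analyzed in the paper.

```python
import numpy as np

# SGD with a slowly decaying step size plus a running (Polyak-Ruppert) average
# of the iterates; compares the last iterate with the averaged iterate.

rng = np.random.default_rng(3)
n, d = 1000, 5
A = rng.standard_normal((n, d))
x_true = rng.standard_normal(d)
b = A @ x_true + 0.1 * rng.standard_normal(n)

x = np.zeros(d)
x_avg = np.zeros(d)
for k in range(n):
    i = rng.integers(n)
    g = (A[i] @ x - b[i]) * A[i]          # stochastic gradient at sample i
    x = x - (0.5 / np.sqrt(k + 1)) * g    # slowly decaying step (an assumption)
    x_avg += (x - x_avg) / (k + 1)        # running average of the iterates

print("last iterate error:", np.linalg.norm(x - x_true))
print("averaged iterate error:", np.linalg.norm(x_avg - x_true))
```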
Stochastic Three-Composite Convex Minimization
TLDR
This work proves the convergence characterization of the proposed algorithm in expectation under the standard assumptions for the stochastic gradient estimate of the smooth term of the sum of three convex functions.
Random algorithms for convex minimization problems
  • A. Nedić
  • Computer Science, Mathematics
  • Math. Program.
  • 2011
This paper deals with iterative gradient and subgradient methods with random feasibility steps for solving constrained convex minimization problems, where the constraint set is specified as the …
Robust Stochastic Approximation Approach to Stochastic Programming
TLDR
It is intended to demonstrate that a properly modified SA approach can be competitive with, and even significantly outperform, the SAA method for a certain class of convex stochastic problems.
Randomized projection methods for convex feasibility problems: conditioning and convergence rates
Finding a point in the intersection of a collection of closed convex sets, that is the convex feasibility problem, represents the main modeling strategy for many computational problems. In this paper …
Towards Stability and Optimality in Stochastic Gradient Descent
TLDR
This work proposes a new iterative procedure termed averaged implicit SGD (AI-SGD), which employs an implicit update at each iteration related to proximal operators in optimization, and achieves performance competitive with other state-of-the-art procedures.
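
To make the implicit update concrete, the short sketch below combines an implicit (proximal-type) step, which has a closed form for a quadratic loss, with iterate averaging; it is a toy rendering of the idea behind AI-SGD under these assumptions, not the cited procedure.

```python
import numpy as np

# Implicit step x_{k+1} = x_k - step * grad f_i(x_{k+1}) (closed form for a
# rank-one quadratic f_i), followed by a running average of the iterates.

rng = np.random.default_rng(5)
n, d = 1000, 5
A = rng.standard_normal((n, d))
x_true = rng.standard_normal(d)
b = A @ x_true + 0.1 * rng.standard_normal(n)

x = np.zeros(d)
x_avg = np.zeros(d)
for k in range(n):
    i = rng.integers(n)
    step = 1.0 / np.sqrt(k + 1)                               # an assumption
    r = A[i] @ x - b[i]
    x = x - step * r / (1.0 + step * (A[i] @ A[i])) * A[i]    # implicit (proximal) step
    x_avg += (x - x_avg) / (k + 1)                            # iterate averaging

print("averaged implicit SGD error:", np.linalg.norm(x_avg - x_true))
```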
Linear convergence of first order methods for non-strongly convex optimization
TLDR
This paper derives linear convergence rates of several first order methods for solving smooth non-strongly convex constrained optimization problems, i.e. involving an objective function with a Lipschitz continuous gradient that satisfies some relaxed strong convexity condition.