# Robust Stochastic Approximation Approach to Stochastic Programming

@article{Nemirovski2009RobustSA, title={Robust Stochastic Approximation Approach to Stochastic Programming}, author={Arkadi Nemirovski and Anatoli B. Juditsky and Guanghui Lan and Alexander Shapiro}, journal={SIAM J. Optim.}, year={2009}, volume={19}, pages={1574-1609} }

In this paper we consider optimization problems in which the objective function is given in the form of an expectation. A basic difficulty in solving such stochastic optimization problems is that the multidimensional integrals (expectations) involved cannot be computed with high accuracy. The aim of this paper is to compare two computational approaches based on Monte Carlo sampling techniques, namely the stochastic approximation (SA) and the sample average approximation (SAA) methods. Both…
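The SA approach compared in the abstract can be illustrated with a minimal sketch: run a projected stochastic subgradient iteration with a constant step size and average the iterates, which is the key "robust" ingredient of the paper. The toy least-squares objective, the box constraint, and all parameter values below are illustrative assumptions, not the paper's setup.

```python
import numpy as np

# Minimal robust-SA sketch (assumed toy problem, not the paper's):
# minimize E[(a.x - b)^2] over the box [-1, 1]^d using noisy gradients,
# an O(1/sqrt(N)) constant step size, and averaging of the iterates.
rng = np.random.default_rng(0)
d, N = 5, 20000
x_star = rng.uniform(-0.5, 0.5, d)      # hypothetical ground truth

def stochastic_grad(x):
    a = rng.normal(size=d)              # one Monte Carlo sample xi = (a, b)
    b = a @ x_star + 0.1 * rng.normal()
    return 2 * (a @ x - b) * a          # unbiased gradient estimate

x = np.zeros(d)
avg = np.zeros(d)
step = 1.0 / np.sqrt(N)                 # constant step over a fixed horizon
for t in range(N):
    x = np.clip(x - step * stochastic_grad(x), -1.0, 1.0)  # project onto box
    avg += (x - avg) / (t + 1)          # running average of the iterates

print(np.linalg.norm(avg - x_star))     # averaged iterate is close to x_star
```

The averaged iterate, rather than the last one, is what enjoys the robust convergence guarantee; the raw iterates keep fluctuating at a scale set by the constant step size.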

## 1,865 Citations

Optimal Stochastic Approximation Algorithms for Strongly Convex Stochastic Composite Optimization, II: Shrinking Procedures and Optimal Algorithms

- Computer Science, Mathematics, SIAM J. Optim.
- 2013

A multistage AC-SA algorithm is introduced, which possesses an optimal rate of convergence for solving strongly convex SCO problems in terms of its dependence not only on the target accuracy, but also on a number of problem parameters and on the selection of initial points.

An Overview of Stochastic Approximation

- Computer Science
- 2015

This chapter provides an overview of stochastic approximation (SA) methods in the context of simulation optimization, and presents some of the most well-known variants, such as Kesten’s rule, iterate averaging, varying bounds, and simultaneous perturbation stochastic approximation.
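Kesten’s rule, one of the SA variants named above, adapts the step size using sign changes of successive noisy gradients: oscillation suggests the iterate is near the minimizer, so the step shrinks. The 1-D objective, the harmonic step schedule, and the noise level below are illustrative assumptions.

```python
import numpy as np

# Hedged 1-D sketch of Kesten's rule: the step size a / (a + k) is
# decreased (k incremented) only when two successive noisy gradients
# disagree in sign, i.e. when the iterate appears to oscillate around
# the minimizer. Problem and constants are assumed for illustration.
rng = np.random.default_rng(2)

def kesten_sa(grad, x0, a=1.0, iters=5000):
    x, k, g_prev = x0, 0, 0.0
    for _ in range(iters):
        g = grad(x)
        if g * g_prev < 0:          # sign change: oscillation detected,
            k += 1                  # so shrink the step size
        x -= a / (a + k) * g
        g_prev = g
    return x

# Noisy gradient of f(x) = (x - 3)^2 / 2
x_hat = kesten_sa(lambda x: (x - 3.0) + 0.5 * rng.normal(), x0=0.0)
print(abs(x_hat - 3.0))             # close to zero
```

Far from the minimizer the gradient sign is stable, so the step stays large; near it the noise flips the sign roughly every other step and the schedule decays like a classical 1/t rule.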

An optimal method for stochastic composite optimization

- Computer Science, Mathematics, Math. Program.
- 2012

The accelerated stochastic approximation (AC-SA) algorithm based on Nesterov’s optimal method for smooth CP is introduced, and it is shown that the AC-SA algorithm can achieve the aforementioned lower bound on the rate of convergence for SCO.

Efficient Methods for Stochastic Composite Optimization

- Computer Science, Mathematics
- 2008

An accelerated scheme is proposed, which can achieve, uniformly in dimension, the theoretically optimal rate of convergence for solving this class of problems, and the significant advantages of the accelerated scheme over the existing algorithms are illustrated.

Simple and optimal methods for stochastic variational inequalities, I: operator extrapolation

- Mathematics, ArXiv
- 2020

Stochastic operator extrapolation (SOE) achieves the optimal complexity for solving a fundamental problem, i.e., stochastic smooth and strongly monotone VI, for the first time in the literature.

Stochastic subgradient projection methods for composite optimization with functional constraints

- Computer Science, Mathematics
- 2022

It is shown that the algorithm converges linearly when the objective function has a linear least-square form and the constraints are polyhedral, and sublinear convergence rates are proved for this stochastic subgradient algorithm.

A stochastic approximation method for chance-constrained nonlinear programs

- Computer Science
- 2018

This work proposes a stochastic approximation method for approximating the efficient frontier of chance-constrained nonlinear programs that converges to local solutions of a smooth approximation of the original problem, thereby avoiding poor local solutions that may be an artefact of a fixed sample.

Averaging and derivative estimation within Stochastic Approximation algorithms

- Computer Science, Proceedings of the 2012 Winter Simulation Conference (WSC)
- 2012

This article presents two results that characterize SA's convergence rates when both (i) averaging and (ii) derivative estimation are applied simultaneously, and should be seen as providing a theoretical basis for applying ideas that seem reasonable in practice.

Stochastic subgradient for composite convex optimization with functional constraints

- Computer Science, Mathematics
- 2022

It is shown that the algorithm converges linearly when the objective function has a linear least-square form and the constraints are polyhedral, and sublinear convergence rates are proved for this stochastic subgradient algorithm.

Stochastic quasi-Newton methods for non-strongly convex problems: Convergence and rate analysis

- Mathematics, Computer Science, 2016 IEEE 55th Conference on Decision and Control (CDC)
- 2016

This work allows the objective function to be merely convex and develop a regularized SQN method, and shows that the function value converges to its optimal value in both an almost sure and an expected-value sense.

## References

Showing 1–10 of 33 references

The Sample Average Approximation Method Applied to Stochastic Routing Problems: A Computational Study

- Computer Science, Mathematics, Comput. Optim. Appl.
- 2003

This work presents a detailed computational study of the application of the SAA method to solve three classes of stochastic routing problems and finds provably near-optimal solutions to these difficult stochastic programs using only a moderate amount of computation time.

Primal-dual subgradient methods for convex problems

- Mathematics, Computer Science, Math. Program.
- 2009

A new approach for constructing subgradient schemes for different types of nonsmooth problems with convex structure; the schemes are primal-dual in that they always generate a feasible approximation to the optimum of an appropriately formulated dual problem.

The Sample Average Approximation Method for Stochastic Discrete Optimization

- Mathematics, SIAM J. Optim.
- 2002

A Monte Carlo simulation-based approach to stochastic discrete optimization problems, where a random sample is generated and the expected value function is approximated by the corresponding sample average function.
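The SAA recipe described in this entry — draw one sample up front, replace the expectation by the sample average, then solve the resulting deterministic problem — can be sketched as follows. The least-squares instance and the sample size are assumptions made for illustration.

```python
import numpy as np

# Hedged SAA sketch: approximate min_x E[(a.x - b)^2] by the sample
# average (1/n) * ||A x - b||^2 over n i.i.d. draws of xi = (a, b),
# then solve that deterministic problem exactly via least squares.
rng = np.random.default_rng(1)
d, n = 5, 4000
x_star = rng.uniform(-0.5, 0.5, d)      # hypothetical ground truth

A = rng.normal(size=(n, d))             # n i.i.d. samples of a
b = A @ x_star + 0.1 * rng.normal(size=n)

x_saa, *_ = np.linalg.lstsq(A, b, rcond=None)
print(np.linalg.norm(x_saa - x_star))   # error shrinks like O(1/sqrt(n))
```

Unlike SA, all the randomness is fixed once the sample is drawn; any deterministic solver can then be applied to the sample-average problem.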

Stochastic quasigradient methods and their application to system optimization

- Computer Science
- 1983

Stochastic quasigradient methods generalize the well-known stochastic approximation methods for unconstrained optimization of the expectation of a random function to problems involving general constraints.

Non-euclidean restricted memory level method for large-scale convex optimization

- Mathematics, Computer Science, Math. Program.
- 2005

A new subgradient-type method for minimizing extremely large-scale nonsmooth convex functions over “simple” domains, allowing for flexible handling of accumulated information and for a tradeoff between the degree to which this information is utilized and the per-iteration complexity.

Monte Carlo bounding techniques for determining solution quality in stochastic programs

- Mathematics, Oper. Res. Lett.
- 1999

Introduction to Stochastic Search and Optimization: Estimation, Simulation, and Control (Spall, J.C.)

- Computer Science
- 2007

This comprehensive book offers 504 main pages divided into 17 chapters, covering multivariate analysis, basic tests in statistics, probability theory and convergence, random number generators and Markov processes, and over 250 exercises.

The empirical behavior of sampling methods for stochastic programming

- Computer Science, Ann. Oper. Res.
- 2006

A recently developed software tool executing on a computational grid is used to solve many large instances of these problems, making it possible to obtain high-quality solutions and to verify optimality and near-optimality of the computed solutions in various ways.

On Complexity of Stochastic Programming Problems

- Computer Science
- 2005

It is argued that two-stage (linear) stochastic programming problems with recourse can be solved with a reasonable accuracy by using Monte Carlo sampling techniques, while multistage stochastic programs, in general, are intractable.