A Regularized Sample Average Approximation Method for Stochastic Mathematical Programs with Nonsmooth Equality Constraints

@article{Meng2006ARS,
  title={A Regularized Sample Average Approximation Method for Stochastic Mathematical Programs with Nonsmooth Equality Constraints},
  author={Fanwen Meng and Huifu Xu},
  journal={SIAM J. Optim.},
  year={2006},
  volume={17},
  pages={891-919}
}
We investigate a class of two-stage stochastic programs where the second-stage problem is subject to nonsmooth equality constraints parameterized by the first-stage variables and a random vector. We consider the case when the parametric equality constraints have more than one solution. A regularization method is proposed to deal with the multiple-solution problem, and a sample average approximation (SAA) method is proposed to solve the regularized problem. We then investigate the convergence of…
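To fix ideas, the problem class and its regularized sample average approximation can be sketched as follows; the symbols f, H, H_ε, and y_ε below are generic placeholders chosen for illustration and are not necessarily the paper's own notation.

\[
\min_{x \in X} \; \mathbb{E}_{\xi}\bigl[ f\bigl(x,\, y(x,\xi)\bigr) \bigr]
\quad \text{s.t.} \quad y(x,\xi) \ \text{solves the nonsmooth equation} \ H(x, y, \xi) = 0,
\]

which may admit several second-stage solutions y for a given (x, ξ). With an i.i.d. sample ξ^1, …, ξ^N and a regularization parameter ε > 0 chosen so that the perturbed equation H_ε(x, y, ξ^i) = 0 has a unique solution y_ε(x, ξ^i), the regularized SAA problem takes the form

\[
\min_{x \in X} \; \frac{1}{N} \sum_{i=1}^{N} f\bigl(x,\, y_{\varepsilon}(x, \xi^{i})\bigr).
\]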

Convergence Analysis of Sample Average Approximation Methods for a Class of Stochastic Mathematical Programs with Equality Constraints

TLDR
A uniform Strong Law of Large Numbers for random compact set-valued mappings is derived and used to investigate the convergence of Karush-Kuhn-Tucker points of SAA programs as the sample size increases.
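For orientation, a uniform strong law of large numbers of this kind is roughly of the following form, stated schematically here rather than quoted from the paper: for a random compact set-valued mapping A(x, ξ) over a compact set X, with the sum understood as a Minkowski average and the expectation as the Aumann expectation,

\[
\sup_{x \in X} \; \mathbb{H}\!\left( \frac{1}{N} \sum_{i=1}^{N} A(x, \xi^{i}),\; \mathbb{E}\bigl[ A(x, \xi) \bigr] \right) \;\to\; 0
\quad \text{almost surely as } N \to \infty,
\]

where \mathbb{H} denotes the Hausdorff distance (or a one-sided deviation) and suitable measurability and integrable-boundedness conditions are assumed. Applied to subdifferential mappings, such a result underpins the convergence analysis of SAA Karush-Kuhn-Tucker points mentioned above.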

Stability Analysis of Two-Stage Stochastic Mathematical Programs with Complementarity Constraints via NLP Regularization

TLDR
A detailed stability analysis is carried out of the approximated problems, including continuity and local Lipschitz continuity of optimal value functions and outer semicontinuity and continuity of the set of optimal solutions and stationary points.

Penalized Sample Average Approximation Methods for Stochastic Mathematical Programs with Complementarity Constraints

TLDR
It is shown under some moderate conditions that the statistical estimators obtained from solving the penalized SAA problems converge almost surely to their true counterparts as the sample size increases.

A sample average approximation method based on a D-gap function for stochastic variational inequality problems

The sample average approximation method is one of the well-established methods in stochastic optimization. This paper presents a sample average approximation method based on a D-gap function for…
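As background, a standard definition recalled here (not taken from the cited paper): for a variational inequality VI(F, C), which asks for x ∈ C with ⟨F(x), y − x⟩ ≥ 0 for all y ∈ C, the regularized gap function and the D-gap function are

\[
f_{\alpha}(x) = \max_{y \in C} \Bigl\{ \langle F(x),\, x - y \rangle - \tfrac{\alpha}{2} \| x - y \|^{2} \Bigr\},
\qquad
g_{\alpha\beta}(x) = f_{\alpha}(x) - f_{\beta}(x), \quad 0 < \alpha < \beta .
\]

The D-gap function is nonnegative everywhere and vanishes exactly at the solutions of VI(F, C), so the VI can be recast as unconstrained minimization of g_{\alpha\beta}; in the stochastic setting, F(x) = \mathbb{E}[F(x,\xi)] is then replaced by its sample average.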

Sample average approximation method for a class of stochastic variational inequality problems

TLDR
The authors formulate the problems as constrained optimization problems, propose a sample average approximation method for solving them, and investigate the limiting behavior of the optimal values and the optimal solutions of the approximation problems.

A Nonlinear Lagrange Algorithm for Stochastic Minimax Problems Based on Sample Average Approximation Method

TLDR
Under a set of mild assumptions, it is proven that the sequences of solutions and multipliers obtained by the proposed algorithm converge to a Kuhn-Tucker pair of the original problem with probability one as the sample size increases.

Smooth sample average approximation of stationary points in nonsmooth stochastic optimization and applications

TLDR
A smoothing scheme for a general class of nonsmooth stochastic problems is considered, the convergence of stationary points of the smoothed sample average approximation problem as the sample size increases is investigated, and an error bound on approximate stationary points is obtained.
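A typical instance of such a smoothing scheme, recalled here as a standard construction rather than the paper's specific choice, replaces the nonsmooth plus function (t)_+ = \max(t, 0) by

\[
\phi_{\mu}(t) = \tfrac{1}{2}\Bigl( t + \sqrt{t^{2} + 4\mu^{2}} \Bigr), \qquad \mu > 0,
\]

which is continuously differentiable for every \mu > 0 and satisfies 0 \le \phi_{\mu}(t) - (t)_+ \le \mu, so that driving \mu \downarrow 0 together with N \to \infty recovers the original nonsmooth problem in the limit.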

Quantitative stability of two-stage stochastic linear variational inequality problems with fixed recourse

This paper focuses on the quantitative stability of a class of two-stage stochastic linear variational inequality problems whose second-stage problems are stochastic linear complementarity…

Convergence of Stationary Points of Sample Average Two-Stage Stochastic Programs: A Generalized Equation Approach

TLDR
It is shown under moderate conditions that an accumulation point of the SAA stationary points satisfies a relaxed stationarity condition for the true problem and, further, that with probability approaching one exponentially fast as the sample size increases, the SAA stationary points converge to the set of relaxed stationary points.
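Schematically, the generalized-equation viewpoint writes first-order stationarity of the true problem and of its SAA counterpart as follows, where Γ and X are generic placeholders: Γ collects the gradient or subdifferential information and \mathcal{N}_{X}(x) is the normal cone to the feasible set X at x.

\[
0 \in \mathbb{E}\bigl[ \Gamma(x, \xi) \bigr] + \mathcal{N}_{X}(x)
\qquad \text{and} \qquad
0 \in \frac{1}{N} \sum_{i=1}^{N} \Gamma(x, \xi^{i}) + \mathcal{N}_{X}(x).
\]

The convergence analysis then studies how solutions of the sampled generalized equation approach those of the true one as N \to \infty.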

References

SHOWING 1-10 OF 33 REFERENCES

Convergence Analysis of Sample Average Approximation Methods for a Class of Stochastic Mathematical Programs with Equality Constraints

TLDR
A uniform Strong Law of Large Numbers for random compact set-valued mappings is derived and used to investigate the convergence of Karush-Kuhn-Tucker points of SAA programs as the sample size increases.

An Implicit Programming Approach for a Class of Stochastic Mathematical Programs with Complementarity Constraints

TLDR
This paper investigates the existence, uniqueness, and differentiability of the lower-level equilibrium defined by the complementarity constraints and its dependence, using a nonsmooth version of the implicit function theorem, and studies the differentiability and convexity of the objective function, which implicitly depends upon the lower-level equilibrium.

SMOOTHING IMPLICIT PROGRAMMING APPROACHES FOR STOCHASTIC MATHEMATICAL PROGRAMS WITH LINEAR COMPLEMENTARITY CONSTRAINTS

TLDR
For the lower-level wait-and-see model, a smoothing implicit programming method is proposed, a comprehensive convergence theory is established, and it is shown that the two methods possess similar convergence properties.

Convergence theory for nonconvex stochastic programming with an application to mixed logit

TLDR
This work allows for local SAA minimizers of possibly nonconvex problems and proves, under suitable conditions, almost sure convergence of local second-order solutions of the SAA problem to second-order critical points of the true problem.

Simulation-Based Solution of Stochastic Mathematical Programs with Complementarity Constraints: Sample-Path Analysis

We consider a class of stochastic mathematical programs with complementarity constraints, in which both the objective and the constraints involve limit functions or expectations that need to be…

A Regularized Smoothing Newton Method for Box Constrained Variational Inequality Problems with P0-Functions

H. Qi, SIAM J. Optim., 2000
TLDR
Under CD-regularity, this work proves that the proposed regularized smoothing Newton method for the box-constrained variational inequality problem with a P0-function has a superlinear (quadratic) convergence rate without requiring strict complementarity conditions.
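As background on the regularization idea, a construction commonly used in this literature (recalled here; the cited paper's exact scheme may differ): if F is a P0-function, then for every ε > 0 the map

\[
F_{\varepsilon}(x) = F(x) + \varepsilon x
\]

is a P-function, which rules out multiple solutions of the regularized box-constrained variational inequality; a smoothing Newton method is then applied to a smooth approximation of the natural residual x - \Pi_{[l,u]}\bigl(x - F_{\varepsilon}(x)\bigr) = 0, with ε driven to zero along the iterations.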

Stochastic convex programming: Kuhn-Tucker conditions

Refinements of necessary optimality conditions in nondifferentiable programming I

In this study, we develop general optimality conditions of both Fritz John and Kuhn-Tucker type for an optimization problem with nondifferentiable data. The already known conditions are sharpened by…

On the Rate of Convergence of Optimal Solutions of Monte Carlo Approximations of Stochastic Programs

TLDR
It is shown that if the corresponding random functions are convex piecewise linear and the distribution is discrete, then an optimal solution of the approximating problem provides an exact optimal solution to the true problem with probability one for sufficiently large sample size.

Stochastic mathematical programs with equilibrium constraints