Corpus ID: 233210383

Sample Average Approximations of Strongly Convex Stochastic Programs in Hilbert Spaces

@inproceedings{Milz2021SampleAA,
  title={Sample Average Approximations of Strongly Convex Stochastic Programs in Hilbert Spaces},
  author={Johannes Milz},
  year={2021}
}
We analyze the tail behavior of solutions to sample average approximations (SAAs) of stochastic programs posed in Hilbert spaces. We require that the integrand be strongly convex with the same convexity parameter for each realization. Combined with a standard condition from the literature on stochastic programming, we establish non-asymptotic exponential tail bounds for the distance between the SAA solutions and the stochastic program’s solution, without assuming compactness of the feasible set…
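A schematic form of such a non-asymptotic tail bound (illustrative only; the symbols $\mu$, $C$, and $\beta$ are placeholders, not the paper's notation):

```latex
% Sketch: if each realization of the integrand is \mu-strongly convex and a
% standard moment/tail condition from the stochastic programming literature
% holds, the minimizer x_N^* of the SAA with N i.i.d. samples satisfies an
% exponential tail bound of the schematic form
\[
  \mathbb{P}\bigl( \| x_N^* - x^* \| \ge \varepsilon \bigr)
  \;\le\; C \, e^{-N \beta(\varepsilon)},
  \qquad \varepsilon > 0,
\]
% where x^* is the stochastic program's solution and \beta(\varepsilon) > 0
% depends on \varepsilon and the convexity parameter \mu; no compactness of
% the feasible set is required.
```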
1 Citation
Asymptotic Properties of Monte Carlo Methods in Elliptic PDE-Constrained Optimization under Uncertainty
Monte Carlo approximations for random linear elliptic PDE-constrained optimization problems are studied. We use empirical process theory to obtain best-possible mean convergence rates $O(n^{-1/2})$ for…

References

Showing 1–10 of 56 references
On quantitative stability in infinite-dimensional optimization under uncertainty
  • Optim. Lett. (2021)
An Interior-Point Approach for Solving Risk-Averse PDE-Constrained Optimization Problems with Coherent Risk Measures
A method for solving PDE-constrained optimization problems in which the risk measure is a convex combination of the mean and the conditional value-at-risk (CVaR); a log-barrier technique is suggested to approximate the risk measure.
Stochastic proximal gradient methods for nonconvex problems in Hilbert spaces
For finite-dimensional problems, stochastic approximation methods have long been used to solve stochastic optimization problems. Their application to infinite-dimensional problems is less understood…
A Stochastic Gradient Method With Mesh Refinement for PDE-Constrained Optimization Under Uncertainty
This paper focuses on the efficient numerical minimization, based on stochastic approximation, of a convex and smooth tracking-type functional subject to a linear partial differential equation with random coefficients and box constraints.
Approximations of semicontinuous functions with applications to stochastic optimization and statistical estimation
  • J. Royset
  • Math. Program. (2020)
It is established that every upper semicontinuous (usc) function is the limit of a hypo-converging sequence of piecewise affine functions of the difference-of-max type, and the resulting algorithmic possibilities for the approximate solution of infinite-dimensional optimization problems are illustrated.
First-order and Stochastic Optimization Methods for Machine Learning
A quasi-Monte Carlo Method for an Optimal Control Problem Under Uncertainty
It is shown that, under moderate assumptions on the decay of the input random field, the error rate obtained by using a specially designed, randomly shifted rank-1 lattice quadrature rule is essentially inversely proportional to the number of quadrature points, and the overall discretization error of the problem is derived in detail.
G.Ch.: Projected Stochastic Gradients for Convex Constrained Problems in Hilbert Spaces
  • SIAM J. Optim. (2019)
On rates of convergence for sample average approximations in the almost sure sense and in mean
We study the rates at which optimal estimators in the sample average approximation approach converge to their deterministic counterparts in the almost sure sense and in mean. To be able to quantify…
Projected Stochastic Gradients for Convex Constrained Problems in Hilbert Spaces
An application to a class of PDE-constrained problems with a convex objective, convex constraints, and a random elliptic PDE is shown, and convergence of a projected stochastic gradient algorithm is demonstrated.