Approximation to Optimization Problems: An Elementary Review

  • Peter Kall
  • Published 1 February 1986
  • Mathematics
  • Math. Oper. Res.
During the last two decades the concept of epi-convergence was introduced and then was used in various investigations in optimization and related areas. The aim of this review is to show in an elementary way how closely the arguments in the epi-convergence approach are related to those of the classical theory of convergence of functions. 
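For reference, the standard definition of epi-convergence discussed in the review (stated here in the usual finite-dimensional form, not quoted from the paper itself) is:

```latex
% A sequence of functions f_n : R^d -> (-inf, +inf] epi-converges to f at x iff
% (i)  for every sequence x_n -> x:
%        \liminf_{n \to \infty} f_n(x_n) \ge f(x), and
% (ii) there exists a sequence x_n -> x with
%        \limsup_{n \to \infty} f_n(x_n) \le f(x).
% f_n epi-converges to f when both conditions hold at every x.
```

Epi-convergence is exactly the convergence of the epigraphs of the f_n as sets, which is what makes it the natural notion for the convergence of minimizers and optimal values.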

On continuous convergence and epi-convergence of random functions. Part I: Theory and relations

The paper investigates “almost surely” and “in probability” versions of these convergence notions in more detail and presents definitions and theoretical results.

A primal-dual approach to inexact subgradient methods

  • K. Au
  • Mathematics, Computer Science
    Math. Program.
  • 1996
Alternative solution procedures are developed that utilize the primal-dual information of IXS; these are especially useful when the projection operation onto the feasible set is difficult.

Viscosity Solutions of Minimization Problems

It is proved, in a rather large setting, that the solutions of the approximate problems converge to a "viscosity solution" of the original problem, that is, a solution that is minimal among all the solutions with respect to some viscosity criteria.

Bounds for and Approximations to Stochastic Linear Programs with Recourse — Tutorial —

The objective of stochastic linear programs with recourse contains a multivariate integral Q(x) = ∫_Ξ Q(x,ξ) P(dξ), in general, which is usually replaced by successively improved lower and upper bounding functions more amenable to optimization procedures.

Solving Stochastic Programs

First, decomposition methods that exploit the special structure of stochastic programs are discussed, and approximate solution methods based on Monte Carlo sampling and bounding techniques are introduced.
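The Monte Carlo sampling idea mentioned above can be illustrated with a minimal sample-average-approximation (SAA) sketch. The toy problem, cost parameters, and demand distribution below are hypothetical choices for illustration, not taken from the cited paper.

```python
# Minimal SAA sketch on a toy newsvendor problem: approximate the
# expectation E[c*x - p*min(x, D)] by a sample average, then minimize
# the sample-average objective over a coarse grid of order quantities.
import random

def saa_objective(x, demands, c=1.0, p=2.0):
    """Sample average of the newsvendor cost c*x - p*min(x, D)."""
    return sum(c * x - p * min(x, d) for d in demands) / len(demands)

random.seed(0)
# Draw demand samples D ~ Uniform(0, 10).
demands = [random.uniform(0.0, 10.0) for _ in range(5000)]

# Grid search over order quantities x in [0, 10].
grid = [i * 0.1 for i in range(101)]
x_star = min(grid, key=lambda x: saa_objective(x, demands))
```

With c = 1 and p = 2 the critical-fractile solution of the true problem is the median of D, so the SAA minimizer should land near x = 5 as the sample size grows, which is the convergence behavior the sampling-based methods above are designed to guarantee.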

On consistency of bounding operations in deterministic global optimization

This technical comment refers to the discussion of strong consistency of several bounding procedures in Lemma 2.1 and Proposition 2.1 of Ref. 1. A necessary clarification is given of the notion of

On the Convergence of Algorithms with Implications for Stochastic and Nondifferentiable Optimization

It is shown that under relatively lenient conditions, "stage-dependent descent" not necessarily monotonic is sufficient to guarantee convergence, and the notion of ∂-compatibility is introduced, and several results that permit relaxations of conditions imposed by previous approaches to algorithmic convergence are proved.

First Order Convergence Analysis for Sparse Grid Method in Stochastic Two-Stage Linear Optimization Problem

This paper proves the first order convergence rate of the sparse grid method for this important stochastic optimization model, utilizing convexity analysis and measure theory and extends the convergence theory of sparse grid integration method to piecewise linear and convex functions.

On the global minimization of the value-at-risk

Upper and lower bounds for the minimum VaR are developed and it is shown how the combined bounding procedures can be used to compute the latter value to global optimality.



Designing approximation schemes for stochastic optimization problems, in particular for stochastic programs with recourse

Various approximation schemes for stochastic optimization problems, involving approximations of the probability measures and/or of the objective functional, are investigated.

On the convergence of sequences of convex sets in finite dimensions

Four types of convergence for sequences of convex sets are investigated. Their interrelationships are explored.

Uniform convergence of convex optimization problems

Convergence of sequences of convex sets, cones and functions. II