Rockafellian Relaxation in Optimization under Uncertainty: Asymptotically Exact Formulations

Louis L. Chen and Johannes O. Royset
In practice, optimization models are often prone to unavoidable inaccuracies due to lack of data and dubious assumptions. Traditionally, this has placed special emphasis on risk-based and robust formulations and their focus on “conservative” decisions. We develop, in contrast, an “optimistic” framework based on Rockafellian relaxations in which optimization is conducted not only over the original decision space but also jointly with a choice of model perturbation. The framework enables us to…
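The joint-minimization idea above can be illustrated on a toy instance (the problem, penalty parameter, and function names below are illustrative, not the paper's formulation): perturb a constraint by a variable u, charge a per-unit price theta for the perturbation, and minimize over the decision x and u together. For a sufficiently large theta the perturbation is driven to zero and the original constrained minimizer is recovered.

```python
import numpy as np
from scipy.optimize import minimize

# Toy original problem: minimize (x - 2)^2 subject to x <= 0.
# Relaxed formulation: perturb the constraint to x <= u and minimize
# jointly over (x, u), charging theta * u for the perturbation.
def solve_relaxation(theta):
    obj = lambda z: (z[0] - 2.0) ** 2 + theta * z[1]
    cons = [{"type": "ineq", "fun": lambda z: z[1] - z[0]}]  # x <= u
    bnds = [(None, None), (0.0, None)]                       # u >= 0
    res = minimize(obj, x0=[1.0, 1.0], bounds=bnds, constraints=cons)
    return res.x

# Small theta tolerates a perturbation (x = u = 1.5); large theta
# drives u to 0 and recovers the constrained minimizer x = 0.
for theta in (1.0, 8.0):
    x, u = solve_relaxation(theta)
    print(f"theta={theta}: x={x:.3f}, u={u:.3f}")
```

The first-order conditions confirm the two regimes: for theta = 1 the perturbed point x = u = 1.5 is stationary, while for theta = 8 both multipliers at (0, 0) are nonnegative, so the unperturbed solution is optimal.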


Optimistic Robust Optimization With Applications To Machine Learning
This paper provides a new interpretation of popular sparsity-inducing non-convex regularization schemes and shows that DCA or DCA-like optimization approaches can be intuitive and efficient.
Data-driven distributionally robust optimization using the Wasserstein metric: performance guarantees and tractable reformulations
It is demonstrated that the distributionally robust optimization problems over Wasserstein balls can in fact be reformulated as finite convex programs—in many interesting cases even as tractable linear programs.
Robust Solutions of Optimization Problems Affected by Uncertain Probabilities
The robust counterpart of a linear optimization problem with φ-divergence uncertainty is tractable for most choices of φ typically considered in the literature, and the approach extends to problems that are nonlinear in the optimization variables.
Mathematical Foundations of Robust and Distributionally Robust Optimization
Robust and distributionally robust optimization are modeling paradigms for decision-making under uncertainty where the uncertain parameters are only known to reside in an uncertainty set or are governed by a probability distribution that is itself only known to belong to an ambiguity set.
Distributionally Robust Convex Optimization
A unifying framework for modeling and solving distributionally robust optimization problems and introduces standardized ambiguity sets that contain all distributions with prescribed conic representable confidence sets and with mean values residing on an affine manifold.
Variational Theory for Optimization under Stochastic Ambiguity
This work provides a novel, unifying perspective on optimization under stochastic ambiguity that rests on two pillars: a metric for the space of distribution functions based on the hypo-distance between upper semicontinuous functions and a metric consistent with convergence in distribution (= weak* convergence) of the associated probability measures.
An exact penalization viewpoint of constrained optimization
In their seminal papers, Eremin [Soviet Mathematics Doklady, 8 (1966), pp. 459–462] and Zangwill [Management Science, 13 (1967), pp. 344–358] introduce a notion of exact penalization for use in the solution of constrained optimization problems.
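The exactness phenomenon is easy to see numerically (the instance below is illustrative, not from either paper): for an l1 penalty on the constraint residual, any penalty weight above the magnitude of the optimal multiplier makes the penalized minimizer coincide exactly with the constrained one, rather than merely approach it.

```python
import numpy as np

def penalized_argmin(c):
    # l1 (exact) penalty for: minimize x^2 subject to x = 1.
    # The multiplier at the solution has magnitude 2, so the
    # penalty is exact precisely for c > 2.
    grid = np.linspace(-2.0, 3.0, 500001)        # step 1e-5
    vals = grid ** 2 + c * np.abs(grid - 1.0)
    return grid[np.argmin(vals)]

print(penalized_argmin(1.0))  # below the threshold: minimizer c/2 = 0.5
print(penalized_argmin(3.0))  # above the threshold: minimizer exactly 1
```

A grid search is used deliberately: the penalized objective is nonsmooth at the constraint, which is exactly where the minimizer sits once the penalty is exact.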
Statistics of Robust Optimization: A Generalized Empirical Likelihood Approach
A generalized empirical likelihood framework—based on distributional uncertainty sets constructed from nonparametric f-divergence balls—for Hadamard differentiable functionals, and in particular, stochastic optimization problems, is developed.
Approximations and solution estimates in optimization
  • J. Royset, Math. Program., 2018
A broad framework for quantifying approximations is laid out by viewing finite- and infinite-dimensional constrained minimization problems as instances of extended real-valued lower semicontinuous functions defined on a general metric space; near-optimal and near-feasible solutions are shown to be effectively Lipschitz continuous with modulus one in this distance.
Stability and Sensitivity of Stochastic Dominance Constrained Optimization Models
The notion of a shadow utility, which determines the changes of the optimal value when the underlying random variables are perturbed, is introduced and a limit theorem for the optimal values of empirical approximations of dominance constrained optimization models is derived.