Data-driven distributionally robust optimization using the Wasserstein metric: performance guarantees and tractable reformulations

@article{Esfahani2018DatadrivenDR,
  title={Data-driven distributionally robust optimization using the Wasserstein metric: performance guarantees and tractable reformulations},
  author={Peyman Mohajerin Esfahani and Daniel Kuhn},
  journal={Mathematical Programming},
  year={2018},
  volume={171},
  pages={115-166}
}
We consider stochastic programs where the distribution of the uncertain parameters is only observable through a finite training dataset. Using the Wasserstein metric, we construct a ball in the space of (multivariate and non-discrete) probability distributions centered at the uniform distribution on the training samples, and we seek decisions that perform best in view of the worst-case distribution within this Wasserstein ball. The state-of-the-art methods for solving the resulting…
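As a reading aid only (notation chosen here, not quoted from the paper), the data-driven problem sketched in the abstract can be written as

\[
\hat{J}_N \;=\; \inf_{x \in \mathbb{X}} \; \sup_{\mathbb{Q} \in \mathbb{B}_{\varepsilon}(\hat{\mathbb{P}}_N)} \mathbb{E}_{\mathbb{Q}}\big[ h(x,\xi) \big],
\qquad
\mathbb{B}_{\varepsilon}(\hat{\mathbb{P}}_N) \;=\; \big\{ \mathbb{Q} : W\big(\mathbb{Q}, \hat{\mathbb{P}}_N\big) \le \varepsilon \big\},
\]

where \(\hat{\mathbb{P}}_N = \tfrac{1}{N}\sum_{i=1}^{N} \delta_{\hat{\xi}_i}\) is the uniform distribution on the \(N\) training samples, \(W\) denotes the Wasserstein metric, \(h(x,\xi)\) is the loss of decision \(x\) under realization \(\xi\), and \(\varepsilon\) is the radius of the Wasserstein ball.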
Wasserstein Distributionally Robust Stochastic Control: A Data-Driven Approach
Insoon Yang. IEEE Transactions on Automatic Control, 2021.
This article characterizes an explicit form of the optimal control policy and the worst-case distribution policy for linear-quadratic problems with a Wasserstein penalty, and shows that the contraction property of the associated Bellman operators extends a single-stage out-of-sample performance guarantee to the corresponding multistage guarantee without any degradation in the confidence level.
Wasserstein Distributionally Robust Optimization: Theory and Applications in Machine Learning
This tutorial argues that Wasserstein distributionally robust optimization has interesting ramifications for statistical learning and motivates new approaches for fundamental learning tasks such as classification, regression, maximum likelihood estimation, or minimum mean square error estimation, among others.
Statistical Analysis of Wasserstein Distributionally Robust Estimators
This tutorial considers statistical methods that invoke a min-max distributionally robust formulation to extract good out-of-sample performance in data-driven optimization and learning problems. It presents a central limit theorem for the DRO estimator and a recipe for constructing compatible confidence regions that are useful for uncertainty quantification.
Finite-Sample Guarantees for Wasserstein Distributionally Robust Optimization: Breaking the Curse of Dimensionality
Wasserstein distributionally robust optimization (DRO) aims to find robust and generalizable solutions by hedging against data perturbations in Wasserstein distance. Despite its recent empirical…
A Robust Learning Algorithm for Regression Models Using Distributionally Robust Optimization under the Wasserstein Metric
We present a Distributionally Robust Optimization (DRO) approach to estimate a robustified regression plane in a linear regression setting, when the observed samples are potentially contaminated with…
Data-driven Stochastic Programming with Distributionally Robust Constraints Under Wasserstein Distance: Asymptotic Properties
Distributionally robust optimization is a dominant paradigm for decision-making problems where the distribution of random variables is unknown. We investigate a distributionally robust optimization…
Distributionally Robust Optimization with Markovian Data
A data-driven distributionally robust optimization model is proposed to estimate the problem's objective function and optimal solution by leveraging results from large deviations theory to derive statistical guarantees on the quality of these estimators.
Regularization via Mass Transportation
This paper introduces new regularization techniques using ideas from distributionally robust optimization and gives new probabilistic interpretations to existing techniques: the worst-case expected loss is minimized, where the worst case is taken over the ball of all distributions within a bounded transportation distance from the empirical distribution.
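For intuition only, and as a hedged sketch rather than a statement of the paper's exact theorems (assumptions and notation are chosen here): for a type-1 Wasserstein ball of radius \(\varepsilon\) around the empirical distribution, an unconstrained support set, and a loss of the form \(\xi \mapsto \ell(\langle w, \xi \rangle)\) whose slope attains its Lipschitz modulus \(\operatorname{lip}(\ell)\) in the tails (e.g., hinge or absolute losses), the worst-case expected loss collapses to a norm-regularized empirical loss,

\[
\sup_{\mathbb{Q}:\, W_1(\mathbb{Q}, \hat{\mathbb{P}}_N) \le \varepsilon} \mathbb{E}_{\mathbb{Q}}\big[\ell(\langle w, \xi \rangle)\big]
\;=\; \frac{1}{N}\sum_{i=1}^{N} \ell\big(\langle w, \hat{\xi}_i \rangle\big) \;+\; \varepsilon \,\operatorname{lip}(\ell)\, \|w\|_{*},
\]

which is the sense in which mass transportation recovers classical regularization, with the ball radius playing the role of the regularization weight.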
Data-Driven Optimization with Distributionally Robust Second-Order Stochastic Dominance Constraints
Optimization with stochastic dominance constraints has recently received an increasing amount of attention in the quantitative risk management literature. Instead of requiring that the probabilistic…
A Robust Learning Approach for Regression Models Based on Distributionally Robust Optimization
This work presents a Distributionally Robust Optimization (DRO) approach to estimate a robustified regression plane in a linear regression setting, when the observed samples are potentially contaminated with adversarially corrupted outliers, and establishes two types of performance guarantees for the solution to the formulation under mild conditions.
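The two regression entries above describe Wasserstein DRO formulations that reduce to norm-regularized empirical-loss problems. The following minimal sketch is illustrative only: the absolute loss, the l2 penalty on the coefficients, and all data are assumptions made here, not the exact reduction proved in those papers.

import cvxpy as cp
import numpy as np

# Hedged sketch: a Wasserstein-DRO regression with absolute loss typically
# reduces to the empirical absolute loss plus a norm penalty on the regression
# coefficients, scaled by the ball radius eps.  The specific norm and constant
# depend on the assumptions of the cited papers; the l2 norm here is illustrative.
rng = np.random.default_rng(0)
N, d = 200, 5
X = rng.normal(size=(N, d))
beta_true = np.array([1.0, -2.0, 0.5, 0.0, 3.0])
y = X @ beta_true + 0.1 * rng.normal(size=N)
y[:10] += 10.0                     # a few adversarially corrupted outliers

eps = 0.1                          # Wasserstein radius acting as a regularization weight
beta = cp.Variable(d)
objective = cp.Minimize(cp.norm(y - X @ beta, 1) / N + eps * cp.norm(beta, 2))
cp.Problem(objective).solve()
print(np.round(beta.value, 2))     # close to beta_true despite the outliers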

References

Showing 1-10 of 77 references
Computationally Tractable Counterparts of Distributionally Robust Constraints on Risk Measures
This paper shows that the derivation of a tractable robust counterpart can be split into two parts, one corresponding to the risk measure and the other to the uncertainty set, and provides the computational tractability status for each of the uncertainty set and risk measure pairs that the authors could solve.
Distributionally Robust Optimization Under Moment Uncertainty with Application to Data-Driven Problems
This paper proposes a model that describes uncertainty in both the distribution form (discrete, Gaussian, exponential, etc.) and moments (mean and covariance matrix), and demonstrates that for a wide range of cost functions the associated distributionally robust stochastic program can be solved efficiently.
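One common way to formalize the moment-based ambiguity set described above (notation chosen here; \(\mu_0\) and \(\Sigma_0\) are the estimated mean and covariance, and \(\gamma_1, \gamma_2\) encode how much they are trusted) is

\[
\mathcal{D} \;=\; \Big\{ \mathbb{P} \;:\; \mathbb{P}(\xi \in \mathcal{S}) = 1,\;\;
\big(\mathbb{E}_{\mathbb{P}}[\xi] - \mu_0\big)^{\top} \Sigma_0^{-1} \big(\mathbb{E}_{\mathbb{P}}[\xi] - \mu_0\big) \le \gamma_1,\;\;
\mathbb{E}_{\mathbb{P}}\big[(\xi - \mu_0)(\xi - \mu_0)^{\top}\big] \preceq \gamma_2\, \Sigma_0 \Big\},
\]

over which the worst-case expected cost is then minimized.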
Robust Data-Driven Dynamic Programming
A robust data-driven DP scheme is proposed, which replaces the expectations in the DP recursions with worst-case expectations over a set of distributions close to the best estimate, and it is shown that the arising min-max problems in the DP recursions reduce to tractable conic programs.
Kullback-Leibler divergence constrained distributionally robust optimization
The main contribution of the paper is to show that KL divergence constrained DRO problems are often of the same complexity as their original stochastic programming problems and, thus, KL divergence appears to be a good candidate for modeling distribution ambiguities in mathematical programming.
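The complexity claim rests on a standard duality result in this literature (stated here from memory, not quoted from the paper): for an empirical reference distribution, the KL-constrained worst-case expectation reduces to a one-dimensional convex minimization over a dual variable alpha > 0. A minimal numerical sketch, with made-up data and radius rho:

import numpy as np
from scipy.optimize import minimize_scalar
from scipy.special import logsumexp

# sup_{Q : KL(Q || P_hat) <= rho} E_Q[h(xi)]
#   = min_{alpha > 0}  alpha * log E_{P_hat}[exp(h(xi)/alpha)] + alpha * rho
def worst_case_expectation(h_values, rho):
    """h_values: the loss h evaluated at the N training samples."""
    n = len(h_values)

    def dual(alpha):
        # alpha * log( (1/n) * sum_i exp(h_i / alpha) ) + alpha * rho
        return alpha * (logsumexp(h_values / alpha) - np.log(n)) + alpha * rho

    return minimize_scalar(dual, bounds=(1e-6, 1e6), method="bounded").fun

rng = np.random.default_rng(0)
losses = rng.normal(size=500) ** 2          # h(xi) = xi^2 on made-up samples
print(losses.mean())                        # nominal (empirical) expectation
print(worst_case_expectation(losses, 0.1))  # inflated worst-case expectation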
Distributionally Robust Convex Optimization
A unifying framework for modeling and solving distributionally robust optimization problems is presented, introducing standardized ambiguity sets that contain all distributions with prescribed conic representable confidence sets and with mean values residing on an affine manifold.
Robust Solutions of Optimization Problems Affected by Uncertain Probabilities
The robust counterpart of a linear optimization problem with φ-divergence uncertainty is shown to be tractable for most of the choices of φ typically considered in the literature, and the approach is extended to problems that are nonlinear in the optimization variables.
Worst-Case Value-At-Risk and Robust Portfolio Optimization: A Conic Programming Approach
The problems of computing and optimizing the worst-case VaR under various types of partial information on the distribution, including uncertainty in factor models, support constraints, and relative entropy information, can be cast as semidefinite programs.
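In the simplest setting of this line of work, where only the mean \(\mu\) and covariance \(\Sigma\) of the returns are trusted, the worst-case Value-at-Risk at level \(\epsilon\) admits a Chebyshev-type closed form, \(-\mu^{\top} w + \sqrt{(1-\epsilon)/\epsilon}\,\sqrt{w^{\top} \Sigma w}\), and minimizing it over portfolio weights is a small second-order cone program. A hedged sketch with made-up data (the factor-model, support, and relative-entropy extensions are not shown):

import cvxpy as cp
import numpy as np

rng = np.random.default_rng(1)
n = 5
mu = rng.normal(0.05, 0.02, n)                  # made-up expected returns
A = rng.normal(size=(n, n))
Sigma = A @ A.T / n + 1e-3 * np.eye(n)          # made-up covariance matrix
eps = 0.05
kappa = np.sqrt((1 - eps) / eps)                # Chebyshev-type risk multiplier
L = np.linalg.cholesky(Sigma)                   # Sigma = L @ L.T, so w'Sigma w = ||L.T @ w||^2

w = cp.Variable(n)
wc_var = -mu @ w + kappa * cp.norm(L.T @ w, 2)  # worst-case VaR of portfolio w
problem = cp.Problem(cp.Minimize(wc_var), [cp.sum(w) == 1, w >= 0])
problem.solve()
print(np.round(w.value, 3), round(problem.value, 4))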
Robustifying Convex Risk Measures for Linear Portfolios: A Nonparametric Approach
D. Wozabal. Operations Research, 2014.
A framework for robustifying convex, law-invariant risk measures is introduced, and it is shown that under mild conditions the infinite-dimensional optimization problem of finding the worst-case risk can be solved analytically and closed-form expressions for the robust risk measures are obtained.
Tractable Robust Expected Utility and Risk Models for Portfolio Optimization
Expected utility models in portfolio optimization are based on the assumption of complete knowledge of the distribution of random returns. In this paper, we relax this assumption to the knowledge of…
Models for Minimax Stochastic Linear Optimization Problems with Risk Aversion
The minimax solutions hedge against the worst possible distributions and provide a natural distribution with which to stress test stochastic optimization problems under distributional ambiguity; their performance is close to that of data-driven solutions under the multivariate normal distribution and better under extremal distributions.