Corpus ID: 235658012

Bayesian Joint Chance Constrained Optimization: Approximations and Statistical Consistency

@article{Jaiswal2021BayesianJC,
  title={Bayesian Joint Chance Constrained Optimization: Approximations and Statistical Consistency},
  author={Prateek Jaiswal and Harsha Honnappa and Vinayak A. Rao},
  journal={ArXiv},
  year={2021},
  volume={abs/2106.12199}
}
This paper considers data-driven chance-constrained stochastic optimization problems in a Bayesian framework. Bayesian posteriors afford a principled mechanism to incorporate data and prior knowledge into stochastic optimization problems. However, computing Bayesian posteriors is typically intractable, which has spawned a large literature on approximate Bayesian computation. Here, in the context of chance-constrained optimization, we focus on the question of statistical…
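
As a sketch of the setting (the paper's exact formulation may differ), a data-driven joint chance-constrained program in which the unknown distribution is replaced by the Bayesian posterior predictive reads

\[
\min_{x \in \mathcal{X}} f(x) \quad \text{s.t.} \quad \int_{\Theta} P_{\theta}\big( g_i(x,\xi) \le 0,\ i=1,\dots,m \big)\, \Pi_n(\mathrm{d}\theta) \;\ge\; 1-\alpha,
\]

where \(\Pi_n\) is the posterior over the model parameter \(\theta\) given \(n\) observations and \(\alpha\) is the prescribed risk level.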

References

Showing 1–10 of 46 references

Convex Approximations of Chance Constrained Programs

TLDR
A large deviation-type approximation, referred to as the “Bernstein approximation,” of the chance constrained problem is built that is convex and efficiently solvable. The approach is extended to ambiguous chance constrained problems, where the random perturbations are independent and their distributions are known only to belong to a given convex compact set.
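
As a sketch of the core idea (notation adapted), the Bernstein approximation replaces the chance constraint \(P\{F(x,\xi) > 0\} \le \varepsilon\) with the convex, conservative condition

\[
\inf_{t > 0} \Big[\, t \log \mathbb{E}\big[ e^{F(x,\xi)/t} \big] + t \log(1/\varepsilon) \,\Big] \;\le\; 0,
\]

which follows from the Chernoff bound \(P\{F > 0\} \le \mathbb{E}[e^{F/t}]\), valid for every \(t > 0\).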

Risk-Sensitive Variational Bayes: Formulations and Bounds

TLDR
The key methodological innovation in this paper is to leverage a dual representation of the risk measure to introduce an optimization-based framework for approximately computing the posterior risk-sensitive objective, as opposed to using standard sampling-based methods such as Markov chain Monte Carlo.
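
For the entropic risk measure, for instance, the dual representation in question is the Donsker–Varadhan formula (a sketch; the framework may cover more general risk measures):

\[
\frac{1}{\gamma} \log \mathbb{E}_{\theta \sim \Pi}\big[ e^{\gamma \ell(\theta)} \big] \;=\; \sup_{Q \ll \Pi} \Big\{ \mathbb{E}_{\theta \sim Q}\big[ \ell(\theta) \big] - \tfrac{1}{\gamma} \mathrm{KL}\big( Q \,\|\, \Pi \big) \Big\}.
\]

Restricting the supremum to a tractable family of distributions turns the posterior risk computation into a variational optimization problem.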

A Bayesian Risk Approach to Data-driven Stochastic Optimization: Formulations and Asymptotics

TLDR
This work proposes a Bayesian risk optimization (BRO) framework for parametric underlying distributions, which optimizes a risk functional applied to the posterior distribution of the unknown distribution parameter.
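
The general shape of such a formulation is (a sketch; see the paper for the precise statement)

\[
\min_{x \in \mathcal{X}} \; \rho_{\theta \sim \Pi_n}\Big( \mathbb{E}_{\xi \sim P_{\theta}}\big[ h(x,\xi) \big] \Big),
\]

where \(h\) is the cost, \(\Pi_n\) the posterior over the distribution parameter \(\theta\), and \(\rho\) a risk functional such as the expectation, VaR, or CVaR.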

A Sample Approximation Approach for Optimization with Probabilistic Constraints

TLDR
This work studies approximations of optimization problems with probabilistic constraints in which the original distribution of the underlying random vector is replaced with an empirical distribution obtained from a random sample, yielding a lower bound on the true optimal value.
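
Concretely, given i.i.d. samples \(\xi^1,\dots,\xi^N\), the sample approximation of a chance constraint takes the form (a sketch, with \(\gamma\) an adjusted risk level)

\[
\min_{x \in \mathcal{X}} f(x) \quad \text{s.t.} \quad \frac{1}{N} \sum_{i=1}^{N} \mathbf{1}\big\{ G(x,\xi^i) \le 0 \big\} \;\ge\; 1-\gamma;
\]

choosing \(\gamma\) above or below the nominal risk level trades off lower bounds on the optimal value against feasibility guarantees.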

Computational study of a chance constrained portfolio selection problem

TLDR
The scenario approximation method gives a conservative approximation to the original problem; also discussed is a method to approximate a sum of lognormals that makes it possible to find a closed-form expression for the chance constraint and to compute an efficient frontier in the lognormal case.
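
A generic chance-constrained portfolio model of this type (details in the paper may differ) maximizes a guaranteed return level at a prescribed confidence:

\[
\max_{x \ge 0,\; t} \; t \quad \text{s.t.} \quad P\big( \xi^{\top} x \ge t \big) \ge 1-\alpha, \qquad \textstyle\sum_{j} x_j = 1,
\]

where \(\xi\) is the random vector of asset returns; sweeping \(\alpha\) traces out an efficient frontier.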

A Bayesian Risk Approach to MDPs with Parameter Uncertainty

TLDR
A Bayesian risk approach to MDPs with parameter uncertainty is proposed, in which a risk functional is applied in nested form to the expected discounted total cost, with respect to the Bayesian posterior distributions of the unknown parameters at each time stage.
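
Loosely, the nested application of a risk functional over stagewise posteriors yields a Bellman-type recursion of the form (a rough sketch, not the paper's exact statement)

\[
V_t(s_t, \Pi_t) \;=\; \min_{a_t} \; \rho_{\theta \sim \Pi_t}\Big( \mathbb{E}_{\theta}\big[ c(s_t, a_t) + V_{t+1}(s_{t+1}, \Pi_{t+1}) \big] \Big),
\]

where \(\Pi_t\) is the posterior over the unknown parameters given the data observed through stage \(t\).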

Variational Inference: A Review for Statisticians

TLDR
Variational inference (VI), a method from machine learning that approximates probability densities through optimization, is reviewed and a variant that uses stochastic optimization to scale up to massive data is derived.
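
In its standard form, VI recasts posterior inference as optimization over a chosen family \(\mathcal{Q}\):

\[
q^{*} \;=\; \arg\min_{q \in \mathcal{Q}} \mathrm{KL}\big( q(z) \,\|\, p(z \mid x) \big) \;=\; \arg\max_{q \in \mathcal{Q}} \Big\{ \mathbb{E}_{q}\big[ \log p(x,z) \big] - \mathbb{E}_{q}\big[ \log q(z) \big] \Big\},
\]

where the maximized quantity is the evidence lower bound (ELBO).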

Solving Bayesian risk optimization via nested stochastic gradient estimation

TLDR
Nested stochastic gradient estimators are derived for Bayesian risk optimization; they are shown to be asymptotically unbiased and consistent, and the resulting algorithms are shown to converge asymptotically.
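
When the outer risk functional is simply an expectation, the nesting collapses and a double Monte Carlo average gives an unbiased gradient estimator (a sketch, assuming differentiability and a valid interchange of gradient and expectation); VaR and CVaR objectives require the genuinely nested estimators developed in the paper:

\[
\widehat{\nabla}_x \;=\; \frac{1}{M} \sum_{m=1}^{M} \frac{1}{K} \sum_{k=1}^{K} \nabla_x h\big( x, \xi^{m,k} \big), \qquad \theta^{m} \sim \Pi_n, \quad \xi^{m,k} \sim P_{\theta^{m}}.
\]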

Data-Driven Chance Constrained Optimization under Wasserstein Ambiguity Sets

TLDR
This work reformulates distributionally robust chance-constrained programs (DRCCPs) under data-driven Wasserstein ambiguity sets and a general class of constraint functions, presents a convex reformulation of the program, and shows its tractability when the constraint function is affine in both the decision variable and the uncertainty.
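
The distributionally robust chance constraint in question requires the constraint to hold at the prescribed confidence level under every distribution in a Wasserstein ball around the empirical distribution (a sketch of the standard form):

\[
\inf_{Q \in \mathcal{B}_{\varepsilon}(\widehat{P}_N)} Q\big( g(x,\xi) \le 0 \big) \;\ge\; 1-\alpha, \qquad \mathcal{B}_{\varepsilon}(\widehat{P}_N) := \big\{ Q : W\big( Q, \widehat{P}_N \big) \le \varepsilon \big\},
\]

where \(W\) is a Wasserstein distance and \(\widehat{P}_N\) the empirical distribution of \(N\) samples.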

Frequentist Consistency of Variational Bayes

Yixin Wang and David M. Blei, Journal of the American Statistical Association, 2018
TLDR
It is proved that the VB posterior converges to the Kullback–Leibler (KL) minimizer of a normal distribution centered at the truth, and that the corresponding variational expectation of the parameter is consistent and asymptotically normal.
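
Loosely, the limit statement has the shape (a rough paraphrase, not the theorem's exact form)

\[
\Big\| \, q_n^{*} \;-\; \arg\min_{q \in \mathcal{Q}} \mathrm{KL}\big( q \,\|\, \mathcal{N}(\theta_0, \Sigma_n) \big) \, \Big\|_{\mathrm{TV}} \;\longrightarrow\; 0,
\]

where \(\mathcal{N}(\theta_0, \Sigma_n)\) is the Bernstein–von Mises limiting normal centered at the true parameter \(\theta_0\).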