Data-Pooling in Stochastic Optimization

@article{Gupta2019DataPoolingIS,
  title={Data-Pooling in Stochastic Optimization},
  author={Vishal Gupta and Nathan Kallus},
  journal={ERN: Statistical Decision Theory; Operations Research (Topic)},
  year={2019}
}
  • Mathematics, Computer Science
Managing large-scale systems often involves simultaneously solving thousands of unrelated stochastic optimization problems, each with limited data. Intuition suggests one can decouple these unrelated problems and solve them separately without loss of generality. We propose a novel data-pooling algorithm called Shrunken-SAA that disproves this intuition. In particular, we prove that combining data across problems can outperform decoupling, even when there is no a priori structure linking the…
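To make the data-pooling idea concrete, here is a minimal Python sketch in the spirit of Shrunken-SAA for a bank of small newsvendor problems. This is an illustration under stated assumptions, not the paper's exact method: all names are hypothetical, and the shrinkage level is chosen by a simple leave-one-out criterion as a stand-in for the paper's performance estimator.

```python
import numpy as np

def newsvendor_cost(q, demand, cu=1.0, co=1.0):
    """Average underage/overage cost of ordering q against a demand sample."""
    return np.mean(cu * np.maximum(demand - q, 0) + co * np.maximum(q - demand, 0))

def shrunken_saa(demands, alphas, cu=1.0, co=1.0):
    """Choose one shrinkage level alpha shared by all K problems (sketch).

    demands: list of K 1-D arrays, one small demand sample per problem.
    Each problem solves SAA on the mixture (1 - alpha) * own sample
    + alpha * pooled sample; alpha is picked by aggregate leave-one-out cost.
    """
    pooled = np.concatenate(demands)
    crit = cu / (cu + co)  # critical quantile of the newsvendor

    def decision(sample, alpha):
        # Weighted empirical quantile of the shrunken mixture distribution.
        data = np.concatenate([sample, pooled])
        w = np.concatenate([np.full(len(sample), (1 - alpha) / max(len(sample), 1)),
                            np.full(len(pooled), alpha / len(pooled))])
        order = np.argsort(data)
        cdf = np.cumsum(w[order])
        return data[order][np.searchsorted(cdf, min(crit, cdf[-1]))]

    def loo_cost(alpha):
        total = 0.0
        for sample in demands:
            for i in range(len(sample)):  # hold out one observation
                q = decision(np.delete(sample, i), alpha)
                total += newsvendor_cost(q, sample[i:i + 1], cu, co)
        return total

    best = min(alphas, key=loo_cost)
    return best, [decision(s, best) for s in demands]
```

For example, with rng = np.random.default_rng(), shrunken_saa([rng.poisson(20, 5) for _ in range(1000)], alphas=np.linspace(0, 1, 11)) pools one thousand five-observation problems through a single shared anchor distribution.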
Citations

Meta Dynamic Pricing: Transfer Learning Across Experiments
TLDR
A meta dynamic pricing algorithm is proposed that learns an unknown prior online while solving a sequence of Thompson sampling pricing experiments for N different products; it is shown to significantly speed up learning compared to prior-independent algorithms.
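As a rough illustration only (a toy model, not the paper's algorithm), the sketch below runs a sequence of Gaussian pricing experiments with Thompson sampling, re-estimating the shared prior mean from finished experiments before each new product; the linear demand curve and every name here are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def run_experiment(theta, prior_mean, prior_var, noise_var=1.0, T=200):
    """One Thompson-sampling pricing experiment under a toy Gaussian model.

    Assumed demand curve: d(p) = theta - p + noise, revenue p * d(p), so the
    optimal price is theta / 2. Returns the posterior mean of theta.
    """
    prec, mean = 1.0 / prior_var, prior_mean
    for _ in range(T):
        draw = rng.normal(mean, prec ** -0.5)       # posterior sample of theta
        price = draw / 2.0                          # price greedily for the draw
        d = theta - price + rng.normal(0.0, noise_var ** 0.5)  # observed demand
        obs = d + price                             # implied noisy signal of theta
        prec += 1.0 / noise_var                     # conjugate Gaussian update
        mean += (obs - mean) / (noise_var * prec)
    return mean

def meta_pricing(thetas, init_mean=0.0, init_var=25.0):
    """Run N experiments in sequence, re-estimating the shared prior mean
    from the posterior means of finished experiments (a moment-style
    stand-in for the paper's prior-learning step)."""
    prior_mean, estimates = init_mean, []
    for theta in thetas:
        estimates.append(run_experiment(theta, prior_mean, init_var))
        prior_mean = float(np.mean(estimates))      # learned prior mean
    return prior_mean
```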
On the Impossibility of Statistically Improving Empirical Optimization: A Second-Order Stochastic Dominance Perspective
When the underlying probability distribution in a stochastic optimization problem is observed only through data, various data-driven formulations have been studied to obtain approximate optimal solutions. We…
How Big Should Your Data Really Be? Data-Driven Newsvendor and the Transient of Learning
We study the classical newsvendor problem, in which the decision-maker must trade off underage and overage costs. In contrast to the typical setting, we assume that the decision-maker does not know…
Integrated Conditional Estimation-Optimization
  • Paul Grigas, Meng Qi, Zuo-Jun Shen
  • Mathematics, Computer Science
  • 2021
Many real-world optimization problems involve uncertain parameters with probability distributions that can be estimated using contextual feature information. In contrast to the standard approach of…

References

SHOWING 1-10 OF 35 REFERENCES
Data-driven distributionally robust optimization using the Wasserstein metric: performance guarantees and tractable reformulations
TLDR
It is demonstrated that distributionally robust optimization problems over Wasserstein balls can in fact be reformulated as finite convex programs, in many interesting cases even as tractable linear programs.
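For orientation, the strong-duality reformulation behind this result can be written as follows (1-Wasserstein ball of radius ε around the empirical distribution; notation adapted, see the paper for the precise conditions):

```latex
% Worst-case expected loss over the 1-Wasserstein ball of radius \varepsilon
% around the empirical distribution \hat{P}_N (strong duality).
\sup_{Q \,:\, W_1(Q,\hat{P}_N) \le \varepsilon} \mathbb{E}_{Q}\bigl[\ell(x,\xi)\bigr]
  \;=\; \inf_{\lambda \ge 0} \Bigl\{ \lambda \varepsilon
    + \frac{1}{N} \sum_{i=1}^{N} \sup_{\xi \in \Xi}
      \bigl( \ell(x,\xi) - \lambda \lVert \xi - \xi_i \rVert \bigr) \Bigr\}
```

When ℓ(x, ·) is piecewise affine and the support Ξ is polyhedral, each inner supremum is itself a linear program, which is where the tractable reformulations come from.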
Small-Data, Large-Scale Linear Optimization with Uncertain Objectives
Optimization applications often depend upon a huge number of uncertain parameters. In many contexts, however, the amount of relevant data per parameter is small, and hence, we may have only imprecise…
Robust sample average approximation
TLDR
This paper proposes a modification of SAA, which retains SAA’s tractability and asymptotic properties and, additionally, enjoys strong finite-sample performance guarantees, and presents examples from inventory management and portfolio allocation, demonstrating numerically that this approach outperforms other data-driven approaches in these applications.
Learnability, Stability and Uniform Convergence
TLDR
This paper considers the General Learning Setting (introduced by Vapnik), which includes most statistical learning problems as special cases, and identifies stability as the key necessary and sufficient condition for learnability.
The Data-Driven Newsvendor Problem: New Bounds and Insights
TLDR
This paper analyzes the sample average approximation (SAA) approach for the data-driven newsvendor problem and obtains a new analytical bound on the probability that the relative regret of the SAA solution exceeds a threshold.
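In this setting the SAA solution has a closed form, an empirical critical quantile, which makes the regret behavior easy to reproduce. A minimal sketch with an illustrative toy demand law:

```python
import numpy as np

def saa_newsvendor(demand_sample, cu, co):
    """SAA solution of the newsvendor: an empirical cu/(cu+co) quantile.

    Minimizing the sample-average cost (1/N) sum_i [cu*(d_i - q)^+ +
    co*(q - d_i)^+] over q is solved by a critical quantile of the sample.
    """
    # method="inverted_cdf" (numpy >= 1.22) returns an actual sample point.
    return np.quantile(demand_sample, cu / (cu + co), method="inverted_cdf")

# Toy check of how the relative regret of SAA shrinks with the sample size.
rng = np.random.default_rng(1)
cu, co = 2.0, 1.0
big = rng.gamma(4.0, 5.0, size=100_000)        # stand-in for the true demand law

def cost(q, d):
    return np.mean(cu * np.maximum(d - q, 0) + co * np.maximum(q - d, 0))

q_star = np.quantile(big, cu / (cu + co))      # near-optimal order quantity
for n in (10, 100, 1000):
    q_hat = saa_newsvendor(rng.gamma(4.0, 5.0, size=n), cu, co)
    print(n, cost(q_hat, big) / cost(q_star, big) - 1.0)   # relative regret
```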
From estimation to optimization via shrinkage
TLDR
A class of quadratic stochastic programs in which the distribution of random variables has unknown parameters is studied, and it is shown that an estimator that shrinks the MLE towards an arbitrary vector achieves uniformly lower risk than the MLE.
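The canonical instance of this phenomenon is the James-Stein estimator. A minimal sketch of shrinkage toward an arbitrary fixed vector (positive-part variant; names are illustrative):

```python
import numpy as np

def james_stein(x, target, sigma2=1.0):
    """James-Stein shrinkage of the MLE x toward an arbitrary fixed vector.

    Model: x ~ N(theta, sigma2 * I) in n >= 3 dimensions. For any fixed
    target, this estimator has uniformly lower total squared-error risk
    than the MLE x itself; the positive-part clip improves it further.
    """
    x = np.asarray(x, dtype=float)
    t = np.asarray(target, dtype=float)
    resid = x - t
    shrink = 1.0 - (x.size - 2) * sigma2 / np.dot(resid, resid)
    return t + max(shrink, 0.0) * resid
```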
A Geometrical Explanation of Stein Shrinkage
Shrinkage estimation has become a basic tool in the analysis of high-dimensional data. Historically and conceptually, a key development toward this was the discovery of the inadmissibility of the…
INADMISSIBILITY OF THE USUAL ESTIMATOR FOR THE MEAN OF A MULTIVARIATE NORMAL DISTRIBUTION
If one observes the real random variables X_1, …, X_n independently normally distributed with unknown means ξ_1, …, ξ_n and variance 1, it is customary to estimate ξ_i by X_i. If the loss is the sum of…
The 1988 Neyman Memorial Lecture: A Galtonian Perspective on Shrinkage Estimators
More than 30 years ago, Charles Stein discovered that in three or more dimensions, the ordinary estimator of the vector of means of a multivariate normal distribution is inadmissible. This article…
Size Matters: Optimal Calibration of Shrinkage Estimators for Portfolio Selection
We carry out a comprehensive investigation of shrinkage estimators for asset allocation, and we find that size matters: the shrinkage intensity plays a significant role in the performance of the…