Gaussian Process Optimization in the Bandit Setting: No Regret and Experimental Design

@inproceedings{Srinivas2010GaussianPO,
  title={Gaussian Process Optimization in the Bandit Setting: No Regret and Experimental Design},
  author={Niranjan Srinivas and Andreas Krause and Sham M. Kakade and Matthias W. Seeger},
  booktitle={ICML},
  year={2010}
}
Many applications require optimizing an unknown, noisy function that is expensive to evaluate. [...] We analyze GP-UCB, an intuitive upper-confidence-based algorithm, and bound its cumulative regret in terms of maximal information gain, establishing a novel connection between GP optimization and experimental design. Moreover, by bounding the latter in terms of operator spectra, we obtain explicit sublinear regret bounds for many commonly used covariance functions. In some important cases, our bounds [...]
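The GP-UCB rule the abstract describes can be sketched as follows. This is a minimal illustrative implementation, not the authors' code: it assumes a squared-exponential kernel, a discrete 1-D candidate grid, and the finite-domain schedule beta_t = 2 log(|D| t^2 pi^2 / (6 delta)) stated in the paper; the test function, lengthscale, and noise level are made-up hyperparameters for demonstration.

```python
import numpy as np

def rbf(a, b, lengthscale=0.2):
    # Squared-exponential kernel matrix between 1-D point sets a and b.
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / lengthscale) ** 2)

def gp_posterior(X, y, Xs, noise=1e-2):
    # Standard GP regression posterior mean/variance at candidates Xs.
    K = rbf(X, X) + noise * np.eye(len(X))
    Ks = rbf(Xs, X)
    mu = Ks @ np.linalg.solve(K, y)
    v = np.linalg.solve(K, Ks.T)
    var = 1.0 - np.einsum('ij,ji->i', Ks, v)  # diag of Ks K^-1 Ks^T subtracted
    return mu, np.maximum(var, 1e-12)

def gp_ucb(f, grid, T=30, delta=0.1, noise=1e-2, seed=0):
    rng = np.random.default_rng(seed)
    X = [grid[len(grid) // 2]]
    y = [f(X[0]) + noise * rng.standard_normal()]
    for t in range(2, T + 1):
        # beta_t from the finite-domain regret theorem.
        beta = 2 * np.log(len(grid) * t**2 * np.pi**2 / (6 * delta))
        mu, var = gp_posterior(np.array(X), np.array(y), grid, noise)
        # Pick the point maximizing the upper confidence bound.
        x = grid[np.argmax(mu + np.sqrt(beta * var))]
        X.append(x)
        y.append(f(x) + noise * rng.standard_normal())
    return np.array(X), np.array(y)

# Illustrative objective with a global maximum near x = -0.36.
f = lambda x: -np.sin(3 * x) - x**2 + 0.7 * x
grid = np.linspace(-1.0, 2.0, 200)
X, y = gp_ucb(f, grid)
```

Because beta_t grows only logarithmically while posterior variance shrinks at sampled points, the rule explores until confidence intervals tighten and then concentrates near the maximizer, which is what drives the sublinear-regret analysis.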
Citations

On Lower Bounds for Standard and Robust Gaussian Process Bandit Optimization
Regret Bounds for Gaussian-Process Optimization in Large Domains
Gaussian Process Optimization with Adaptive Sketching: Scalable and No Regret
Parallel Gaussian Process Optimization with Upper Confidence Bound and Pure Exploration
On Kernelized Multi-armed Bandits
Regret Bounds for Noise-Free Bayesian Optimization
Lenient Regret and Good-Action Identification in Gaussian Process Bandits
No-Regret Algorithms for Time-Varying Bayesian Optimization
  • Xingyu Zhou, N. Shroff · 2021 55th Annual Conference on Information Sciences and Systems (CISS), 2021
Weighted Gaussian Process Bandits for Non-stationary Environments
Lower Bounds on Regret for Noisy Gaussian Process Bandit Optimization

References

Showing 1-10 of 37 references
Regret Bounds for Gaussian Process Bandit Problems
Stochastic Linear Optimization under Bandit Feedback
Online Optimization in X-Armed Bandits
Linearly Parameterized Bandits
The Price of Bandit Information for Online Optimization
Near-optimal Nonmyopic Value of Information in Graphical Models
Using Confidence Bounds for Exploitation-Exploration Trade-offs
  • P. Auer · J. Mach. Learn. Res., 2002
Multi-armed bandits in metric spaces
Finite-time Analysis of the Multiarmed Bandit Problem
An Exact Algorithm for Maximum Entropy Sampling