• Corpus ID: 208636940

# Ordinal Bayesian Optimisation

@article{Picheny2019OrdinalBO,
title={Ordinal Bayesian Optimisation},
author={Victor Picheny and Sattar Vakili and Artem Artemev},
journal={ArXiv},
year={2019},
volume={abs/1912.02493}
}
• Published 5 December 2019
• Computer Science, Mathematics
• ArXiv
Bayesian optimisation is a powerful tool to solve expensive black-box problems, but fails when the stationary assumption made on the objective function is strongly violated, which is the case in particular for ill-conditioned or discontinuous objectives. We tackle this problem by proposing a new Bayesian optimisation framework that only considers the ordering of variables, both in the input and output spaces, to fit a Gaussian process in a latent space. By doing so, our approach is agnostic to…
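The abstract's core idea, mapping observations to their ordering rather than their raw values, can be illustrated with a rank transform. This is only a minimal sketch of the ordinal principle, not the paper's actual latent-GP model; the function name and the toy data are illustrative:

```python
import numpy as np

def rank_transform(y):
    """Map observations to normalised ranks in (0, 1),
    discarding magnitudes but preserving the ordering."""
    ranks = np.argsort(np.argsort(y))  # 0-based rank of each entry
    return (ranks + 1) / (len(y) + 1)

# Ill-conditioned raw outputs spanning many orders of magnitude
y = np.array([1e6, -3.0, 0.5, 2.0])
r = rank_transform(y)
# r keeps the ordering of y but lives on a well-behaved scale,
# so a stationary GP fitted to r is insensitive to output scaling
assert np.all(np.argsort(r) == np.argsort(y))
```

Because only the ordering survives the transform, any strictly monotone distortion of the objective (e.g. exponentiation) leaves the transformed data unchanged, which is the sense in which the approach is scale-agnostic.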
3 Citations
Optimal Order Simple Regret for Gaussian Process Bandits
• Computer Science, Mathematics
ArXiv
• 2021
This work proves an $\tilde{O}(\sqrt{\gamma_N/N})$ bound on the simple regret performance of a pure exploration algorithm that is significantly tighter than the existing bounds and is order optimal up to logarithmic factors for the cases where a lower bound on regret is known.
A Domain-Shrinking based Bayesian Optimization Algorithm with Order-Optimal Regret Performance
This is the first GP-based algorithm with an order-optimal regret guarantee and reduces computational complexity by a factor of $O(T^{2d-1})$ (where $T$ is the time horizon and $d$ the dimension of the function domain).
On Information Gain and Regret Bounds in Gaussian Process Bandits
• Mathematics, Computer Science
AISTATS
• 2021
General bounds on $\gamma_T$ are provided based on the decay rate of the eigenvalues of the GP kernel; their specialisation to commonly used kernels improves the existing bounds on $\gamma_T$ and, consequently, the regret bounds that rely on $\gamma_T$ under numerous settings.

## References

SHOWING 1-10 OF 23 REFERENCES
Preferential Bayesian Optimization
• Computer Science, Mathematics
ICML
• 2017
Preferential Bayesian Optimization (PBO) is presented, which allows us to find the optimum of a latent function that can only be queried through pairwise comparisons, the so-called duels; the way correlations are modelled in PBO is key in obtaining this advantage.
Gaussian Process Optimization in the Bandit Setting: No Regret and Experimental Design
• Computer Science, Mathematics
ICML
• 2010
This work analyzes GP-UCB, an intuitive upper-confidence-based algorithm, and bounds its cumulative regret in terms of maximal information gain, establishing a novel connection between GP optimization and experimental design and obtaining explicit sublinear regret bounds for many commonly used covariance functions.
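The GP-UCB rule described above picks the point maximising posterior mean plus a scaled posterior standard deviation. A minimal 1-D sketch, with an RBF kernel, a fixed grid of candidates, and a hand-picked `beta` (all illustrative choices, not from the paper):

```python
import numpy as np

def rbf(a, b, ls=0.3):
    """Squared-exponential kernel between two 1-D point sets."""
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / ls) ** 2)

def gp_ucb_next(X, y, grid, beta=4.0, noise=1e-4):
    """One GP-UCB step: argmax over the grid of mean + sqrt(beta) * std."""
    K = rbf(X, X) + noise * np.eye(len(X))
    Ks = rbf(grid, X)
    mu = Ks @ np.linalg.solve(K, y)
    # posterior variance: prior variance (1.0) minus explained part
    var = 1.0 - np.einsum('ij,ji->i', Ks, np.linalg.solve(K, Ks.T))
    ucb = mu + np.sqrt(beta) * np.sqrt(np.maximum(var, 0.0))
    return grid[np.argmax(ucb)]

X = np.array([0.1, 0.5, 0.9])       # points evaluated so far
y = np.sin(6 * X)                   # toy objective values
grid = np.linspace(0.0, 1.0, 101)
x_next = gp_ucb_next(X, y, grid)    # next query location
```

The `beta` term controls the exploration/exploitation trade-off; the cited analysis chooses it as a growing sequence to obtain the sublinear regret bounds.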
On Kernelized Multi-armed Bandits
• Computer Science, Mathematics
ICML
• 2017
This work provides two new Gaussian process-based algorithms for continuous bandit optimization, Improved GP-UCB and GP-Thompson sampling (GP-TS), derives corresponding regret bounds, and derives a new self-normalized concentration inequality for vector-valued martingales of arbitrary, possibly infinite, dimension.
Input Warping for Bayesian Optimization of Non-Stationary Functions
• Mathematics, Computer Science
ICML
• 2014
On a set of challenging benchmark optimization tasks, it is observed that the inclusion of warping greatly improves on the state-of-the-art, producing better results faster and more reliably.
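Input warping composes the GP with a learned monotone map of each input dimension so a stationary kernel can model non-stationary behaviour. The cited paper uses the Beta CDF; the sketch below substitutes the closed-form Kumaraswamy CDF as a stand-in (the substitution and the parameter values are my assumptions):

```python
def kumaraswamy_cdf(x, a, b):
    """Monotone warping on [0, 1]; a closed-form stand-in for the
    Beta-CDF warping of input-warped BO (a, b would be learned per dim)."""
    return 1.0 - (1.0 - x ** a) ** b

# a, b > 1 compresses the boundaries and stretches the interior,
# so the warped GP effectively has a shorter lengthscale near 0.5
xs = [0.0, 0.25, 0.5, 0.75, 1.0]
warped = [kumaraswamy_cdf(x, 2.0, 2.0) for x in xs]
```

Because the warp is monotone and differentiable, its parameters can be treated as extra kernel hyperparameters and fitted by marginal-likelihood maximisation alongside the usual lengthscales.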
Gaussian Process bandits with adaptive discretization
• Mathematics, Computer Science
ArXiv
• 2017
In this paper, the problem of maximizing a black-box function $f:\mathcal{X} \to \mathbb{R}$ is studied in the Bayesian framework with a Gaussian Process (GP) prior, and high probability bounds on its simple and cumulative regret are established.
Stochastic variational inference
• Computer Science, Mathematics
J. Mach. Learn. Res.
• 2013
Stochastic variational inference lets us apply complex Bayesian models to massive data sets, and it is shown that the Bayesian nonparametric topic model outperforms its parametric counterpart.
Adam: A Method for Stochastic Optimization
• Computer Science, Mathematics
ICLR
• 2015
This work introduces Adam, an algorithm for first-order gradient-based optimization of stochastic objective functions, based on adaptive estimates of lower-order moments, and provides a regret bound on the convergence rate that is comparable to the best known results under the online convex optimization framework.
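The "adaptive estimates of lower-order moments" in the Adam summary are the exponential moving averages of the gradient and its square, bias-corrected and used to scale the step. A minimal scalar sketch of the published update rule (the toy objective and learning rate are illustrative):

```python
import math

def adam_step(param, grad, m, v, t, lr=1e-3, b1=0.9, b2=0.999, eps=1e-8):
    """One Adam update for a scalar parameter."""
    m = b1 * m + (1 - b1) * grad            # first-moment estimate
    v = b2 * v + (1 - b2) * grad ** 2       # second-moment estimate
    m_hat = m / (1 - b1 ** t)               # bias correction
    v_hat = v / (1 - b2 ** t)
    param -= lr * m_hat / (math.sqrt(v_hat) + eps)
    return param, m, v

# minimise f(x) = x^2 starting from x = 5
x, m, v = 5.0, 0.0, 0.0
for t in range(1, 2001):
    x, m, v = adam_step(x, 2 * x, m, v, t, lr=0.1)
```

Dividing by the square root of the second moment makes the step size roughly invariant to the gradient's scale, which is what makes Adam well suited to noisy stochastic objectives.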
Variational Learning of Inducing Variables in Sparse Gaussian Processes
• M. Titsias
• Mathematics, Computer Science
AISTATS
• 2009
A variational formulation for sparse approximations is presented that jointly infers the inducing inputs and the kernel hyperparameters by maximizing a lower bound of the true log marginal likelihood.
Efficient Global Optimization of Expensive Black-Box Functions
• Mathematics, Computer Science
J. Glob. Optim.
• 1998
This paper introduces the reader to a response surface methodology that is especially good at modeling the nonlinear, multimodal functions that often occur in engineering and shows how these approximating functions can be used to construct an efficient global optimization algorithm with a credible stopping rule.
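The EGO algorithm described above selects points by maximising expected improvement over the incumbent best value under the surrogate's Gaussian posterior. A sketch of the standard closed-form EI for minimisation (a textbook formula, not copied from the paper; the example numbers are illustrative):

```python
import math

def expected_improvement(mu, sigma, best):
    """Closed-form EI for minimisation: E[max(best - Y, 0)]
    with Y ~ N(mu, sigma^2)."""
    if sigma <= 0.0:
        return max(best - mu, 0.0)
    z = (best - mu) / sigma
    pdf = math.exp(-0.5 * z * z) / math.sqrt(2 * math.pi)
    cdf = 0.5 * (1.0 + math.erf(z / math.sqrt(2)))
    return (best - mu) * cdf + sigma * pdf

# a candidate whose predicted mean beats the incumbent has positive EI
ei = expected_improvement(mu=0.2, sigma=0.1, best=0.5)
```

EI balances exploitation (low predicted mean) against exploration (high predictive variance), and its decay toward zero across the domain is what underpins EGO's credible stopping rule.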
Gaussian Processes for Big Data
• Mathematics, Computer Science
UAI
• 2013
Stochastic variational inference for Gaussian process models is introduced and it is shown how GPs can be variationally decomposed to depend on a set of globally relevant inducing variables which factorize the model in the necessary manner to perform Variational inference.