Corpus ID: 208636940

Ordinal Bayesian Optimisation

@article{Picheny2019OrdinalBO,
  title={Ordinal Bayesian Optimisation},
  author={Victor Picheny and Sattar Vakili and Artem Artemev},
  journal={ArXiv},
  year={2019},
  volume={abs/1912.02493}
}
Bayesian optimisation is a powerful tool to solve expensive black-box problems, but fails when the stationarity assumption made on the objective function is strongly violated, which is the case in particular for ill-conditioned or discontinuous objectives. We tackle this problem by proposing a new Bayesian optimisation framework that only considers the ordering of variables, both in the input and output spaces, to fit a Gaussian process in a latent space. By doing so, our approach is agnostic to… 
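
The rank-based idea can be illustrated with a minimal sketch, assuming a simple rank transform of the outputs before fitting a standard GP; the paper's actual latent-space model, which uses orderings in both the input and output spaces, is more involved and is not reproduced here.

```python
# Minimal sketch (not the paper's method): fit a GP to the ranks of the
# observed outputs instead of their raw values, which makes the surrogate
# invariant to monotone, possibly ill-conditioned, transformations of f.
import numpy as np
from scipy.stats import rankdata
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, size=(30, 1))
y = np.exp(20.0 * (X[:, 0] - 0.5) ** 2)        # badly scaled objective

y_rank = rankdata(y) / len(y)                  # keep only ordinal information
gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
gp.fit(X, y_rank)                              # stationary GP on the ranks
mean, std = gp.predict(np.linspace(0, 1, 100)[:, None], return_std=True)
```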

Citations

Optimal Order Simple Regret for Gaussian Process Bandits
TLDR
This work proves an $\tilde{O}(\sqrt{\gamma_N/N})$ bound on the simple regret of a pure-exploration algorithm, which is significantly tighter than the existing bounds and order-optimal up to logarithmic factors in the cases where a lower bound on regret is known.
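
As a point of reference, here is how such a bound specialises under a standard information-gain estimate; the squared-exponential rate below is the classical one and is quoted as an illustration rather than from the cited paper.

```latex
% Simple regret after N queries, with \gamma_N the maximal information gain:
r_N = \tilde{O}\!\left(\sqrt{\gamma_N / N}\right).
% For the squared-exponential kernel, \gamma_N = O\big((\log N)^{d+1}\big), so
r_N = \tilde{O}\!\left(\sqrt{(\log N)^{d+1} / N}\right).
```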
A Domain-Shrinking based Bayesian Optimization Algorithm with Order-Optimal Regret Performance
TLDR
The proposed method is the first GP-based algorithm with an order-optimal regret guarantee, and it reduces computational complexity by a factor of $O(T^{2d-1})$ (where $T$ is the time horizon and $d$ the dimension of the function domain).
On Information Gain and Regret Bounds in Gaussian Process Bandits
TLDR
General bounds on $\gamma_T$ are provided based on the decay rate of the eigenvalues of the GP kernel; their specialisation to commonly used kernels improves the existing bounds on $\gamma_T$ and, consequently, the regret bounds that rely on $\gamma_T$ in numerous settings.
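
As a concrete instance of such kernel-specific bounds (quoted here as the commonly cited rates, so treat them as indicative rather than verbatim from the paper):

```latex
% Maximal information gain, specialised to two common kernels.
% Matérn kernel with smoothness \nu:
\gamma_T = O\!\left(T^{\frac{d}{2\nu + d}} (\log T)^{\frac{2\nu}{2\nu + d}}\right),
% squared-exponential kernel:
\gamma_T = O\!\left((\log T)^{d+1}\right).
```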

References

SHOWING 1-10 OF 23 REFERENCES
Preferential Bayesian Optimization
TLDR
Preferential Bayesian Optimization (PBO) is presented, which allows finding the optimum of a latent function that can only be queried through pairwise comparisons, the so-called duels; the way correlations are modeled in PBO is key to obtaining this advantage.
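
A minimal sketch of the pairwise-comparison (duel) likelihood that this kind of preferential model builds on, assuming a probit link on the latent difference $f(x) - f(x')$; PBO's GP model over duels and its acquisition strategy are not reproduced here.

```python
# Hedged sketch of a probit duel likelihood: P(x beats x') = Phi(f(x) - f(x')).
import numpy as np
from scipy.stats import norm

def duel_log_likelihood(f_winner: np.ndarray, f_loser: np.ndarray) -> float:
    """Log-likelihood of observed duels given latent utilities."""
    return float(np.sum(norm.logcdf(f_winner - f_loser)))

# Example: latent utilities at three points; point 1 beat points 0 and 2.
f = np.array([0.2, 1.1, -0.4])
winners, losers = np.array([1, 1]), np.array([0, 2])
print(duel_log_likelihood(f[winners], f[losers]))
```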
Gaussian Process Optimization in the Bandit Setting: No Regret and Experimental Design
TLDR
This work analyzes GP-UCB, an intuitive upper-confidence-bound based algorithm, and bounds its cumulative regret in terms of the maximal information gain, establishing a novel connection between GP optimization and experimental design and obtaining explicit sublinear regret bounds for many commonly used covariance functions.
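
A minimal GP-UCB step over a discrete candidate set, with the exploration weight treated as a fixed constant rather than the theoretically prescribed schedule $\beta_t$ that drives the regret analysis:

```python
# Minimal GP-UCB step on a discrete candidate set (beta fixed for brevity).
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def gp_ucb_next(X_obs, y_obs, X_cand, beta=2.0):
    gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.2), alpha=1e-3)
    gp.fit(X_obs, y_obs)
    mean, std = gp.predict(X_cand, return_std=True)
    return X_cand[np.argmax(mean + np.sqrt(beta) * std)]  # maximise the UCB

X_cand = np.linspace(0.0, 1.0, 201)[:, None]
X_obs = np.array([[0.1], [0.9]])
y_obs = np.sin(6.0 * X_obs[:, 0])
print(gp_ucb_next(X_obs, y_obs, X_cand))
```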
On Kernelized Multi-armed Bandits
TLDR
This work provides two new Gaussian-process-based algorithms for continuous bandit optimization, Improved GP-UCB and GP Thompson sampling (GP-TS), derives corresponding regret bounds, and establishes a new self-normalized concentration inequality for vector-valued martingales of arbitrary, possibly infinite, dimension.
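
For comparison, GP Thompson sampling can be sketched by drawing a single posterior sample over a candidate set and querying its argmax; this is a simplification of the kernelised setting analysed in the paper.

```python
# Sketch of a GP Thompson sampling step: sample one function from the GP
# posterior on a discrete candidate set and query where the sample is largest.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def gp_ts_next(X_obs, y_obs, X_cand, seed=0):
    gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.2), alpha=1e-3)
    gp.fit(X_obs, y_obs)
    sample = gp.sample_y(X_cand, n_samples=1, random_state=seed).ravel()
    return X_cand[np.argmax(sample)]
```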
Input Warping for Bayesian Optimization of Non-Stationary Functions
TLDR
On a set of challenging benchmark optimization tasks, it is observed that the inclusion of warping greatly improves on the state-of-the-art, producing better results faster and more reliably.
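
A minimal sketch of the transform involved, assuming the warping is the coordinate-wise Kumaraswamy CDF on inputs rescaled to $[0, 1]$, with shape parameters $(a, b)$ inferred alongside the GP in the referenced approach:

```python
# Hedged sketch of input warping with the Kumaraswamy CDF,
# w(x) = 1 - (1 - x**a)**b, applied to inputs rescaled to [0, 1].
import numpy as np

def kumaraswamy_warp(x: np.ndarray, a: float, b: float) -> np.ndarray:
    """Monotone warp of inputs in [0, 1]; a, b > 0 control the shape."""
    return 1.0 - (1.0 - np.clip(x, 0.0, 1.0) ** a) ** b

x = np.linspace(0.0, 1.0, 5)
print(kumaraswamy_warp(x, a=2.0, b=0.5))  # compresses one end, stretches the other
```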
Gaussian Process bandits with adaptive discretization
TLDR
In this paper, the problem of maximizing a black-box function f:\mathcal{X} \to \mathbb{R}$ is studied in the Bayesian framework with a Gaussian Process (GP) prior, and high probability bounds on its simple and cumulative regret are established.
Stochastic variational inference
TLDR
Stochastic variational inference lets us apply complex Bayesian models to massive data sets, and it is shown that the Bayesian nonparametric topic model outperforms its parametric counterpart.
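
The core update can be written as a noisy natural-gradient step on the global variational parameters, computed from a sampled data point (or minibatch) as if it were replicated over the full data set:

```latex
% Stochastic variational inference update with step sizes \rho_t satisfying
% the Robbins-Monro conditions:
\lambda_t = (1 - \rho_t)\,\lambda_{t-1} + \rho_t\,\hat{\lambda}_t,
\qquad \sum_t \rho_t = \infty, \quad \sum_t \rho_t^2 < \infty .
```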
Adam: A Method for Stochastic Optimization
TLDR
This work introduces Adam, an algorithm for first-order gradient-based optimization of stochastic objective functions, based on adaptive estimates of lower-order moments, and provides a regret bound on the convergence rate that is comparable to the best known results under the online convex optimization framework.
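
The update the method is named for (adaptive moment estimation) is compact enough to sketch directly; the defaults below are the commonly cited values.

```python
# Sketch of a single Adam update step: exponential moving averages of the
# gradient and its elementwise square, with bias correction.
import numpy as np

def adam_step(theta, grad, m, v, t, lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
    m = beta1 * m + (1 - beta1) * grad             # first-moment estimate
    v = beta2 * v + (1 - beta2) * grad ** 2        # second-moment estimate
    m_hat = m / (1 - beta1 ** t)                   # bias-corrected moments
    v_hat = v / (1 - beta2 ** t)
    theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)
    return theta, m, v
```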
Variational Learning of Inducing Variables in Sparse Gaussian Processes (M. Titsias, AISTATS 2009)
TLDR
A variational formulation for sparse GP approximations is presented that jointly infers the inducing inputs and the kernel hyperparameters by maximizing a lower bound on the true log marginal likelihood.
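
The collapsed bound being maximised has a well-known closed form for a Gaussian likelihood with noise variance $\sigma^2$, writing $Q_{nn} = K_{nm} K_{mm}^{-1} K_{mn}$:

```latex
% Variational lower bound on the log marginal likelihood (Titsias, 2009):
\mathcal{F}_V
  = \log \mathcal{N}\!\left(\mathbf{y} \,\middle|\, \mathbf{0},\, Q_{nn} + \sigma^2 I\right)
  - \frac{1}{2\sigma^2}\,\mathrm{tr}\!\left(K_{nn} - Q_{nn}\right).
```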
Efficient Global Optimization of Expensive Black-Box Functions
TLDR
This paper introduces the reader to a response surface methodology that is especially good at modeling the nonlinear, multimodal functions that often occur in engineering and shows how these approximating functions can be used to construct an efficient global optimization algorithm with a credible stopping rule.
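
The acquisition function at the heart of this methodology is expected improvement, which under a GP posterior with mean $\mu(x)$, standard deviation $s(x)$, and incumbent best value $f_{\min}$ has the closed form:

```latex
% Expected improvement (minimisation convention), with \Phi and \phi the
% standard normal CDF and PDF:
\mathrm{EI}(x) = \left(f_{\min} - \mu(x)\right)\Phi(z) + s(x)\,\phi(z),
\qquad z = \frac{f_{\min} - \mu(x)}{s(x)} .
```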
Gaussian Processes for Big Data
TLDR
Stochastic variational inference for Gaussian process models is introduced, and it is shown how GPs can be variationally decomposed to depend on a set of globally relevant inducing variables, which factorize the model in the manner necessary to perform variational inference.
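
The factorisation referred to can be sketched as the sparse variational ELBO, whose data term is a sum over observations and therefore admits unbiased minibatch estimates:

```latex
% Sparse variational GP bound with inducing variables u and free-form
% Gaussian q(u); the sum over data points enables stochastic optimisation:
\mathcal{L} = \sum_{i=1}^{N} \mathbb{E}_{q(f_i)}\!\left[\log p(y_i \mid f_i)\right]
  - \mathrm{KL}\!\left(q(\mathbf{u}) \,\|\, p(\mathbf{u})\right).
```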