Preferential Bayesian optimisation with skew Gaussian processes

  title={Preferential Bayesian optimisation with skew Gaussian processes},
  author={A. Benavoli and Dario Azzimonti and D. Piga},
  booktitle={Proceedings of the Genetic and Evolutionary Computation Conference Companion},
Preferential Bayesian optimisation (PBO) deals with optimisation problems where the objective function can only be accessed via preference judgments, such as "this is better than that" between two candidate solutions (like in A/B tests). The state-of-the-art approach to PBO uses a Gaussian process to model the preference function and a Bernoulli likelihood to model the observed pairwise comparisons. Laplace's method is then employed to compute posterior inferences and, in particular, to build…
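The pipeline the abstract describes — a GP prior over the latent preference function, a probit (Bernoulli) likelihood over observed duels, and Laplace's method for posterior inference — can be illustrated with a minimal sketch. Everything below (kernel choice, length-scale, the Newton solver) is an illustrative simplification, not the paper's implementation:

```python
import numpy as np
from scipy.stats import norm

def rbf_kernel(X, lengthscale=1.0):
    """Squared-exponential kernel on 1-D inputs."""
    d2 = (X[:, None] - X[None, :]) ** 2
    return np.exp(-0.5 * d2 / lengthscale ** 2)

def laplace_preference_map(K, duels, n_iter=100, tol=1e-8):
    """MAP estimate of the latent utility f under a GP prior, with one probit
    likelihood term Phi((f_i - f_j) / sqrt(2)) per duel (i beats j)."""
    n = K.shape[0]
    f = np.zeros(n)
    for _ in range(n_iter):
        g = np.zeros(n)        # gradient of the log-likelihood
        W = np.zeros((n, n))   # negative Hessian of the log-likelihood
        for i, j in duels:
            z = (f[i] - f[j]) / np.sqrt(2)
            r = norm.pdf(z) / norm.cdf(z)
            g[i] += r / np.sqrt(2)
            g[j] -= r / np.sqrt(2)
            c = r * (z + r) / 2.0   # positive curvature: probit is log-concave
            W[i, i] += c; W[j, j] += c
            W[i, j] -= c; W[j, i] -= c
        # Newton step f_new = (K^-1 + W)^-1 (W f + g), in K-solve form
        f_new = K @ np.linalg.solve(np.eye(n) + W @ K, W @ f + g)
        if np.max(np.abs(f_new - f)) < tol:
            f = f_new
            break
        f = f_new
    return f

# Three items; all observed duels say "larger x wins".
X = np.array([0.0, 1.0, 2.0])
duels = [(1, 0), (2, 1), (2, 0)]   # (winner, loser)
f_map = laplace_preference_map(rbf_kernel(X), duels)
```

The recovered MAP utilities respect the observed preference ordering (f increases with x). The paper's contribution is to replace this Gaussian/Laplace approximation with an exact skew-Gaussian posterior.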


A unified framework for closed-form nonparametric regression, classification, preference and mixed problems with Skew Gaussian Processes
It is proved that SkewGP is conjugate with both the normal and affine probit likelihood and, more generally, with their product, which allows classification, preference, numeric and ordinal regression, and mixed problems to be handled in a unified framework.
Bayesian Optimisation for Sequential Experimental Design with Applications in Additive Manufacturing
This work aims to bring attention to the benefits of applying BO in designing experiments and to provide a BO manual, covering both methodology and software, for the convenience of anyone who wants to apply or learn BO.


Preferential Bayesian Optimization
Preferential Bayesian Optimization is presented, which finds the optimum of a latent function that can only be queried through pairwise comparisons, the so-called duels; the way correlations are modelled in PBO is key to obtaining this advantage.
Skew Gaussian Processes for Classification
This paper proposes Skew-Gaussian processes (SkewGPs) as a non-parametric prior over functions and verifies empirically that the proposed SkewGP classifier provides better performance than a GP classifier based on either Laplace's method or Expectation Propagation.
Safe Exploration for Optimization with Gaussian Processes
This work develops an efficient algorithm called SAFEOPT, theoretically guarantees its convergence to a natural notion of optimum reachable under safety constraints, and demonstrates it on two real applications: movie recommendation and therapeutic spinal cord stimulation.
Bayesian Active Learning for Classification and Preference Learning
This work proposes an approach that expresses information gain in terms of predictive entropies and applies this method to the Gaussian Process Classifier (GPC), making minimal approximations to the full information-theoretic objective.
A Tutorial on Bayesian Optimization of Expensive Cost Functions, with Application to Active User Modeling and Hierarchical Reinforcement Learning
A tutorial on Bayesian optimization, a method of finding the maximum of expensive cost functions using the Bayesian technique of setting a prior over the objective function and combining it with evidence to get a posterior function.
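The loop this tutorial covers — prior over the objective, evidence, posterior, then an acquisition function to pick the next query — can be sketched with a GP surrogate and expected improvement. The objective, grid, length-scale, and iteration budget below are illustrative choices, not values from the tutorial:

```python
import numpy as np
from scipy.stats import norm

def rbf(A, B, ls=0.2):
    return np.exp(-0.5 * (A[:, None] - B[None, :]) ** 2 / ls ** 2)

def gp_posterior(Xtr, ytr, Xte, ls=0.2, noise=1e-6):
    """Posterior mean and std of a zero-mean GP at test points Xte."""
    K = rbf(Xtr, Xtr, ls) + noise * np.eye(len(Xtr))
    Ks = rbf(Xte, Xtr, ls)
    mu = Ks @ np.linalg.solve(K, ytr)
    v = np.linalg.solve(K, Ks.T)
    var = np.clip(1.0 - np.sum(Ks * v.T, axis=1), 1e-12, None)
    return mu, np.sqrt(var)

def expected_improvement(mu, sigma, y_best):
    z = (mu - y_best) / sigma
    return (mu - y_best) * norm.cdf(z) + sigma * norm.pdf(z)

f = lambda x: -(x - 0.7) ** 2        # hypothetical expensive objective
grid = np.linspace(0.0, 1.0, 101)    # candidate query points
Xtr = np.array([0.0, 0.5, 1.0])      # initial design
ytr = f(Xtr)
for _ in range(10):                  # BO loop: fit, acquire, evaluate
    mu, sigma = gp_posterior(Xtr, ytr, grid)
    x_next = grid[np.argmax(expected_improvement(mu, sigma, ytr.max()))]
    Xtr = np.append(Xtr, x_next)
    ytr = np.append(ytr, f(x_next))
x_best = Xtr[np.argmax(ytr)]
```

With a handful of evaluations the loop concentrates queries near the maximiser at 0.7; the point of BO is exactly this sample efficiency when each evaluation is expensive.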
Integrals over Gaussians under Linear Domain Constraints
An efficient black-box algorithm that exploits geometry to estimate integrals over a small, truncated Gaussian volume and to simulate from it, using the Holmes-Diaconis-Ross (HDR) method combined with an analytic version of elliptical slice sampling (ESS).
Active Preference Learning with Discrete Choice Data
An active learning algorithm that learns a continuous valuation model from discrete preferences and maximizes the expected improvement at each query, without accurately modelling the entire valuation surface, which would be needlessly expensive.
Active preference learning based on radial basis functions
This paper proposes a method for solving optimization problems in which the decision-maker cannot evaluate the objective function, but rather can only express a preference such as "this is better…
Stagewise Safe Bayesian Optimization with Gaussian Processes
An efficient safe Bayesian optimization algorithm, StageOpt, is developed that separates safe region expansion and utility function maximization into two distinct stages, with theoretical guarantees for both the satisfaction of safety constraints and convergence to the optimal utility value.
Preference learning with Gaussian processes
A probabilistic kernel approach to preference learning based on Gaussian processes is presented, and a new likelihood function is proposed to capture the preference relations in the Bayesian framework.