Publications
Practical Bayesian Optimization of Machine Learning Algorithms
TLDR
This work describes new algorithms that take into account the variable cost of learning-algorithm experiments and that can leverage multiple cores for parallel experimentation, and shows that the proposed algorithms improve on previous automatic procedures and can reach or surpass human expert-level optimization for many algorithms.
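As a rough illustration of the idea, here is a minimal sketch of the generic Gaussian-process Bayesian optimization loop this line of work builds on, assuming an RBF-kernel GP surrogate, an expected-improvement acquisition, and a toy one-dimensional objective standing in for an expensive training run. It is illustrative only, not the paper's implementation (which additionally models experiment cost and parallel evaluations); all names and constants are made up.

```python
import numpy as np
from scipy.stats import norm

def rbf_kernel(A, B, lengthscale=0.3, variance=1.0):
    """Squared-exponential kernel matrix between the rows of A and B."""
    d2 = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2.0 * A @ B.T
    return variance * np.exp(-0.5 * d2 / lengthscale**2)

def gp_posterior(X, y, Xs, noise=1e-6):
    """GP posterior mean and standard deviation at candidate points Xs."""
    K = rbf_kernel(X, X) + noise * np.eye(len(X))
    Ks = rbf_kernel(X, Xs)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    mu = Ks.T @ alpha
    v = np.linalg.solve(L, Ks)
    var = np.diag(rbf_kernel(Xs, Xs)) - np.sum(v**2, axis=0)
    return mu, np.sqrt(np.maximum(var, 1e-12))

def expected_improvement(mu, sigma, best):
    """EI for minimisation: expected margin by which a candidate beats the incumbent."""
    z = (best - mu) / sigma
    return (best - mu) * norm.cdf(z) + sigma * norm.pdf(z)

f = lambda x: np.sin(3.0 * x) + x**2          # toy stand-in for an expensive training run
X = np.array([[0.1], [0.9]]); y = f(X).ravel()
candidates = np.linspace(-1.0, 1.0, 200)[:, None]

for _ in range(10):
    mu, sigma = gp_posterior(X, y, candidates)
    x_next = candidates[np.argmax(expected_improvement(mu, sigma, y.min()))]
    X = np.vstack([X, [x_next]]); y = np.append(y, f(x_next))
print("best point found:", X[np.argmin(y)], "objective:", y.min())
```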
Taking the Human Out of the Loop: A Review of Bayesian Optimization
TLDR
This review paper introduces Bayesian optimization, highlights some of its methodological aspects, and showcases a wide range of applications.
Convolutional Networks on Graphs for Learning Molecular Fingerprints
TLDR
This work introduces a convolutional neural network that operates directly on graphs, allowing end-to-end learning of prediction pipelines whose inputs are graphs of arbitrary size and shape.
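A simplified sketch of the neural-fingerprint idea follows, assuming random stand-in weights, a single weight matrix shared across radii, and a toy ring-shaped molecule; the published architecture uses separate learned parameters per layer and per atom degree, so this is only a rough picture.

```python
import numpy as np

def neural_fingerprint(atom_feats, adjacency, W_hidden, W_out, radius=2):
    """At each radius, every atom aggregates its neighbours' features, passes them
    through a learned layer (a smooth analogue of a hash update), and contributes a
    softmax vector to a fixed-size fingerprint (a smooth analogue of indexing)."""
    def softmax(x):
        e = np.exp(x - x.max(axis=-1, keepdims=True))
        return e / e.sum(axis=-1, keepdims=True)
    h = atom_feats                        # (n_atoms, d) initial atom features
    fp = np.zeros(W_out.shape[1])         # fingerprint vector
    for _ in range(radius):
        neighbour_sum = adjacency @ h + h            # self + neighbour aggregation
        h = np.tanh(neighbour_sum @ W_hidden)
        fp += softmax(h @ W_out).sum(axis=0)
    return fp

# Usage on a toy 4-atom ring graph with hypothetical dimensions.
n, d, fp_dim = 4, 8, 16
adjacency = np.array([[0,1,0,1],[1,0,1,0],[0,1,0,1],[1,0,1,0]], float)
atom_feats = np.random.randn(n, d)
W_hidden, W_out = np.random.randn(d, d), np.random.randn(d, fp_dim)
print(neural_fingerprint(atom_feats, adjacency, W_hidden, W_out).shape)
```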
Bayesian Online Changepoint Detection
TLDR
This work examines the case where the model parameters before and after the changepoint are independent and derives an online algorithm for exact inference of the most recent changepoint; the implementation is highly modular, so the algorithm may be applied to many types of data.
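The core of the method is a recursion over the posterior of the current "run length" (time since the last changepoint). Below is a compact sketch of that recursion, assuming a Gaussian observation model with known variance, a conjugate Normal prior on the mean, and a constant hazard rate; the constants and the synthetic data are illustrative, and the paper allows any exponential-family predictive model in place of this one.

```python
import numpy as np

def bocd_gaussian(data, hazard=0.01, mu0=0.0, kappa0=1.0, sigma2=1.0):
    """Bayesian online changepoint detection: R[r, t] = P(run length r | data up to t)."""
    T = len(data)
    R = np.zeros((T + 1, T + 1)); R[0, 0] = 1.0
    mu, kappa = np.array([mu0]), np.array([kappa0])   # sufficient stats per run length
    for t, x in enumerate(data, start=1):
        # Posterior predictive probability of x under each run-length hypothesis.
        pred_var = sigma2 * (1.0 + 1.0 / kappa)
        pred = np.exp(-0.5 * (x - mu)**2 / pred_var) / np.sqrt(2 * np.pi * pred_var)
        growth = R[:t, t - 1] * pred * (1.0 - hazard)   # run continues, length grows by 1
        cp = np.sum(R[:t, t - 1] * pred * hazard)       # changepoint resets run length to 0
        R[1:t + 1, t] = growth
        R[0, t] = cp
        R[:, t] /= R[:, t].sum()
        # Update the conjugate statistics, prepending a fresh prior for run length 0.
        mu_new = (kappa * mu + x) / (kappa + 1.0)
        mu = np.concatenate(([mu0], mu_new))
        kappa = np.concatenate(([kappa0], kappa + 1.0))
    return R

# Usage: a mean shift halfway through; the run-length posterior should collapse there.
data = np.concatenate([np.random.normal(0, 1, 100), np.random.normal(4, 1, 100)])
R = bocd_gaussian(data)
print("MAP run length at t=150:", R[:, 150].argmax())
```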
Automatic Chemical Design Using a Data-Driven Continuous Representation of Molecules
We report a method to convert discrete representations of molecules to and from a multidimensional continuous representation. This model allows us to generate new molecules for efficient exploration and optimization through open-ended spaces of chemical compounds.
Probabilistic Backpropagation for Scalable Learning of Bayesian Neural Networks
TLDR
This work presents a novel scalable method for learning Bayesian neural networks, called probabilistic backpropagation (PBP), which works by computing a forward propagation of probabilities through the network and then doing a backward computation of gradients.
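To make the "forward propagation of probabilities" concrete, here is a small sketch of moment matching through a single linear layer, assuming independent Gaussian inputs and Gaussian weight posteriors; a full PBP layer additionally handles the nonlinearity, fan-in scaling, and the backward gradient pass, so this is only the forward building block and the sizes are hypothetical.

```python
import numpy as np

def propagate_gaussian_linear(m_in, v_in, M, V):
    """Propagate means/variances of independent Gaussian inputs through a linear layer
    whose weights have elementwise posterior means M and variances V, matching the
    output's first two moments."""
    m_out = M @ m_in
    v_out = (M**2) @ v_in + V @ (m_in**2 + v_in)
    return m_out, v_out

# Toy usage: 5 Gaussian inputs -> 3 Gaussian outputs.
m_in, v_in = np.random.randn(5), 0.1 * np.ones(5)
M, V = np.random.randn(3, 5), 0.05 * np.ones((3, 5))
print(propagate_gaussian_linear(m_in, v_in, M, V))
```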
Elliptical slice sampling
TLDR
This work presents a new Markov chain Monte Carlo algorithm for performing inference in models with multivariate Gaussian priors; it has simple, generic code applicable to many models and works well for a variety of Gaussian-process-based models.
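The update itself is short enough to sketch in full. The code below follows the published pseudocode for one elliptical slice sampling transition, assuming a zero-mean Gaussian prior on the latent vector and a user-supplied log-likelihood; variable names and the toy usage are my own.

```python
import numpy as np

def elliptical_slice(f_current, log_lik, prior_sample):
    """One elliptical slice sampling update. `prior_sample` is a draw from the
    zero-mean Gaussian prior (e.g. L @ randn for covariance L @ L.T)."""
    nu = prior_sample
    log_y = log_lik(f_current) + np.log(np.random.rand())   # slice level
    theta = np.random.uniform(0.0, 2 * np.pi)               # initial proposal angle
    theta_min, theta_max = theta - 2 * np.pi, theta
    while True:
        f_prop = f_current * np.cos(theta) + nu * np.sin(theta)
        if log_lik(f_prop) > log_y:
            return f_prop                                    # always terminates; no rejections
        # Shrink the angle bracket towards 0 and retry.
        if theta < 0:
            theta_min = theta
        else:
            theta_max = theta
        theta = np.random.uniform(theta_min, theta_max)

# Usage with an illustrative 3-dimensional standard-normal prior and Gaussian likelihood.
log_lik = lambda f: -0.5 * np.sum((f - 1.0)**2)
f = np.zeros(3)
for _ in range(100):
    f = elliptical_slice(f, log_lik, np.random.randn(3))
print("sample after 100 updates:", f)
```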
Gradient-based Hyperparameter Optimization through Reversible Learning
TLDR
This work computes exact gradients of cross-validation performance with respect to all hyperparameters by chaining derivatives backwards through the entire training procedure, which makes it possible to optimize thousands of hyperparameters, including step-size and momentum schedules, weight initialization distributions, richly parameterized regularization schemes, and neural network architectures.
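A tiny worked example of the hypergradient idea follows, assuming a one-parameter quadratic training loss and a single hyperparameter (the learning rate). The trajectory is stored explicitly here for clarity; the paper's contribution is recovering it by exactly reversing momentum-SGD so that memory stays constant. All constants are made up.

```python
import numpy as np

a, train_target, val_target = 2.0, 1.0, 1.2   # hypothetical problem constants
T, eta, w0 = 50, 0.1, 0.0

# Forward pass: unrolled SGD on the training loss 0.5 * a * (w - train_target)^2.
ws, grads = [w0], []
for t in range(T):
    g = a * (ws[-1] - train_target)
    grads.append(g)
    ws.append(ws[-1] - eta * g)

# Reverse pass: chain d(val loss)/dw backwards through every update equation.
d_w = ws[-1] - val_target          # gradient of the validation loss 0.5 * (w_T - val_target)^2
d_eta = 0.0
for t in reversed(range(T)):
    d_eta += d_w * (-grads[t])     # w_{t+1} = w_t - eta * g_t  =>  direct d w_{t+1} / d eta = -g_t
    d_w = d_w * (1.0 - eta * a)    # d w_{t+1} / d w_t = 1 - eta * a for this quadratic
print("hypergradient d(val loss)/d(learning rate):", d_eta)
```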
Multi-Task Bayesian Optimization
TLDR
This paper proposes an adaptation of a recently developed acquisition function, entropy search, to the cost-sensitive, multi-task setting and demonstrates the utility of this new acquisition function by leveraging a small dataset to explore hyper-parameter settings for a large dataset.
Scalable Bayesian Optimization Using Deep Neural Networks
TLDR
This work shows that performing adaptive basis function regression with a neural network as the parametric form performs competitively with state-of-the-art GP-based approaches, but scales linearly with the number of data rather than cubically, which allows for a previously intractable degree of parallelism.
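A sketch of the adaptive-basis-function regression step is given below, assuming the trained network's last hidden-layer activations are supplied as the basis matrix Phi and that the prior and noise precisions are fixed constants; the actual method also trains the network and treats these precisions more carefully, so this only shows why the cost is linear in the number of data points (cubic only in the number of basis functions).

```python
import numpy as np

def bayesian_linear_layer(Phi, y, alpha=1.0, beta=25.0):
    """Bayesian linear regression on basis functions Phi (e.g. a net's last hidden layer).
    alpha and beta are the prior and noise precisions."""
    d = Phi.shape[1]
    S = np.linalg.inv(alpha * np.eye(d) + beta * Phi.T @ Phi)   # posterior covariance
    m = beta * S @ Phi.T @ y                                    # posterior mean
    def predict(Phi_star):
        mu = Phi_star @ m
        var = 1.0 / beta + np.sum((Phi_star @ S) * Phi_star, axis=1)
        return mu, var
    return predict

# Usage with hypothetical random features standing in for a trained net's last layer.
Phi = np.tanh(np.random.randn(500, 50) @ np.random.randn(50, 50))
y = np.random.randn(500)
predict = bayesian_linear_layer(Phi, y)
print(predict(Phi[:5]))
```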
...