Publications
Scalable Hyperparameter Transfer Learning
TLDR
This work proposes a multi-task adaptive Bayesian linear regression model for transfer learning in BO, whose complexity is linear in the number of function evaluations: one Bayesian linear regression model is associated with each black-box function optimization problem (or task), while transfer learning is achieved by coupling the models through a shared deep neural net.
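A minimal sketch of the structure, assuming a fixed random feature map as a stand-in for the shared deep net (in the paper the net is learned jointly across tasks); all names and data below are hypothetical:

```python
import numpy as np

def blr_posterior(Phi, y, alpha=1.0, beta=1.0):
    # Bayesian linear regression head for one task: p(w | Phi, y) = N(mu, S)
    d = Phi.shape[1]
    S = np.linalg.inv(alpha * np.eye(d) + beta * Phi.T @ Phi)
    mu = beta * S @ Phi.T @ y
    return mu, S

rng = np.random.default_rng(0)
W = rng.normal(size=(3, 16))                    # stand-in for the shared net
features = lambda X: np.tanh(X @ W)             # shared across all tasks

# one (X, y) dataset per black-box task; cost stays linear in evaluations
tasks = [(rng.normal(size=(20, 3)), rng.normal(size=20)) for _ in range(4)]
heads = [blr_posterior(features(X), y) for X, y in tasks]
```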
A Likelihood-Free Inference Framework for Population Genetic Data using Exchangeable Neural Networks
TLDR
This work develops an exchangeable neural network that performs summary-statistic-free, likelihood-free inference, and demonstrates the power of the approach on the recombination hotspot testing problem, outperforming the state of the art.
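A Deep-Sets-style sketch of the key ingredient, exchangeability over rows (e.g. haplotypes); this is an illustrative architecture with made-up sizes, not the authors' exact one:

```python
import torch
import torch.nn as nn

class ExchangeableNet(nn.Module):
    def __init__(self, n_sites, hidden=64):
        super().__init__()
        self.phi = nn.Sequential(nn.Linear(n_sites, hidden), nn.ReLU())
        self.rho = nn.Linear(hidden, 1)

    def forward(self, x):        # x: (batch, n_haplotypes, n_sites)
        h = self.phi(x)          # encode each haplotype independently
        pooled = h.mean(dim=1)   # symmetric pooling => permutation invariance
        return self.rho(pooled)  # e.g. hotspot vs. no-hotspot logit

net = ExchangeableNet(n_sites=40)
x = torch.randint(0, 2, (8, 100, 40)).float()   # 8 simulated datasets
logits = net(x)                                 # shape (8, 1)
```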
Relativistic Monte Carlo
TLDR
Relativistic Hamiltonian Monte Carlo is proposed, a version of HMC based on relativistic dynamics that introduces a maximum velocity on particles; stochastic gradient versions of the algorithm are derived, and the resulting algorithms are shown to bear interesting relationships to gradient clipping, RMSprop, Adagrad and Adam, popular optimisation methods in deep learning.
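A sketch of the leapfrog integrator under the relativistic kinetic energy, whose velocity is capped at c; momentum resampling and the accept step of the full sampler are omitted, and all parameter values are illustrative:

```python
import numpy as np

def rel_velocity(p, m=1.0, c=1.0):
    # dq/dt = p / (m * sqrt(||p||^2 / (m^2 c^2) + 1)); speed is capped at c
    return p / (m * np.sqrt(p @ p / (m * c) ** 2 + 1.0))

def leapfrog(q, p, grad_U, eps=0.05, n_steps=20, m=1.0, c=1.0):
    p = p - 0.5 * eps * grad_U(q)
    for _ in range(n_steps - 1):
        q = q + eps * rel_velocity(p, m, c)
        p = p - eps * grad_U(q)
    q = q + eps * rel_velocity(p, m, c)
    p = p - 0.5 * eps * grad_U(q)
    return q, p

grad_U = lambda q: q                    # standard normal target
q, p = leapfrog(np.ones(2), np.zeros(2), grad_U)
```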
Learning search spaces for Bayesian optimization: Another view of hyperparameter transfer learning
TLDR
This work introduces a method to automatically design the BO search space by relying on evaluations of previous black-box functions, departing from the common practice of defining a set of arbitrary search ranges a priori by considering search space geometries that are learnt from historical data.
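A sketch of the simplest variant of the idea, assuming we have the best configuration found on each past task: fit a tight bounding box around those optima and use it as the new search space (the paper also studies richer learned geometries); the data below is made up:

```python
import numpy as np

# hypothetical: best (learning_rate, batch_size) found on four past tasks
past_optima = np.array([[1e-3, 32], [3e-4, 64], [1e-2, 16], [2e-3, 48]])

# learned box search space: tightest box containing past optima, inflated
lo, hi = past_optima.min(axis=0), past_optima.max(axis=0)
margin = 0.1 * (hi - lo)
search_space = np.stack([lo - margin, hi + margin], axis=1)  # (dim, 2)
```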
A Quantile-based Approach for Hyperparameter Transfer Learning
TLDR
This work introduces a novel approach to achieve transfer learning across different datasets as well as different objectives, regressing the mapping from hyperparameters to objective quantiles with a semi-parametric Gaussian Copula distribution.
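A sketch of the copula transform at the heart of the approach: per task, map objective values to empirical quantiles and then through the Gaussian inverse CDF, so a single regressor can be fit across tasks with incomparable objective scales; the data is synthetic:

```python
import numpy as np
from scipy.stats import norm, rankdata

def gaussianize(y):
    # per-task transform: empirical quantiles -> standard normal scores
    q = rankdata(y) / (len(y) + 1)
    return norm.ppf(q)

rng = np.random.default_rng(0)
tasks = [(rng.uniform(size=(30, 2)), rng.normal(size=30)) for _ in range(5)]
X = np.vstack([Xt for Xt, _ in tasks])
z = np.concatenate([gaussianize(yt) for _, yt in tasks])
# fit any regressor X -> z; the paper uses a semi-parametric Gaussian Copula
w, *_ = np.linalg.lstsq(np.c_[X, np.ones(len(X))], z, rcond=None)
```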
GASC: Genre-Aware Semantic Change for Ancient Greek
TLDR
GASC is developed, a dynamic semantic change model that leverages categorical metadata about the texts’ genre to boost inference and uncover the evolution of meanings in Ancient Greek corpora; in a new evaluation framework, it achieves improved predictive performance compared to the state of the art.
Poisson Random Fields for Dynamic Feature Models
TLDR
A new framework for generating dependent Indian buffet processes is established, where the Poisson random field model from population genetics is used as a way of constructing dependent beta processes.
Amazon SageMaker Automatic Model Tuning: Scalable Gradient-Free Optimization
TLDR
Amazon SageMaker Automatic Model Tuning (AMT) is presented, a fully managed system for gradient-free optimization at scale, which finds the best version of a machine learning model by repeatedly evaluating it with different hyperparameter configurations.
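A hedged usage sketch with the SageMaker Python SDK; the estimator, metric name, and S3 URIs below are placeholders you would supply from your own setup:

```python
from sagemaker.tuner import (HyperparameterTuner, ContinuousParameter,
                             IntegerParameter)

# `estimator` is any configured SageMaker Estimator (placeholder here)
tuner = HyperparameterTuner(
    estimator=estimator,
    objective_metric_name="validation:auc",
    hyperparameter_ranges={
        "eta": ContinuousParameter(1e-4, 0.5, scaling_type="Logarithmic"),
        "max_depth": IntegerParameter(2, 10),
    },
    max_jobs=50,            # total configurations to evaluate
    max_parallel_jobs=5,    # evaluated concurrently
)
tuner.fit({"train": train_s3_uri, "validation": validation_s3_uri})
```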
Fair Bayesian Optimization
TLDR
A general constrained Bayesian optimization framework is proposed to optimize the performance of any ML model while enforcing one or multiple fairness constraints; a correlation between regularization and unbiased models is observed, explaining why acting on the hyperparameters leads to ML models that generalize well and are fair.
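A toy sketch of the constrained formulation: treat a fairness gap as a black-box constraint and keep the best feasible configuration; random search stands in for the BO loop, and the evaluate function below is entirely synthetic:

```python
import numpy as np

rng = np.random.default_rng(0)

def evaluate(c):
    # synthetic stand-in for training a model with hyperparameters c:
    # returns (validation_error, fairness_gap between two groups)
    err = (c[0] - 0.3) ** 2 + 0.01 * rng.normal()
    gap = 0.2 * abs(0.5 - c[0])      # pretend more regularization = fairer
    return err, gap

eps = 0.05                            # constraint: fairness_gap <= eps
candidates = rng.uniform(size=(200, 1))
results = [(c, *evaluate(c)) for c in candidates]
feasible = [(c, err) for c, err, gap in results if gap <= eps]
best_config, best_err = min(feasible, key=lambda t: t[1])
# the paper replaces this random search with constrained BO
```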
Constrained Bayesian Optimization with Max-Value Entropy Search
TLDR
This work proposes constrained Max-value Entropy Search (cMES), a novel information-theoretic acquisition function for Gaussian process-based BO with continuous or binary constraints, and revisits the validity of the factorized approximation adopted for rapid computation of the MES acquisition function.
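For reference, a numpy sketch of the unconstrained MES acquisition that cMES builds on (Wang & Jegelka, 2017), evaluated from GP posterior means and stds at candidate points and Monte Carlo samples of the maximum value; the constrained extension adds a feasibility term, and the values below are illustrative:

```python
import numpy as np
from scipy.stats import norm

def mes(mu, sigma, y_star_samples):
    # alpha(x) = E_{y*}[ gamma*phi(gamma) / (2*Phi(gamma)) - log Phi(gamma) ]
    # with gamma = (y* - mu(x)) / sigma(x), for a maximization problem
    gamma = (y_star_samples[:, None] - mu[None, :]) / sigma[None, :]
    cdf, pdf = norm.cdf(gamma), norm.pdf(gamma)
    return (gamma * pdf / (2 * cdf) - np.log(cdf)).mean(axis=0)

mu = np.array([0.0, 0.5, 0.9])
sigma = np.array([1.0, 0.5, 0.1])
y_star = np.array([1.2, 1.5, 1.1])    # samples of the max value
scores = mes(mu, sigma, y_star)
```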
...