Publications
A Likelihood-Free Inference Framework for Population Genetic Data using Exchangeable Neural Networks
TLDR: We develop an exchangeable neural network that performs summary-statistic-free, likelihood-free inference.
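The key property is permutation invariance: applying the same map to every row of the data and pooling symmetrically means the network's output cannot depend on row order, so no hand-crafted summary statistics are needed. Below is a minimal numpy sketch of that idea; the layer sizes, ReLU activation, and mean pooling are illustrative choices, not the paper's architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

# Illustrative random weights; a real model would learn these.
W_phi = rng.normal(size=(16, 8))   # shared per-row feature map
W_rho = rng.normal(size=(8, 1))    # decoder on the pooled summary

def exchangeable_net(X):
    """X: (n_rows, 16) matrix of exchangeable rows (e.g. haplotypes).

    Same map on every row + symmetric pooling => output is invariant
    to any permutation of the rows.
    """
    H = relu(X @ W_phi)        # per-row embedding
    pooled = H.mean(axis=0)    # permutation-invariant pooling
    return pooled @ W_rho      # map the summary to the inference target

X = rng.normal(size=(100, 16))
perm = rng.permutation(100)
assert np.allclose(exchangeable_net(X), exchangeable_net(X[perm]))
```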
Scalable Hyperparameter Transfer Learning
TLDR: We propose a multi-task adaptive Bayesian linear regression model for transfer learning in Bayesian optimization (BO), whose complexity is linear in the number of function evaluations; transfer learning is achieved by coupling the per-task models through a shared deep neural net.
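The scalability claim rests on Bayesian linear regression: given a d-dimensional feature map, the posterior costs O(n d² + d³) in n evaluations, versus a GP's O(n³). A minimal sketch of that building block, assuming the shared network's output features are already available as a matrix `Phi` (the precisions `alpha` and `beta` are illustrative stand-ins for learned values):

```python
import numpy as np

def blr_posterior(Phi, y, alpha=1.0, beta=1.0):
    """Bayesian linear regression on features Phi (n, d).

    alpha: prior precision on the weights; beta: noise precision.
    Cost is linear in n, unlike a GP's cubic scaling.
    """
    d = Phi.shape[1]
    S_inv = alpha * np.eye(d) + beta * Phi.T @ Phi   # posterior precision
    S = np.linalg.inv(S_inv)
    m = beta * S @ Phi.T @ y                          # posterior mean
    return m, S

def predict(phi_x, m, S, beta=1.0):
    """Predictive mean and variance at one feature vector phi_x."""
    return phi_x @ m, 1.0 / beta + phi_x @ S @ phi_x

rng = np.random.default_rng(0)
Phi = rng.normal(size=(50, 8))   # stand-in for shared-net features
y = rng.normal(size=50)
m, S = blr_posterior(Phi, y)
mean, var = predict(Phi[0], m, S)
```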
Relativistic Monte Carlo
TLDR: We propose relativistic stochastic gradient Hamiltonian Monte Carlo, a version of HMC based on relativistic dynamics that introduce a maximum velocity on particles.
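The velocity cap comes from swapping the Newtonian kinetic energy for the relativistic one, K(p) = mc²·sqrt(‖p‖²/(m²c²) + 1), whose gradient has norm bounded by c. A sketch of one leapfrog step with this kinetic energy, assuming illustrative values for the mass m, speed limit c, and step size:

```python
import numpy as np

def relativistic_velocity(p, m=1.0, c=1.0):
    """dK/dp for K(p) = m c^2 sqrt(||p||^2 / (m^2 c^2) + 1).

    The speed ||dK/dp|| is bounded by c, so a single step cannot
    move arbitrarily far even under huge (or noisy) gradients.
    """
    return p / (m * np.sqrt(p @ p / (m**2 * c**2) + 1.0))

def leapfrog(q, p, grad_U, eps=0.1, m=1.0, c=1.0):
    """One leapfrog step of relativistic HMC (illustrative step size)."""
    p = p - 0.5 * eps * grad_U(q)
    q = q + eps * relativistic_velocity(p, m, c)
    p = p - 0.5 * eps * grad_U(q)
    return q, p

# Example target: standard normal, U(q) = ||q||^2 / 2.
grad_U = lambda q: q
q, p = np.ones(2), np.zeros(2)
q, p = leapfrog(q, p, grad_U)
```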
A Quantile-based Approach for Hyperparameter Transfer Learning
TLDR: This paper shows how semi-parametric Gaussian copulas effectively handle heterogeneous scales across tasks, giving rise to several algorithmic instantiations for hyperparameter transfer learning.
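The core of the semi-parametric Gaussian copula is rank-based: objective values are mapped through their empirical CDF and then through the standard-normal quantile function, so only orderings matter and scales become comparable across tasks. A minimal sketch of that transform (the accuracy values are made up for illustration):

```python
import numpy as np
from scipy import stats

def gaussian_copula_transform(y):
    """Map objective values to standard-normal scores via their ranks.

    Only the ordering of y matters, so objectives on wildly different
    scales (accuracy, log-loss, runtime, ...) become comparable.
    """
    n = len(y)
    ranks = stats.rankdata(y)               # 1..n, ties averaged
    return stats.norm.ppf(ranks / (n + 1))  # Phi^{-1} of empirical CDF

y_task = np.array([0.91, 0.88, 0.95, 0.70])   # e.g. validation accuracies
z = gaussian_copula_transform(y_task)          # approximately N(0, 1) scores
```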
GASC: Genre-Aware Semantic Change for Ancient Greek
TLDR: We develop GASC, a dynamic semantic change model that leverages categorical metadata about the texts' genre to boost inference and uncover the evolution of word meanings in Ancient Greek corpora.
Poisson Random Fields for Dynamic Feature Models
TLDR: We present the Wright-Fisher Indian buffet process (WF-IBP), a probabilistic model for time-dependent data assumed to have been generated by an unknown number of latent features, in which the Poisson random field model from population genetics is used to construct dependent beta processes.
Learning search spaces for Bayesian optimization: Another view of hyperparameter transfer learning
TLDR: We introduce a method to automatically design the BO search space by relying on evaluations of previous black-box functions.
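One simple instance of the idea is to shrink the search box around the best configurations found on related past tasks. The sketch below uses a padded per-dimension min/max of historical optima; this is only an illustration of the concept, not the paper's specific construction, and the hyperparameter values are made up.

```python
import numpy as np

def search_space_from_history(best_configs, margin=0.1):
    """Derive a BO search box from the best configurations found on
    previous black-box tasks (one row of best_configs per task).

    Takes the per-dimension min/max of the historical optima and pads
    it by a small margin, so new tasks start in a promising region.
    """
    lo = best_configs.min(axis=0)
    hi = best_configs.max(axis=0)
    pad = margin * (hi - lo)
    return lo - pad, hi + pad

# Best (learning rate, momentum) found on three earlier tasks.
history = np.array([[1e-3, 0.90],
                    [5e-3, 0.85],
                    [2e-3, 0.99]])
low, high = search_space_from_history(history)
```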
Multiple Adaptive Bayesian Linear Regression for Scalable Bayesian Optimization with Warm Start
Bayesian optimization (BO) is a model-based approach for gradient-free black-box function optimization. Typically, BO is powered by a Gaussian process (GP), whose algorithmic complexity is cubic in the number of function evaluations.
Constrained Bayesian Optimization with Max-Value Entropy Search
TLDR: We propose constrained Max-value Entropy Search (cMES), a novel information-theoretic acquisition function implementing a general formulation of Gaussian-process-based BO with continuous or binary constraints.
Cost-aware Bayesian Optimization
TLDR: We introduce Cost Apportioned BO (CArBO), a novel BO algorithm that attempts to minimize an objective function at as little cost as possible.
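A common baseline in this line of work normalizes the acquisition value by a predicted evaluation cost, so cheap, promising points are tried first. The sketch below shows that cost-normalized expected improvement; note it only illustrates the cost-normalization idea, not CArBO's specific cost-apportioning rule, and the numbers passed in are made up.

```python
import numpy as np
from scipy import stats

def expected_improvement(mu, sigma, best):
    """Standard EI for minimization, given posterior mean/std at a point."""
    z = (best - mu) / sigma
    return (best - mu) * stats.norm.cdf(z) + sigma * stats.norm.pdf(z)

def cost_aware_acquisition(mu, sigma, best, cost):
    """EI per unit of predicted evaluation cost.

    Dividing the improvement signal by the modeled cost trades off
    informativeness against how expensive the evaluation will be.
    """
    return expected_improvement(mu, sigma, best) / cost

ei_per_cost = cost_aware_acquisition(mu=0.4, sigma=0.2, best=0.5, cost=3.0)
```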