Publications
Automatic Chemical Design Using a Data-Driven Continuous Representation of Molecules
We report a method to convert discrete representations of molecules to and from a multidimensional continuous representation. This model allows us to generate new molecules for efficient exploration …
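A minimal sketch of the idea behind the continuous representation: once molecules are encoded as latent vectors, new candidates can be proposed by decoding points between known molecules. The latent codes below are random stand-ins for the output of a trained encoder, which is not reproduced here.

```python
import numpy as np

# Stand-in latent codes for two molecules, as a trained VAE encoder
# might produce (random vectors here; dimension 56 is illustrative).
rng = np.random.default_rng(0)
z_a = rng.normal(size=56)   # latent code of molecule A
z_b = rng.normal(size=56)   # latent code of molecule B

# Because the representation is continuous, candidate molecules can be
# generated by decoding points along the segment between two codes.
alphas = np.linspace(0.0, 1.0, 7)
path = [(1 - a) * z_a + a * z_b for a in alphas]

# Each interpolated point would be passed to the decoder to propose a
# new molecule; the path starts and ends at the input codes.
assert np.allclose(path[0], z_a) and np.allclose(path[-1], z_b)
```

Optimization in this space works the same way: a gradient step or Bayesian-optimization proposal moves the latent vector, and the decoder turns the result back into a discrete molecule.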
Probabilistic Backpropagation for Scalable Learning of Bayesian Neural Networks
TLDR: This work presents probabilistic backpropagation (PBP), a novel scalable method for learning Bayesian neural networks that performs a forward propagation of probabilities through the network followed by a backward computation of gradients.
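The "forward propagation of probabilities" can be illustrated by the moment-matching step for a single linear layer: with independent Gaussian weights and Gaussian inputs, the output mean and variance have closed forms. This is a sketch of that identity only, not the full PBP algorithm (which also rescales activations and runs assumed-density-filtering updates).

```python
import numpy as np

def linear_moments(m_in, v_in, m_w, v_w):
    """Propagate a mean/variance pair through a linear layer whose
    weights are independent Gaussians.

    For z_i = sum_j W_ij x_j with W_ij ~ N(m_w[i,j], v_w[i,j]) and
    independent x_j ~ N(m_in[j], v_in[j]):
      E[z_i]   = sum_j m_w[i,j] * m_in[j]
      Var[z_i] = sum_j v_w*m_in^2 + m_w^2*v_in + v_w*v_in
    """
    m_out = m_w @ m_in
    v_out = v_w @ (m_in ** 2) + (m_w ** 2) @ v_in + v_w @ v_in
    return m_out, v_out

# Sanity check: deterministic weights and inputs give a deterministic
# output equal to the ordinary matrix-vector product.
m_w = np.array([[1.0, 2.0]])
v_w = np.zeros((1, 2))
m_in = np.array([3.0, 4.0])
v_in = np.zeros(2)
m, v = linear_moments(m_in, v_in, m_w, v_w)
# m == [11.0], v == [0.0]
```

Stacking this step (with a matching rule for the nonlinearity) gives the network's predictive mean and variance in one pass, which is what the backward gradient computation then differentiates.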
Grammar Variational Autoencoder
TLDR: Surprisingly, the model not only generates valid outputs more often, it also learns a more coherent latent space in which nearby points decode to similar discrete outputs.
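The validity guarantee comes from decoding production rules of a grammar rather than raw characters: a stack of nonterminals masks which rules may fire at each step. The toy grammar below (balanced parentheses) is a hypothetical stand-in for the SMILES grammar used in the paper, and the rule choices stand in for the decoder's outputs.

```python
# Toy grammar: S -> ( S ) S | empty  (balanced parentheses).
# Keys are nonterminals; values list the right-hand sides of their rules.
GRAMMAR = {
    "S": [["(", "S", ")", "S"], []],
}

def decode(rule_choices):
    """Grammar-constrained decoding: only rules that expand the
    nonterminal on top of the stack are valid, so any completed
    output is syntactically well formed."""
    stack, out = ["S"], []
    for choice in rule_choices:
        # Emit terminals until a nonterminal is on top of the stack.
        while stack and stack[-1] not in GRAMMAR:
            out.append(stack.pop())
        if not stack:
            break
        nt = stack.pop()
        # Mask step: `choice` may only select among rules for `nt`.
        rhs = GRAMMAR[nt][choice % len(GRAMMAR[nt])]
        stack.extend(reversed(rhs))
    while stack and stack[-1] not in GRAMMAR:
        out.append(stack.pop())
    return "".join(out)

# decode([0, 1, 1]) expands S once and closes it: "()"
# decode([1]) takes the empty rule immediately: ""
```

In the actual model, the decoder emits logits over all rules and invalid ones are masked to zero probability before sampling; the stack mechanics are the same.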
Predictive Entropy Search for Efficient Global Optimization of Black-box Functions
TLDR: This work proposes Predictive Entropy Search (PES), a novel information-theoretic approach to Bayesian optimization that expresses an otherwise intractable acquisition function in terms of the expected reduction in the differential entropy of the predictive distribution.
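The entropy-reduction bookkeeping behind such acquisition functions can be shown on a toy bivariate Gaussian: observing a candidate point shrinks the variance of the quantity of interest, and the acquisition value is the resulting drop in differential entropy. This is a simplified stand-in, not the PES approximation itself (which targets the entropy of the global minimizer's location).

```python
import numpy as np

def entropy_gauss(var):
    # Differential entropy of a univariate Gaussian: 0.5 * ln(2*pi*e*var)
    return 0.5 * np.log(2 * np.pi * np.e * var)

# Toy joint Gaussian over (quantity of interest f, candidate observation y).
var_f, var_y, cov = 2.0, 1.5, 0.9

# For jointly Gaussian variables, the conditional variance of f given y
# does not depend on the observed value, so the expected reduction in
# entropy is available in closed form.
var_f_given_y = var_f - cov**2 / var_y

# Entropy-reduction acquisition value for this candidate:
acq = entropy_gauss(var_f) - entropy_gauss(var_f_given_y)
# Equivalently 0.5 * ln(var_f / var_f_given_y); positive whenever cov != 0.
```

In PES proper the distribution over the minimizer is non-Gaussian, so the paper develops expectation-propagation approximations to make this quantity tractable.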
Deep Gaussian Processes for Regression using Approximate Expectation Propagation
TLDR: A new approximate Bayesian learning scheme is developed that enables DGPs to be applied to medium- to large-scale regression problems for the first time, almost always outperforming state-of-the-art deterministic and sampling-based approximate inference methods for Bayesian neural networks.
Probabilistic Matrix Factorization with Non-random Missing Data
TLDR: A probabilistic matrix factorization model for collaborative filtering that learns from data that is missing not at random (MNAR), improving over state-of-the-art methods both when predicting ratings and when modeling the data observation process.
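The core MNAR idea is that whether a rating is observed depends on its value (users tend to rate items they like), so the observation process must enter the likelihood. The logistic mechanism and the uniform rating distribution below are hypothetical stand-ins chosen for illustration, not the paper's fitted model.

```python
import numpy as np

def p_observe(rating, a=-2.0, b=0.8):
    # Hypothetical logistic missing-data mechanism: P(observed | rating)
    # increases with the rating value (illustrative parameters a, b).
    return 1.0 / (1.0 + np.exp(-(a + b * rating)))

ratings = np.arange(1, 6)
# Stand-in predictive P(rating | U, V) from the factorization model;
# uniform here purely for illustration.
p_r = np.full(5, 0.2)

# Joint likelihood terms the MNAR model combines:
#   observed entry r:  P(r | U, V) * P(obs = 1 | r)
#   missing entry:     sum_r P(r | U, V) * P(obs = 0 | r)
p_obs = p_observe(ratings)
log_lik_missing = np.log(np.sum(p_r * (1.0 - p_obs)))
```

Under a missing-at-random assumption the second term would be a constant and could be dropped; modeling it jointly is what lets the method correct for the selection bias in which ratings are reported.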
Minerva: Enabling Low-Power, Highly-Accurate Deep Neural Network Accelerators
The continued success of Deep Neural Networks (DNNs) in classification tasks has sparked a trend of accelerating their execution with specialized hardware. While published designs easily give an …
EDDI: Efficient Dynamic Discovery of High-Value Information with Partial VAE
TLDR: EDDI combines a novel partial variational autoencoder, which predicts missing data entries probabilistically given any subset of the observed ones, with an acquisition function that maximizes the expected information gain on a set of target variables.
GANS for Sequences of Discrete Elements with the Gumbel-softmax Distribution
TLDR: This work evaluates GANs based on recurrent neural networks with Gumbel-softmax output distributions on the task of generating sequences of discrete elements, using a continuous approximation to the multinomial distribution parameterized in terms of the softmax function.
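The Gumbel-softmax trick itself is compact enough to sketch: perturb the logits with Gumbel noise and apply a temperature-controlled softmax, giving a differentiable relaxation of a categorical sample that lets gradients flow from the discriminator back into the generator.

```python
import numpy as np

def gumbel_softmax(logits, tau=1.0, rng=None):
    """Continuous relaxation of sampling from a categorical
    distribution: add Gumbel(0, 1) noise to the logits, then take a
    softmax at temperature tau. As tau -> 0 the output approaches a
    one-hot sample; larger tau gives smoother, lower-variance outputs."""
    rng = rng or np.random.default_rng(0)
    u = rng.uniform(1e-9, 1.0, size=np.shape(logits))
    g = -np.log(-np.log(u))                   # Gumbel(0, 1) noise
    z = (np.asarray(logits) + g) / tau
    e = np.exp(z - z.max())                   # numerically stable softmax
    return e / e.sum()

y = gumbel_softmax(np.log([0.7, 0.2, 0.1]), tau=0.5)
# y is a probability vector over the 3 categories (sums to 1)
```

In the sequence-GAN setting, each step of the recurrent generator emits logits over the vocabulary and the relaxed sample `y` is fed to the discriminator in place of a hard token.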
...