University of Cambridge
Automatic Chemical Design Using a Data-Driven Continuous Representation of Molecules
- Rafael Gómez-Bombarelli, D. Duvenaud, Alán Aspuru-Guzik
- Computer Science, ACS Central Science
- 7 October 2016
We report a method to convert discrete representations of molecules to and from a multidimensional continuous representation. This model allows us to generate new molecules for efficient exploration…
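The core idea is that once molecules live in a continuous latent space, property optimization becomes local search over vectors rather than over discrete strings. A minimal sketch of that loop follows; the `encode`, `decode`, and `property_score` functions are toy stand-ins for illustration, not the paper's trained networks or objective.

```python
import random

def encode(smiles):
    # Hypothetical stand-in: maps a molecule string to a 2-D latent vector.
    random.seed(hash(smiles) % 2**32)
    return [random.gauss(0.0, 1.0) for _ in range(2)]

def decode(z):
    # Hypothetical stand-in: maps a latent vector back to a molecule string.
    return "C" * max(1, int(round(abs(z[0]) * 4)))

def property_score(smiles):
    # Hypothetical surrogate objective (e.g. a predicted property).
    return len(smiles)

def optimize_in_latent_space(start, steps=50, step_size=0.1):
    """Random local search in the continuous representation:
    perturb the latent vector, decode, and keep improvements."""
    z = encode(start)
    best_mol = decode(z)
    best_score = property_score(best_mol)
    for _ in range(steps):
        cand = [zi + random.gauss(0.0, step_size) for zi in z]
        mol = decode(cand)
        score = property_score(mol)
        if score > best_score:
            z, best_mol, best_score = cand, mol, score
    return best_mol, best_score

best_mol, best_score = optimize_in_latent_space("CCO")
```

The paper itself uses gradient-based optimization with a learned property predictor; the random-search loop above only illustrates why a continuous representation makes such exploration possible at all.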
Probabilistic Backpropagation for Scalable Learning of Bayesian Neural Networks
This work presents a novel scalable method for learning Bayesian neural networks, called probabilistic backpropagation (PBP), which works by computing a forward propagation of probabilities through the network and then doing a backward computation of gradients.
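The "forward propagation of probabilities" in PBP relies on the fact that, for a linear unit with independent Gaussian weights and Gaussian inputs, the output's mean and variance have closed forms. A minimal sketch of that moment propagation for a single unit (the nonlinearity and the backward gradient step are omitted):

```python
def linear_unit_moments(input_mean, input_var, w_mean, w_var):
    """Propagate means and variances through a = sum_i w_i * x_i,
    assuming all w_i and x_i are mutually independent.

    Uses Var(w * x) = v * x_m**2 + m**2 * x_v + v * x_v for
    w ~ N(m, v) and x with mean x_m, variance x_v.
    """
    out_mean = sum(m * xm for m, xm in zip(w_mean, input_mean))
    out_var = sum(
        v * xm * xm + m * m * xv + v * xv
        for m, v, xm, xv in zip(w_mean, w_var, input_mean, input_var)
    )
    return out_mean, out_var

# Deterministic inputs (variance 0), uncertain weights:
mean, var = linear_unit_moments(
    input_mean=[1.0, -2.0], input_var=[0.0, 0.0],
    w_mean=[0.5, 0.25], w_var=[0.1, 0.1],
)
# mean = 0.5*1 + 0.25*(-2) = 0.0; var = 0.1*1 + 0.1*4 = 0.5
```

PBP stacks such moment-matching steps layer by layer, then differentiates the resulting marginal likelihood to update the weight means and variances.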
Grammar Variational Autoencoder
Surprisingly, it is shown that not only does the model more often generate valid outputs, but it also learns a more coherent latent space in which nearby points decode to similar discrete outputs.
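The validity guarantee comes from decoding production rules rather than characters: at each step, only rules whose left-hand side matches the current non-terminal are allowed, so invalid choices are masked out. A minimal sketch with a tiny hypothetical arithmetic grammar (not the paper's SMILES grammar):

```python
# Toy grammar: each non-terminal maps to its list of (lhs, rhs) rules.
GRAMMAR = {
    "S": [("S", ["S", "+", "T"]), ("S", ["T"])],
    "T": [("T", ["x"]), ("T", ["1"])],
}

def decode_with_grammar(rule_choices, start="S"):
    """Expand non-terminals left to right. rule_choices indexes only
    the rules valid for the current non-terminal (the mask), so every
    decoded string is syntactically valid by construction."""
    stack, out, i = [start], [], 0
    while stack:
        sym = stack.pop(0)
        if sym not in GRAMMAR:
            out.append(sym)          # terminal symbol: emit it
            continue
        rules = GRAMMAR[sym]         # mask: only rules with matching LHS
        _, rhs = rules[rule_choices[i] % len(rules)]
        i += 1
        stack = list(rhs) + stack    # expand leftmost non-terminal
    return "".join(out)

expr = decode_with_grammar([0, 1, 1, 0])  # -> "1+x"
```

In the actual model the `rule_choices` come from the decoder network's logits, with the same mask applied before sampling.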
Predictive Entropy Search for Efficient Global Optimization of Black-box Functions
This work proposes a novel information-theoretic approach for Bayesian optimization called Predictive Entropy Search (PES), which codifies the intractable acquisition function in terms of the expected reduction in the differential entropy of the predictive distribution.
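With a Gaussian predictive distribution, the differential entropy has a closed form, and the acquisition value reduces to the marginal entropy minus a Monte Carlo average of conditional entropies. A minimal sketch of that arithmetic (the variances below are hypothetical numbers standing in for GP posterior quantities):

```python
import math

def gaussian_entropy(variance):
    # Differential entropy of a univariate Gaussian: 0.5 * log(2*pi*e*var).
    return 0.5 * math.log(2.0 * math.pi * math.e * variance)

def pes_acquisition(pred_var, cond_vars):
    """Entropy-reduction estimate at one candidate input x.

    pred_var:  predictive variance of y at x under the current posterior.
    cond_vars: predictive variances at x after conditioning on sampled
               global-optimizer locations (one per Monte Carlo sample).
    """
    h_marginal = gaussian_entropy(pred_var)
    h_conditional = sum(gaussian_entropy(v) for v in cond_vars) / len(cond_vars)
    return h_marginal - h_conditional

# Conditioning on the optimizer shrinks the variance, so the value is positive:
value = pes_acquisition(1.0, [0.4, 0.5, 0.3])
```

The hard part of PES, which this sketch hides, is producing those conditional variances: the paper uses expectation propagation to approximate the predictive distribution conditioned on each sampled optimizer location.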
Predictive Entropy Search for Multi-objective Bayesian Optimization
Deep Gaussian Processes for Regression using Approximate Expectation Propagation
- T. Bui, D. Hernández-Lobato, José Miguel Hernández-Lobato, Yingzhen Li, Richard E. Turner
- Computer Science, ICML
- 12 February 2016
A new approximate Bayesian learning scheme is developed that enables DGPs to be applied to a range of medium- to large-scale regression problems for the first time, and is almost always better than state-of-the-art deterministic and sampling-based approximate inference methods for Bayesian neural networks.
Probabilistic Matrix Factorization with Non-random Missing Data
A probabilistic matrix factorization model for collaborative filtering is proposed that learns from data that is missing not at random (MNAR), obtaining improved performance over state-of-the-art methods both when predicting the ratings and when modeling the data observation process.
Minerva: Enabling Low-Power, Highly-Accurate Deep Neural Network Accelerators
- Brandon Reagen, P. Whatmough, D. Brooks
- Computer Science, ACM/IEEE 43rd Annual International Symposium on…
- 18 June 2016
The continued success of Deep Neural Networks (DNNs) in classification tasks has sparked a trend of accelerating their execution with specialized hardware. While published designs easily give an…
EDDI: Efficient Dynamic Discovery of High-Value Information with Partial VAE
- Chao Ma, Sebastian Tschiatschek, Konstantina Palla, José Miguel Hernández-Lobato, S. Nowozin, C. Zhang
- Computer Science, ICML
- 27 September 2018
EDDI proposes a novel partial variational autoencoder to predict missing data entries probabilistically given any subset of the observed ones, combined with an acquisition function that maximizes the expected information gain on a set of target variables.
GANS for Sequences of Discrete Elements with the Gumbel-softmax Distribution
This work evaluates the performance of GANs based on recurrent neural networks with Gumbel-softmax output distributions on the task of generating sequences of discrete elements, using a continuous approximation to the multinomial distribution parameterized in terms of the softmax function.
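The Gumbel-softmax trick at the heart of this approach is easy to state: add i.i.d. Gumbel(0, 1) noise to the logits and take a tempered softmax, yielding a differentiable relaxation of a one-hot categorical sample. A minimal self-contained sketch:

```python
import math
import random

def gumbel_softmax_sample(logits, temperature=1.0):
    """Draw a continuous relaxation of a one-hot categorical sample.

    Each logit gets independent Gumbel(0, 1) noise via the inverse-CDF
    transform -log(-log(U)); the tempered softmax then yields a
    probability vector that approaches a one-hot sample as the
    temperature goes to 0.
    """
    gumbels = []
    for _ in logits:
        u = min(max(random.random(), 1e-12), 1.0 - 1e-12)  # keep logs finite
        gumbels.append(-math.log(-math.log(u)))
    scores = [(l + g) / temperature for l, g in zip(logits, gumbels)]
    m = max(scores)                         # stabilized softmax
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    return [e / z for e in exps]

sample = gumbel_softmax_sample([2.0, 0.5, -1.0], temperature=0.5)
# entries are in [0, 1] and sum to 1, and are differentiable in the logits
```

In the GAN setting, the generator RNN emits such relaxed samples at every time step, which lets gradients from the discriminator flow back through the otherwise discrete sampling operation.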