We marry ideas from deep neural networks and approximate Bayesian inference to derive a generalised class of deep, directed generative models, endowed with a new algorithm for scalable inference and learning. Our algorithm introduces a recognition model to represent an approximate posterior distribution and uses this for optimisation of a variational …
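The recognition model described in this abstract is typically trained with the reparameterisation trick: rather than sampling a latent variable z directly from N(mu, sigma^2), one writes z = mu + sigma * eps with eps ~ N(0, 1), so gradients flow through mu and sigma. A minimal sketch of that idea, with illustrative names not taken from the paper:

```python
import math
import random

# Hedged sketch of the reparameterisation idea behind a Gaussian recognition
# model. This is not the paper's implementation; function names are ours.

def reparameterise(mu, log_var, rng=random.Random(0)):
    # z = mu + sigma * eps, with eps drawn from a standard normal,
    # keeping z differentiable with respect to mu and log_var.
    eps = rng.gauss(0.0, 1.0)
    return mu + math.exp(0.5 * log_var) * eps

def kl_to_standard_normal(mu, log_var):
    # Closed-form KL(N(mu, sigma^2) || N(0, 1)) for one latent dimension;
    # this term appears in the variational objective being optimised.
    return 0.5 * (math.exp(log_var) + mu * mu - 1.0 - log_var)
```

For example, `kl_to_standard_normal(0.0, 0.0)` is exactly 0 (the approximate posterior already equals the prior), and the penalty grows as mu moves away from zero.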
The ever-increasing size of modern data sets, combined with the difficulty of obtaining label information, has made semi-supervised learning a problem of significant practical importance in modern data analysis. We revisit the approach to semi-supervised learning with generative models and develop new models that allow for effective generalisation …
Preface. This thesis contributes to the field of Bayesian machine learning. Familiarity with most of the material in Bishop [2007], MacKay [2003] and Hastie et al. [2009] would thus be convenient for the reader. Sections which may be skipped by the expert reader without disrupting the flow of the text have been clearly marked with a "fast-forward" ([]) …
The use of L1 regularisation for sparse learning has generated immense research interest, with many successful applications in diverse areas such as signal acquisition, image coding, genomics and collaborative filtering. While existing work highlights the many advantages of L1 methods, in this paper we find that L1 regularisation often dramatically …
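The sparsity behaviour of L1 regularisation discussed here comes from its proximal operator, the soft-thresholding function, which sets small coefficients exactly to zero. A minimal sketch under our own naming (this is a generic illustration of L1 methods, not the paper's algorithm):

```python
# Hedged sketch: the soft-thresholding operator at the heart of many
# L1-regularised solvers (e.g. one step of ISTA for the lasso).

def soft_threshold(x, lam):
    """Closed-form solution of argmin_z 0.5*(z - x)**2 + lam*abs(z)."""
    if x > lam:
        return x - lam
    if x < -lam:
        return x + lam
    return 0.0  # coefficients inside [-lam, lam] are zeroed: sparsity

# A proximal-gradient update for least squares with an L1 penalty would be
#   w <- soft_threshold(w - step * gradient, step * lam)
```

For instance, `soft_threshold(3.0, 1.0)` shrinks the coefficient to 2.0, while `soft_threshold(0.5, 1.0)` zeroes it outright, which is exactly the mechanism that produces sparse solutions.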
Automated discovery of early visual concepts from raw image data is a major open challenge in AI research. Addressing this problem, we propose an unsupervised approach for learning disentangled representations of the underlying factors of variation. We draw inspiration from neuroscience, and show how this can be achieved in an unsupervised generative model …
We present a probabilistic model for learning non-negative tensor factorizations (NTF), in which the tensor factors are latent variables associated with each data dimension. The non-negativity constraint for the latent factors is handled by choosing priors with support on the non-negative numbers. Two Bayesian inference procedures based on Markov chain …
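The abstract's key device, enforcing non-negativity through the prior rather than through projection, can be illustrated with Gamma priors over the factors of a CP-style factorisation. A hedged sketch with illustrative shapes and rates (not the paper's model specification):

```python
import random

# Hedged sketch: a Gamma prior has support on the non-negative reals, so
# factors drawn from it are non-negative by construction -- no clipping or
# projection step is needed. Shape/rate values below are illustrative.

def sample_gamma_factor(dim, rank, shape=2.0, rate=1.0, rng=random.Random(0)):
    # One factor matrix of a rank-`rank` factorisation, every entry >= 0.
    return [[rng.gammavariate(shape, 1.0 / rate) for _ in range(rank)]
            for _ in range(dim)]

def reconstruct(A, B, C):
    # CP/PARAFAC form for a 3-way tensor: x_ijk = sum_r A[i][r]*B[j][r]*C[k][r].
    R = len(A[0])
    return [[[sum(A[i][r] * B[j][r] * C[k][r] for r in range(R))
              for k in range(len(C))]
             for j in range(len(B))]
            for i in range(len(A))]
```

Because every factor entry is a Gamma draw, the reconstructed tensor is non-negative as well; a full Bayesian treatment would then infer the factors by MCMC or variational methods rather than sampling them once from the prior.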
The classification of protein sequences into families is an important tool in the annotation of structural and functional properties to newly discovered proteins. We present a classification system using pattern recognition techniques to create a numerical vector representation of a protein sequence and then classify the sequence into a number of given …