
We marry ideas from deep neural networks and approximate Bayesian inference to derive a generalised class of deep, directed generative models, endowed with a new algorithm for scalable inference and learning. Our algorithm introduces a recognition model to represent an approximate posterior distribution and uses this for optimisation of a variational…
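The recognition-model idea can be made concrete with the Gaussian reparameterisation commonly used in this family of models. The sketch below (plain NumPy, illustrative function names) shows how a posterior sample becomes a differentiable function of the variational parameters, together with the closed-form KL term that appears in the variational objective:

```python
import numpy as np

def reparameterised_sample(mu, log_var, rng):
    """Draw z ~ N(mu, diag(exp(log_var))) as z = mu + sigma * eps,
    so the sample is a differentiable function of (mu, log_var)."""
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * log_var) * eps

def gaussian_kl(mu, log_var):
    """KL( N(mu, diag(exp(log_var))) || N(0, I) ), summed over dimensions."""
    return 0.5 * np.sum(np.exp(log_var) + mu**2 - 1.0 - log_var)

rng = np.random.default_rng(0)
mu, log_var = np.zeros(3), np.zeros(3)
z = reparameterised_sample(mu, log_var, rng)
kl = gaussian_kl(mu, log_var)  # zero when the posterior equals the prior
```

In practice `mu` and `log_var` would be outputs of the recognition network; the reparameterisation is what makes the Monte Carlo estimate of the variational bound amenable to gradient-based optimisation.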

The ever-increasing size of modern data sets, combined with the difficulty of obtaining label information, has made semi-supervised learning one of the most practically important problems in modern data analysis. We revisit the approach to semi-supervised learning with generative models and develop new models that allow for effective generalisation…

The choice of approximate posterior distribution is one of the core problems in variational inference. Most applications of variational inference employ simple families of posterior approximations in order to allow for efficient inference, focusing on mean-field or other simple structured approximations. This restriction has a significant impact on the…

- Yunus Saatçi, John P Cunningham, Ryan Turner, Marc Deisenroth, Shakir Mohamed, Ferenc Huszar +5 others
- 2011

Preface: This thesis contributes to the field of Bayesian machine learning. Familiarity with most of the material in Bishop [2007], MacKay [2003] and Hastie et al. [2009] would thus be convenient for the reader. Sections which may be skipped by the expert reader without disrupting the flow of the text have been clearly marked with a "fast-forward" symbol…

Principal Components Analysis (PCA) has become established as one of the key tools for dimensionality reduction when dealing with real-valued data. Approaches such as exponential family PCA and non-negative matrix factorisation have successfully extended PCA to non-Gaussian data types, but these techniques fail to take advantage of Bayesian inference and…
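As a point of reference for the extensions discussed, classical PCA itself reduces to an SVD of the centred data. A minimal sketch (NumPy, illustrative `pca` helper):

```python
import numpy as np

def pca(X, k):
    """Project centred data onto its top-k principal components via SVD."""
    Xc = X - X.mean(axis=0)
    # Rows of Vt are orthonormal principal directions, ordered by variance.
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T, Vt[:k]

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 5))
Z, components = pca(X, 2)  # scores (100, 2) and directions (2, 5)
```

Exponential family PCA and the Bayesian variants discussed above generalise exactly this Gaussian, maximum-likelihood special case.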

The use of L1 regularisation for sparse learning has generated immense research interest, with many successful applications in diverse areas such as signal acquisition, image coding, genomics and collaborative filtering. While existing work highlights the many advantages of L1 methods, in this paper we find that L1 regularisation often dramatically…
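The sparsity mechanism behind L1 regularisation can be seen in its proximal operator, the soft-thresholding function, which shrinks coefficients and sets small ones exactly to zero. A minimal sketch (NumPy, illustrative name):

```python
import numpy as np

def soft_threshold(w, lam):
    """Proximal operator of lam * ||w||_1: shrink each coefficient toward
    zero by lam, setting those with |w| <= lam exactly to zero."""
    return np.sign(w) * np.maximum(np.abs(w) - lam, 0.0)

w = np.array([3.0, -0.5, 1.2, 0.1])
w_sparse = soft_threshold(w, 1.0)  # the two small coefficients vanish
```

This operator is the building block of coordinate-descent and proximal-gradient solvers for the lasso, and the exact zeroing of coefficients is the source of both the advantages and the failure modes L1 methods exhibit.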

Introduction. Motivation: Non-parametric regression using Gaussian processes is one of the most popular and widely used models in machine learning, with application to binary and multi-class classification, as well as ordinal and Poisson regression.
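For the regression case, the GP predictive distribution has a well-known closed form. A minimal NumPy sketch (illustrative helper names; a squared-exponential kernel is assumed here):

```python
import numpy as np

def rbf_kernel(A, B, lengthscale=0.2, variance=1.0):
    """Squared-exponential kernel between row-vector inputs A and B."""
    d2 = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
    return variance * np.exp(-0.5 * d2 / lengthscale**2)

def gp_posterior(X, y, X_star, lengthscale=0.2, noise=1e-2):
    """GP regression predictive mean and variance at test inputs X_star."""
    K = rbf_kernel(X, X, lengthscale) + noise * np.eye(len(X))
    K_s = rbf_kernel(X, X_star, lengthscale)
    K_ss = rbf_kernel(X_star, X_star, lengthscale)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    mean = K_s.T @ alpha
    v = np.linalg.solve(L, K_s)
    var = np.diag(K_ss - v.T @ v)
    return mean, var

X = np.linspace(0, 1, 5)[:, None]
y = np.sin(2 * np.pi * X[:, 0])
mean, var = gp_posterior(X, y, X)  # near-interpolation at training inputs
```

The classification, ordinal and Poisson variants mentioned above replace the Gaussian likelihood, at which point this closed form is lost and approximate inference is required.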

- Irina Higgins, Loïc Matthey, Xavier Glorot, Arka Pal, Benigno Uria, Charles Blundell +2 others
- ArXiv
- 2016

Automated discovery of early visual concepts from raw image data is a major open challenge in AI research. Addressing this problem, we propose an unsupervised approach for learning disentangled representations of the underlying factors of variation. We draw inspiration from neuroscience, and show how this can be achieved in an unsupervised generative model…

We present a probabilistic model for learning non-negative tensor factorizations (NTF), in which the tensor factors are latent variables associated with each data dimension. The non-negativity constraint for the latent factors is handled by choosing priors with support on the non-negative numbers. Two Bayesian inference procedures based on Markov chain…
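The paper's Bayesian treatment handles non-negativity through the choice of priors. For contrast, the classical non-Bayesian, two-way (matrix) special case can be solved with Lee–Seung multiplicative updates, which preserve non-negativity by construction. A minimal sketch (NumPy, illustrative names):

```python
import numpy as np

def nmf(V, k, iters=200, rng=None):
    """Lee-Seung multiplicative updates for V ~ W @ H with W, H >= 0:
    the two-way, non-Bayesian special case of non-negative factorisation."""
    rng = rng or np.random.default_rng(0)
    n, m = V.shape
    W = rng.random((n, k)) + 0.1
    H = rng.random((k, m)) + 0.1
    for _ in range(iters):
        # Ratios of non-negative quantities keep every entry non-negative.
        H *= (W.T @ V) / (W.T @ W @ H + 1e-9)
        W *= (V @ H.T) / (W @ H @ H.T + 1e-9)
    return W, H

V = np.abs(np.random.default_rng(1).standard_normal((20, 10)))
W, H = nmf(V, 3)
```

The Bayesian tensor model generalises this along two axes: priors with non-negative support replace the multiplicative constraint, and MCMC or variational procedures replace the point-estimate updates.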

The classification of protein sequences into families is an important tool in the annotation of structural and functional properties to newly discovered proteins. We present a classification system using pattern recognition techniques to create a numerical vector representation of a protein sequence and then classify the sequence into a number of given…
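One common way to build such a numerical vector representation (not necessarily the one used in this system) is a k-mer count vector over the 20-letter amino-acid alphabet, which any standard classifier can then consume. A minimal sketch with illustrative names:

```python
from itertools import product
from collections import Counter

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def kmer_vector(sequence, k=2):
    """Counts of each length-k substring over the amino-acid alphabet,
    giving a fixed 20**k-dimensional vector for any sequence length."""
    index = {"".join(p): i for i, p in enumerate(product(AMINO_ACIDS, repeat=k))}
    counts = Counter(sequence[i:i + k] for i in range(len(sequence) - k + 1))
    vec = [0] * len(index)
    for kmer, c in counts.items():
        if kmer in index:  # skip substrings with non-standard letters
            vec[index[kmer]] = c
    return vec

v = kmer_vector("ACDA", k=2)  # counts the substrings AC, CD, DA
```

The fixed dimensionality is the point: sequences of arbitrary length map to vectors a pattern-recognition classifier can compare directly.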