Probabilistic Principal Component Analysis
Principal component analysis (PCA) is a ubiquitous technique for data analysis and processing, but one which is not based upon a probability model. In this paper we demonstrate how the principal axes …
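The probabilistic formulation this paper describes admits a closed-form maximum-likelihood solution: the loading matrix is built from the leading eigenvectors of the sample covariance, and the noise variance is the mean of the discarded eigenvalues. A minimal NumPy sketch follows; the function name `ppca_ml` is illustrative, not from the paper.

```python
import numpy as np

def ppca_ml(X, q):
    """Closed-form maximum-likelihood fit of probabilistic PCA.

    X : (n, d) data matrix; q : number of latent dimensions (q < d).
    Returns the (d, q) loading matrix W and the scalar noise variance.
    """
    n, d = X.shape
    Xc = X - X.mean(axis=0)                       # centre the data
    S = Xc.T @ Xc / n                             # sample covariance
    eigvals, eigvecs = np.linalg.eigh(S)          # ascending eigenvalues
    eigvals, eigvecs = eigvals[::-1], eigvecs[:, ::-1]
    sigma2 = eigvals[q:].mean()                   # ML noise variance: mean of discarded eigenvalues
    # ML loadings: leading eigenvectors scaled by sqrt(eigenvalue - noise variance)
    W = eigvecs[:, :q] * np.sqrt(np.maximum(eigvals[:q] - sigma2, 0.0))
    return W, sigma2
```

The model covariance `W @ W.T + sigma2 * I` then matches the sample covariance in the retained subspace while treating the remaining directions as isotropic noise.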
Mixtures of Probabilistic Principal Component Analyzers
PCA is formulated within a maximum likelihood framework, based on a specific form of Gaussian latent variable model, which leads to a well-defined mixture model for probabilistic principal component analyzers, whose parameters can be determined using an expectation-maximization algorithm.
Fast Marginal Likelihood Maximisation for Sparse Bayesian Models
This work describes a new and highly accelerated algorithm which exploits recently-elucidated properties of the marginal likelihood function to enable maximisation via a principled and efficient sequential addition and deletion of candidate basis functions.
Sparse Bayesian Learning and the Relevance Vector Machine
It is demonstrated that by exploiting a probabilistic Bayesian learning framework, the 'relevance vector machine' (RVM) can derive accurate prediction models which typically utilise dramatically fewer basis functions than a comparable SVM while offering a number of additional advantages.
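The sparsity mechanism behind the RVM is type-II maximum likelihood: each weight gets its own precision hyperparameter, and iterative re-estimation drives most precisions to infinity, zeroing the corresponding weights. A minimal regression sketch under those assumptions (the name `rvm_fit` and the fixed iteration count are illustrative, and no basis-function pruning is performed):

```python
import numpy as np

def rvm_fit(Phi, t, n_iter=300):
    """Sketch of RVM hyperparameter re-estimation for regression.

    Phi : (N, M) design matrix of basis functions; t : (N,) targets.
    Returns the posterior mean weights mu, precisions alpha, noise precision beta.
    """
    N, M = Phi.shape
    alpha = np.full(M, 1e-6)              # per-weight precision hyperparameters
    beta = 1.0 / (0.1 * np.var(t))        # initial noise precision
    for _ in range(n_iter):
        # Posterior over weights given current hyperparameters
        Sigma = np.linalg.inv(beta * Phi.T @ Phi + np.diag(alpha))
        mu = beta * Sigma @ Phi.T @ t
        # 'Well-determinedness' of each weight, then re-estimate alpha and beta
        gamma = 1.0 - alpha * np.diag(Sigma)
        alpha = gamma / (mu ** 2 + 1e-12)
        beta = (N - gamma.sum()) / (np.sum((t - Phi @ mu) ** 2) + 1e-12)
    return mu, alpha, beta
```

Irrelevant basis functions end up with very large `alpha` and near-zero posterior weight, which is the sparsity the abstract refers to.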
The Relevance Vector Machine
The Relevance Vector Machine is introduced, a Bayesian treatment of a generalised linear model of identical functional form to the SVM, and examples demonstrate that for comparable generalisation performance, the RVM requires dramatically fewer kernel functions.
Use of the Zero-Norm with Linear Models and Kernel Methods
Variational Relevance Vector Machines
This paper shows how the RVM can be formulated and solved within a completely Bayesian paradigm through the use of variational inference, thereby giving a posterior distribution over both parameters and hyperparameters.
Mixtures of Principal Component Analysers
Principal component analysis (PCA) is a ubiquitous technique for data analysis but one whose effective application is restricted by its global linear character. While global nonlinear variants of PCA …
Analysis of Sparse Bayesian Learning
It is shown that conditioned on an individual hyper-parameter, the marginal likelihood has a unique maximum which is computable in closed form, and it is further shown that if a derived 'sparsity criterion' is satisfied, this maximum is exactly equivalent to 'pruning' the corresponding parameter from the model.
Bayesian Image Super-Resolution
This paper develops a Bayesian treatment of the super-resolution problem in which the likelihood function for the image registration parameters is based on a marginalization over the unknown high-resolution image, and is rendered tractable through the introduction of a Gaussian process prior over images.