Gaussian Process Latent Variable Models for Visualisation of High Dimensional Data

Abstract

In this paper we introduce a new underlying probabilistic model for principal component analysis (PCA). Our formulation interprets PCA as a particular Gaussian process prior on a mapping from a latent space to the observed data-space. We show that if the prior’s covariance function constrains the mappings to be linear, the model is equivalent to PCA. We then extend the model by considering less restrictive covariance functions which allow non-linear mappings. This more general Gaussian process latent variable model (GP-LVM) is then evaluated as an approach to the visualisation of high dimensional data for three different data-sets. Additionally, our non-linear algorithm can be further kernelised, leading to ‘twin kernel PCA’ in which a mapping between feature spaces occurs.
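The abstract's core idea can be sketched in code: treat the latent points themselves as parameters and optimise them to maximise the Gaussian process marginal likelihood of the observed data. The sketch below is a minimal, illustrative implementation assuming an RBF (squared-exponential) covariance with fixed, hand-picked hyperparameters and gradient-free optimisation; the function names and parameter values are our own choices, not taken from the paper.

```python
import numpy as np
from scipy.optimize import minimize

def rbf_kernel(X, lengthscale=1.0, variance=1.0, noise=1e-2):
    # Squared-exponential covariance over the latent points X (N x q),
    # plus a small noise term on the diagonal for numerical stability.
    sq = np.sum(X**2, 1)[:, None] + np.sum(X**2, 1)[None, :] - 2.0 * X @ X.T
    return variance * np.exp(-0.5 * sq / lengthscale**2) + noise * np.eye(len(X))

def neg_log_likelihood(x_flat, Y, q):
    # GP-LVM objective: negative log marginal likelihood of the data Y
    # given latents X, up to a constant:
    #   (D/2) log|K| + (1/2) tr(K^{-1} Y Y^T)
    N, D = Y.shape
    X = x_flat.reshape(N, q)
    K = rbf_kernel(X)
    _, logdet = np.linalg.slogdet(K)
    return 0.5 * D * logdet + 0.5 * np.trace(np.linalg.solve(K, Y @ Y.T))

def gplvm_embed(Y, q=2, iters=200):
    # Initialise the latents with PCA (the linear-kernel solution the
    # paper recovers as a special case), then optimise them numerically.
    Yc = Y - Y.mean(0)
    _, _, Vt = np.linalg.svd(Yc, full_matrices=False)
    X0 = Yc @ Vt[:q].T
    res = minimize(neg_log_likelihood, X0.ravel(), args=(Y, q),
                   method="L-BFGS-B", options={"maxiter": iters})
    return res.x.reshape(len(Y), q)

# Example: embed 20 five-dimensional points into a 2-D latent space.
Y = np.random.default_rng(0).normal(size=(20, 5))
X = gplvm_embed(Y, q=2)
```

With a linear kernel in place of `rbf_kernel`, the optimum recovers the PCA subspace, which is the equivalence the abstract describes; the non-linear covariance is what makes the latent embedding more flexible.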

481 Citations


Cite this paper

@inproceedings{Lawrence2003GaussianPL,
  title     = {Gaussian Process Latent Variable Models for Visualisation of High Dimensional Data},
  author    = {Neil D. Lawrence},
  booktitle = {NIPS},
  year      = {2003}
}