
- Michalis K. Titsias
- AISTATS
- 2009

Sparse Gaussian process methods that use inducing variables require the selection of the inducing inputs and the kernel hyperparameters. We introduce a variational formulation for sparse approximations that jointly infers the inducing inputs and the kernel hyperparameters by maximizing a lower bound of the true log marginal likelihood. The key property of…
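The lower bound in question is the paper's collapsed variational bound. With $m$ inducing inputs, noise variance $\sigma^2$, and $Q_{nn} = K_{nm}K_{mm}^{-1}K_{mn}$ the low-rank approximation to the full kernel matrix $K_{nn}$, it takes the form:

```latex
F_V \;=\; \log \mathcal{N}\!\left(\mathbf{y} \,\middle|\, \mathbf{0},\; \sigma^2 I + Q_{nn}\right) \;-\; \frac{1}{2\sigma^2}\,\mathrm{Tr}\!\left(K_{nn} - Q_{nn}\right)
```

The trace term penalizes inducing inputs that summarize the training inputs poorly, which is what makes jointly optimizing inducing inputs and hyperparameters well behaved.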

- Michalis K. Titsias, Neil D. Lawrence
- AISTATS
- 2010

We introduce a variational inference framework for training the Gaussian process latent variable model and thus performing Bayesian nonlinear dimensionality reduction. This method allows us to variationally integrate out the input variables of the Gaussian process and compute a lower bound on the exact marginal likelihood of the nonlinear latent variable…
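Schematically, the variational treatment bounds the intractable marginal likelihood by introducing a distribution $q(X)$ over the latent inputs (a generic sketch of the bound's shape; the paper's full derivation additionally uses inducing variables to make the expectation tractable):

```latex
\log p(Y) \;=\; \log \int p(Y \mid X)\, p(X)\, dX \;\geq\; \mathbb{E}_{q(X)}\!\left[\log p(Y \mid X)\right] \;-\; \mathrm{KL}\!\left(q(X)\,\|\,p(X)\right)
```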

- Christopher K. I. Williams, Michalis K. Titsias
- Neural Computation
- 2004

We consider data that are images containing views of multiple objects. Our task is to learn about each of the objects present in the images. This task can be approached as a factorial learning problem, where each image must be explained by instantiating a model for each of the objects present with the correct instantiation parameters. A major problem with… (More)

Standard Gaussian processes (GPs) model observation noise as constant throughout input space. This is often too restrictive an assumption, but one that is needed for GP inference to be tractable. In this work we present a non-standard variational approximation that allows accurate inference in heteroscedastic GPs (i.e., under input-dependent noise…
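Concretely, heteroscedastic GP regression replaces the constant noise variance with an input-dependent one, commonly parameterized through a second GP on the log noise level (notation here is illustrative, not the paper's exact formulation):

```latex
y_i = f(x_i) + \varepsilon_i, \qquad \varepsilon_i \sim \mathcal{N}\!\left(0,\, \sigma^2(x_i)\right), \qquad \sigma^2(x) = e^{g(x)}, \quad g \sim \mathcal{GP}
```

The coupling between $f$ and the noise process $g$ is what breaks standard conjugacy and motivates the variational approximation.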

We propose a simple and effective variational inference algorithm based on stochastic optimisation that can be widely applied to Bayesian non-conjugate inference in continuous parameter spaces. This algorithm is based on stochastic approximation and allows for efficient use of gradient information from the model joint density. We demonstrate these…
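The flavor of such algorithms can be illustrated with a generic reparameterization-style stochastic gradient estimator in one dimension. This is a simplified sketch, not the authors' exact estimator: the target density, step sizes, and iteration counts below are all illustrative choices.

```python
import math
import random

random.seed(0)

# Stand-in for a non-conjugate posterior: unnormalised N(3, 1).
# Only its gradient is needed, mirroring the use of gradient
# information from the model joint density.
def grad_log_p(theta):
    return -(theta - 3.0)

def step(mu, s, lr):
    """One stochastic ascent step on the ELBO for q = N(mu, e^{2s})."""
    sigma = math.exp(s)
    eps = random.gauss(0.0, 1.0)
    theta = mu + sigma * eps               # reparameterised sample
    g = grad_log_p(theta)                  # model gradient at the sample
    mu += lr * g                           # single-sample dELBO/dmu
    s += lr * (g * sigma * eps + 1.0)      # likelihood term + entropy term
    return mu, s

mu, s = 0.0, 0.0
for _ in range(2000):                      # coarse phase
    mu, s = step(mu, s, 0.1)

avg_mu, avg_sigma, n_avg = 0.0, 0.0, 8000
for _ in range(n_avg):                     # fine phase with iterate averaging
    mu, s = step(mu, s, 0.01)
    avg_mu += mu / n_avg
    avg_sigma += math.exp(s) / n_avg

print(avg_mu, avg_sigma)                   # should approach the exact 3 and 1
```

Because the target here is itself Gaussian, the fitted variational parameters can be checked against the known answer, which is a convenient sanity test for any such estimator.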

We introduce a variational Bayesian inference algorithm which can be widely applied to sparse linear models. The algorithm is based on the spike and slab prior which, from a Bayesian perspective, is the gold standard for sparse inference. We apply the method to a general multi-task and multiple kernel learning model in which a common set of Gaussian…
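As an illustration of the prior itself (not of the authors' variational algorithm), a spike-and-slab weight vector can be sampled as $w_j = z_j b_j$ with $z_j$ Bernoulli and $b_j$ Gaussian; the parameter names below are illustrative.

```python
import random

random.seed(1)

def sample_spike_slab(dim, pi=0.2, slab_var=1.0):
    """Draw a weight vector from a spike-and-slab prior: each
    w_j = z_j * b_j, where z_j ~ Bernoulli(pi) selects the slab
    and z_j = 0 puts w_j exactly at zero (the spike)."""
    w = []
    for _ in range(dim):
        z = 1 if random.random() < pi else 0
        b = random.gauss(0.0, slab_var ** 0.5)
        w.append(z * b)
    return w

w = sample_spike_slab(10000, pi=0.2)
nonzero = [x for x in w if x != 0.0]
print(len(nonzero) / len(w))   # close to pi = 0.2
```

Exact zeros with positive probability are what distinguish this prior from continuous shrinkage priors such as the Laplace prior behind the lasso.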

- Motivation example (slide fragment): private information: RGB color and depth map; shared information: depicted gesture

- Michalis K. Titsias
- NIPS
- 2007

We present a probability distribution over non-negative integer valued matrices with possibly an infinite number of columns. We also derive a stochastic process that reproduces this distribution over equivalence classes. This model can play the role of the prior in nonparametric Bayesian learning scenarios where multiple latent features are associated with…
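A finite-column sketch in the spirit of such models draws a rate per column from a Gamma prior and then Poisson counts per entry; as the number of columns grows, most columns stay inactive. This is only an illustrative finite approximation, the paper's exact nonparametric construction differs, and `K` and `alpha` below are arbitrary choices.

```python
import math
import random

random.seed(2)

def sample_poisson(lam):
    """Knuth's multiplication method; fine for the small rates here."""
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= L:
            return k
        k += 1

def gamma_poisson_matrix(n_rows, K, alpha=2.0):
    """Finite-K approximation: column rates lam_k ~ Gamma(alpha/K, 1),
    entries Z[n][k] ~ Poisson(lam_k).  Splitting the total mass alpha
    across K columns keeps the expected total count fixed while most
    individual columns end up all-zero."""
    rates = [random.gammavariate(alpha / K, 1.0) for _ in range(K)]
    return [[sample_poisson(lam) for lam in rates] for _ in range(n_rows)]

Z = gamma_poisson_matrix(100, 50)
active = sum(1 for k in range(50) if any(row[k] for row in Z))
print(active)  # typically only a handful of the 50 columns are used
```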


- Constantinos Constantinopoulos, Michalis K. Titsias, Aristidis Likas
- IEEE Transactions on Pattern Analysis and Machine…
- 2006

We present a Bayesian method for mixture model training that simultaneously treats the feature selection and the model selection problems. The method is based on the integration of a mixture model formulation that takes into account the saliency of the features and a Bayesian approach to mixture learning that can be used to estimate the number of mixture…
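Feature-saliency mixture formulations of this kind typically give each feature $d$ a saliency $\rho_d$ that mixes a component-specific density against a common background density independent of the component (a generic form; the paper's exact model and priors differ in detail):

```latex
p(\mathbf{x}) \;=\; \sum_{j=1}^{J} \pi_j \prod_{d=1}^{D} \left[\, \rho_d\, p(x_d \mid \theta_{jd}) \;+\; (1-\rho_d)\, q(x_d \mid \lambda_d) \,\right]
```

A feature with $\rho_d \approx 0$ contributes nothing to distinguishing the components, which is how feature selection falls out of the same inference that selects the number of components.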