Unsupervised Variational Bayesian Learning of Nonlinear Models

Abstract

In this paper we present a framework for using multi-layer perceptron (MLP) networks in nonlinear generative models trained by variational Bayesian learning. The nonlinearity is handled by linearizing it using a Gauss–Hermite quadrature at the hidden neurons. This yields an accurate approximation even in cases of large posterior variance. The method can be used to derive nonlinear counterparts of linear algorithms such as factor analysis, independent component/factor analysis, and state-space models. This is demonstrated with a nonlinear factor analysis experiment in which as many as 20 sources can be estimated from a real-world speech data set.
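The central numerical device in the abstract is propagating a Gaussian posterior through the hidden-neuron nonlinearity via Gauss–Hermite quadrature. The following is a minimal sketch (not the authors' implementation) of that idea: for x ~ N(mu, var), the mean and variance of f(x) are approximated with the standard physicists' Gauss–Hermite nodes and weights from NumPy. The function name `gh_moments` and the choice of 20 quadrature points are illustrative assumptions.

```python
import numpy as np

def gh_moments(f, mu, var, n=20):
    """Approximate E[f(x)] and Var[f(x)] for x ~ N(mu, var)
    using n-point Gauss-Hermite quadrature.

    This is an illustrative sketch of the quadrature step, not the
    paper's actual algorithm; `f`, `n`, and the helper name are
    hypothetical choices.
    """
    # Physicists' Hermite nodes t_i and weights w_i:
    # E[f(x)] ~= (1/sqrt(pi)) * sum_i w_i f(mu + sqrt(2*var) * t_i)
    t, w = np.polynomial.hermite.hermgauss(n)
    x = mu + np.sqrt(2.0 * var) * t
    fx = f(x)
    mean = np.sum(w * fx) / np.sqrt(np.pi)
    second = np.sum(w * fx**2) / np.sqrt(np.pi)
    return mean, second - mean**2

# Example: a tanh hidden neuron with a wide (high-variance) posterior,
# the regime where naive first-order linearization is least accurate.
m, v = gh_moments(np.tanh, 0.5, 4.0)
```

For a linear f the quadrature is exact, which gives a quick sanity check; for saturating nonlinearities like tanh it remains accurate even when the input variance is large, matching the claim in the abstract.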



Cite this paper

@inproceedings{Honkela2004UnsupervisedVB,
  title     = {Unsupervised Variational Bayesian Learning of Nonlinear Models},
  author    = {Antti Honkela and Harri Valpola},
  booktitle = {NIPS},
  year      = {2004}
}