A Note on Variational Bayesian Inference


This self-contained note describes variational Bayesian approaches, and their collapsed variants, to unsupervised clustering. The methods are demonstrated on cancer microarray data sets, where they outperform classical Gaussian mixture models.

1 LPD probabilistic model

Let d index the samples, g the genes (attributes), and k the processes (clusters). The numbers of processes, genes, and samples are denoted K, G, and D respectively. We begin by recalling LPD [8], a form of Gaussian mixture model. First, let the number of processes K be fixed. Each datum Ed is associated with a multiple-process latent variable Zd = {Zdg : g = 1, . . . , G}, where each Zdg is a K-dimensional unit-basis vector; that is, Zdg choosing process k is represented by Zdg,k = 1 and Zdg,j = 0 for j ≠ k. Given the mixing coefficients θd, the conditional distribution of Zd is

    p(Zd | θd) = ∏_{g=1}^{G} ∏_{k=1}^{K} θd,k^{Zdg,k}.
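As a minimal illustration of the notation above (not code from the paper), the sketch below builds one-hot latent vectors Zdg for a single sample d and evaluates the categorical probability p(Zd | θd) = ∏_g ∏_k θd,k^{Zdg,k}. The dimensions K and G and the random draws are toy choices for demonstration only.

```python
import numpy as np

# Toy dimensions (assumptions, not from the paper): K processes, G genes.
K, G = 3, 5
rng = np.random.default_rng(0)

# Mixing coefficients theta_d for sample d: a point on the K-simplex.
theta_d = rng.dirichlet(np.ones(K))

# Z_d: one K-dimensional unit-basis (one-hot) vector per gene g.
assignments = rng.integers(0, K, size=G)  # process chosen for each gene
Z_d = np.eye(K)[assignments]              # shape (G, K), each row one-hot

# p(Z_d | theta_d) = prod over g and k of theta_{d,k} ** Z_{dg,k}.
# Entries with Z_{dg,k} = 0 contribute a factor of 1.
p = np.prod(theta_d ** Z_d)

# Equivalently, the product of theta_{d,k(g)} over the chosen processes.
assert np.isclose(p, np.prod(theta_d[assignments]))
```

Because each row of Z_d has exactly one entry equal to 1, the double product collapses to a product over the G selected mixing coefficients, which the final assertion checks.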



@inproceedings{Ying2009ANO, title={A Note on Variational Bayesian Inference}, author={Yiming Ying}, year={2009} }