A View of the EM Algorithm That Justifies Incremental, Sparse, and Other Variants

Abstract

The EM algorithm performs maximum likelihood estimation for data in which some variables are unobserved. We present a function that resembles negative free energy and show that the M step maximizes this function with respect to the model parameters and the E step maximizes it with respect to the distribution over the unobserved variables. From this perspective, it is easy to justify an incremental variant of the EM algorithm in which the distribution for only one of the unobserved variables is recalculated in each E step. This variant is shown empirically to give faster convergence in a mixture estimation problem. A variant of the algorithm that exploits sparse conditional distributions is also described, and a wide range of other variant algorithms are also seen to be possible.
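As a brief sketch of the function the abstract refers to (the notation below is illustrative rather than quoted from the paper): with observed data $x$, unobserved variables $z$, model parameters $\theta$, and an arbitrary distribution $q$ over $z$, a negative-free-energy function can be written as

\[
F(q, \theta) \;=\; \mathbb{E}_{q}\!\big[\log p(x, z \mid \theta)\big] + H(q)
\;=\; \log p(x \mid \theta) \;-\; \mathrm{KL}\!\big(q(z) \,\big\|\, p(z \mid x, \theta)\big).
\]

Under this reading, the E step maximizes $F$ over $q$ with $\theta$ fixed (attained by $q(z) = p(z \mid x, \theta)$), and the M step maximizes $F$ over $\theta$ with $q$ fixed. Variants that only partially increase $F$ in either argument, such as recomputing $q$ for a single unobserved variable per E step, are then justified by the same coordinate-ascent argument.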
