• Corpus ID: 822242

A Gentle Tutorial of the EM Algorithm and its Application to Parameter Estimation for Gaussian Mixture and Hidden Markov Models

@article{Bilmes1998AGT,
  title={A Gentle Tutorial of the EM Algorithm and its Application to Parameter Estimation for Gaussian Mixture and Hidden Markov Models},
  author={Jeff A. Bilmes},
  journal={CTIT technical reports series},
  year={1998}
}
  • J. Bilmes
  • Published 1998
  • Computer Science
  • CTIT technical reports series
We describe the maximum-likelihood parameter estimation problem and how the Expectation-Maximization (EM) algorithm can be used for its solution. We first describe the abstract form of the EM algorithm as it is often given in the literature. We then develop the EM parameter estimation procedure for two applications: 1) finding the parameters of a mixture of Gaussian densities, and 2) finding the parameters of a hidden Markov model (HMM) (i.e., the Baum-Welch algorithm) for both discrete and… 
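
As a concrete sketch of the first application, here is a minimal EM loop for a one-dimensional mixture of Gaussians; the function name `em_gmm`, the initialization scheme, and the synthetic data are illustrative assumptions, not code from the tutorial.

```python
# Minimal EM sketch for a 1-D Gaussian mixture (illustrative, not the
# tutorial's code; initialization and iteration count are arbitrary).
import numpy as np

def em_gmm(x, n_components=2, n_iter=100, seed=0):
    rng = np.random.default_rng(seed)
    n = x.shape[0]
    pi = np.full(n_components, 1.0 / n_components)        # mixing weights
    mu = rng.choice(x, size=n_components, replace=False)  # component means
    var = np.full(n_components, x.var())                  # component variances
    for _ in range(n_iter):
        # E-step: responsibility of each component for each point.
        dens = (pi / np.sqrt(2 * np.pi * var)
                * np.exp(-0.5 * (x[:, None] - mu) ** 2 / var))
        resp = dens / dens.sum(axis=1, keepdims=True)
        # M-step: weighted maximum-likelihood re-estimates.
        nk = resp.sum(axis=0)
        pi = nk / n
        mu = (resp * x[:, None]).sum(axis=0) / nk
        var = (resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk
    return pi, mu, var

# Example: recover two well-separated components.
rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(-2, 1, 500), rng.normal(3, 0.5, 500)])
print(em_gmm(x))
```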

EM Algorithm and its Application

The general structure of the EM algorithm and its convergence guarantee are introduced, and a Gaussian mixture model is used to demonstrate how the EM algorithm can be applied under the maximum-likelihood (ML) criterion.

The EM Algorithm as a Lower Bound Optimization Technique

The purpose of this report is to present the EM algorithm in a more self-contained way, from a lower-bounding viewpoint, and to show how it can be used to find the parameters of a mixture of densities.
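
The lower-bounding viewpoint can be summarized as follows (a standard derivation in our own notation, not necessarily the report's):

```latex
% For any distribution q over the hidden variable z, Jensen's
% inequality bounds the log likelihood from below:
\[
  \log p(x \mid \theta)
    = \log \sum_z q(z)\, \frac{p(x, z \mid \theta)}{q(z)}
    \;\ge\; \sum_z q(z)\, \log \frac{p(x, z \mid \theta)}{q(z)}
    =: \mathcal{L}(q, \theta).
\]
% E-step: set q(z) = p(z | x, theta_old), which makes the bound tight.
% M-step: maximize L(q, theta) over theta with q held fixed.
```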

On Convergence Problems of the EM Algorithm for Finite Gaussian Mixtures

This paper traces and discusses the origin of finite Gaussian mixture models and provides some theoretical evidence on how the maximum likelihood estimates of the model parameters can be iteratively computed in an elegant way.

A Note on the Expectation-Maximization (EM) Algorithm

The EM algorithm is a hill-climbing approach, so it is only guaranteed to reach a local maximum; a common remedy is to use a much simpler model (ideally one with a unique global maximum) to determine an initial value for the more complex model.
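
In practice this initialization idea is often realized by seeding EM with a simpler clustering model (k-means) and restarting from several initial values. A brief sketch using scikit-learn's GaussianMixture (an assumed dependency; the data and parameter values are illustrative):

```python
# Mitigating EM's local-maximum behaviour: initialize from k-means and
# keep the best of several restarts (illustrative parameter choices).
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(-2, 1, 500),
                    rng.normal(3, 0.5, 500)]).reshape(-1, 1)

gm = GaussianMixture(
    n_components=2,
    init_params="kmeans",  # seed EM from a simpler clustering model
    n_init=5,              # several restarts; best log likelihood wins
).fit(x)
print(gm.means_.ravel(), gm.weights_)
```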

Robust estimation by expectation maximization algorithm

  • K. Koch
  • Mathematics
    Journal of Geodesy
  • 2012
A mixture of normal distributions is assumed for the observations of a linear model. The first component of the mixture represents the measurements without gross errors, while each of the remaining… 

On the parameters Estimation of The Generalized Gaussian Mixture Model

This paper aims to provide a realistic distribution based on the mixture of generalized Gaussian distributions (MGG), which has the advantage of characterizing the variability of the shape parameter in each component of the mixture.
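
For reference, the generalized Gaussian density in a standard parameterization (our notation; the paper's may differ). The shape parameter controls the tail behaviour, recovering the Laplacian and Gaussian as special cases:

```latex
% Generalized Gaussian density with location mu, scale alpha > 0, and
% shape beta > 0; beta = 1 gives the Laplacian, beta = 2 the Gaussian.
\[
  f(x \mid \mu, \alpha, \beta)
    = \frac{\beta}{2\alpha\, \Gamma(1/\beta)}
      \exp\!\Bigl( -\bigl( |x - \mu| / \alpha \bigr)^{\beta} \Bigr),
  \qquad \alpha > 0,\; \beta > 0.
\]
```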

Generalized Gaussian mixture model

The EM Algorithm for Generalized Exponential Mixture Model

  • Yueyang Teng, Tie Zhang
  • Computer Science
    2010 International Conference on Computational Intelligence and Software Engineering
  • 2010
The generalized exponential mixture model, which includes the Gaussian mixture and the Laplacian mixture as special cases, is studied, and an EM algorithm is developed to estimate the parameters of the mixture model.

A Probabilistic Expectation Maximization Algorithm for Multivariate Laplacian Mixtures

It is proved that, with high probability, single update steps of the probabilistic variant of the EM algorithm do not differ much from the deterministic solution, assuming certain properties of the input data set.
...

References

Showing 1-10 of 172 references

Mixture densities, maximum likelihood, and the EM algorithm

This work discusses the formulation and theoretical and practical properties of the EM algorithm, a specialization to the mixture density context of a general algorithm used to approximate maximum-likelihood estimates for incomplete data problems.

On Convergence Properties of the EM Algorithm for Gaussian Mixtures

The mathematical connection between the Expectation-Maximization (EM) algorithm and gradient-based approaches for maximum-likelihood learning of finite Gaussian mixtures is built up, and an explicit expression is provided for the matrix that maps the gradient of the log likelihood to the EM update step.
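
The connection can be stated compactly (our paraphrase of the result, in our own notation; see the paper for the exact conditions):

```latex
% EM viewed as preconditioned gradient ascent. With ell(theta) the
% incomplete-data log likelihood, each EM iteration has the form
\[
  \theta^{(t+1)}
    = \theta^{(t)}
      + P\bigl(\theta^{(t)}\bigr)\, \nabla_{\theta}\, \ell\bigl(\theta^{(t)}\bigr),
\]
% where P(theta) is a positive-definite matrix depending on the current
% parameter value, so every step ascends the likelihood surface.
```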

ON THE CONVERGENCE PROPERTIES OF THE EM ALGORITHM

Two convergence aspects of the EM algorithm are studied: (i) does the EM algorithm find a local maximum or a stationary value of the (incomplete-data) likelihood function? (ii) does the sequence of parameter estimates generated by EM converge?
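
For context, the ascent property that motivates both questions is standard and worth stating (notation is ours):

```latex
% Each EM iteration cannot decrease the incomplete-data log likelihood,
\[
  \ell\bigl(\theta^{(t+1)}\bigr) \;\ge\; \ell\bigl(\theta^{(t)}\bigr),
\]
% which guarantees convergence of the likelihood values, but not by
% itself convergence of the parameter sequence to a (local) maximizer.
```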

Monte Carlo Implementation of Gaussian Process Models for Bayesian Regression and Classification

Software is now available that implements Gaussian process methods using covariance functions with hierarchical parameterizations, which can discover high-level properties of the data, such as which inputs are relevant to predicting the response.

Comparison of Approximate Methods for Handling Hyperparameters

  • D. MacKay
  • Computer Science
    Neural Computation
  • 1999
Two approximate methods for computational implementation of Bayesian hierarchical models that include unknown hyperparameters such as regularization constants and noise levels are examined, and the evidence framework is shown to introduce negligible predictive error under straightforward conditions.

Hierarchical Mixtures of Experts and the EM Algorithm

An Expectation-Maximization (EM) algorithm is presented for adjusting the parameters of the tree-structured architecture for supervised learning, together with an on-line learning algorithm in which the parameters are updated incrementally.

Models of Noise and Robust Estimates

It is shown that, for a class of functions V, using robust estimators corresponds to assuming that data are corrupted by Gaussian noise whose variance fluctuates according to some given probability distribution that uniquely determines the shape of V.
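
In symbols, this is a Gaussian scale mixture (our rendering; the symbols are ours, not the paper's):

```latex
% The noise density is a zero-mean Gaussian whose variance sigma^2 is
% itself random:
\[
  p(\varepsilon)
    = \int_{0}^{\infty}
        \mathcal{N}\bigl(\varepsilon \mid 0, \sigma^{2}\bigr)\,
        p(\sigma^{2})\, \mathrm{d}\sigma^{2}.
\]
% Example: an exponential distribution on sigma^2 yields Laplacian
% noise, whose negative log density gives the robust L1-type estimator.
```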

A Bayesian Approach to Robust Binary Nonparametric Regression

A comprehensive approach is presented in which the function estimates are smoothing splines with the smoothing parameters integrated out; the estimates are made robust to outliers and can handle a wide range of link functions.

Evaluation of Gaussian processes and other methods for non-linear regression

It is shown that a Bayesian approach to learning in multi-layer perceptron neural networks achieves better performance than the commonly used early stopping procedure, even for reasonably short amounts of computation time.
...