• Corpus ID: 11384375

A Tutorial on MM Algorithms

@inproceedings{Ange2007GeneralAT,
  title={A Tutorial on MM Algorithms},
  author={Kenneth Lange},
  year={2007}
}
Most problems in frequentist statistics involve optimization of a function such as a likelihood or a sum of squares. EM algorithms are among the most effective algorithms for maximum likelihood estimation because they consistently drive the likelihood uphill by maximizing a simple surrogate function for the log-likelihood. Iterative optimization of a surrogate function as exemplified by an EM algorithm does not necessarily require missing data. Indeed, every EM algorithm is a special case of… 
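The surrogate idea above can be illustrated with the classic example from the MM literature: computing a sample median by repeatedly minimizing a quadratic majorizer of the non-smooth objective Σᵢ |xᵢ − θ|. The sketch below is a minimal illustration of that MM iteration, not code from the paper; the function name and the `eps` safeguard are assumptions for this example.

```python
import numpy as np

def mm_median(x, theta0=0.0, n_iter=100, eps=1e-12):
    """MM iteration for the sample median: minimize sum_i |x_i - theta|.

    Each step majorizes |x_i - theta| at the current iterate theta_k by the
    quadratic (x_i - theta)**2 / (2*|x_i - theta_k|) + |x_i - theta_k| / 2.
    Minimizing the sum of these majorizers gives a weighted mean with
    weights 1 / |x_i - theta_k|, which drives the true objective downhill.
    """
    x = np.asarray(x, dtype=float)
    theta = theta0
    for _ in range(n_iter):
        # Guard against division by zero when theta lands on a data point.
        w = 1.0 / np.maximum(np.abs(x - theta), eps)
        theta = np.sum(w * x) / np.sum(w)
    return theta

print(mm_median([1.0, 2.0, 10.0]))  # converges toward the median, 2.0
```

Because each update minimizes a function that lies above the objective and touches it at the current iterate, the objective decreases monotonically, which is exactly the descent property the abstract attributes to EM-style surrogate optimization.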


References

Showing 1–10 of 35 references
EM algorithms without missing data
A theoretical perspective clarifies the operation of the EM algorithm and suggests novel generalizations that lead to highly stable algorithms with well-understood local and global convergence properties in medical statistics.
MM algorithms for generalized Bradley-Terry models
The Bradley-Terry model for paired comparisons is a simple and much-studied means to describe the probabilities of the possible outcomes when individuals are judged against one another in pairs. Among
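The MM approach to the Bradley-Terry model admits a particularly simple fixed-point update: each player's strength is re-estimated as total wins divided by a sum of pairwise comparison counts weighted by current strengths. The sketch below is an illustrative implementation of that style of update, not code from the cited paper; the function name and the dense `wins` matrix encoding are assumptions for this example.

```python
import numpy as np

def bradley_terry_mm(wins, n_iter=200):
    """MM-style updates for Bradley-Terry strengths.

    wins[i][j] = number of times player i beat player j. Each sweep applies
        gamma_i <- W_i / sum_{j != i} n_ij / (gamma_i + gamma_j)
    where W_i is i's total wins and n_ij = wins[i][j] + wins[j][i] is the
    number of comparisons between i and j; strengths are then normalized,
    since the model is only identified up to a common scale factor.
    """
    wins = np.asarray(wins, dtype=float)
    n = wins + wins.T              # comparisons between each pair
    W = wins.sum(axis=1)           # total wins per player
    m = len(W)
    gamma = np.ones(m)
    for _ in range(n_iter):
        denom = np.array([
            sum(n[i, j] / (gamma[i] + gamma[j]) for j in range(m) if j != i)
            for i in range(m)
        ])
        gamma = W / denom
        gamma /= gamma.sum()       # normalize for identifiability
    return gamma

# Example: player 0 beats player 1 three times and loses once,
# so the fitted strengths settle at a 3:1 ratio.
print(bradley_terry_mm([[0, 3], [1, 0]]))
```

Each sweep is again a surrogate-maximization step, so the (log-)likelihood increases monotonically, provided every player has at least one win and the comparison graph is connected.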
Optimization Transfer Using Surrogate Objective Functions
Because optimization transfer algorithms often exhibit the slow convergence of EM algorithms, two methods of accelerating optimization transfer are discussed and evaluated in the context of specific problems.
A Connection Between Variable Selection and EM-Type Algorithms
A connection is established between local quadratic approximation and the so-called MM algorithms, useful extensions of the EM algorithms, to analyze the local and global convergence of the local quadratic approximation algorithm by employing the techniques used for EM algorithms.
Using EM to Obtain Asymptotic Variance-Covariance Matrices: The SEM Algorithm
This article defines and illustrates a procedure that obtains numerically stable asymptotic variance–covariance matrices using only the code for computing the complete-data variance–covariance matrix, the code for the expectation–maximization algorithm, and code for standard matrix operations.
ON THE CONVERGENCE PROPERTIES OF THE EM ALGORITHM
Two convergence aspects of the EM algorithm are studied: (i) does the EM algorithm find a local maximum or a stationary value of the (incompletedata) likelihood function? (ii) does the sequence of
Normal/Independent Distributions and Their Applications in Robust Regression
Maximum likelihood estimation with nonnormal error distributions provides one method of robust regression. Certain families of normal/independent distributions are particularly attractive
Globally convergent algorithms for maximum a posteriori transmission tomography
Preliminary numerical testing of the algorithms on simulated data suggests that the convex algorithm and the ad hoc gradient algorithm are computationally superior to the EM algorithm.
Monotonicity of quadratic-approximation algorithms
It is desirable that a numerical maximization algorithm monotonically increase its objective function for the sake of its stability of convergence. It is here shown how one can adjust the
A modified expectation maximization algorithm for penalized likelihood estimation in emission tomography
  • A. R. De Pierro
  • Mathematics
  • IEEE Trans. Medical Imaging, 1995
The new method is a natural extension of the EM algorithm for maximizing penalized likelihood with concave priors in emission tomography, and convergence proofs are given.