A Class of Two-Timescale Stochastic EM Algorithms for Nonconvex Latent Variable Models

@article{Karimi2022ACO,
  title={A Class of Two-Timescale Stochastic EM Algorithms for Nonconvex Latent Variable Models},
  author={Belhal Karimi and Ping Li},
  journal={ArXiv},
  year={2022},
  volume={abs/2203.10186}
}
The Expectation-Maximization (EM) algorithm is a popular choice for learning latent variable models. Variants of EM were initially introduced by Neal and Hinton (1998), using incremental updates to scale to large datasets, and by Wei and Tanner (1990) and Delyon et al. (1999), using Monte Carlo (MC) approximations to bypass the conditional expectation of the latent data, which is intractable for most nonconvex models. In this paper, we propose a general class of methods called Two-Timescale EM…
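To fix ideas, here is a minimal sketch of a two-timescale stochastic EM step on a toy two-component Gaussian mixture with unit variances. The mini-batch E-step, the pair of averaged statistics s_fast and s_slow, and the stepsize schedules rho_t and gamma_t are illustrative placeholders for the general template, not the paper's exact updates.

import numpy as np

rng = np.random.default_rng(0)
n, batch = 10_000, 100
X = np.concatenate([rng.normal(-2, 1, n // 2), rng.normal(2, 1, n // 2)])

mu = np.array([-1.0, 1.0])      # component means (parameters to learn)
pi = np.array([0.5, 0.5])       # mixing weights
s_fast = np.zeros((2, 2))       # fast statistics: columns [mean resp, mean resp*x]
s_slow = np.zeros((2, 2))       # slowly averaged statistics

for t in range(1, 2001):
    x = X[rng.integers(0, n, batch)][:, None]
    # E-step on the mini-batch: posterior responsibilities
    logp = -0.5 * (x - mu) ** 2 + np.log(pi)
    r = np.exp(logp - logp.max(axis=1, keepdims=True))
    r /= r.sum(axis=1, keepdims=True)
    s_mb = np.stack([r.mean(axis=0), (r * x).mean(axis=0)], axis=1)
    # Two timescales: fast Monte Carlo averaging, slower smoothing
    rho, gamma = t ** -0.6, t ** -0.9
    s_fast = (1 - rho) * s_fast + rho * s_mb
    s_slow = (1 - gamma) * s_slow + gamma * s_fast
    # M-step: map the smoothed statistics back to parameters
    pi = s_slow[:, 0] / s_slow[:, 0].sum()
    mu = s_slow[:, 1] / np.maximum(s_slow[:, 0], 1e-12)

print(np.sort(mu))  # approaches [-2, 2]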

References

Showing 1–10 of 40 references

A Stochastic Path Integral Differential EstimatoR Expectation Maximization Algorithm

TLDR
A novel EM algorithm, called SPIDER-EM, for inference from a training set of size n, n ≫ 1, adapted from the stochastic path-integrated differential estimator (SPIDER) technique; it improves over state-of-the-art algorithms.
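The SPIDER technique referenced here maintains a running estimate that is refreshed on a full pass and then corrected along the iterate path between refreshes. A hedged sketch of that recursion on a generic finite-sum problem; the least-squares objective, batch size, epoch length q, and stepsize are toy choices, not the paper's E-step statistics.

import numpy as np

rng = np.random.default_rng(1)
n, d, q, lr = 1000, 5, 50, 0.1
A = rng.normal(size=(n, d)); b = rng.normal(size=n)

def grad(theta, idx):
    # per-sample least-squares gradients, averaged over the index set
    return A[idx].T @ (A[idx] @ theta - b[idx]) / len(idx)

theta, theta_prev, v = np.zeros(d), np.zeros(d), np.zeros(d)
for t in range(500):
    if t % q == 0:                       # periodic full refresh
        v = grad(theta, np.arange(n))
    else:                                # path-integrated correction
        idx = rng.integers(0, n, 10)
        v = v + grad(theta, idx) - grad(theta_prev, idx)
    theta_prev, theta = theta, theta - lr * v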

On‐line expectation–maximization algorithm for latent data models

TLDR
A generic on‐line version of the expectation–maximization (EM) algorithm, applicable to latent variable models of independent observations and suitable for conditional models, as illustrated on the mixture of linear regressions model.
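A sketch of the per-observation online EM recursion on a mixture of two linear regressions, the conditional model the summary mentions: sufficient statistics are moved by a stepsize rho_t toward their value at the new observation, and the M-step is in closed form. The schedule, the unit noise variance in the E-step, and the initialization are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(2)
beta_true = np.array([[1.0, -2.0], [-1.0, 2.0]])   # [intercept, slope] per component

s0 = np.full(2, 0.5)               # running E[responsibility] per component
sxx = np.stack([np.eye(2)] * 2)    # running E[resp * x x^T]
sxy = np.zeros((2, 2))             # running E[resp * x y]
beta = rng.normal(size=(2, 2))
pi = np.array([0.5, 0.5])

for t in range(1, 20_001):
    k = rng.integers(0, 2)                       # simulate one observation
    x = np.array([1.0, rng.normal()])
    y = x @ beta_true[k] + 0.3 * rng.normal()
    # E-step: responsibilities under current parameters
    logp = -0.5 * (y - beta @ x) ** 2 + np.log(pi)
    r = np.exp(logp - logp.max()); r /= r.sum()
    # stochastic-approximation update of the statistics
    rho = (1 + t) ** -0.6
    s0 += rho * (r - s0)
    sxx += rho * (r[:, None, None] * np.outer(x, x) - sxx)
    sxy += rho * (r[:, None] * x * y - sxy)
    # M-step in closed form
    pi = s0 / s0.sum()
    beta = np.stack([np.linalg.solve(sxx[j], sxy[j]) for j in range(2)])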

Stochastic Expectation Maximization with Variance Reduction

TLDR
It is shown that sEM-vr has the same exponential asymptotic convergence rate as batch EM, and only requires a constant step size to achieve this rate, which alleviates the burden of parameter tuning.
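The variance-reduction mechanism can be pictured as an SVRG-style correction against a per-epoch snapshot, which is what makes a constant step size viable; the finite-sum objective and constants below are toy stand-ins, not sEM-vr's exact statistic updates.

import numpy as np

rng = np.random.default_rng(3)
n, d, m, alpha = 1000, 5, 200, 0.5
A = rng.normal(size=(n, d)); b = rng.normal(size=n)

def stat(theta, idx):
    # per-sample quantities (here: least-squares gradients) averaged over idx
    return A[idx].T @ (A[idx] @ theta - b[idx]) / len(idx)

theta = np.zeros(d)
for epoch in range(10):
    snapshot = theta.copy()
    full = stat(snapshot, np.arange(n))   # anchor, computed once per epoch
    for _ in range(m):
        idx = rng.integers(0, n, 10)
        # variance-reduced estimate: mini-batch value corrected by the snapshot
        v = stat(theta, idx) - stat(snapshot, idx) + full
        theta -= alpha * v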

Mini-batch learning of exponential family finite mixture models

TLDR
It is demonstrated that the mini-batch algorithm for mixtures of normal distributions can outperform the standard EM algorithm, and a scheme for the stochastic stabilization of the constructed mini-batch algorithms is proposed.

On the choice of the number of blocks with the incremental EM algorithm for the fitting of normal mixtures

TLDR
A simple rule is proposed for choosing the number of blocks with the IEM algorithm; in the extreme case of one observation per block, it provides efficient updating formulas that avoid direct calculation of the inverses and determinants of the component-covariance matrices.

Stochastic First- and Zeroth-Order Methods for Nonconvex Stochastic Programming

TLDR
This paper discusses a variant of the algorithm that applies a post-optimization phase to evaluate a short list of solutions generated by several independent runs of the RSG method, and shows that this modification significantly improves the large-deviation properties of the algorithm.
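A hedged sketch of that two-phase scheme: each independent run returns an iterate chosen at a random index (the randomized stopping rule of RSG), and the post-optimization phase keeps the candidate whose estimated gradient norm is smallest. The quadratic objective, run count, and sample sizes are toy choices.

import numpy as np

rng = np.random.default_rng(5)
d = 5

def stoch_grad(x):
    # noisy gradient of f(x) = 0.5 * ||x||^2, a toy stand-in for the objective
    return x + 0.1 * rng.normal(size=d)

def rsg_run(iters=200, lr=0.1):
    x = rng.normal(size=d)
    stop = rng.integers(0, iters)     # random output index, the 'R' in RSG
    out = x.copy()
    for t in range(iters):
        if t == stop:
            out = x.copy()
        x -= lr * stoch_grad(x)
    return out

candidates = [rsg_run() for _ in range(5)]
# post-optimization: estimate the gradient norm at each candidate with fresh samples
norms = [np.linalg.norm(np.mean([stoch_grad(c) for _ in range(100)], axis=0))
         for c in candidates]
best = candidates[int(np.argmin(norms))]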

Online EM for functional data

Coupling a stochastic approximation version of EM with an MCMC procedure

The stochastic approximation version of EM (SAEM) proposed by Delyon et al. (1999) is a powerful alternative to EM when the E-step is intractable. Convergence of SAEM toward a maximum of the observed likelihood…
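A minimal sketch of that coupling on the toy latent model z ~ N(theta, 1), y | z ~ N(z, 1), where one Metropolis–Hastings sweep replaces exact simulation from p(z | y; theta); the stepsize gamma_t and the proposal scale are illustrative choices.

import numpy as np

rng = np.random.default_rng(4)
n, theta_true = 500, 1.5
y = rng.normal(rng.normal(theta_true, 1, n), 1)   # observed data

theta, z, s = 0.0, np.zeros(n), 0.0
for t in range(1, 501):
    # simulation step: one MH sweep targeting p(z | y; theta)
    prop = z + 0.5 * rng.normal(size=n)
    log_acc = (-0.5 * (prop - theta) ** 2 - 0.5 * (y - prop) ** 2) \
            - (-0.5 * (z - theta) ** 2 - 0.5 * (y - z) ** 2)
    z = np.where(np.log(rng.uniform(size=n)) < log_acc, prop, z)
    # stochastic approximation step on the sufficient statistic E[z | y]
    gamma = t ** -0.7
    s += gamma * (z.mean() - s)
    theta = s                          # M-step is the identity for this model

print(theta, y.mean())  # theta approaches the MLE, mean(y)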

Online EM Algorithm for Hidden Markov Models

TLDR
Although the proposed online EM algorithm resembles a classical stochastic approximation (or Robbins–Monro) algorithm, it is sufficiently different to resist conventional convergence analysis; the paper provides limited results that identify the potential limiting points of the recursion as well as the large-sample behavior of the quantities involved in the algorithm.

Geom-Spider-EM: Faster Variance Reduced Stochastic Expectation Maximization for Nonconvex Finite-Sum Optimization

  • G. Fort, É. Moulines, Hoi-To Wai
  • Computer Science
    ICASSP 2021 - 2021 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)
  • 2021
TLDR
This paper proposes an extension of the Stochastic Path-Integrated Differential EstimatoR EM (SPIDER-EM) and derives complexity bounds for this novel algorithm, designed to solve smooth nonconvex finite-sum optimization problems.