• Corpus ID: 55169848

A Novel Algorithm for Clustering of Data on the Unit Sphere via Mixture Models

@article{Nguyen2017ANA,
  title={A Novel Algorithm for Clustering of Data on the Unit Sphere via Mixture Models},
  author={Hien Duy Nguyen},
  journal={arXiv: Computation},
  year={2017}
}
  • H. Nguyen
  • Published 14 September 2017
  • Computer Science
  • arXiv: Computation
A new maximum approximate likelihood (ML) estimation algorithm for mixtures of Kent distributions is proposed. The new algorithm is constructed via the BSLM (block successive lower-bound maximization) framework and incorporates manifold optimization procedures. The BSLM algorithm is iterative and monotonically increases the approximate log-likelihood function at each step. Under mild regularity conditions, the BSLM algorithm is proved to be convergent and the approximate ML…
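The monotone-ascent property can be seen in a toy sketch of blockwise ascent. Exact maximization over one parameter block at a time is the simplest instance of BSLM (each block's surrogate lower bound is the objective itself); the quadratic objective and update rules below are purely illustrative and not from the paper.

```python
import numpy as np

def f(x, y):
    """Toy smooth concave objective in two parameter blocks."""
    return -(x - 1) ** 2 - (y - 2) ** 2 - (x - y) ** 2

def bslm_toy(n_iter=50):
    """Exact blockwise maximization: the simplest BSLM instance
    (each block's surrogate lower bound is the objective itself)."""
    x, y = 0.0, 0.0
    history = [f(x, y)]
    for _ in range(n_iter):
        x = (1.0 + y) / 2.0   # argmax over x with y fixed
        y = (2.0 + x) / 2.0   # argmax over y with x fixed
        history.append(f(x, y))
    return x, y, history
```

Because each block update exactly maximizes the objective over that block, the recorded objective values can never decrease, mirroring the monotonicity guarantee claimed for the BSLM iterations.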

Citations

Robust Feature-Based Point Registration Using Directional Mixture Model

Universal approximation on the hypersphere

It is well known that any continuous probability density function on $\mathbb{R}^m$ can be approximated arbitrarily well by a finite mixture of normal distributions, provided that the number of…

References

Showing 1–10 of 43 references

movMF: An R Package for Fitting Mixtures of von Mises-Fisher Distributions

The main fitting function of the R package movMF is described and illustrated: the package provides functionality to draw samples from finite mixtures of von Mises–Fisher distributions and to fit these models with the expectation-maximization algorithm for maximum likelihood estimation.
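For readers who want to see the moving parts of such a fitter, below is a minimal EM loop for a von Mises–Fisher mixture in Python. This is an illustrative sketch, not the movMF package itself: the function names are made up, and the concentration update uses the well-known closed-form approximation of Banerjee et al. rather than an exact solve.

```python
import numpy as np
from scipy.special import ive  # exponentially scaled modified Bessel function

def log_vmf(X, mu, kappa):
    """Log-density of the von Mises-Fisher distribution on the sphere S^{d-1}."""
    d = X.shape[1]
    s = d / 2.0 - 1.0
    # log normalizer, using ive(s, k) = iv(s, k) * exp(-k) for stability
    log_c = s * np.log(kappa) - (d / 2.0) * np.log(2 * np.pi) \
            - (np.log(ive(s, kappa)) + kappa)
    return log_c + kappa * (X @ mu)

def fit_movmf(X, k, n_iter=100, seed=0):
    """EM for a k-component von Mises-Fisher mixture (illustrative sketch)."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    # farthest-point initialization for the mean directions
    mus = [X[rng.integers(n)]]
    for _ in range(1, k):
        sims = np.max(X @ np.array(mus).T, axis=1)
        mus.append(X[int(np.argmin(sims))])
    mus = np.array(mus)
    kappas = np.full(k, 1.0)
    pis = np.full(k, 1.0 / k)
    for _ in range(n_iter):
        # E-step: posterior responsibilities, computed in log-space
        logp = np.stack([np.log(pis[j]) + log_vmf(X, mus[j], kappas[j])
                         for j in range(k)], axis=1)
        logp -= logp.max(axis=1, keepdims=True)
        r = np.exp(logp)
        r /= r.sum(axis=1, keepdims=True)
        # M-step: weighted mean direction, concentration, and mixing weight
        for j in range(k):
            w = r[:, j]
            s = X.T @ w                          # weighted resultant vector
            rbar = np.linalg.norm(s) / w.sum()   # mean resultant length
            mus[j] = s / np.linalg.norm(s)
            # Banerjee et al. closed-form approximation for the concentration
            kappas[j] = (rbar * d - rbar ** 3) / (1.0 - rbar ** 2)
            pis[j] = w.sum() / n
    return pis, mus, kappas
```

The log-space E-step avoids underflow when the concentrations grow large, which they do quickly for tightly clustered directional data.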

Estimation and model selection for model-based clustering with the conditional classification likelihood

The Integrated Completed Likelihood (ICL) criterion is proved to be an approximation of one of the proposed criteria; insights into the class notion underlying ICL are given, feeding a reflection on the notion of a class in clustering.

Clustering on the Unit Hypersphere using von Mises-Fisher Distributions

Proposes a generative mixture-model approach to clustering directional data based on the von Mises–Fisher distribution, which arises naturally for data distributed on the unit hypersphere, and derives and analyzes two variants of the Expectation-Maximization framework for estimating the mean and concentration parameters of this mixture.
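In the hard-assignment ("classification") limit of this EM scheme, with equal mixing weights and one shared concentration taken large, the procedure reduces to spherical k-means on unit vectors. A minimal sketch follows; the function name and the farthest-point initialization are illustrative choices, not details from the paper.

```python
import numpy as np

def spherical_kmeans(X, k, n_iter=50, seed=0):
    """Spherical k-means on unit vectors: the hard-assignment limit of
    von Mises-Fisher mixture EM (equal weights, large shared concentration)."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    # farthest-point initialization: spread the starting centers out
    centers = [X[rng.integers(n)]]
    for _ in range(1, k):
        sims = np.max(X @ np.array(centers).T, axis=1)
        centers.append(X[int(np.argmin(sims))])
    centers = np.array(centers)
    labels = np.zeros(n, dtype=int)
    for _ in range(n_iter):
        labels = np.argmax(X @ centers.T, axis=1)   # assign by cosine similarity
        for j in range(k):
            pts = X[labels == j]
            if len(pts):                            # keep old center if cluster empties
                s = pts.sum(axis=0)
                centers[j] = s / np.linalg.norm(s)  # normalized mean direction
    return labels, centers
```

Each sweep assigns points to the most similar center and then renormalizes the per-cluster mean direction, which maximizes total cosine similarity blockwise.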

On the choice of starting values for the EM algorithm in fitting mixture models

We consider the problem of finding suitable starting values for the EM algorithm in the fitting of finite mixture models to multivariate data. Given that the likelihood equation often has multiple…
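A standard remedy for the multiple-roots problem described here is to run EM from several random starts and keep the fit with the highest final log-likelihood. The sketch below does this for a two-component one-dimensional Gaussian mixture; the functions are an illustrative toy, not the authors' procedure.

```python
import numpy as np

def em_gmm1d(x, n_iter=200, rng=None):
    """EM for a two-component 1-D Gaussian mixture from one random start."""
    rng = rng if rng is not None else np.random.default_rng()
    mu = rng.choice(x, 2, replace=False).astype(float)  # means drawn from the data
    var = np.array([x.var(), x.var()])
    pi = np.array([0.5, 0.5])
    for _ in range(n_iter):
        # E-step: responsibilities under current parameters
        dens = np.exp(-(x[:, None] - mu) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)
        r = pi * dens
        r /= r.sum(axis=1, keepdims=True)
        # M-step: weighted means, variances (floored), and weights
        nk = r.sum(axis=0)
        mu = (r * x[:, None]).sum(axis=0) / nk
        var = np.maximum((r * (x[:, None] - mu) ** 2).sum(axis=0) / nk, 1e-6)
        pi = nk / len(x)
    dens = np.exp(-(x[:, None] - mu) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)
    loglik = np.log((pi * dens).sum(axis=1)).sum()
    return (pi, mu, var), loglik

def best_of_restarts(x, n_starts=10, seed=0):
    """Run EM from several random starts; keep the highest log-likelihood fit."""
    rng = np.random.default_rng(seed)
    return max((em_gmm1d(x, rng=rng) for _ in range(n_starts)),
               key=lambda fit: fit[1])
```

Comparing final log-likelihoods across restarts is the simplest defense against EM converging to a poor local maximum from an unlucky start.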

MM Algorithms for Some Discrete Multivariate Distributions

  • Hua Zhou, K. Lange
  • Mathematics, Computer Science
  • Journal of Computational and Graphical Statistics
  • 2010
This article derives MM algorithms for maximum likelihood estimation with discrete multivariate distributions such as the Dirichlet-multinomial and Connor–Mosimann distributions, the Neerchal–Morel distribution, the negative multinomial distribution, certain distributions on partitions, and zero-truncated and zero-inflated distributions.

The multivariate Watson distribution: Maximum-likelihood estimation and other aspects

An unsupervised clustering algorithm for data on the unit hypersphere

On a Mixture Model for Directional Data on the Sphere

We consider mixtures of general angular central Gaussian distributions as models for multimodal directional data. We prove consistency of the maximum-likelihood estimates of model parameters and…

A new look at the statistical model identification

The history of the development of statistical hypothesis testing in time series analysis is reviewed briefly, and it is pointed out that the hypothesis testing procedure is not adequately defined as…

The EM algorithm and extensions

The EM Algorithm and Extensions describes the formulation of the EM algorithm, details its methodology, discusses its implementation, and illustrates applications in many statistical contexts, opening the door to the tremendous potential of this remarkably versatile statistical tool.