On strong identifiability and convergence rates of parameter estimation in finite mixtures

@article{Ho2016OnSI,
  title={On strong identifiability and convergence rates of parameter estimation in finite mixtures},
  author={Nhat Ho and XuanLong Nguyen},
  journal={Electronic Journal of Statistics},
  year={2016},
  volume={10},
  pages={271--307}
}
Abstract: This paper studies identifiability and convergence behaviors for parameters of multiple types, including matrix-variate ones, that arise in finite mixtures, and the effects of model fitting with extra mixing components. We consider several notions of strong identifiability in a matrix-variate setting, and use them to establish sharp inequalities relating the distance of mixture densities to the Wasserstein distances of the corresponding mixing measures. Characterization of…
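The abstract's inequalities compare mixture densities through the Wasserstein distance between their mixing measures, i.e. between discrete measures of the form G = Σᵢ pᵢ δ_{θᵢ}. As a rough illustration (the atoms and weights below are made up for the example, not taken from the paper), the first-order distance W₁ between two one-dimensional mixing measures can be computed from the integral of the difference of their CDFs:

```python
def wasserstein1(atoms_u, wts_u, atoms_v, wts_v):
    """W1 distance between two discrete probability measures on the real
    line, computed as the integral of |F_u(x) - F_v(x)| over x, where
    F_u, F_v are the measures' CDFs."""
    points = sorted(set(atoms_u) | set(atoms_v))
    total, Fu, Fv = 0.0, 0.0, 0.0
    for left, right in zip(points, points[1:]):
        # accumulate the mass each measure places at the left endpoint
        Fu += sum(w for a, w in zip(atoms_u, wts_u) if a == left)
        Fv += sum(w for a, w in zip(atoms_v, wts_v) if a == left)
        # CDFs are constant on (left, right); add |F_u - F_v| * length
        total += abs(Fu - Fv) * (right - left)
    return total

# Two hypothetical mixing measures: G = 0.5*d(0) + 0.5*d(2),
# G0 = 0.5*d(0) + 0.5*d(3)
print(wasserstein1([0.0, 2.0], [0.5, 0.5], [0.0, 3.0], [0.5, 0.5]))  # 0.5
```

Here half the mass must move from atom 2.0 to atom 3.0, a distance of 1, so W₁ = 0.5 · 1 = 0.5.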

Optimal Bayesian estimation of Gaussian mixtures with growing number of components
We study posterior concentration properties of Bayesian procedures for estimating finite Gaussian mixtures in which the number of components is unknown and allowed to grow with the sample size. Under…
Singularity Structures and Impacts on Parameter Estimation in Finite Mixtures of Distributions
This study makes explicit the deep links between model singularities, parameter estimation convergence rates and minimax lower bounds, and the algebraic geometry of the parameter space for mixtures of continuous distributions.
Robust estimation of mixing measures in finite mixture models
In finite mixture models, apart from the underlying mixing measure, the true kernel density function of each subpopulation in the data is, in many scenarios, unknown. Perhaps the most popular approach is to…
Identifiability of Nonparametric Mixture Models and Bayes Optimal Clustering
This work establishes general conditions under which families of nonparametric mixture models are identifiable by introducing a novel framework for clustering overfitted parametric (i.e. misspecified) mixture models, and applies these results to partition-based clustering, generalizing the well-known notion of a Bayes optimal partition from classical model-based clustering to nonparametric settings.
Uniform Convergence Rates for Maximum Likelihood Estimation under Two-Component Gaussian Mixture Models
We derive uniform convergence rates for the maximum likelihood estimator and minimax lower bounds for parameter estimation in two-component location-scale Gaussian mixture models with unequal…
Optimal estimation of high-dimensional location Gaussian mixtures
Central to the results is the observation that the information geometry of finite Gaussian mixtures is characterized by the moment tensors of the mixing distribution, whose low-rank structure can be exploited to obtain a sharp local entropy bound.
On posterior contraction of parameters and interpretability in Bayesian mixture modeling
We study posterior contraction behaviors for parameters of interest in the context of Bayesian mixture modeling, where the number of mixing components is unknown while the model itself may or may not…
Strong identifiability and optimal minimax rates for finite mixture estimation
Estimating the Number of Components in Finite Mixture Models via the Group-Sort-Fuse Procedure
Estimation of the number of components (or order) of a finite mixture model is a long-standing and challenging problem in statistics. We propose the Group-Sort-Fuse (GSF) procedure, a new penalized…
A non-asymptotic model selection in block-diagonal mixture of polynomial experts models
A block-diagonal localized mixture of polynomial experts (BLoMPE) regression model is investigated, which is constructed upon an inverse regression and block-diagonal structures of the Gaussian expert covariance matrices, and a penalized maximum likelihood selection criterion is introduced to estimate the unknown conditional density of the regression model.
