# Regularization of Mixture Models for Robust Principal Graph Learning

```bibtex
@article{Bonnaire2021RegularizationOM,
  title   = {Regularization of Mixture Models for Robust Principal Graph Learning},
  author  = {Tony Bonnaire and Aur{\'e}lien Decelle and Nabila Aghanim},
  journal = {IEEE Transactions on Pattern Analysis and Machine Intelligence},
  year    = {2021},
  volume  = {PP},
  pages   = {1-1}
}
```

A regularized version of Mixture Models is proposed to learn a principal graph from a distribution of D-dimensional data points. In the particular case of manifold learning for ridge detection, we assume that the underlying structure can be modeled as a graph acting as a topological prior for the Gaussian clusters, turning the problem into a maximum a posteriori estimation. Parameters of the model are iteratively estimated through an Expectation-Maximization procedure, making the learning of…
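The abstract describes an EM procedure over Gaussian clusters whose centres are tied together by a graph prior. The paper's exact regularization is not reproduced here; the following is a minimal sketch of plain EM for an isotropic Gaussian mixture, with a comment marking where a MAP variant would inject the graph prior. The farthest-point initialization is an illustrative choice, not taken from the paper.

```python
import numpy as np

def em_gmm(X, K, n_iter=50, seed=0):
    """Minimal EM for an isotropic Gaussian mixture (illustrative only;
    the paper's method adds a graph-based prior on the cluster centres)."""
    rng = np.random.default_rng(seed)
    N, D = X.shape
    # farthest-point initialization of the centres (hypothetical choice)
    mu = np.empty((K, D))
    mu[0] = X[rng.integers(N)]
    for k in range(1, K):
        d2min = ((X[:, None, :] - mu[None, :k, :]) ** 2).sum(-1).min(axis=1)
        mu[k] = X[np.argmax(d2min)]
    sigma2 = np.full(K, X.var())          # isotropic variances
    pi = np.full(K, 1.0 / K)              # mixing weights
    for _ in range(n_iter):
        # E-step: responsibilities r[n, k] proportional to pi_k N(x_n | mu_k, sigma2_k I)
        d2 = ((X[:, None, :] - mu[None, :, :]) ** 2).sum(-1)
        log_r = np.log(pi) - 0.5 * d2 / sigma2 - 0.5 * D * np.log(sigma2)
        log_r -= log_r.max(axis=1, keepdims=True)
        r = np.exp(log_r)
        r /= r.sum(axis=1, keepdims=True)
        # M-step: closed-form updates (a MAP version would add the
        # gradient of the graph prior to the mu update)
        Nk = r.sum(axis=0)
        mu = (r.T @ X) / Nk[:, None]
        d2 = ((X[:, None, :] - mu[None, :, :]) ** 2).sum(-1)
        sigma2 = (r * d2).sum(axis=0) / (D * Nk) + 1e-12
        pi = Nk / N
    return mu, sigma2, pi
```

On well-separated clusters this recovers the component means; the regularized version constrains neighbouring centres on the graph to stay close, yielding a smooth principal graph rather than free-floating clusters.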

## References

Showing 1-10 of 39 references.

### Regularization-free principal curve estimation

- Mathematics · J. Mach. Learn. Res.
- 2013

This paper introduces a new objective function, obtained by modifying the principal curve estimation approach, whose critical points are all principal curves and minima, removing a fundamental model-selection issue in principal curve estimation.

### Nonparametric Ridge Estimation

- Mathematics, Computer Science · ArXiv
- 2012

Ridge estimation extends mode finding; it is useful for understanding the structure of a density and for uncovering hidden structure in point-cloud data.

### Robust and Scalable Learning of Complex Intrinsic Dataset Geometry via ElPiGraph

- Computer Science · Entropy
- 2020

ElPiGraph exploits and further develops the concept of elastic energy, the topological graph grammar approach, and a gradient descent-like optimization of the graph topology, and is capable of approximating data point clouds via principal graph ensembles.

### Laplacian Eigenmaps for Dimensionality Reduction and Data Representation

- Computer Science · Neural Computation
- 2003

This work proposes a geometrically motivated algorithm for representing high-dimensional data that provides a computationally efficient approach to nonlinear dimensionality reduction, with locality-preserving properties and a natural connection to clustering.
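The algorithm summarized above can be sketched in a few lines: build a symmetric k-nearest-neighbour graph, form the graph Laplacian, and solve the generalized eigenproblem L y = λ D y, keeping the eigenvectors with the smallest non-zero eigenvalues. This dense numpy sketch uses simple 0/1 edge weights (the paper also discusses heat-kernel weights); the symmetric normalization trick is a standard substitute for a generalized eigensolver, not a detail from the paper.

```python
import numpy as np

def laplacian_eigenmaps(X, n_components=2, n_neighbors=5):
    """Dense sketch of Laplacian Eigenmaps with 0/1 k-NN weights."""
    N = X.shape[0]
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    # adjacency: connect each point to its k nearest neighbours, symmetrized
    idx = np.argsort(d2, axis=1)[:, 1:n_neighbors + 1]
    W = np.zeros((N, N))
    rows = np.repeat(np.arange(N), n_neighbors)
    W[rows, idx.ravel()] = 1.0
    W = np.maximum(W, W.T)
    deg = W.sum(axis=1)
    L = np.diag(deg) - W                       # unnormalized graph Laplacian
    # solve L y = lambda D y via the symmetrically normalized Laplacian
    Dinv_sqrt = np.diag(1.0 / np.sqrt(deg))
    vals, vecs = np.linalg.eigh(Dinv_sqrt @ L @ Dinv_sqrt)
    # eigenvector 0 is (near-)constant and discarded; map back to y = D^{-1/2} v
    return Dinv_sqrt @ vecs[:, 1:n_components + 1]
```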

### Locally Defined Principal Curves and Surfaces

- Computer Science, Mathematics · J. Mach. Learn. Res.
- 2011

A novel theoretical understanding of principal curves and surfaces, practical algorithms as general purpose machine learning tools, and applications of these algorithms to several practical problems are presented.

### Generalized Mode and Ridge Estimation

- Physics, Mathematics · ArXiv
- 2014

A method is proposed for studying the geometric structure of generalized densities using a modification of the mean shift algorithm and its variant, subspace constrained mean shift, which can be used to perform clustering and to calculate a measure of connectivity between clusters.
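Plain mean shift, the starting point of the method above, is easy to sketch: each point repeatedly moves to the kernel-weighted mean of the sample, climbing the kernel density estimate toward a mode. The subspace-constrained variant used for ridges additionally projects each step onto eigenvectors of the local Hessian; that projection is omitted in this illustrative numpy sketch.

```python
import numpy as np

def mean_shift(X, bandwidth=0.5, n_iter=100, tol=1e-6):
    """Plain mean shift with a Gaussian kernel: each point climbs the
    kernel density estimate of X toward a nearby mode."""
    Y = X.copy()
    for _ in range(n_iter):
        d2 = ((Y[:, None, :] - X[None, :, :]) ** 2).sum(-1)
        w = np.exp(-0.5 * d2 / bandwidth**2)          # Gaussian kernel weights
        Y_new = (w @ X) / w.sum(axis=1, keepdims=True)  # weighted-mean update
        if np.abs(Y_new - Y).max() < tol:
            return Y_new
        Y = Y_new
    return Y
```

Clustering then falls out for free: points that converge to the same mode belong to the same cluster.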

### Laplacian Eigenmaps and Spectral Techniques for Embedding and Clustering

- Computer Science, Mathematics · NIPS
- 2001

The algorithm provides a computationally efficient approach to nonlinear dimensionality reduction that has locality preserving properties and a natural connection to clustering.

### Geometric Inference for Probability Measures

- Computer Science, Mathematics · Found. Comput. Math.
- 2011

Replacing compact subsets by measures, a notion of distance function to a probability distribution in ℝ^d is introduced, and it is shown that offsets of sampled shapes can be reconstructed with topological guarantees even in the presence of outliers.
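An empirical version of this distance-to-measure is simple to sketch: at mass parameter m, it is the root mean squared distance from a query point to its k = ⌈mN⌉ nearest sample points. Unlike the plain distance to the point set, a handful of outliers barely perturbs it, which is what gives the robustness claimed above. A minimal numpy sketch:

```python
import numpy as np

def distance_to_measure(X, query, m=0.1):
    """Empirical distance-to-measure: RMS distance from each query point
    to its k = ceil(m*N) nearest points of the sample X."""
    N = X.shape[0]
    k = max(1, int(np.ceil(m * N)))
    d2 = ((query[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    knn2 = np.sort(d2, axis=1)[:, :k]      # k smallest squared distances
    return np.sqrt(knn2.mean(axis=1))
```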

### Principal Graphs and Manifolds

- Mathematics · ArXiv
- 2008

This chapter gives a brief practical introduction to methods for constructing general principal objects, i.e. objects embedded in the ‘middle’ of a multidimensional data set, using the family of expectation/maximisation algorithms and their nearest generalisations.

### Nonlinear dimensionality reduction by locally linear embedding.

- Computer Science · Science
- 2000

Locally linear embedding (LLE) is introduced, an unsupervised learning algorithm that computes low-dimensional, neighborhood-preserving embeddings of high-dimensional inputs and learns the global structure of nonlinear manifolds.
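LLE as summarized above has two stages: reconstruct each point as a convex combination of its k nearest neighbours, then find low-dimensional coordinates that preserve those reconstruction weights via the bottom eigenvectors of (I−W)ᵀ(I−W). A dense numpy sketch (the Tikhonov regularization of the local Gram matrix is a standard stabilizer, with the constant chosen here arbitrarily):

```python
import numpy as np

def lle(X, n_components=2, n_neighbors=5, reg=1e-3):
    """Dense sketch of locally linear embedding."""
    N = X.shape[0]
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    idx = np.argsort(d2, axis=1)[:, 1:n_neighbors + 1]
    W = np.zeros((N, N))
    for i in range(N):
        Z = X[idx[i]] - X[i]                  # neighbours centred on x_i
        G = Z @ Z.T                           # local Gram matrix
        G += reg * np.trace(G) * np.eye(n_neighbors)  # regularize for stability
        w = np.linalg.solve(G, np.ones(n_neighbors))
        W[i, idx[i]] = w / w.sum()            # reconstruction weights sum to one
    M = (np.eye(N) - W).T @ (np.eye(N) - W)
    vals, vecs = np.linalg.eigh(M)
    return vecs[:, 1:n_components + 1]        # drop the constant bottom eigenvector
```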