Isotropic PCA and Affine-Invariant Clustering

@article{Brubaker2008IsotropicPA,
  title={Isotropic PCA and Affine-Invariant Clustering},
  author={S. Charles Brubaker and Santosh S. Vempala},
  journal={2008 49th Annual IEEE Symposium on Foundations of Computer Science},
  year={2008},
  pages={551-560}
}
  • S. Charles Brubaker, Santosh S. Vempala
  • Published 22 April 2008
  • Computer Science, Mathematics
  • 2008 49th Annual IEEE Symposium on Foundations of Computer Science
We present an extension of principal component analysis (PCA) and a new algorithm for clustering points in R^n based on it. The key property of the algorithm is that it is affine-invariant. When the input is a sample from a mixture of two arbitrary Gaussians, the algorithm correctly classifies the sample assuming only that the two components are separable by a hyperplane, i.e., there exists a halfspace that contains most of one Gaussian and almost none of the other in probability mass. This is…
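For intuition, here is a minimal numpy sketch of the kind of affine-invariant preprocessing the abstract describes: put the sample in isotropic position (zero mean, identity covariance) and then examine the spectrum of a reweighted second-moment matrix. The Gaussian reweighting, the choice of the top eigenvector, and the function names are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

def isotropic_position(X):
    """Affinely transform the sample to zero mean and identity covariance."""
    mu = X.mean(axis=0)
    cov = np.cov(X - mu, rowvar=False)
    vals, vecs = np.linalg.eigh(cov)
    W = vecs @ np.diag(1.0 / np.sqrt(vals)) @ vecs.T   # inverse square root of the covariance
    return (X - mu) @ W

def reweighted_direction(X, sigma=1.0):
    """After isotropy, down-weight far points with a Gaussian weight (an illustrative
    choice) and return the top eigenvector of the weighted second-moment matrix as a
    candidate separating direction."""
    Z = isotropic_position(X)
    w = np.exp(-(Z ** 2).sum(axis=1) / (2.0 * sigma ** 2))
    M = (Z * w[:, None]).T @ Z / w.sum()
    vals, vecs = np.linalg.eigh(M)
    return Z, vecs[:, -1]

# Toy usage: two Gaussians separable by a hyperplane, skewed by an unknown affine map.
rng = np.random.default_rng(0)
A = rng.normal(size=(2, 2))
X = np.vstack([rng.normal([-3.0, 0.0], 1.0, size=(200, 2)),
               rng.normal([+3.0, 0.0], 1.0, size=(200, 2))]) @ A.T
Z, v = reweighted_direction(X)
labels = (Z @ v > 0).astype(int)   # split by the hyperplane with normal v
```

Because the data are first placed in isotropic position, applying any invertible affine map to X leaves the resulting labeling essentially unchanged, which is the affine-invariance the abstract refers to.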

Citations

Fourier PCA and robust tensor decomposition

The main application is the first provably polynomial-time algorithm for underdetermined ICA, i.e., learning an n × m matrix A from observations y = Ax where x is drawn from an unknown product distribution with arbitrary non-Gaussian components.
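As a point of reference, a small numpy sketch of the observation model described here, y = Ax with independent non-Gaussian sources; the dimensions, the uniform source distribution, and the variable names are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
n, m, N = 5, 8, 10_000        # underdetermined: more sources (m) than observed coordinates (n)
A = rng.normal(size=(n, m))   # unknown mixing matrix that the algorithm aims to recover

# Independent, non-Gaussian sources: uniform on [-sqrt(3), sqrt(3)] has unit variance.
S = rng.uniform(-np.sqrt(3), np.sqrt(3), size=(N, m))
Y = S @ A.T                   # each row is one observation y = A x
# Second moments of Y alone cannot identify A; the cited method relies on
# higher-order information about Y (via the characteristic function).
```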

Robustly Clustering a Mixture of Gaussians

An efficient algorithm is given for robustly clustering a mixture of two arbitrary Gaussians, a central open problem in the theory of computationally efficient robust estimation, and it is shown that for Gaussian mixtures, separation in total variation distance suffices to achieve robust clustering.

Extensions of principal components analysis

This thesis considers several novel extensions of PCA that provably reveal hidden structure where standard PCA fails to do so, and defines the "Subgraph Parity Tensor", which leads to the first affine-invariant algorithm that can provably learn mixtures of Gaussians in high dimensions, improving significantly on known results.

Efficient Sparse Clustering of High-Dimensional Non-spherical Gaussian Mixtures

This work considers a Gaussian mixture model with two non-spherical Gaussian components, where the clusters are distinguished by only a few relevant dimensions, and proposes a method for estimating the set of features relevant for clustering.

Learning Mixtures of Gaussians using the k-means Algorithm

This paper shows an information-theoretic lower bound for any algorithm that learns mixtures of two spherical Gaussians, and indicates that when the overlap between the probability masses of the two distributions is small, the sample requirement of k-means is near-optimal.
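To make the setting concrete, a minimal numpy sketch of Lloyd's k-means iterations on a sample from two spherical Gaussians; the dimension, separation, and initialization are illustrative choices, not the parameters analyzed in the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
d, n = 10, 500
# Two spherical unit-variance Gaussians in R^10 (illustrative separation).
mu0, mu1 = np.zeros(d), np.full(d, 4.0 / np.sqrt(d))
X = np.vstack([rng.normal(mu0, 1.0, (n, d)), rng.normal(mu1, 1.0, (n, d))])

# Lloyd's k-means iterations with k = 2.
centers = X[rng.choice(len(X), size=2, replace=False)]
for _ in range(100):
    labels = np.argmin(((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1), axis=1)
    new = np.array([X[labels == j].mean(axis=0) if np.any(labels == j) else centers[j]
                    for j in range(2)])
    if np.allclose(new, centers):
        break
    centers = new
```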

Affine-transformation invariant clustering models

A cluster process is developed that is invariant with respect to unknown affine transformations of the feature space and does not require knowing the number of clusters in advance; it could be widely applied in many fields.

Clustering Semi-Random Mixtures of Gaussians

A natural semi-random model for k-means clustering is proposed that generalizes the Gaussian mixture model and that the authors believe will be useful in identifying robust algorithms.

Clustering with Spectral Norm and the k-Means Algorithm

  • Amit Kumar, R. Kannan
  • Computer Science, Mathematics
    2010 IEEE 51st Annual Symposium on Foundations of Computer Science
  • 2010
This paper shows that a simple clustering algorithm works without assuming any generative (probabilistic) model, and proves some new results for generative models, e.g., it can cluster all but a small fraction of points assuming only a bound on the variance.

Spectral Properties of Radial Kernels and Clustering in High Dimensions

This paper studies the spectrum and the eigenvectors of radial kernels for mixtures of distributions in R^n and shows that the minimum angular separation between the covariance matrices that is required for the algorithm to succeed tends to 0 as n goes to infinity.
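For concreteness, a small numpy sketch of the objects studied here: the Gaussian (radial) kernel matrix of a sample and its leading eigenpairs; the bandwidth and the number of eigenvectors retained are illustrative parameters.

```python
import numpy as np

def radial_kernel_spectrum(X, bandwidth, k):
    """Gaussian (radial) kernel matrix of the sample and its top-k eigenpairs.
    Points from the same mixture component tend to align in the leading
    eigenvectors when the components are well separated."""
    sq_dists = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-sq_dists / (2.0 * bandwidth ** 2))
    vals, vecs = np.linalg.eigh(K)               # eigenvalues in ascending order
    return vals[::-1][:k], vecs[:, ::-1][:, :k]  # largest k eigenpairs
```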
...

References

Showing 1-10 of 21 references

Some methods for classification and analysis of multivariate observations

The main purpose of this paper is to describe a process for partitioning an N-dimensional population into k sets on the basis of a sample. The process, which is called 'k-means,' appears to give…

On Spectral Learning of Mixtures of Distributions

It is proved that a very simple algorithm, namely spectral projection followed by single-linkage clustering, properly classifies every point in the sample, and there are many Gaussian mixtures such that each pair of means is separated, yet upon spectral projection the mixture collapses completely.
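A generic rendering of the two-step pipeline named here, spectral projection followed by single-linkage clustering, using numpy and scipy; projecting onto k principal components and cutting the dendrogram into k clusters are illustrative choices.

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage

def spectral_then_single_linkage(X, k):
    """Project the centered sample onto its top-k principal components,
    then cut a single-linkage dendrogram into k clusters."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)  # rows of Vt are principal directions
    P = Xc @ Vt[:k].T                                  # spectral projection
    Z = linkage(P, method="single")                    # single-linkage dendrogram
    return fcluster(Z, t=k, criterion="maxclust")      # cluster labels in 1..k
```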

PAC Learning Axis-Aligned Mixtures of Gaussians with No Separation Assumption

A new vantage point for the learning of mixtures of Gaussians is proposed: namely, the PAC-style model of learning probability distributions introduced by Kearns et al.

Learning mixtures of arbitrary gaussians

This paper presents the first algorithm that provably learns the component Gaussians in time that is polynomial in the dimension.

The Spectral Method for General Mixture Models

A general property of spectral projection for arbitrary mixtures is proved and it is shown that the resulting algorithm is efficient when the components of the mixture are logconcave distributions in R^n whose means are separated.

A spectral algorithm for learning mixtures of distributions

  • S. Vempala, Grant J. Wang
  • Computer Science
    The 43rd Annual IEEE Symposium on Foundations of Computer Science, 2002. Proceedings.
  • 2002
We show that a simple spectral algorithm for learning a mixture of k spherical Gaussians in R^n works remarkably well - it succeeds in identifying the Gaussians assuming essentially the…

The geometry of logconcave functions and sampling algorithms

These results are applied to analyze two efficient algorithms for sampling from a logconcave distribution in n dimensions, with no assumptions on the local smoothness of the density function.

Learning mixtures of Gaussians

  • S. Dasgupta
  • Computer Science
    40th Annual Symposium on Foundations of Computer Science
  • 1999
This work presents the first provably correct algorithm for learning a mixture of Gaussians, which returns the true centers of the Gaussians to within the precision specified by the user, with high probability.

Beyond Gaussians: Spectral Methods for Learning Mixtures of Heavy-Tailed Distributions

The main contribution is an embedding which transforms a mixture of heavy-tailed product distributions into a mixture of distributions over the hypercube in a higher dimension, while still maintaining separability.

A Two-Round Variant of EM for Gaussian Mixtures

We show that, given data from a mixture of k well-separated spherical Gaussians in R^n, a simple two-round variant of EM will, with high probability, learn the centers of the Gaussians to near-optimal…
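The entry above stops short of the details; for orientation, here is a minimal generic EM sketch for a mixture of k spherical unit-variance Gaussians in numpy. The random initialization, the fixed unit variance, and the default of two iterations are simplifying assumptions, and this is not the paper's specific two-round variant.

```python
import numpy as np

def em_spherical(X, k, iters=2, seed=0):
    """Generic EM for a mixture of k spherical, unit-variance Gaussians."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    centers = X[rng.choice(n, size=k, replace=False)]
    weights = np.full(k, 1.0 / k)
    for _ in range(iters):
        # E-step: responsibilities under unit-variance spherical components.
        sq_dists = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        log_resp = np.log(weights) - 0.5 * sq_dists
        log_resp -= log_resp.max(axis=1, keepdims=True)   # for numerical stability
        resp = np.exp(log_resp)
        resp /= resp.sum(axis=1, keepdims=True)
        # M-step: update mixing weights and component means.
        nk = resp.sum(axis=0)
        centers = (resp.T @ X) / nk[:, None]
        weights = nk / n
    return centers, weights
```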