Sparse Subspace Clustering: Algorithm, Theory, and Applications

@article{Elhamifar2013SparseSC,
  title={Sparse Subspace Clustering: Algorithm, Theory, and Applications},
  author={Ehsan Elhamifar and Ren{\'e} Vidal},
  journal={IEEE Transactions on Pattern Analysis and Machine Intelligence},
  year={2013},
  volume={35},
  pages={2765-2781}
}
  • Ehsan Elhamifar, R. Vidal
  • Published 2013
  • Computer Science, Mathematics, Medicine
  • IEEE Transactions on Pattern Analysis and Machine Intelligence
Many real-world problems deal with collections of high-dimensional data, such as images, videos, text and web documents, DNA microarray data, and more. Often, such high-dimensional data lie close to low-dimensional structures corresponding to several classes or categories to which the data belong. In this paper, we propose and study an algorithm, called sparse subspace clustering, to cluster data points that lie in a union of low-dimensional subspaces. The key idea is that, among the…
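The pipeline outlined in the abstract, expressing each point as a sparse combination of the other points and feeding the resulting coefficients to spectral clustering, can be illustrated compactly. The sketch below is a minimal illustration under assumptions of our own: the function name, the use of scikit-learn's Lasso as the sparse solver, and the value of `alpha` are illustrative choices, not the paper's reference implementation.

```python
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.cluster import SpectralClustering

def sparse_subspace_clustering(X, n_clusters, alpha=0.01):
    """X: (n_features, n_samples) data matrix with data points as columns."""
    n = X.shape[1]
    C = np.zeros((n, n))
    for i in range(n):
        # Express x_i as a sparse combination of the *other* points:
        # min_c ||x_i - X_{-i} c||_2^2 + alpha * ||c||_1.
        idx = [j for j in range(n) if j != i]
        lasso = Lasso(alpha=alpha, fit_intercept=False, max_iter=5000)
        lasso.fit(X[:, idx], X[:, i])
        C[idx, i] = lasso.coef_
    # Sparse coefficients tend to select points from the same subspace,
    # so |C| + |C|^T serves as the affinity for spectral clustering.
    W = np.abs(C) + np.abs(C).T
    return SpectralClustering(n_clusters=n_clusters,
                              affinity='precomputed').fit_predict(W)
```

For example, `labels = sparse_subspace_clustering(X, n_clusters=3)` on a matrix whose columns are face images of three subjects under varying illumination is intended to group the columns by subject, under the union-of-subspaces assumption.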

Citations

Dimensionality-reduced subspace clustering
TLDR
It is found that dimensionality reduction down to the order of the subspace dimensions is possible without incurring significant performance degradation, and these results are order-wise optimal in the sense that reducing the dimensionality further leads to a fundamentally ill-posed clustering problem.
Greedy Subspace Clustering
TLDR
The statistical analysis shows that the algorithms guarantee exact (perfect) clustering under conditions on the number of points and the affinity between subspaces that are weaker than those considered in the standard statistical literature.
A Survey on Sparse Subspace Clustering
Sparse subspace clustering (SSC) is a newly developed spectral clustering-based framework for data clustering. High-dimensional data usually lie in a union of several low-dimensional subspaces, which…
Stochastic Sparse Subspace Clustering
TLDR
This work shows that dropout is equivalent to adding a squared $\ell_2$-norm regularization on the representation coefficients and therefore induces denser solutions, which leads to a scalable and flexible sparse subspace clustering approach, termed Stochastic Sparse Subspace Clustering, that can effectively handle large-scale datasets (a toy illustration of the resulting $\ell_1$ plus squared $\ell_2$ objective follows below).
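As a toy illustration (not the authors' implementation) of the objective this summary points to, the $\ell_1$ term of SSC can be combined with a squared $\ell_2$ term on the coefficients, i.e., an elastic-net style self-expression step; the function name and all parameter values below are illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import ElasticNet

def elastic_net_self_expression(X, alpha=0.01, l1_ratio=0.5):
    """X: (n_features, n_samples); returns the coefficient matrix C."""
    n = X.shape[1]
    C = np.zeros((n, n))
    for i in range(n):
        idx = [j for j in range(n) if j != i]
        # scikit-learn's parameterization of the elastic-net objective:
        # min_c  1/(2m) ||x_i - X_{-i} c||_2^2
        #        + alpha * l1_ratio * ||c||_1
        #        + 0.5 * alpha * (1 - l1_ratio) * ||c||_2^2
        model = ElasticNet(alpha=alpha, l1_ratio=l1_ratio,
                           fit_intercept=False, max_iter=5000)
        model.fit(X[:, idx], X[:, i])
        C[idx, i] = model.coef_
    return C
```

The resulting (denser) coefficient matrix can then be symmetrized into an affinity and passed to spectral clustering, as in the SSC sketch earlier.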
Group-sparse subspace clustering with missing data
TLDR
Two novel methods for subspace clustering with missing data are described: (a) group-sparse subspace clustering (GSSC), which is based on group-sparsity and alternating minimization, and (b) mixture subspace clustering (MSC), which models each data point as a convex combination of its projections onto all subspaces in the union.
Exploiting Unsupervised and Supervised Constraints for Subspace Clustering
  • Han Hu, J. Feng, Jie Zhou
  • Mathematics, Computer Science
  • IEEE Transactions on Pattern Analysis and Machine Intelligence
  • 2015
TLDR
This paper performs subspace clustering under a constrained subspace assumption, in which the data are further restricted within the corresponding subspaces, e.g., belonging to a submanifold or satisfying a spatial regularity constraint.
Robust Subspace Clustering via Thresholding
TLDR
A simple low-complexity subspace clustering algorithm is proposed, which applies spectral clustering to an adjacency matrix obtained by thresholding the correlations between data points; the results reveal an explicit tradeoff between the affinity of the subspaces and the tolerable noise level (a minimal sketch of this pipeline follows below).
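A minimal sketch of the thresholding-based pipeline summarized above: build an adjacency matrix by keeping only each point's q strongest absolute correlations with the other (normalized) points, then apply spectral clustering. The function name and the value of q are illustrative assumptions, not the quantities analyzed in the paper.

```python
import numpy as np
from sklearn.cluster import SpectralClustering

def threshold_correlation_clustering(X, n_clusters, q=10):
    """X: (n_features, n_samples) data matrix with data points as columns."""
    Xn = X / np.linalg.norm(X, axis=0, keepdims=True)  # unit-norm columns
    corr = np.abs(Xn.T @ Xn)
    np.fill_diagonal(corr, 0.0)
    A = np.zeros_like(corr)
    for i in range(corr.shape[0]):
        keep = np.argsort(corr[i])[-q:]  # indices of the q largest correlations
        A[i, keep] = corr[i, keep]
    W = np.maximum(A, A.T)  # symmetric adjacency matrix
    return SpectralClustering(n_clusters=n_clusters,
                              affinity='precomputed').fit_predict(W)
```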
Subspace clustering via structure-enforced dictionary learning
TLDR
This paper proposes a subspace clustering method based on structured sparse PCA dictionary learning, which learns reduced-dimensional dictionary and coefficient matrices using the structural information as well as the sparsity of the data.
Efficient dense subspace clustering
TLDR
An efficient subspace clustering algorithm that estimates dense connections between points lying in the same subspace and shares the same solution as the popular nuclear-norm minimization approach when the data are free of noise, or when a clean dictionary is learned.
Low-rank and structured sparse subspace clustering
TLDR
A joint affinity learning and spectral clustering approach for Low-Rank and Structured Sparse Subspace Clustering (LRS3C), in which a subspace-structured norm that depends on the subspace clustering result is introduced into the objective of the low-rank representation problem.

References

Showing 1-10 of 78 references
Clustering disjoint subspaces via sparse representation
  • Ehsan Elhamifar, R. Vidal
  • Mathematics, Computer Science
  • 2010 IEEE International Conference on Acoustics, Speech and Signal Processing
  • 2010
TLDR
Given a set of data points drawn from multiple low-dimensional linear subspaces of a high-dimensional space, this work derives theoretical bounds, relating the principal angles between the subspaces and the distribution of the data points across the subspaces, under which the representation coefficients are guaranteed to be sparse.
A Geometric Analysis of Subspace Clustering with Outliers
TLDR
A novel geometric analysis of an algorithm named sparse subspace clustering (SSC) is developed, which significantly broadens the range of problems where it is provably effective and shows that SSC can recover multiple subspaces, each of dimension comparable to the ambient dimension.
A closed form solution to robust subspace estimation and clustering
TLDR
This work uses an augmented Lagrangian optimization framework, which requires combining the proposed polynomial thresholding operator with the more traditional shrinkage-thresholding operator, to fit one or more subspaces to a collection of data points drawn from those subspaces and corrupted by noise/outliers.
Sparse subspace clustering
TLDR
This work proposes a method based on sparse representation (SR) to cluster data drawn from multiple low-dimensional linear or affine subspaces embedded in a high-dimensional space and applies this method to the problem of segmenting multiple motions in video.
Generalized principal component analysis (GPCA)
  • R. Vidal, Y. Ma, S. Sastry
  • Mathematics, Medicine
  • IEEE Transactions on Pattern Analysis and Machine Intelligence
  • 2005
TLDR
An algebro-geometric solution to the problem of segmenting an unknown number of subspaces of unknown and varying dimensions from sample data points is presented, along with applications of GPCA to computer vision problems such as face clustering, temporal video segmentation, and 3D motion segmentation from point correspondences in multiple affine views.
Robust Recovery of Subspace Structures by Low-Rank Representation
TLDR
It is shown that the convex program associated with LRR solves the subspace clustering problem in the following sense: when the data are clean, LRR exactly recovers the true subspace structures; when the data are contaminated by outliers, it is proved that under certain conditions LRR can exactly recover the row space of the original data (a small numerical sketch of the clean-data closed form follows below).
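As a small numerical illustration (not the authors' solver) of the clean-data case mentioned above: for noiseless data, the minimizer of $\|C\|_*$ subject to $X = XC$ is the shape-interaction matrix $VV^\top$ built from the skinny SVD $X = U\Sigma V^\top$, which is block-diagonal when the subspaces are independent. For noisy or outlier-contaminated data, LRR instead solves a convex program that the sketch below does not cover.

```python
import numpy as np

def lrr_clean_data(X, tol=1e-10):
    """X: (n_features, n_samples). Returns C = V V^T from the skinny SVD of X."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    r = int(np.sum(s > tol * s.max()))  # numerical rank of X
    V = Vt[:r].T
    C = V @ V.T
    return C  # satisfies X @ C == X up to round-off
```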
Robust classification using structured sparse representation
TLDR
It is shown that casting the face recognition problem as a structured sparse recovery problem can improve the results of state-of-the-art face recognition algorithms, especially when relatively few training samples are available for each class.
Graph connectivity in sparse subspace clustering
TLDR
It is proved that connectivity within each subspace holds for 2- and 3-dimensional subspaces, while the claim of connectivity for the general d-dimensional case, even for generic configurations, is shown to be false by a counterexample in dimensions greater than 3.
See all by looking at a few: Sparse modeling for finding representative objects
TLDR
The proposed framework and theoretical foundations are illustrated with examples in video summarization and image classification using representatives, and can be extended to detect and reject outliers in datasets and to efficiently deal with new observations and large datasets.
Block-Sparse Recovery via Convex Optimization
TLDR
It is shown that treating the face recognition problem as a block-sparse recovery problem improves the state-of-the-art results by 10% with only 25% of the training data.