Distributed Low-Rank Subspace Segmentation

@inproceedings{Talwalkar2013DistributedLS,
  title={Distributed Low-Rank Subspace Segmentation},
  author={Ameet S. Talwalkar and Lester W. Mackey and Yadong Mu and Shih-Fu Chang and Michael I. Jordan},
  booktitle={2013 IEEE International Conference on Computer Vision},
  year={2013},
  pages={3543--3550}
}
Vision problems ranging from image clustering to motion segmentation to semi-supervised learning can naturally be framed as subspace segmentation problems, in which one aims to recover multiple low-dimensional subspaces from noisy and corrupted input data. Low-Rank Representation (LRR), a convex formulation of the subspace segmentation problem, is provably and empirically accurate on small problems but does not scale to the massive sizes of modern vision datasets. Moreover, past work aimed at… 
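LRR's convex program is built around nuclear-norm minimization, and the workhorse of most nuclear-norm solvers is the singular value thresholding (SVT) operator. As a generic illustration (not the paper's distributed algorithm), SVT can be sketched in a few lines of NumPy:

```python
import numpy as np

def svt(M, tau):
    """Singular value thresholding: the proximal operator of the nuclear
    norm, i.e. argmin_X  tau * ||X||_* + 0.5 * ||X - M||_F^2.
    Soft-thresholding the singular values shrinks M toward low rank."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt
```

Applied with a large enough threshold, `svt` drops the smallest singular values entirely, which is how nuclear-norm solvers trade reconstruction error for rank.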

Citations

Robust Subspace Segmentation with Block-Diagonal Prior

The subspace segmentation problem is addressed by constructing an exactly block-diagonal sample affinity matrix through a graph-Laplacian-constrained formulation, together with an efficient stochastic subgradient algorithm for optimization.

Ensemble Subspace Segmentation Under Blockwise Constraints

A novel graph-based method, Ensemble Subspace Segmentation under Blockwise constraints (ESSB), which unifies least squares regression and a locality preserving graph regularizer into an ensemble learning framework, making ESSB more robust and efficient, especially when handling high-dimensional data.

Projective Low-rank Subspace Clustering via Learning Deep Encoder

A projective low-rank subspace clustering (PLrSC) scheme for the large-scale clustering problem, with a theoretical proof that the deep encoder can universally approximate the exact (or bounded) recovery of the row space.

Robust Subspace Learning

This chapter introduces a discriminative subspace learning method named Supervised Regularization based Robust Subspace (SRRS) approach, by incorporating the low-rank constraint, and designs an inexact augmented Lagrange multiplier (ALM) optimization algorithm to solve it.

Learning Robust and Discriminative Subspace With Low-Rank Constraints

  • Sheng Li, Y. Fu
  • Computer Science
    IEEE Transactions on Neural Networks and Learning Systems
  • 2016
This paper presents a discriminative subspace learning method called the supervised regularization-based robust subspace (SRRS) approach, by incorporating the low-rank constraint, which learns a low-dimensional subspace from recovered data, and explicitly incorporates the supervised information.

Large-Scale Subspace Clustering by Independent Distributed and Parallel Coding

This article considers a large-scale subspace clustering problem, that is, partitioning a million data points with a million dimensions, and explores an independent distributed and parallel framework by dividing big data/variable matrices and regularization by both columns and rows.

Elastic-net regularization of singular values for robust subspace learning

An elastic-net regularization based low-rank matrix factorization method for subspace learning that finds a robust solution efficiently by enforcing a strongly convex constraint to improve the algorithm's stability while maintaining the low-rank property of the solution.
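The operator underlying such an approach is, in essence, an elastic-net proximal step on the singular values: each singular value is soft-thresholded by the ℓ1 weight and then scaled down by the ℓ2 weight. A minimal sketch, with illustrative parameter names `lam` and `gamma` (not the paper's notation):

```python
import numpy as np

def elastic_net_svt(M, lam, gamma):
    """Proximal operator of lam * ||X||_* + (gamma / 2) * ||X||_F^2 at M:
    soft-threshold the singular values by lam, then shrink by 1/(1+gamma).
    gamma > 0 makes the subproblem strongly convex, stabilizing the solution."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    s_new = np.maximum(s - lam, 0.0) / (1.0 + gamma)
    return U @ np.diag(s_new) @ Vt
```

Setting `gamma=0` recovers plain singular value thresholding; the extra Frobenius term is what buys the stability at the cost of additional shrinkage.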

Robust Subspace Clustering With Complex Noise

Experimental results on three commonly used data sets show that the proposed novel optimization model for robust subspace clustering outperforms state-of-the-art subspace clustering methods.

Exact Subspace Clustering in Linear Time

This paper exploits a data selection algorithm to speed up computation and robust principal component analysis to strengthen robustness, and devises a scalable and robust subspace clustering method whose running time is only linear in $n$.

References

Showing 10 of 33 references.

Latent Low-Rank Representation for subspace segmentation and feature extraction

This paper proposes to construct the dictionary by using both observed and unobserved, hidden data, and shows that the effects of the hidden data can be approximately recovered by solving a nuclear norm minimization problem, which is convex and can be solved efficiently.

Robust Subspace Segmentation by Low-Rank Representation

Both theoretical and experimental results show that low-rank representation is a promising tool for subspace segmentation from corrupted data.

Multi-task low-rank affinity pursuit for image segmentation

Compared to previous methods, which are usually based on a single type of features, the proposed method seamlessly integrates multiple types of features to jointly produce the affinity matrix within a single inference step, and produces more accurate and reliable segmentation results.

Sparse subspace clustering

This work proposes a method based on sparse representation (SR) to cluster data drawn from multiple low-dimensional linear or affine subspaces embedded in a high-dimensional space and applies this method to the problem of segmenting multiple motions in video.
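The idea can be sketched as a self-expression problem: each point is written as a sparse combination of the other points, and the magnitudes of the coefficients define an affinity for spectral clustering. Below is a simplified Lasso-style variant solved by plain ISTA (a sketch under assumed hyperparameters, not the paper's exact program, which uses an equality constraint):

```python
import numpy as np

def ssc_coefficients(X, lam=0.05, n_iter=1000):
    """Sparse self-expression X ~= X @ C with zero diagonal, via ISTA on
    min_C  lam * ||C||_1 + 0.5 * ||X - X C||_F^2.
    X holds one data point per column; C[j, i] weighs point j for point i."""
    n = X.shape[1]
    step = 1.0 / np.linalg.norm(X, 2) ** 2        # 1 / Lipschitz constant
    C = np.zeros((n, n))
    for _ in range(n_iter):
        G = X.T @ (X @ C - X)                     # gradient of quadratic term
        C = C - step * G
        C = np.sign(C) * np.maximum(np.abs(C) - step * lam, 0.0)  # shrink
        np.fill_diagonal(C, 0.0)                  # forbid self-representation
    return C
```

The symmetric affinity |C| + |C|ᵀ then feeds a standard spectral clustering step; for points drawn from independent subspaces the cross-subspace coefficients are (ideally) zero.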

Exact Subspace Segmentation and Outlier Detection by Low-Rank Representation

It is proved that under mild technical conditions, any solution to LRR exactly recovers the row space of the samples and detects the outliers as well, which implies that LRR can perform exact subspace segmentation and outlier detection in an efficient way.
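The row-space claim has a concrete closed form in the noiseless case: the minimizer of ||Z||_* subject to X = XZ is the shape interaction matrix VVᵀ from the skinny SVD of X, which is block-diagonal under the ground-truth grouping when the subspaces are independent. A small NumPy sketch of this known result:

```python
import numpy as np

def shape_interaction_matrix(X, tol=1e-10):
    """Noiseless LRR solution Z* = V V^T, where X = U S V^T is the skinny
    SVD truncated to the numerical rank; V spans the row space of X."""
    _, s, Vt = np.linalg.svd(X, full_matrices=False)
    V = Vt[: int(np.sum(s > tol))].T
    return V @ V.T
```

Because VVᵀ is the orthogonal projection onto the row space, it is unchanged by any invertible transform of the ambient coordinates, so the block-diagonal pattern survives arbitrary mixing of the features.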

Active Subspace: Toward Scalable Low-Rank Learning

This work revisits the classic mechanism of low-rank matrix factorization, based on which an active subspace algorithm is presented for efficiently solving nuclear norm regularized optimization problems by transforming large-scale NNROPs into small-scale problems.

Fast Convex Optimization Algorithms for Exact Recovery of a Corrupted Low-Rank Matrix

Two complementary approaches for solving the problem of recovering a low-rank matrix with a fraction of its entries arbitrarily corrupted are developed and compared, both several orders of magnitude faster than the previous state-of-the-art algorithm for this problem.
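The problem being solved here is robust PCA: decompose M into a low-rank part L plus a sparse part S. A generic augmented-Lagrangian/ADMM sketch (fixed penalty `mu` and the textbook default λ = 1/√max(m, n); not either of the paper's two accelerated algorithms):

```python
import numpy as np

def rpca(M, lam=None, mu=None, n_iter=300):
    """Split M ~= L + S (low-rank + sparse) by alternating singular value
    thresholding for L and entrywise soft-thresholding for S, with a dual
    update enforcing M = L + S. Defaults are common heuristics, not tuned."""
    m, n = M.shape
    lam = lam if lam is not None else 1.0 / np.sqrt(max(m, n))
    mu = mu if mu is not None else 0.25 * m * n / np.abs(M).sum()
    Y = np.zeros_like(M)                          # dual variable
    S = np.zeros_like(M)
    for _ in range(n_iter):
        U, s, Vt = np.linalg.svd(M - S + Y / mu, full_matrices=False)
        L = U @ np.diag(np.maximum(s - 1.0 / mu, 0.0)) @ Vt
        R = M - L + Y / mu
        S = np.sign(R) * np.maximum(np.abs(R) - lam / mu, 0.0)
        Y = Y + mu * (M - L - S)                  # dual ascent step
    return L, S
```

On a synthetic rank-2 matrix with a small fraction of grossly corrupted entries, this sketch recovers the low-rank component to within a few percent; production solvers accelerate it with an increasing penalty schedule and partial SVDs.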

Local features are not lonely – Laplacian sparse coding for image classification

This paper proposes a histogram-intersection-based kNN method to construct a Laplacian matrix, which characterizes the similarity of local features well, and incorporates it into the objective function of sparse coding to preserve the consistency of the sparse representations of similar local features.

Robust Face Recognition via Sparse Representation

This work considers the problem of automatically recognizing human faces from frontal views with varying expression and illumination, as well as occlusion and disguise, and proposes a general classification algorithm for (image-based) object recognition based on a sparse representation computed by ℓ1-minimization.

Non-negative low rank and sparse graph for semi-supervised learning

A novel non-negative low-rank and sparse (NNLRS) graph for semi-supervised learning that can capture both the global mixture of subspaces structure and the locally linear structure of the data, hence is both generative and discriminative.