Corpus ID: 1938013

Convex Multi-view Subspace Learning

@inproceedings{White2012ConvexMS,
  title={Convex Multi-view Subspace Learning},
  author={Martha White and Yaoliang Yu and Xinhua Zhang and Dale Schuurmans},
  booktitle={NIPS},
  year={2012}
}
Subspace learning seeks a low dimensional representation of data that enables accurate reconstruction. However, in many applications, data is obtained from multiple sources rather than a single source (e.g. an object might be viewed by cameras at different angles, or a document might consist of text and images). The conditional independence of separate sources imposes constraints on their shared latent representation, which, if respected, can improve the quality of a learned low dimensional… 
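
The flavor of the convex formulation can be illustrated in a few lines: stack the views, regularize the joint reconstruction with a nuclear norm so that the views are coupled through a shared low-rank subspace, and read the shared representation off an SVD. The sketch below is a minimal illustration of that idea, not the paper's algorithm (which handles general losses and regularizers); the regularization weight `gamma` and the synthetic data are assumptions.

```python
import numpy as np

def svt(M, tau):
    """Singular value thresholding: the prox operator of tau * nuclear norm."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

# Two views of the same n samples, sharing a k-dimensional latent signal.
rng = np.random.default_rng(0)
n, d1, d2, k = 200, 30, 40, 5
H_true = rng.standard_normal((k, n))
X1 = rng.standard_normal((d1, k)) @ H_true + 0.1 * rng.standard_normal((d1, n))
X2 = rng.standard_normal((d2, k)) @ H_true + 0.1 * rng.standard_normal((d2, n))

# Stack the views and solve  min_Z 0.5*||X - Z||_F^2 + gamma*||Z||_*
# in closed form via SVT; the nuclear norm on the stacked matrix forces
# a joint low-rank reconstruction, i.e., a shared subspace.
X = np.vstack([X1, X2])
gamma = 2.0  # illustrative value
Z = svt(X, gamma)

# A shared low-dimensional representation falls out of the SVD of Z.
U, sv, Vt = np.linalg.svd(Z, full_matrices=False)
r = int((sv > 1e-8).sum())
H_shared = np.diag(sv[:r]) @ Vt[:r]  # r x n shared latent representation
print("recovered rank:", r)
```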

Citations

Tensorized Multi-view Subspace Representation Learning
TLDR
A novel algorithm, termed Tensorized Multi-view Subspace Representation Learning, is established; it elegantly models the complementary information among different views, reduces redundancy in the subspace representations, and thereby improves the accuracy of subsequent tasks.
Convex Subspace Representation Learning from Multi-View Data
TLDR
The empirical study shows that the proposed subspace representation learning method can effectively facilitate multi-view clustering and yields better clustering results than alternative multi-view clustering methods.
Latent Complete Row Space Recovery for Multi-View Subspace Clustering
TLDR
The Latent Complete Row Space Recovery (LCRSR) method is proposed to recover the row space of the latent representation, which not only carries complete information from multiple views but also determines subspace membership under certain conditions.
Latent Multi-view Subspace Clustering
TLDR
A novel Latent Multi-view Subspace Clustering method is proposed that clusters data points using a latent representation while simultaneously exploring the underlying complementary information from multiple views, making the subspace representation more accurate and robust.
Deep multi-view robust representation learning
  • Zhenyu Jiao, Chao Xu
  • Computer Science
    2017 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)
  • 2017
TLDR
This work proposes an autoencoder-based deep multi-view robust representation learning (DMRRL) algorithm that learns a shared representation from multi-view observations and is robust to noise and outliers by using the Cauchy estimator as its loss function.
Shared Subspace Learning for Latent Representation of Multi-View Data
TLDR
This paper focuses on capturing the shared latent representation across multiple views by constructing their correlation in a shared subspace, which boosts the discriminative ability of the proposed multi-view analysis model.
Multi-view embedding learning via robust joint nonnegative matrix factorization
TLDR
A novel multi-view embedding algorithm based on robust joint nonnegative matrix factorization is proposed; it uses the correntropy-induced metric to measure the reconstruction error of each view and defines a consensus matrix subspace to constrain the disagreement among views.
Low-Rank Tensor Constrained Multiview Subspace Clustering
TLDR
A low-rank tensor constraint is introduced to explore the complementary information from multiple views and, accordingly, a novel method called Low-rank Tensor constrained Multiview Subspace Clustering (LT-MSC) is established.
Hyper-Laplacian Regularized Multilinear Multiview Self-Representations for Clustering and Semisupervised Learning
TLDR
A hyper-Laplacian regularized multilinear multiview self-representation model, referred to as HLR-M2VS, is proposed to jointly learn the correlation among multiple views and the local geometrical structure in a unified tensor space and in view-specific self-representation feature spaces, respectively.
Incomplete-Data Oriented Multiview Dimension Reduction via Sparse Low-Rank Representation
TLDR
Three novel dimension reduction methods for incomplete multiview data are developed; they outperform comparable state-of-the-art methods and demonstrate the advantage of integrating sparsity and low-rankness over using either alone.
...

References

SHOWING 1-10 OF 33 REFERENCES
Factorized Latent Spaces with Structured Sparsity
TLDR
This paper shows that structured sparsity allows the multi-view learning problem to be addressed by alternately solving two convex optimization problems, and that the resulting factorized latent spaces generalize existing approaches by allowing latent dimensions to be shared between any subset of the views rather than between all views only.
Learning Multi-View Neighborhood Preserving Projections
We address the problem of metric learning for multi-view data, namely the construction of embedding projections from data in different representations into a shared feature space, such that the…
Convex Sparse Coding, Subspace Learning, and Semi-Supervised Extensions
TLDR
This work applies the framework to a semi-supervised learning problem, and demonstrates that feature discovery can co-occur with input reconstruction and supervised training while still admitting globally optimal solutions.
Shared Kernel Information Embedding for Discriminative Inference
TLDR
A latent variable model (LVM) called the Kernel Information Embedding (KIE) is proposed that defines a coherent joint density over the input and a learned latent space, along with a generalization, the shared KIE (sKIE), that models multiple input spaces using a single shared latent representation.
Convex multi-task feature learning
TLDR
It is proved that the method for learning sparse representations shared across multiple tasks is equivalent to solving a convex optimization problem, for which an iterative algorithm converges to an optimal solution.
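
The convex program referred to here has the flavor of trace-norm-regularized multi-task least squares. A minimal proximal-gradient sketch of that problem follows; the data, `lam`, and the solver choice are assumptions for illustration, not the paper's own iterative algorithm.

```python
import numpy as np

def svt(M, tau):
    # prox operator of tau * ||.||_* (singular value thresholding)
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

# Shared-feature multi-task regression:
#   min_W  0.5 * ||Y - X W||_F^2 + lam * ||W||_*
rng = np.random.default_rng(1)
n, d, T = 100, 20, 8  # samples, features, tasks
X = rng.standard_normal((n, d))
W_true = rng.standard_normal((d, 2)) @ rng.standard_normal((2, T))  # rank-2 truth
Y = X @ W_true + 0.05 * rng.standard_normal((n, T))

lam = 1.0
L = np.linalg.norm(X, 2) ** 2  # Lipschitz constant of the gradient
W = np.zeros((d, T))
for _ in range(300):  # proximal gradient: gradient step, then prox
    W = svt(W - X.T @ (X @ W - Y) / L, lam / L)

print("rank of learned W:", np.linalg.matrix_rank(W, tol=1e-6))
```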
Robust principal component analysis?
TLDR
It is proved that, under suitable assumptions, both the low-rank and the sparse components can be recovered exactly by solving a convenient convex program called Principal Component Pursuit: among all feasible decompositions, simply minimize a weighted combination of the nuclear norm and the ℓ1 norm. This suggests a principled approach to robust principal component analysis.
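
Principal Component Pursuit itself is compact enough to sketch. Below is a common inexact-ALM/ADMM recipe for it; the lam = 1/sqrt(max(m, n)) default follows the paper, while `mu`, the iteration count, and the test data are illustrative assumptions.

```python
import numpy as np

def pcp(M, lam=None, mu=None, iters=300):
    """Principal Component Pursuit via a simple ADMM:
       min ||L||_* + lam * ||S||_1   subject to   L + S = M."""
    m, n = M.shape
    lam = lam if lam is not None else 1.0 / np.sqrt(max(m, n))      # choice from the paper
    mu = mu if mu is not None else m * n / (4.0 * np.abs(M).sum())  # common heuristic
    L, S, Y = np.zeros_like(M), np.zeros_like(M), np.zeros_like(M)
    for _ in range(iters):
        # L-step: singular value thresholding
        U, s, Vt = np.linalg.svd(M - S + Y / mu, full_matrices=False)
        L = U @ np.diag(np.maximum(s - 1.0 / mu, 0.0)) @ Vt
        # S-step: elementwise soft thresholding
        R = M - L + Y / mu
        S = np.sign(R) * np.maximum(np.abs(R) - lam / mu, 0.0)
        # dual ascent on the constraint L + S = M
        Y += mu * (M - L - S)
    return L, S

# Low-rank plus sparse test matrix.
rng = np.random.default_rng(2)
L0 = rng.standard_normal((60, 4)) @ rng.standard_normal((4, 60))  # rank 4
S0 = np.where(rng.random((60, 60)) < 0.05, 5.0, 0.0)              # 5% sparse spikes
L, S = pcp(L0 + S0)
print("relative low-rank error:", np.linalg.norm(L - L0) / np.linalg.norm(L0))
```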
Accelerated Training for Matrix-norm Regularization: A Boosting Approach
TLDR
A boosting method for regularized learning that guarantees ε accuracy within O(1/ε) iterations is proposed, and an application to latent multiview learning is demonstrated, for which it provides the first efficient weak oracle.
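
The "weak oracle" in this setting only has to produce the top singular-vector pair of the gradient, i.e., a rank-one update per iteration. The sketch below shows that generic idea as plain Frank-Wolfe over a trace-norm ball, which also converges in O(1/ε) iterations; it is not the paper's accelerated boosting procedure, and the budget `tau` and the data are assumptions.

```python
import numpy as np

# Frank-Wolfe over the trace-norm ball {W : ||W||_* <= tau}: each step,
# a "weak oracle" supplies only the top singular-vector pair of the
# negative gradient, giving a rank-one update.
rng = np.random.default_rng(3)
n, d, T = 80, 15, 6
X = rng.standard_normal((n, d))
W_true = rng.standard_normal((d, 1)) @ rng.standard_normal((1, T))  # rank-1 truth
Y = X @ W_true

tau = np.linalg.svd(W_true, compute_uv=False).sum()  # illustrative budget
W = np.zeros((d, T))
for t in range(200):
    G = X.T @ (X @ W - Y)               # gradient of 0.5 * ||Y - X W||_F^2
    U, s, Vt = np.linalg.svd(-G)        # oracle: top singular pair of -G
    A = tau * np.outer(U[:, 0], Vt[0])  # extreme point of the ball
    step = 2.0 / (t + 2.0)              # standard Frank-Wolfe step size
    W = (1 - step) * W + step * A

print("final objective:", 0.5 * np.linalg.norm(Y - X @ W) ** 2)
```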
From Few to Many: Illumination Cone Models for Face Recognition under Variable Lighting and Pose
We present a generative appearance-based method for recognizing human faces under variation in lighting and viewpoint. Our method exploits the fact that the set of images of an object in fixed pose, but under all possible illumination conditions, is a convex cone in the space of images.
Greedy Algorithms for Structurally Constrained High Dimensional Problems
TLDR
This framework not only unifies existing greedy algorithms by recovering them as special cases but also yields novel ones for the convex optimization problems that arise in structurally constrained high-dimensional settings.
A General Model for Multiple View Unsupervised Learning
TLDR
The proposed model introduces a mapping function that makes patterns from different pattern spaces comparable, so that an optimal pattern can be learned from the multiple patterns of multiple representations.
...