Corpus ID: 52965823

Learning Tensor Latent Features

Sung-En Chang, Xun Zheng, Ian En-Hsu Yen, Pradeep Ravikumar, Rose Yu
We study the problem of learning latent feature models (LFMs) for tensor data commonly observed in science and engineering, such as hyperspectral imagery. The problem is challenging not only due to the non-convex formulation and the combinatorial nature of the constraints in LFMs, but also due to the high-order correlations in the data. In this work, we formulate a tensor latent feature learning problem by representing the data as a mixture of high-order latent features and binary codes, which…
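The abstract's core representation can be sketched in a few lines: observations are mixtures of latent features selected by binary codes. This is a minimal illustrative sketch, not the authors' tensor formulation; all variable names and dimensions are assumptions.

```python
import numpy as np

# Minimal sketch of a latent feature model (LFM): each observation is a
# sum of a subset of latent features, selected by a binary code.
rng = np.random.default_rng(0)

n, d, k = 100, 20, 5                  # observations, dimensions, latent features
A = rng.normal(size=(k, d))           # latent feature dictionary (hypothetical)
Z = rng.integers(0, 2, size=(n, k))   # binary codes: which features are "on"

# Observed data = binary mixture of latent features + small noise
X = Z @ A + 0.01 * rng.normal(size=(n, d))
print(X.shape)  # (100, 20)
```

The paper's setting replaces the matrix `A` with high-order (tensor-valued) latent features; the binary selection structure `Z` is what makes the estimation combinatorial.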


Latent Feature Lasso
This paper addresses the outstanding problem of tractable estimation of LFMs via a novel atomic-norm regularization, which gives an algorithm with polynomial run-time and sample complexity without impractical assumptions on the data distribution.
Learning Binary Latent Variable Models: A Tensor Eigenpair Approach
This paper proposes a novel spectral approach to latent variable models with hidden binary units based on the eigenvectors of both the second-order moment matrix and third-order moment tensor of the observed data, and proves that under mild non-degeneracy conditions, the method consistently estimates the model parameters at the optimal parametric rate.
Tensor completion and low-n-rank tensor recovery via convex optimization
In this paper we consider sparsity on a tensor level, as given by the n-rank of a tensor. In an important sparse-vector approximation problem (compressed sensing) and the low-rank matrix recovery…
Tensor decompositions for learning latent variable models
A detailed analysis of a robust tensor power method is provided, establishing an analogue of Wedin's perturbation theorem for the singular vectors of matrices, and implies a robust and computationally tractable estimation approach for several popular latent variable models.
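The tensor power method summarized above can be illustrated on a synthetic symmetric third-order tensor with an orthogonal decomposition. This is a bare sketch of the core iteration only (no deflation or robustness modifications); all names and sizes are illustrative.

```python
import numpy as np

# Sketch of the tensor power iteration v <- T(I, v, v) / ||T(I, v, v)||
# on a symmetric 3rd-order tensor with orthogonal components.
rng = np.random.default_rng(0)

k, d = 3, 6
V = np.linalg.qr(rng.normal(size=(d, k)))[0]   # orthonormal components
w = np.array([3.0, 2.0, 1.0])                  # positive weights
T = sum(w[i] * np.einsum('a,b,c->abc', V[:, i], V[:, i], V[:, i])
        for i in range(k))

v = rng.normal(size=d)
v /= np.linalg.norm(v)
for _ in range(50):
    v = np.einsum('abc,b,c->a', T, v, v)       # contract T(I, v, v)
    v /= np.linalg.norm(v)

# v converges (quadratically, in the noiseless case) to one component, up to sign
best = int(np.argmax(np.abs(V.T @ v)))
alignment = abs(V[:, best] @ v)
```

In the full method of the paper, recovered components are deflated from `T` and the iteration is restarted to extract the remaining ones.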
Scalable Probabilistic Tensor Factorization for Binary and Count Data
A scalable probabilistic tensor factorization framework is developed that enables us to perform efficient factorization of massive binary and count tensor data; various types of constraints on the factor matrices can be incorporated under the proposed framework, providing good interpretability.
Structured Sparse Method for Hyperspectral Unmixing
A Structured Sparse regularized Nonnegative Matrix Factorization (SS-NMF) method that incorporates a graph Laplacian to encode the manifold structures embedded in the hyperspectral data space and can learn a compact space, where highly similar pixels are grouped to share correlated sparse representations.
Spectral Unmixing via Data-Guided Sparsity
This paper proposes a novel sparsity-based method by learning a data-guided map (DgMap) to describe the individual mixed level of each pixel and applies the ℓp (0 < p < 1) constraint in an adaptive manner.
Regularized Alternating Least Squares Algorithms for Non-negative Matrix/Tensor Factorization
A family of the modified Regularized Alternating Least Squares (RALS) algorithms for NMF/NTF is proposed, characterized by improved efficiency and convergence properties, especially for large-scale problems.
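The regularized ALS scheme referenced above can be sketched for the matrix (NMF) case: alternate ridge-regularized least-squares solves for each factor, projecting onto the nonnegative orthant. The choice of Tikhonov regularizer and projection here is an illustrative assumption, not the paper's exact algorithm.

```python
import numpy as np

# Sketch of regularized alternating least squares (RALS) for NMF:
# minimize ||X - W H||_F^2 + lam (||W||_F^2 + ||H||_F^2), W, H >= 0.
rng = np.random.default_rng(0)

m, n, r, lam = 30, 20, 4, 0.1
X = rng.random((m, r)) @ rng.random((r, n))  # exactly low-rank nonnegative data
W = rng.random((m, r))
H = rng.random((r, n))
I = lam * np.eye(r)

for _ in range(200):
    # Ridge-regularized LS update for H, then project onto H >= 0
    H = np.linalg.solve(W.T @ W + I, W.T @ X).clip(min=0)
    # Same update for W via the transposed system
    W = np.linalg.solve(H @ H.T + I, H @ X.T).T.clip(min=0)

err = np.linalg.norm(X - W @ H) / np.linalg.norm(X)
```

The tensor (NTF) variant alternates the same regularized solve over each factor matrix against a matricized tensor.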
Convex Tensor Decomposition via Structured Schatten Norm Regularization
It is shown theoretically that when the unknown true tensor is low-rank in a specific mode, this approach performs as well as knowing the mode with the smallest rank, and it is confirmed through numerical simulations that the theoretical prediction can precisely predict the scaling behavior of the mean squared error.
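The structured Schatten norm used in this line of work is, in its overlapped form, the sum of nuclear norms of the tensor's mode unfoldings, a convex surrogate for low multilinear (n-)rank. A minimal sketch (function name and test tensor are illustrative):

```python
import numpy as np

def overlapped_schatten(T):
    """Sum of nuclear norms of the mode unfoldings of tensor T."""
    total = 0.0
    for mode in range(T.ndim):
        unfolding = np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)
        total += np.linalg.norm(unfolding, 'nuc')  # nuclear (trace) norm
    return total

rng = np.random.default_rng(0)
# Rank-1 test tensor a ⊗ b ⊗ c: every unfolding has rank 1, so each
# nuclear norm equals the tensor's Frobenius norm.
T = np.einsum('a,b,c->abc', *[rng.normal(size=s) for s in (4, 5, 6)])
value = overlapped_schatten(T)
```

Minimizing this norm subject to data constraints penalizes all mode ranks simultaneously, which is what yields the mode-adaptivity result summarized above.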
The Convex Geometry of Linear Inverse Problems
This paper provides a general framework to convert notions of simplicity into convex penalty functions, resulting in convex optimization solutions to linear, underdetermined inverse problems. Expand