Successive Rank-One Approximations for Nearly Orthogonally Decomposable Symmetric Tensors

@article{Mu2015SuccessiveRA,
  title={Successive Rank-One Approximations for Nearly Orthogonally Decomposable Symmetric Tensors},
  author={Cun Mu and Daniel J. Hsu and Donald Goldfarb},
  journal={ArXiv},
  year={2015},
  volume={abs/1705.10404}
}
Many idealized problems in signal processing, machine learning, and statistics can be reduced to the problem of finding the symmetric canonical decomposition of an underlying symmetric and orthogonally decomposable (SOD) tensor. Drawing inspiration from the matrix case, the successive rank-one approximation (SROA) scheme has been proposed and shown to yield this tensor decomposition exactly, and a plethora of numerical methods have thus been developed for the tensor rank-one approximation… 
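
To make the SROA scheme concrete, here is a minimal sketch (not the paper's implementation): each round fits a symmetric rank-one term and deflates it from the tensor. The rank-one subproblem is approximated below by a symmetric higher-order power iteration with random restarts; the third-order setting, the function names (`rank_one_fit`, `sroa`), and the parameter choices are illustrative assumptions.

```python
import numpy as np

def rank_one_fit(T, iters=100, seed=0):
    """Approximate a best symmetric rank-one fit of a 3rd-order symmetric
    tensor T via the symmetric higher-order power method (illustrative)."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(T.shape[0])
    x /= np.linalg.norm(x)
    for _ in range(iters):
        y = np.einsum('ijk,j,k->i', T, x, x)   # contract T with x in two modes
        nrm = np.linalg.norm(y)
        if nrm == 0:
            break
        x = y / nrm
    lam = np.einsum('ijk,i,j,k->', T, x, x, x)  # generalized Rayleigh quotient
    return lam, x

def sroa(T, rank, restarts=10):
    """Successive rank-one approximation: greedily fit and deflate."""
    T = T.copy()
    lams, vecs = [], []
    for _ in range(rank):
        # keep the best rank-one candidate over several random restarts
        lam, x = max((rank_one_fit(T, seed=s) for s in range(restarts)),
                     key=lambda pair: abs(pair[0]))
        lams.append(lam)
        vecs.append(x)
        T = T - lam * np.einsum('i,j,k->ijk', x, x, x)   # deflate
    return np.array(lams), np.array(vecs)
```

On an exactly SOD input T = Σ_i λ_i v_i ⊗ v_i ⊗ v_i with orthonormal v_i, this greedy loop recovers the components (up to sign conventions), which is the matrix-style behavior the abstract alludes to; the paper studies how robustly the scheme behaves when the input is only nearly SOD.
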

Citations

Recovering orthogonal tensors under arbitrarily strong, but locally correlated, noise
TLDR
The problem of recovering an orthogonally decomposable tensor, a subset of whose entries are distorted by noise of arbitrarily large magnitude, is shown to be solvable through a system of coupled Sylvester-like equations, and their solution can be accelerated by an alternating solver.
Perturbation Bounds for (Nearly) Orthogonally Decomposable Tensors
We develop deterministic perturbation bounds for singular values and vectors of orthogonally decomposable tensors, in a spirit similar to classical results for matrices such as those due to Weyl, …
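
For context, the matrix prototype of such bounds is Weyl's perturbation inequality: if $A$ and $E$ are real symmetric matrices and eigenvalues are listed in decreasing order, then

$$ \bigl|\lambda_i(A + E) - \lambda_i(A)\bigr| \;\le\; \|E\|_2 \quad \text{for all } i; $$

the cited work develops analogues of such bounds for the singular values and vectors of (nearly) orthogonally decomposable tensors.
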
Tensor Decompositions via Two-Mode Higher-Order SVD (HOSVD)
TLDR
This new method, built on Kruskal's uniqueness theorem to decompose symmetric, nearly orthogonally decomposable tensors, provably handles a greater level of noise than previous methods and achieves high estimation accuracy.
Perturbation Bounds for Orthogonally Decomposable Tensors and Their Applications in High Dimensional Data Analysis
TLDR
The implications of deterministic perturbation bounds for singular values and vectors of orthogonally decomposable tensors are illustrated through three connected yet seemingly different high dimensional data analysis tasks: tensor SVD, tensor regression and estimation of latent variable models, leading to new insights in each of these settings.
Linear convergence of an alternating polar decomposition method for low rank orthogonal tensor approximations
TLDR
An improved version, iAPD, of the classical APD is proposed; it exhibits overall sublinear convergence with an explicit rate sharper than the usual $O(1/k)$ for first-order methods in optimization.
Optimal orthogonal approximations to symmetric tensors cannot always be chosen symmetric
TLDR
It is shown that optimal orthogonal approximations of rank greater than one cannot always be chosen to be symmetric.
Successive Partial-Symmetric Rank-One Algorithms for Almost Unitarily Decomposable Conjugate Partial-Symmetric Tensors
In this paper, we introduce the almost unitarily decomposable conjugate partial-symmetric tensors, which are different from the commonly studied orthogonally decomposable tensors by involving the …
Robust Eigenvectors of Symmetric Tensors
TLDR
This paper shows that whenever an eigenvector is a generator of the symmetric decomposition of a symmetric tensor, then (if the order of the tensor is sufficiently high) this eigenvector is robust, i.e., it is an attracting fixed point of the tensor power method.
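
For reference, using standard definitions (not specific to the cited paper): for a symmetric order-$m$ tensor $\mathcal{T}$, the tensor power method iterates

$$ x \;\mapsto\; \frac{\mathcal{T}(I, x, \ldots, x)}{\bigl\|\mathcal{T}(I, x, \ldots, x)\bigr\|_2}, \qquad \bigl[\mathcal{T}(I, x, \ldots, x)\bigr]_i \;=\; \sum_{i_2,\ldots,i_m} \mathcal{T}_{i\, i_2 \cdots i_m}\, x_{i_2} \cdots x_{i_m}, $$

an eigenvector satisfies $\mathcal{T}(I, x, \ldots, x) = \lambda x$, and a robust eigenvector in the sense above is one at which this map is locally attracting.
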
Greedy Approaches to Symmetric Orthogonal Tensor Decomposition
TLDR
This paper reviews, establishes, and compares the perturbation bounds for two natural types of incremental rank-one approximation approaches for finding the symmetric and orthogonal decomposition of a tensor.
...
...

References

Showing 1–10 of 61 references
Tensor principal component analysis via convex optimization
TLDR
The tensor PCA problem can be solved by means of matrix optimization under a rank-one constraint, for which two solution methods are proposed: imposing a nuclear norm penalty in the objective to enforce a low-rank solution, and relaxing the rank-one constraint by semidefinite programming.
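
Schematically (a paraphrase; the precise symmetry constraints on the matrix variable are those given in the cited paper), for an order-4 symmetric tensor $\mathcal{F}$ with square unfolding $M(\mathcal{F}) \in \mathbb{R}^{n^2 \times n^2}$, the tensor PCA problem becomes a rank-one-constrained matrix problem:

$$ \max_{\|x\|_2 = 1} \mathcal{F}(x,x,x,x) \;=\; \max\Bigl\{\, \langle M(\mathcal{F}), Y\rangle \;:\; Y \succeq 0,\ \operatorname{tr}(Y) = 1,\ \operatorname{rank}(Y) = 1,\ Y \text{ suitably symmetric} \,\Bigr\}, $$

with $Y$ playing the role of $\operatorname{vec}(xx^{\mathsf T})\operatorname{vec}(xx^{\mathsf T})^{\mathsf T}$; the two relaxations summarized above then replace the rank-one constraint with a nuclear-norm penalty or handle it by semidefinite programming.
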
Tensor Rank and the Ill-Posedness of the Best Low-Rank Approximation Problem
TLDR
It is argued that the naive approach to this problem is doomed to failure because, unlike matrices, tensors of order 3 or higher can fail to have best rank-r approximations, and a natural way of overcoming the ill-posedness of the low-rank approximation problem is proposed by using weak solutions when true solutions do not exist.
Subtracting a best rank-1 approximation may increase tensor rank
  • A. Stegeman, P. Comon
  • Computer Science, Mathematics
  • 2009 17th European Signal Processing Conference
  • 2009
Tensor decompositions for learning latent variable models
TLDR
A detailed analysis of a robust tensor power method is provided, establishing an analogue of Wedin's perturbation theorem for the singular vectors of matrices; this implies a robust and computationally tractable estimation approach for several popular latent variable models.
On the Best Rank-1 and Rank-(R1, R2, ..., RN) Approximation of Higher-Order Tensors
TLDR
A multilinear generalization of the best rank-R approximation problem for matrices, namely, the approximation of a given higher-order tensor, in an optimal least-squares sense, by a tensor that has a prespecified column rank value, row rank value, etc.
Properties and methods for finding the best rank-one approximation to higher-order tensors
TLDR
This paper reformulates the polynomial optimization problem as a matrix program, shows the equivalence between these two problems, and proves that there is no duality gap between the reformulation and its Lagrangian dual problem.
Semidefinite Relaxations for Best Rank-1 Tensor Approximations
TLDR
This paper proposes semidefinite relaxations, based on sum of squares representations, to solve the problem of finding best rank-1 approximations for both symmetric and nonsymmetric tensors.
On the Best Rank-1 Approximation of Higher-Order Supersymmetric Tensors
TLDR
It is shown that a symmetric version of the above method converges under assumptions of convexity (or concavity) for the functional induced by the tensor in question, assumptions that are very often satisfied in practical applications.
Dictionary Learning and Tensor Decomposition via the Sum-of-Squares Method
TLDR
This is the first algorithm for tensor decomposition that works in the constant spectral-norm noise regime, where there is no guarantee that the local optima of T and T' have similar structures.
Multiarray Signal Processing: Tensor decomposition meets compressed sensing
...
...