Tensor Rank and the Ill-Posedness of the Best Low-Rank Approximation Problem

@article{Silva2006TensorRA,
  title={Tensor Rank and the Ill-Posedness of the Best Low-Rank Approximation Problem},
  author={Vin de Silva and Lek-Heng Lim},
  journal={SIAM J. Matrix Anal. Appl.},
  year={2008},
  volume={30},
  pages={1084-1127}
}
There has been continued interest in seeking a theorem describing optimal low-rank approximations to tensors of order 3 or higher that parallels the Eckart-Young theorem for matrices. In this paper, we argue that the naive approach to this problem is doomed to failure because, unlike matrices, tensors of order 3 or higher can fail to have best rank-$r$ approximations. The phenomenon is much more widespread than one might suspect: examples of this failure can be constructed over a wide range of… 
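For matrices the problem is well posed: by the Eckart-Young theorem, truncating the SVD after $r$ terms yields a best rank-$r$ approximation. A standard order-3 counterexample of the kind analyzed in the paper: for linearly independent pairs $x_i, y_i$, the rank-3 tensor

$$A = x_1 \otimes x_2 \otimes y_3 + x_1 \otimes y_2 \otimes x_3 + y_1 \otimes x_2 \otimes x_3$$

is the limit of the rank-2 sequence

$$A_n = n\left(x_1 + \tfrac{1}{n}y_1\right) \otimes \left(x_2 + \tfrac{1}{n}y_2\right) \otimes \left(x_3 + \tfrac{1}{n}y_3\right) - n\, x_1 \otimes x_2 \otimes x_3,$$

since expanding the product gives $A_n = A + O(1/n)$. The infimum of $\|A - B\|$ over rank-2 tensors $B$ is therefore $0$ but is never attained: $A$ has no best rank-2 approximation.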

Citations

Subtracting a best rank-1 approximation may increase tensor rank

  • A. Stegeman, P. Comon
  • Computer Science, Mathematics
    2009 17th European Signal Processing Conference
  • 2009

It is shown that for generic 2×2×2 tensors (which have rank 2 or 3), subtracting a best rank-1 approximation results in a tensor that has rank 3 and lies on the boundary between the rank-2 and rank-3 sets, so subtracting a best rank-1 approximation does not necessarily decrease tensor rank.
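For real 2×2×2 tensors the rank classification underlying this result is classical. Writing the tensor as a pair of $2 \times 2$ frontal slices $(X_1, X_2)$, Kruskal's discriminant (Cayley's hyperdeterminant)

$$\Delta = \bigl(\det(X_1 + X_2) - \det X_1 - \det X_2\bigr)^2 - 4 \det X_1 \det X_2$$

satisfies $\Delta > 0$ for rank 2, $\Delta < 0$ for rank 3, and $\Delta = 0$ on the boundary between the two sets. The result above says that subtracting a best rank-1 term from a generic tensor lands the residual on this $\Delta = 0$ boundary, with rank 3.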

Low-Rank Approximation and Completion of Positive Tensors

  • A. Aswani
  • Computer Science
    SIAM J. Matrix Anal. Appl.
  • 2016
The approach uses algebraic topology to define a new (numerically well-posed) decomposition for positive tensors, which is equivalent to the standard tensor decomposition in important cases, and it is proved that this decomposition can be exactly reformulated as a convex optimization problem.

On the Global Convergence of the High-Order Power Method for Rank-One Tensor Approximation

This paper presents a rigorous convergence analysis of the power method for the rank-one approximation of tensors beyond matrices and compares it with global optimization.

On the Global Convergence of the Alternating Least Squares Method for Rank-One Approximation to Generic Tensors

This paper partially addresses the missing piece by showing that for almost all tensors, the iterates generated by the alternating least squares method for the rank-one approximation converge globally.
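For rank-one approximation the higher-order power method and alternating least squares coincide: each step updates one factor by contracting the tensor against the other two. A minimal sketch for an order-3 tensor, assuming NumPy (function and variable names are illustrative):

import numpy as np

def rank1_als(T, iters=100, seed=0):
    # Rank-1 approximation of an order-3 tensor by alternating least
    # squares (equivalently, the higher-order power method): each factor
    # is updated by contracting T against the other two, then normalized.
    rng = np.random.default_rng(seed)
    a, b, c = (rng.standard_normal(n) for n in T.shape)
    b /= np.linalg.norm(b)
    c /= np.linalg.norm(c)
    lam = 0.0
    for _ in range(iters):
        a = np.einsum('ijk,j,k->i', T, b, c); a /= np.linalg.norm(a)
        b = np.einsum('ijk,i,k->j', T, a, c); b /= np.linalg.norm(b)
        c = np.einsum('ijk,i,j->k', T, a, b)
        lam = np.linalg.norm(c); c /= lam   # lam is the component weight
    return lam, a, b, c

T = np.random.default_rng(1).standard_normal((3, 4, 5))
lam, a, b, c = rank1_als(T)
approx = lam * np.einsum('i,j,k->ijk', a, b, c)
print("relative error:", np.linalg.norm(T - approx) / np.linalg.norm(T))

Each update maximizes $\langle T, a \otimes b \otimes c \rangle$ over one unit-norm factor with the other two held fixed, so the objective is monotone; the two papers above analyze when the iterates themselves converge globally (for almost all tensors, in the ALS case).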

Guarantees for existence of a best canonical polyadic approximation of a noisy low-rank tensor

Deterministic bounds for the existence of best low-rank tensor approximations over K = R or K = C are given, and it is shown that every tensor of K-rank R inside this ball has a unique canonical polyadic decomposition.

Some results concerning rank-one truncated steepest descent directions in tensor spaces

  • A. Uschmajew
  • Computer Science
    2015 International Conference on Sampling Theory and Applications (SampTA)
  • 2015
This work presents a conceptual review of greedy rank-one methods for finding low-rank solutions to matrix or tensor optimization tasks, and provides some new insights.

Tensor Network Ranks

  • K. Ye
  • Mathematics, Computer Science
  • 2020
It is argued that the near-universal practice of assuming that a function, matrix, or tensor has low rank may be ill-justified, and it is shown that one may vastly expand these classical notions of rank.

Non-iterative low-multilinear-rank tensor approximation with application to decomposition in rank-(1,L,L) terms

An iterative deflationary approach for computing a decomposition of a tensor into low-multilinear-rank blocks, termed DBTD, is introduced; it outperforms existing algorithms when the blocks are not too correlated and is much less sensitive to discrepancies among the blocks' norms.
...

References

Showing 1-10 of 94 references

On the Best Rank-1 Approximation of Higher-Order Supersymmetric Tensors

It is shown that a symmetric version of the higher-order power method converges under assumptions of convexity (or concavity) of the functional induced by the tensor in question, assumptions that are very often satisfied in practical applications.

On the Best Rank-1 and Rank-(R1, R2, ..., RN) Approximation of Higher-Order Tensors

A multilinear generalization of the best rank-R approximation problem for matrices, namely, the approximation of a given higher-order tensor, in an optimal least-squares sense, by a tensor that has prespecified column rank, row rank, etc.
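In contrast with the rank-R (CP) problem of the main paper, a best rank-$(R_1, R_2, \ldots, R_N)$ approximation always exists, because the set of tensors of bounded multilinear rank is closed; De Lathauwer et al. compute it with an ALS-type scheme (higher-order orthogonal iteration). A common non-iterative surrogate is the truncated higher-order SVD, sketched here for order 3 assuming NumPy (names are illustrative):

import numpy as np

def unfold(T, mode):
    # Mode-n unfolding: mode-n fibers of T become the columns of a matrix.
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def truncated_hosvd(T, ranks):
    # Rank-(R1, R2, R3) approximation via the truncated higher-order SVD:
    # project each mode onto its leading left singular vectors. This is
    # quasi-optimal (error within sqrt(3) of the best achievable) and is
    # a standard initializer for the ALS-type (HOOI) iteration.
    U = [np.linalg.svd(unfold(T, m), full_matrices=False)[0][:, :r]
         for m, r in enumerate(ranks)]
    core = np.einsum('ijk,ia,jb,kc->abc', T, U[0], U[1], U[2])
    return np.einsum('abc,ia,jb,kc->ijk', core, U[0], U[1], U[2])

T = np.random.default_rng(0).standard_normal((6, 7, 8))
A = truncated_hosvd(T, (2, 3, 3))
print("relative error:", np.linalg.norm(T - A) / np.linalg.norm(T))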

Rank-One Approximation to High Order Tensors

The singular value decomposition has been used extensively in engineering and statistical applications; certain properties of this decomposition are investigated, as well as numerical algorithms.

Low-Rank Approximation of Generic p × q × 2 Arrays and Diverging Components in the Candecomp/Parafac Model

  • A. Stegeman
  • Mathematics, Computer Science
    SIAM J. Matrix Anal. Appl.
  • 2008
It is shown that if a best rank-$R$ approximation does not exist, then any sequence of CP updates will exhibit diverging CP components, which implies that several components are highly correlated in all three modes and their component weights become arbitrarily large.
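This is exactly the behavior of the sequence $A_n$ shown above: its two rank-1 components have weights $n$ and $-n$ that grow without bound while the components become increasingly collinear in every mode, so their sum stays bounded even as the individual terms diverge.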

Tensor Approximation and Signal Processing Applications

A detailed convergence analysis is given of a recently introduced tensor equivalent of the power method for determining a rank-1 approximant to a matrix, and a novel version adapted to the symmetric case is developed, showing great promise for resolving the local-extrema problem.

Symmetric Tensors and Symmetric Tensor Rank

The notion of generic symmetric rank is discussed; due to the work of Alexander and Hirschowitz, it is now known for all values of dimension and order.
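The Alexander-Hirschowitz theorem referred to here: over $\mathbb{C}$, a generic symmetric tensor of order $d \ge 3$ in $n$ variables has symmetric rank

$$R_s(n,d) = \left\lceil \frac{1}{n}\binom{n+d-1}{d} \right\rceil,$$

except for $(d,n) \in \{(3,5), (4,3), (4,4), (4,5)\}$, where the generic rank is this value plus one; for $d = 2$ the generic rank is $n$.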

On condition numbers and the distance to the nearest ill-posed problem

The condition number of a problem measures the sensitivity of the answer to small changes in the input. We call the problem ill-posed if its condition number is infinite. It turns out that for …
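The matrix case illustrates the theme and ties it to Eckart-Young: for invertible $A \in \mathbb{R}^{n \times n}$ with condition number $\kappa(A) = \|A\|_2 \|A^{-1}\|_2$,

$$\min_{\operatorname{rank}(B) < n} \|A - B\|_2 = \sigma_{\min}(A) = \frac{\|A\|_2}{\kappa(A)},$$

so $1/\kappa(A)$ is exactly the relative distance from $A$ to the nearest singular, i.e. ill-posed, problem. For tensors, the main paper shows that the analogous nearest-point problems may have no solution at all.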

Genericity And Rank Deficiency Of High Order Symmetric Tensors

Blind identification of under-determined mixtures (UDM) is involved in numerous applications, including multi-way factor analysis (MWA) and signal processing. In the latter case, the use of …

Degeneracy in Candecomp/Parafac explained for p × p × 2 arrays of rank p + 1 or higher

The Candecomp/Parafac (CP) model decomposes a three-way array into a prespecified number R of rank-1 arrays and a residual array, in which the sum of squares of the residual array is minimized. The …

A Counterexample to the Possibility of an Extension of the Eckart-Young Low-Rank Approximation Theorem for the Orthogonal Rank Tensor Decomposition

  • T. Kolda
  • Computer Science, Mathematics
    SIAM J. Matrix Anal. Appl.
  • 2003
A counterexample to the extension of the Eckart-Young SVD approximation theorem to the orthogonal rank tensor decomposition is presented, answering an open question previously posed by Kolda.
...