Subtracting a best rank-1 approximation may increase tensor rank

@article{Stegeman2009SubtractingAB,
  title={Subtracting a best rank-1 approximation may increase tensor rank},
  author={Alwin Stegeman and Pierre Comon},
  journal={2009 17th European Signal Processing Conference},
  year={2009},
  pages={505-509}
}
  • A. Stegeman, P. Comon
  • Published 2 June 2009
  • Computer Science, Mathematics
  • 2009 17th European Signal Processing Conference

Citations

Tensor Deflation for CANDECOMP/PARAFAC— Part I: Alternating Subspace Update Algorithm

A novel deflation method for the CP decomposition of order-3 tensors of size R × R × R and rank R, with a computational cost of O(R³) per iteration, which is lower than the cost of the ALS algorithm for the overall CP decomposition.
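
To make the deflation idea concrete, here is a minimal numpy sketch (not the alternating subspace update algorithm of the cited paper): it computes a locally best rank-1 approximation of a third-order tensor by alternating least squares, i.e. the higher-order power method, and subtracts it. As the title paper shows, each subtraction may increase the rank of the residual, so a greedy fit obtained this way should be checked against a direct low-rank ALS fit.

```python
import numpy as np

def rank1_als(T, iters=100):
    """Locally best rank-1 approximation of a 3rd-order tensor by
    alternating least squares (the higher-order power method).
    Returns the weight and three unit-norm factor vectors."""
    rng = np.random.default_rng(0)
    a = rng.standard_normal(T.shape[0]); a /= np.linalg.norm(a)
    b = rng.standard_normal(T.shape[1]); b /= np.linalg.norm(b)
    c = rng.standard_normal(T.shape[2]); c /= np.linalg.norm(c)
    for _ in range(iters):
        a = np.einsum('ijk,j,k->i', T, b, c); a /= np.linalg.norm(a)
        b = np.einsum('ijk,i,k->j', T, a, c); b /= np.linalg.norm(b)
        c = np.einsum('ijk,i,j->k', T, a, b); c /= np.linalg.norm(c)
    lam = np.einsum('ijk,i,j,k->', T, a, b, c)  # optimal weight for unit factors
    return lam, a, b, c

def deflate(T, r):
    """Greedy deflation: subtract r successive rank-1 approximations.
    Caution: each residual may have HIGHER rank than its predecessor,
    which is exactly the phenomenon studied in the title paper."""
    R = T.copy()
    for _ in range(r):
        lam, a, b, c = rank1_als(R)
        R = R - lam * np.einsum('i,j,k->ijk', a, b, c)
    return R
```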

Rank Splitting for CANDECOMP/PARAFAC

This paper extends the rank-1 tensor deflation method to the block deflation problem, when at least two factor matrices have full column rank, and shows that the block deflation has a complexity lower than the cost of the ALS algorithm for the overall CP decomposition.

Uniqueness of Nonnegative Tensor Approximations

A singular vector variant of the Perron-Frobenius theorem for positive tensors is proved and applied to show that a best nonnegative rank-r approximation of a positive tensor can never be obtained by deflation.

Canonical Forms of 2 × 2 × 2 and 2 × 2 × 2 × 2 Tensors

The rank and canonical forms of a tensor are concepts that naturally generalize those of a matrix. The question of how to determine the rank of a tensor has been widely studied in the literature…

A Constructive Algorithm for Decomposing a Tensor into a Finite Sum of Orthonormal Rank-1 Terms

A constructive algorithm that decomposes an arbitrary real tensor into a finite sum of orthonormal rank-1 outer products and allows, for the first time, a complete characterization of all tensors orthogonal to the original tensor.

Orthogonal Rank-Two Tensor Approximation: A Modified High-Order Power Method and Its Convergence Analysis

The conventional high-order power method is modified to handle the orthogonality constraint, and a rigorous convergence analysis is provided.

Orthogonal Low Rank Tensor Approximation: Alternating Least Squares Method and Its Global Convergence

The conventional high-order power method is modified to enforce the desired orthogonality via the polar decomposition, and it is shown that for almost all tensors the orthogonal alternating least squares method converges globally.
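
The polar-decomposition step mentioned here is easy to sketch: the orthonormal factor of the polar decomposition, computed from a thin SVD, is the closest matrix with orthonormal columns in the Frobenius norm. A minimal numpy sketch follows (the function name and the usage comment are illustrative, not code from the cited paper).

```python
import numpy as np

def polar_orthonormal(A):
    """Closest matrix with orthonormal columns to A in Frobenius norm,
    i.e. the orthogonal factor Q of the polar decomposition A = Q H."""
    U, _, Vt = np.linalg.svd(A, full_matrices=False)
    return U @ Vt

# Inside an orthogonality-constrained ALS sweep, the unconstrained
# least-squares update of a factor matrix would be re-projected:
#   B = polar_orthonormal(B_unconstrained)   # hypothetical usage
```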

On the Global Convergence of the Alternating Least Squares Method for Rank-One Approximation to Generic Tensors

This paper partially addresses the missing piece by showing that for almost all tensors, the iterates generated by the alternating least squares method for the rank-one approximation converge globally.

Subtracting a best rank-1 approximation from p × p × 2 (p ≥ 2) tensors

A special form of p × p × 2 (p ≥ 2) tensors under multilinear orthonormal transformations is introduced, some interesting properties are presented, and it is confirmed that consecutively subtracting best rank-1 approximations may not lead to a best low-rank approximation of a tensor.
...

References

Showing 1–10 of 55 references

Tensor Rank and the Ill-Posedness of the Best Low-Rank Approximation Problem

It is argued that the naive approach to this problem is doomed to failure because, unlike matrices, tensors of order 3 or higher can fail to have best rank-r approximations, and a natural way of overcoming the ill-posedness of the low-rank approximation problem is proposed by using weak solutions when true solutions do not exist.
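
A standard worked example of this failure, due to de Silva and Lim, can be stated in two lines: for linearly independent vectors x and y, the rank-3 tensor W below is the limit of a sequence of rank-2 tensors, so the infimum of the rank-2 approximation error is not attained and no best rank-2 approximation exists.

```latex
W = x \otimes x \otimes y + x \otimes y \otimes x + y \otimes x \otimes x,
\qquad \operatorname{rank}(W) = 3,
```
```latex
T_n = n\Bigl(x + \tfrac{1}{n}\,y\Bigr)^{\otimes 3} - n\, x^{\otimes 3},
\qquad \operatorname{rank}(T_n) \le 2,
\qquad T_n \xrightarrow[n\to\infty]{} W .
```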

On the Best Rank-1 and Rank-(R1, R2, …, RN) Approximation of Higher-Order Tensors

A multilinear generalization of the best rank-R approximation problem for matrices, namely, the approximation of a given higher-order tensor, in an optimal least-squares sense, by a tensor with prespecified column rank value, row rank value, etc.
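
For context, a rank-(R1, R2, R3) approximation of a third-order tensor can be obtained by the truncated higher-order SVD, sketched below in numpy; the result is quasi-optimal rather than best in the least-squares sense, and is typically used to initialize an iterative refinement such as HOOI. The function below is an illustrative sketch, not code from the cited paper.

```python
import numpy as np

def truncated_hosvd(T, R1, R2, R3):
    """Truncated higher-order SVD of a 3rd-order tensor: leading left
    singular vectors of each mode-n unfolding, plus the projected core.
    Quasi-optimal, not the best rank-(R1, R2, R3) approximation."""
    U1 = np.linalg.svd(T.reshape(T.shape[0], -1),
                       full_matrices=False)[0][:, :R1]
    U2 = np.linalg.svd(np.moveaxis(T, 1, 0).reshape(T.shape[1], -1),
                       full_matrices=False)[0][:, :R2]
    U3 = np.linalg.svd(np.moveaxis(T, 2, 0).reshape(T.shape[2], -1),
                       full_matrices=False)[0][:, :R3]
    core = np.einsum('ijk,ia,jb,kc->abc', T, U1, U2, U3)  # projected core
    approx = np.einsum('abc,ia,jb,kc->ijk', core, U1, U2, U3)
    return approx, (U1, U2, U3), core
```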

On the Best Rank-1 Approximation of Higher-Order Supersymmetric Tensors

It is shown that a symmetric version of the above method converges under assumptions of convexity (or concavity) for the functional induced by the tensor in question, assumptions that are very often satisfied in practical applications.

Symmetric Tensors and Symmetric Tensor Rank

The notion of the generic symmetric rank is discussed, which, due to the work of Alexander and Hirschowitz, is now known for any values of dimension and order.

On the best rank-1 approximation to higher-order symmetric tensors

Low-Rank Approximation of Generic p × q × 2 Arrays and Diverging Components in the Candecomp/Parafac Model

  • A. Stegeman
  • Mathematics, Computer Science
    SIAM J. Matrix Anal. Appl.
  • 2008
It is shown that if a best rank-R approximation does not exist, then any sequence of CP updates will exhibit diverging CP components, which implies that several components become highly correlated in all three modes while their component weights become arbitrarily large.
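
The divergence can be seen in the de Silva–Lim construction shown after the first reference above: written in CP form, the approximating sequence has component weights n and −n, which grow without bound while the two rank-1 terms become increasingly collinear, even though the sequence itself converges.

```latex
T_n = \underbrace{\,n\,}_{\to\,\infty}\Bigl(x + \tfrac{1}{n}\,y\Bigr)^{\otimes 3}
    + \underbrace{(-n)}_{\to\,-\infty}\, x^{\otimes 3}
    \;\longrightarrow\; x \otimes x \otimes y + x \otimes y \otimes x + y \otimes x \otimes x .
```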

Tensor Decompositions and Applications

This survey provides an overview of higher-order tensor decompositions, their applications, and available software. A tensor is a multidimensional or N-way array…
...