On the Global Convergence of the Alternating Least Squares Method for Rank-One Approximation to Generic Tensors

@article{Wang2014OnTG,
  title={On the Global Convergence of the Alternating Least Squares Method for Rank-One Approximation to Generic Tensors},
  author={Liqi Wang and Moody T. Chu},
  journal={SIAM J. Matrix Anal. Appl.},
  year={2014},
  volume={35},
  pages={1058-1072}
}
  • Liqi Wang, M. Chu
  • Published 7 August 2014
  • Computer Science
  • SIAM J. Matrix Anal. Appl.
Tensor decomposition has important applications in various disciplines, but it remains an extremely challenging task even to this date. A slightly more manageable endeavor has been to find a low rank approximation in place of the decomposition. Even for this less stringent undertaking, it is an established fact that tensors beyond matrices can fail to have best low rank approximations, with the notable exception that the best rank-one approximation always exists for tensors of any order. Toward… 
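
For the rank-one case, the ALS iteration analyzed in the paper coincides with the higher-order power method: each factor is updated in turn by contracting the tensor against the remaining factors and renormalizing. Below is a minimal NumPy sketch for a third-order tensor; the function name, tolerance, and iteration cap are illustrative choices, not taken from the paper.

```python
import numpy as np

def als_rank_one(T, max_iter=500, tol=1e-10, seed=None):
    """ALS (higher-order power method) for a best rank-one
    approximation  lam * x (x) y (x) z  of a 3rd-order tensor T."""
    rng = np.random.default_rng(seed)
    l, m, n = T.shape
    # random unit-norm starting factors
    y = rng.standard_normal(m); y /= np.linalg.norm(y)
    z = rng.standard_normal(n); z /= np.linalg.norm(z)
    lam = 0.0
    for _ in range(max_iter):
        # each step solves a least-squares problem in one factor
        # with the other two held fixed, then renormalizes
        x = np.einsum('ijk,j,k->i', T, y, z); x /= np.linalg.norm(x)
        y = np.einsum('ijk,i,k->j', T, x, z); y /= np.linalg.norm(y)
        z = np.einsum('ijk,i,j->k', T, x, y)
        lam_new = np.linalg.norm(z); z /= lam_new
        if abs(lam_new - lam) < tol:
            return lam_new, x, y, z
        lam = lam_new
    return lam, x, y, z
```

For a generic (almost every) tensor, the paper's result is that the whole iterate sequence of this scheme converges to a single limit point, rather than merely having convergent subsequences.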

Citations

Convergence of Alternating Least Squares Optimisation for Rank-One Approximation to High Order Tensors

This analysis focuses on the global convergence and the rate of convergence of the ALS algorithm for the rank-one approximation problem, and shows that the ALS method can converge sublinearly, Q-linearly, and even Q-superlinearly.

Orthogonal Low Rank Tensor Approximation: Alternating Least Squares Method and Its Global Convergence

The conventional high-order power method is modified to enforce the desired orthogonality via the polar decomposition, and it is shown that for almost all tensors the orthogonal alternating least squares method converges globally.
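
The orthogonality mentioned here is typically enforced by replacing the plain normalization step of the power method with a polar decomposition: the nearest matrix with orthonormal columns to a factor update A (in the Frobenius norm) is the orthogonal polar factor of A, computable from a thin SVD. A small sketch; the helper name `polar_factor` is our own:

```python
import numpy as np

def polar_factor(A):
    """Nearest matrix with orthonormal columns to A in the
    Frobenius norm: the orthogonal factor of the polar
    decomposition A = U H, computed via a thin SVD."""
    W, _, Vt = np.linalg.svd(A, full_matrices=False)
    return W @ Vt
```

In an orthogonal ALS sweep, each factor-matrix update would pass through such a step instead of columnwise normalization.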

Some results concerning rank-one truncated steepest descent directions in tensor spaces

  • A. Uschmajew
  • Computer Science
    2015 International Conference on Sampling Theory and Applications (SampTA)
  • 2015
This work presents a conceptual review of greedy rank-one methods for finding low-rank solutions to matrix or tensor optimization tasks, and provides some new insights.

Global Rank-1 Approximation for Order-3 Tensors

The empirical results of two investigations are reported, finding that the rank-1 approximation problem can easily have many local solutions and that most of the lower rank approximation methods available in the literature might have severely missed the target.

Convergence Analysis of Alternating Direction Methods: A General Framework and Its Applications to Tensor Approximations

For problems involving multiple variables, the notion of solving a sequence of simplified problems by fixing all but one variable at a time and alternating among the variables has been exploited in a…

Linear convergence of an alternating polar decomposition method for low rank orthogonal tensor approximations

An improved version, iAPD, of the classical APD is proposed; it exhibits overall sublinear convergence with an explicit rate sharper than the usual $O(1/k)$ for first-order methods in optimization.

On the convergence of higher-order orthogonal iteration

The higher-order orthogonal iteration (HOOI) has been popularly used for finding a best low-multilinear rank approximation of a tensor. However, its convergence is still an open question. In…
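
HOOI alternately refreshes each factor matrix with the dominant left singular vectors of the tensor contracted with the other current factors. A minimal sketch for a third-order tensor and multilinear rank (r1, r2, r3); the truncated-HOSVD initialization and fixed iteration count are illustrative assumptions, not prescribed by the cited work:

```python
import numpy as np

def hooi(T, ranks, n_iter=100):
    """Higher-order orthogonal iteration for a rank-(r1, r2, r3)
    approximation of a 3rd-order tensor T."""
    r1, r2, r3 = ranks
    # initialize with the truncated HOSVD of each unfolding
    U = np.linalg.svd(T.reshape(T.shape[0], -1))[0][:, :r1]
    V = np.linalg.svd(np.moveaxis(T, 1, 0).reshape(T.shape[1], -1))[0][:, :r2]
    W = np.linalg.svd(np.moveaxis(T, 2, 0).reshape(T.shape[2], -1))[0][:, :r3]
    for _ in range(n_iter):
        # each factor: dominant left singular vectors of the
        # unfolding of T contracted with the other two factors
        Y = np.einsum('ijk,jb,kc->ibc', T, V, W)
        U = np.linalg.svd(Y.reshape(Y.shape[0], -1))[0][:, :r1]
        Y = np.einsum('ijk,ia,kc->jac', T, U, W)
        V = np.linalg.svd(Y.reshape(Y.shape[0], -1))[0][:, :r2]
        Y = np.einsum('ijk,ia,jb->kab', T, U, V)
        W = np.linalg.svd(Y.reshape(Y.shape[0], -1))[0][:, :r3]
    core = np.einsum('ijk,ia,jb,kc->abc', T, U, V, W)
    return core, U, V, W
```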

Alternating Least Squares as Moving Subspace Correction

This work provides an alternative and conceptually simple derivation of the asymptotic convergence rate of the two-sided block power method of numerical linear algebra for computing the dominant singular subspaces of a rectangular matrix.
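
The two-sided block power method in question alternates orthonormalizations: with the current right basis V, take U as an orthonormal basis of range(A V), then refresh V from range(Aᵀ U). A minimal sketch (function name, iteration count, and random initialization are ours):

```python
import numpy as np

def block_power(A, r, n_iter=200, seed=None):
    """Two-sided block power method for the dominant r-dimensional
    left and right singular subspaces of a rectangular matrix A."""
    rng = np.random.default_rng(seed)
    V, _ = np.linalg.qr(rng.standard_normal((A.shape[1], r)))
    for _ in range(n_iter):
        U, _ = np.linalg.qr(A @ V)    # left basis from range(A V)
        V, _ = np.linalg.qr(A.T @ U)  # right basis from range(A^T U)
    return U, V
```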

Convergence analysis of an SVD-based algorithm for the best rank-1 tensor approximation

Convergence rate analysis for the higher order power method in best rank one approximations of tensors

It is established that the sequence generated by HOPM always converges globally and R-linearly for orthogonally decomposable tensors of order at least 3; for almost all tensors, all singular vector tuples are nondegenerate, and so HOPM "typically" exhibits a global R-linear convergence rate.

References

Showing 1-10 of 49 references

Tensor Rank and the Ill-Posedness of the Best Low-Rank Approximation Problem

It is argued that the naive approach to this problem is doomed to failure because, unlike matrices, tensors of order 3 or higher can fail to have best rank-r approximations. A natural way of overcoming this ill-posedness of the low-rank approximation problem is proposed: using weak solutions when true solutions do not exist.
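
The classical example behind this claim (due to de Silva and Lim; stated here for concreteness, not quoted from the snippet) is a sequence of rank-2 tensors with a rank-3 limit, so the set of rank-2 tensors is not closed and the infimum over it need not be attained:

```latex
% With a_i, b_i linearly independent for each i, the rank-2 tensors
T_n = n \left(a_1 + \tfrac{1}{n} b_1\right) \otimes
        \left(a_2 + \tfrac{1}{n} b_2\right) \otimes
        \left(a_3 + \tfrac{1}{n} b_3\right)
      - n\, a_1 \otimes a_2 \otimes a_3
% converge, as n -> infinity, to the rank-3 tensor
\lim_{n \to \infty} T_n = b_1 \otimes a_2 \otimes a_3
    + a_1 \otimes b_2 \otimes a_3 + a_1 \otimes a_2 \otimes b_3 .
```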

On the Best Rank-1 Approximation of Higher-Order Supersymmetric Tensors

It is shown that a symmetric version of the above method converges under assumptions of convexity (or concavity) for the functional induced by the tensor in question, assumptions that are very often satisfied in practical applications.

Subtracting a best rank-1 approximation may increase tensor rank

  • A. Stegeman, P. Comon
  • Computer Science, Mathematics
    2009 17th European Signal Processing Conference
  • 2009

On the Best Rank-1 and Rank-(R1, R2, ..., RN) Approximation of Higher-Order Tensors

A multilinear generalization of the best rank-R approximation problem for matrices, namely, the approximation of a given higher-order tensor, in an optimal least-squares sense, by a tensor that has prespecified column rank, row rank, etc.

Local Convergence of the Alternating Least Squares Algorithm for Canonical Tensor Approximation

A local convergence theorem for calculating canonical low-rank tensor approximations (PARAFAC, CANDECOMP) by the alternating least squares algorithm is established. The main assumption is that the…

The projected gradient methods for least squares matrix approximations with spectral constraints

The problems of computing least squares approximations for various types of real and symmetric matrices subject to spectral constraints share a common structure. This paper describes a general…

Rank-One Approximation to High Order Tensors

The singular value decomposition has been extensively used in engineering and statistical applications; certain properties of this decomposition are investigated, as well as numerical algorithms.

Musings on multilinear fitting

Canonical Polyadic Decomposition with a Columnwise Orthonormal Factor Matrix

Orthogonality-constrained versions of the CPD methods based on simultaneous matrix diagonalization and alternating least squares are presented, and a simple proof is given of the existence of the optimal low-rank approximation of a tensor when a factor matrix is columnwise orthonormal.