On the Best Rank-1 Approximation of Higher-Order Supersymmetric Tensors

@article{Kofidis2001OnTB,
  title={On the Best Rank-1 Approximation of Higher-Order Supersymmetric Tensors},
  author={Eleftherios Kofidis and Phillip A. Regalia},
  journal={SIAM J. Matrix Anal. Appl.},
  year={2001},
  volume={23},
  pages={863-884}
}
Recently the problem of determining the best, in the least-squares sense, rank-1 approximation to a higher-order tensor was studied, and an iterative method extending the well-known power method for matrices was proposed for its solution. That higher-order power method was also proposed, unchanged, for the special but important class of supersymmetric tensors. A simplified version, adapted to the special structure of the supersymmetric problem, was deemed unreliable, as its convergence is…
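As a concrete illustration of the simplified symmetric iteration discussed above (not code from the paper), here is a minimal NumPy sketch of S-HOPM for a third-order supersymmetric tensor; the function name, random test tensor, and stopping rule are assumptions for the example.

```python
import numpy as np

def shopm(A, max_iter=200, tol=1e-10, seed=0):
    """Sketch of the symmetric higher-order power method (S-HOPM) for a
    third-order supersymmetric tensor A (n x n x n): iterate
    x <- A(., x, x) / ||A(., x, x)||. Convergence is not guaranteed in
    general, which is exactly the issue the paper analyzes."""
    x = np.random.default_rng(seed).standard_normal(A.shape[0])
    x /= np.linalg.norm(x)
    for _ in range(max_iter):
        y = np.einsum('ijk,j,k->i', A, x, x)      # contraction A(., x, x)
        y /= np.linalg.norm(y)
        if min(np.linalg.norm(y - x), np.linalg.norm(y + x)) < tol:
            x = y
            break
        x = y
    lam = np.einsum('ijk,i,j,k->', A, x, x, x)    # generalized Rayleigh quotient
    return lam, x                                 # lam * x⊗x⊗x approximates A

# Hypothetical test: symmetrize a random 4 x 4 x 4 tensor.
T = np.random.default_rng(1).standard_normal((4, 4, 4))
A = sum(np.transpose(T, p) for p in
        [(0,1,2),(0,2,1),(1,0,2),(1,2,0),(2,0,1),(2,1,0)]) / 6
print(shopm(A))
```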

On the Global Convergence of the High-Order Power Method for Rank-One Tensor Approximation

This paper presents a rigorous convergence analysis of the power method for the rank-one approximation of tensors beyond matrices and compares it with global optimization.

Properties and methods for finding the best rank-one approximation to higher-order tensors

This paper reformulates the polynomial optimization problem as a matrix programming problem, shows the equivalence between the two problems, and proves that there is no duality gap between the reformulation and its Lagrangian dual problem.
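For context, the polynomial optimization problem referred to here is maximizing the homogeneous form ⟨A, x⊗x⊗x⟩ over the unit sphere; at the optimum, the squared rank-1 residual equals ||A||² minus the squared maximum. A brute-force numerical check of that identity on a tiny symmetric tensor (the grid search and test tensor are illustrative assumptions, not the paper's matrix-programming reformulation):

```python
import numpy as np

# Hypothetical symmetric 3 x 3 x 3 test tensor.
T = np.random.default_rng(0).standard_normal((3, 3, 3))
A = sum(np.transpose(T, p) for p in
        [(0,1,2),(0,2,1),(1,0,2),(1,2,0),(2,0,1),(2,1,0)]) / 6

# Crude grid over the unit sphere in R^3 (spherical coordinates).
th, ph = np.meshgrid(np.linspace(0, np.pi, 181), np.linspace(0, 2*np.pi, 361))
X = np.stack([np.sin(th)*np.cos(ph),
              np.sin(th)*np.sin(ph),
              np.cos(th)], axis=-1).reshape(-1, 3)

vals = np.einsum('ijk,ti,tj,tk->t', A, X, X, X)   # f(x) = <A, x⊗x⊗x>
best = np.abs(vals).max()

# Identity: min over lam, |x|=1 of ||A - lam x⊗x⊗x||^2 = ||A||^2 - (max |f|)^2.
print(np.linalg.norm(A)**2 - best**2)
```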

Orthogonal Low Rank Tensor Approximation: Alternating Least Squares Method and Its Global Convergence

The conventional high-order power method is modified to enforce the desired orthogonality via the polar decomposition, and it is shown that for almost all tensors the orthogonal alternating least squares method converges globally.
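The polar-decomposition device can be made concrete: the closest matrix with orthonormal columns to an unconstrained least-squares update is its orthogonal polar factor, computable from a thin SVD. A generic sketch (not the authors' exact iteration):

```python
import numpy as np

def polar_orthonormal(M):
    """Orthogonal polar factor of M (n x r, n >= r): the closest matrix with
    orthonormal columns in Frobenius norm, via the thin SVD M = U S Vt."""
    U, _, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ Vt

# Example: project an unconstrained least-squares update onto the set of
# column-orthonormal matrices (hypothetical 5 x 2 update).
M = np.random.default_rng(0).standard_normal((5, 2))
Q = polar_orthonormal(M)
print(np.allclose(Q.T @ Q, np.eye(2)))
```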

On the Global Convergence of the Alternating Least Squares Method for Rank-One Approximation to Generic Tensors

This paper partially addresses the missing piece by showing that for almost all tensors, the iterates generated by the alternating least squares method for the rank-one approximation converge globally.
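For reference, the rank-one ALS iteration under discussion cycles through the modes, replacing each unit factor by the tensor contracted against the other factors; at rank one it coincides with the higher-order power method. A minimal sketch on a random third-order tensor (names and sizes are hypothetical):

```python
import numpy as np

def rank1_als(T, max_iter=500, tol=1e-12, seed=0):
    """Rank-one ALS / HOPM for a third-order tensor T: alternately set each
    unit factor to T contracted with the other two, then normalize."""
    rng = np.random.default_rng(seed)
    u, v, w = (rng.standard_normal(n) for n in T.shape)
    u, v, w = (x / np.linalg.norm(x) for x in (u, v, w))
    lam_old = 0.0
    for _ in range(max_iter):
        u = np.einsum('ijk,j,k->i', T, v, w); u /= np.linalg.norm(u)
        v = np.einsum('ijk,i,k->j', T, u, w); v /= np.linalg.norm(v)
        w = np.einsum('ijk,i,j->k', T, u, v); w /= np.linalg.norm(w)
        lam = np.einsum('ijk,i,j,k->', T, u, v, w)
        if abs(lam - lam_old) < tol:
            break
        lam_old = lam
    return lam, u, v, w

T = np.random.default_rng(1).standard_normal((3, 4, 5))
lam, u, v, w = rank1_als(T)
print(lam, np.linalg.norm(T - lam * np.einsum('i,j,k->ijk', u, v, w)))
```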

The Epsilon-Alternating Least Squares for Orthogonal Low-Rank Tensor Approximation and Its Global Convergence

  • Yuning Yang
  • Computer Science
    SIAM J. Matrix Anal. Appl.
  • 2020
The epsilon alternating least squares ($\epsilon$-ALS) is developed and analyzed for canonical polyadic decomposition (approximation) of a higher-order tensor where one or more of the factor matrices…

Best Nonnegative Rank-One Approximations of Tensors

A Positivstellensatz is given for this class of polynomial optimization problems, based on which a globally convergent hierarchy of doubly nonnegative (DNN) relaxations is proposed, and it is shown that this approach is quite promising.

A sequential subspace projection method for extreme Z-eigenvalues of supersymmetric tensors

The main idea of SSPM is to form a 2-dimensional subspace at the current point and then solve the original optimization problem within that subspace; a random-phase globalization strategy can be easily incorporated into SSPM, which promotes its ability to find extreme Z-eigenvalues.
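A hedged sketch of the two-dimensional subspace step follows; the specific subspace choice here, the span of the current point and the gradient direction, is an assumption for illustration, not necessarily the authors' exact construction.

```python
import numpy as np

def sspm_step(A, x, grid=2000):
    """One 2-D subspace step for f(x) = A(x,x,x) on the unit sphere
    (third-order supersymmetric A). Subspace = span{x, g}, g = A(., x, x).
    Maximizing finds large Z-eigenvalues; use argmin for the other extreme."""
    g = np.einsum('ijk,j,k->i', A, x, x)
    q = g - (g @ x) * x                 # Gram-Schmidt against x
    if np.linalg.norm(q) < 1e-14:
        return x                        # stationary: gradient parallel to x
    q /= np.linalg.norm(q)
    # Unit vectors in the plane: x(t) = cos(t) x + sin(t) q.
    ts = np.linspace(0, 2*np.pi, grid, endpoint=False)
    cands = np.cos(ts)[:, None] * x + np.sin(ts)[:, None] * q
    vals = np.einsum('ijk,ti,tj,tk->t', A, cands, cands, cands)
    return cands[np.argmax(vals)]

# Hypothetical usage: iterate steps from a random unit start.
rng = np.random.default_rng(0)
T = rng.standard_normal((5, 5, 5))
A = sum(np.transpose(T, p) for p in
        [(0,1,2),(0,2,1),(1,0,2),(1,2,0),(2,0,1),(2,1,0)]) / 6
x = rng.standard_normal(5); x /= np.linalg.norm(x)
for _ in range(50):
    x = sspm_step(A, x)
print(np.einsum('ijk,i,j,k->', A, x, x, x))   # approximate extreme Z-eigenvalue
```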

Orthogonal Rank-Two Tensor Approximation: A Modified High-Order Power Method and Its Convergence Analysis

The conventional high-order power method is modified to address the orthogonality constraint, and a rigorous convergence analysis is provided in this paper.

Convergence rate analysis for the higher order power method in best rank one approximations of tensors

It is established that the sequence generated by HOPM always converges globally and R-linearly for orthogonally decomposable tensors of order at least 3; for almost all tensors, all singular vector tuples are nondegenerate, so HOPM "typically" exhibits a global R-linear convergence rate.

A popular and classical method for finding the best rank-one approximation of a real tensor is the higher-order power method (HOPM). It is known in the literature that the iterative sequence…

References


On the Best Rank-1 and Rank-(R1, R2, ..., RN) Approximation of Higher-Order Tensors

A multilinear generalization of the best rank-R approximation problem for matrices is studied: the approximation of a given higher-order tensor, in an optimal least-squares sense, by a tensor with prespecified column rank, row rank, etc.
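The alternating scheme commonly associated with this rank-(R1, R2, ..., RN) problem, higher-order orthogonal iteration, can be sketched for third order as follows; dimensions, ranks, and names are hypothetical, and this is a generic sketch rather than the paper's exact algorithm.

```python
import numpy as np

def hooi(T, ranks, n_iter=50, seed=0):
    """Higher-order orthogonal iteration sketch for a third-order tensor:
    find column-orthonormal U1, U2, U3 maximizing the norm of the core."""
    rng = np.random.default_rng(seed)
    U = [np.linalg.qr(rng.standard_normal((n, r)))[0]
         for n, r in zip(T.shape, ranks)]
    for _ in range(n_iter):
        for m in range(3):
            # Contract T with the other two factors, then take leading
            # left singular vectors of the mode-m unfolding.
            W = T
            for k in range(3):
                if k != m:
                    W = np.moveaxis(np.tensordot(W, U[k], axes=([k], [0])), -1, k)
            Wm = np.moveaxis(W, m, 0).reshape(W.shape[m], -1)
            U[m] = np.linalg.svd(Wm, full_matrices=False)[0][:, :ranks[m]]
    core = T
    for k in range(3):
        core = np.moveaxis(np.tensordot(core, U[k], axes=([k], [0])), -1, k)
    return core, U

T = np.random.default_rng(1).standard_normal((6, 5, 4))
core, U = hooi(T, ranks=(2, 2, 2))
R = core
for k in range(3):                      # reconstruct: R = core x_k U_k
    R = np.moveaxis(np.tensordot(R, U[k].T, axes=([k], [0])), -1, k)
print(np.linalg.norm(T - R) / np.linalg.norm(T))
```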

The higher-order power method revisited: convergence proofs and effective initialization

  • P. Regalia, E. Kofidis
  • Computer Science, Mathematics
    2000 IEEE International Conference on Acoustics, Speech, and Signal Processing. Proceedings (Cat. No.00CH37100)
  • 2000
A simple convergence proof for the general nonsymmetric tensor case is established and it is shown that a symmetric version of the algorithm, offering an order of magnitude reduction in computational complexity but discarded by De Lathauwer et al. as unpredictable, is likewise provably convergent.

Tensor Methods in Statistics

This book provides a systematic development of tensor methods in statistics, beginning with the study of multivariate moments and cumulants. The effect on moment arrays and on cumulant arrays of…

Super-symmetric decomposition of the fourth-order cumulant tensor. Blind identification of more sources than sensors

  • J. Cardoso
  • Physics
    [Proceedings] ICASSP 91: 1991 International Conference on Acoustics, Speech, and Signal Processing
  • 1991
Ideas for higher-order array processing are introduced, focusing on fourth-order cumulant statistics, and it is shown that, when dealing with 4-index quantities, symmetries are related to rank properties.

A Multilinear Singular Value Decomposition

There is a strong analogy between several properties of the matrix and the higher-order tensor decomposition; uniqueness, link with the matrix eigenvalue decomposition, first-order perturbation effects, etc., are analyzed.
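The decomposition in question (the multilinear SVD, or HOSVD) computes, for each mode, the left singular vectors of the corresponding matrix unfolding, plus a core tensor by contraction. A minimal third-order sketch (shapes are hypothetical):

```python
import numpy as np

def hosvd(T):
    """Multilinear SVD of a third-order tensor: U_k = left singular vectors
    of the mode-k unfolding; core S = T contracted with each U_k transposed."""
    U = []
    for k in range(3):
        Tk = np.moveaxis(T, k, 0).reshape(T.shape[k], -1)   # mode-k unfolding
        U.append(np.linalg.svd(Tk, full_matrices=False)[0])
    S = T
    for k in range(3):                                      # S = T x_k U_k^T
        S = np.moveaxis(np.tensordot(S, U[k], axes=([k], [0])), -1, k)
    return S, U

T = np.random.default_rng(0).standard_normal((3, 4, 5))
S, U = hosvd(T)
R = S
for k in range(3):                                          # T = S x_k U_k
    R = np.moveaxis(np.tensordot(R, U[k].T, axes=([k], [0])), -1, k)
print(np.allclose(R, T))                                    # exact reconstruction
```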

A gradient search interpretation of the super-exponential algorithm

The principle of this algorithm (Hadamard exponentiation, projection onto the set of attainable combined channel-equalizer impulse responses, followed by normalization) is shown to coincide with a gradient search for an extremum of a cost function, which yields a simple proof of convergence for the super-exponential algorithm.
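The cycle just described can be sketched for a known FIR channel; using the explicit convolution matrix in the projection step is a simplifying assumption (the actual algorithm works from signal statistics), and the channel taps, equalizer length, and cubing nonlinearity (the real-signal case) are illustrative.

```python
import numpy as np

def conv_matrix(h, eq_len):
    """Convolution (Toeplitz) matrix C with C @ w = conv(h, w)."""
    L = len(h) + eq_len - 1
    C = np.zeros((L, eq_len))
    for j in range(eq_len):
        C[j:j + len(h), j] = h
    return C

def super_exponential(h, eq_len=8, n_iter=10):
    """Sketch of the super-exponential cycle on the combined response
    s = C @ w: Hadamard-cube s, project back onto the attainable set
    {C w} by least squares, then normalize."""
    C = conv_matrix(np.asarray(h, float), eq_len)
    w = np.zeros(eq_len); w[0] = 1.0               # trivial initial equalizer
    for _ in range(n_iter):
        s = C @ w
        target = s**3                              # Hadamard exponentiation
        w = np.linalg.lstsq(C, target, rcond=None)[0]  # projection step
        w /= np.linalg.norm(C @ w)                 # normalization
    return w, C @ w

w, s = super_exponential([1.0, 0.5, -0.2])         # hypothetical FIR channel
print(np.round(s, 3))                              # should approach a unit impulse
```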

Tensor Decompositions, State of the Art and Applications

A partial survey of the tools borrowed from tensor algebra, which have been utilized recently in Statistics and Signal Processing, shows why the decompositions well known in linear algebra can hardly be extended to tensors.

Super-exponential methods for blind deconvolution

  • O. Shalvi, E. Weinstein
  • Computer Science
    17th Convention of Electrical and Electronics Engineers in Israel
  • 1991
The authors present a class of iterative methods for solving the problem of blind deconvolution of an unknown, possibly nonminimum-phase linear system driven by an unobserved input process. The…

Orthogonal Tensor Decompositions

  • T. Kolda
  • Mathematics, Computer Science
    SIAM J. Matrix Anal. Appl.
  • 2001
The orthogonal decomposition of tensors (also known as multidimensional arrays or n-way arrays) is explored under two different definitions of orthogonality, with a counterexample to a tensor extension of the Eckart--Young SVD approximation theorem.