# On the Best Rank-1 Approximation of Higher-Order Supersymmetric Tensors

@article{Kofidis2001OnTB,
title={On the Best Rank-1 Approximation of Higher-Order Supersymmetric Tensors},
author={Eleftherios Kofidis and Phillip A. Regalia},
journal={SIAM J. Matrix Anal. Appl.},
year={2001},
volume={23},
pages={863-884}
}
• Published 1 March 2001
Recently, the problem of determining the best rank-1 approximation, in the least-squares sense, to a higher-order tensor was studied, and an iterative method extending the well-known power method for matrices was proposed for its solution. This higher-order power method is also applied, unchanged, to the special but important class of supersymmetric tensors. A simplified version, adapted to the special structure of the supersymmetric problem, had been deemed unreliable, as its convergence is…
373 Citations
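The symmetric iteration discussed in the abstract can be sketched as follows. This is a minimal illustration for a real supersymmetric third-order tensor, not the paper's exact formulation; the function name `shopm`, the random initialization, and the fixed iteration count are assumptions, and, as the paper itself investigates, plain iteration of this map is not guaranteed to converge for every tensor.

```python
import numpy as np

def shopm(T, iters=200, seed=0):
    """Symmetric higher-order power iteration (sketch) for a real
    supersymmetric third-order tensor T: repeat x <- T.x.x / ||T.x.x||,
    where (T.x.x)_i = sum_{j,k} T[i,j,k] * x[j] * x[k]."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(T.shape[0])
    x /= np.linalg.norm(x)
    for _ in range(iters):
        y = np.einsum('ijk,j,k->i', T, x, x)  # contract T with x twice
        x = y / np.linalg.norm(y)             # renormalize
    # Rayleigh-quotient-like scale; the rank-1 approximation is lam * x(x)(x)
    lam = np.einsum('ijk,i,j,k->', T, x, x, x)
    return lam, x
```

For a tensor that is exactly rank-1 and symmetric, the iterate locks onto the generating vector after a single sweep; the behavior the paper analyzes concerns general supersymmetric tensors, where monotone convergence is not automatic.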

## Citations

• Computer Science
• 2013
This paper presents a rigorous convergence analysis of the power method for the rank-one approximation of tensors beyond matrices and compares it with global optimization.
• Computer Science, Mathematics
Computational Optimization and Applications
• 2013
This paper reformulates the polynomial optimization problem as a matrix program, shows the equivalence between the two problems, and proves that there is no duality gap between the reformulation and its Lagrangian dual problem.
• Computer Science
SIAM J. Matrix Anal. Appl.
• 2015
The conventional higher-order power method is modified to enforce the desired orthogonality via the polar decomposition, and it is shown that for almost all tensors the orthogonal alternating least squares method converges globally.
• Computer Science
SIAM J. Matrix Anal. Appl.
• 2014
This paper partially addresses the missing piece by showing that for almost all tensors, the iterates generated by the alternating least squares method for the rank-one approximation converge globally.
• Yuning Yang
• Computer Science
SIAM J. Matrix Anal. Appl.
• 2020
The epsilon alternating least squares ($\epsilon$-ALS) is developed and analyzed for canonical polyadic decomposition (approximation) of a higher-order tensor where one or more of the factor matrices…
• Computer Science
SIAM J. Matrix Anal. Appl.
• 2019
A Positivstellensatz is given for this class of polynomial optimization problems, based on which a globally convergent hierarchy of doubly nonnegative (DNN) relaxations is proposed, and it is shown that this approach is quite promising.
• Mathematics
Numer. Linear Algebra Appl.
• 2015
The main idea of SSPM is to form a two-dimensional subspace at the current point and then solve the original optimization problem in that subspace; the globalization strategy of random phase can easily be incorporated into SSPM, which promotes the ability to find extreme Z-eigenvalues.
• Computer Science
• 2013
The conventional higher-order power method is modified to address orthogonality, and a rigorous convergence analysis is provided in this paper.
• Mathematics
Numerische Mathematik
• 2018
It is established that the sequence generated by HOPM always converges globally and R-linearly for orthogonally decomposable tensors of order at least 3, and that for almost all tensors all singular vector tuples are nondegenerate, so HOPM "typically" exhibits a global R-linear convergence rate.
• Mathematics
Numerische Mathematik
• 2018
A popular and classical method for finding the best rank-one approximation of a real tensor is the higher-order power method (HOPM). It is known in the literature that the iterative sequence…
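Several of the citing papers above analyze the alternating least squares (ALS) form of the higher-order power method for rank-1 approximation. As a hedged sketch (the function name, random initialization, and fixed iteration count are illustrative choices, not taken from any of these papers), one ALS sweep for a third-order tensor updates each factor in turn:

```python
import numpy as np

def hopm_rank1(T, iters=100, seed=0):
    """Higher-order power method / ALS sketch for the best rank-1
    approximation of a third-order tensor: T ~ sigma * u(x)v(x)w."""
    rng = np.random.default_rng(seed)
    n1, n2, n3 = T.shape
    u = rng.standard_normal(n1); u /= np.linalg.norm(u)
    v = rng.standard_normal(n2); v /= np.linalg.norm(v)
    w = rng.standard_normal(n3); w /= np.linalg.norm(w)
    for _ in range(iters):
        # Each update solves a least-squares subproblem with the other
        # two factors held fixed, then renormalizes.
        u = np.einsum('ijk,j,k->i', T, v, w); u /= np.linalg.norm(u)
        v = np.einsum('ijk,i,k->j', T, u, w); v /= np.linalg.norm(v)
        w = np.einsum('ijk,i,j->k', T, u, v); w /= np.linalg.norm(w)
    sigma = np.einsum('ijk,i,j,k->', T, u, v, w)
    return sigma, u, v, w
```

The convergence questions studied in the papers above concern exactly this kind of iteration: whether the factor sequence converges (and how fast) rather than merely the objective value.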

## References

Showing 1–10 of 25 references.

• Mathematics, Computer Science
SIAM J. Matrix Anal. Appl.
• 2000
A multilinear generalization of the best rank-R approximation problem for matrices, namely, the approximation of a given higher-order tensor, in an optimal least-squares sense, by a tensor that has a prespecified column rank value, row rank value, etc.
• Computer Science, Mathematics
2000 IEEE International Conference on Acoustics, Speech, and Signal Processing. Proceedings (Cat. No.00CH37100)
• 2000
A simple convergence proof for the general nonsymmetric tensor case is established and it is shown that a symmetric version of the algorithm, offering an order of magnitude reduction in computational complexity but discarded by De Lathauwer et al. as unpredictable, is likewise provably convergent.
This book provides a systematic development of tensor methods in statistics, beginning with the study of multivariate moments and cumulants. The effect on moment arrays and on cumulant arrays of…
• J. Cardoso
• Physics
[Proceedings] ICASSP 91: 1991 International Conference on Acoustics, Speech, and Signal Processing
• 1991
Ideas for higher-order array processing are introduced, focusing on fourth-order cumulant statistics, and it is shown that, when dealing with 4-index quantities, symmetries are related to rank properties.
• Mathematics
SIAM J. Matrix Anal. Appl.
• 2000
There is a strong analogy between several properties of the matrix and the higher-order tensor decomposition; uniqueness, link with the matrix eigenvalue decomposition, first-order perturbation effects, etc., are analyzed.
• Mathematics
IEEE Trans. Inf. Theory
• 2000
The principle of this algorithm (Hadamard exponentiation, projection onto the set of attainable combined channel–equalizer impulse responses, followed by a normalization) is shown to coincide with a gradient search for an extremum of a cost function, which gives a simple convergence proof for the super-exponential algorithm.
A partial survey of the tools borrowed from tensor algebra, which have been utilized recently in Statistics and Signal Processing, shows why the decompositions well known in linear algebra can hardly be extended to tensors.
• Computer Science
17th Convention of Electrical and Electronics Engineers in Israel
• 1991
The authors present a class of iterative methods for solving the problem of blind deconvolution of an unknown, possibly nonminimum-phase linear system driven by an unobserved input process…
• T. Kolda
• Mathematics, Computer Science
SIAM J. Matrix Anal. Appl.
• 2001
The orthogonal decomposition of tensors (also known as multidimensional arrays or n-way arrays) is explored under two different definitions of orthogonality, and a counterexample to a tensor extension of the Eckart–Young SVD approximation theorem is presented.