# The Epsilon-Alternating Least Squares for Orthogonal Low-Rank Tensor Approximation and Its Global Convergence

```
@article{Yang2019TheEL,
  title={The Epsilon-Alternating Least Squares for Orthogonal Low-Rank Tensor Approximation and Its Global Convergence},
  author={Yuning Yang},
  journal={ArXiv},
  year={2019},
  volume={abs/1911.10921}
}
```
• Yuning Yang
• Published 25 November 2019
• Computer Science
• ArXiv
The epsilon alternating least squares ($\epsilon$-ALS) is developed and analyzed for canonical polyadic decomposition (approximation) of a higher-order tensor where one or more of the factor matrices are assumed to be columnwisely orthonormal. It is shown that the algorithm globally converges to a KKT point for all tensors without any assumption. For the original ALS, by further studying the properties of the polar decomposition, we also establish its global convergence under a reality…
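The core subproblem behind the orthonormality constraint is an orthogonal Procrustes problem: the update of a columnwise-orthonormal factor is the orthogonal polar factor of the unconstrained least-squares matrix, computable from a thin SVD. A minimal NumPy sketch of that single step (the matrix `M` stands in for the contracted tensor data; the function name is illustrative, not from the paper):

```python
import numpy as np

def polar_orthonormal_factor(M):
    """Return the closest columnwise-orthonormal matrix to M in the
    Frobenius norm: the orthogonal polar factor, via a thin SVD."""
    U, _, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ Vt

# Toy check: the polar factor has orthonormal columns.
rng = np.random.default_rng(0)
M = rng.standard_normal((6, 3))
Q = polar_orthonormal_factor(M)
print(np.allclose(Q.T @ Q, np.eye(3)))  # True
```

This is the same polar-decomposition device studied in the convergence analysis of the original ALS; the ε in ε-ALS refers to a regularization of the iteration, not to a change in this factor update.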
## 9 Citations

• Mathematics, Computer Science
Mathematical Programming
• 2022
An improved version, iAPD, of the classical APD is proposed, which exhibits overall sublinear convergence with an explicit rate sharper than the usual $O(1/k)$ for first-order methods in optimization.
• Yuning Yang
• Computer Science
Journal of Optimization Theory and Applications
• 2022
The presented results fill a gap left in Yang (SIAM J Matrix Anal Appl 41:1797–1825, 2020), where the approximation bound of that approximation algorithm was established when there is only one orthonormal factor.
• Ke Ye
• Mathematics, Computer Science
• 2019
An improved version, iAPD, of the classical alternating polar decomposition (APD) method is proposed, which exhibits overall sublinear convergence with an explicit rate sharper than the usual $O(1/k)$ for first-order methods in optimization.
• Chao Zeng
• Computer Science, Mathematics
Journal of Scientific Computing
• 2022
This work presents several properties of orthogonal rank, which differ from those of tensor rank in many aspects, and proposes an algorithm based on the augmented Lagrangian method that has a great advantage over existing methods for strongly orthogonal decompositions in terms of the approximation error.
• Mathematics, Computer Science
ArXiv
• 2021
This paper studies gradient-based Jacobi-type algorithms to maximize two classes of homogeneous polynomials with orthogonality constraints and establishes their convergence properties; it further proposes the Jacobi-GP and Jacobi-MGP algorithms and establishes their global convergence without any further condition.
• Mathematics
Computational and Applied Mathematics
• 2021
A piezoelectric-type tensor is of order three and symmetric with respect to its last two indices. The largest C-eigenvalue of a piezoelectric-type tensor determines the highest piezoelectric…
• Computer Science
ArXiv
• 2022
It is proved that generating polynomials give a quasi-optimal low-rank tensor approximation if the given tensor is sufficiently close to a low-rank one.
• Computer Science, Mathematics
• 2023
This paper derives a robust orthogonal tensor CPD model with Cauchy loss, which is resistant to heavy-tailed noise such as Cauchy noise or outliers, and develops the so-called half-quadratic alternating direction method of multipliers (HQ-ADMM) to solve the model.
• Computer Science, Mathematics
ArXiv
• 2019
It turns out that well-known algorithms are all special cases of this general algorithmic framework and its symmetric variant, and the convergence results subsume the results found in the literature designed for those special cases.

## References

Showing 1–10 of 34 references.

• Mathematics
SIAM J. Matrix Anal. Appl.
• 2002
It is shown that a symmetric version of the above method converges under assumptions of convexity (or concavity) for the functional induced by the tensor in question, assumptions that are very often satisfied in practical applications.
• Computer Science
SIAM J. Matrix Anal. Appl.
• 2015
The conventional high-order power method is modified to address the desirable orthogonality via the polar decomposition and it is shown that for almost all tensors the orthogonal alternating least squares method converges globally.
• Mathematics, Computer Science
SIAM J. Matrix Anal. Appl.
• 2018
This paper considers a family of Jacobi-type algorithms for the simultaneous orthogonal diagonalization problem of symmetric tensors, proposes a new Jacobi-based algorithm in the general setting, and proves its global convergence for sufficiently smooth functions.
• Mathematics
SIAM J. Matrix Anal. Appl.
• 2012
Orthogonality-constrained versions of the CPD methods based on simultaneous matrix diagonalization and alternating least squares are presented and a simple proof of the existence of the optimal low-rank approximation of a tensor in the case that a factor matrix is columnwise orthonormal is given.
• Computer Science
SIAM J. Matrix Anal. Appl.
• 2011
A shifted symmetric higher-order power method (SS-HOPM), which it is shown is guaranteed to converge to a tensor eigenpair, and a fixed point analysis is used to characterize exactly which eigenpairs can and cannot be found by the method.
Applications of the polar decomposition to factor analysis, aerospace computations and optimisation are outlined; and a new method is derived for computing the square root of a symmetric positive definite matrix.
• Computer Science, Mathematics
SIAM J. Sci. Comput.
• 2010
BFGS and limited-memory BFGS updates in local and global coordinates on Grassmannians, or on a product of these, are defined, and it is proved that, when local coordinates are used, the BFGS updates on Grassmannians share the same optimality property as the usual BFGS updates on Euclidean spaces.
• Mathematics, Computer Science
SIAM J. Matrix Anal. Appl.
• 2000
A multilinear generalization of the best rank-R approximation problem for matrices, namely, the approximation of a given higher-order tensor, in an optimal least-squares sense, by a tensor of prespecified column rank, row rank, etc.
This hierarchical SVD has properties like the matrix SVD (and collapses to the SVD in $d=2$), and it is proved that one can find low-rank (almost) best approximations in a hierarchical format ($\mathcal{H}$-Tucker) which requires only $\mathcal{O}((d-1)k^3+dnk)$ parameters.
• Mathematics, Computer Science
Math. Program.
• 2014
A self-contained convergence analysis framework is derived and it is established that each bounded sequence generated by PALM globally converges to a critical point.