# The Epsilon-Alternating Least Squares for Orthogonal Low-Rank Tensor Approximation and Its Global Convergence

```bibtex
@article{Yang2019TheEL,
  title   = {The Epsilon-Alternating Least Squares for Orthogonal Low-Rank Tensor Approximation and Its Global Convergence},
  author  = {Yuning Yang},
  journal = {ArXiv},
  year    = {2019},
  volume  = {abs/1911.10921}
}
```

The epsilon alternating least squares ($\epsilon$-ALS) algorithm is developed and analyzed for the canonical polyadic decomposition (approximation) of a higher-order tensor in which one or more of the factor matrices are assumed to be columnwise orthonormal. It is shown that the algorithm globally converges to a KKT point for all tensors, without any assumption. For the original ALS, by further studying the properties of the polar decomposition, we also establish its global convergence under a reality…
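As a rough numerical illustration (not the paper's exact algorithm), the sketch below shows one $\epsilon$-ALS-style update of the orthonormal factor in a rank-$R$ CPD $\mathcal{T} \approx \sum_r a_r \circ b_r \circ c_r$: the unfolded tensor is contracted with the other factors via a Khatri–Rao product, an $\epsilon$-multiple of the previous iterate is added to keep the matrix full rank, and the polar factor is taken. The names `eps_als_sweep` and `polar_orthonormal`, and the update details, are illustrative assumptions.

```python
import numpy as np

def polar_orthonormal(M):
    # Orthonormal polar factor of M via its SVD: M = U S V^T -> Q = U V^T.
    U, _, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ Vt

def eps_als_sweep(T, A, B, C, eps=1e-3):
    """One sketched sweep of an epsilon-ALS-style update for a 3rd-order
    rank-R CPD, with A kept columnwise orthonormal (illustrative only)."""
    R = A.shape[1]
    # Khatri-Rao product of B and C, matching the mode-1 unfolding of T.
    KR = np.einsum('jr,kr->jkr', B, C).reshape(-1, R)
    # eps * previous A keeps M full column rank, so the polar factor is unique.
    M = T.reshape(T.shape[0], -1) @ KR + eps * A
    A = polar_orthonormal(M)
    # The unconstrained factors B, C would be updated by ordinary
    # least-squares ALS steps (omitted in this sketch).
    return A
```

The key point the sketch conveys is that the orthonormality constraint is maintained exactly at every iteration by the polar decomposition, while the $\epsilon$ term rules out rank-deficient intermediate matrices.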

## 9 Citations

### Linear convergence of an alternating polar decomposition method for low rank orthogonal tensor approximations

- Mathematics, Computer Science
- Mathematical Programming
- 2022

An improved version of the classical alternating polar decomposition (APD) method, iAPD, is proposed; it exhibits overall sublinear convergence with an explicit rate sharper than the usual $O(1/k)$ for first-order methods in optimization.

### On Approximation Algorithm for Orthogonal Low-Rank Tensor Approximation

- Computer Science
- Journal of Optimization Theory and Applications
- 2022

The presented results fill a gap left in Yang (SIAM J Matrix Anal Appl 41:1797–1825, 2020), where the approximation bound of that approximation algorithm was established when there is only one orthonormal factor.

### Linear Convergence of an Alternating Polar Decomposition Method for Low Rank Orthogonal Tensor Approximations (arXiv preprint)

- Mathematics, Computer Science
- 2019

An improved version of the classical alternating polar decomposition method, iAPD, is proposed; it exhibits overall sublinear convergence with an explicit rate sharper than the usual $O(1/k)$ for first-order methods in optimization.

### Rank Properties and Computational Methods for Orthogonal Tensor Decompositions

- Computer Science, Mathematics
- Journal of Scientific Computing
- 2022

This work presents several properties of orthogonal rank, which differ from those of tensor rank in many aspects, and proposes an algorithm based on the augmented Lagrangian method that has a great advantage over existing methods for strongly orthogonal decompositions in terms of the approximation error.

### Jacobi-type algorithms for homogeneous polynomial optimization on Stiefel manifolds with applications to tensor approximations

- Mathematics, Computer Science
- ArXiv
- 2021

This paper studies gradient-based Jacobi-type algorithms for maximizing two classes of homogeneous polynomials with orthogonality constraints, establishes their convergence properties, and proposes the Jacobi-GP and Jacobi-MGP algorithms, whose global convergence is established without any further condition.

### Shifted eigenvalue decomposition method for computing C-eigenvalues of a piezoelectric-type tensor

- Mathematics
- Computational and Applied Mathematics
- 2021

A piezoelectric-type tensor is an order-three tensor that is symmetric with respect to its last two indices. The largest C-eigenvalue of a piezoelectric-type tensor determines the highest piezoelectric…

### Low Rank Tensor Decompositions and Approximations

- Computer Science
- ArXiv
- 2022

It is proved that generating polynomials give a quasi-optimal low-rank tensor approximation if the given tensor is sufficiently close to a low-rank one.

### Half-quadratic alternating direction method of multipliers for robust orthogonal tensor approximation

- Computer Science, Mathematics
- Advances in Computational Mathematics
- 2023

This paper derives a robust orthogonal tensor CPD model with Cauchy loss, which is resistant to heavy-tailed noise (such as Cauchy noise) and outliers, and develops the so-called half-quadratic alternating direction method of multipliers (HQ-ADMM) to solve the model.

### Polar decomposition based algorithms on the product of Stiefel manifolds with applications in tensor approximation

- Computer Science, Mathematics
- ArXiv
- 2019

It turns out that well-known algorithms are all special cases of this general algorithmic framework and its symmetric variant, and the convergence results subsume the results found in the literature designed for those special cases.

## References

Showing 1–10 of 34 references

### On the Best Rank-1 Approximation of Higher-Order Supersymmetric Tensors

- Mathematics
- SIAM J. Matrix Anal. Appl.
- 2002

It is shown that a symmetric version of the above method converges under assumptions of convexity (or concavity) for the functional induced by the tensor in question, assumptions that are very often satisfied in practical applications.

### Orthogonal Low Rank Tensor Approximation: Alternating Least Squares Method and Its Global Convergence

- Computer Science
- SIAM J. Matrix Anal. Appl.
- 2015

The conventional high-order power method is modified to address the desirable orthogonality via the polar decomposition and it is shown that for almost all tensors the orthogonal alternating least squares method converges globally.

### Globally convergent Jacobi-type algorithms for simultaneous orthogonal symmetric tensor diagonalization

- Mathematics, Computer Science
- SIAM J. Matrix Anal. Appl.
- 2018

This paper considers a family of Jacobi-type algorithms for the simultaneous orthogonal diagonalization problem of symmetric tensors, proposes a new Jacobi-based algorithm in the general setting, and proves its global convergence for sufficiently smooth functions.

### Canonical Polyadic Decomposition with a Columnwise Orthonormal Factor Matrix

- Mathematics
- SIAM J. Matrix Anal. Appl.
- 2012

Orthogonality-constrained versions of the CPD methods based on simultaneous matrix diagonalization and alternating least squares are presented and a simple proof of the existence of the optimal low-rank approximation of a tensor in the case that a factor matrix is columnwise orthonormal is given.

### Shifted Power Method for Computing Tensor Eigenpairs

- Computer Science
- SIAM J. Matrix Anal. Appl.
- 2011

A shifted symmetric higher-order power method (SS-HOPM) is presented, which is shown to be guaranteed to converge to a tensor eigenpair, and a fixed point analysis is used to characterize exactly which eigenpairs can and cannot be found by the method.
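The core SS-HOPM iteration can be sketched for a symmetric third-order tensor: repeatedly contract the tensor twice with the current vector, add a shift $\alpha x$ (a sufficiently large positive shift makes the underlying function convex on the sphere, which is what guarantees convergence), and renormalize. The function name `ss_hopm`, the shift value, and the iteration count below are illustrative choices, not the paper's reference implementation.

```python
import numpy as np

def ss_hopm(T, x0, alpha=2.0, iters=500):
    """Shifted symmetric higher-order power iteration for a symmetric
    3rd-order tensor, under the convention T x^2 = lambda * x, ||x|| = 1.
    Returns an (eigenvalue, eigenvector) pair. Illustrative sketch."""
    x = np.asarray(x0, dtype=float)
    x /= np.linalg.norm(x)
    for _ in range(iters):
        Tx2 = np.einsum('ijk,j,k->i', T, x, x)  # contract T twice with x
        y = Tx2 + alpha * x                     # positive shift stabilizes
        x = y / np.linalg.norm(y)
    lam = x @ np.einsum('ijk,j,k->i', T, x, x)  # Rayleigh-type quotient
    return lam, x
```

For the rank-1 symmetric tensor $\mathcal{T} = v \otimes v \otimes v$ with unit $v$, the pair $(\lambda, x) = (1, v)$ is an eigenpair, and the iteration recovers it from any start with positive overlap with $v$.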

### Computing the polar decomposition with applications

- Mathematics, Computer Science
- 1986

Applications of the polar decomposition to factor analysis, aerospace computations and optimisation are outlined; and a new method is derived for computing the square root of a symmetric positive definite matrix.
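Both ingredients of that reference can be sketched directly from the SVD: the polar decomposition $A = QP$ with $Q$ column-orthonormal and $P$ symmetric positive semidefinite, and the principal square root of a symmetric positive definite matrix via its eigendecomposition. These are textbook constructions, not the specific (Newton-type) algorithms of the paper; the helper names are illustrative.

```python
import numpy as np

def polar_decomposition(A):
    # A = Q @ P from the SVD A = U diag(s) V^T:
    # Q = U V^T (orthonormal columns), P = V diag(s) V^T (symmetric PSD).
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    Q = U @ Vt
    P = Vt.T @ np.diag(s) @ Vt
    return Q, P

def spd_sqrt(S):
    # Principal square root of a symmetric positive definite matrix
    # via its eigendecomposition S = V diag(w) V^T.
    w, V = np.linalg.eigh(S)
    return V @ np.diag(np.sqrt(w)) @ V.T
```

Note that $Q$ is exactly the orthonormal matrix nearest to $A$ in Frobenius norm, which is why the polar factor appears in the orthogonality-constrained ALS updates discussed above.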

### Quasi-Newton Methods on Grassmannians and Multilinear Approximations of Tensors

- Computer Science, Mathematics
- SIAM J. Sci. Comput.
- 2010

BFGS and limited-memory BFGS updates in local and global coordinates on Grassmannians, or a product of these, are defined, and it is proved that, when local coordinates are used, the BFGS updates on Grassmannians share the same optimality property as the usual BFGS updates on Euclidean spaces.

### On the Best Rank-1 and Rank-(R1, R2, ..., RN) Approximation of Higher-Order Tensors

- Mathematics, Computer Science
- SIAM J. Matrix Anal. Appl.
- 2000

A multilinear generalization of the best rank-$R$ approximation problem for matrices, namely, the approximation of a given higher-order tensor, in an optimal least-squares sense, by a tensor with prespecified column rank, row rank, etc.
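The standard starting point for this rank-$(R_1, \ldots, R_N)$ problem is the truncated higher-order SVD (HOSVD): for each mode, take the leading left singular vectors of the mode-$n$ unfolding, then form the core by projecting the tensor onto those bases. The sketch below (function name and interface are illustrative) implements that construction; it is quasi-optimal rather than the best approximation, which iterative methods such as HOOI refine.

```python
import numpy as np

def hosvd(T, ranks):
    """Truncated higher-order SVD of tensor T to multilinear rank `ranks`.
    Returns (core, factors) with T ~ core x_1 U1 x_2 U2 ... (sketch)."""
    factors = []
    for n, r in enumerate(ranks):
        # Mode-n unfolding: bring axis n to the front, flatten the rest.
        Tn = np.moveaxis(T, n, 0).reshape(T.shape[n], -1)
        U, _, _ = np.linalg.svd(Tn, full_matrices=False)
        factors.append(U[:, :r])            # leading left singular vectors
    core = T
    for n, U in enumerate(factors):
        # Contract mode n of the core with U^T, then restore axis order.
        core = np.moveaxis(np.tensordot(U.T, np.moveaxis(core, n, 0), axes=1), 0, n)
    return core, factors
```

When the input tensor has exact multilinear rank at most $(R_1, \ldots, R_N)$, the truncated HOSVD reconstructs it exactly, since each factor basis then spans the full mode-$n$ range.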

### Hierarchical Singular Value Decomposition of Tensors

- Computer Science
- SIAM J. Matrix Anal. Appl.
- 2010

This hierarchical SVD has properties like the matrix SVD (and collapses to the SVD for $d=2$), and it is proved that one can find (almost) best low-rank approximations in a hierarchical format ($\mathcal{H}$-Tucker) that requires only $\mathcal{O}((d-1)k^3+dnk)$ parameters.

### Proximal alternating linearized minimization for nonconvex and nonsmooth problems

- Mathematics, Computer Science
- Math. Program.
- 2014

A self-contained convergence analysis framework is derived and it is established that each bounded sequence generated by PALM globally converges to a critical point.