The Epsilon-Alternating Least Squares for Orthogonal Low-Rank Tensor Approximation and Its Global Convergence

  • Yuning Yang
  • Published 25 November 2019
  • Computer Science
  • ArXiv
The epsilon alternating least squares ($\epsilon$-ALS) is developed and analyzed for canonical polyadic decomposition (approximation) of a higher-order tensor where one or more of the factor matrices are assumed to be columnwise orthonormal. It is shown that the algorithm globally converges to a KKT point for all tensors without any assumption. For the original ALS, by further studying the properties of the polar decomposition, we also establish its global convergence under a reality…
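
The abstract sketches an ALS scheme in which the orthonormal factor is updated through a polar decomposition. Below is a minimal numpy sketch of one such sweep for a third-order tensor; the function names and the way the $\epsilon$ term enters (a small proximal pull toward the previous iterate before the polar step) are illustrative assumptions in the spirit of the abstract, not the paper's exact update rule.

    import numpy as np

    def unfold(T, mode):
        """Mode-n unfolding: rows indexed by `mode`, C-order over the rest."""
        return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

    def khatri_rao(U, V):
        """Columnwise Kronecker product of U (J x R) and V (K x R)."""
        J, R = U.shape
        K = V.shape[0]
        return (U[:, None, :] * V[None, :, :]).reshape(J * K, R)

    def polar_orth(M):
        """Orthogonal polar factor of M, computed from its thin SVD."""
        W, _, Zt = np.linalg.svd(M, full_matrices=False)
        return W @ Zt

    def eps_als_sweep(T, A, B, C, eps=1e-8):
        """One ALS-type sweep; C is kept columnwise orthonormal."""
        # A- and B-updates: ordinary least squares via the normal equations,
        # using T_(0) ~= A (B kr C)^T and T_(1) ~= B (A kr C)^T.
        A = unfold(T, 0) @ khatri_rao(B, C) @ np.linalg.pinv((B.T @ B) * (C.T @ C))
        B = unfold(T, 1) @ khatri_rao(A, C) @ np.linalg.pinv((A.T @ A) * (C.T @ C))
        # C-update: nearest columnwise-orthonormal matrix, with an epsilon
        # pull toward the previous iterate (an assumption here) so the polar
        # factor stays well defined when the data matrix is rank deficient.
        C = polar_orth(unfold(T, 2) @ khatri_rao(A, B) + eps * C)
        return A, B, C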

Linear convergence of an alternating polar decomposition method for low rank orthogonal tensor approximations

An improved version, iAPD, of the classical APD is proposed, which exhibits overall sublinear convergence with an explicit rate sharper than the usual $O(1/k)$ for first-order methods in optimization.

On Approximation Algorithm for Orthogonal Low-Rank Tensor Approximation

  • Yuning Yang
  • Computer Science
    Journal of Optimization Theory and Applications
  • 2022
The presented results fill a gap left in Yang (SIAM J Matrix Anal Appl 41:1797–1825, 2020), where the approximation bound of that approximation algorithm was established when there is only one orthonormal factor.


Rank Properties and Computational Methods for Orthogonal Tensor Decompositions

  • Chao Zeng
  • Computer Science, Mathematics
    Journal of Scientific Computing
  • 2022
This work presents several properties of orthogonal rank, which differ from those of tensor rank in many aspects, and proposes an algorithm based on the augmented Lagrangian method that has a great advantage over existing methods for strongly orthogonal decompositions in terms of the approximation error.

Jacobi-type algorithms for homogeneous polynomial optimization on Stiefel manifolds with applications to tensor approximations

This paper studies gradient-based Jacobi-type algorithms to maximize two classes of homogeneous polynomials with orthogonality constraints and establishes their convergence properties; it also proposes the Jacobi-GP and Jacobi-MGP algorithms and establishes their global convergence without any further condition.

Shifted eigenvalue decomposition method for computing C-eigenvalues of a piezoelectric-type tensor

A piezoelectric-type tensor is an order-three tensor that is symmetric with respect to its last two indices. The largest C-eigenvalue of a piezoelectric-type tensor determines the highest piezoelectric…

Low Rank Tensor Decompositions and Approximations

It is proved that the generating polynomial method gives a quasi-optimal low-rank tensor approximation if the given tensor is sufficiently close to a low-rank one.

Half-quadratic alternating direction method of multipliers for robust orthogonal tensor approximation

This paper derives a robust orthogonal tensor CPD model with Cauchy loss, which is resistant to heavy-tailed noise such as Cauchy noise and to outliers, and develops the so-called half-quadratic alternating direction method of multipliers (HQ-ADMM) to solve the model.

Polar decomposition based algorithms on the product of Stiefel manifolds with applications in tensor approximation

It turns out that several well-known algorithms are all special cases of this general algorithmic framework and its symmetric variant, and the convergence results subsume those found in the literature for these special cases.



On the Best Rank-1 Approximation of Higher-Order Supersymmetric Tensors

It is shown that a symmetric version of the above method converges under assumptions of convexity (or concavity) for the functional induced by the tensor in question, assumptions that are very often satisfied in practical applications.

Orthogonal Low Rank Tensor Approximation: Alternating Least Squares Method and Its Global Convergence

The conventional higher-order power method is modified to enforce the desired orthogonality via the polar decomposition, and it is shown that for almost all tensors the orthogonal alternating least squares method converges globally.

Globally convergent Jacobi-type algorithms for simultaneous orthogonal symmetric tensor diagonalization

This paper considers a family of Jacobi-type algorithms for the simultaneous orthogonal diagonalization problem of symmetric tensors, proposes a new Jacobi-based algorithm in the general setting, and proves its global convergence for sufficiently smooth functions.
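
These Jacobi-type tensor algorithms generalize the classical Jacobi eigenvalue iteration for a single symmetric matrix. As a point of reference, here is a minimal sketch of the matrix case (one cyclic sweep); the tensor variants replace the closed-form rotation angle below with the maximizer of the objective restricted to each rotation plane.

    import numpy as np

    def jacobi_sweep(A, Q):
        """One cyclic Jacobi sweep for a symmetric matrix A; Q accumulates
        the orthogonal similarity so that Q^T A_in Q = A_out."""
        n = A.shape[0]
        for p in range(n - 1):
            for q in range(p + 1, n):
                if abs(A[p, q]) < 1e-15:
                    continue
                # Rotation angle that zeros A[p, q] (classical Jacobi rule).
                theta = 0.5 * np.arctan2(2 * A[p, q], A[p, p] - A[q, q])
                c, s = np.cos(theta), np.sin(theta)
                G = np.eye(n)
                G[p, p] = c; G[q, q] = c
                G[p, q] = -s; G[q, p] = s
                A = G.T @ A @ G   # plane rotation applied as a similarity
                Q = Q @ G
        return A, Q

Repeated sweeps drive A toward a diagonal matrix of eigenvalues, with the accumulated rotations in Q (initialized to the identity).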

Canonical Polyadic Decomposition with a Columnwise Orthonormal Factor Matrix

Orthogonality-constrained versions of the CPD methods based on simultaneous matrix diagonalization and alternating least squares are presented, and a simple proof is given of the existence of the optimal low-rank approximation of a tensor in the case that a factor matrix is columnwise orthonormal.

Shifted Power Method for Computing Tensor Eigenpairs

A shifted symmetric higher-order power method (SS-HOPM) is presented and shown to be guaranteed to converge to a tensor eigenpair, and a fixed-point analysis is used to characterize exactly which eigenpairs can and cannot be found by the method.
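
For concreteness, a minimal numpy sketch of the SS-HOPM iterate for a symmetric third-order tensor, $x \leftarrow (T x^{2} + \alpha x)/\|T x^{2} + \alpha x\|$; taking $\alpha = 0$ recovers the unshifted symmetric power method. The stopping rule and default shift are assumptions for illustration.

    import numpy as np

    def ss_hopm(T, x0, alpha=1.0, iters=500, tol=1e-10):
        """Shifted symmetric higher-order power method for symmetric T
        of order 3; returns an (eigenvalue, eigenvector) pair."""
        x = x0 / np.linalg.norm(x0)
        for _ in range(iters):
            Txx = np.einsum('ijk,j,k->i', T, x, x)  # (T x^{m-1}) for m = 3
            x_new = Txx + alpha * x                 # positive shift: convex case
            x_new /= np.linalg.norm(x_new)
            if np.linalg.norm(x_new - x) < tol:
                x = x_new
                break
            x = x_new
        lam = x @ np.einsum('ijk,j,k->i', T, x, x)  # eigenvalue = T x^m
        return lam, x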

Computing the polar decomposition with applications

Applications of the polar decomposition to factor analysis, aerospace computations, and optimisation are outlined, and a new method is derived for computing the square root of a symmetric positive definite matrix.
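
Two standard routes to the polar decomposition $A = UH$ appear throughout this literature: reading it off the SVD, and Higham's Newton iteration $X \leftarrow (X + X^{-T})/2$, which converges quadratically to the orthogonal factor for square nonsingular $A$. A minimal sketch of both, not production code:

    import numpy as np

    def polar_svd(A):
        """Polar decomposition from the SVD: A = W S Z^T gives
        U = W Z^T (orthogonal factor) and H = Z S Z^T (symmetric PSD)."""
        W, s, Zt = np.linalg.svd(A, full_matrices=False)
        return W @ Zt, Zt.T @ np.diag(s) @ Zt

    def polar_newton(A, iters=50, tol=1e-12):
        """Higham's Newton iteration for the orthogonal polar factor."""
        X = A.copy()
        for _ in range(iters):
            X_new = 0.5 * (X + np.linalg.inv(X).T)
            if np.linalg.norm(X_new - X) <= tol * np.linalg.norm(X):
                return X_new
            X = X_new
        return X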

Quasi-Newton Methods on Grassmannians and Multilinear Approximations of Tensors

BFGS and limited-memory BFGS updates in local and global coordinates on Grassmannians, or on a product of these, are defined, and it is proved that, when local coordinates are used, the BFGS updates on Grassmannians share the same optimality property as the usual BFGS updates on Euclidean spaces.

On the Best Rank-1 and Rank-(R1, R2, ..., RN) Approximation of Higher-Order Tensors

This paper studies a multilinear generalization of the best rank-$R$ approximation problem for matrices, namely the approximation of a given higher-order tensor, in an optimal least-squares sense, by a tensor with prespecified column rank value, row rank value, etc.
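
This best rank-(R1, R2, R3) problem is commonly attacked by higher-order orthogonal iteration (HOOI), an alternating scheme in the spirit of this paper. A minimal numpy sketch for a third-order tensor, with a truncated-HOSVD initialization; the fixed sweep count is an illustrative simplification.

    import numpy as np

    def unfold(T, mode):
        return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

    def multiply_mode(T, M, mode):
        """Mode-n product: contract axis `mode` of T with the rows of M."""
        return np.moveaxis(np.tensordot(M, T, axes=(1, mode)), 0, mode)

    def hooi(T, ranks, sweeps=20):
        """Best rank-(R1, R2, R3) Tucker approximation of a 3rd-order T."""
        # Truncated-HOSVD initialization: leading left singular vectors
        # of each unfolding.
        U = [np.linalg.svd(unfold(T, n), full_matrices=False)[0][:, :ranks[n]]
             for n in range(3)]
        for _ in range(sweeps):
            for n in range(3):
                # Project T onto the other two subspaces, then refresh U[n].
                Y = T
                for m in range(3):
                    if m != n:
                        Y = multiply_mode(Y, U[m].T, m)
                U[n] = np.linalg.svd(unfold(Y, n),
                                     full_matrices=False)[0][:, :ranks[n]]
        # Core tensor; the approximation is G x_1 U[0] x_2 U[1] x_3 U[2].
        G = T
        for m in range(3):
            G = multiply_mode(G, U[m].T, m)
        return G, U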

Hierarchical Singular Value Decomposition of Tensors

This hierarchical SVD has properties like the matrix SVD (and collapses to the SVD for $d=2$), and it is proved that one can find low-rank (almost) best approximations in a hierarchical format ($\mathcal{H}$-Tucker) which requires only $\mathcal{O}((d-1)k^3+dnk)$ parameters.

Proximal alternating linearized minimization for nonconvex and nonsmooth problems

A self-contained convergence analysis framework is derived and it is established that each bounded sequence generated by PALM globally converges to a critical point.
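
As a concrete instance of PALM, here is a minimal sketch on a model nonconvex problem, nonnegative matrix factorization $\min \|M - XY\|_F^2$ with $X, Y \ge 0$: each block takes a gradient step with step size set by its blockwise Lipschitz constant, followed by the proximal map of the constraint (here, projection onto the nonnegative orthant). The problem choice and parameters are illustrative assumptions, not the paper's setting.

    import numpy as np

    def palm_nmf(M, r, sweeps=200, gamma=1.1, seed=0):
        """PALM iterations for min ||M - X Y||_F^2 s.t. X >= 0, Y >= 0."""
        rng = np.random.default_rng(seed)
        m, n = M.shape
        X = rng.random((m, r))
        Y = rng.random((r, n))
        for _ in range(sweeps):
            # Block X: grad is 2 (X Y - M) Y^T, Lipschitz 2 ||Y Y^T||_2.
            c = gamma * 2 * np.linalg.norm(Y @ Y.T, 2) + 1e-12
            X = np.maximum(X - (2 / c) * (X @ Y - M) @ Y.T, 0.0)
            # Block Y: grad is 2 X^T (X Y - M), Lipschitz 2 ||X^T X||_2.
            d = gamma * 2 * np.linalg.norm(X.T @ X, 2) + 1e-12
            Y = np.maximum(Y - (2 / d) * X.T @ (X @ Y - M), 0.0)
        return X, Y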