Linear convergence of an alternating polar decomposition method for low rank orthogonal tensor approximations

  • Shenglong Hu, Ke Ye
  • Published 9 December 2019
  • Mathematics, Computer Science
  • Mathematical Programming
Low rank orthogonal tensor approximation (LROTA) is an important problem in tensor computations and their applications. A classical and widely used algorithm is the alternating polar decomposition method (APD). In this article, an improved version of the classical APD, called iAPD, is proposed. For the first time, all of the following four fundamental properties are established for iAPD: (i) the algorithm converges globally and the whole sequence converges to a KKT point without any assumption; (ii) it… 
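The polar decomposition step at the heart of APD-type methods replaces a factor matrix by its orthogonal polar factor, the nearest matrix with orthonormal columns. A minimal NumPy sketch of that single step (the matrix and function name are illustrative, not the authors' code):

```python
import numpy as np

def polar_factor(A):
    """Orthogonal polar factor of A: the closest matrix with
    orthonormal columns, obtained from the thin SVD A = U S Vt."""
    U, _, Vt = np.linalg.svd(A, full_matrices=False)
    return U @ Vt

# Illustrative check: the polar factor of a full-rank matrix
# has orthonormal columns.
rng = np.random.default_rng(0)
A = rng.standard_normal((5, 3))
Q = polar_factor(A)
print(np.allclose(Q.T @ Q, np.eye(3)))  # True
```

In an alternating scheme, each factor is updated in turn by applying this map to the appropriate tensor contraction while the other factors are held fixed.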

Half-Quadratic Alternating Direction Method of Multipliers for Robust Orthogonal Tensor Approximation

This paper derives a robust orthogonal tensor CPD model with Cauchy loss, which is resistant to heavy-tailed noise and outliers, and shows that the whole sequence generated by the algorithm converges globally to a stationary point of the problem under consideration.

Polar decomposition based algorithms on the product of Stiefel manifolds with applications in tensor approximation

It turns out that well-known algorithms are all special cases of this general algorithmic framework and its symmetric variant, and the convergence results subsume the results found in the literature designed for those special cases.

On Approximation Algorithm for Orthogonal Low-Rank Tensor Approximation

  • Yuning Yang
  • Computer Science
    Journal of Optimization Theory and Applications
  • 2022
The presented results fill a gap left in Yang (SIAM J Matrix Anal Appl 41:1797–1825, 2020), where the approximation bound of that approximation algorithm was established when there is only one orthonormal factor.

Jacobi-type algorithms for homogeneous polynomial optimization on Stiefel manifolds with applications to tensor approximations

This paper studies gradient-based Jacobi-type algorithms for maximizing two classes of homogeneous polynomials with orthogonality constraints and establishes their convergence properties; it proposes the Jacobi-GP and Jacobi-MGP algorithms and establishes their global convergence without any further condition.

Recovering orthogonal tensors under arbitrarily strong, but locally correlated, noise

The problem of recovering an orthogonally decomposable tensor with a subset of elements distorted by noise with arbitrarily large magnitude can be solved through a system of coupled Sylvester-like equations and how to accelerate their solution by an alternating solver is shown.

Nondegeneracy of eigenvectors and singular vector tuples of tensors

  • Shenglong Hu
  • Computer Science, Physics
    Science China Mathematics
  • 2022
The main results are: (i) each (Z-)eigenvector/singular vector tuple of a generic tensor is nondegenerate, and (ii) each nonzero Z-eigenvector/singular vector tuple of an orthogonally decomposable tensor is nondegenerate.

The Epsilon-Alternating Least Squares for Orthogonal Low-Rank Tensor Approximation and Its Global Convergence

  • Yuning Yang
  • Computer Science
    SIAM J. Matrix Anal. Appl.
  • 2020
The epsilon alternating least squares ($\epsilon$-ALS) is developed and analyzed for canonical polyadic decomposition (approximation) of a higher-order tensor where one or more of the factor matrices are required to have orthonormal columns.

Orthogonal Low Rank Tensor Approximation: Alternating Least Squares Method and Its Global Convergence

The conventional high-order power method is modified to address the desirable orthogonality via the polar decomposition and it is shown that for almost all tensors the orthogonal alternating least squares method converges globally.

On the Global Convergence of the Alternating Least Squares Method for Rank-One Approximation to Generic Tensors

This paper partially addresses the missing piece by showing that for almost all tensors, the iterates generated by the alternating least squares method for the rank-one approximation converge globally.

Jacobi-type algorithm for low rank orthogonal approximation of symmetric tensors and its convergence analysis

A Jacobi-type algorithm to solve the low rank orthogonal approximation problem of symmetric tensors is proposed, and it is proved that an accumulation point is the unique limit point under some conditions.

Convergence rate analysis for the higher order power method in best rank one approximations of tensors

It is established that the sequence generated by HOPM always converges globally and R-linearly for orthogonally decomposable tensors of order at least 3; for almost all tensors, all singular vector tuples are nondegenerate, and so HOPM "typically" exhibits a global R-linear convergence rate.
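The higher-order power method (HOPM) discussed above cyclically updates each unit vector by contracting the tensor along the other modes and normalizing. A minimal sketch for a third-order tensor (the rank-one test tensor and all names are illustrative):

```python
import numpy as np

def hopm(T, iters=100):
    """Higher-order power method for the best rank-one approximation
    of a 3rd-order tensor T: update each unit vector in turn by
    contracting T against the other two vectors, then normalizing."""
    x = np.full(T.shape[0], 1.0); x /= np.linalg.norm(x)
    y = np.full(T.shape[1], 1.0); y /= np.linalg.norm(y)
    z = np.full(T.shape[2], 1.0); z /= np.linalg.norm(z)
    for _ in range(iters):
        x = np.einsum('ijk,j,k->i', T, y, z); x /= np.linalg.norm(x)
        y = np.einsum('ijk,i,k->j', T, x, z); y /= np.linalg.norm(y)
        z = np.einsum('ijk,i,j->k', T, x, y); z /= np.linalg.norm(z)
    sigma = np.einsum('ijk,i,j,k->', T, x, y, z)
    return sigma, x, y, z

# On a rank-one tensor the iteration recovers the factors exactly.
a, b, c = np.eye(4)[0], np.eye(5)[1], np.eye(3)[2]
T = 2.0 * np.einsum('i,j,k->ijk', a, b, c)
sigma, x, y, z = hopm(T)
print(round(sigma, 6))  # 2.0
```

For orthogonally decomposable tensors the cited result guarantees this iteration converges R-linearly; in practice one would also add a stopping criterion on the change in sigma.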

On the Tensor SVD and the Optimal Low Rank Orthogonal Approximation of Tensors

The existence of an optimal approximation is theoretically guaranteed under certain conditions, and this optimal approximation yields a tensor decomposition where the diagonal of the core is maximized.

Globally convergent Jacobi-type algorithms for simultaneous orthogonal symmetric tensor diagonalization

This paper considers a family of Jacobi-type algorithms for the simultaneous orthogonal diagonalization problem of symmetric tensors, proposes a new Jacobi-based algorithm in the general setting, and proves its global convergence for sufficiently smooth functions.

Canonical Polyadic Decomposition with a Columnwise Orthonormal Factor Matrix

Orthogonality-constrained versions of the CPD methods based on simultaneous matrix diagonalization and alternating least squares are presented and a simple proof of the existence of the optimal low-rank approximation of a tensor in the case that a factor matrix is columnwise orthonormal is given.

Local Convergence of the Alternating Least Squares Algorithm for Canonical Tensor Approximation

A local convergence theorem for calculating canonical low-rank tensor approximations (PARAFAC, CANDECOMP) by the alternating least squares algorithm is established. The main assumption is that the…

Tensor Rank and the Ill-Posedness of the Best Low-Rank Approximation Problem

It is argued that the naive approach to this problem is doomed to failure because, unlike matrices, tensors of order 3 or higher can fail to have best rank-r approximations, and a natural way of overcoming the ill-posedness of the low-rank approximation problem is proposed by using weak solutions when true solutions do not exist.