Local convergence of alternating low-rank optimization methods with overrelaxation

by I. Oseledets, Maxim V. Rakhuba and André Uschmajew
The local convergence of alternating optimization methods with overrelaxation for low-rank matrix and tensor problems is established. The analysis is based on the linearization of the method, which takes the form of an SOR iteration for a positive semidefinite Hessian and can be studied in the corresponding quotient geometry of equivalent low-rank representations. In the matrix case, the optimal relaxation parameter for accelerating the local convergence can be determined from the convergence…
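The SOR iteration that appears in this linearization can be illustrated on an explicit symmetric positive definite system. The following is a minimal sketch, not the paper's method: the function name `sor`, the stopping rule, and the test matrix are illustrative assumptions, and the relaxation parameter `omega` plays the role of the overrelaxation parameter discussed in the abstract.

```python
import numpy as np

def sor(A, b, omega, x0=None, tol=1e-10, max_iter=10_000):
    """Successive overrelaxation (SOR) for A @ x = b with A symmetric
    positive definite. omega = 1 recovers plain Gauss-Seidel;
    1 < omega < 2 overrelaxes each coordinate update."""
    n = len(b)
    x = np.zeros(n) if x0 is None else x0.astype(float).copy()
    for _ in range(max_iter):
        x_old = x.copy()
        for i in range(n):
            # Gauss-Seidel residual: already-updated entries below i,
            # old entries above i.
            sigma = A[i, :i] @ x[:i] + A[i, i + 1:] @ x_old[i + 1:]
            gs_update = (b[i] - sigma) / A[i, i]
            # Overrelaxed step: blend old value with Gauss-Seidel value.
            x[i] = (1 - omega) * x_old[i] + omega * gs_update
        if np.linalg.norm(x - x_old) < tol:
            break
    return x
```

For a well-conditioned SPD matrix, a moderate `omega` slightly above 1 typically converges in fewer sweeps than `omega = 1`, which is the acceleration effect the paper quantifies for the low-rank setting.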

Solution of Linear Systems and Matrix Inversion in the TT-Format

The goal is to provide a “black-box” solver for linear systems in which both the matrix and the right-hand side are given in the TT-format. An efficient DMRG (density matrix renormalization group) method is proposed, and several tricks are employed to make it work.

Variants of Alternating Least Squares Tensor Completion in the Tensor Train Format

An alternating least squares (ALS) fit is compared to an overrelaxation scheme inspired by the LMaFit method for matrix completion; both approaches aim at finding a tensor $A$ that fulfills the first-order optimality conditions via a nonlinear Gauss–Seidel-type solver.
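The nonlinear Gauss–Seidel character of ALS is easiest to see in the plain matrix case: with one factor fixed, each half-step reduces to a linear least-squares solve. The sketch below is an illustrative rank-`r` matrix factorization, not the tensor-train completion algorithm of the cited paper; the function name `als_lowrank` and the initialization are assumptions.

```python
import numpy as np

def als_lowrank(X, r, n_iter=20, seed=0):
    """Fit X ~ U @ V.T by alternating least squares.
    Each half-step fixes one factor and solves a linear
    least-squares problem for the other (a block
    nonlinear Gauss-Seidel sweep on the two factor blocks)."""
    m, n = X.shape
    rng = np.random.default_rng(seed)
    V = rng.standard_normal((n, r))
    U = np.zeros((m, r))
    for _ in range(n_iter):
        # Fix V, solve min_U ||X - U @ V.T||_F:
        # V @ U.T = X.T is a linear least-squares problem in U.
        U = np.linalg.lstsq(V, X.T, rcond=None)[0].T
        # Fix U, solve min_V ||X - U @ V.T||_F.
        V = np.linalg.lstsq(U, X, rcond=None)[0].T
    return U, V
```

On data that is exactly rank `r`, this sweep reproduces `X` after very few iterations; the LMaFit-style overrelaxation mentioned above modifies each half-step to extrapolate beyond the plain least-squares update.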

Iterative Solution of Large Sparse Systems of Equations

In the second edition of this classic monograph, complete with four new chapters and updated references, readers now have access to content describing and analysing classical and modern methods for the iterative solution of large sparse systems of equations.

Aspects of Nonlinear Block Successive Overrelaxation

In this paper we are concerned with the solution of the nonlinear system $Fx = 0$ by the nonlinear block Gauss–Seidel (NBGS) and the nonlinear block successive overrelaxation (NBSOR) methods. If $F$ is…

Nonlinear Difference Equations and Gauss-Seidel Type Iterative Methods

Asymptotic rate of convergence of Gauss-Seidel type iterative processes for nonlinear difference equations

Alternating Least Squares as Moving Subspace Correction

This work provides an alternative and conceptually simple derivation of the asymptotic convergence rate of the two-sided block power method of numerical linear algebra for computing the dominant singular subspaces of a rectangular matrix.


Analogous methods have been used in practice, with apparent success, on nonlinear problems as well. For the most part, these have not been justified mathematically, and this work is an attempt to fill…