A Riemannian Trust Region Method for the Canonical Tensor Rank Approximation Problem

@article{Breiding2018ART,
  title={A Riemannian Trust Region Method for the Canonical Tensor Rank Approximation Problem},
  author={Paul Breiding and Nick Vannieuwenhoven},
  journal={SIAM J. Optim.},
  year={2018},
  volume={28},
  pages={2435--2465}
}
The canonical tensor rank approximation problem (TAP) consists of approximating a real-valued tensor by one of low canonical rank, which is a challenging non-linear, non-convex, constrained optimization problem, where the constraint set forms a non-smooth semi-algebraic set. We introduce a Riemannian Gauss-Newton method with trust region for solving small-scale, dense TAPs. The novelty of our approach is threefold. First, we parametrize the constraint set as the Cartesian product of Segre… 
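The paper's Riemannian Gauss-Newton trust-region method is involved; as a rough point of reference, the classical alternating least squares (ALS) baseline for the same canonical-rank approximation problem fits in a few lines of NumPy. This is an illustrative sketch only, not the paper's method; all function names, shapes, and iteration counts here are assumptions.

```python
import numpy as np

def khatri_rao(A, B):
    """Column-wise Kronecker product of A (I x R) and B (J x R) -> (I*J x R)."""
    I, R = A.shape
    J, _ = B.shape
    return (A[:, None, :] * B[None, :, :]).reshape(I * J, R)

def cp_als(T, R, iters=1000, seed=0):
    """Approximate T (I x J x K) by a sum of R rank-1 terms a_r o b_r o c_r
    via alternating least squares (a baseline, not the paper's method)."""
    rng = np.random.default_rng(seed)
    I, J, K = T.shape
    A = rng.standard_normal((I, R))
    B = rng.standard_normal((J, R))
    C = rng.standard_normal((K, R))
    T1 = T.reshape(I, J * K)                     # mode-1 unfolding
    T2 = np.moveaxis(T, 1, 0).reshape(J, I * K)  # mode-2 unfolding
    T3 = np.moveaxis(T, 2, 0).reshape(K, I * J)  # mode-3 unfolding
    for _ in range(iters):
        # each step is a linear least-squares solve in one factor
        A = T1 @ np.linalg.pinv(khatri_rao(B, C).T)
        B = T2 @ np.linalg.pinv(khatri_rao(A, C).T)
        C = T3 @ np.linalg.pinv(khatri_rao(A, B).T)
    return A, B, C
```

ALS is cheap per iteration but can converge slowly ("swamps"), which is part of the motivation for second-order Riemannian methods like the one in the paper.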
A Riemannian Newton Optimization Framework for the Symmetric Tensor Rank Approximation Problem
TLDR
An explicit and exact formula is presented for the gradient vector and the Hessian matrix of the method, in terms of the weights and points of the low rank approximation and the symmetric tensor to approximate, by exploiting the properties of the apolar product.
New Riemannian preconditioned algorithms for tensor completion via polyadic decomposition
TLDR
It is proved that the proposed Riemannian gradient descent algorithm globally converges to a stationary point of the tensor completion problem, with convergence rate estimates using the Łojasiewicz property.
Computation over Tensor Stiefel Manifold
TLDR
Formulas of various retractions based on t-QR, t-polar decomposition, Cayley transform, and t-exponential, as well as vector transports, are presented and may provide basic building blocks for designing and analyzing Riemannian algorithms.
Geometric Methods on Low-Rank Matrix and Tensor Manifolds
TLDR
This chapter presents numerical methods for low-rank matrix and tensor problems that explicitly make use of the geometry of rank-constrained matrix and tensor spaces, and discusses several numerical integrators that rely in an essential way on geometric properties characteristic of sets of low-rank matrices and tensors.
Towards a condition number theorem for the tensor rank decomposition
We show that a natural weighted distance from a tensor rank decomposition to the locus of ill-posed decompositions (i.e., decompositions with unbounded geometric condition number, derived in [P.
A Trust-Region Method For Nonsmooth Nonconvex Optimization
TLDR
This work proves local quadratic convergence for partly smooth functions under a strict complementary condition and establishes fast local convergence under suitable assumptions using a connection with a smooth Riemannian trust-region method.
The Condition Number of Join Decompositions
TLDR
This paper examines the numerical sensitivity of join decompositions to perturbations and proves that this condition number can be computed efficiently as the smallest singular value of an auxiliary matrix.
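The summary above reduces computing this condition number to a smallest-singular-value computation. As a hedged illustration of just that final step (the paper's auxiliary matrix itself is not reproduced here; `U` below is a stand-in):

```python
import numpy as np

def smallest_singular_value(U):
    """Smallest singular value of U. Per the summary above, the condition
    number of a join decomposition is obtained from such a quantity; U is
    a placeholder for the paper's auxiliary matrix, not its construction."""
    return np.linalg.svd(U, compute_uv=False).min()
```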
On the average condition number of tensor rank decompositions
We compute the expected value of powers of the geometric condition number of random tensor rank decompositions. It is shown in particular that the expected value of the condition number of
…

References (showing 1-10 of 72)
Best Low Multilinear Rank Approximation of Higher-Order Tensors, Based on the Riemannian Trust-Region Scheme
TLDR
This paper proposes a new iterative algorithm based on the Riemannian trust-region scheme, using the truncated conjugate-gradient method for solving the trust-region subproblem; it compares this new method with the well-known higher-order orthogonal iteration method and discusses the advantages over Newton-type methods.
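The "truncated conjugate-gradient method for solving the trust-region subproblem" mentioned here is commonly the Steihaug-Toint variant. A minimal Euclidean sketch under illustrative names follows (the Riemannian version would replace `H @ d` by a Hessian action on the tangent space); this is an assumption-laden illustration, not code from either paper:

```python
import numpy as np

def steihaug_cg(H, g, delta, tol=1e-8, max_iter=50):
    """Approximately minimize m(p) = g.p + 0.5 p.H.p subject to ||p|| <= delta
    by conjugate gradients, truncated at the trust-region boundary or at
    negative curvature (Steihaug-Toint)."""
    p = np.zeros_like(g)
    r = g.copy()   # gradient of the model at p
    d = -r
    if np.linalg.norm(r) < tol:
        return p
    for _ in range(max_iter):
        Hd = H @ d
        dHd = d @ Hd
        if dHd <= 0:
            # negative curvature: step to the boundary along d
            return p + _boundary_step(p, d, delta) * d
        alpha = (r @ r) / dHd
        p_next = p + alpha * d
        if np.linalg.norm(p_next) >= delta:
            # CG iterate left the trust region: truncate at the boundary
            return p + _boundary_step(p, d, delta) * d
        r_next = r + alpha * Hd
        if np.linalg.norm(r_next) < tol:
            return p_next
        beta = (r_next @ r_next) / (r @ r)
        d = -r_next + beta * d
        p, r = p_next, r_next
    return p

def _boundary_step(p, d, delta):
    """Nonnegative tau solving ||p + tau d|| = delta."""
    a = d @ d
    b = 2.0 * (p @ d)
    c = p @ p - delta**2
    return (-b + np.sqrt(b * b - 4.0 * a * c)) / (2.0 * a)
```

Truncating at the boundary is what makes the subproblem solver cheap: no eigendecomposition of H is ever formed, only Hessian-vector products.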
Condition numbers for the tensor rank decomposition
Tensor Rank and the Ill-Posedness of the Best Low-Rank Approximation Problem
TLDR
It is argued that the naive approach to this problem is doomed to failure because, unlike matrices, tensors of order 3 or higher can fail to have best rank-r approximations, and a natural way of overcoming the ill-posedness of the low-rank approximation problem is proposed by using weak solutions when true solutions do not exist.
Riemannian Optimization for High-Dimensional Tensor Completion
TLDR
A nonlinear conjugate gradient scheme within the framework of Riemannian optimization which exploits this favorable scaling to obtain competitive reconstructions from uniform random sampling of few entries compared to adaptive sampling techniques such as cross-approximation.
A Nonlinear GMRES Optimization Algorithm for Canonical Tensor Decomposition
TLDR
Numerical tests show that ALS accelerated by N-GMRES may significantly outperform both stand-alone ALS and a standard nonlinear conjugate gradient optimization method, especially when highly accurate stationary points are desired for difficult problems.
Low-rank tensor completion by Riemannian optimization
TLDR
A new algorithm is proposed that performs Riemannian optimization techniques on the manifold of tensors of fixed multilinear rank with particular attention to efficient implementation, which scales linearly in the size of the tensor.
The geometry of algorithms using hierarchical tensors
Effective Criteria for Specific Identifiability of Tensors and Forms
TLDR
It is proved that this criterion for symmetric identifiability is effective for both real and complex tensors in its entire range of applicability, which is usually much smaller than the smallest typical rank.
…