On the Nuclear Norm and the Singular Value Decomposition of Tensors

@article{Derksen2016OnTN,
  title={On the Nuclear Norm and the Singular Value Decomposition of Tensors},
  author={Harm Derksen},
  journal={Foundations of Computational Mathematics},
  year={2016},
  volume={16},
  pages={779-811}
}
  • H. Derksen
  • Published 18 August 2013
  • Mathematics, Computer Science
  • Foundations of Computational Mathematics
Finding the rank of a tensor is a problem that has many applications. Unfortunately, it is often very difficult to determine the rank of a given tensor. Inspired by the heuristics of convex relaxation, we consider the nuclear norm instead of the rank of a tensor. We determine the nuclear norm of various tensors of interest. Along the way, we also conduct a systematic study of various measures of orthogonality in tensor product spaces, and we give a new generalization of the singular value decomposition… 
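
For orientation, the two quantities the abstract contrasts can be written down directly; the notation below is ours, not taken verbatim from the paper, but the definitions are the standard ones.

```latex
% Rank and nuclear norm of a tensor T in V_1 ⊗ ... ⊗ V_d (standard definitions, our notation):
\operatorname{rank}(T) \;=\; \min\Big\{ r \;:\; T=\sum_{i=1}^{r} v_i^{(1)}\otimes\cdots\otimes v_i^{(d)} \Big\},
\qquad
\|T\|_{*} \;=\; \min\Big\{ \sum_{i=1}^{r} |\lambda_i| \;:\; T=\sum_{i=1}^{r} \lambda_i\, v_i^{(1)}\otimes\cdots\otimes v_i^{(d)},\ \|v_i^{(j)}\|=1 \Big\}.
```

For matrices (d = 2) the nuclear norm is the sum of the singular values and is the usual convex surrogate for rank, which is what motivates studying it for higher-order tensors.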

Nuclear norm of higher-order tensors

TLDR
An analogue of Banach's theorem for the tensor spectral norm and of Comon's conjecture for tensor rank is established: for a symmetric tensor, the symmetric nuclear norm always equals the nuclear norm.
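
In our notation, the equality being summarized compares two norms on symmetric tensors, where the symmetric nuclear norm allows only symmetric rank-one terms:

```latex
% Symmetric nuclear norm of a symmetric tensor T (our notation):
\|T\|_{*,\mathrm{sym}} \;=\; \min\Big\{ \sum_i |\lambda_i| \;:\; T=\sum_i \lambda_i\, v_i^{\otimes d},\ \|v_i\|=1 \Big\},
\qquad\text{and the cited result states}\qquad
\|T\|_{*,\mathrm{sym}} \;=\; \|T\|_{*}.
```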

Symmetric Tensor Nuclear Norms

  • Jiawang Nie
  • Computer Science
    SIAM J. Appl. Algebra Geom.
  • 2017
TLDR
This paper discusses how to compute symmetric tensor nuclear norms, depending on the tensor order and the ground field, and proposes methods that can be extended to nonsymmetric tensors.

Bounds on the Spectral Norm and the Nuclear Norm of a Tensor Based on Tensor Partitions

  • Zhening Li
  • Computer Science, Mathematics
    SIAM J. Matrix Anal. Appl.
  • 2016
TLDR
When a tensor is partitioned into its matrix slices, the inequalities provide polynomial-time worst-case approximation bounds for computing the spectral norm and the nuclear norm of the tensor.
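
The simplest instance of such bounds, written in our notation for a third-order tensor T with matrix slices T_1, ..., T_p, is elementary; the paper's contribution is the general partition-based version with quantified worst-case approximation ratios.

```latex
% Elementary slice bounds (our notation); \|\cdot\|_\sigma is the spectral norm, \|\cdot\|_* the nuclear norm:
\max_k \|T_k\|_{\sigma} \;\le\; \|T\|_{\sigma} \;\le\; \Big(\sum_k \|T_k\|_{\sigma}^{2}\Big)^{1/2},
\qquad
\max_k \|T_k\|_{*} \;\le\; \|T\|_{*} \;\le\; \sum_k \|T_k\|_{*}.
```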

Algebraic Methods for Tensor Data

TLDR
Numerical experiments are presented whose results show that the performance of the alternating least squares algorithm for the low-rank approximation of tensors can be improved using tensor amplification.
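
For readers unfamiliar with the baseline being accelerated, here is a minimal CP alternating-least-squares sketch in NumPy. It implements only the plain ALS iteration, not the tensor-amplification step proposed in the paper, and all names and problem sizes are our own.

```python
# Plain CP-ALS for a dense 3-way array X and target rank r (a sketch, not the paper's method).
import numpy as np

def cp_als(X, r, n_iters=100, seed=0):
    rng = np.random.default_rng(seed)
    I, J, K = X.shape
    A = rng.standard_normal((I, r))
    B = rng.standard_normal((J, r))
    C = rng.standard_normal((K, r))
    for _ in range(n_iters):
        # Each factor update solves a linear least-squares problem
        # with the other two factors held fixed.
        A = np.einsum('ijk,jr,kr->ir', X, B, C) @ np.linalg.pinv((B.T @ B) * (C.T @ C))
        B = np.einsum('ijk,ir,kr->jr', X, A, C) @ np.linalg.pinv((A.T @ A) * (C.T @ C))
        C = np.einsum('ijk,ir,jr->kr', X, A, B) @ np.linalg.pinv((A.T @ A) * (B.T @ B))
    return A, B, C

# Usage: reconstruct and report the relative approximation error.
X = np.random.rand(8, 9, 10)
A, B, C = cp_als(X, r=3)
X_hat = np.einsum('ir,jr,kr->ijk', A, B, C)
print(np.linalg.norm(X - X_hat) / np.linalg.norm(X))
```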

Rank properties and computational methods for orthogonal tensor decompositions

  • Chao Zeng
  • Mathematics, Computer Science
    ArXiv
  • 2021
TLDR
This work proposes an algorithm based on the augmented Lagrangian method, guarantees orthogonality through a novel orthogonalization procedure, and shows that the proposed method has a great advantage over the existing methods for strongly orthogonal decompositions in terms of the approximation error.

On the tensor spectral p-norm and its dual norm via partitions

TLDR
A generalization of the spectral norm and the nuclear norm of a tensor via arbitrary tensor partitions, a much richer concept than block tensors, is presented.

Completely positive tensor recovery with minimal nuclear value

TLDR
The CP-nuclear value of a completely positive (CP) tensor and its properties are introduced, and a semidefinite relaxation algorithm is proposed for solving the minimal CP-nuclear-value tensor recovery problem.

On norm compression inequalities for partitioned block tensors

TLDR
It is proved that for the tensor spectral norm, the norm of the compressed tensor is an upper bound of the norm of the original tensor, and this result can be extended to a general class of tensor spectral norms.

Approximate Low-Rank Tensor Learning

TLDR
This work establishes a formal optimization guarantee for a general low-rank tensor learning formulation by combining a simple approximation algorithm for the tensor spectral norm with the recent generalized conditional gradient.

Near-optimal sample complexity for noisy or 1-bit tensor completion

TLDR
It is proved that when r = O(1), optimal sample complexity can be achieved by constraining either of two proxies for tensor rank, the convex M-norm or the non-convex max-qnorm, and it is shown how the 1-bit measurement model can be used for context-aware recommender systems.

References

SHOWING 1-10 OF 57 REFERENCES

A Multilinear Singular Value Decomposition

TLDR
There is a strong analogy between several properties of the matrix SVD and the higher-order tensor decomposition; uniqueness, the link with the matrix eigenvalue decomposition, first-order perturbation effects, etc., are analyzed.
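
A minimal sketch of the higher-order SVD for a dense third-order array, assuming the standard construction via SVDs of the mode-n unfoldings; the function and variable names are ours.

```python
# Higher-order SVD (HOSVD) sketch for a dense 3-way NumPy array.
import numpy as np

def unfold(X, mode):
    """Mode-n unfolding: the chosen mode becomes the rows."""
    return np.moveaxis(X, mode, 0).reshape(X.shape[mode], -1)

def hosvd(X):
    # Factor matrices: left singular vectors of each mode-n unfolding.
    U = [np.linalg.svd(unfold(X, n), full_matrices=False)[0] for n in range(X.ndim)]
    # Core tensor: multiply X by U_n^T along every mode.
    S = X
    for n, Un in enumerate(U):
        S = np.moveaxis(np.tensordot(Un.T, S, axes=(1, n)), 0, n)
    return S, U

# The decomposition is exact: X equals the core multiplied back by every factor.
X = np.random.rand(4, 5, 6)
S, U = hosvd(X)
X_rec = S
for n, Un in enumerate(U):
    X_rec = np.moveaxis(np.tensordot(Un, X_rec, axes=(1, n)), 0, n)
print(np.allclose(X, X_rec))
```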

On the Ranks and Border Ranks of Symmetric Tensors

TLDR
Improved lower bounds for the rank of a symmetric tensor are provided by considering the singularities of the hypersurface defined by the polynomial.

Tensor completion and low-n-rank tensor recovery via convex optimization

TLDR
This paper uses the n-rank of a tensor as a sparsity measure and considers the low-n-rank tensor recovery problem, i.e. the problem of finding the tensor of lowest n-rank that fulfills some linear constraints.
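
Concretely, the n-rank is the tuple of ranks of the mode-n unfoldings, and the convex surrogate replaces each of those ranks by the matrix nuclear norm of the corresponding unfolding. A small NumPy sketch of both quantities, with names of our own choosing (the recovery problem itself would minimize the surrogate subject to the linear measurements):

```python
# n-rank and its convex surrogate (sum of nuclear norms of the unfoldings).
import numpy as np

def unfold(X, mode):
    return np.moveaxis(X, mode, 0).reshape(X.shape[mode], -1)

def n_rank(X):
    return tuple(np.linalg.matrix_rank(unfold(X, n)) for n in range(X.ndim))

def sum_nuclear_norms(X):
    return sum(np.linalg.norm(unfold(X, n), 'nuc') for n in range(X.ndim))

# A rank-1 example: every unfolding has rank 1.
X = np.einsum('i,j,k->ijk', np.arange(1., 4.), np.arange(1., 5.), np.arange(1., 6.))
print(n_rank(X), sum_nuclear_norms(X))
```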

Most Tensor Problems Are NP-Hard

TLDR
It is proved that multilinear (tensor) analogues of many efficiently computable problems in numerical linear algebra are NP-hard, and that computing the combinatorial hyperdeterminant is NP-, #P-, and VNP-hard.

Symmetric Tensors and Symmetric Tensor Rank

TLDR
The notion of the generic symmetric rank is discussed, which, due to the work of Alexander and Hirschowitz, is now known for any values of dimension and order.
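
For reference, the Alexander-Hirschowitz theorem gives the generic symmetric rank explicitly; in our notation, for generic complex symmetric tensors of order d ≥ 3 in n variables (the quadratic case d = 2, with generic rank n, is excluded):

```latex
% Generic symmetric rank (Alexander-Hirschowitz), our notation:
r_{\mathrm{gen}}(n,d) \;=\; \left\lceil \frac{1}{n}\binom{n+d-1}{d} \right\rceil,
\quad\text{except that it is one larger for } (d,n)\in\{(3,5),\,(4,3),\,(4,4),\,(4,5)\}.
```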

Tensor Rank and the Ill-Posedness of the Best Low-Rank Approximation Problem

TLDR
It is argued that the naive approach to this problem is doomed to failure because, unlike matrices, tensors of order 3 or higher can fail to have best rank-r approximations, and a natural way of overcoming the ill-posedness of the low-rank approximation problem is proposed by using weak solutions when true solutions do not exist.
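
The phenomenon admits a short worked example (the standard one, in our notation): for linearly independent vectors a and b, the rank-3 tensor below is a limit of rank-2 tensors,

```latex
% A rank-3 tensor with border rank 2:
T \;=\; a\otimes a\otimes b + a\otimes b\otimes a + b\otimes a\otimes a
\;=\; \lim_{\varepsilon\to 0}\ \frac{1}{\varepsilon}\Big[(a+\varepsilon b)^{\otimes 3} - a^{\otimes 3}\Big],
```

so the infimum of ||T - S|| over tensors S of rank at most 2 is 0 but is never attained, and T has no best rank-2 approximation.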

The permanent of a square matrix

Guaranteed Minimum-Rank Solutions of Linear Matrix Equations via Nuclear Norm Minimization

TLDR
It is shown that if a certain restricted isometry property holds for the linear transformation defining the constraints, the minimum-rank solution can be recovered by solving a convex optimization problem, namely, the minimization of the nuclear norm over the given affine space.
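
A minimal CVXPY sketch of that convex program, with synthetic random data of our own choosing (the problem sizes, measurement model, and names are assumptions, not taken from the paper); with enough generic measurements the low-rank target is recovered.

```python
# Nuclear-norm minimization over an affine space: min ||X||_* s.t. <A_i, X> = b_i.
import numpy as np
import cvxpy as cp

m, n, k = 8, 8, 56                                                    # matrix size, number of measurements
rng = np.random.default_rng(0)
X_true = rng.standard_normal((m, 2)) @ rng.standard_normal((2, n))    # rank-2 target
A = [rng.standard_normal((m, n)) for _ in range(k)]                   # random measurement matrices
b = np.array([np.sum(Ai * X_true) for Ai in A])                       # b_i = <A_i, X_true>

X = cp.Variable((m, n))
constraints = [cp.sum(cp.multiply(Ai, X)) == bi for Ai, bi in zip(A, b)]
cp.Problem(cp.Minimize(cp.normNuc(X)), constraints).solve()
print(np.linalg.norm(X.value - X_true) / np.linalg.norm(X_true))      # relative recovery error
```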

The Power of Convex Relaxation: Near-Optimal Matrix Completion

TLDR
This paper shows that, under certain incoherence assumptions on the singular vectors of the matrix, recovery is possible by solving a convenient convex program as soon as the number of entries is on the order of the information theoretic limit (up to logarithmic factors).
...