On the Nuclear Norm and the Singular Value Decomposition of Tensors

@article{Derksen2016OnTN,
  title={On the Nuclear Norm and the Singular Value Decomposition of Tensors},
  author={Harm Derksen},
  journal={Foundations of Computational Mathematics},
  year={2016},
  volume={16},
  pages={779-811}
}
  • H. Derksen
  • Published 18 August 2013
  • Mathematics, Computer Science
  • Foundations of Computational Mathematics
Finding the rank of a tensor is a problem that has many applications. Unfortunately, it is often very difficult to determine the rank of a given tensor. Inspired by the heuristics of convex relaxation, we consider the nuclear norm instead of the rank of a tensor. We determine the nuclear norm of various tensors of interest. Along the way, we also make a systematic study of various measures of orthogonality in tensor product spaces and we give a new generalization of the singular value decomposition… 
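
For orientation, the quantities the abstract refers to can be stated as follows (standard definitions, not quoted from the paper). For an order-d tensor T,

\operatorname{rank}(T) = \min\Bigl\{\, r : T = \sum_{i=1}^{r} u_i^{(1)} \otimes \cdots \otimes u_i^{(d)} \Bigr\},
\|T\|_{\sigma} = \max\bigl\{\, |\langle T,\ u^{(1)} \otimes \cdots \otimes u^{(d)} \rangle| : \|u^{(j)}\| = 1 \bigr\},
\|T\|_{*} = \min\Bigl\{\, \sum_i |\lambda_i| : T = \sum_i \lambda_i\, u_i^{(1)} \otimes \cdots \otimes u_i^{(d)},\ \|u_i^{(j)}\| = 1 \Bigr\}.

The nuclear norm \|\cdot\|_{*} and the spectral norm \|\cdot\|_{\sigma} are dual to each other, which is what makes the nuclear norm a natural convex surrogate for tensor rank.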

Nuclear norm of higher-order tensors

An analogue of Banach's theorem for the tensor spectral norm and of Comon's conjecture for tensor rank is established: for a symmetric tensor, the symmetric nuclear norm always equals the nuclear norm.
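
In symbols (with the standard definition of the symmetric nuclear norm assumed), the statement is that for a symmetric tensor T of order d,

\|T\|_{*,\mathrm{sym}} := \min\Bigl\{\, \sum_i |\lambda_i| : T = \sum_i \lambda_i\, v_i^{\otimes d},\ \|v_i\| = 1 \Bigr\} = \|T\|_{*}.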

Symmetric Tensor Nuclear Norms

  • Jiawang Nie
  • Computer Science
    SIAM J. Appl. Algebra Geom.
  • 2017
This paper discusses how to compute symmetric tensor nuclear norms, depending on the tensor order and the ground field, and proposes methods that can be extended to nonsymmetric tensors.

Bounds on the Spectral Norm and the Nuclear Norm of a Tensor Based on Tensor Partitions

  • Zhening Li
  • Computer Science, Mathematics
    SIAM J. Matrix Anal. Appl.
  • 2016
When a tensor is partitioned into its matrix slices, the resulting inequalities provide polynomial-time worst-case approximation bounds for computing the spectral norm and the nuclear norm of the tensor.
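
As an illustration of the simplest such partition (slicing along one mode), here is a small numerical sketch in numpy; it is my own illustration, not code from the paper. The sandwich max_k \|T_k\|_2 \le \|T\|_\sigma \le (\sum_k \|T_k\|_2^2)^{1/2} is the slice special case, and the alternating power iterations only give a lower bound on the true spectral norm.

import numpy as np

rng = np.random.default_rng(0)
T = rng.standard_normal((5, 6, 7))  # a random 5 x 6 x 7 tensor

# Spectral norms of the matrix slices T[:, :, k] obtained by cutting along the third mode.
slice_norms = np.array([np.linalg.norm(T[:, :, k], 2) for k in range(T.shape[2])])
lower = slice_norms.max()                   # max_k ||T_k||_2 <= ||T||_sigma
upper = np.sqrt((slice_norms ** 2).sum())   # ||T||_sigma <= sqrt(sum_k ||T_k||_2^2)

# Estimate ||T||_sigma by rank-1 alternating (higher-order power) iterations,
# warm-started from the best slice so the estimate can only improve on `lower`.
k_best = slice_norms.argmax()
u_mat, _, vt_mat = np.linalg.svd(T[:, :, k_best])
x, y = u_mat[:, 0], vt_mat[0]
z = np.zeros(T.shape[2]); z[k_best] = 1.0
for _ in range(200):
    x = np.einsum('ijk,j,k->i', T, y, z); x /= np.linalg.norm(x)
    y = np.einsum('ijk,i,k->j', T, x, z); y /= np.linalg.norm(y)
    z = np.einsum('ijk,i,j->k', T, x, y); z /= np.linalg.norm(z)
estimate = abs(np.einsum('ijk,i,j,k->', T, x, y, z))

print(f"{lower:.4f} <= {estimate:.4f} (power-method lower bound) <= ||T||_sigma <= {upper:.4f}")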

Tensor Ranks and Norms

Some notions of stable rank for tensors, built from common norms on tensor products, are introduced; the value of the recently introduced G-stable rank is calculated, the nuclear norm is investigated, and the relationship of these stable ranks to other notions of tensor rank is examined.

Rank Properties and Computational Methods for Orthogonal Tensor Decompositions

  • Chao Zeng
  • Computer Science, Mathematics
    Journal of Scientific Computing
  • 2022
This work presents several properties of the orthogonal rank, which differ from those of the tensor rank in many respects, and proposes an algorithm based on the augmented Lagrangian method that has a great advantage over the existing methods for strongly orthogonal decompositions in terms of the approximation error.

Algebraic Methods for Tensor Data

Numerical experiments are presented whose results show that the performance of the alternating least squares algorithm for the low-rank approximation of tensors can be improved using tensor amplification.
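
For readers who want to see the baseline being improved upon, here is a minimal numpy sketch of plain alternating least squares for a rank-r CP approximation of a third-order tensor; the tensor-amplification step of the cited work is not included, and all names here are illustrative.

import numpy as np

def khatri_rao(A, B):
    # Column-wise Kronecker product: (I*J) x R from I x R and J x R.
    return (A[:, None, :] * B[None, :, :]).reshape(-1, A.shape[1])

def cp_als(T, rank, iters=100, seed=0):
    # Plain ALS for a rank-`rank` CP approximation of a 3rd-order tensor:
    # T[i, j, k] ~= sum_r A[i, r] * B[j, r] * C[k, r].
    rng = np.random.default_rng(seed)
    I, J, K = T.shape
    A = rng.standard_normal((I, rank))
    B = rng.standard_normal((J, rank))
    C = rng.standard_normal((K, rank))
    T1 = T.reshape(I, J * K)                     # mode-1 unfolding
    T2 = np.moveaxis(T, 1, 0).reshape(J, I * K)  # mode-2 unfolding
    T3 = np.moveaxis(T, 2, 0).reshape(K, I * J)  # mode-3 unfolding
    for _ in range(iters):
        A = T1 @ np.linalg.pinv(khatri_rao(B, C)).T  # least-squares update of each factor
        B = T2 @ np.linalg.pinv(khatri_rao(A, C)).T
        C = T3 @ np.linalg.pinv(khatri_rao(A, B)).T
    approx = np.einsum('ir,jr,kr->ijk', A, B, C)
    return A, B, C, np.linalg.norm(T - approx)

# Example: a planted rank-3 tensor should be recovered up to a small residual.
rng = np.random.default_rng(1)
factors = [rng.standard_normal((n, 3)) for n in (8, 9, 10)]
T = np.einsum('ir,jr,kr->ijk', *factors)
*_, residual = cp_als(T, rank=3)
print("residual:", residual)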

On the tensor spectral p-norm and its dual norm via partitions

A generalization of the spectral norm and the nuclear norm of a tensor via arbitrary tensor partitions, a much richer concept than block tensors, is presented.

Completely positive tensor recovery with minimal nuclear value

The CP-nuclear value of a completely positive (CP) tensor and its properties are introduced, and a semidefinite relaxation algorithm is proposed for solving the minimal CP-nuclear-value tensor recovery problem.

On norm compression inequalities for partitioned block tensors

It is proved that for the tensor spectral norm, the norm of the compressed tensor is an upper bound on the norm of the original tensor, and this result can be extended to a general class of tensor spectral norms.

Approximate Low-Rank Tensor Learning

This work establishes a formal optimization guarantee for a general low-rank tensor learning formulation by combining a simple approximation algorithm for the tensor spectral norm with the recent generalized conditional gradient.
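
A sketch of the kind of pipeline this summary describes, under simplifying assumptions that are mine rather than the authors' (a squared observation loss, a tensor-nuclear-norm-ball constraint, and a plain power-method approximation of the spectral-norm oracle); it is not the cited paper's exact algorithm.

import numpy as np

def top_rank1(G, iters=50, seed=0):
    # Approximate argmax of <G, u⊗v⊗w> over unit vectors u, v, w
    # by alternating (higher-order power) iterations.
    rng = np.random.default_rng(seed)
    u = rng.standard_normal(G.shape[0]); u /= np.linalg.norm(u)
    v = rng.standard_normal(G.shape[1]); v /= np.linalg.norm(v)
    w = np.einsum('ijk,i,j->k', G, u, v); w /= np.linalg.norm(w)
    for _ in range(iters):
        u = np.einsum('ijk,j,k->i', G, v, w); u /= np.linalg.norm(u)
        v = np.einsum('ijk,i,k->j', G, u, w); v /= np.linalg.norm(v)
        w = np.einsum('ijk,i,j->k', G, u, v); w /= np.linalg.norm(w)
    return u, v, w

def fw_tensor_completion(T_obs, mask, tau, steps=200):
    # Conditional gradient (Frank-Wolfe) over the nuclear-norm ball {W : ||W||_* <= tau}.
    # Because the spectral norm is dual to the nuclear norm, the linear oracle is a
    # scaled rank-1 tensor aligned with the gradient, found approximately above.
    W = np.zeros_like(T_obs)
    for t in range(steps):
        G = mask * (W - T_obs)   # gradient of 0.5 * ||mask * (W - T_obs)||_F^2, mask in {0, 1}
        if np.linalg.norm(G) == 0:
            break
        u, v, w = top_rank1(G)
        S = -tau * np.einsum('i,j,k->ijk', u, v, w)   # extreme point minimizing <G, S>
        gamma = 2.0 / (t + 2.0)
        W = (1 - gamma) * W + gamma * S
    return W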

References

A Multilinear Singular Value Decomposition

There is a strong analogy between several properties of the matrix singular value decomposition and the higher-order tensor decomposition; uniqueness, the link with the matrix eigenvalue decomposition, first-order perturbation effects, etc., are analyzed.
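
A compact numpy sketch of the construction discussed here, the higher-order singular value decomposition of a third-order tensor (names and shapes are illustrative):

import numpy as np

def unfold(T, mode):
    # Mode-n unfolding: move axis `mode` to the front and flatten the rest.
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def hosvd(T):
    # Higher-order SVD: T = S x1 U0 x2 U1 x3 U2 with orthogonal factors U_n
    # (left singular vectors of the unfoldings) and an all-orthogonal core S.
    U = [np.linalg.svd(unfold(T, n), full_matrices=False)[0] for n in range(3)]
    S = np.einsum('ijk,ia,jb,kc->abc', T, U[0], U[1], U[2])
    return S, U

# Sanity check: the factors are orthogonal and the reconstruction is exact.
T = np.random.default_rng(0).standard_normal((4, 5, 6))
S, U = hosvd(T)
T_rec = np.einsum('abc,ia,jb,kc->ijk', S, U[0], U[1], U[2])
print(np.allclose(T, T_rec))  # True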

On the Ranks and Border Ranks of Symmetric Tensors

Improved lower bounds for the rank of a symmetric tensor are provided by considering the singularities of the hypersurface defined by the polynomial.

Tensor completion and low-n-rank tensor recovery via convex optimization

This paper uses the n-rank of a tensor as a sparsity measure and considers the low-n-rank tensor recovery problem, i.e. the problem of finding the tensor of lowest n-rank that fulfills some linear constraints.
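
A quick illustration of the sparsity measure itself: the n-rank of a tensor is the tuple of ranks of its mode unfoldings (a minimal numpy sketch; the convex recovery in the cited paper replaces the mode ranks by nuclear norms of the unfoldings).

import numpy as np

def n_rank(T):
    # The n-rank is the tuple of matrix ranks of the mode-n unfoldings.
    return tuple(
        np.linalg.matrix_rank(np.moveaxis(T, n, 0).reshape(T.shape[n], -1))
        for n in range(T.ndim)
    )

# A tensor built from factors of sizes (2, 3, 4) has n-rank at most (2, 3, 4).
rng = np.random.default_rng(0)
core = rng.standard_normal((2, 3, 4))
U = [rng.standard_normal((8, r)) for r in (2, 3, 4)]
T = np.einsum('abc,ia,jb,kc->ijk', core, *U)
print(n_rank(T))  # (2, 3, 4) for generic random factors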

Most Tensor Problems Are NP-Hard

It is proved that multilinear (tensor) analogues of many efficiently computable problems in numerical linear algebra are NP-hard, and it is shown that computing the combinatorial hyperdeterminant is NP-, #P-, and VNP-hard.

Powers of tensors and fast matrix multiplication

A method to analyze the powers of a given trilinear form (a special kind of algebraic construction also called a tensor) is presented and used to obtain improved upper bounds on the exponent ω of the asymptotic complexity of matrix multiplication.

Symmetric Tensors and Symmetric Tensor Rank

The notion of the generic symmetric rank is discussed, which, due to the work of Alexander and Hirschowitz, is now known for any values of dimension and order.

Tensor Rank and the Ill-Posedness of the Best Low-Rank Approximation Problem

It is argued that the naive approach to this problem is doomed to failure because, unlike matrices, tensors of order 3 or higher can fail to have best rank-r approximations, and a natural way of overcoming the ill-posedness of the low-rank approximation problem is proposed by using weak solutions when true solutions do not exist.
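
The standard example behind this phenomenon (classical, and not specific to the cited paper): for linearly independent vectors a_1, a_2, b_1, b_2, c_1, c_2, the tensor

T = a_1 \otimes b_1 \otimes c_2 + a_1 \otimes b_2 \otimes c_1 + a_2 \otimes b_1 \otimes c_1

has rank 3, yet

T = \lim_{\varepsilon \to 0} \frac{1}{\varepsilon}\bigl( (a_1 + \varepsilon a_2) \otimes (b_1 + \varepsilon b_2) \otimes (c_1 + \varepsilon c_2) - a_1 \otimes b_1 \otimes c_1 \bigr),

so T is a limit of rank-2 tensors and has no best rank-2 approximation: the distance from T to the set of tensors of rank at most 2 has infimum 0, which is not attained.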

The permanent of a square matrix

Guaranteed Minimum-Rank Solutions of Linear Matrix Equations via Nuclear Norm Minimization

It is shown that if a certain restricted isometry property holds for the linear transformation defining the constraints, the minimum-rank solution can be recovered by solving a convex optimization problem, namely, the minimization of the nuclear norm over the given affine space.
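
In formulas, the relaxation discussed here replaces the nonconvex rank objective by the nuclear norm, its convex envelope on the operator-norm unit ball (a standard statement, not a quotation):

\min_{X}\ \operatorname{rank}(X)\ \text{ s.t. }\ \mathcal{A}(X) = b \quad\longrightarrow\quad \min_{X}\ \|X\|_{*}\ \text{ s.t. }\ \mathcal{A}(X) = b,

and under a restricted isometry property of the linear map \mathcal{A}, the convex problem recovers the minimum-rank solution.
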
...