# On the Nuclear Norm and the Singular Value Decomposition of Tensors

@article{Derksen2016OnTN,
  title={On the Nuclear Norm and the Singular Value Decomposition of Tensors},
  author={Harm Derksen},
  journal={Foundations of Computational Mathematics},
  year={2016},
  volume={16},
  pages={779-811}
}

Finding the rank of a tensor is a problem that has many applications. Unfortunately, it is often very difficult to determine the rank of a given tensor. Inspired by the heuristics of convex relaxation, we consider the nuclear norm instead of the rank of a tensor. We determine the nuclear norm of various tensors of interest. Along the way, we also conduct a systematic study of various measures of orthogonality in tensor product spaces, and we give a new generalization of the singular value decomposition…
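For the matrix case, the nuclear norm is simply the sum of the singular values, and it is the standard convex surrogate for rank; a minimal NumPy sketch of that matrix analogue (background illustration, not code from the paper):

```python
import numpy as np

# Matrix case: the nuclear norm is the sum of the singular values.
# It is the convex envelope of rank on the spectral-norm unit ball,
# which motivates using it as a tractable proxy for rank.
A = np.diag([3.0, 4.0])

nuclear = np.linalg.norm(A, ord="nuc")   # sum of singular values: 3 + 4
assert np.isclose(nuclear, 7.0)
```

For tensors of order 3 or higher no such direct SVD-based formula is available, which is what makes the computations in this paper nontrivial.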

## 41 Citations

### Nuclear norm of higher-order tensors

- Computer Science, Mathematics; Math. Comput.
- 2018

An analogue of Banach's theorem for the tensor spectral norm and of Comon's conjecture for tensor rank is established: for a symmetric tensor, its symmetric nuclear norm always equals its nuclear norm.

### Symmetric Tensor Nuclear Norms

- Computer Science; SIAM J. Appl. Algebra Geom.
- 2017

This paper discusses how to compute symmetric tensor nuclear norms, depending on the tensor order and the ground field, and proposes methods that can be extended to nonsymmetric tensors.

### Bounds on the Spectral Norm and the Nuclear Norm of a Tensor Based on Tensor Partitions

- Computer Science, Mathematics; SIAM J. Matrix Anal. Appl.
- 2016

When a tensor is partitioned into its matrix slices, the inequalities provide polynomial-time worst-case approximation bounds for computing the spectral norm and the nuclear norm of the tensor.
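A hedged numerical sketch of bounds of this flavor, assuming NumPy (the paper's inequalities are stated for general partitions; here we only demonstrate two standard slice/flattening bounds on the tensor nuclear norm):

```python
import numpy as np

rng = np.random.default_rng(0)
T = rng.standard_normal((3, 4, 5))

# Upper bound: the nuclear norm is subadditive, and embedding a matrix
# as one slice of a tensor preserves its nuclear norm, so the sum of
# slice-wise nuclear norms upper-bounds the tensor nuclear norm.
upper = sum(np.linalg.norm(T[k], ord="nuc") for k in range(T.shape[0]))

# Lower bound: the nuclear norm of any flattening (mode-1 unfolding here)
# never exceeds the tensor nuclear norm.
lower = np.linalg.norm(T.reshape(3, 20), ord="nuc")

# The (hard-to-compute) tensor nuclear norm sits between the two.
assert lower <= upper + 1e-9
```

Both bounds are computable in polynomial time via matrix SVDs, which is exactly the appeal of partition-based approximations.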

### Algebraic Methods for Tensor Data

- Computer Science, Mathematics; SIAM J. Appl. Algebra Geom.
- 2021

Numerical experiments show that the performance of the alternating least squares algorithm for low-rank approximation of tensors can be improved using tensor amplification.

### Rank properties and computational methods for orthogonal tensor decompositions

- Mathematics, Computer Science; ArXiv
- 2021

This work proposes an algorithm based on the augmented Lagrangian method that guarantees orthogonality via a novel orthogonalization procedure, and shows that the proposed method has a clear advantage over existing methods for strongly orthogonal decompositions in terms of approximation error.

### On the tensor spectral p-norm and its dual norm via partitions

- Mathematics; Comput. Optim. Appl.
- 2020

A generalization of the spectral norm and the nuclear norm of a tensor via arbitrary tensor partitions, a much richer concept than block tensors, is presented.

### Completely positive tensor recovery with minimal nuclear value

- Computer Science; Comput. Optim. Appl.
- 2018

The CP-nuclear value of a completely positive (CP) tensor and its properties are introduced, and a semidefinite relaxation algorithm is proposed for solving minimal CP-nuclear-value tensor recovery.

### On norm compression inequalities for partitioned block tensors

- Computer Science, Mathematics
- 2020

It is proved that for the tensor spectral norm, the norm of the compressed tensor is an upper bound on the norm of the original tensor, and this result can be extended to a general class of tensor spectral norms.

### Approximate Low-Rank Tensor Learning

- Computer Science
- 2014

This work establishes a formal optimization guarantee for a general low-rank tensor learning formulation by combining a simple approximation algorithm for the tensor spectral norm with the recent generalized conditional gradient.

### Near-optimal sample complexity for noisy or 1-bit tensor completion

- Computer Science, Mathematics
- 2018

It is proved that when r = O(1), optimal sample complexity can be achieved by constraining either of two proxies for tensor rank, the convex M-norm or the non-convex max-qnorm, and it is shown how the 1-bit measurement model can be used for context-aware recommender systems.

## References

Showing 1-10 of 57 references

### A Multilinear Singular Value Decomposition

- Mathematics; SIAM J. Matrix Anal. Appl.
- 2000

There is a strong analogy between several properties of the matrix and the higher-order tensor decomposition; uniqueness, link with the matrix eigenvalue decomposition, first-order perturbation effects, etc., are analyzed.
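The multilinear SVD (HOSVD) referenced here takes the left singular vectors of each mode-n unfolding as the orthogonal factor for that mode; a short NumPy sketch of the construction:

```python
import numpy as np

def unfold(T, mode):
    # Mode-n unfolding: the chosen mode indexes the rows,
    # all remaining indices are flattened into the columns.
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

rng = np.random.default_rng(1)
T = rng.standard_normal((3, 4, 5))

# HOSVD factors: left singular vectors of each mode-n unfolding.
U = [np.linalg.svd(unfold(T, n), full_matrices=False)[0] for n in range(3)]

# Core tensor: contract each mode of T against the corresponding U_n.
S = np.einsum('ijk,ia,jb,kc->abc', T, U[0], U[1], U[2])

# Since each U_n is square orthogonal, the reconstruction is exact.
T_rec = np.einsum('abc,ia,jb,kc->ijk', S, U[0], U[1], U[2])
assert np.allclose(T, T_rec)
```

Truncating the columns of each U_n yields the low-multilinear-rank approximation discussed in this line of work, in analogy with the truncated matrix SVD.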

### On the Ranks and Border Ranks of Symmetric Tensors

- Mathematics, Computer Science; Found. Comput. Math.
- 2010

Improved lower bounds for the rank of a symmetric tensor are provided by considering the singularities of the hypersurface defined by the polynomial.

### Tensor completion and low-n-rank tensor recovery via convex optimization

- Computer Science
- 2011

This paper uses the n-rank of a tensor as a sparsity measure and considers the low-n-rank tensor recovery problem, i.e., the problem of finding the tensor of lowest n-rank that fulfills some linear constraints.

### Most Tensor Problems Are NP-Hard

- Computer Science, Mathematics; JACM
- 2013

It is proved that multilinear (tensor) analogues of many efficiently computable problems in numerical linear algebra are NP-hard, and that computing the combinatorial hyperdeterminant is NP-, #P-, and VNP-hard.

### Symmetric Tensors and Symmetric Tensor Rank

- Mathematics, Computer Science; SIAM J. Matrix Anal. Appl.
- 2008

The notion of the generic symmetric rank is discussed, which, due to the work of Alexander and Hirschowitz, is now known for any values of dimension and order.

### Tensor Rank and the Ill-Posedness of the Best Low-Rank Approximation Problem

- Mathematics, Computer Science; SIAM J. Matrix Anal. Appl.
- 2008

It is argued that the naive approach to this problem is doomed to failure because, unlike matrices, tensors of order 3 or higher can fail to have a best rank-r approximation; a natural way of overcoming this ill-posedness is proposed: use weak solutions when true solutions do not exist.
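The classic example behind this ill-posedness is the W tensor, W = a⊗a⊗b + a⊗b⊗a + b⊗a⊗a, which has rank 3 yet is a limit of rank-2 tensors, so no best rank-2 approximation exists; a numerical sketch assuming NumPy:

```python
import numpy as np

a = np.array([1.0, 0.0])
b = np.array([0.0, 1.0])
outer3 = lambda x, y, z: np.einsum('i,j,k->ijk', x, y, z)

# W has rank 3, but border rank 2.
W = outer3(a, a, b) + outer3(a, b, a) + outer3(b, a, a)

def rank2_approx(n):
    # A sum of just two rank-one terms that converges to W as n grows.
    an = a + b / n
    return n * outer3(an, an, an) - n * outer3(a, a, a)

errs = [np.linalg.norm(W - rank2_approx(n)) for n in (10, 100, 1000)]

# The approximation error shrinks like O(1/n), so the infimum over
# rank-2 tensors is 0 but is never attained.
assert errs[0] > errs[1] > errs[2]
```

This is exactly why the paper argues for weak solutions: the set of tensors of rank at most r is not closed for r ≥ 2 and order ≥ 3.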

### Guaranteed Minimum-Rank Solutions of Linear Matrix Equations via Nuclear Norm Minimization

- Computer Science, Mathematics; SIAM Rev.
- 2010

It is shown that if a certain restricted isometry property holds for the linear transformation defining the constraints, the minimum-rank solution can be recovered by solving a convex optimization problem, namely, the minimization of the nuclear norm over the given affine space.
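Algorithms for nuclear norm minimization typically rely on its proximal operator, which is singular value thresholding; a minimal NumPy sketch of that standard building block (not code from the paper):

```python
import numpy as np

def svt(A, tau):
    # Singular value thresholding: the proximal operator of
    # tau * (nuclear norm). Soft-thresholds each singular value.
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

A = np.diag([5.0, 1.0])
B = svt(A, 2.0)

# Singular values 5 and 1 shrink to 3 and 0: the output has rank 1,
# which is how nuclear norm minimization promotes low-rank solutions.
assert np.linalg.matrix_rank(B) == 1
assert np.isclose(np.linalg.norm(B, ord="nuc"), 3.0)
```

Iterating this shrinkage inside a proximal gradient loop gives standard solvers for affinely constrained minimum-nuclear-norm recovery.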

### Three-way arrays: rank and uniqueness of trilinear decompositions, with application to arithmetic complexity and statistics

- Mathematics
- 1977

### The Power of Convex Relaxation: Near-Optimal Matrix Completion

- Computer Science; IEEE Transactions on Information Theory
- 2010

This paper shows that, under certain incoherence assumptions on the singular vectors of the matrix, recovery is possible by solving a convenient convex program as soon as the number of entries is on the order of the information theoretic limit (up to logarithmic factors).