# Most Tensor Problems Are NP-Hard

```bibtex
@article{Hillar2013MostTP,
  title   = {Most Tensor Problems Are NP-Hard},
  author  = {Christopher J. Hillar and Lek-Heng Lim},
  journal = {ArXiv},
  year    = {2013},
  volume  = {abs/0911.1393}
}
```

We prove that multilinear (tensor) analogues of many efficiently computable problems in numerical linear algebra are NP-hard. Our list includes: determining the feasibility of a system of bilinear equations; deciding whether a 3-tensor possesses a given eigenvalue, singular value, or spectral norm; approximating an eigenvalue, eigenvector, singular vector, or the spectral norm; and determining the rank or best rank-1 approximation of a 3-tensor. Furthermore, we show that restricting these…
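
Because best rank-1 approximation of a 3-tensor is NP-hard in general, practice relies on local heuristics such as the higher-order power method (alternating rank-1 updates). The following is a minimal NumPy sketch of that heuristic; the function name and defaults are illustrative, not from the paper:

```python
import numpy as np

def rank1_power_method(T, iters=200, seed=0):
    """Heuristic rank-1 approximation of a 3-tensor T: alternately update
    unit vectors u, v, w to increase the multilinear form T(u, v, w).
    Converges only to a local optimum; the global problem is NP-hard."""
    rng = np.random.default_rng(seed)
    I, J, K = T.shape
    u = rng.standard_normal(I); u /= np.linalg.norm(u)
    v = rng.standard_normal(J); v /= np.linalg.norm(v)
    w = rng.standard_normal(K); w /= np.linalg.norm(w)
    for _ in range(iters):
        u = np.einsum('ijk,j,k->i', T, v, w); u /= np.linalg.norm(u)
        v = np.einsum('ijk,i,k->j', T, u, w); v /= np.linalg.norm(v)
        w = np.einsum('ijk,i,j->k', T, u, v); w /= np.linalg.norm(w)
    lam = np.einsum('ijk,i,j,k->', T, u, v, w)  # attained form value
    return lam, u, v, w
```

On an exactly rank-1 input the iteration recovers the factors; for general tensors it offers no global guarantee, which is precisely what the hardness results above explain.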

## 825 Citations

Low-Rank Approximation and Completion of Positive Tensors

- Mathematics, Computer Science
- SIAM J. Matrix Anal. Appl.
- 2016

The approach is to use algebraic topology to define a new (numerically well-posed) decomposition for positive tensors, which is equivalent to the standard tensor decomposition in important cases, and it is proved that this decomposition can be exactly reformulated as a convex optimization problem.

How hard is the tensor rank?

- Mathematics
- 2016

We investigate the computational complexity of tensor rank, a concept that plays a fundamental role in different topics of modern applied mathematics. For tensors over any integral domain, we prove…

Tensor and its tucker core: The invariance relationships

- Mathematics, Computer Science
- Numer. Linear Algebra Appl.
- 2017

This paper shows that the Tucker core of a tensor retains many properties of the original tensor, including the CP rank, the border rank, the tensor Schatten quasi-norms, and the Z-eigenvalues, when the core tensor is smaller than the original tensor.

Nonnegative Tensor Completion via Integer Optimization

- Computer Science, Mathematics
- ArXiv
- 2021

This paper develops a new algorithm for the special case of completion for nonnegative tensors that converges in a linear (in numerical tolerance) number of oracle steps, while achieving the information-theoretic rate.

Near-optimal sample complexity for noisy or 1-bit tensor completion

- Computer Science
- 2018

It is proved that when r = O(1), the authors can achieve optimal sample complexity by constraining either one of two proxies for tensor rank, the convex M-norm or the non-convex max-qnorm, and it is shown how the 1-bit measurement model can be used for context-aware recommender systems.

Greedy Optimization and Applications to Structured Tensor Factorizations

- 2016

Efficiently representing real-world data in a succinct and parsimonious manner is of central importance in many fields. We present a generalized greedy framework, which allows us to efficiently solve…

Semidefinite Relaxations for Best Rank-1 Tensor Approximations

- Mathematics, Computer Science
- SIAM J. Matrix Anal. Appl.
- 2014

This paper proposes semidefinite relaxations, based on sum of squares representations, to solve the problem of finding best rank-1 approximations for both symmetric and nonsymmetric tensors.

Exact Partitioning of High-order Models with a Novel Convex Tensor Cone Relaxation

- Mathematics, Computer Science
- ArXiv
- 2019

This paper defines a general class of $m$-degree Homogeneous Polynomial Models, which subsumes several examples motivated from prior literature, and proposes the first correct poly-time algorithm for exact partitioning of high-order models.

A Closed Form Solution to Best Rank-1 Tensor Approximation via KL divergence Minimization

- Computer Science
- ArXiv
- 2021

This work analytically derives a closed-form solution for the rank-1 tensor that minimizes the KL divergence from a given positive tensor, and demonstrates that the algorithm is an order of magnitude faster than existing rank-1 approximation methods and gives better approximations of given tensors, which supports the theoretical finding.

Kullback-Leibler principal component for tensors is not NP-hard

- Computer Science, Engineering
- 2017 51st Asilomar Conference on Signals, Systems, and Computers
- 2017

We study the problem of nonnegative rank-one approximation of a nonnegative tensor, and show that the globally optimal solution that minimizes the generalized Kullback-Leibler divergence can be…
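
The closed form reported in this line of work has a simple structure: for the 3-way case, the rank-1 minimizer of the generalized KL divergence is the outer product of the mode-wise marginal sums divided by the squared total sum. A NumPy sketch of that formula, with illustrative names (this is not the authors' code):

```python
import numpy as np

def kl_best_rank1(T):
    """Closed-form rank-1 minimizer of the generalized KL divergence
    D(T || W) over nonnegative rank-1 tensors W (3-way case):
    W = (m1 outer m2 outer m3) / S**2, where m1, m2, m3 are the
    mode marginals and S is the total sum. Formula as reported in the
    KL rank-1 approximation literature; a sketch, not a definitive
    implementation."""
    S = T.sum()
    m1 = T.sum(axis=(1, 2))
    m2 = T.sum(axis=(0, 2))
    m3 = T.sum(axis=(0, 1))
    return np.einsum('i,j,k->ijk', m1, m2, m3) / S**2

def gen_kl(T, W, eps=1e-12):
    """Generalized KL divergence sum(T * log(T / W) - T + W)."""
    return np.sum(T * np.log((T + eps) / (W + eps)) - T + W)
```

One can check numerically that no other nonnegative rank-1 candidate attains a smaller generalized KL divergence, in contrast to the Frobenius-norm objective, for which the same problem is NP-hard.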

## References

Showing 1-10 of 212 references.

On the Complexity of Mixed Discriminants and Related Problems

- Mathematics, Computer Science
- MFCS
- 2005

It is proved that it is #P-hard to compute the mixed discriminant of rank-2 positive semidefinite matrices, and poly-time algorithms to approximate the "beast" are presented.

Finding Well-Conditioned Similarities to Block-Diagonalize Nonsymmetric Matrices Is NP-Hard

- Computer Science, Mathematics
- J. Complex.
- 1995

Given an upper triangular matrix $A \in \mathbb{R}^{n \times n}$ and a tolerance, we show that the problem of finding a similarity transformation $G$ such that $G^{-1}AG$ is block diagonal with the condition number of $G$ being…

Tensor Rank and the Ill-Posedness of the Best Low-Rank Approximation Problem

- Mathematics, Computer Science
- SIAM J. Matrix Anal. Appl.
- 2008

It is argued that the naive approach to this problem is doomed to failure because, unlike matrices, tensors of order 3 or higher can fail to have best rank-r approximations, and a natural way of overcoming the ill-posedness of the low-rank approximation problem is proposed by using weak solutions when true solutions do not exist.

Computing a nonnegative matrix factorization -- provably

- Mathematics, Computer Science
- STOC '12
- 2012

This work gives an algorithm that runs in time polynomial in n, m, and r under the separability condition identified by Donoho and Stodden in 2003, and is the first polynomial-time algorithm that provably works under a non-trivial condition on the input matrix.

NP-hardness of deciding convexity of quartic polynomials and related problems

- Mathematics, Computer Science
- Math. Program.
- 2013

It is shown that unless P = NP, there exists no polynomial-time (or even pseudo-polynomial-time) algorithm that can decide whether a multivariate polynomial of degree four (or higher even degree) is globally convex, and it is proved that deciding strict convexity, strong convexity, quasiconvexity, and pseudoconvexity of polynomials of even degree four or higher is strongly NP-hard.

Most Tensor Problems Are NP-Hard

- Computer Science
- 2013

We prove that multilinear (tensor) analogues of many efficiently computable problems in numerical linear algebra are NP-hard. Our list includes: determining the feasibility of a system of bilinear ...

Systems of Bilinear Equations

- Mathematics
- 1997

How hard is it to solve a system of bilinear equations? No solutions are presented in this report, but the problem is posed and some preliminary remarks are made. In particular, solving a system of…
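
For concreteness, a system of bilinear equations asks for vectors $x, y$ with $x^T A_k y = b_k$ for given matrices $A_k$; this is one of the feasibility problems shown NP-hard above. A small sketch (illustrative names, and a residual check rather than a solver, since verifying a candidate is easy while deciding feasibility is hard):

```python
import numpy as np

def bilinear_residual(As, b, x, y):
    """Residuals of the bilinear system x^T A_k y = b_k for k = 1..m.
    Checking a proposed (x, y) is trivial; deciding whether any
    solution exists at all is NP-hard."""
    return np.array([x @ A @ y for A in As]) - b

# A feasible 2-equation system, made feasible by construction
# from a known solution (x_true, y_true).
x_true = np.array([1.0, 2.0])
y_true = np.array([3.0, -1.0])
As = [np.array([[1.0, 0.0], [0.0, 1.0]]),
      np.array([[0.0, 1.0], [1.0, 0.0]])]
b = np.array([x_true @ A @ y_true for A in As])
```

The gap between checking and deciding is exactly the NP-hardness phenomenon the surveyed paper formalizes for this and the other tensor problems listed above.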

The Multivariate Resultant Is NP-hard in Any Characteristic

- Mathematics, Computer Science
- MFCS
- 2010

The main result is that testing the resultant for zero is NP-hard under deterministic reductions in any characteristic, for systems of low-degree polynomials with coefficients in the ground field (rather than in an extension).

Tensor decomposition and approximation schemes for constraint satisfaction problems

- Mathematics, Computer Science
- STOC '05
- 2005

The authors give PTASs for a much larger class of weighted MAX-rCSP problems, which includes as special cases the dense problems and, for r = 2, all metric and quasimetric instances; for r > 2, this class includes a generalization of metrics.

Symmetric Tensors and Symmetric Tensor Rank

- Mathematics, Computer Science
- SIAM J. Matrix Anal. Appl.
- 2008

The notion of the generic symmetric rank is discussed, which, due to the work of Alexander and Hirschowitz, is now known for any values of dimension and order.