# Deterministic tensor completion with hypergraph expanders

@article{Harris2019DeterministicTC, title={Deterministic tensor completion with hypergraph expanders}, author={K. D. Harris and Yizhe Zhu}, journal={ArXiv}, year={2019}, volume={abs/1910.10692} }

We provide a novel analysis of low rank tensor completion based on hypergraph expanders. As a proxy for rank, we minimize the max-quasinorm of the tensor, introduced by Ghadermarzy, Plan, and Yilmaz (2018), which generalizes the max-norm for matrices. Our analysis is deterministic and shows that the number of samples required to recover an order-$t$ tensor with at most $n$ entries per dimension is linear in $n$, under the assumption that the rank and order of the tensor are $O(1)$. As steps in…
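As a rough, hypothetical illustration of the underlying problem (not the authors' max-quasinorm algorithm or their deterministic analysis), the sketch below recovers a low-rank order-3 tensor from a subset of entries by fitting a CP factorization with gradient descent on the observed-entry squared error. All sizes, rank, step size, and iteration count here are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
n, r = 8, 2  # side length and CP rank (toy values)

# Ground-truth rank-r order-3 tensor: T[i,j,k] = sum_s A0[i,s] B0[j,s] C0[k,s].
A0, B0, C0 = (rng.standard_normal((n, r)) for _ in range(3))
T = np.einsum('ir,jr,kr->ijk', A0, B0, C0)

# Observe roughly half of the entries.
mask = rng.random((n, n, n)) < 0.5

def observed_loss(A, B, C):
    # Squared error restricted to the observed entries.
    R = mask * (np.einsum('ir,jr,kr->ijk', A, B, C) - T)
    return 0.5 * np.sum(R ** 2)

# Gradient descent on the CP factors from a small random initialization.
A, B, C = (0.1 * rng.standard_normal((n, r)) for _ in range(3))
lr = 0.01
initial_loss = observed_loss(A, B, C)
for _ in range(2000):
    R = mask * (np.einsum('ir,jr,kr->ijk', A, B, C) - T)  # residual on observed set
    # Tuple assignment: all three gradients use the pre-update factors.
    A, B, C = (A - lr * np.einsum('ijk,jr,kr->ir', R, B, C),
               B - lr * np.einsum('ijk,ir,kr->jr', R, A, C),
               C - lr * np.einsum('ijk,ir,jr->kr', R, A, B))
final_loss = observed_loss(A, B, C)
print(f"observed-entry loss: {initial_loss:.2f} -> {final_loss:.2f}")
```

The paper's contribution is orthogonal to this heuristic: it asks which *deterministic* sample sets (here drawn at random for simplicity) suffice for recovery, answering via hypergraph expanders.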

#### 3 Citations

Sparse random tensors: Concentration, regularization and applications

- Mathematics
- 2019

We prove a non-asymptotic concentration inequality of sparse inhomogeneous random tensors under the spectral norm. For an order-$k$ inhomogeneous random tensor $T$ with sparsity $p_{\max}\geq…

#### References

Showing 1-10 of 75 references

Near-optimal sample complexity for convex tensor completion

- Mathematics, Computer Science
- ArXiv
- 2017

This paper proves that constrained least-squares estimation using either the convex atomic norm or the nonconvex max-qnorm achieves optimal sample complexity for low-rank tensor completion, and shows that these bounds are nearly minimax rate-optimal.

Spectral algorithms for tensor completion

- Mathematics, Computer Science
- ArXiv
- 2016

A new unfolding-based method is proposed which outperforms naive ones for symmetric $k$-th order tensors of rank $r$; this result is complemented with a different spectral algorithm for third-order tensors in the overcomplete regime.

Provable Tensor Factorization with Missing Data

- Computer Science, Mathematics
- NIPS
- 2014

A novel alternating-minimization-based method which iteratively refines estimates of the singular vectors and can recover a three-mode $n \times n \times n$ rank-$r$ tensor exactly from $O(n^{3/2} r^5 \log^4 n)$ randomly sampled entries.

Square Deal: Lower Bounds and Improved Relaxations for Tensor Recovery

- Mathematics, Computer Science
- ICML
- 2014

The new tractable formulation for low-rank tensor recovery shows how the sample complexity can be reduced by designing convex regularizers that exploit several structures jointly.

Noisy Tensor Completion via the Sum-of-Squares Hierarchy

- Mathematics, Computer Science
- COLT
- 2016

The main technical result characterizes the Rademacher complexity of the sequence of norms that arise in the sum-of-squares relaxations of the tensor nuclear norm, by establishing a new connection between noisy tensor completion and the task of refuting random constraint satisfaction problems.

Tensor completion and low-n-rank tensor recovery via convex optimization

- Mathematics
- 2011

In this paper we consider sparsity on a tensor level, as given by the n-rank of a tensor. In an important sparse-vector approximation problem (compressed sensing) and the low-rank matrix recovery…

Tensor Completion Algorithms in Big Data Analytics

- Mathematics, Computer Science
- ACM Trans. Knowl. Discov. Data
- 2019

A modern overview of recent advances in tensor completion algorithms from the perspective of big data analytics characterized by diverse variety, large volume, and high velocity is provided.

Most Tensor Problems Are NP-Hard

- Mathematics, Computer Science
- JACM
- 2013

It is proved that multilinear (tensor) analogues of many efficiently computable problems in numerical linear algebra are NP-hard, and that computing the combinatorial hyperdeterminant is NP-, #P-, and VNP-hard.

Tensor completion for estimating missing values in visual data

- Medicine, Mathematics
- 2009 IEEE 12th International Conference on Computer Vision
- 2009

An algorithm to estimate missing values in tensors of visual data is proposed, laying out the theoretical foundations and building a working algorithm that is more accurate and robust than heuristic approaches.

Universal Matrix Completion

- Mathematics, Computer Science
- ICML
- 2014

This work shows that if the set of sampled indices comes from the edges of a bipartite graph with large spectral gap, then the nuclear norm minimization based method exactly recovers all low-rank matrices that satisfy certain incoherence properties.