Corpus ID: 227989940

TenIPS: Inverse Propensity Sampling for Tensor Completion

@inproceedings{Yang2021TenIPSIP,
  title={TenIPS: Inverse Propensity Sampling for Tensor Completion},
  author={Chengrun Yang and Lijun Ding and Ziyang Wu and Madeleine Udell},
  booktitle={International Conference on Artificial Intelligence and Statistics},
  year={2021}
}
Tensors are widely used to represent multiway arrays of data. The recovery of missing entries in a tensor has been extensively studied, generally under the assumption that entries are missing completely at random (MCAR). However, in most practical settings, observations are missing not at random (MNAR): the probability that a given entry is observed (also called the propensity) may depend on other entries in the tensor or even on the value of the missing entry. In this paper, we study the… 
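The core inverse propensity sampling idea admits a short sketch: if entry (i, j, k) is observed with propensity p_ijk, then dividing each observed entry by its propensity yields an unbiased estimate of the complete tensor, which can then be projected to low multilinear rank via HOSVD. Below is a minimal numpy sketch, assuming the propensities have already been estimated; the function names are illustrative, not the authors' code.

import numpy as np

def hosvd_truncate(T, ranks):
    # Standard HOSVD truncation: project T onto the span of the top-r
    # left singular vectors of each mode unfolding.
    out = T
    for mode, r in enumerate(ranks):
        unfold = np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)
        U, _, _ = np.linalg.svd(unfold, full_matrices=False)
        P = U[:, :r] @ U[:, :r].T
        out = np.moveaxis(np.tensordot(P, np.moveaxis(out, mode, 0), axes=1), 0, mode)
    return out

def ips_estimate(observed, mask, propensities, ranks):
    # Weighting each observed entry by 1 / propensity makes the zero-filled
    # tensor an unbiased estimate of the complete tensor under MNAR sampling.
    weighted = np.where(mask, observed / propensities, 0.0)
    return hosvd_truncate(weighted, ranks)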

Citations

Covariate-assisted Sparse Tensor Completion

Covariate-assisted Sparse Tensor Completion (COSTCO) is proposed to incorporate covariate information into the recovery of a sparse tensor, jointly extracting latent components from both the tensor and the covariate matrix to learn a synthetic representation.

Matrix Completion With Data-Dependent Missingness Probabilities

Two new estimators, based on singular value thresholding and nuclear norm minimization, are proposed to recover a matrix whose entries are missing with data-dependent probabilities; both involve no tuning parameters and are shown to be consistent under a low-rank assumption.
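For context, a one-shot singular value thresholding step is easy to state; the following is a minimal generic sketch in numpy, not the paper's tuning-free estimators:

import numpy as np

def svt(M, mask, tau):
    # Zero-fill unobserved entries, then soft-threshold the spectrum:
    # singular values below tau are dropped, the rest are shrunk by tau.
    Y = np.where(mask, M, 0.0)
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt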

Euclidean-Norm-Induced Schatten-p Quasi-Norm Regularization for Low-Rank Tensor Completion and Tensor Robust Principal Component Analysis

A new class of tensor rank regularizers based on the Euclidean norms of the CP component vectors of a tensor is proposed, and it is proved that for low-rank tensor completion (LRTC) with a Schatten-$p$ quasi-norm regularizer on order-$d$ tensors, $p = 1/d$ is always better than any $p > 1/d$ in terms of generalization ability.

Truncated Matrix Completion - An Empirical Study

Through a series of experiments, this paper studies and compares the performance of low-rank matrix completion (LRMC) algorithms that were originally designed for data-independent sampling patterns, in various settings where the sampling mask depends on the underlying data values.

References


Provable Tensor Factorization with Missing Data

A novel alternating-minimization-based method is proposed that iteratively refines estimates of the singular vectors and can exactly recover a three-mode $n \times n \times n$ rank-$r$ tensor from $O(n^{3/2} r^5 \log^4 n)$ randomly sampled entries.
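To make the alternating idea concrete, here is a generic alternating least squares sketch that fits a rank-r CP model using only observed entries; it is an illustration with hypothetical names, not the authors' singular-vector refinement algorithm:

import numpy as np

def update_factor(T, mask, F1, F2, mode):
    # Least-squares update of one CP factor, using only observed entries.
    Tm, Mm = np.moveaxis(T, mode, 0), np.moveaxis(mask, mode, 0)
    out = np.zeros((Tm.shape[0], F1.shape[1]))
    for i in range(Tm.shape[0]):
        rows = np.argwhere(Mm[i])                  # observed index pairs
        if len(rows) == 0:
            continue
        design = F1[rows[:, 0]] * F2[rows[:, 1]]   # Khatri-Rao rows
        out[i], *_ = np.linalg.lstsq(design, Tm[i][Mm[i]], rcond=None)
    return out

def als_complete(T, mask, r, iters=30, seed=0):
    rng = np.random.default_rng(seed)
    A, B, C = (rng.standard_normal((n, r)) for n in T.shape)
    for _ in range(iters):
        A = update_factor(T, mask, B, C, mode=0)
        B = update_factor(T, mask, A, C, mode=1)
        C = update_factor(T, mask, A, B, mode=2)
    return np.einsum('ir,jr,kr->ijk', A, B, C)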

HOSVD-Based Algorithm for Weighted Tensor Completion

An efficient weighted HOSVD algorithm is proposed for recovering the underlying low-rank tensor from noisy observations, and error bounds are derived under a properly weighted metric.

Tensor Completion Made Practical

This paper introduces a new variant of alternating minimization, inspired by understanding how the progress measures that guide convergence of alternating minimization in the matrix setting must be adapted to the tensor setting; the algorithm enjoys strong provable guarantees, including linear convergence to the true tensor even when the factors are highly correlated.

Square Deal: Lower Bounds and Improved Relaxations for Tensor Recovery

The new tractable formulation for low-rank tensor recovery shows how the sample complexity can be reduced by designing convex regularizers that exploit several structures jointly.

Low-Rank Matrix and Tensor Completion via Adaptive Sampling

In the absence of noise, it is shown that one can exactly recover an $n \times n$ matrix of rank $r$ from merely $\Omega(n r^{3/2} \log r)$ matrix entries, and an order-$T$ tensor from $\Omega(n r^{T - 1/2} T^2 \log r)$ entries.

Missing Slice Recovery for Tensors Using a Low-Rank Model in Embedded Space

This study extends a delay embedding for a time series to a "multi-way delay-embedding transform" for a tensor, which takes a given incomplete tensor as the input and outputs a higher-order incomplete Hankel tensor.
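The 1-D building block being generalized is the classical delay embedding of a time series into a Hankel matrix; a minimal sketch follows (the paper's multi-way transform applies an embedding of this kind along each mode):

import numpy as np

def delay_embed(x, tau):
    # Map a length-n series to a tau x (n - tau + 1) Hankel matrix with
    # constant anti-diagonals: H[i, j] = x[i + j].
    n = len(x)
    return np.stack([x[i:i + n - tau + 1] for i in range(tau)])

# delay_embed(np.arange(6), 3) ->
# [[0 1 2 3]
#  [1 2 3 4]
#  [2 3 4 5]]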

Statistically Optimal and Computationally Efficient Low Rank Tensor Completion from Noisy Entries

The fundamental statistical limits of noisy tensor completion are characterized by establishing minimax optimal rates of convergence for estimating a $k$th-order low-rank tensor under the general $\ell_p$ ($1 \le p \le 2$) norm; these rates suggest significant room for improvement over existing approaches.

Cross: Efficient Low-rank Tensor Completion

This article proposes a framework for low-rank tensor completion via a novel tensor measurement scheme called Cross, and develops a theoretical upper bound and a matching minimax lower bound on the recovery error over certain classes of low-rank tensors.

Fundamental Conditions for Low-CP-Rank Tensor Completion

This work considers the problem of low canonical polyadic (CP) rank tensor completion and proposes a combinatorial method to derive a lower bound on the sampling probability of the tensor, or equivalently, the number of sampled entries, that guarantees finite completability with high probability.

1-Bit Tensor Completion

This work introduces a novel approach, called 1-bit Tensor Completion, for recovering a low-rank tensor from a small number of binary measurements; it relies on applying 1-bit matrix completion to different matricizations of the underlying tensor.
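A sketch of the matricization-based idea: unfold the tensor along a mode and fit a low-rank matrix to the binary observations by gradient descent on the logistic (Bernoulli) log-likelihood. This is a generic 1-bit matrix completion routine with hypothetical names, not the paper's exact algorithm:

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def one_bit_mc(Y, mask, r, lr=0.1, iters=500, seed=0):
    # Fit a rank-r matrix X = U @ V.T to binary observations Y in {0, 1}
    # by gradient descent on the logistic loss over observed entries.
    rng = np.random.default_rng(seed)
    U = 0.1 * rng.standard_normal((Y.shape[0], r))
    V = 0.1 * rng.standard_normal((Y.shape[1], r))
    for _ in range(iters):
        G = mask * (sigmoid(U @ V.T) - Y)          # gradient w.r.t. X
        U, V = U - lr * (G @ V), V - lr * (G.T @ U)
    return U @ V.T

# For a tensor Y_tensor, apply this to each mode unfolding, e.g.
# np.moveaxis(Y_tensor, mode, 0).reshape(Y_tensor.shape[mode], -1).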