An Adaptation for Iterative Structured Matrix Completion

@article{Kassab2020AnAF,
  title={An Adaptation for Iterative Structured Matrix Completion},
  author={Lara Kassab and Henry Adams and Deanna Needell},
  journal={2020 54th Asilomar Conference on Signals, Systems, and Computers},
  year={2020},
  pages={1451-1456}
}
Matrix completion is the task of predicting the missing entries of a matrix from a subset of known entries. Notions of structured matrix completion include any setting in which whether an entry is observed does not occur uniformly at random. In recent work, a modification of the standard nuclear norm minimization for matrix completion has been made to take into account sparsity-based structure in the missing entries, which is motivated, e.g., by recommender systems. In this work, we propose adjusting…
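The modification referenced above augments standard nuclear norm minimization with a penalty on the values assigned to unobserved entries, reflecting the assumption that unrevealed entries are biased toward small or zero values (as with unrated items in a recommender system). Below is a minimal sketch of that structured formulation using cvxpy; it illustrates the general idea rather than the authors' code, and the function name, mask convention, and toy data are assumptions made for the example.

```python
# Sketch of structured nuclear norm minimization: complete a matrix from observed
# entries while additionally penalizing the magnitude of the unobserved entries.
import numpy as np
import cvxpy as cp

def structured_nuclear_norm_completion(M_obs, mask, alpha=0.1):
    """M_obs: matrix with zeros in unobserved positions; mask: 1 where observed, 0 otherwise."""
    X = cp.Variable(M_obs.shape)
    low_rank = cp.norm(X, "nuc")                            # low-rank-promoting term
    sparsity = cp.sum(cp.abs(cp.multiply(1 - mask, X)))     # l1 penalty on unobserved entries
    constraints = [cp.multiply(mask, X) == cp.multiply(mask, M_obs)]  # match observed entries
    cp.Problem(cp.Minimize(low_rank + alpha * sparsity), constraints).solve()
    return X.value

# Toy usage: a rank-1 matrix whose larger entries are more likely to be observed,
# a simple stand-in for sparsity-correlated (non-uniform) missingness.
rng = np.random.default_rng(0)
M_true = np.outer(rng.random(8), rng.random(6))
mask = (M_true > np.median(M_true)).astype(float)
X_hat = structured_nuclear_norm_completion(M_true * mask, mask, alpha=0.05)
```

Setting alpha to zero recovers plain nuclear norm minimization, the baseline against which the structured variant is compared.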

Convex and Nonconvex Approaches for the Matrix Completion Problem

This paper tests the (convex) soft-impute and (nonconvex) alternating proximal matrix completion methods on real data, evaluates [1]'s claim that nonconvex approaches perform better than convex approaches, and compares the two approaches in terms of computational complexity, execution time, and prediction accuracy.

Research Statement: Bridging applied and quantitative topology

Henry Adams, Colorado State University. Large sets of high-dimensional data are common in most branches of science, and their shapes reflect important patterns within. The goal of topological data…

References

Showing 1–10 of 61 references

Matrix Completion for Structured Observations

This work proposes adjusting the standard nuclear norm minimization strategy for matrix completion to account for such structural differences between observed and unobserved entries by regularizing the values of the unobserved entries, and shows that the proposed method outperforms nuclear norm minimization in certain settings.

Coherent Matrix Completion

It is shown that nuclear norm minimization can recover an arbitrary $n \times n$ matrix of rank $r$ from $O(nr \log^2 n)$ revealed entries, provided that the revealed entries are drawn proportionally to the local row and column coherences of the underlying matrix.

Iterative reweighted algorithms for matrix rank minimization

This paper proposes a family of iterative reweighted least squares algorithms, IRLS-p, and gives theoretical guarantees similar to those for nuclear norm minimization, namely recovery of low-rank matrices under certain assumptions on the operator defining the constraints.
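As a concrete illustration of the reweighting idea, here is a hedged numpy/scipy sketch of an IRLS-p-style iteration specialized to matrix completion: the weight matrix is $(X X^\top + \gamma I)^{p/2 - 1}$ and each column is updated by a weighted least squares step that keeps its observed entries fixed. The parameter schedule, initialization, and function name are my own simplifications, not the paper's reference implementation.

```python
# IRLS-style matrix completion sketch: alternate between recomputing the weight
# matrix W and minimizing sum_j x_j^T W x_j subject to the observed entries.
import numpy as np
from scipy.linalg import fractional_matrix_power, solve

def irls_matrix_completion(M_obs, mask, p=0.0, gamma=1.0, iters=50):
    m, n = M_obs.shape
    X = np.array(mask * M_obs, dtype=float)          # start from the observed entries
    for _ in range(iters):
        W = fractional_matrix_power(X @ X.T + gamma * np.eye(m), p / 2 - 1)
        for j in range(n):
            obs = mask[:, j].astype(bool)
            un = ~obs
            if obs.any() and un.any():
                # minimize x^T W x over the unobserved coordinates of column j
                X[un, j] = -solve(W[np.ix_(un, un)], W[np.ix_(un, obs)] @ M_obs[obs, j])
        gamma *= 0.9                                  # slowly decrease the smoothing parameter
    return X
```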

The Power of Convex Relaxation: Near-Optimal Matrix Completion

This paper shows that, under certain incoherence assumptions on the singular vectors of the matrix, recovery is possible by solving a convenient convex program as soon as the number of entries is on the order of the information-theoretic limit (up to logarithmic factors).

Imputation and low-rank estimation with Missing Not At Random data

This paper proposes matrix completion methods to recover Missing Not At Random (MNAR) data, with application to predicting whether doctors should administer tranexamic acid to patients with traumatic brain injury in order to limit excessive bleeding.

Completing any low-rank matrix, provably

It is shown that any low-rank matrix can be exactly recovered from as few as $O(nr \log^2 n)$ randomly chosen elements, provided this random choice is made according to a specific biased distribution: the probability of any element being sampled should be proportional to the sum of the leverage scores of the corresponding row and column.
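To make the sampling distribution in that statement concrete, the snippet below computes row and column leverage scores from a rank-$r$ truncated SVD and forms entrywise probabilities proportional to their sum; the helper function is hypothetical and exists only for this illustration.

```python
# Leverage-score-proportional sampling probabilities for the entries of a matrix.
import numpy as np

def leverage_sampling_probs(M, r):
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    U_r, V_r = U[:, :r], Vt[:r, :].T
    row_lev = np.sum(U_r**2, axis=1)              # leverage score of row i: ||U_r[i, :]||^2
    col_lev = np.sum(V_r**2, axis=1)              # leverage score of column j: ||V_r[j, :]||^2
    P = row_lev[:, None] + col_lev[None, :]       # entry (i, j) proportional to their sum
    return P / P.sum()
```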

Matrix Completion with Deterministic Sampling: Theories and Methods

This paper proposes two conditions, the isomeric condition and relative well-conditionedness, that guarantee an arbitrary matrix to be recoverable from a sampling of its entries, and proves a collection of theorems for missing-data recovery as well as convex/nonconvex matrix completion.

Universal Matrix Completion

This work shows that if the set of sampled indices comes from the edges of a bipartite graph with a large spectral gap, then the nuclear norm minimization based method exactly recovers all low-rank matrices that satisfy certain incoherence properties.

A Singular Value Thresholding Algorithm for Matrix Completion

This paper develops a simple, easy-to-implement first-order algorithm that is extremely efficient at addressing problems in which the optimal solution has low rank, and provides a framework in which these algorithms can be understood in terms of well-known Lagrange multiplier algorithms.
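For reference, a compact numpy sketch of the singular value thresholding iteration is given below: soft-threshold the singular values of the current iterate, then take a step on the residual over the observed entries. The threshold, step size, and iteration count are illustrative choices, not the paper's recommended settings.

```python
# Singular value thresholding (SVT) sketch for matrix completion.
import numpy as np

def svt_complete(M_obs, mask, tau=5.0, step=1.2, iters=200):
    Y = np.zeros_like(M_obs, dtype=float)
    X = Y
    for _ in range(iters):
        U, s, Vt = np.linalg.svd(Y, full_matrices=False)
        X = U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt   # shrink the singular values by tau
        Y = Y + step * mask * (M_obs - X)                # correct the fit on observed entries
    return X
```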

Online Reweighted Least Squares Robust PCA

A novel online RPCA algorithm is developed that is based entirely on reweighted least squares recursions; it is appropriate for sequential data processing, fast, memory-optimal, and competitive with the state of the art in terms of estimation performance.
...