Corpus ID: 239998577

Nonparametric Matrix Estimation with One-Sided Covariates

@article{Yu2021NonparametricME,
  title={Nonparametric Matrix Estimation with One-Sided Covariates},
  author={Christina Lee Yu},
  journal={ArXiv},
  year={2021},
  volume={abs/2110.13969}
}
Consider the task of matrix estimation in which a dataset X ∈ R^{n×m} is observed with sparsity p, and we would like to estimate E[X], where E[X_{ui}] = f(α_u, β_i) for some Hölder smooth function f. We consider the setting where the row covariates α are unobserved yet the column covariates β are observed. We provide an algorithm and accompanying analysis showing that our algorithm improves upon naively estimating each row separately when the number of rows is not too small. Furthermore, when the…
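The setting in the abstract can be illustrated with a small simulation. The sketch below is hypothetical and is not the paper's algorithm: it generates data with E[X_{ui}] = f(α_u, β_i) under sparsity p, then runs the naive per-row baseline the abstract mentions — Nadaraya–Watson kernel smoothing of each row over the observed column covariates β. The function f, the bandwidth h, and all dimensions are illustrative choices.

```python
import numpy as np

# Hypothetical instance of the setting: E[X_ui] = f(alpha_u, beta_i),
# row covariates alpha unobserved, column covariates beta observed,
# each entry revealed independently with probability p.
rng = np.random.default_rng(0)
n, m, p = 200, 300, 0.5

alpha = rng.uniform(size=n)          # latent row covariates (hidden from the estimator)
beta = rng.uniform(size=m)           # observed column covariates
f = lambda a, b: np.sin(2 * a + b)   # a smooth (hence Hölder) signal function

signal = f(alpha[:, None], beta[None, :])
mask = rng.random((n, m)) < p        # Bernoulli(p) observation pattern
X = np.where(mask, signal + 0.1 * rng.standard_normal((n, m)), np.nan)

def estimate_row(x_row, beta, h=0.1):
    """Naive per-row baseline: Nadaraya-Watson smoothing over observed beta."""
    obs = ~np.isnan(x_row)
    w = np.exp(-((beta[:, None] - beta[None, obs]) ** 2) / (2 * h ** 2))
    return w @ x_row[obs] / w.sum(axis=1)

est = np.vstack([estimate_row(X[u], beta) for u in range(n)])
mse = np.mean((est - signal) ** 2)
```

This baseline ignores the shared structure across rows; the paper's contribution is an algorithm that pools information across rows to beat this per-row error when n is not too small.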


References

Showing 1–10 of 37 references
Matrix estimation by Universal Singular Value Thresholding
Consider the problem of estimating the entries of a large matrix, when the observed entries are noisy versions of a small random fraction of the original entries. This problem has received widespread
A Sparse Interactive Model for Matrix Completion with Side Information
A novel sparse formulation is proposed that explicitly models the interaction between the row and column side features to approximate the matrix entries and outperforms three state-of-the-art methods both in simulations and on real world datasets.
Inductive Matrix Completion with Feature Selection
We consider the problem of inductive matrix completion, i.e., the reconstruction of a matrix using side features of its rows and columns. In numerous applications, however, side information of this
Using Side Information to Reliably Learn Low-Rank Matrices from Missing and Corrupted Observations
A general model that exploits side information to better learn low-rank matrices from missing and corrupted observations is proposed, and it is shown that the proposed model can be further applied to several popular scenarios such as matrix completion and robust PCA.
Optimal Estimation and Completion of Matrices with Biclustering Structures
This paper develops a unified theory for the estimation and completion of matrices with biclustering structures, where the data is a partially observed and noise-contaminated matrix with a certain biclustering structure, and shows that a constrained least squares estimator achieves minimax rate-optimal performance in several of the most important scenarios.
Provable Inductive Matrix Completion
This paper studies the problem of inductive matrix completion in the exact recovery setting, and shows that two other low-rank estimation problems can be studied in this framework: a) general low-rank matrix sensing using rank-1 measurements, and b) multi-label regression with missing labels.
Collaborative Filtering with Graph Information: Consistency and Scalable Methods
This work formulates and derives a highly efficient, conjugate-gradient-based alternating minimization scheme that solves optimizations with over 55 million observations up to 2 orders of magnitude faster than state-of-the-art (stochastic) gradient-descent-based methods.
On estimation of the Lr norm of a regression function
Let a function f be observed with noise. In the present paper we study the problem of nonparametric estimation of certain nonsmooth functionals of f, specifically, Lr norms ||f||r of f. Known from the literature
Thy Friend is My Friend: Iterative Collaborative Filtering for Sparse Matrix Estimation
This work proposes a novel iterative, collaborative filtering-style algorithm for matrix estimation in this generic setting and shows that the mean squared error of the estimator converges to $0$ at the rate of $O(d^2 (pn)^{-2/5})$ as long as the entries of $Y$ are observed.
Blind Regression: Nonparametric Regression for Latent Variable Models via Collaborative Filtering
Inspired by the classical Taylor's expansion for differentiable functions, a prediction algorithm that is consistent for all Lipschitz functions is provided, and it is proved that the expected fraction of estimates with error greater than $\epsilon$ is less than the variance of the additive entry-wise noise term.