Empirical Bayes matrix completion

@article{Matsuda2019EmpiricalBM,
  title={Empirical Bayes matrix completion},
  author={Takeru Matsuda and Fumiyasu Komaki},
  journal={Comput. Stat. Data Anal.},
  year={2019},
  volume={137},
  pages={195--210}
}


Estimation under matrix quadratic loss and matrix superharmonicity
We investigate estimation of a normal mean matrix under the matrix quadratic loss. Improved estimation under the matrix quadratic loss implies improved estimation of any linear combination of the …
New and Evolving Roles of Shrinkage in Large-Scale Prediction and Inference
The BIRS workshop “New and Evolving Roles of Shrinkage in Large-Scale Prediction and Inference” brought together thirty-six experts in statistical theory, methods, and related applied fields to assess the …

References

Showing 10 of 17 references
Empirical Bayes on vector observations: An extension of Stein's method
The statistician considers several independent normal linear models with identical structures and wishes to estimate the vector of unknown parameters in each of them. …
A Singular Value Thresholding Algorithm for Matrix Completion
This paper develops a simple, first-order, easy-to-implement algorithm that is extremely efficient on problems whose optimal solution has low rank, together with a framework for understanding such algorithms in terms of well-known Lagrange multiplier methods.
Spectral Regularization Algorithms for Learning Large Incomplete Matrices
Using the nuclear norm as a regularizer, the Soft-Impute algorithm iteratively replaces the missing elements with values obtained from a soft-thresholded SVD, yielding a sequence of regularized low-rank solutions for large-scale matrix completion problems.
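As a minimal sketch (in NumPy, not the authors' code), the soft-thresholded SVD step shared by SVT and Soft-Impute looks like this:

```python
import numpy as np

def svd_soft_threshold(X, lam):
    # Proximal operator of the nuclear norm: shrink every singular
    # value of X toward zero by lam, truncating at zero. This is the
    # core update inside both SVT and Soft-Impute.
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(np.maximum(s - lam, 0.0)) @ Vt

# Shrinking singular values can only lower the rank: start from a
# rank-2 matrix and soft-threshold at lam = 1.0.
rng = np.random.default_rng(0)
X = rng.standard_normal((6, 2)) @ rng.standard_normal((2, 5))
Y = svd_soft_threshold(X, lam=1.0)
```

In Soft-Impute this step is applied repeatedly: observed entries are kept and the missing ones are refilled from the soft-thresholded SVD until the iterates converge.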
Singular value shrinkage priors for Bayesian prediction
We develop singular value shrinkage priors for the mean matrix parameters in the matrix-variate normal model with known covariance matrices. Our priors are superharmonic and put more weight on …
Some matrix-variate distribution theory: Notational considerations and a Bayesian application
We introduce and justify a convenient notation for certain matrix-variate distributions which, by its emphasis on the important underlying parameters, and the theory on which it is based, …
A Simpler Approach to Matrix Completion (B. Recht, J. Mach. Learn. Res., 2011)
This paper provides the best bounds to date on the number of randomly sampled entries required to reconstruct an unknown low-rank matrix by minimizing the nuclear norm of the hidden matrix subject to agreement with the provided entries.
Eigentaste: A Constant Time Collaborative Filtering Algorithm
This work compares Eigentaste to alternative algorithms using data from Jester, an online joke recommendation system, and uses the Normalized Mean Absolute Error (NMAE) measure to compare the performance of the different algorithms.
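For illustration, here is the standard NMAE computation (a common definition, not code from the paper): the mean absolute error divided by the width of the rating scale, so that scores are comparable across systems with different rating ranges.

```python
import numpy as np

def nmae(actual, predicted, r_min, r_max):
    # Normalized Mean Absolute Error: mean absolute prediction error
    # divided by the width of the rating scale (r_max - r_min).
    actual = np.asarray(actual, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    return np.abs(actual - predicted).mean() / (r_max - r_min)

# Jester ratings lie in [-10, 10], so the normalizer is 20.
err = nmae([2.0, -4.0, 7.0], [1.0, -2.0, 7.0], r_min=-10, r_max=10)
# mean absolute error = (1 + 2 + 0) / 3 = 1.0, so NMAE = 1.0 / 20 = 0.05
```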
Low-Rank Matrix Completion by Riemannian Optimization
This work proposes a new matrix completion algorithm that minimizes the least-squares distance on the sampling set over the Riemannian manifold of fixed-rank matrices, and proves convergence of a regularized version of the algorithm under the assumption that the restricted isometry property holds for incoherent matrices throughout the iterations.
Matrix Completion from Noisy Entries
This work studies a low-complexity algorithm, introduced in [1] and called OptSpace here, based on a combination of spectral techniques and manifold optimization, and proves performance guarantees that are order-optimal in a number of circumstances.
Manopt, a matlab toolbox for optimization on manifolds
The Manopt toolbox, available at www.manopt.org, is a user-friendly, documented piece of software dedicated to simplifying experimentation with state-of-the-art Riemannian optimization algorithms, aiming in particular at lowering the entrance barrier.