Corpus ID: 219176560

Scale matrix estimation under data-based loss in high and low dimensions

@article{Haddouche2020ScaleME,
  title={Scale matrix estimation under data-based loss in high and low dimensions},
  author={Mohamed Anis Haddouche and Dominique Fourdrinier and Fatiha Mezoued},
  journal={arXiv: Statistics Theory},
  year={2020}
}
We consider the problem of estimating the scale matrix $\Sigma$ of the additive model $Y_{p\times n} = M + \mathcal{E}$ from a decision-theoretic point of view. Here, $p$ is the number of variables, $n$ is the number of observations, and $M$ is a matrix of unknown parameters with rank $q$ […] ($S$ non-invertible), we propose estimators of the form $\hat{\Sigma}_{a,G} = a\big(S + S\, S^{+}\, G(Z,S)\big)$, where $S^{+}$ is the Moore-Penrose inverse of $S$ (which coincides with $S^{-1}$ when $S$ is invertible) …
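As a rough numerical sketch of the estimator family above (not the authors' construction), the following Python snippet takes $S = YY^{\top}$, passes the data matrix itself as a stand-in for $Z$ (which is not defined in the visible part of the abstract), and uses placeholder choices of $a$ and $G$; it only illustrates that $a\big(S + S\,S^{+}G(Z,S)\big)$ is well defined whether $S$ is invertible ($p \le n$) or not ($p > n$).

import numpy as np

def scale_estimator(Y, a, G):
    """Form a * (S + S S^+ G(Z, S)) with S = Y Y^T and S^+ its Moore-Penrose inverse.

    Y : p x n data matrix (taken as centered for this sketch).
    a : positive scalar (placeholder choice below).
    G : callable returning a p x p correction matrix (placeholder choice below).
    """
    S = Y @ Y.T                      # p x p sample scale matrix
    S_pinv = np.linalg.pinv(S)       # Moore-Penrose inverse S^+, equals S^{-1} when S is invertible
    return a * (S + S @ S_pinv @ G(Y, S))

# Placeholder correction matrix (purely illustrative, not from the paper).
G_example = lambda Z, S: 0.1 * (np.trace(S) / S.shape[0]) * np.eye(S.shape[0])

rng = np.random.default_rng(0)
Y = rng.standard_normal((10, 5))     # p = 10 > n = 5, so S is singular
Sigma_hat = scale_estimator(Y, a=1.0 / 5, G=G_example)   # placeholder a = 1/n
print(Sigma_hat.shape)               # (10, 10)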

References

Robust minimax Stein estimation under invariant data-based loss for spherically and elliptically symmetric distributions
From an observable $(X,U)$ in $\mathbb{R}^p \times \mathbb{R}^k$, we consider estimation of an unknown location parameter $\theta \in \mathbb{R}^p$ under two distributional settings: …
A unified approach to estimating a normal mean matrix in high and low dimensions
This paper addresses the problem of estimating the normal mean matrix with an unknown covariance matrix by suggesting a unified form of the Efron-Morris type estimators based on the Moore-Penrose inverse, which can be defined for any dimension and any sample size.
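As a minimal sketch of the kind of estimator this reference discusses (assuming the classical Efron-Morris shrinkage form, with a placeholder constant rather than the paper's choice), writing the correction through the Moore-Penrose inverse keeps the expression defined for any dimension and any sample size:

import numpy as np

def efron_morris_type(Y, c):
    """Efron-Morris-type estimate of the mean matrix M in Y = M + noise.

    Using the Moore-Penrose inverse (Y^T Y)^+ keeps the formula defined
    for any p and n; each singular value d of Y is shrunk to d - c/d.
    The shrinkage constant c is a placeholder in this sketch.
    """
    return Y - c * Y @ np.linalg.pinv(Y.T @ Y)

rng = np.random.default_rng(1)
Y = rng.standard_normal((8, 20))               # p = 8 variables, n = 20 observations
M_hat = efron_morris_type(Y, c=20 - 8 - 1)     # placeholder shrinkage constant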
Estimation of the precision matrix of a singular Wishart distribution and its application in high-dimensional data
In this article, the Stein-Haff identity is established for a singular Wishart distribution with a positive definite mean matrix but with the dimension larger than the degrees of freedom. This …
Shrinkage estimators for large covariance matrices in multivariate real and complex normal distributions under an invariant quadratic loss
Shrinkage estimators, which are counterparts of the estimators due to Haff, are shown to improve upon the best scalar multiple of the empirical covariance matrix under invariant quadratic loss functions, for both real and complex multivariate normal distributions, in the situation where the dimension of the variables is larger than the number of samples.
Unbiased Risk Estimates for Singular Value Thresholding and Spectral Estimators
An unbiased risk estimate formula is given for singular value thresholding (SVT), a popular estimation strategy that applies a soft-thresholding rule to the singular values of the noisy observations, and its utility is demonstrated for SVT-based denoising of real clinical cardiac MRI series data.
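For concreteness, a minimal numpy sketch of the SVT rule described above; the threshold tau is a user-chosen value here, whereas the reference is about estimating the risk of this rule without bias, not about any particular threshold:

import numpy as np

def svt(Y, tau):
    """Singular value thresholding: soft-threshold the singular values of Y.

    Each singular value s is replaced by max(s - tau, 0), the
    soft-thresholding rule analysed in this reference.
    """
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

rng = np.random.default_rng(2)
low_rank = rng.standard_normal((50, 3)) @ rng.standard_normal((3, 40))
Y = low_rank + 0.5 * rng.standard_normal((50, 40))   # noisy observation
X_hat = svt(Y, tau=5.0)                              # denoised estimate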
Estimation with Quadratic Loss
It has long been customary to measure the adequacy of an estimator by the smallness of its mean squared error. The least squares estimators were studied by Gauss and by other authors later in the …
Exact matrix completion via convex optimization
It is demonstrated that, in very general settings, one can perfectly recover all of the missing entries from most sufficiently large subsets by solving a convex programming problem that finds the matrix with the minimum nuclear norm agreeing with the observed entries.
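A small-scale sketch of the convex program described above, assuming the cvxpy library is available; the variable names, sampling scheme, and problem size are illustrative and not taken from the reference:

import numpy as np
import cvxpy as cp   # assumption: cvxpy is installed

def complete_matrix(Y, observed):
    """Return the minimum-nuclear-norm matrix agreeing with Y on the observed entries.

    Y        : matrix whose values are trusted only at the observed positions.
    observed : list of (i, j) index pairs where Y is observed.
    """
    X = cp.Variable(Y.shape)
    constraints = [X[i, j] == Y[i, j] for (i, j) in observed]
    cp.Problem(cp.Minimize(cp.normNuc(X)), constraints).solve()
    return X.value

rng = np.random.default_rng(3)
M = rng.standard_normal((10, 2)) @ rng.standard_normal((2, 10))   # rank-2 ground truth
observed = [(i, j) for i in range(10) for j in range(10) if rng.random() < 0.6]
M_hat = complete_matrix(M, observed)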
Robust video denoising using low rank matrix completion
The robustness and effectiveness of the proposed denoising algorithm on removing mixed noise, e.g. heavy Gaussian noise mixed with impulsive noise, is validated in the experiments, and the proposed approach compares favorably against some existing video denoising algorithms.
Multivariate Empirical Bayes and Estimation of Covariance Matrices
Abstract: The problem of estimating a covariance matrix in the standard multivariate normal situation is considered. The loss function is one obtained naturally from the problem of estimating …
Unbiased risk estimates for matrix estimation in the elliptical case
This paper is concerned with additive models of the form Y = M + E, where Y is an observed n×m matrix and M is a matrix of unknown parameters to be estimated. …