OptShrink: An Algorithm for Improved Low-Rank Signal Matrix Denoising by Optimal, Data-Driven Singular Value Shrinkage

@article{nadakuditi2013optshrink,
  title={OptShrink: An Algorithm for Improved Low-Rank Signal Matrix Denoising by Optimal, Data-Driven Singular Value Shrinkage},
  author={Raj Rao Nadakuditi},
  journal={IEEE Transactions on Information Theory},
  year={2013}
}
  • Published 25 June 2013
  • Computer Science
The truncated singular value decomposition of the measurement matrix is the optimal solution to the representation problem of how to best approximate a noisy measurement matrix using a low-rank matrix. Here, we consider the (unobservable) denoising problem of how to best approximate a low-rank signal matrix buried in noise by optimal (re)weighting of the singular vectors of the measurement matrix. We exploit recent results from random matrix theory to exactly characterize the large matrix limit… 
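The baseline the abstract refers to can be made concrete: the truncated SVD gives the best rank-r approximation of the measurement matrix in Frobenius norm (Eckart-Young). A minimal NumPy sketch of that baseline (the matrix sizes, noise level, and function name are illustrative, not taken from the paper):

```python
import numpy as np

def truncated_svd_denoise(Y, r):
    """Best rank-r approximation of Y in Frobenius norm (Eckart-Young)."""
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    return (U[:, :r] * s[:r]) @ Vt[:r, :]

# Illustrative demo: rank-1 signal buried in noise
rng = np.random.default_rng(0)
n = 200
u = rng.standard_normal(n)
v = rng.standard_normal(n)
S = np.outer(u, v) / np.sqrt(n)            # low-rank signal matrix
Y = S + 0.1 * rng.standard_normal((n, n))  # noisy measurement matrix
X = truncated_svd_denoise(Y, 1)
```

OptShrink's point is that, for the *denoising* problem, simply keeping the leading empirical singular values unchanged is suboptimal; the optimal weights are data-driven and strictly smaller.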


Matrix Denoising with Partial Noise Statistics: Optimal Singular Value Shrinkage of Spiked F-Matrices

This work studies the problem of estimating a large, low-rank matrix corrupted by additive noise of unknown covariance, and shows that, under mean squared error loss, a unique, asymptotically optimal shrinkage nonlinearity exists for the Whiten-Shrink-reColor denoising workflow.

Generalized SURE for optimal shrinkage of singular values in low-rank matrix denoising

This work derives generalized Stein's unbiased risk estimation formulas that hold for any spectral estimator that shrinks or thresholds the singular values of the data matrix, leading to new data-driven spectral estimators whose optimality is discussed using tools from random matrix theory and through numerical experiments.

A Fast Data Driven Shrinkage of Singular Values for Arbitrary Rank Signal Matrix Denoising

Recovering a low-rank signal matrix from its noisy observation, commonly known as matrix denoising, is a fundamental inverse problem in statistical signal processing.


Optimal Shrinkage of Singular Values

This work considers the recovery of low-rank matrices from noisy data by shrinkage of singular values by adopting an asymptotic framework, and provides a general method for evaluating optimal shrinkers numerically to arbitrary precision.
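In the square case with known noise level, the asymptotically optimal Frobenius-loss shrinker this work derives has a closed form: after rescaling the singular values into noise units, η(y) = √(y² − 4) for y > 2 and 0 otherwise. A hedged NumPy sketch of that rule (the function name and demo parameters are invented for illustration; the general rectangular shrinker differs):

```python
import numpy as np

def optimal_shrink_frobenius(Y, sigma):
    """Optimal Frobenius-loss singular value shrinkage for a square n x n
    matrix Y = S + noise with i.i.d. entries of known std sigma.
    Square-case (beta = 1) shrinker: eta(y) = sqrt(y^2 - 4) for y > 2."""
    n = Y.shape[0]
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    y = s / (np.sqrt(n) * sigma)                 # rescale to noise units
    eta = np.where(y > 2.0, np.sqrt(np.maximum(y * y - 4.0, 0.0)), 0.0)
    return (U * (np.sqrt(n) * sigma * eta)) @ Vt

# Illustrative demo: strong rank-1 signal plus noise of known level
rng = np.random.default_rng(1)
n = 200
S = np.outer(rng.standard_normal(n), rng.standard_normal(n)) / np.sqrt(n)
sigma = 0.1
Y = S + sigma * rng.standard_normal((n, n))
X = optimal_shrink_frobenius(Y, sigma)
```

Singular values inside the noise bulk (rescaled y ≤ 2) are set to zero, while those above the bulk edge are shrunk toward the underlying signal singular values rather than kept at their inflated empirical values.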

ASSVD: Adaptive Sparse Singular Value Decomposition for High Dimensional Matrices

An adaptive sparse singular value decomposition (ASSVD) algorithm is proposed to estimate the signal matrix when only one data matrix is observed under high-dimensional white noise, and it is proved that when the signal is strong, the estimator is consistent and outperforms many state-of-the-art algorithms.

Adaptive shrinkage of singular values

A generalized Stein unbiased risk estimation criterion is proposed that does not require knowledge of the variance of the noise and that is computationally fast and accurately estimates the rank of the signal when it is detectable.

On Low-Rank Hankel Matrix Denoising

Matrix estimation using shrinkage of singular values with applications to signal denoising

This thesis documents investigations into developing better spectral shrinkage functions for matrix estimation and evaluates the effectiveness of the proposed estimators in signal denoising applications within a collaborative filtering framework.

Adaptive Higher-order Spectral Estimators

New classes of estimators are developed that shrink or threshold the mode-specific singular values from the higher-order singular value decomposition, providing a way to estimate the multilinear rank of the underlying signal tensor.



Recovering Low-Rank and Sparse Components of Matrices from Incomplete and Noisy Observations

This paper studies the recovery task in the general setting in which only a fraction of the entries of the matrix are observed and the observations are corrupted by both impulsive and Gaussian noise, and shows that the resulting model falls within the scope of the classical augmented Lagrangian method.

Nuclear norm penalization and optimal rates for noisy low rank matrix completion

A new nuclear norm penalized estimator of $A_0$ is proposed, and a general sharp oracle inequality for this estimator is established for arbitrary values of $n, m_1, m_2$ under an isometry-in-expectation condition, yielding the best trace regression model approximating the data.

Matrix estimation by Universal Singular Value Thresholding

This paper introduces a simple estimation procedure, called Universal Singular Value Thresholding (USVT), that works for any matrix that has "a little bit of structure" and achieves the minimax error rate up to a constant factor.
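The USVT procedure is simple enough to sketch: discard every singular value below a universal threshold proportional to √n, then clip the entries back to the assumed bounded range. A rough NumPy rendition (the threshold constant, bounded-entry assumption, and demo data are illustrative, not taken verbatim from the paper):

```python
import numpy as np

def usvt(Y, eta=0.02):
    """Universal Singular Value Thresholding (sketch): keep only singular
    values above (2 + eta) * sqrt(n), assuming target entries lie in [-1, 1]."""
    n = max(Y.shape)
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    keep = s >= (2.0 + eta) * np.sqrt(n)            # universal threshold
    X = (U[:, keep] * s[keep]) @ Vt[keep, :]
    return np.clip(X, -1.0, 1.0)                    # project to entry bounds

# Illustrative demo: bounded rank-1 signal plus bounded noise
rng = np.random.default_rng(2)
n = 200
S = np.outer(rng.uniform(-1, 1, n), rng.uniform(-1, 1, n))  # entries in [-1, 1]
Y = S + rng.uniform(-0.5, 0.5, (n, n))
X = usvt(Y)
```

The appeal is that the same threshold works without knowing the rank or fine structure of the target, provided it has "a little bit of structure" in the paper's sense.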

Estimation of high-dimensional low-rank matrices

This work investigates penalized least squares estimators with a Schatten-$p$ quasi-norm penalty term and derives bounds for the $k$th entropy numbers of the quasi-convex Schatten class embeddings $S_p^M \to S_2^M$, $p < 1$, which are of independent interest.

A Singular Value Thresholding Algorithm for Matrix Completion

This paper develops a simple, first-order, easy-to-implement algorithm that is extremely efficient for problems in which the optimal solution has low rank, and provides a framework for understanding these algorithms in terms of well-known Lagrange multiplier algorithms.
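The iteration in question alternates soft-thresholding of singular values with a dual ascent step on the observed entries. A hedged NumPy sketch of that scheme (the parameter choices, iteration count, and rank-1 demo are illustrative, not prescriptions from the paper):

```python
import numpy as np

def svt_complete(M_obs, mask, tau, delta, iters=500):
    """Singular value thresholding for matrix completion (sketch).
    M_obs holds the observed entries (zeros elsewhere); mask marks
    the observed positions."""
    Y = np.zeros_like(M_obs)
    X = np.zeros_like(M_obs)
    for _ in range(iters):
        U, s, Vt = np.linalg.svd(Y, full_matrices=False)
        X = (U * np.maximum(s - tau, 0.0)) @ Vt   # soft-threshold singular values
        Y += delta * mask * (M_obs - X)           # dual ascent on observed entries
    return X

# Illustrative demo: complete a rank-1 matrix from half its entries
rng = np.random.default_rng(3)
n = 30
M = np.outer(rng.standard_normal(n), rng.standard_normal(n))  # rank-1 target
mask = rng.random((n, n)) < 0.5
M_obs = mask * M
X = svt_complete(M_obs, mask, tau=5 * n, delta=1.5)
```

Because the soft-threshold zeroes most singular values, each iterate stays low-rank, which is what makes the method cheap on large problems.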

Sparse and low-rank matrix decompositions

The uncertainty principle quantifies the notion that a matrix cannot be sparse while having diffuse row/column spaces, and it forms the basis for the decomposition method and its analysis.

Matrix Completion With Noise

This paper surveys the literature on matrix completion and presents new results showing that matrix completion is provably accurate even when the few observed entries are corrupted with a small amount of noise; in practice, nuclear-norm minimization accurately fills in the many missing entries of large low-rank matrices from just a few noisy samples.

The Power of Convex Relaxation: Near-Optimal Matrix Completion

This paper shows that, under certain incoherence assumptions on the singular vectors of the matrix, recovery is possible by solving a convenient convex program as soon as the number of entries is on the order of the information theoretic limit (up to logarithmic factors).