Geometric Inference for General High-Dimensional Linear Inverse Problems

@article{Cai2014GeometricIF,
  title={Geometric Inference for General High-Dimensional Linear Inverse Problems},
  author={T. Tony Cai and Tengyuan Liang and Alexander Rakhlin},
  journal={arXiv: Statistics Theory},
  year={2014}
}
This paper presents a unified geometric framework for the statistical analysis of a general ill-posed linear inverse model, which includes as special cases noisy compressed sensing, sign vector recovery, trace regression, orthogonal matrix estimation, and noisy matrix completion. We propose computationally feasible convex programs for statistical inference, including estimation, confidence intervals, and hypothesis testing. A theoretical framework is developed to characterize the local estimation…
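To make the setup concrete, here is a minimal sketch of the kind of convex program the abstract alludes to, specialized to the noisy compressed sensing case: minimize the structure-inducing norm (here the ℓ1 norm) subject to a bound on the correlation between the residual and the design, a Dantzig-selector-type program. This is an illustration, not necessarily the paper's exact formulation; the design, noise level, and tuning parameter lam are assumptions of the sketch.

# Illustrative sketch (not the paper's exact program): the noisy
# compressed sensing special case of the model y = X @ beta + noise,
# solved by minimizing the l1 norm under a residual-correlation constraint.
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
n, p, s = 100, 300, 5                     # samples, ambient dim, sparsity
X = rng.standard_normal((n, p)) / np.sqrt(n)
beta_true = np.zeros(p)
beta_true[:s] = 1.0
y = X @ beta_true + 0.05 * rng.standard_normal(n)

lam = 0.1                                 # hypothetical tuning parameter
beta = cp.Variable(p)
prob = cp.Problem(cp.Minimize(cp.norm(beta, 1)),
                  [cp.norm(X.T @ (y - X @ beta), "inf") <= lam])
prob.solve()
print("estimation error:", np.linalg.norm(beta.value - beta_true))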

Citations

A Unified Theory of Confidence Regions and Testing for High-Dimensional Estimating Equations
Proposes a new inferential framework for constructing confidence regions and testing hypotheses in statistical models specified by a system of high-dimensional estimating equations; the framework is likelihood-free and provides valid inference for a broad class of high-dimensional constrained estimating-equation problems not covered by existing methods.
Inference for Low-rank Tensors - No Need to Debias
In the Tucker low-rank tensor PCA and regression models, given any estimate achieving an attainable error rate, data-driven confidence regions for the singular subspaces of the parameter tensor are developed, based on the asymptotic distribution of an estimate updated by two iterations of alternating minimization.
Sharp Time–Data Tradeoffs for Linear Inverse Problems
Presents a unified convergence analysis of the gradient projection algorithm applied to such problems and demonstrates that a linear convergence rate is attainable even though the least-squares objective is not strongly convex in these settings.
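As a concrete instance of the gradient projection scheme described above, the following sketch runs projected gradient descent for a sparse signal with an ℓ1-ball constraint. The step size and the assumption that the true ℓ1 radius is known are illustrative simplifications, not choices from the paper.

# Sketch: projected gradient descent onto an l1 ball for a sparse
# linear inverse problem. Step size and known radius are illustrative.
import numpy as np

def project_l1_ball(v, radius):
    """Euclidean projection onto {x : ||x||_1 <= radius} (Duchi et al. 2008)."""
    a = np.abs(v)
    if a.sum() <= radius:
        return v
    u = np.sort(a)[::-1]
    css = np.cumsum(u)
    idx = np.arange(1, u.size + 1)
    rho = np.nonzero(u > (css - radius) / idx)[0][-1]
    theta = (css[rho] - radius) / (rho + 1)
    return np.sign(v) * np.maximum(a - theta, 0.0)

rng = np.random.default_rng(4)
n, p, s = 80, 200, 4
X = rng.standard_normal((n, p)) / np.sqrt(n)
beta_true = np.zeros(p)
beta_true[:s] = 1.0
y = X @ beta_true + 0.02 * rng.standard_normal(n)

radius = np.abs(beta_true).sum()          # assumed known, for illustration
step = 1.0 / np.linalg.norm(X, 2) ** 2    # 1/L, L the gradient Lipschitz constant
beta = np.zeros(p)
for _ in range(300):
    beta = project_l1_ball(beta - step * X.T @ (X @ beta - y), radius)
print("estimation error:", np.linalg.norm(beta - beta_true))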
Uncertainty quantification for nonconvex tensor completion: Confidence intervals, heteroscedasticity and optimality
The findings unveil the statistical optimality of nonconvex tensor completion: it attains unimprovable $\ell_{2}$ accuracy when estimating both the unknown tensor and the underlying tensor factors.
New Computational and Statistical Aspects of Regularized Regression with Application to Rare Feature Selection and Aggregation
Provides a unified computational framework for defining norms that promote structure, and develops associated optimization tools for such norms given only an orthogonal projection oracle onto the nonconvex set of desired models.
A lava attack on the recovery of sums of dense and sparse signals
Proposes a new penalization-based method, called lava, that is computationally efficient and strictly dominates both lasso and ridge estimation, and derives analytic expressions for the finite-sample risk function of the lava estimator in the Gaussian sequence model.
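A minimal sketch of a lava-type estimator as just described: the signal is split into a sparse part plus a dense part, with an ℓ1 penalty on the former and a ridge penalty on the latter. The penalty weights lam1, lam2 below are hypothetical, untuned values.

# Sketch of a lava-type estimator: beta = sparse + dense, with an l1
# penalty on the sparse part and a ridge penalty on the dense part.
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(1)
n, p = 120, 50
X = rng.standard_normal((n, p))
beta_true = 0.1 * rng.standard_normal(p)  # dense background signal
beta_true[:3] += 2.0                      # plus a few large coordinates
y = X @ beta_true + rng.standard_normal(n)

sparse = cp.Variable(p)
dense = cp.Variable(p)
lam1, lam2 = 0.5, 0.5                     # hypothetical tuning parameters
loss = cp.sum_squares(y - X @ (sparse + dense)) / (2 * n)
prob = cp.Problem(cp.Minimize(loss + lam1 * cp.norm(sparse, 1)
                              + lam2 * cp.sum_squares(dense)))
prob.solve()
beta_lava = sparse.value + dense.value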
Inference and uncertainty quantification for noisy matrix completion
A simple procedure is proposed to compensate for the bias of the widely used convex and nonconvex estimators, together with distributional characterizations of the resulting debiased estimators, which enable optimal construction of confidence intervals/regions for the missing entries and the low-rank factors.
Confidence Region of Singular Subspaces for Low-Rank Matrix Regression
Dong Xia · IEEE Transactions on Information Theory, 2019
Revisits the low-rank matrix regression model and introduces a two-step procedure to construct confidence regions of the singular subspaces, proving asymptotic normality of the joint projection distance with data-dependent centering and normalization.
On the robustness of minimum-norm interpolators
A quantitative bound for the prediction error is given, relating it to the Rademacher complexity of the covariates, the norm of the minimum-norm interpolator of the errors, and the shape of the subdifferential around the true parameter.

References

Showing 1–10 of 70 references.
Estimation in High Dimensions: A Geometric Perspective
This tutorial expounds a flexible geometric framework for high-dimensional estimation problems with constraints, grounds it in results from asymptotic convex geometry, and demonstrates connections between the geometric results and estimation problems.
Simple Bounds for Noisy Linear Inverse Problems with Exact Side Information
It is shown that, if precise information about the value $f(x_0)$ or the $\ell_2$-norm of the noise is available, one can do a particularly good job at estimation: the reconstruction error becomes proportional to the "sparsity" of the signal rather than to the ambient dimension of the noise vector.
Statistical Estimation and Optimal Recovery
The method of proof exposes a correspondence between minimax affine estimates in the statistical estimation problem and optimal algorithms in the theory of optimal recovery.
A new perspective on least squares under convex constraint
Presents three general results about estimating the mean of a Gaussian random vector: an exact computation of the main term in the estimation error, obtained by relating it to expected maxima of Gaussian processes; a theorem showing that the least squares estimator is always admissible up to a universal constant in any problem of this kind; and a counterexample showing that the least squares estimator may not always be minimax rate-optimal.
A unified framework for high-dimensional analysis of $M$-estimators with decomposable regularizers
Provides a unified framework for establishing consistency and convergence rates for regularized $M$-estimators under high-dimensional scaling; one main theorem is stated and shown to re-derive several existing results as well as to yield several new ones.
The Convex Geometry of Linear Inverse Problems
This paper provides a general framework to convert notions of simplicity into convex penalty functions, resulting in convex optimization solutions to linear, underdetermined inverse problems.
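One canonical instance of this simplicity-to-penalty recipe is low-rank structure, where the convex surrogate is the nuclear norm. Here is a minimal sketch for noisy matrix completion; the problem sizes, sampling rate, and noise budget eps are illustrative assumptions, not from the paper.

# Sketch: nuclear-norm minimization for noisy matrix completion, an
# instance of converting low-rank "simplicity" into a convex penalty.
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(2)
d, r = 20, 2
M_true = rng.standard_normal((d, r)) @ rng.standard_normal((r, d))
mask = (rng.random((d, d)) < 0.5).astype(float)   # observe ~half the entries
Y = mask * (M_true + 0.01 * rng.standard_normal((d, d)))

M = cp.Variable((d, d))
eps = 0.2                                          # hypothetical noise budget
prob = cp.Problem(cp.Minimize(cp.normNuc(M)),
                  [cp.norm(cp.multiply(mask, M) - Y, "fro") <= eps])
prob.solve()
print("relative error:",
      np.linalg.norm(M.value - M_true) / np.linalg.norm(M_true))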
How well can we estimate a sparse vector?
A Constrained ℓ1 Minimization Approach to Sparse Precision Matrix Estimation
A constrained $\ell_1$ minimization method for estimating a sparse inverse covariance matrix from a sample of $n$ i.i.d. $p$-variate random variables is proposed; applied to a breast cancer dataset, it performs favorably compared with existing methods.
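A minimal sketch of a constrained ℓ1 program of this type (a CLIME-style formulation): minimize the entrywise ℓ1 norm of the precision matrix subject to an entrywise bound on Sigma_hat @ Omega - I. The tuning parameter lam and the ground-truth model are illustrative assumptions.

# Sketch of a CLIME-style constrained l1 program for a sparse
# precision matrix. lam and the simulated model are illustrative.
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(3)
n, p = 200, 15
# Ground truth: a tridiagonal (hence sparse) precision matrix.
Omega_true = np.eye(p) + 0.4 * (np.eye(p, k=1) + np.eye(p, k=-1))
Sigma_true = np.linalg.inv(Omega_true)
Xs = rng.multivariate_normal(np.zeros(p), Sigma_true, size=n)
Sigma_hat = np.cov(Xs, rowvar=False)

lam = 0.2                                 # hypothetical tuning parameter
Omega = cp.Variable((p, p))
prob = cp.Problem(cp.Minimize(cp.sum(cp.abs(Omega))),
                  [cp.max(cp.abs(Sigma_hat @ Omega - np.eye(p))) <= lam])
prob.solve()
# Symmetrize by keeping the smaller-magnitude entry, as is common practice.
O = Omega.value
Omega_hat = np.where(np.abs(O) <= np.abs(O.T), O, O.T)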
Sparse PCA: Optimal rates and adaptive estimation
Under mild technical conditions, this paper establishes the optimal rates of convergence for estimating the principal subspace, sharp with respect to all the parameters, thus providing a complete characterization of the difficulty of the estimation problem in terms of the convergence rate.
Estimation of high-dimensional low-rank matrices
This work investigates penalized least squares estimators with a Schatten-$p$ quasi-norm penalty term and derives bounds for the $k$th entropy numbers of the quasi-convex Schatten class embeddings $S_p^M \hookrightarrow S_2^M$, $p < 1$, which are of independent interest.