Confidence intervals for low dimensional parameters in high dimensional linear models

@article{Zhang2011ConfidenceIF,
  title={Confidence intervals for low dimensional parameters in high dimensional linear models},
  author={Cun-Hui Zhang and Stephanie S. Zhang},
  journal={Journal of the Royal Statistical Society: Series B (Statistical Methodology)},
  year={2014},
  volume={76}
}
  • Published 12 October 2011
The purpose of this paper is to propose methodologies for statistical inference of low dimensional parameters with high dimensional data. We focus on constructing confidence intervals for individual coefficients and linear combinations of several of them in a linear regression model, although our ideas are applicable in a much broader context. The theoretical results that are presented provide sufficient conditions for the asymptotic normality of the proposed estimators along with a consistent… 
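The de-biasing recipe behind such intervals can be sketched numerically: fit a Lasso, build a score vector for the target coefficient by a relaxed projection of its column on the remaining columns, then apply a one-step correction and read off a normal confidence interval. The sketch below is a minimal illustration on simulated Gaussian data, not the authors' implementation; the coordinate-descent solver, the universal penalty level, and the crude noise estimate are all simplifying assumptions.

```python
import numpy as np

def lasso_cd(X, y, lam, n_sweeps=200):
    """Coordinate-descent Lasso for (1/2n)||y - Xb||^2 + lam * ||b||_1."""
    n, p = X.shape
    b = np.zeros(p)
    col_ms = (X ** 2).sum(axis=0) / n        # mean squared column norms
    r = y.copy()                             # running residual y - X b
    for _ in range(n_sweeps):
        for j in range(p):
            rho = X[:, j] @ r / n + col_ms[j] * b[j]
            b_new = np.sign(rho) * max(abs(rho) - lam, 0.0) / col_ms[j]
            r += X[:, j] * (b[j] - b_new)
            b[j] = b_new
    return b

rng = np.random.default_rng(0)
n, p = 200, 50
X = rng.standard_normal((n, p))
beta = np.zeros(p)
beta[0], beta[1] = 2.0, 1.0                  # sparse truth
y = X @ beta + rng.standard_normal(n)        # noise level 1.0

lam = np.sqrt(2 * np.log(p) / n)             # illustrative penalty level
b_lasso = lasso_cd(X, y, lam)

# Score vector for coefficient j: residual of a Lasso regression of x_j
# on the remaining columns (the relaxed projection).
j = 0
others = np.delete(np.arange(p), j)
gamma = lasso_cd(X[:, others], X[:, j], lam)
z = X[:, j] - X[:, others] @ gamma

# One-step bias correction and a 95% normal confidence interval.
resid = y - X @ b_lasso
b_debiased = b_lasso[j] + z @ resid / (z @ X[:, j])
sigma_hat = np.sqrt(resid @ resid / n)       # crude noise-level estimate
se = sigma_hat * np.sqrt(z @ z) / abs(z @ X[:, j])
ci = (b_debiased - 1.96 * se, b_debiased + 1.96 * se)
```

For this simulated design the interval is centered near the true value β₁ = 2: the Lasso's KKT conditions make the correction term offset the ℓ1 shrinkage on the target coefficient.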

Statistical Inference for High-Dimensional Linear Models

TLDR
A novel selection procedure, Two-Stage Hard Thresholding (TSHT), is developed to select valid instrumental variables and construct honest confidence intervals for the treatment effect using the selected instrumental variables.

Confidence intervals and hypothesis testing for high-dimensional regression

TLDR
This work considers the high-dimensional linear regression problem and proposes an efficient algorithm for constructing confidence intervals and p-values, based on a 'de-biased' version of regularized M-estimators.

Confidence bands for coefficients in high dimensional linear models with error-in-variables

We study high-dimensional linear models with error-in-variables. Such models are motivated by various applications in econometrics, finance and genetics. These models are challenging because of the

Estimation, Confidence Intervals, and Large-Scale Hypotheses Testing for High-Dimensional Mixed Linear Regression

This paper studies the high-dimensional mixed linear regression (MLR) where the output variable comes from one of the two linear regression models with an unknown mixing proportion and an unknown

Confidence Intervals and Hypothesis Testing for High-Dimensional Statistical Models

TLDR
This work considers a broad class of regression problems and proposes an efficient algorithm for constructing confidence intervals and p-values, based on a 'de-biased' version of regularized M-estimators.

A Comparison of Inference Methods in High-Dimensional Linear Regression

TLDR
This paper examines the Bayesian paradigm for the Lasso model, uses its asymptotic normality to construct confidence intervals for the model coefficients, and incorporates an adaptive Lasso model.

Robust Methods for High-Dimensional Regression and Covariance Matrix Estimation

TLDR
The theory of M-estimators is built on and adapted to handle high-dimensional regression and covariance matrix estimation via regularization. It is shown that penalized M-estimators for high-dimensional generalized linear models can be consistent when the data are well behaved and contain no contaminated observations, while remaining stable in the presence of a small fraction of outliers.

A Unified Theory of Confidence Regions and Testing for High-Dimensional Estimating Equations

TLDR
A new inferential framework is proposed for constructing confidence regions and testing hypotheses in statistical models specified by a system of high-dimensional estimating equations. The framework is likelihood-free and provides valid inference for a broad class of high-dimensional constrained estimating-equation problems that are not covered by existing methods.

High-dimensional econometrics and regularized GMM

TLDR
This chapter presents key concepts and theoretical results for analyzing estimation and inference in high-dimensional models, within a framework where estimators of the parameters of interest may be represented directly as approximate means.
...

References

SHOWING 1-10 OF 67 REFERENCES

Adaptive Lasso for sparse high-dimensional regression models

TLDR
The adaptive Lasso has the oracle property even when the number of covariates is much larger than the sample size; under a partial orthogonality condition, in which the covariates with zero coefficients are weakly correlated with the covariates with nonzero coefficients, marginal regression can be used to obtain the initial estimator.
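The two-stage reweighting idea — penalize each coefficient in inverse proportion to an initial Lasso estimate, so large coefficients are shrunk less and small ones are driven to zero — can be sketched as follows. The simulated data, penalty level, and weight floor are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def lasso_cd(X, y, lam, n_sweeps=200):
    """Coordinate-descent Lasso for (1/2n)||y - Xb||^2 + lam * ||b||_1."""
    n, p = X.shape
    b = np.zeros(p)
    col_ms = (X ** 2).sum(axis=0) / n
    r = y.copy()                                 # residual y - X b
    for _ in range(n_sweeps):
        for j in range(p):
            rho = X[:, j] @ r / n + col_ms[j] * b[j]
            b_new = np.sign(rho) * max(abs(rho) - lam, 0.0) / col_ms[j]
            r += X[:, j] * (b[j] - b_new)
            b[j] = b_new
    return b

rng = np.random.default_rng(1)
n, p = 200, 50
X = rng.standard_normal((n, p))
beta = np.zeros(p)
beta[0], beta[1] = 2.0, 1.0
y = X @ beta + rng.standard_normal(n)

lam = np.sqrt(2 * np.log(p) / n)                 # illustrative penalty level
b_init = lasso_cd(X, y, lam)                     # stage 1: plain Lasso

# Stage 2: weighted Lasso. Rescaling column j by 1/w_j turns the plain
# solver into one with penalty lam * sum_j w_j |b_j|.
w = 1.0 / np.maximum(np.abs(b_init), 1e-3)       # floored adaptive weights
b_adapt = lasso_cd(X / w, y, lam) / w
```

Coefficients zeroed in stage 1 receive weight 1000 and stay at zero, while the large coefficients are re-fit with a much lighter penalty — the mechanism behind the oracle property described above.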

ℓ1-penalization for mixture regression models

We consider a finite mixture of regressions (FMR) model for high-dimensional inhomogeneous data where the number of covariates may be much larger than sample size. We propose an ℓ1-penalized maximum

Scaled sparse linear regression

Scaled sparse linear regression jointly estimates the regression coefficients and noise level in a linear model. It chooses an equilibrium with a sparse regression method by iteratively estimating
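The alternating scheme can be sketched as follows: fit a Lasso whose penalty is proportional to the current noise-level estimate, update the noise level from the residuals, and repeat until an equilibrium is reached. This is a hedged toy version; the solver, penalty constant, and stopping rule are illustrative, not the paper's exact algorithm.

```python
import numpy as np

def lasso_cd(X, y, lam, n_sweeps=200):
    """Coordinate-descent Lasso for (1/2n)||y - Xb||^2 + lam * ||b||_1."""
    n, p = X.shape
    b = np.zeros(p)
    col_ms = (X ** 2).sum(axis=0) / n
    r = y.copy()                                 # residual y - X b
    for _ in range(n_sweeps):
        for j in range(p):
            rho = X[:, j] @ r / n + col_ms[j] * b[j]
            b_new = np.sign(rho) * max(abs(rho) - lam, 0.0) / col_ms[j]
            r += X[:, j] * (b[j] - b_new)
            b[j] = b_new
    return b

def scaled_lasso(X, y, lam0, max_outer=30, tol=1e-6):
    """Jointly estimate coefficients and noise level by alternation."""
    n, _ = X.shape
    sigma = np.sqrt(y @ y / n)                   # initial noise guess
    for _ in range(max_outer):
        b = lasso_cd(X, y, lam0 * sigma)         # penalty scales with sigma
        sigma_new = np.linalg.norm(y - X @ b) / np.sqrt(n)
        if abs(sigma_new - sigma) < tol:
            break
        sigma = sigma_new
    return b, sigma_new

rng = np.random.default_rng(2)
n, p = 200, 50
X = rng.standard_normal((n, p))
beta = np.zeros(p)
beta[0], beta[1] = 2.0, 1.0
y = X @ beta + rng.standard_normal(n)            # true noise level 1.0

b_hat, sigma_hat = scaled_lasso(X, y, lam0=np.sqrt(2 * np.log(p) / n))
```

Because the penalty shrinks as the noise estimate shrinks, the iteration settles at a penalty level matched to the data's actual noise, without requiring the noise level to be known in advance.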

Variable Selection via Nonconcave Penalized Likelihood and its Oracle Properties

TLDR
In this article, penalized likelihood approaches are proposed to handle variable selection problems, and it is shown that the newly proposed estimators perform as well as the oracle procedure in variable selection; namely, they work as well as if the correct submodel were known.

The sparsity and bias of the Lasso selection in high-dimensional linear regression

Meinshausen and Buhlmann [Ann. Statist. 34 (2006) 1436-1462] showed that, for neighborhood selection in Gaussian graphical models, under a neighborhood stability condition, the LASSO is consistent,

Nonconcave penalized likelihood with a diverging number of parameters

A class of variable selection procedures for parametric models via nonconcave penalized likelihood was proposed by Fan and Li to simultaneously estimate parameters and select important variables.

Regularized estimation of large covariance matrices

TLDR
If the population covariance matrix is embeddable in the banded model and well-conditioned, then the banded approximations produce consistent estimates of the eigenvalues and associated eigenvectors of the covariance matrix.

Estimation and Selection via Absolute Penalized Convex Minimization And Its Multistage Adaptive Applications

TLDR
This article considers a class of weighted ℓ1-penalized estimators for convex loss functions of a general form, including the generalized linear models, and provides prediction and estimation oracle inequalities for single- and multi-stage estimators, a general selection consistency theorem, and an upper bound for the dimension of the Lasso estimator.

High-dimensional graphs and variable selection with the Lasso

TLDR
It is shown that neighborhood selection with the Lasso is a computationally attractive alternative to standard covariance selection for sparse high-dimensional graphs: the conditional independence structure is estimated separately for each node, which is equivalent to variable selection in Gaussian linear models.
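Neighborhood selection can be sketched directly: run a Lasso regression of each node's variable on all the others and draw an edge where a coefficient survives, with an "AND" rule to symmetrize the estimated graph. The chain-graph simulation below is an illustrative assumption, not Meinshausen and Bühlmann's setup.

```python
import numpy as np

def lasso_cd(X, y, lam, n_sweeps=200):
    """Coordinate-descent Lasso for (1/2n)||y - Xb||^2 + lam * ||b||_1."""
    n, p = X.shape
    b = np.zeros(p)
    col_ms = (X ** 2).sum(axis=0) / n
    r = y.copy()                                 # residual y - X b
    for _ in range(n_sweeps):
        for j in range(p):
            rho = X[:, j] @ r / n + col_ms[j] * b[j]
            b_new = np.sign(rho) * max(abs(rho) - lam, 0.0) / col_ms[j]
            r += X[:, j] * (b[j] - b_new)
            b[j] = b_new
    return b

rng = np.random.default_rng(3)
n, p = 300, 10
# Gaussian chain graph: each variable loads only on its predecessor,
# so the true conditional-independence graph is the path 0-1-...-9.
X = np.zeros((n, p))
X[:, 0] = rng.standard_normal(n)
for j in range(1, p):
    X[:, j] = 0.5 * X[:, j - 1] + rng.standard_normal(n)

lam = np.sqrt(2 * np.log(p) / n)
adj = np.zeros((p, p), dtype=bool)
for j in range(p):
    others = np.delete(np.arange(p), j)
    g = lasso_cd(X[:, others], X[:, j], lam)     # neighborhood of node j
    adj[j, others] = np.abs(g) > 1e-8
adj = adj & adj.T                                # "AND" symmetrization rule
```

Each node's neighborhood is recovered by an ordinary Lasso fit, so graph estimation reduces to p parallel sparse regressions rather than one global covariance-selection problem.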

Can One Estimate the Conditional Distribution of Post-Model-Selection Estimators?

We consider the problem of estimating the conditional distribution of a post-model-selection estimator where the conditioning is on the selected model. The notion of a post-model-selection estimator
...