Corpus ID: 233481565

Directional FDR Control for Sub-Gaussian Sparse GLMs

@inproceedings{Cui2021DirectionalFC,
  title={Directional FDR Control for Sub-Gaussian Sparse GLMs},
  author={Chang Cui and Jinzhu Jia and Yijun Xiao and Huiming Zhang},
  year={2021}
}
High-dimensional sparse generalized linear models (GLMs) have emerged in settings where both the sample size and the number of variables are large, and the dimension may even grow faster than the sample size. False discovery rate (FDR) control aims to identify a small set of statistically significant nonzero coefficients after sparse penalized estimation of the GLM. Using the CLIME method for precision matrix estimation, we construct the debiased-Lasso…
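As a rough sketch (not the authors' code) of the pipeline the abstract outlines, the Python below fits an ℓ1-penalized logistic GLM, debiases the estimate, and applies Benjamini-Hochberg selection to the resulting z-scores. The ridge-regularized inverse Theta_hat is only a stand-in for the paper's CLIME precision-matrix estimate, and all tuning values are illustrative assumptions.

```python
import numpy as np
from scipy.stats import norm
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n, p, s = 400, 100, 5
X = rng.standard_normal((n, p))
beta = np.zeros(p)
beta[:s] = 1.0                                     # s true signals
y = rng.binomial(1, 1 / (1 + np.exp(-X @ beta)))

# Step 1: l1-penalized logistic regression (Lasso-type GLM fit).
b_hat = LogisticRegression(penalty="l1", solver="liblinear", C=0.1,
                           fit_intercept=False).fit(X, y).coef_.ravel()

# Step 2: debias. Theta_hat is a ridge-regularized inverse of the weighted
# Gram matrix, standing in for the paper's CLIME precision-matrix estimate.
mu = 1 / (1 + np.exp(-X @ b_hat))
W = mu * (1 - mu)                                  # logistic GLM weights
Sigma_hat = (X * W[:, None]).T @ X / n
Theta_hat = np.linalg.inv(Sigma_hat + 0.05 * np.eye(p))
b_deb = b_hat + Theta_hat @ X.T @ (y - mu) / n

# Step 3: z-scores and Benjamini-Hochberg selection at level q.
se = np.sqrt(np.diag(Theta_hat @ Sigma_hat @ Theta_hat.T) / n)
z = b_deb / se
pvals = 2 * norm.sf(np.abs(z))
q = 0.1
order = np.argsort(pvals)
passed = pvals[order] <= q * np.arange(1, p + 1) / p
k = passed.nonzero()[0].max() + 1 if passed.any() else 0
selected = np.sort(order[:k])
print("selected:", selected, "directions:", np.sign(z[selected]))
```

The signs of the selected z-scores are the direction calls that directional FDR control evaluates.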

Inference and FDR Control for Simulated Ising Models in High-dimension

TLDR
Under mild conditions that ensure a specific convergence rate of the MCMC method, the ℓ1 consistency of the elastic-net-penalized MCMC-MLE is proved, and a decorrelated score test based on the decorrelated score function is proposed, establishing the asymptotic normality of the score function free of the influence of the many nuisance parameters.
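The construction behind such a test can be sketched as follows, with plain logistic regression standing in for the paper's MCMC-MLE Ising setting; the weighted Lasso projection and all tuning values are illustrative assumptions, not the paper's choices.

```python
import numpy as np
from scipy.stats import norm
from sklearn.linear_model import Lasso, LogisticRegression

rng = np.random.default_rng(4)
n, p, j = 400, 30, 0                     # test H0: beta_j = 0
X = rng.standard_normal((n, p))
beta = np.zeros(p)
beta[1:4] = 1.0                          # H0 holds for coordinate j here
y = rng.binomial(1, 1 / (1 + np.exp(-X @ beta)))

# Sparse nuisance fit with the null imposed on coordinate j.
b = LogisticRegression(penalty="l1", solver="liblinear", C=0.5,
                       fit_intercept=False).fit(X, y).coef_.ravel()
b[j] = 0.0
mu = 1 / (1 + np.exp(-X @ b))
W = mu * (1 - mu)

# Decorrelate x_j against the nuisance covariates via a weighted Lasso
# projection (weights applied by sqrt-rescaling the regression).
X_rest = np.delete(X, j, axis=1)
sw = np.sqrt(W)
w = Lasso(alpha=0.05, fit_intercept=False).fit(sw[:, None] * X_rest,
                                               sw * X[:, j]).coef_
resid = X[:, j] - X_rest @ w

# Decorrelated score statistic: asymptotically N(0, 1) under H0.
S = np.mean((y - mu) * resid)
sigma2 = np.mean(W * resid ** 2)
T = np.sqrt(n) * S / np.sqrt(sigma2)
print("z =", T, "p =", 2 * norm.sf(abs(T)))
```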

Heterogeneous Overdispersed Count Data Regressions via Double-Penalized Estimations

Recently, high-dimensional negative binomial regression (NBR) for count data has been widely used in many scientific fields. However, most studies assume the dispersion parameter to be a constant…

References

Showing 1-10 of 41 references

Honest variable selection in linear and logistic regression models via ℓ1 and ℓ1 + ℓ2 penalization

  • F. Bunea
  • Mathematics, Computer Science
  • 2008
TLDR
It is shown that in identifiable models, both methods can recover coefficients of size 1/√n, up to small multiplicative constants and logarithmic factors in M and 1/δ, and that the advantage of the ℓ1 + ℓ2 penalization over ℓ1 alone is minor for the variable selection problem.

Concentration Inequalities for Statistical Inference

This paper gives a review of concentration inequalities that are widely employed in the analyses of mathematical statistics, covering a wide range of settings from distribution-free to distribution-dependent…
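For a flavor of the bounds such a review covers, the snippet below checks the classical Hoeffding inequality, P(|X̄n − μ| ≥ t) ≤ 2exp(−2nt²) for i.i.d. samples in [0, 1], against simulation; the distribution and constants are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(6)
n, t, reps = 200, 0.1, 20000
# Uniform[0, 1] samples: mean 0.5, support of width 1.
means = rng.uniform(0, 1, size=(reps, n)).mean(axis=1)
empirical = np.mean(np.abs(means - 0.5) >= t)
bound = 2 * np.exp(-2 * n * t ** 2)
print(f"empirical {empirical:.4f} <= Hoeffding bound {bound:.4f}")
```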

Global and Simultaneous Hypothesis Testing for High-Dimensional Logistic Regression Models

TLDR
Global testing and large-scale multiple testing for the regression coefficients are considered in both single- and two-regression settings, and a lower bound for global testing is established, showing that the proposed test is asymptotically minimax optimal over certain sparsity ranges.

False Discovery Rate Control via Debiased Lasso

TLDR
This framework achieves exact directional FDR control without any assumption on the amplitude of unknown regression parameters, and does not require any knowledge of the distribution of covariates or the noise level.

Confidence intervals for high-dimensional Cox models

The purpose of this paper is to construct confidence intervals for the regression coefficients in high-dimensional Cox proportional hazards regression models, where the number of covariates may be much larger than the sample size…

Variance of the number of false discoveries

Summary: In high-throughput genomic work, a very large number d of hypotheses are tested based on n ≪ d data samples. The large number of tests necessitates an adjustment for false discoveries…

Gaussian graphical model estimation with false discovery rate control

TLDR
This paper proposes a simultaneous multiple testing procedure for conditional dependence in Gaussian graphical models (GGMs) that controls the false discovery rate (FDR) asymptotically; numerical results show that the method performs well.
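A low-dimensional sketch of this kind of procedure: test each partial correlation with a Fisher z-statistic and apply Benjamini-Hochberg to the edge p-values. The paper's high-dimensional method is more refined; this is illustration only.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(5)
n, p = 500, 8
Omega = np.eye(p)
Omega[0, 1] = Omega[1, 0] = 0.4                      # one true edge
X = rng.multivariate_normal(np.zeros(p), np.linalg.inv(Omega), size=n)

P = np.linalg.inv(np.cov(X, rowvar=False))            # sample precision matrix
d = np.sqrt(np.diag(P))
partial = -P / np.outer(d, d)                         # partial correlations

iu = np.triu_indices(p, k=1)
z = np.sqrt(n - p - 1) * np.arctanh(partial[iu])      # Fisher transform
pvals = 2 * norm.sf(np.abs(z))

q, m = 0.1, len(pvals)                                # BH at FDR level q
order = np.argsort(pvals)
passed = pvals[order] <= q * np.arange(1, m + 1) / m
k = passed.nonzero()[0].max() + 1 if passed.any() else 0
edges = [(iu[0][i], iu[1][i]) for i in order[:k]]
print("selected edges:", edges)
```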

Estimating Sparse Precision Matrix: Optimal Rates of Convergence and Adaptive Estimation

TLDR
The upper and lower bounds together yield the optimal rates of convergence for sparse precision matrix estimation and show that the ACLIME estimator is adaptively minimax rate optimal for a collection of parameter spaces and a range of matrix norm losses simultaneously.
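The basic CLIME program underlying this work solves, for each column, min ‖ω‖₁ subject to ‖Σ̂ω − e_j‖∞ ≤ λ, which is a linear program. Below is a sketch of that column-wise LP (without the adaptive first stage that distinguishes ACLIME); λ is an illustrative value.

```python
import numpy as np
from scipy.optimize import linprog

def clime(Sigma_hat, lam):
    """Column-wise CLIME: min ||omega||_1 s.t. ||Sigma_hat @ omega - e_j||_inf <= lam."""
    p = Sigma_hat.shape[0]
    Omega = np.zeros((p, p))
    # Split omega = u - v with u, v >= 0 so the l1 objective becomes linear.
    c = np.ones(2 * p)
    A_block = np.hstack([Sigma_hat, -Sigma_hat])
    A_ub = np.vstack([A_block, -A_block])
    for j in range(p):
        e_j = np.zeros(p)
        e_j[j] = 1.0
        b_ub = np.concatenate([e_j + lam, -e_j + lam])
        res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=(0, None), method="highs")
        Omega[:, j] = res.x[:p] - res.x[p:]
    # Symmetrize by keeping the entry of smaller magnitude, as in CLIME.
    keep = np.abs(Omega) <= np.abs(Omega.T)
    return np.where(keep, Omega, Omega.T)

rng = np.random.default_rng(1)
X = rng.standard_normal((200, 10))
Omega_hat = clime(np.cov(X, rowvar=False), lam=0.2)
```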

Simultaneous Analysis of Lasso and Dantzig Selector

We show that, under a sparsity scenario, the Lasso estimator and the Dantzig selector exhibit similar behavior. For both methods, we derive, in parallel, oracle inequalities for the prediction risk
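Both estimators compared in the paper can be written down in a few lines: the Lasso as a penalized least-squares fit and the Dantzig selector as a linear program. In the sketch below, lam is a hand-picked illustration value, not a tuned choice.

```python
import numpy as np
from scipy.optimize import linprog
from sklearn.linear_model import Lasso

rng = np.random.default_rng(2)
n, p = 100, 40
X = rng.standard_normal((n, p))
beta = np.zeros(p)
beta[:3] = 2.0
y = X @ beta + rng.standard_normal(n)
lam = 0.2

# Lasso: min (1/2n)||y - Xb||_2^2 + lam * ||b||_1 (sklearn's parametrization).
b_lasso = Lasso(alpha=lam, fit_intercept=False).fit(X, y).coef_

# Dantzig selector: min ||b||_1 s.t. ||X^T (y - X b) / n||_inf <= lam,
# written as an LP in b = u - v with u, v >= 0.
G = X.T @ X / n
r = X.T @ y / n
A_block = np.hstack([G, -G])
A_ub = np.vstack([A_block, -A_block])
b_ub = np.concatenate([r + lam, -r + lam])
res = linprog(np.ones(2 * p), A_ub=A_ub, b_ub=b_ub,
              bounds=(0, None), method="highs")
b_dantzig = res.x[:p] - res.x[p:]
print(np.round(b_lasso[:5], 2), np.round(b_dantzig[:5], 2))
```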

An error bound for Lasso and Group Lasso in high dimensions

TLDR
It is shown that when the signal is strongly group-sparse, the Group Lasso is superior to the Lasso, and ℓ2 estimation upper bounds are derived that improve over existing results.
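A minimal proximal-gradient (ISTA) sketch of the Group Lasso discussed above; the group structure, step size, and penalty level are illustrative choices, not from the paper.

```python
import numpy as np

def group_lasso(X, y, groups, lam, n_iter=500):
    """Minimize (1/2n)||y - Xb||_2^2 + lam * sum_g ||b_g||_2 by ISTA."""
    n, p = X.shape
    b = np.zeros(p)
    step = n / np.linalg.norm(X, 2) ** 2          # 1/L for the smooth part
    for _ in range(n_iter):
        grad = X.T @ (X @ b - y) / n
        z = b - step * grad
        for g in groups:                          # block soft-thresholding
            norm_g = np.linalg.norm(z[g])
            if norm_g > 0:
                z[g] *= max(0.0, 1.0 - step * lam / norm_g)
        b = z
    return b

rng = np.random.default_rng(3)
X = rng.standard_normal((200, 20))
beta = np.zeros(20)
beta[:4] = 1.5                                    # one active group of size 4
y = X @ beta + rng.standard_normal(200)
groups = [np.arange(i, i + 4) for i in range(0, 20, 4)]
b_hat = group_lasso(X, y, groups, lam=0.1)
```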