Corpus ID: 218470053

Thresholded Adaptive Validation: Tuning the Graphical Lasso for Graph Recovery

Authors: Mike Laszkiewicz, Asja Fischer, Johannes Lederer
The graphical lasso is the most popular estimator in Gaussian graphical models, but its performance hinges on a regularization parameter that needs to be calibrated to each application at hand. In this paper, we propose a novel calibration scheme for this parameter. The scheme is equipped with theoretical guarantees and motivates a thresholding pipeline that can improve graph recovery. Moreover, requiring at most one line search over the regularization path of the graphical lasso, the… 
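The thresholding pipeline mentioned in the abstract can be illustrated with a minimal sketch: after a graphical-lasso fit, edges whose partial-correlation magnitude falls below a cutoff are discarded. This is a generic post-processing step, not the paper's exact procedure; the function name `threshold_graph` and the cutoff `tau` are illustrative assumptions.

```python
import numpy as np

def threshold_graph(precision, tau):
    """Recover an edge set by discarding weak entries of a precision matrix.

    The precision matrix is converted to partial correlations
    (-K_ij / sqrt(K_ii * K_jj)), and an edge (i, j) is kept only if its
    partial correlation exceeds the threshold tau in absolute value.
    """
    d = np.sqrt(np.diag(precision))
    partial_corr = -precision / np.outer(d, d)  # off-diagonal partial correlations
    adj = np.abs(partial_corr) > tau
    np.fill_diagonal(adj, False)                # no self-loops
    return adj

# Toy precision matrix: chain graph 0-1-2 plus one tiny spurious entry.
K = np.array([[2.0, -0.8, 0.05],
              [-0.8, 2.0, -0.8],
              [0.05, -0.8, 2.0]])
A = threshold_graph(K, tau=0.1)  # the spurious 0-2 entry is removed
```

In practice the precision matrix would come from a graphical-lasso fit at a data-driven regularization level, with `tau` calibrated rather than fixed by hand.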
Covariate Selection Based on a Model-free Approach to Linear Regression with Exact Probabilities.
This paper proposes a completely new approach to covariate selection in linear regression that is intuitive, simple, fast, and powerful; it is neither frequentist nor Bayesian and outperforms all other selection procedures of which the authors are aware.
Linear Regression, Covariate Selection and the Failure of Modelling
It is argued that all model-based approaches to the selection of covariates in linear regression have failed; this applies both to frequentist approaches based on P-values and to Bayesian approaches.
TIGER: A Tuning-Insensitive Approach for Optimally Estimating Gaussian Graphical Models
This work proposes a new procedure for estimating high-dimensional Gaussian graphical models that is asymptotically tuning-free and non-asymptotically tuning-insensitive; theoretically, the obtained estimator is simultaneously minimax optimal for precision-matrix estimation under different norms.
A Practical Scheme and Fast Algorithm to Tune the Lasso With Optimality Guarantees
We introduce a novel scheme for choosing the regularization parameter in high-dimensional linear regression with the Lasso. This scheme, inspired by Lepski's method for bandwidth selection in…
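A Lepski-type selection rule of the flavor described above can be sketched generically: walk down a decreasing grid of regularization parameters and keep the smallest one at which all fits at larger-or-equal parameters remain mutually close, with a tolerance that grows with the parameters. The function name `av_select` and the constant `c` are illustrative assumptions, not the paper's calibrated quantities.

```python
import numpy as np

def av_select(lambdas, estimates, c=0.5):
    """Sketch of a Lepski-type (adaptive-validation) tuning rule.

    lambdas   : regularization parameters, sorted in decreasing order
    estimates : estimates[k] is the fitted coefficient vector at lambdas[k]
    Returns the smallest lambda such that every pair of fits at
    larger-or-equal lambdas differs by at most c * (sum of the two lambdas)
    in sup-norm.
    """
    best = lambdas[0]
    for k in range(len(lambdas)):
        ok = all(
            np.max(np.abs(estimates[i] - estimates[j])) <= c * (lambdas[i] + lambdas[j])
            for i in range(k + 1) for j in range(k + 1)
        )
        if not ok:
            break
        best = lambdas[k]
    return best

# Toy path: the fit at lambda=0.25 jumps away, so 0.5 is selected.
lambdas = [1.0, 0.5, 0.25]
fits = [np.zeros(2), np.array([0.1, 0.0]), np.array([5.0, 0.0])]
chosen = av_select(lambdas, fits)
```

The appeal of such rules is that they need only one pass over the regularization path and come with finite-sample guarantees under suitable assumptions.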
New Insights and Faster Computations for the Graphical Lasso
A very simple necessary and sufficient condition can be employed to determine whether the estimated inverse covariance matrix will be block diagonal and, if so, to identify the blocks in the graphical lasso solution.
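The screening idea can be sketched as follows: threshold the sample covariance at the regularization level and take connected components of the resulting graph; each component is one block of the graphical-lasso solution, which can then be solved independently. The function name `glasso_blocks` is an illustrative assumption.

```python
import numpy as np

def glasso_blocks(S, lam):
    """Screen for the block structure of the graphical-lasso solution.

    Builds the graph with an edge (i, j) whenever |S_ij| > lam, then labels
    its connected components by breadth-first search. Variables in different
    components decouple in the graphical-lasso estimate at level lam.
    """
    p = S.shape[0]
    adj = np.abs(S) > lam
    np.fill_diagonal(adj, False)
    labels = -np.ones(p, dtype=int)
    comp = 0
    for start in range(p):
        if labels[start] >= 0:
            continue
        labels[start] = comp
        stack = [start]
        while stack:
            i = stack.pop()
            for j in np.nonzero(adj[i])[0]:
                if labels[j] < 0:
                    labels[j] = comp
                    stack.append(j)
        comp += 1
    return labels

# At lam = 0.3, variables {0, 1} and {2, 3} fall into separate blocks.
S = np.array([[1.0, 0.6, 0.0, 0.0],
              [0.6, 1.0, 0.1, 0.0],
              [0.0, 0.1, 1.0, 0.7],
              [0.0, 0.0, 0.7, 1.0]])
labels = glasso_blocks(S, lam=0.3)
```

Because the check only touches the sample covariance, it costs far less than a full fit and can dramatically speed up computation along a regularization path.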
Non-concave penalties and the adaptive LASSO penalty are introduced to attenuate the bias that ℓ1 regularization induces in network estimation, addressing the problem of precision-matrix estimation.
Stability Approach to Regularization Selection (StARS) for High Dimensional Graphical Models
The method has a clear interpretation: it uses the least amount of regularization that simultaneously makes the graph sparse and replicable under random sampling, and it requires essentially no conditions.
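The stability idea behind StARS can be sketched as follows: fit a sparse graph on many subsamples, record how often each edge is selected, and summarize the edge-level instability 2θ(1-θ), where θ is the selection frequency. Purely for illustration, a simple correlation threshold stands in for the graphical-lasso fit here, and the function and parameter names are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def edge_instability(X, lam, n_subsamples=20, frac=0.8):
    """Sketch of a StARS-style instability measure.

    For each subsample, a sparse graph is fit (a correlation threshold
    stands in for the graphical lasso) and the selected edges are tallied.
    An edge with selection frequency theta contributes 2*theta*(1-theta);
    the average over all edges is returned.
    """
    n, p = X.shape
    m = int(frac * n)
    freq = np.zeros((p, p))
    for _ in range(n_subsamples):
        idx = rng.choice(n, size=m, replace=False)
        C = np.corrcoef(X[idx].T)
        freq += (np.abs(C) > lam)
    theta = freq / n_subsamples
    instab = 2 * theta * (1 - theta)
    iu = np.triu_indices(p, k=1)          # off-diagonal edges only
    return instab[iu].mean()

X = rng.standard_normal((200, 5))
# StARS-style tuning: pick the least regularization whose instability
# stays below a cutoff (e.g. 0.05).
scores = {lam: edge_instability(X, lam) for lam in (0.1, 0.3, 0.5)}
```

Since 2θ(1-θ) is at most 0.5, the instability score always lies in [0, 0.5], which makes the cutoff easy to interpret across problems.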
Sparse inverse covariance estimation with the graphical lasso.
Using a coordinate descent procedure for the lasso, a simple algorithm is developed that solves a 1000-node problem in at most a minute and is 30-4000 times faster than competing methods.
Selection of the Regularization Parameter in Graphical Models Using Network Characteristics
This article proposes several procedures for selecting the regularization parameter in the estimation of graphical models that focus on reliably recovering the appropriate network structure; an extensive simulation study shows that the proposed methods produce useful results for different network topologies.
Model selection and estimation in the Gaussian graphical model
The implementation of the penalized likelihood methods for estimating the concentration matrix in the Gaussian graphical model is nontrivial, but it is shown that the computation can be done effectively by taking advantage of the efficient maxdet algorithm developed in convex optimization.
Inference in High-Dimensional Graphical Models
An overview of methodology and theory for estimation and inference on edge weights in high-dimensional directed and undirected Gaussian graphical models is provided; the proposed estimators yield confidence intervals for the edge weights and recovery of the edge structure.
Learning Scale Free Networks by Reweighted L1 regularization
This work replaces the ℓ1 regularization with a power-law regularization and optimizes the objective function through a sequence of iteratively reweighted ℓ1 problems in which the regularization coefficients of high-degree nodes are reduced, encouraging the emergence of hubs.
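The reweighting step can be sketched as follows: a majorize-minimize argument turns the log-type power-law penalty into weighted-ℓ1 problems whose node weights shrink as the node's estimated connectivity grows, so hubs are penalized less. The function name `powerlaw_weights` and the smoothing constant `eps` are illustrative assumptions.

```python
import numpy as np

def powerlaw_weights(precision, eps=1e-3):
    """One reweighting step for power-law (hub-encouraging) regularization.

    Node i's weight is 1 / (||off-diagonal row i of K||_1 + eps), so nodes
    with large estimated degree receive a small penalty. Edge (i, j) is
    penalized by the average of the two node weights, keeping the weight
    matrix symmetric.
    """
    A = np.abs(precision).astype(float)
    np.fill_diagonal(A, 0.0)
    node_l1 = A.sum(axis=1)            # l1 "degree" of each node
    w = 1.0 / (node_l1 + eps)          # high-degree nodes get small weights
    return np.add.outer(w, w) / 2.0    # symmetric edge-level weights

# Node 0 is a hub, so its edges receive smaller penalty weights.
K = np.array([[2.0, -0.5, -0.5, -0.5],
              [-0.5, 2.0, 0.0, 0.0],
              [-0.5, 0.0, 2.0, 0.0],
              [-0.5, 0.0, 0.0, 2.0]])
W = powerlaw_weights(K)
```

In the full algorithm these weights would feed back into a weighted graphical-lasso solve, and the loop would repeat until the estimate stabilizes.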