# Network Exploration via the Adaptive Lasso and SCAD Penalties

@article{Fan2009NETWORKEV,
  title   = {Network exploration via the adaptive LASSO and SCAD penalties},
  author  = {Jianqing Fan and Yang Feng and Yichao Wu},
  journal = {The Annals of Applied Statistics},
  year    = {2009},
  volume  = {3},
  number  = {2},
  pages   = {521--541}
}

Graphical models are frequently used to explore networks, such as genetic networks, among a set of variables. This is usually carried out by exploring the sparsity of the precision matrix of the variables under consideration, and penalized likelihood methods are often used in such explorations. Yet the positive-definiteness constraint on precision matrices makes the optimization problem challenging. We introduce nonconcave penalties and the adaptive LASSO penalty to attenuate the bias problem in the…
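The SCAD penalty mentioned in the abstract has a standard closed form (Fan and Li, 2001): linear near zero, quadratic in a middle range, and constant beyond $a\lambda$, which is what lets large entries escape the bias that a plain $\ell_1$ penalty would impose. A minimal numpy sketch (the function name `scad_penalty` is ours, not the paper's; the conventional default $a = 3.7$ is assumed):

```python
import numpy as np

def scad_penalty(t, lam, a=3.7):
    """SCAD penalty p_lam(|t|), applied entrywise (Fan & Li, 2001; a > 2).

    Three regimes: linear up to lam, quadratic up to a*lam, then flat,
    so large coefficients incur a constant (bias-free) penalty.
    """
    t = np.abs(np.asarray(t, dtype=float))
    quad = -(t**2 - 2 * a * lam * t + lam**2) / (2 * (a - 1))
    flat = (a + 1) * lam**2 / 2
    return np.where(t <= lam, lam * t, np.where(t <= a * lam, quad, flat))
```

The pieces join continuously: at $t = \lambda$ both the linear and quadratic parts equal $\lambda^2$, and at $t = a\lambda$ the quadratic part reaches the constant $(a+1)\lambda^2/2$.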


## 340 Citations

### D-trace Precision Matrix Estimation Using Adaptive Lasso Penalties

- Computer Science
- 2015

Two adaptive extensions of the recently proposed $\ell_1$-norm penalized D-trace loss minimization method are introduced, intended to reduce the bias of the estimator.

### D-trace estimation of a precision matrix using adaptive Lasso penalties

- Computer Science, Adv. Data Anal. Classif.
- 2018

This paper introduces two adaptive extensions of the recently proposed $\ell_1$-norm penalized D-trace loss minimization method that aim at reducing the bias in the estimator.

### Estimating networks with jumps.

- Computer Science, Electronic Journal of Statistics
- 2012

A procedure is proposed that estimates the structure of a graphical model by minimizing a temporally smoothed $L_1$-penalized regression, which allows jointly estimating the partition boundaries of the VCVS model and the coefficients of the sparse precision matrix on each block of the partition.

### Group-wise shrinkage estimation in penalized model-based clustering

- Computer Science
- 2021

This work derives group-wise penalty factors, which automatically enforce under- or over-connectivity in the estimated graphs, and is entirely data-driven and does not require additional hyper-parameter specification.

### Generalized information criterion for model selection in penalized graphical models

- Computer Science
- 2014

This paper introduces an estimator of the relative directed distance between an estimated model and the true model, based on the Kullback-Leibler divergence and motivated by the generalized…

### Annealed Sparsity via Adaptive and Dynamic Shrinking

- Computer Science, KDD
- 2016

This paper proposes to achieve "annealed sparsity" by designing a dynamic shrinking scheme that simultaneously optimizes the regularization weights and model coefficients in sparse (multi-task) learning and competes favorably with state-of-the-art methods in sparse and multi-task learning.

### Partial Correlation Graphical LASSO.

- Computer Science, Mathematics
- 2021

This work illustrates the use of one such penalty, the partial correlation graphical LASSO, which sets an $L_{1}$ penalty on partial correlations and shows that, besides being scale invariant, there can be important gains in terms of inference.

### Penalized likelihood methods for estimation of sparse high-dimensional directed acyclic graphs.

- Computer Science, Biometrika
- 2010

This paper proposes an efficient penalized likelihood method for estimation of the adjacency matrix of directed acyclic graphs, and shows that although the lasso is only variable selection consistent under stringent conditions, the adaptive lasso can consistently estimate the true graph under the usual regularity assumptions.
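The adaptive lasso referred to in several of these citing papers reweights the $\ell_1$ penalty entrywise by $w_j = 1/|\hat\beta_j|^{\gamma}$, computed from an initial consistent estimate, so that large entries are penalized less and near-zero entries more. A generic sketch of the weighting step, not any one paper's exact estimator (`adaptive_weights` and the small `eps` guard are our choices):

```python
import numpy as np

def adaptive_weights(beta_init, gamma=1.0, eps=1e-8):
    """Entrywise adaptive-lasso weights w_j = 1 / |beta_init_j|**gamma.

    eps guards against division by zero: entries initially estimated
    near 0 receive a very large weight and are driven to exactly 0
    in the subsequent weighted-lasso fit.
    """
    return 1.0 / (np.abs(beta_init) ** gamma + eps)
```

The weighted problem is then an ordinary lasso with penalty $\lambda \sum_j w_j |\beta_j|$, which is why the adaptive lasso inherits lasso algorithms while gaining selection consistency under the usual regularity assumptions.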

### Graph Selection with GGMselect

- Computer Science, Statistical Applications in Genetics and Molecular Biology
- 2012

This work proposes a two-stage procedure which first builds a family of candidate graphs from the data, and then selects one graph among this family according to a dedicated criterion, and is shown to be consistent in a high-dimensional setting.

### TUNING PARAMETER SELECTION FOR PENALIZED LIKELIHOOD ESTIMATION OF GAUSSIAN GRAPHICAL MODEL

- Mathematics
- 2012

In a Gaussian graphical model, the conditional independence between two variables is characterized by the corresponding zero entries in the inverse covariance matrix. Maximum likelihood method using…

## References

Showing 1–10 of 53 references

### Model selection and estimation in the Gaussian graphical model

- Computer Science
- 2007

The implementation of the penalized likelihood methods for estimating the concentration matrix in the Gaussian graphical model is nontrivial, but it is shown that the computation can be done effectively by taking advantage of the efficient maxdet algorithm developed in convex optimization.

### Variable Selection via Nonconcave Penalized Likelihood and its Oracle Properties

- Mathematics, Computer Science
- 2001

In this article, penalized likelihood approaches are proposed to handle variable selection problems, and it is shown that the newly proposed estimators perform as well as the oracle procedure in variable selection; namely, they work as well as if the correct submodel were known.

### Sparse estimation of large covariance matrices via a nested Lasso penalty

- Computer Science
- 2008

The paper proposes a new covariance estimator for large covariance matrices when the variables have a natural ordering, and imposes a banded structure on the Cholesky factor, using a novel penalty called nested Lasso, which results in a sparse estimator of the inverse covariance matrix.

### Sparse inverse covariance estimation with the graphical lasso.

- Computer Science, Biostatistics
- 2008

Using a coordinate descent procedure for the lasso, a simple algorithm is developed that solves a 1000-node problem in at most a minute and is 30-4000 times faster than competing methods.
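The coordinate-descent lasso underlying the graphical lasso is built from one elementary operation, the soft-thresholding operator $S(z, t) = \mathrm{sign}(z)\,\max(|z| - t, 0)$. A minimal sketch of that operator (the name `soft_threshold` is ours), shown for orientation rather than as the graphical lasso itself:

```python
import numpy as np

def soft_threshold(z, t):
    """Soft-thresholding S(z, t) = sign(z) * max(|z| - t, 0).

    This is the closed-form solution of a one-dimensional lasso
    subproblem and the elementary update applied coordinate by
    coordinate inside the graphical lasso's inner loop.
    """
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)
```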

### Consistent neighbourhood selection for sparse high-dimensional graphs with the Lasso

- Computer Science
- 2004

It is shown that the proposed neighborhood selection scheme with the Lasso is consistent for sparse high-dimensional graphs, and the consistency hinges on the choice of the penalty parameter.

### Gradient directed regularization for sparse Gaussian concentration graphs, with applications to inference of genetic networks.

- Computer Science, Biostatistics
- 2006

A threshold gradient descent (TGD) regularization procedure for estimating the sparse precision matrix in the setting of Gaussian graphical models is introduced and demonstrated to identify biologically meaningful genetic networks based on microarray gene expression data.

### One-step Sparse Estimates in Nonconcave Penalized Likelihood Models.

- Computer Science, Annals of Statistics
- 2008

A new unified algorithm based on the local linear approximation (LLA) is proposed for maximizing the penalized likelihood over a broad class of concave penalty functions, and it is shown that if the regularization parameter is appropriately chosen, the one-step LLA estimates enjoy the oracle properties given good initial estimators.
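The LLA idea is to replace the nonconcave penalty $p_\lambda(|\theta_j|)$ by its tangent line at an initial estimate, $p'_\lambda(|\hat\theta_j^{(0)}|)\,|\theta_j|$, so one step reduces to a single weighted-lasso fit. A sketch of the weight computation for the SCAD case (function names are ours; the conventional $a = 3.7$ is assumed):

```python
import numpy as np

def scad_deriv(t, lam, a=3.7):
    """Derivative p'_lam(t) of the SCAD penalty for t >= 0:
    lam on [0, lam], linearly decaying to 0 on (lam, a*lam], then 0."""
    t = np.abs(np.asarray(t, dtype=float))
    return np.where(t <= lam, lam, np.maximum(a * lam - t, 0.0) / (a - 1))

def lla_weights(theta0, lam, a=3.7):
    """One-step LLA weights: evaluating p'_lam at the initial estimate
    turns the nonconcave problem into one weighted-lasso fit."""
    return scad_deriv(theta0, lam, a)
```

Large initial estimates get weight 0 (no shrinkage, hence no bias), while small ones get the full lasso weight $\lambda$, mirroring the oracle behavior the citation describes.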

### Model Selection Through Sparse Maximum Likelihood Estimation

- Computer Science, ArXiv
- 2007

Two new algorithms for solving problems with at least a thousand nodes in the Gaussian case are presented, based on Nesterov's first order method, which yields a complexity estimate with a better dependence on problem size than existing interior point methods.

### Covariance matrix selection and estimation via penalised normal likelihood

- Computer Science
- 2006

A nonparametric method is proposed for identifying parsimony and for producing a statistically efficient estimator of a large covariance matrix, and an algorithm is developed for computing the estimator and selecting the tuning parameter.

### Sparsistency and Rates of Convergence in Large Covariance Matrix Estimation.

- Computer Science, Annals of Statistics
- 2009

To guarantee the sparsistency and optimal rate of convergence, the number of nonzero elements should be small: $s_n' = O(p_n)$ at most, among $O(p_n^2)$ parameters, for estimating a sparse covariance or correlation matrix, a sparse precision or inverse correlation matrix, or a sparse Cholesky factor.