Sparse inverse covariance estimation with the graphical lasso.

@article{Friedman2008SparseIC,
  title={Sparse inverse covariance estimation with the graphical lasso},
  author={Jerome H. Friedman and Trevor J. Hastie and Robert Tibshirani},
  journal={Biostatistics},
  year={2008},
  volume={9},
  number={3},
  pages={432--441}
}
We consider the problem of estimating sparse graphs by a lasso penalty applied to the inverse covariance matrix. Using a coordinate descent procedure for the lasso, we develop a simple algorithm--the graphical lasso--that is remarkably fast: it solves a 1000-node problem (approximately 500,000 parameters) in at most a minute and is 30-4000 times faster than competing methods. It also provides a conceptual link between the exact problem and the approximation suggested by Meinshausen and Bühlmann (2006).
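As a concrete illustration of the estimator described in the abstract, here is a minimal sketch using scikit-learn's GraphicalLasso, which solves the same L1-penalized Gaussian log-likelihood problem by coordinate descent; the chain-graph example, sample size, and penalty alpha=0.05 are illustrative choices of mine, not values from the paper.

import numpy as np
from sklearn.covariance import GraphicalLasso

rng = np.random.default_rng(0)

# Sparse ground-truth precision (inverse covariance) matrix for a chain
# graph, so only adjacent variables are conditionally dependent.
p = 10
theta = np.eye(p)
for i in range(p - 1):
    theta[i, i + 1] = theta[i + 1, i] = 0.4

# Sample observations from the corresponding Gaussian.
X = rng.multivariate_normal(np.zeros(p), np.linalg.inv(theta), size=500)

# Fit the graphical lasso; a larger alpha gives a sparser estimate.
model = GraphicalLasso(alpha=0.05).fit(X)

# Nonzero off-diagonal entries of the estimated precision matrix are the
# edges of the estimated conditional-independence graph.
edges = np.argwhere(np.triu(np.abs(model.precision_) > 1e-6, k=1))
print("estimated edges:", edges.tolist())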

Citations

Two New Algorithms for Solving Covariance Graphical Lasso Based on Coordinate Descent and ECM
TLDR
This work proposes and explores two new algorithms for solving the covariance graphical lasso problem based on coordinate descent and ECM, and shows that these two algorithms are more attractive than the only existing competing algorithm of Bien and Tibshirani (2011) in terms of simplicity, speed and stability.
Sparse Inverse Covariance Estimation via an Adaptive Gradient-Based Method
TLDR
This work develops a new adaptive gradient-based method that carefully combines gradient information with an adaptive step-scaling strategy, which results in a scalable, highly competitive method that outperforms state-of-the-art competitors for large problems.
High-dimensional Covariance Estimation Based On Gaussian Graphical Models
TLDR
It is shown that, under suitable conditions, this approach yields consistent estimation of the graphical structure, and that the maximum likelihood estimator achieves fast convergence rates, with respect to the operator and Frobenius norms, for the covariance matrix and its inverse.
Sparse permutation invariant covariance estimation
TLDR
A method is proposed for constructing a sparse estimator of the inverse covariance (concentration) matrix in high-dimensional settings; it uses a penalized normal likelihood approach and forces sparsity through a lasso-type penalty.
SVD-Based Screening for the Graphical Lasso
TLDR
Sting, a fast approach to the graphical lasso, is proposed; it exploits the singular value decomposition of the data matrix to identify, before the iterations begin, the blocks of the estimated matrix that contain nonzero elements.
Joint estimation of multiple undirected graphical models
TLDR
This study develops an estimator for Gaussian graphical models appropriate for data coming from several datasets that share the same set of variables and a common network substructure, and proposes an alternating direction method for its solution.
Sparse Inverse Covariance Selection via Alternating Linearization Methods
TLDR
This paper proposes a first-order method based on an alternating linearization technique that exploits the problem's special structure; in particular, the subproblems solved in each iteration have closed-form solutions.
Sparse Laplacian Shrinkage with the Graphical Lasso Estimator for Regression Problems
TLDR
A graph-constrained regularization procedure, named Sparse Laplacian Shrinkage with the Graphical Lasso Estimator (SLS-GLE), is proposed; it uses the estimated precision matrix to capture the conditional dependence pattern among predictors, and encourages sparsity in both the regression model and the graphical model.
New Insights and Faster Computations for the Graphical Lasso
TLDR
A very simple necessary and sufficient condition can be employed to determine whether the estimated inverse covariance matrix will be block diagonal, and if so, then to identify the blocks in the graphical lasso solution.
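The condition itself is simple enough to sketch in a few lines: the graphical lasso solution is block diagonal over a partition of the variables exactly when every cross-block entry of the sample covariance is at most the penalty rho in absolute value, so the blocks are the connected components of the thresholded sample covariance. The function below is my own illustration of this screening step, not code from the paper.

import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import connected_components

def glasso_blocks(S, rho):
    # Threshold |S_ij| > rho off the diagonal; surviving entries are the
    # only pairs that can end up connected in the glasso solution.
    adjacency = (np.abs(S) > rho).astype(int)
    np.fill_diagonal(adjacency, 0)
    # The connected components of the thresholded graph are the blocks;
    # each block can then be solved as a separate, smaller problem.
    return connected_components(csr_matrix(adjacency), directed=False)

# Toy example: no cross-covariance between {0,1} and {2,3} exceeds rho,
# so the graphical lasso solution splits into two independent blocks.
S = np.array([[1.00, 0.60, 0.01, 0.02],
              [0.60, 1.00, 0.02, 0.01],
              [0.01, 0.02, 1.00, 0.70],
              [0.02, 0.01, 0.70, 1.00]])
n_blocks, labels = glasso_blocks(S, rho=0.1)
print(n_blocks, labels)  # -> 2 [0 0 1 1]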
The joint graphical lasso for inverse covariance estimation across multiple classes
TLDR
The joint graphical lasso is proposed, which borrows strength across the classes to estimate multiple graphical models that share certain characteristics, such as the locations or weights of non-zero edges, based on maximizing a penalized log-likelihood.

References

SHOWING 1-10 OF 16 REFERENCES
Model Selection Through Sparse Maximum Likelihood Estimation
TLDR
Two new algorithms are presented for solving problems with at least a thousand nodes in the Gaussian case; they are based on Nesterov's first-order method, which yields a complexity estimate with a better dependence on problem size than existing interior-point methods.
High-dimensional graphs and variable selection with the Lasso
TLDR
It is shown that neighborhood selection with the Lasso, which is equivalent to variable selection for Gaussian linear models, is a computationally attractive alternative to standard covariance selection for sparse high-dimensional graphs.
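The node-wise idea is easy to sketch: lasso-regress each variable on all the others and declare an edge wherever a coefficient is nonzero. The sketch below is my own minimal illustration (using scikit-learn's Lasso and an OR rule to symmetrize; the AND rule is the other common choice), not the authors' procedure verbatim.

import numpy as np
from sklearn.linear_model import Lasso

def neighborhood_selection(X, alpha=0.1):
    # One lasso regression per node: predict column j from the rest,
    # and record which predictors receive nonzero coefficients.
    n, p = X.shape
    adjacency = np.zeros((p, p), dtype=bool)
    for j in range(p):
        others = np.delete(np.arange(p), j)
        coef = Lasso(alpha=alpha).fit(X[:, others], X[:, j]).coef_
        adjacency[j, others] = coef != 0
    # OR rule: keep an edge if either endpoint selected the other.
    return adjacency | adjacency.T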
Model selection and estimation in the Gaussian graphical model
TLDR
The implementation of the penalized likelihood methods for estimating the concentration matrix in the Gaussian graphical model is nontrivial, but it is shown that the computation can be done effectively by taking advantage of the efficient maxdet algorithm developed in convex optimization.
Model Selection Through Sparse Maximum Likelihood Estimation for Multivariate Gaussian or Binary Data
TLDR
This work considers the problem of estimating the parameters of a Gaussian or binary distribution in such a way that the resulting undirected graphical model is sparse, and presents two new algorithms for solving problems with at least a thousand nodes in the Gaussian case.
Coordinate descent algorithms for lasso penalized regression
TLDR
This paper tests two exceptionally fast algorithms for estimating regression coefficients with a lasso penalty and proves that a greedy form of the ℓ2 algorithm converges to the minimum value of the objective function.
Covariance selection for nonchordal graphs via chordal embedding
TLDR
Algorithms for maximum likelihood estimation of Gaussian graphical models with conditional independence constraints are described, including efficient implementations of Newton's method and the conjugate gradient method obtained by embedding the graph in a chordal graph.
PATHWISE COORDINATE OPTIMIZATION
TLDR
It is shown that coordinate descent is very competitive with the well-known LARS procedure in large lasso problems, can deliver a path of solutions efficiently, and can be applied to many other convex statistical problems such as the garotte and elastic net.
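The core of pathwise coordinate optimization is a one-line soft-threshold update applied coordinate by coordinate, swept over a decreasing grid of penalties with warm starts. The sketch below is my own minimal version for the lasso objective (1/2n)||y - Xb||^2 + lam*||b||_1, with a fixed sweep count standing in for a proper convergence check.

import numpy as np

def soft_threshold(z, t):
    # S(z, t) = sign(z) * max(|z| - t, 0): the lasso coordinate update.
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def lasso_path(X, y, lams, n_sweeps=100):
    n, p = X.shape
    col_sq = (X ** 2).sum(axis=0)        # per-coordinate curvature
    beta = np.zeros(p)
    path = []
    for lam in sorted(lams, reverse=True):   # large-to-small penalties
        for _ in range(n_sweeps):
            for j in range(p):
                # Partial residual with coordinate j removed.
                r_j = y - X @ beta + X[:, j] * beta[j]
                beta[j] = soft_threshold(X[:, j] @ r_j, n * lam) / col_sq[j]
        # Warm start: beta carries over to the next (smaller) penalty.
        path.append((lam, beta.copy()))
    return path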
Determinant Maximization with Linear Matrix Inequality Constraints
The problem of maximizing the determinant of a matrix subject to linear matrix inequalities (LMIs) arises in many fields, including computational geometry, statistics, system identification, experiment design, and information and communication theory.
Convex Optimization
TLDR
A comprehensive introduction to the subject of convex optimization shows in detail how such problems can be solved numerically with great efficiency.
Causal Protein-Signaling Networks Derived from Multiparameter Single-Cell Data
TLDR
Reconstruction of network models from physiologically relevant primary single cells might be applied to understanding native-state tissue signaling biology, complex drug actions, and dysfunctional signaling in diseased cells.