# Maximum likelihood estimation of Gaussian graphical models: Numerical implementation and topology selection

```bibtex
@inproceedings{Dahl2009MaximumLE,
  title={Maximum likelihood estimation of Gaussian graphical models: Numerical implementation and topology selection},
  author={Joachim Dahl and Vwani P. Roychowdhury and Lieven Vandenberghe},
  year={2009}
}
```

We describe algorithms for maximum likelihood estimation of Gaussian graphical models with conditional independence constraints. It is well-known that this problem can be formulated as an unconstrained convex optimization problem, and that it has a closed-form solution if the underlying graph is chordal. The focus of this paper is on numerical algorithms for large problems with non-chordal graphs. We compare different gradient-based methods (coordinate descent, conjugate gradient, and limited…
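The abstract describes the core problem: maximize the Gaussian log-likelihood over precision matrices whose entries are zero wherever the graph has no edge. As a minimal NumPy sketch (not the paper's actual algorithm; the function name, step size, and backtracking scheme are illustrative), a projected gradient ascent on this objective looks like:

```python
import numpy as np

def ggm_mle(S, mask, lr=0.05, iters=500):
    """Gradient-ascent sketch for the constrained Gaussian MLE.

    Maximizes log det K - tr(S K) over precision matrices K whose
    off-pattern entries are fixed to zero. `mask` is a symmetric 0/1
    matrix with ones on the allowed entries (including the diagonal).
    """
    p = S.shape[0]
    K = np.eye(p)  # identity is feasible and positive definite
    for _ in range(iters):
        grad = np.linalg.inv(K) - S   # gradient of log det K - tr(S K)
        step = lr * mask * grad       # project onto the sparsity pattern
        K_new = K + step
        # halve the step until the iterate stays positive definite
        while np.linalg.eigvalsh(K_new)[0] <= 0:
            step *= 0.5
            K_new = K + step
        K = K_new
    return K
```

With a full mask the stationary point is K = S⁻¹, the unconstrained MLE; with a restricted mask the off-pattern entries of K remain exactly zero throughout.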


## 29 Citations

Estimation of positive definite M-matrices and structure learning for attractive Gaussian Markov Random fields

- Computer Science, Mathematics
- 2014

Optimal covariance selection for estimation using graphical models

- Computer Science · Proceedings of the 2011 American Control Conference
- 2011

It is proved that Dempster's covariance is not optimal in most minimum mean squared error (MMSE) estimation problems, and a method for finding the MMSE-optimal covariance is proposed and its properties are analyzed.

Rank Likelihood for Bayesian Nonparanormal Graphical Models

- Mathematics, Computer Science
- 2018

A Bayesian approach is considered for the nonparanormal graphical model using a rank likelihood which remains invariant under monotone transformations, thereby avoiding the need to put a prior on the transformation functions.

A Proximal Point Algorithm for Log-Determinant Optimization with Group Lasso Regularization

- Computer Science, Mathematics · SIAM J. Optim.
- 2013

Numerical results are presented to demonstrate that the proposed Newton-CG based PPA is stable and efficient and, in particular, outperforms the alternating direction method (ADM) when high accuracy is required.

Newton-Like Methods for Sparse Inverse Covariance Estimation

- Computer Science, Mathematics · NIPS
- 2012

Two classes of second-order optimization methods for solving the sparse inverse covariance estimation problem are proposed, among them a limited-memory BFGS variant of the orthant-based Newton method.

Smooth Optimization Approach for Sparse Covariance Selection

- Computer Science
- 2009

This paper studies a smooth optimization approach for solving a class of nonsmooth strictly concave maximization problems whose objective functions admit smooth convex minimization reformulations and applies Nesterov’s smooth optimization technique to sparse covariance selection.

Smooth Optimization Approach for Covariance Selection

- Computer Science, Mathematics
- 2007

This paper applies Nesterov's smooth optimization technique to dual counterparts that are smooth convex problems and shows that the resulting approach has O(1/√ε) iteration complexity for finding an ε-optimal solution to both primal and dual problems.

Smooth Optimization Approach for Sparse Covariance Selection

- Computer Science · SIAM J. Optim.
- 2009

This paper studies a smooth optimization approach for solving a class of nonsmooth strictly concave maximization problems whose objective functions admit smooth convex minimization reformulations and applies Nesterov's smooth optimization technique to sparse covariance selection on a set of randomly generated instances.

Distributionally Robust Inverse Covariance Estimation: The Wasserstein Shrinkage Estimator

- Mathematics · Oper. Res.
- 2022

The optimal solutions of many decision problems such as the Markowitz portfolio allocation and the linear discriminant analysis depend…

Bayesian Inference in Nonparanormal Graphical Models

- Mathematics, Computer Science · Bayesian Analysis
- 2020

A Bayesian approach is considered in the nonparanormal graphical model by putting priors on the unknown transformations through a random series based on B-splines where the coefficients are ordered to induce monotonicity and presents a posterior consistency result on the underlying transformation and the precision matrix.

## References

Showing 1–10 of 38 references

Sparse Covariance Selection via Robust Maximum Likelihood Estimation

- Computer Science, Mathematics · arXiv
- 2005

The authors consider a maximum likelihood problem with a penalty term given by the sum of absolute values of the elements of the inverse covariance matrix; the problem is directly amenable to now-standard interior-point algorithms for convex optimization, but remains challenging due to its size.
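The penalized objective this reference describes is the ℓ1-regularized negative Gaussian log-likelihood. A minimal sketch of evaluating it (the function name and the use of `slogdet` for numerical stability are illustrative choices, not from the cited paper):

```python
import numpy as np

def penalized_neg_loglik(K, S, lam):
    """Evaluate -log det K + tr(S K) + lam * sum_ij |K_ij|
    for a positive definite precision matrix K and sample covariance S."""
    sign, logdet = np.linalg.slogdet(K)  # stable log-determinant
    if sign <= 0:
        raise ValueError("K must be positive definite")
    return -logdet + np.trace(S @ K) + lam * np.abs(K).sum()
```

Minimizing this objective over all symmetric positive definite K yields a sparse estimate of the inverse covariance; the penalty weight `lam` trades likelihood fit against sparsity.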

Covariance selection and estimation via penalised normal likelihood

- Mathematics, Computer Science
- 2005

A nonparametric method to identify parsimony and to produce a statistically efficient estimator of a large covariance matrix through the modified Cholesky decomposition of its inverse or the one-step-ahead predictive representation of the vector of responses is proposed.

Model selection and multimodel inference : a practical information-theoretic approach

- Computer Science
- 2003

The second edition of this book is unique in that it focuses on methods for making formal statistical inference from all the models in an a priori set (Multi-Model Inference). A philosophy is…

Exploiting sparsity in semidefinite programming via matrix completion II: implementation and numerical results

- Computer Science, Mathematics · Math. Program.
- 2003

New techniques to deal with the sparsity through a clique tree in the former method and through new computational formulae in the latter are introduced, and numerical results show that these methods can be very efficient for some problems.

Parallel and Distributed Computation: Numerical Methods

- Computer Science
- 1989

This work discusses parallel and distributed architectures, complexity measures, and communication and synchronization issues, and presents both Jacobi and Gauss–Seidel iterations, which serve as reference algorithms for many of the computational approaches addressed later.

Exploiting Sparsity in Semidefinite Programming via Matrix Completion I: General Framework

- Computer Science, Mathematics · SIAM J. Optim.
- 2001

A general method of exploiting the aggregate sparsity pattern over all data matrices is proposed to overcome a critical disadvantage of primal-dual interior-point methods for large-scale semidefinite programs (SDPs).

Linear Recursive Equations, Covariance Selection, and Path Analysis

- Mathematics
- 1980

Abstract By defining a reducible zero pattern and by using the concept of multiplicative models, we relate linear recursive equations that have been introduced by econometrician Herman Wold (1954)…

An Approximate Minimum Degree Ordering Algorithm

- Computer Science · SIAM J. Matrix Anal. Appl.
- 1996

An approximate minimum degree (AMD) ordering algorithm for preordering a symmetric sparse matrix prior to numerical factorization is presented; it produces results that are comparable in quality with the best orderings from other minimum degree algorithms.