Corpus ID: 1834714

Iterative Thresholding Algorithm for Sparse Inverse Covariance Estimation

@inproceedings{Rolfs2012IterativeTA,
  title={Iterative Thresholding Algorithm for Sparse Inverse Covariance Estimation},
  author={Benjamin T. Rolfs and Bala Rajaratnam and Dominique Guillot and Ian Wong and Arian Maleki},
  booktitle={NIPS},
  year={2012}
}
The L1-regularized maximum likelihood estimation problem has recently become a topic of great interest within the machine learning, statistics, and optimization communities as a method for producing sparse inverse covariance estimators. In this paper, a proximal gradient method (G-ISTA) for performing L1-regularized covariance matrix estimation is presented. Although numerous algorithms have been proposed for solving this problem, this simple proximal gradient method is found to have attractive… 
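The G-ISTA iteration is simple to state: take a gradient step on the smooth part of the objective, −log det Θ + tr(SΘ), whose gradient is S − Θ⁻¹, and then apply elementwise soft-thresholding. Below is a minimal Python sketch of that proximal gradient step; the heuristic step size and the crude positive-definiteness backtracking are simplifications (the published G-ISTA selects step sizes from eigenvalue bounds on the iterates and includes a line search), so treat this as an illustration rather than the paper's algorithm.

```python
import numpy as np

def soft_threshold(A, tau):
    """Elementwise soft-thresholding (the proximal operator of tau*||.||_1)."""
    return np.sign(A) * np.maximum(np.abs(A) - tau, 0.0)

def g_ista_sketch(S, lam, max_iter=500, tol=1e-6):
    """Proximal gradient iteration for the graphical lasso objective
        -log det(Theta) + tr(S @ Theta) + lam * ||Theta||_1.
    Note: this sketch thresholds the diagonal too and uses a heuristic
    step size with eigenvalue backtracking; the published G-ISTA uses
    principled step-size bounds and a line search instead.
    """
    Theta = np.diag(1.0 / (np.diag(S) + lam))  # common positive definite start
    step = 1.0 / np.linalg.norm(S, 2)          # heuristic initial step (assumption)
    for _ in range(max_iter):
        grad = S - np.linalg.inv(Theta)        # gradient of the smooth part
        Theta_next = soft_threshold(Theta - step * grad, step * lam)
        # backtrack until the new iterate is positive definite
        while np.linalg.eigvalsh(Theta_next).min() <= 0.0:
            step *= 0.5
            Theta_next = soft_threshold(Theta - step * grad, step * lam)
        if np.linalg.norm(Theta_next - Theta, "fro") <= tol * max(1.0, np.linalg.norm(Theta, "fro")):
            return Theta_next
        Theta = Theta_next
    return Theta
```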

Citations

Inverse Covariance Estimation for High-Dimensional Data in Linear Time and Space: Spectral Methods for Riccati and Sparse Models
TLDR
This work proposes maximum likelihood estimation for learning Gaussian graphical models with a Gaussian (ℓ2²) prior on the parameters and provides techniques for using the learnt models, such as removing unimportant variables, computing likelihoods and conditional distributions.
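The reason an ℓ2² (Frobenius-type) prior admits fast methods is that the penalized MLE decouples along the eigenvectors of the sample covariance. A hedged sketch of that spectral closed form follows; this is the standard derivation for a Frobenius-norm penalty, and whether it matches the paper's exact Riccati formulation is an assumption:

```python
import numpy as np

def l2_penalized_precision(S, lam):
    """Spectral closed form for
        minimize  -log det(Theta) + tr(S @ Theta) + lam * ||Theta||_F^2.
    Setting the gradient -inv(Theta) + S + 2*lam*Theta to zero and
    diagonalizing S = U diag(d) U^T reduces the problem to one scalar
    quadratic per eigenvalue: 2*lam*t**2 + d*t - 1 = 0.
    """
    d, U = np.linalg.eigh(S)
    t = (-d + np.sqrt(d**2 + 8.0 * lam)) / (4.0 * lam)  # positive root
    return (U * t) @ U.T  # equals U @ diag(t) @ U.T
```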
Fast Component Pursuit for Large-Scale Inverse Covariance Estimation
TLDR
Experiments on large-scale synthetic and real-world datasets with thousands to millions of variables show that the COP method is faster than state-of-the-art techniques for the inverse covariance estimation problem while achieving comparable log-likelihood on test data.
A Block-Coordinate Descent Approach for Large-scale Sparse Inverse Covariance Estimation
TLDR
This paper presents a new block-coordinate descent approach for solving the sparse inverse covariance estimation problem on large-scale data sets; it treats the sought matrix block-by-block using quadratic approximations and is shown to have advantages over existing methods in several respects.
Linear-Time Algorithm for Learning Large-Scale Sparse Graphical Models
TLDR
A novel Newton-Conjugate Gradient algorithm that can efficiently solve the GGL with general structures is described; the algorithm is highly efficient in practice, and a recursive closed-form solution is given for the case where the thresholded sample covariance matrix is chordal.
Large-Scale Sparse Inverse Covariance Estimation via Thresholding and Max-Det Matrix Completion
TLDR
This paper proves an extension of a recent line of results and describes a Newton-CG algorithm to efficiently solve the MDMC problem, proving that the algorithm converges to an $\epsilon$-accurate solution in $O(n\log(1/\epsilon))$ time and $O(n)$ memory.
Sparse Inverse Covariance Estimation with Hierarchical Matrices
TLDR
Hierarchical matrices are used, which allow for an (approximate) data-sparse representation of large dense matrices; this enables the simultaneous treatment of groups of variables in a block-wise manner and makes it easy to ensure positive definiteness of each iterate.
Large-scale ℓ0 sparse inverse covariance estimation
TLDR
A new block iterative approach is presented for sparse inverse covariance estimation; it can handle large-scale data and outperforms existing methods for this problem.
Sparse Gaussian graphical model estimation via alternating minimization
TLDR
This paper proposes a new method for solving the sparse inverse covariance estimation problem using the alternating minimization algorithm, which effectively works as a proximal gradient algorithm on the dual problem.
BIG & QUIC: Sparse Inverse Covariance Estimation for a Million Variables
TLDR
An algorithm, BIGQUIC, is developed that can solve one-million-dimensional ℓ1-regularized Gaussian MLE problems on a single machine with bounded memory and can achieve super-linear or even quadratic convergence rates.
...
...

References

SHOWING 1-10 OF 44 REFERENCES
Sparse inverse covariance matrix estimation using quadratic approximation
TLDR
A novel algorithm is proposed for solving the resulting optimization problem, a regularized log-determinant program; it is based on Newton's method and employs a quadratic approximation, with modifications that leverage the structure of the sparse Gaussian MLE problem.
Condition Number Regularized Covariance Estimation.
TLDR
This paper proposes a maximum likelihood approach with the direct goal of obtaining a well-conditioned estimator, investigates the theoretical properties of the regularized covariance estimator comprehensively, including its regularization path, and develops an approach that adaptively determines the required level of regularization.
Sparse Inverse Covariance Selection via Alternating Linearization Methods
TLDR
This paper proposes a first-order method based on an alternating linearization technique that exploits the problem's special structure; in particular, the subproblems solved in each iteration have closed-form solutions.
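A key ingredient behind such closed-form subproblems is the proximal step of the log-determinant term, which reduces to one scalar quadratic per eigenvalue after an eigendecomposition. The sketch below shows that operator; it is standard in alternating-direction treatments of this problem, though the paper's exact linearized subproblems may differ:

```python
import numpy as np

def prox_neg_logdet(A, mu):
    """Solve  argmin_X  -log det(X) + (1.0 / (2.0 * mu)) * ||X - A||_F^2.
    The optimality condition -inv(X) + (X - A)/mu = 0 diagonalizes in the
    eigenbasis of A, giving x**2 - a*x - mu = 0 for each eigenvalue a.
    """
    a, U = np.linalg.eigh(A)
    x = (a + np.sqrt(a**2 + 4.0 * mu)) / 2.0  # positive root, so X is PD
    return (U * x) @ U.T
```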
Model Selection Through Sparse Maximum Likelihood Estimation for Multivariate Gaussian or Binary Data
TLDR
This work considers the problem of estimating the parameters of a Gaussian or binary distribution in such a way that the resulting undirected graphical model is sparse, and presents two new algorithms for solving problems with at least a thousand nodes in the Gaussian case.
A Fast Iterative Shrinkage-Thresholding Algorithm for Linear Inverse Problems
TLDR
A new fast iterative shrinkage-thresholding algorithm (FISTA) is presented that preserves the computational simplicity of ISTA but has a global rate of convergence proven to be significantly better, both theoretically and practically.
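Because G-ISTA is an ISTA-type method for the graphical lasso, the accelerated variant is directly relevant. Here is a generic FISTA sketch with the smooth gradient, the proximal operator, and the Lipschitz constant left abstract; the helper names and the small lasso example are illustrative, not from either paper:

```python
import numpy as np

def fista(grad_f, prox_g, L, x0, max_iter=200):
    """Generic FISTA for minimizing f(x) + g(x), where f is smooth with
    L-Lipschitz gradient (grad_f) and g enters via its proximal operator
    prox_g(v, step). Attains the paper's O(1/k^2) objective-error rate."""
    x, y, t = x0.copy(), x0.copy(), 1.0
    for _ in range(max_iter):
        x_next = prox_g(y - grad_f(y) / L, 1.0 / L)        # proximal gradient step at y
        t_next = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0  # momentum parameter update
        y = x_next + ((t - 1.0) / t_next) * (x_next - x)   # extrapolation step
        x, t = x_next, t_next
    return x

# illustrative use on a small lasso instance: f = 0.5*||Ax-b||^2, g = lam*||x||_1
A = np.random.randn(20, 10); b = np.random.randn(20); lam = 0.1
x_hat = fista(grad_f=lambda x: A.T @ (A @ x - b),
              prox_g=lambda v, s: np.sign(v) * np.maximum(np.abs(v) - lam * s, 0.0),
              L=np.linalg.norm(A, 2) ** 2,
              x0=np.zeros(10))
```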
Adaptive First-Order Methods for General Sparse Inverse Covariance Selection
  Zhaosong Lu · SIAM J. Matrix Anal. Appl. · 2010
TLDR
The computational results demonstrate that the proposed algorithm framework and methods can solve problems with dimension of at least a thousand and nearly half a million constraints within a reasonable amount of time, and that the ASPG method generally outperforms the ANS method and glasso.
Alternating Direction Method for Covariance Selection Models
  X. Yuan · J. Sci. Comput. · 2012
TLDR
An alternating direction method (ADM) is developed, and preliminary numerical results show that the ADM approach is very efficient for large-scale instances of the ℓ1-norm penalized log-likelihood model.
Smooth Optimization Approach for Sparse Covariance Selection
TLDR
This paper studies a smooth optimization approach for solving a class of nonsmooth strictly concave maximization problems whose objective functions admit smooth convex minimization reformulations, and applies Nesterov's smooth optimization technique to sparse covariance selection.
Sparse Reconstruction by Separable Approximation
TLDR
This work proposes iterative methods in which each step is obtained by solving an optimization subproblem involving a quadratic term with diagonal Hessian plus the original sparsity-inducing regularizer, and proves convergence of the proposed iterative algorithm to a minimum of the objective function.
A note on the lack of symmetry in the graphical lasso
...
...