A convex pseudolikelihood framework for high dimensional partial correlation estimation with convergence guarantees

@article{Khare2013ACP,
  title={A convex pseudolikelihood framework for high dimensional partial correlation estimation with convergence guarantees},
  author={Kshitij Khare and Sang-Yun Oh and Bala Rajaratnam},
  journal={Journal of the Royal Statistical Society: Series B (Statistical Methodology)},
  year={2013},
  volume={77}
}
  • K. Khare, Sang-Yun Oh, B. Rajaratnam
  • Published 20 July 2013
  • Computer Science, Mathematics
  • Journal of the Royal Statistical Society: Series B (Statistical Methodology)
Sparse high dimensional graphical model selection is a topic of much interest in modern day statistics. A popular approach is to apply l1‐penalties to either parametric likelihoods, or regularized regression/pseudolikelihoods, with the latter having the distinct advantage that they do not explicitly assume Gaussianity. As none of the popular methods proposed for solving pseudolikelihood‐based objective functions have provable convergence guarantees, it is not clear whether corresponding… 
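The regression/pseudolikelihood approach mentioned in the abstract can be illustrated with a minimal nodewise-regression sketch: each variable is lasso-regressed on all the others, and the selected coefficients define the partial correlation graph. This is only a generic illustration of the idea, not the CONCORD estimator proposed in the paper; the function name nodewise_graph and the use of scikit-learn's Lasso are assumptions made for the sketch.

```python
# Minimal sketch of the nodewise l1-penalized regression idea behind
# pseudolikelihood-based partial correlation graph estimation.
# NOT the CONCORD algorithm from the paper; purely illustrative.
import numpy as np
from sklearn.linear_model import Lasso  # assumed available

def nodewise_graph(X, alpha=0.1):
    """Estimate a sparse graph by lasso-regressing each column of X on the others."""
    n, p = X.shape
    B = np.zeros((p, p))  # B[j, k] = coefficient of variable k when predicting variable j
    for j in range(p):
        others = [k for k in range(p) if k != j]
        fit = Lasso(alpha=alpha).fit(X[:, others], X[:, j])
        B[j, others] = fit.coef_
    # Keep an edge only if both regressions select it (the "AND" rule)
    adjacency = (B != 0) & (B != 0).T
    return adjacency

# Example usage on synthetic data
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 10))
A = nodewise_graph(X, alpha=0.2)
print(int(A.sum()), "nonzero adjacency entries")
```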
Optimization Methods for Sparse Pseudo-Likelihood Graphical Model Selection
TLDR
This paper proposes two proximal gradient methods (CONCORD-ISTA and CONCORD-FISTA) for performing l1-regularized inverse covariance matrix estimation in the pseudo-likelihood framework and presents timing comparisons with coordinate-wise minimization and demonstrates that this approach yields tremendous payoffs for l1-penalized partial correlation graph estimation outside the Gaussian setting.
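As a rough illustration of the proximal gradient structure underlying methods such as CONCORD-ISTA, the sketch below applies ISTA to a plain l1-penalized least-squares problem: a gradient step on the smooth part followed by coordinate-wise soft-thresholding. The least-squares loss is a stand-in for the CONCORD pseudolikelihood, and the fixed 1/Lipschitz step size is an assumption made for the sketch.

```python
# Generic ISTA (proximal gradient) sketch for an l1-penalized smooth loss.
import numpy as np

def soft_threshold(z, t):
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def ista_lasso(A, b, lam, n_iter=500):
    """Minimize 0.5*||A x - b||^2 + lam*||x||_1 with a fixed step size."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1 / Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - b)                          # gradient of the smooth part
        x = soft_threshold(x - step * grad, step * lam)   # proximal (l1) step
    return x
```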
A convex framework for high-dimensional sparse Cholesky based covariance estimation
TLDR
A new penalized likelihood method for sparse estimation of the inverse covariance Cholesky parameter is proposed that aims to overcome some of the shortcomings of current methods, but retains their respective strengths.
Distributionally Robust Formulation and Model Selection for the Graphical Lasso
TLDR
A novel notion of a Wasserstein ambiguity set specifically tailored to the estimation of the inverse covariance matrix for multivariate data is provided, from which a representation for a tractable class of regularized estimators is obtained.
B-CONCORD -- A scalable Bayesian high-dimensional precision matrix estimation procedure
TLDR
B-CONCORD, a Bayesian analogue of the CONvex CORrelation selection methoD (CONCORD) introduced by Khare et al. (2015), is developed; model selection and estimation consistency are established under high-dimensional scaling, and a procedure that refits only the non-zero parameters of the precision matrix is developed, leading to significant improvements in the estimates in finite samples.
Learning Local Dependence In Ordered Data
TLDR
This work proposes a framework for learning local dependence based on estimating the inverse of the Cholesky factor of the covariance matrix, which yields a simple regression interpretation for local dependence in which variables are predicted by their neighbors.
Bayesian Regularization for Graphical Models With Unequal Shrinkage
TLDR
A Bayesian framework for estimating a high-dimensional sparse precision matrix, in which adaptive shrinkage and sparsity are induced by a mixture of Laplace priors is considered, and the MAP (maximum a posteriori) estimator is investigated from a penalized likelihood perspective.
On the Solution Path of Regularized Covariance Estimators
TLDR
This paper provides a complete characterization of the entire solution path of the CondReg estimator and presents two instances of fast algorithms: the forward and the backward algorithms that greatly speed up the cross-validation procedure that selects the optimal regularization parameter.
A convex optimization formulation for multivariate regression
TLDR
This article proposes a convex optimization formulation for high-dimensional multivariate linear regression under a general error covariance structure, and shows that the proposed method recovers the oracle estimator under sharp scaling conditions, and rates of convergence in terms of the vector ℓ∞ norm are established.
A generalized likelihood-based Bayesian approach for scalable joint regression and covariance selection in high dimensions
TLDR
An algorithm called Joint Regression Network Selector (JRNS) is developed which can accommodate general sparsity patterns, is scalable, and is orders of magnitude faster than state-of-the-art Bayesian approaches that provide uncertainty quantification.

References

Showing 1-10 of 71 references
Condition‐number‐regularized covariance estimation
TLDR
This paper proposes a maximum likelihood approach with the direct goal of obtaining a well-conditioned estimator, comprehensively investigates the theoretical properties of the regularized covariance estimator, including its regularization path, and develops an approach that adaptively determines the required level of regularization.
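A much-simplified illustration of condition-number regularization, under the assumption that the estimator can be summarized as clipping the sample eigenvalues: the sketch clips the spectrum so that the resulting condition number is at most kappa. The actual CondReg estimator selects the truncation level by maximizing the constrained likelihood; the crude rule below is only a placeholder.

```python
# Simplified illustration of condition-number regularization: clip the sample
# covariance eigenvalues so the estimate's condition number is at most kappa.
# The midpoint-free rule below is a stand-in, not the likelihood-based choice.
import numpy as np

def clip_condition_number(S, kappa=10.0):
    """Return a covariance estimate whose condition number is at most kappa."""
    evals, evecs = np.linalg.eigh(S)
    tau = max(evals.max() / kappa, evals.min())  # crude floor, not the MLE rule
    clipped = np.clip(evals, tau, kappa * tau)
    return (evecs * clipped) @ evecs.T
```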
Model Selection Through Sparse Maximum Likelihood Estimation for Multivariate Gaussian or Binary Data
TLDR
This work considers the problem of estimating the parameters of a Gaussian or binary distribution in such a way that the resulting undirected graphical model is sparse, and presents two new algorithms for solving problems with at least a thousand nodes in the Gaussian case.
A path following algorithm for Sparse Pseudo-Likelihood Inverse Covariance Estimation (SPLICE)
TLDR
This paper proposes an l1-penalized pseudo-likelihood estimate for the inverse covariance matrix, named SPLICE, which gives the best overall performance in terms of three metrics on the precision matrix and the ROC curve for model selection.
High-dimensional graphs and variable selection with the Lasso
TLDR
It is shown that neighborhood selection with the Lasso is a computationally attractive alternative to standard covariance selection for sparse high-dimensional graphs; each node's neighborhood is estimated by a separate Lasso regression, so the problem is equivalent to variable selection for Gaussian linear models.
Partial Correlation Estimation by Joint Sparse Regression Models
TLDR
It is shown that the proposed method, space (Sparse PArtial Correlation Estimation), performs well in both nonzero partial correlation selection and the identification of hub variables, and also outperforms two existing methods.
Convergence of cyclic coordinatewise l1 minimization
TLDR
A rigorous general proof of convergence for the cyclic coordinatewise minimization algorithm is provided and the usefulness of the general results in contemporary applications is demonstrated.
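For concreteness, a minimal sketch of cyclic coordinatewise minimization applied to an l1-penalized least-squares objective, where each coordinate update is a closed-form soft-thresholding step; the convergence results in the cited paper cover this kind of scheme for more general objectives, including pseudolikelihood ones. Function names are illustrative.

```python
# Sketch of cyclic coordinatewise minimization for an l1-penalized least-squares
# objective: each coordinate is updated in closed form while the others are fixed.
import numpy as np

def soft_threshold(z, t):
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def cyclic_cd_lasso(A, b, lam, n_sweeps=100):
    """Minimize 0.5*||A x - b||^2 + lam*||x||_1 by cyclic coordinate descent.
    Assumes no column of A is identically zero."""
    n, p = A.shape
    x = np.zeros(p)
    col_sq = (A ** 2).sum(axis=0)      # ||A_j||^2 for each column
    r = b - A @ x                      # current residual
    for _ in range(n_sweeps):
        for j in range(p):
            r += A[:, j] * x[j]        # remove coordinate j's contribution
            rho = A[:, j] @ r          # correlation of column j with partial residual
            x[j] = soft_threshold(rho, lam) / col_sq[j]
            r -= A[:, j] * x[j]        # add the updated contribution back
    return x
```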
On Model Selection Consistency of Lasso
TLDR
It is proved that a single condition, which is called the Irrepresentable Condition, is almost necessary and sufficient for Lasso to select the true model both in the classical fixed p setting and in the large p setting as the sample size n gets large.
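A small sketch of what checking the (strong) Irrepresentable Condition can look like for a given design matrix and a known true support: the inactive-set rows of the Gram matrix must not be too correlated with the active set. The tolerance parameter eta and the function name are illustrative choices, not part of the cited paper.

```python
# Check the strong Irrepresentable Condition for a design X and a true coefficient vector.
import numpy as np

def irrepresentable_condition(X, beta_true, eta=0.0):
    """Return True if ||C_{S^c,S} C_{S,S}^{-1} sign(beta_S)||_inf <= 1 - eta."""
    n = X.shape[0]
    C = X.T @ X / n                       # empirical Gram matrix
    S = np.flatnonzero(beta_true)         # true active set
    Sc = np.flatnonzero(beta_true == 0)   # inactive set
    signs = np.sign(beta_true[S])
    quantity = C[np.ix_(Sc, S)] @ np.linalg.solve(C[np.ix_(S, S)], signs)
    return np.max(np.abs(quantity)) <= 1 - eta
```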
Exact Covariance Thresholding into Connected Components for Large-Scale Graphical Lasso
TLDR
For a range of values of λ, this proposal splits a large graphical lasso problem into smaller tractable problems, making it possible to solve an otherwise infeasible large-scale problem.
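The screening rule can be sketched directly: threshold the absolute off-diagonal entries of the sample covariance at λ, take connected components, and solve the graphical lasso separately on each component. The helper below only computes the component split; the availability of SciPy's connected_components is assumed.

```python
# Split a graphical lasso problem into independent blocks by covariance thresholding.
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import connected_components

def glasso_components(S, lam):
    """Return (number of components, per-variable labels) after thresholding |S| at lam."""
    adj = (np.abs(S) > lam).astype(int)
    np.fill_diagonal(adj, 0)
    n_comp, labels = connected_components(csr_matrix(adj), directed=False)
    return n_comp, labels
```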
A well-conditioned estimator for large-dimensional covariance matrices
Sparse inverse covariance estimation with the graphical lasso.
TLDR
Using a coordinate descent procedure for the lasso, a simple algorithm is developed that solves a 1000-node problem in at most a minute and is 30-4000 times faster than competing methods.
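For reference, scikit-learn ships a graphical lasso implementation; a minimal usage example follows, with the penalty value and data chosen arbitrarily for illustration.

```python
# Minimal graphical lasso usage with scikit-learn.
import numpy as np
from sklearn.covariance import GraphicalLasso

rng = np.random.default_rng(0)
X = rng.standard_normal((500, 20))

model = GraphicalLasso(alpha=0.05).fit(X)    # alpha is the l1 penalty weight
precision = model.precision_                 # sparse estimate of the inverse covariance
print((np.abs(precision) > 1e-8).sum(), "nonzero entries in the precision estimate")
```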