Corpus ID: 12760511

Structure Learning of Partitioned Markov Networks

@inproceedings{Liu2016StructureLO,
  title={Structure Learning of Partitioned Markov Networks},
  author={Song Liu and Taiji Suzuki and Masashi Sugiyama and Kenji Fukumizu},
  booktitle={ICML},
  year={2016}
}
We learn the structure of a Markov Network (MN) between two groups of random variables from joint observations. Since modelling and learning the full MN structure may be hard, learning the links between the two groups directly may be a preferable option. We introduce a novel concept called the partitioned ratio, whose factorization directly associates with the Markovian properties of random variables across the two groups. A simple one-shot convex optimization procedure is proposed for learning the sparse…
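The abstract's goal — recovering only the links *between* two groups — has a simple Gaussian analogue that helps build intuition: in a Gaussian MN, cross-group edges correspond to the off-diagonal block of the precision matrix. The sketch below illustrates that reading, under Gaussian assumptions; it is not the paper's partitioned-ratio estimator.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two groups of variables: x = (x_A, x_B). Under a Gaussian Markov
# network, cross-group links correspond to the off-diagonal block of
# the precision matrix Theta = Sigma^{-1}. This is Gaussian intuition
# only, not the paper's partitioned-ratio method.
p_a, p_b, n = 3, 3, 50_000
p = p_a + p_b

# Ground-truth precision matrix with one cross-group edge: x_A[0] -- x_B[0].
theta = np.eye(p)
theta[0, p_a] = theta[p_a, 0] = 0.4

sigma = np.linalg.inv(theta)
x = rng.multivariate_normal(np.zeros(p), sigma, size=n)

# Estimate the precision matrix and read off the cross-group block.
theta_hat = np.linalg.inv(np.cov(x, rowvar=False))
cross = theta_hat[:p_a, p_a:]  # links between group A and group B

print(np.round(cross, 2))  # only the (0, 0) entry is far from zero
```

The point of the paper is that one can estimate this cross block directly, without modelling the (possibly dense) within-group structure.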

Citations

Inter-Subject Analysis: A Partial Gaussian Graphical Model Approach
TLDR
A modeling framework for ISA is proposed that is based on Gaussian graphical models, under which ISA can be converted to the problem of estimation and inference of a partial Gaussian graphical model.
Inter-Subject Analysis: Inferring Sparse Interactions with Dense Intra-Graphs
TLDR
A new modeling framework for Inter-Subject Analysis (ISA) is proposed, based on Gaussian graphical models, under which ISA can be converted to the problem of estimation and inference of the inter-subject precision matrix, together with an "untangle and chord" procedure to de-bias the estimator.

References

Showing 1-10 of 39 references
Direct estimation of differential networks.
TLDR
In this paper, each condition-specific network is modeled using the precision matrix of a multivariate normal random vector, and a method is proposed to directly estimate the difference of the precision matrices.
Probabilistic Graphical Models - Principles and Techniques
TLDR
The framework of probabilistic graphical models, presented in this book, provides a general approach for causal reasoning and decision making under uncertainty, allowing interpretable models to be constructed and then manipulated by reasoning algorithms.
Sparse inverse covariance estimation with the graphical lasso.
TLDR
Using a coordinate descent procedure for the lasso, a simple algorithm is developed that solves a 1000-node problem in at most a minute and is 30-4000 times faster than competing methods.
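The graphical lasso referenced above is available off the shelf; a minimal sketch of recovering a sparse Gaussian graph with scikit-learn's `GraphicalLasso` (the `alpha` value here is an illustrative choice, not from the paper):

```python
import numpy as np
from sklearn.covariance import GraphicalLasso

rng = np.random.default_rng(1)

# Sparse ground-truth precision matrix on 5 variables: one edge (0, 1).
theta = np.eye(5)
theta[0, 1] = theta[1, 0] = 0.45
sigma = np.linalg.inv(theta)
x = rng.multivariate_normal(np.zeros(5), sigma, size=5000)

# Graphical lasso: L1-penalized Gaussian MLE of the precision matrix.
model = GraphicalLasso(alpha=0.05).fit(x)
prec = model.precision_

# Nonzero off-diagonal entries of the estimated precision are the edges.
edges = [(i, j) for i in range(5) for j in range(i + 1, 5)
         if abs(prec[i, j]) > 1e-6]
print(edges)
```

The L1 penalty `alpha` trades off sparsity against fit; cross-validation (e.g. `GraphicalLassoCV`) is the usual way to choose it.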
Model Selection in Gaussian Graphical Models: High-Dimensional Consistency of ℓ1-regularized MLE
TLDR
This work considers the problem of estimating the graph structure associated with a Gaussian Markov random field (GMRF) from i.i.d. samples and provides sufficient conditions on (n, p, d) for thel1-regularized MLE estimator to recover all the edges of the graph with high probability.
High Dimensional Semiparametric Gaussian Copula Graphical Models
TLDR
It is proved that the nonparanormal skeptic achieves the optimal parametric rates of convergence for both graph recovery and parameter estimation, and this result suggests that nonparanormal graphical models can be used as a safe replacement of the popular Gaussian graphical models, even when the data are truly Gaussian.
The Nonparanormal SKEPTIC
TLDR
In high dimensional settings, it is proved that the nonparanormal skeptic achieves the optimal parametric rate of convergence in both graph and parameter estimation.
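The SKEPTIC's key trick is to replace the sample correlation matrix with a rank-based estimate (for Kendall's tau, Σ̂_jk = sin(π τ_jk / 2)), which is invariant to monotone marginal transforms. A short sketch of that transform (the downstream glasso step is omitted):

```python
import numpy as np
from scipy.stats import kendalltau

rng = np.random.default_rng(2)

# Latent Gaussian data passed through a monotone (non-Gaussian)
# marginal transform; Kendall's tau is unchanged by the transform.
n, p = 2000, 3
cov = [[1.0, 0.6, 0.0], [0.6, 1.0, 0.0], [0.0, 0.0, 1.0]]
z = rng.multivariate_normal(np.zeros(p), cov, size=n)
x = np.exp(z)

# Rank-based correlation estimate: Sigma_jk = sin(pi/2 * tau_jk).
s = np.eye(p)
for j in range(p):
    for k in range(j + 1, p):
        tau, _ = kendalltau(x[:, j], x[:, k])
        s[j, k] = s[k, j] = np.sin(np.pi / 2 * tau)

print(np.round(s, 2))  # recovers the latent correlation despite exp()
```

The matrix `s` can then be plugged into any Gaussian graph estimator (e.g. the graphical lasso) in place of the sample correlation.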
High-dimensional Ising model selection using ℓ1-regularized logistic regression
TLDR
It is proved that consistent neighborhood selection can be obtained for sample sizes $n=\Omega(d^3\log p)$ with exponentially decaying error, and when these same conditions are imposed directly on the sample matrices, it is shown that a reduced sample size suffices for the method to estimate neighborhoods consistently.
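The neighborhood-selection scheme analyzed above regresses each node on all the others with ℓ1-penalized logistic regression; the nonzero coefficients are the estimated neighbors. A minimal sketch with scikit-learn (the synthetic data and penalty `C` are illustrative choices, not from the paper):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)

# Binary data where node 0 depends on node 1 only, via a logistic
# conditional; nodes 2..4 are independent noise.
n, p = 5000, 5
x = rng.integers(0, 2, size=(n, p)).astype(float)
logits = 2.0 * (2 * x[:, 1] - 1)
x[:, 0] = (rng.random(n) < 1 / (1 + np.exp(-logits))).astype(float)

# Neighborhood selection for node 0: L1-penalized logistic regression
# of node 0 on all remaining nodes.
clf = LogisticRegression(penalty="l1", solver="liblinear", C=0.01)
clf.fit(x[:, 1:], x[:, 0])

neighbors = [j + 1 for j, w in enumerate(clf.coef_[0]) if abs(w) > 1e-6]
print(neighbors)  # node 1 is the only estimated neighbor
```

Running this per node and combining the neighborhoods (by union or intersection) yields the full graph estimate.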
Mutual information approximation via maximum likelihood estimation of density ratio
TLDR
The proposed method, Maximum Likelihood Mutual Information (MLMI), possesses useful properties, e.g., it does not involve density estimation, the global optimal solution can be efficiently computed, it has suitable convergence properties, and model selection criteria are available.
Approximating Mutual Information by Maximum Likelihood Density Ratio Estimation
TLDR
This paper proposes a new method of approximating mutual information based on maximum likelihood estimation of a density ratio function, called Maximum Likelihood Mutual Information (MLMI), which has several attractive properties, e.g., density estimation is not involved, it is a single-shot procedure, the global optimal solution can be efficiently computed, and cross-validation is available for model selection.
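MLMI's central idea is to fit the density ratio r(x, y) = p(x, y) / (p(x) p(y)) directly and average log r over joint samples. The sketch below uses the same density-ratio idea but with a simple stand-in estimator (logistic regression classifying joint pairs against permuted pairs), not MLMI's own model:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(4)

# MI(X;Y) = E_joint[log r(x, y)] with r = p(x, y) / (p(x) p(y)).
# Stand-in ratio estimator (not MLMI's model): classify joint pairs
# against pairs whose y was independently permuted; with balanced
# classes, the classifier's log-odds estimate log r.
n = 20_000
x = rng.normal(size=n)
y = 0.8 * x + 0.6 * rng.normal(size=n)  # y ~ N(0, 1), corr(x, y) = 0.8
y_perm = rng.permutation(y)             # breaks the dependence

def feats(a, b):
    # Quadratic features: exact for the bivariate-Gaussian log ratio.
    return np.column_stack([a, b, a * b, a**2, b**2])

z = np.vstack([feats(x, y), feats(x, y_perm)])
labels = np.r_[np.ones(n), np.zeros(n)]

clf = LogisticRegression(max_iter=1000).fit(z, labels)
log_ratio = clf.decision_function(feats(x, y))
mi_hat = log_ratio.mean()

# Closed form for a bivariate Gaussian: -0.5 * log(1 - rho^2).
mi_true = -0.5 * np.log(1 - 0.8**2)
print(round(mi_hat, 2), round(mi_true, 2))
```

Like MLMI, this avoids density estimation entirely and reduces MI approximation to a convex fitting problem.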