Multivariate Bernoulli distribution

@article{Dai2013MultivariateBD,
  title={Multivariate Bernoulli distribution},
  author={Bin Dai and Shilin Ding and Grace Wahba},
  journal={Bernoulli},
  year={2013},
  volume={19},
  pages={1465--1483}
}
In this paper, we consider the multivariate Bernoulli distribution as a model to estimate the structure of graphs with binary nodes. The distribution is discussed in the framework of the exponential family, and its statistical properties regarding independence of the nodes are demonstrated. Importantly, the model can estimate not only the main effects and pairwise interactions among the nodes but also higher-order interactions, allowing for the existence of complex clique…
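As a concrete illustration (not taken from the paper), the exponential-family form sketched in the abstract assigns a natural parameter f_S to each non-empty subset S of nodes, with P(Y = y) proportional to exp of the sum of f_S over subsets S whose coordinates are all 1 in y. A minimal sketch, with hypothetical parameter values:

```python
from itertools import product
from math import exp

def mvb_pmf(f, p):
    """PMF of a multivariate Bernoulli distribution in exponential-family
    form: `f` maps a non-empty frozenset S of node indices to its natural
    parameter f_S, and P(Y = y) is proportional to exp(sum of f_S over
    subsets S whose coordinates are all 1 in y)."""
    table = {}
    for y in product((0, 1), repeat=p):
        on = frozenset(i for i, v in enumerate(y) if v)
        table[y] = exp(sum(v for s, v in f.items() if s <= on))
    z = sum(table.values())          # normalizing constant
    return {y: w / z for y, w in table.items()}

# Hypothetical 3-node example with main effects, one pairwise
# interaction, and one third-order (clique) interaction:
f = {frozenset({0}): 0.5, frozenset({1}): -0.2, frozenset({2}): 0.1,
     frozenset({0, 1}): 0.8,
     frozenset({0, 1, 2}): -1.0}
pmf = mvb_pmf(f, 3)
```

Setting every interaction parameter f_S with |S| ≥ 2 to zero makes the coordinates independent, which is the independence property the abstract refers to.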

Tables from this paper

Citations

Bernoulli vector autoregressive model
Selection and estimation for mixed graphical models.
TLDR
The selection of edges between nodes whose conditional distributions take different parametric forms is investigated, and it is shown that efficiency can be gained if edge estimates obtained from the regressions of particular nodes are used to reconstruct the graph.
Learning binary undirected graph in low dimensional regime.
TLDR
A simple method is proposed that provides a closed-form estimator of the parameter vector and, through its support, an estimate of the undirected graph associated with the MVB distribution; the estimator is proved to be consistent but is feasible only in low-dimensional regimes.
Elementary Estimators for Sparse Covariance Matrices and other Structured Moments
TLDR
This work proposes a class of elementary convex estimators, available in closed form in many cases, for estimating general structured moments, and shows that the regularized MLEs for covariance estimation are non-convex even when the regularization functions themselves are convex.
Bayesian information criterion approximations to Bayes factors for univariate and multivariate logistic regression models
TLDR
Simulations show the accuracy of the approximations for small sample sizes as well as comparisons to conclusions from frequentist testing, and an application in prostate cancer illustrates the approximation for large data sets in a practical example.
Nonparametric Bayes Modeling of Populations of Networks
TLDR
A flexible Bayesian nonparametric approach for modeling the population distribution of network-valued data through a mixture model that reduces dimensionality and efficiently incorporates network information within each mixture component by leveraging latent space representations is proposed.
Relaxed Multivariate Bernoulli Distribution and Its Applications to Deep Generative Models
TLDR
A multivariate generalization of the Relaxed Bernoulli distribution is proposed, which can be reparameterized and can capture the correlation between variables via a Gaussian copula; its effectiveness is demonstrated in two tasks: density estimation with a Bernoulli VAE and semi-supervised multi-label classification.
Composite likelihood approach to the regression analysis of spatial multivariate ordinal data and spatial compositional data with exact zero values
In many environmental and ecological studies, it is of interest to model compositional data. One approach is to consider positive random vectors that are subject to a unit-sum constraint. …
Poisson Latent Feature Calculus for Generalized Indian Buffet Processes
The purpose of this work is to describe a unified, and indeed simple, mechanism for non-parametric Bayesian analysis, construction and generative sampling of a large class of latent feature models.
Edge coherence in multiplex networks
TLDR
This paper introduces a nonparametric framework for the setting where multiple networks are observed on the same set of nodes, also known as multiplex networks, and introduces the notion of edge coherence as a measure of linear dependence in the graph-limit space.

References

SHOWING 1-10 OF 32 REFERENCES
Graphical Models, Exponential Families, and Variational Inference
TLDR
The variational approach provides a complementary alternative to Markov chain Monte Carlo as a general source of approximation methods for inference in large-scale statistical models.
High-dimensional Ising model selection using ℓ1-regularized logistic regression
TLDR
It is proved that consistent neighborhood selection can be obtained for sample sizes $n=\Omega(d^3\log p)$ with exponentially decaying error, and when these same conditions are imposed directly on the sample matrices, it is shown that a reduced sample size suffices for the method to estimate neighborhoods consistently.
High-dimensional graphs and variable selection with the Lasso
TLDR
It is shown that neighborhood selection with the Lasso is a computationally attractive alternative to standard covariance selection for sparse high-dimensional graphs, since estimating each node's neighborhood is equivalent to variable selection in a Gaussian linear model.
Learning Higher-Order Graph Structure with Features by Structure Penalty
TLDR
It is proved that discrete undirected graphical models with feature X are equivalent to multivariate discrete models and a Structure Lasso penalty is imposed on groups of functions to learn the graph structure.
Model Selection Through Sparse Maximum Likelihood Estimation for Multivariate Gaussian or Binary Data
TLDR
This work considers the problem of estimating the parameters of a Gaussian or binary distribution in such a way that the resulting undirected graphical model is sparse, and presents two new algorithms for solving problems with at least a thousand nodes in the Gaussian case.
Smoothing Spline ANOVA for Multivariate Bernoulli Observations With Application to Ophthalmology Data
We combine a smoothing spline analysis of variance (SS-ANOVA) model and a log-linear model to build a partly flexible model for multivariate Bernoulli data. The joint distribution conditioning on the …
Nonconcave penalized composite conditional likelihood estimation of sparse Ising models
TLDR
This work proposes efficient procedures for learning a sparse Ising model based on a penalized composite conditional likelihood with nonconcave penalties; finite-sample performance is demonstrated via simulation studies and illustrated on the Human Immunodeficiency Virus type 1 protease structure.
The Bayesian Lasso
The Lasso estimate for linear regression parameters can be interpreted as a Bayesian posterior mode estimate when the regression parameters have independent Laplace (i.e., double-exponential) priors.
On Model Selection Consistency of Lasso
TLDR
It is proved that a single condition, which is called the Irrepresentable Condition, is almost necessary and sufficient for Lasso to select the true model both in the classical fixed p setting and in the large p setting as the sample size n gets large.
Generalized Linear Models
TLDR
This is the first book on generalized linear models written by authors not mostly associated with the biological sciences, and it is thoroughly enjoyable to read.
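Several of the references above (the ℓ1-regularized Ising selection and Lasso neighborhood-selection papers) share one recipe: regress each node on all the others with an ℓ1 penalty and read the graph off the nonzero coefficients. A minimal sketch of this idea using scikit-learn on assumed synthetic data (all names and data below are hypothetical, not from the papers):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def neighborhood_select(X, C=0.5):
    """Estimate the edge set of a binary undirected graph by fitting an
    l1-penalized logistic regression of each node on the remaining nodes
    and keeping an edge when either regression gives it a nonzero weight
    (the "OR" symmetrization rule)."""
    n, p = X.shape
    edges = set()
    for j in range(p):
        others = [k for k in range(p) if k != j]
        clf = LogisticRegression(penalty="l1", solver="liblinear", C=C)
        clf.fit(X[:, others], X[:, j])
        edges.update(frozenset({j, k})
                     for k, w in zip(others, clf.coef_[0]) if abs(w) > 1e-6)
    return edges

# Synthetic data: node 1 is a noisy copy of node 0, node 2 is independent.
rng = np.random.default_rng(0)
x0 = rng.integers(0, 2, 400)
x1 = np.where(rng.random(400) < 0.05, 1 - x0, x0)   # 5% label flips
x2 = rng.integers(0, 2, 400)
X = np.column_stack([x0, x1, x2])
edges = neighborhood_select(X)
```

The choice of the regularization strength `C` and the "OR" versus "AND" symmetrization rule are tuning decisions discussed in those references.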