- Published 2015

We have focused mainly on linear models for signals, in particular the subspace model x = Hθ, where H is an n×k matrix and θ ∈ R^k is a vector of k < n parameters describing the signal x. The subspace model is useful because it reduces the number of parameters, or degrees of freedom, in the model from n to k. While applicable to many real-world problems, this is not the only way to model signals with a small number of parameters.

Another widely used approach is called graphical modeling. The basic idea in a graphical model is to treat the variables in the signal vector x as random variables and to explicitly represent probabilistic relationships between them. More specifically, each of the n variables is represented as a vertex in a graph, and probabilistic relationships between variables are represented by edges. If two variables are conditionally independent (more on this in a moment), then there is no edge between them. In general, a graph with n vertices can have up to n(n−1)/2 = O(n^2) edges. Each edge can be viewed as a degree of freedom in the graphical model, so if the number of edges is limited to a smaller number, we have a model with fewer degrees of freedom. Graphical models are also often referred to as Bayesian networks.

Consider the example graph shown in Figure 1 below. The graphical model represents a joint distribution p(x1, x2, . . . , x7). Specifically, the edges indicate constraints that the joint distribution p(x1, x2, . . . , x7) satisfies: if two variables are conditionally independent given all other variables, this is indicated by the absence of an edge between them. For example, the graph structure tells us that any two variables not joined by an edge are conditionally independent given all the other variables.
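One concrete instance of this edge/conditional-independence correspondence (an illustrative assumption, not something the note specifies) is the Gaussian graphical model: for a multivariate Gaussian, variables x_i and x_j are conditionally independent given all the others exactly when entry (i, j) of the precision matrix (the inverse covariance) is zero. A minimal sketch, using a three-variable chain x1 — x2 — x3 with no edge between x1 and x3:

```python
import numpy as np

# Precision (inverse covariance) matrix for a chain x1 - x2 - x3.
# The zero at entry (0, 2) encodes the missing edge between x1 and x3,
# i.e., conditional independence of x1 and x3 given x2.
Theta = np.array([[ 2.0, -1.0,  0.0],
                  [-1.0,  2.0, -1.0],
                  [ 0.0, -1.0,  2.0]])

# The covariance matrix is the inverse of the precision matrix; it is
# generally dense, so x1 and x3 are still marginally correlated.
Sigma = np.linalg.inv(Theta)
print(Sigma[0, 2] != 0)                              # marginally dependent

# Inverting the covariance recovers the precision matrix, whose (0, 2)
# entry is zero: x1 and x3 are conditionally independent given x2.
print(np.isclose(np.linalg.inv(Sigma)[0, 2], 0.0))
```

This also illustrates the degrees-of-freedom count from the text: a full graph on n = 3 vertices would have n(n−1)/2 = 3 edges, while the chain uses only 2, corresponding to the two nonzero off-diagonal entries of Theta.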

@inproceedings{2015ECE8N,
title={ECE 830 Note on Graphical Models},
author={},
year={2015}
}