Convex Relaxations for Markov Random Field MAP estimation


Markov Random Fields (MRFs) are commonly used in computer vision and machine learning applications to model interactions of interdependent variables. Finding the Maximum A Posteriori (MAP) solution of an MRF is in general intractable, and one has to resort to approximate solutions. We review some of the recent literature on convex relaxations for MAP estimation. Our starting point is to notice that MAP estimation (a discrete problem) is in fact equivalent to a real-valued but non-convex Quadratic Program (QP). We reformulate some of those relaxations and see that we can distinguish two main strategies: 1) optimize a convex upper bound of the (non-convex) cost function (L2QP, CQP, our spectral relaxation); 2) reformulate as a linear objective using lift-and-project and optimize over a convex upper bound of the (non-convex) feasible set (SDP, SOCP, LP relaxations). We analyse these relaxations according to the following criteria: optimality conditions, relative dominance relationships, multiplicative/additive bounds on the quality of the approximation, ability to handle arbitrary clique size, space/time complexity, and convergence guarantees. We show a few surprising results, such as the equivalence between the CQP relaxation (a quadratic program) and the SOCP relaxation (containing a linear objective), and furthermore show that a large set of SOCP constraints are implied by the local marginalization constraint. Along the way, we also contribute a few new results. The first one is a 1/k^{c-1} multiplicative approximation bound for an MRF with arbitrary clique size c and k labels, in the general case (extending the pairwise case c = 2). The second one is a tighter additive bound for the CQP and LP relaxations in the general case (with k = 2 labels), which also has the big advantage of being invariant to reparameterizations. The new bound involves a modularity norm instead of an ℓ1 norm.
We also show that a multiplicative bound δ for the LP relaxation would imply δ ≤ 1/2 (for k = 2), putting LP on par with other convex relaxations such as L2QP. Finally, we characterize the equivalence classes of a (broader) class of reparameterizations, show their dimension, and show how a basis can be used to generate potentially tighter relaxations. We believe these contributions are novel.
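The equivalence between discrete MAP estimation and a non-convex QP, which the relaxations above start from, can be illustrated concretely. The following is a minimal sketch, not code from the paper: the problem sizes and random potentials are made-up placeholders. It checks, by enumeration, that the score of every labeling of a small pairwise MRF equals a quadratic form evaluated at the corresponding 0/1 indicator vector.

```python
import itertools
import numpy as np

# Hypothetical pairwise MRF: n nodes, k labels, random potentials.
rng = np.random.default_rng(0)
n, k = 4, 2
unary = rng.standard_normal((n, k))             # theta_i(a)
pair = rng.standard_normal((n, n, k, k))        # theta_ij(a, b)
pair = (pair + pair.transpose(1, 0, 3, 2)) / 2  # enforce theta_ij(a,b) = theta_ji(b,a)
for i in range(n):
    pair[i, i] = 0.0                            # no self-interactions

def score(labels):
    """Discrete MAP objective: sum of unary and pairwise potentials."""
    s = sum(unary[i, labels[i]] for i in range(n))
    s += sum(pair[i, j, labels[i], labels[j]]
             for i in range(n) for j in range(i + 1, n))
    return s

def indicator(labels):
    """Flattened 0/1 indicator vector: x[i*k + a] = 1 iff node i takes label a."""
    x = np.zeros(n * k)
    for i, a in enumerate(labels):
        x[i * k + a] = 1.0
    return x

# Quadratic form: A[(i,a),(j,b)] = theta_ij(a,b); vector b holds the unaries.
A = pair.transpose(0, 2, 1, 3).reshape(n * k, n * k)
b = unary.reshape(n * k)

def qp(x):
    # The 0.5 compensates for each unordered pair being counted twice in x^T A x.
    return 0.5 * x @ A @ x + b @ x

# On every indicator vector the QP objective matches the discrete score, so
# maximizing the QP over the set of indicator vectors is exactly MAP estimation.
for labels in itertools.product(range(k), repeat=n):
    assert abs(score(labels) - qp(indicator(labels))) < 1e-9
```

Dropping the integrality of x yields the real-valued but non-convex QP referred to in the abstract, which the two relaxation strategies then bound from above (convexified objective vs. convexified feasible set).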

Cite this paper

@inproceedings{Cour2008ConvexRF,
  title={Convex Relaxations for Markov Random Field MAP estimation},
  author={Timoth{\'e}e Cour},
  year={2008}
}