Learn More
Probabilistic soft logic (PSL) is a framework for collective, probabilistic reasoning in relational domains. PSL uses first-order logic rules as a template language for graphical models over random variables with soft truth values from the interval [0, 1]. Inference in this setting is a continuous optimization task, which can be solved efficiently.
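As an illustration of the continuous relaxation (a sketch using the Łukasiewicz logic commonly associated with PSL, not text from the abstract above): a ground rule $A \Rightarrow B$ over soft truth values $a, b \in [0, 1]$ is satisfied to degree $\min\{1,\ 1 - a + b\}$, so its distance to satisfaction is the hinge

$$d(a, b) = \max\{a - b,\ 0\},$$

and inference minimizes a weighted sum of such hinges over $[0, 1]^n$, which is a convex program.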
This paper introduces hinge-loss Markov random fields (HL-MRFs), a new class of probabilistic graphical models particularly well-suited to large-scale structured prediction and learning. We derive HL-MRFs by unifying and then generalizing three different approaches to scalable inference in structured models: (1) randomized algorithms for MAX SAT, (2) local consistency relaxation for Markov random fields, and (3) reasoning about continuous information with fuzzy logic.
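For concreteness, the hinge-loss potentials that give HL-MRFs their name can be sketched as follows (notation assumed here, not quoted from the abstract):

$$P(\mathbf{y} \mid \mathbf{x}) \propto \exp\Big(-\sum_{j=1}^{m} w_j \big(\max\{\ell_j(\mathbf{y}, \mathbf{x}),\ 0\}\big)^{p_j}\Big),$$

where $\mathbf{y} \in [0, 1]^n$, each $\ell_j$ is a linear function, $w_j \ge 0$ is a rule weight, and $p_j \in \{1, 2\}$ selects a linear or squared hinge.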
Graphical models for structured domains are powerful tools, but the computational complexities of combinatorial prediction spaces can force restrictions on models, or require approximate inference in order to be tractable. Instead of working in a combinatorial space, we use hinge-loss Markov random fields (HL-MRFs), an expressive class of graphical models over continuous variables.
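To make the convexity concrete, here is a minimal sketch of MAP inference in a toy two-variable HL-MRF, posed as a convex program with the off-the-shelf solver cvxpy; the weights and linear hinge terms are made-up toy values, not taken from any of the papers above.

import cvxpy as cp

# Soft truth values y[0], y[1], constrained to [0, 1].
y = cp.Variable(2)

# Hypothetical rule weights and linear hinge potentials: for example, a
# ground rule y0 => y1 contributes the hinge max{y0 - y1, 0}.
objective = 2.0 * cp.pos(y[0] - y[1]) + 1.0 * cp.pos(0.8 - y[0])

problem = cp.Problem(cp.Minimize(objective), [y >= 0, y <= 1])
problem.solve()
print(y.value)  # MAP assignment of the two soft truth values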
Probabilistic graphical models are powerful tools for analyzing constrained, continuous domains. However, finding most-probable explanations (MPEs) in these models can be computationally expensive. In this paper, we improve the scalability of MPE inference in a class of graphical models with piecewise-linear and piecewise-quadratic dependencies and linear constraints.
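A sketch of the consensus-optimization idea behind such scalability gains (the standard ADMM consensus form, assumed here rather than quoted from the paper): decompose the MPE objective $\sum_j f_j(\mathbf{y})$ by giving each potential $f_j$ a local copy $\mathbf{y}_j$ of the variables it touches, constrained to agree with a global copy $\bar{\mathbf{y}}$:

$$\min_{\{\mathbf{y}_j\},\ \bar{\mathbf{y}}} \sum_j f_j(\mathbf{y}_j) \quad \text{s.t.} \quad \mathbf{y}_j = \bar{\mathbf{y}}_{[j]} \ \ \forall j.$$

ADMM then alternates cheap local updates of each $\mathbf{y}_j$, an averaging step for $\bar{\mathbf{y}}$, and dual-variable updates, so piecewise-linear and piecewise-quadratic potentials reduce to many small, independent subproblems.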
We prove the equivalence of first-order local consistency relaxations and the MAX SAT relaxation of Goemans and Williamson (1994) for a class of MRFs we refer to as logical MRFs. This allows us to combine the advantages of each into a single MAP inference technique: solving the local consistency relaxation with any of a number of highly scalable message-passing algorithms.
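For reference, the Goemans–Williamson MAX SAT relaxation mentioned here is the following linear program (the standard statement, included for context): given clauses $C_j$ with weight $w_j$, positive literals $I_j^+$, and negative literals $I_j^-$,

$$\max \sum_j w_j z_j \quad \text{s.t.} \quad z_j \le \sum_{i \in I_j^+} y_i + \sum_{i \in I_j^-} (1 - y_i), \qquad y_i, z_j \in [0, 1],$$

whose solutions, suitably rounded, carry the $3/4$-approximation guarantee of Goemans and Williamson (1994).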
Probabilistic models with latent variables are powerful tools that can help explain related phenomena by mediating dependencies among them. Learning in the presence of latent variables can be difficult, however, because marginalizing them out is often intractable; more commonly, one instead maximizes a lower bound on the marginal likelihood.
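The lower bound in question is the standard variational bound (stated here for context, not quoted from the abstract): for any distribution $q$ over the latent variables $z$,

$$\log p(x; \theta) \;\ge\; \mathbb{E}_{q(z)}\big[\log p(x, z; \theta)\big] - \mathbb{E}_{q(z)}\big[\log q(z)\big],$$

with equality exactly when $q(z) = p(z \mid x; \theta)$, which is why maximizing the bound stands in for the intractable marginalization.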