Markov Logic: A Unifying Framework for Statistical Relational Learning
Interest in statistical relational learning (SRL) has grown rapidly in recent years.
Several key SRL tasks have been identified, and a large number of approaches have
been proposed. Increasingly, a unifying framework is needed to facilitate transfer of
knowledge across tasks and approaches, to compare approaches, and to help bring
structure to the field. We propose Markov logic as such a framework. Syntactically,
Markov logic is indistinguishable from first-order logic, except that each formula
has a weight attached. Semantically, a set of Markov logic formulas represents
a probability distribution over possible worlds, in the form of a log-linear model
with one feature per grounding of a formula in the set, with the corresponding
weight. We show how approaches like probabilistic relational models, knowledge-based model construction, and stochastic logic programs can be mapped into Markov
logic. We also show how tasks like collective classification, link prediction, link-
based clustering, social network modeling, and object identification can be concisely
formulated in Markov logic. Finally, we develop learning and inference algorithms
for Markov logic, and report experimental results on a link prediction task.
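The log-linear semantics described above can be illustrated with a minimal sketch. This is a hypothetical toy example, not the paper's code: a domain with one constant A, ground atoms Smokes(A) and Cancer(A), and a single weighted formula Smokes(x) => Cancer(x) with an assumed weight of 1.5. Each possible world x gets probability proportional to exp(sum over formulas i of w_i * n_i(x)), where n_i(x) counts the true groundings of formula i in x.

```python
import itertools
import math

# Toy Markov logic network (hypothetical example).
# Domain: one constant A; ground atoms Smokes(A), Cancer(A).
# One weighted formula: Smokes(x) => Cancer(x), with assumed weight w = 1.5.
w = 1.5
atoms = ["Smokes(A)", "Cancer(A)"]

def n_true_groundings(world):
    # world maps each ground atom to a truth value; with one constant,
    # the formula has a single grounding: Smokes(A) => Cancer(A).
    smokes, cancer = world["Smokes(A)"], world["Cancer(A)"]
    return 1 if (not smokes) or cancer else 0  # material implication

# Enumerate all 2^2 possible worlds (truth assignments to the ground atoms).
worlds = [dict(zip(atoms, vals))
          for vals in itertools.product([False, True], repeat=2)]

# P(x) = (1/Z) * exp(w * n(x)): one feature per grounding, with its weight.
unnormalized = [math.exp(w * n_true_groundings(x)) for x in worlds]
Z = sum(unnormalized)
probs = [u / Z for u in unnormalized]

# The three worlds satisfying the implication each get exp(w)/Z; the one
# violating world (Smokes true, Cancer false) gets only 1/Z.
for world, p in zip(worlds, probs):
    print(world, round(p, 4))
```

As the weight w grows, the violating world's probability approaches zero, recovering hard first-order logic in the limit; a weight of zero makes the formula irrelevant.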