Discriminative Training of Markov Logic Networks
Parag Singla and Pedro Domingos
Abstract:
Many machine learning applications require a combination of
probability and first-order logic. Markov logic networks (MLNs)
accomplish this by attaching weights to first-order clauses, and
viewing these as templates for features of Markov networks. Model
parameters (i.e., clause weights) can be learned by maximizing the
likelihood of a relational database, but this can be quite costly and
lead to suboptimal results for any given prediction task. In this
paper we propose a discriminative approach to training MLNs, one which
optimizes the conditional likelihood of the query predicates given the
evidence ones, rather than the joint likelihood of all predicates.
We extend Collins's (2002) voted perceptron algorithm for HMMs to MLNs
by replacing the Viterbi algorithm with a weighted satisfiability solver.
Experiments on entity resolution and link prediction tasks show the
advantages of this approach compared to generative MLN training, as
well as compared to purely probabilistic and purely logical approaches.
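The update at the heart of this approach is simple: the gradient of the conditional log-likelihood with respect to a clause weight w_i is the true count of the clause's satisfied groundings minus its expected count under the current model, and the expectation is approximated by the count in the most probable (MAP) state, giving the step w_i <- w_i + eta * (n_i(x, y) - n_i(x, y*)), where y* is found by the weighted satisfiability solver (MaxWalkSAT in the paper). The Python sketch below illustrates this reading of the abstract; it is not code from the paper, and the function names, the toy smokers clause, and the exhaustive search standing in for MaxWalkSAT are all hypothetical.

```python
import itertools

def map_inference(clauses, weights, evidence, query_atoms):
    """Return the most probable truth assignment to the query atoms
    given the evidence atoms, under the current clause weights.
    The paper uses MaxWalkSAT here; exhaustive enumeration is a toy
    stand-in that is only feasible for a handful of query atoms."""
    best_world, best_score = None, float("-inf")
    for values in itertools.product([False, True], repeat=len(query_atoms)):
        world = dict(evidence)
        world.update(zip(query_atoms, values))
        # MLN score of a world: sum over clauses of
        # weight * number of satisfied groundings.
        score = sum(w * n(world) for w, n in zip(weights, clauses))
        if score > best_score:
            best_world, best_score = world, score
    return best_world

def voted_perceptron(clauses, examples, query_atoms, eta=0.1, epochs=10):
    """Discriminative weight learning for an MLN. Each clause is given
    as a function n_i(world) counting its satisfied groundings. The
    conditional-likelihood gradient is approximated by true counts
    minus counts in the MAP world, and the returned weights are
    averaged over all updates, as in Collins (2002)."""
    weights = [0.0] * len(clauses)
    summed = [0.0] * len(clauses)
    steps = 0
    for _ in range(epochs):
        for evidence, query_truth in examples:
            true_world = dict(evidence)
            true_world.update(query_truth)
            map_world = map_inference(clauses, weights, evidence, query_atoms)
            for i, n in enumerate(clauses):
                # w_i <- w_i + eta * (n_i(x, y) - n_i(x, y*))
                weights[i] += eta * (n(true_world) - n(map_world))
            summed = [s + w for s, w in zip(summed, weights)]
            steps += 1
    return [s / steps for s in summed]

# Toy usage (illustrative, not from the paper): one clause
# Friends(x,y) ^ Smokes(x) => Smokes(y), grounded over constants A, B.
pairs = [("A", "B"), ("B", "A")]
def n_clause(world):
    return sum(1 for x, y in pairs
               if not (world.get(f"Friends({x},{y})") and world.get(f"Smokes({x})"))
               or world.get(f"Smokes({y})"))

evidence = {"Friends(A,B)": True, "Friends(B,A)": True, "Smokes(A)": True}
query_truth = {"Smokes(B)": True}
print(voted_perceptron([n_clause], [(evidence, query_truth)], ["Smokes(B)"]))
```

Averaging the weights over all updates (rather than keeping the final ones) is what makes this the voted perceptron; it damps the oscillation that the raw perceptron update exhibits on weights that the MAP state flips back and forth.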
Download:
Paper (PDF)
Datasets used:
UW-CSE
Cora