Recursive Random Fields
Daniel Lowd and Pedro Domingos
Abstract:
A formula in first-order logic can be viewed as a tree, with a logical connective at each node, and a knowledge base can be viewed as a tree whose root is a conjunction. Markov logic [Richardson and Domingos, 2006] makes this conjunction probabilistic, as well as the universal quantifiers directly under it, but the rest of the tree remains purely logical. This causes an asymmetry in the treatment of conjunctions and disjunctions, and of universal and existential quantifiers. We propose to overcome this by allowing the features of Markov logic networks (MLNs) to themselves be nested MLNs. We call this representation recursive random fields (RRFs). RRFs can represent many first-order distributions exponentially more compactly than MLNs. We perform inference in RRFs using Markov chain Monte Carlo (MCMC) and iterated conditional modes (ICM), and weight learning using a form of backpropagation. Weight learning in RRFs is more powerful than structure learning in MLNs. Applied to first-order knowledge bases, it provides a very flexible form of theory revision. We evaluate RRFs on the problem of probabilistic integrity constraints in databases, and obtain promising results.
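To make the recursive structure concrete, the sketch below evaluates a toy RRF over a handful of binary ground atoms, following the nested-MLN reading of the abstract: leaves are ground atoms, and each internal feature computes an exponentiated weighted sum of its children. This is a minimal illustration under our own assumptions, not the authors' implementation; the class names (Leaf, Feature) and the brute-force normalization are ours, and per-feature normalizers are omitted (folded into a single global normalization).

    import itertools
    import math

    # Toy recursive random field over n binary ground atoms.
    # A node is either a Leaf (one ground atom) or an internal
    # Feature computing exp(sum_j w_j * f_j(x)) over its children,
    # mirroring the "features of MLNs are nested MLNs" idea.

    class Leaf:
        def __init__(self, atom):
            self.atom = atom          # index into the truth assignment
        def value(self, x):
            return float(x[self.atom])

    class Feature:
        def __init__(self, children, weights):
            self.children = children  # nested Leaf/Feature nodes
            self.weights = weights    # one weight per child
        def value(self, x):
            s = sum(w * c.value(x) for w, c in zip(self.weights, self.children))
            return math.exp(s)

    def probabilities(root, n_atoms):
        """Brute-force normalization over all 2^n truth assignments."""
        scores = {x: root.value(x)
                  for x in itertools.product((0, 1), repeat=n_atoms)}
        z = sum(scores.values())
        return {x: s / z for x, s in scores.items()}

    # Example: a two-level RRF with one internal feature over two atoms.
    clause = Feature([Leaf(0), Leaf(1)], [-1.5, 2.0])
    root = Feature([clause], [1.0])
    for x, p in probabilities(root, 2).items():
        print(x, round(p, 3))

Because every node is a differentiable function of its children's values, the gradient of the (log-)probability with respect to any weight can be computed by the chain rule from the root down, which is why weight learning in this structure takes the form of backpropagation, as the abstract states.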