Learning Markov Logic Network Structure via Hypergraph Lifting
Stanley Kok and Pedro Domingos
Markov logic networks (MLNs) combine logic and probability by
attaching weights to first-order clauses, and viewing these as
templates for features of Markov networks. Learning MLN structure
from a relational database involves learning the clauses and weights.
The state-of-the-art MLN structure learners all involve some element
of greedily generating candidate clauses, and are susceptible to
local optima. To address this problem, we present an approach that
directly utilizes the data in constructing candidates. A relational
database can be viewed as a hypergraph with constants as nodes and
relations as hyperedges. We find paths of true ground atoms in the
hypergraph that are connected via their arguments. To make this
tractable (there are exponentially many paths in the hypergraph),
we lift the hypergraph by jointly clustering
the constants into higher-level concepts, and find paths in the lifted hypergraph.
We variabilize the ground atoms in each path, and use them to form
clauses, which are evaluated using a pseudo-likelihood measure.
In our experiments on three real-world datasets, we find that our
algorithm outperforms the state-of-the-art approaches.
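As a minimal illustration of the pipeline the abstract describes (not the authors' implementation), the sketch below represents a toy relational database as a hypergraph with constants as nodes and true ground atoms as hyperedges, enumerates short paths of atoms connected through shared constants, and variabilizes a path into a clause body. The relations and constants are invented for the example; the real algorithm additionally lifts the hypergraph by clustering constants before finding paths.

```python
# Sketch (assumed, not from the paper): database -> hypergraph -> paths -> clauses.
from collections import defaultdict
from itertools import count

# Hypothetical toy database: each true ground atom is (relation, (constants...)).
atoms = [
    ("Advises", ("pedro", "stanley")),
    ("Teaches", ("pedro", "cs546")),
    ("TAs", ("stanley", "cs546")),
]

# Hypergraph incidence index: constant (node) -> ground atoms (hyperedges) containing it.
incident = defaultdict(list)
for atom in atoms:
    for c in atom[1]:
        incident[c].append(atom)

def paths(max_len):
    """Enumerate paths of distinct ground atoms in which
    consecutive atoms share at least one constant."""
    results = []
    def extend(path, used):
        if 1 < len(path) <= max_len:
            results.append(list(path))
        if len(path) == max_len:
            return
        for c in path[-1][1]:            # constants of the last atom on the path
            for nxt in incident[c]:
                if nxt not in used:
                    used.add(nxt)
                    path.append(nxt)
                    extend(path, used)
                    path.pop()
                    used.remove(nxt)
    for a in atoms:
        extend([a], {a})
    return results

def variabilize(path):
    """Replace each distinct constant with a fresh variable,
    turning a path of ground atoms into a clause body."""
    counter = count(1)
    names = defaultdict(lambda: f"x{next(counter)}")
    return [(rel, tuple(names[c] for c in args)) for rel, args in path]
```

In a full structure learner, each variabilized path would then be turned into candidate clauses whose weights are scored, e.g. with a pseudo-likelihood measure as the abstract states; the naive enumeration here is exactly what lifting is meant to make tractable, since the number of paths grows exponentially with length.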
Derivation of the Log-Posterior