Learning the Structure of Markov Logic Networks

Stanley Kok and Pedro Domingos


Markov logic networks (MLNs) combine logic and probability by attaching weights to first-order clauses, and viewing these as templates for features of Markov networks. In this paper we develop an algorithm for learning the structure of MLNs from relational databases, combining ideas from inductive logic programming (ILP) and feature induction in Markov networks. The algorithm performs a beam or shortest-first search of the space of clauses, guided by a weighted pseudo-likelihood measure. This requires computing the optimal weights for each candidate structure, but we show how this can be done efficiently. The algorithm can be used to learn an MLN from scratch, or to refine an existing knowledge base. We have applied it in two real-world domains, and found that it outperforms using off-the-shelf ILP systems to learn the MLN structure, as well as pure ILP, purely probabilistic and purely knowledge-based approaches.
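To make the search strategy concrete, here is a minimal sketch of a beam search over clause extensions, with a toy scoring function standing in for the weighted pseudo-likelihood. All names are illustrative and the real algorithm fits optimal clause weights at every evaluation, which this toy omits; this is not the paper's released implementation.

```python
def beam_search(initial_clauses, literals, score, beam_width=2, max_len=3):
    """Beam search over clause extensions (illustrative sketch).

    score() is a stand-in for the weighted pseudo-likelihood measure;
    the actual algorithm computes optimal weights for each candidate
    clause before scoring it.
    """
    beam = [tuple(c) for c in initial_clauses]
    best = max(beam, key=score)
    for _ in range(max_len):
        # Extend every clause in the beam by one candidate literal.
        expanded = [c + (lit,) for c in beam for lit in literals if lit not in c]
        if not expanded:
            break
        expanded.sort(key=score, reverse=True)
        beam = expanded[:beam_width]
        if score(beam[0]) <= score(best):
            break  # no extension improved the best clause: stop (beam variant)
        best = beam[0]
    return best

# Toy example: reward literals from a "target" clause, penalize length.
target = {"SameAuthor(a1,a2)", "AuthorOf(a1,p1)", "AuthorOf(a2,p2)"}
score = lambda c: len(set(c) & target) - 0.1 * len(c)
best = beam_search([()], sorted(target | {"Noise(x)"}), score)
```

With this toy score the search greedily assembles the three target literals while discarding the noise literal. A shortest-first variant would instead exhaustively score all clauses of each length before moving to longer ones.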


Paper (PDF)

Datasets used:


Supplementary Information:

Templates and Syntactic Restrictions used in the Cora Domain

Aside from extending a clause by adding a single first-order predicate, we allow a "template predicate" to be added. A template predicate takes the form !SameAuthor(A1,A2) v !AuthorOfPaper(A1,P1) v !AuthorOfPaper(A2,P2). Analogous template predicates are defined for SameTitle/TitleOfPaper, SameVenue/VenueOfPaper, and SameYear/YearOfPaper.
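A template predicate can be viewed as a multi-literal extension step: all of its literals are appended to the candidate clause as one unit, rather than one at a time. A minimal sketch, assuming clauses are represented as tuples of literal strings (SameBib is used here only as an illustrative clause seed):

```python
# The SameAuthor template from the text: three literals added as one unit.
SAME_AUTHOR_TEMPLATE = (
    "!SameAuthor(A1,A2)",
    "!AuthorOfPaper(A1,P1)",
    "!AuthorOfPaper(A2,P2)",
)

def extend_with_template(clause, template):
    """Append all of a template's literals at once, skipping duplicates."""
    return clause + tuple(lit for lit in template if lit not in clause)

clause = ("SameBib(B1,B2)",)
extended = extend_with_template(clause, SAME_AUTHOR_TEMPLATE)
```

This keeps the three literals coupled during search, so the beam never has to pass through the low-scoring intermediate clauses that adding them one at a time would create.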

We also restrict the syntax of a clause in the following ways: