

2.3 Multinomial Distribution

We can easily extend our model from a binomial to a multinomial distribution in Markov logic. For example, we might want to model the outcome of a six-sided die over a number of throws. We enumerate the throws and the faces of the die with:

throw = {1,...,20}
face = {1,...,6}

and the outcome of each throw with the following predicate:

Outcome(throw, face)

Additionally, we want to model the fact that each throw has exactly one outcome. This requires two rules: one stating that at least one outcome occurs for each throw, and one stating that at most one outcome occurs:

Exist f Outcome(t, f).
Outcome(t, f) ^ f != f' => !Outcome(t, f').

These rules must be modeled as hard constraints (denoted by the full stop at the end of each formula). This style of modeling becomes cumbersome as an MLN grows to include more formulas. Worse, MCMC algorithms such as Gibbs sampling will not converge with these constraints in the model. Fortunately, Alchemy allows us to declare this type of constraint in a much more compact manner. Instead of the last two formulas, we can simply declare the predicate Outcome with the ! operator applied to the face argument:

Outcome(throw, face!)

The cumbersome notation is no longer necessary: the inference and learning algorithms enforce these block constraints internally, which alleviates the convergence problem and makes inference more efficient.
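
Putting these pieces together, the complete model (referred to below as multinomial.mln) might look like the following minimal sketch, assuming the type declarations and the predicate declaration live in the same file:

// multinomial.mln -- sketch of the complete model
throw = {1,...,20}
face = {1,...,6}

// The ! on face enforces exactly one outcome per throw.
Outcome(throw, face!)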

If we run probabilistic inference on this MLN, querying Outcome:

infer -i multinomial.mln -r multinomial.result -e empty.db -q Outcome
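
Alchemy writes one ground query atom per line to the result file, each paired with its estimated probability. A sketch of what multinomial.result might contain (the probabilities shown are illustrative, not actual output):

Outcome(1,1) 0.166
Outcome(1,2) 0.167
Outcome(1,3) 0.165
...
Outcome(20,6) 0.168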

We find that each outcome has (approximately) equal probability. Now, say we want to model a biased die that does not land on each face with equal probability. In Markov logic, this requires a different weight for each grounding of the face variable, so we would need to write six formulas, each a unit clause Outcome(t, f), with f running from 1 to 6. Again, Alchemy helps us with some user-friendly notation: the + operator. If we add the following formula to our MLN:

Outcome(t, +f)

then Alchemy produces the clauses for which we want to learn weights. The file biased-die.db contains data generated from a die biased to roll only a one or a six, each with equal probability; we can use it to learn weights for each outcome with:

learnwts -i multinomial-biased.mln -o learned.mln -t biased-die.db
  -ne Outcome -noAddUnitClauses

which outputs the learned MLN in the file learned.mln.
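
For concreteness, the + operator expands Outcome(t, +f) into one unit clause per face, each carrying its own weight, so learned.mln will contain something like the following sketch (the weights shown are placeholders, not actual learned values):

// One unit clause per face, generated from Outcome(t, +f).
// Weights are illustrative placeholders:
2.1   Outcome(t,1)
-1.8  Outcome(t,2)
-1.8  Outcome(t,3)
-1.8  Outcome(t,4)
-1.8  Outcome(t,5)
2.1   Outcome(t,6)

Given the biased-die.db data, we would expect the clauses for faces 1 and 6 to receive noticeably higher weights than the others.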

