1 Introduction

Welcome to the Alchemy system! This user's manual is designed for end users wishing to perform learning and inference on Markov logic networks. It consists of the following sections:


- Introduction
- Installation
- Quick Start
- Syntax
- Predicates and Functions

The Alchemy package provides a series of algorithms for statistical relational
learning and probabilistic logic inference, based on the Markov logic
representation. If you are not already familiar with Markov logic, we
recommend that you read the papers *Markov Logic
Networks* [9],
*Discriminative Training of Markov Logic
Networks* [11],
*Learning the Structure of Markov Logic Networks* [3],
*Memory-Efficient Inference in Relational Domains* [12]
and *Sound and Efficient Inference with Probabilistic and Deterministic
Dependencies* [8]
(mln.pdf, dtmln.pdf, lsmln.pdf, lazysat.pdf and mcsat.pdf in the `papers/`
directory) before reading this manual.
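To give a flavor of the representation before you dive into the papers: a Markov logic network is a set of first-order formulas, each with an attached real-valued weight. The fragment below is a purely illustrative example in Alchemy's input syntax (predicate declarations followed by weighted formulas); the predicates and weight shown are made up for this sketch and do not come from any bundled example file.

```
// Predicate declarations (typed arguments)
Smokes(person)
Cancer(person)

// A weighted formula: smoking tends to cause cancer.
// The larger the weight, the stronger the constraint;
// hard constraints are written with a trailing period instead of a weight.
1.5  Smokes(x) => Cancer(x)
```

The Syntax section of this manual describes the input language in full.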

We welcome your feedback on any aspect of the Alchemy package. Please
email us at `alchemy@cs.washington.edu` to let us know what you find
easy or hard to use, what results you have obtained with Alchemy, the
features you wish to have but are not currently provided, and any bugs
that you encounter.

Please cite Kok et al. (2008) [5] if you use the Alchemy system.

Please be aware that this is a beta release. Some aspects of the documentation may not be as clear, and some aspects of usage not as user-friendly, as you would like. We have tested the code, but some bugs may inadvertently remain.

This beta release includes:

- Discriminative weight learning (Voted Perceptron, Conjugate Gradient, and Newton's Method)
- Generative weight learning
- Structure learning
- MAP/MPE inference (including a memory-efficient version)
- Probabilistic inference: MC-SAT, Gibbs Sampling, Simulated Tempering, Belief Propagation (including lifted)
- Support for native and linked-in functions
- Block inference and learning over variables with mutually exclusive and exhaustive values
- EM (to handle ground atoms with unknown truth values during learning)
- Specification of indivisible formulas (i.e., formulas that should not be broken up into separate clauses)
- Support of continuous features and domains
- Online inference

In the next release we plan to include:

- Online learning
- Exact inference for small domains
- Specification of probabilities instead of weights for formulas in an MLN, and of probabilities for ground atoms in a database
- Decision Theory
- More extensive documentation

Alchemy uses:

- C++ code from the MaxWalkSat package of Kautz et al. (1997) [2].
- C++ code from the SampleSat algorithm of Wei et al. (2004) [15].
- A port from Fortran to C++ of the L-BFGS-B package of Zhu et al. (1997) [16].
- A port from Lisp to C++ of the CNF conversion code of Russell and Norvig (2002) [10].
- The C++ code to compute the inverse cumulative standard normal distribution of Acklam (2003) [1].
- The C++ command-line parsing code of Jeff Bilmes (1992).

The development of Alchemy was partly funded by DARPA grant FA8750-05-2-0283 (managed by AFRL), DARPA contract NBCH-D030010 (subcontracts 02-000225 and 55-000793), NSF grant IIS-0534881, ONR grants N00014-02-1-0408 and N00014-05-1-0313, a Sloan Research Fellowship, and an NSF CAREER Award (both of these to Pedro Domingos). The views and conclusions contained in this document are those of the authors and should not be interpreted as necessarily representing the official policies, either expressed or implied, of DARPA, NSF, ONR, or the United States Government.