MLN Query Tool
Start the tool from the command line with mlnquery.
The MLN query tool is an interface for making inferences in a Markov logic network. The tool allows you to invoke the actual MLN inference algorithms provided by ProbCog's Python and Java implementations, i.e. PyMLNs and J-MLNs respectively, or, optionally, one or more installations of the Alchemy system developed by the University of Washington. (To tell ProbCog about the location of your Alchemy installation(s), edit src/main/python/configMLN.py.)
Once you start the actual algorithm, the tool window itself is hidden as long as the job is running, while the output of the algorithm is written to the console for you to follow. At the beginning, the tool lists the main input parameters for your convenience, and, at the end, the query tool additionally outputs the inference results to the console (so even if you are using the Alchemy system, there is no real need to open the results file that is generated).
The tool features integrated editors for .db and .mln files. If you modify a file in the internal editor, it will automatically be saved as soon as you invoke the learning or inference method. The new content can either be saved to the same file (overwriting the old content) or a new file, which you can choose to name as desired. Furthermore, the tool will save all the settings you made whenever the inference method is invoked, so that you can easily resume a session. When performing inference, one typically associates a particular query with each evidence database, so the query tool specifically remembers the query you made for each evidence database and restores it whenever you change back to the evidence database.
The main inputs for an inference task are the MLN model itself (MLN), the evidence database (Evidence), the inference method to apply (Method) and the queries that define what is to be inferred (Queries).
A query can be any one of the following:
- a ground atom, e.g. foobar(X,Y)
- the name of a predicate, e.g. foobar
- a ground formula, e.g. foobar(X,Y) ^ foobar(Y,X) (PyMLNs engine only)
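The three query forms above can be told apart by their shape alone. As a rough illustration (a hypothetical helper written for this page, not part of ProbCog's API, and the regular expressions only cover the simple cases shown above), a classifier might look like:

```python
import re

# Hypothetical helper for illustration -- not part of ProbCog's API.
# Distinguishes a plain predicate name, a single atom, and a formula
# that combines several atoms with logical connectives.
ATOM = re.compile(r"^\w+\([\w ,]+\)$")   # e.g. foobar(X,Y)
NAME = re.compile(r"^\w+$")              # e.g. foobar

def classify_query(query: str) -> str:
    query = query.strip()
    if NAME.match(query):
        return "predicate name"
    if ATOM.match(query):
        return "atom"
    return "formula"                     # e.g. foobar(X,Y) ^ foobar(Y,X)

print(classify_query("foobar"))                      # predicate name
print(classify_query("foobar(X,Y)"))                 # atom
print(classify_query("foobar(X,Y) ^ foobar(Y,X)"))   # formula
```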
Additional options include the following:
- Max. steps: the maximum number of steps to take in the inference method (e.g. the number of samples to draw), where applicable
- Num. chains: the number of parallel chains to use for MCMC algorithms
- CW preds: allows you to define closed-world (CW) predicates, i.e. predicates for which all groundings that are not given as true in the evidence are assumed to be false
- Add. params: additional parameters to pass on to the inference method. How these must be provided depends on the engine used.
- For PyMLNs, specify a Python-style comma-separated list of keyword arguments. For example, with exact inference, setting debug to True (i.e. writing "debug=True" into the input field) will print the entire distribution over possible worlds. When using MC-SAT, the same option produces a more detailed report of what the algorithm is doing at each step; the level of detail can be controlled via debugLevel (e.g. set "debug=True, debugLevel=30"). Depending on the algorithm, many further parameters may be supported; please refer to the source code.
- For J-MLNs or the Alchemy System, the parameters are added when invoking the respective command line tools (see their help screens).
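The effect of the CW preds option can be sketched in a few lines of Python. This is an illustration of the closed-world assumption only (the predicate arity, constants, and evidence below are made up), not ProbCog code:

```python
from itertools import product

# Closed-world assumption for a binary predicate: every grounding that is
# not listed as true in the evidence is taken to be false.
constants = ["A", "B"]
evidence_true = {("A", "B")}  # groundings given as true in the evidence

# Under CW, all remaining groundings become explicit negative evidence.
evidence_false = {g for g in product(constants, repeat=2)
                  if g not in evidence_true}

print(sorted(evidence_false))
# [('A', 'A'), ('B', 'A'), ('B', 'B')]
```

Without the closed-world declaration, these remaining groundings would simply be unknown, and the inference algorithm would have to reason over their possible truth values.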