In this directory you will find the model-based tests for the Light Client.
They are "model-based" because they are based on the formal TLA+ model of the Light Client: see `Lightclient.tla`.
The tests themselves are simple TLA+ assertions that describe the desired shape of a Light Client execution; see `LightTests.tla` for some examples.
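For illustration, a test assertion might look like the following; the state value is hypothetical, loosely modeled on the assertions in `LightTests.tla`, so consult that file for the real definitions:

```tla
\* A sketch of a test assertion (the state value is an assumption):
\* ask for an execution in which the Light Client terminates successfully.
TestSuccess ==
    state = "finishedSuccess"
```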
To be able to specify test assertions, we have extended the Light Client model with a `history` variable, which records the whole execution history.
So, by referring to `history`, you simply specify declaratively what execution history you want to see.
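For example, a hypothetical history-based assertion could require that verification fails at some point during the execution; the indexing and field names below are illustrative assumptions, not the model's actual record structure:

```tla
\* A sketch: ask for an execution in which some verification step failed.
\* `history` indexing and the `verdict` field are assumptions; consult
\* the model for the actual structure of the history variable.
TestFailedVerification ==
    \E s \in DOMAIN history :
        history[s].verdict = "FAILED"
```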
After you have specified your TLA+ test, list it among the tests of a test batch, such as `MC4_4_faulty.json`.
Such test batches are then executed automatically when you run `cargo test`.
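For reference, a test batch is a small JSON file naming the tests to run against a model. The sketch below is illustrative (the field names are assumptions); consult an existing batch such as `MC4_4_faulty.json` for the authoritative format:

```json
{
  "model": "MC4_4_faulty.tla",
  "tests": ["TestSuccess", "TestFailure"]
}
```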
When you run `cargo test`, some machinery runs under the hood, namely:

- The Apalache model checker executes your TLA+ test and tries to find an execution you've described.
- The model execution (or *counterexample*, in model-checking terms) is translated into a format understood by the Light Client implementation. Two other components are necessary for that translation:
  - Jsonatr (JSON Artifact Translator) performs the translation by executing a transformation spec;
  - Tendermint Testgen takes care of translating abstract values from the model into concrete implementation values.
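To make the first step concrete, the harness's use of Apalache is roughly equivalent to a manual invocation like the one below; the model name, invariant name, and length bound are illustrative (though `check`, `--inv`, and `--length` are real Apalache options):

```sh
# Illustrative manual run. Apalache searches for an execution that
# violates the given invariant; since the harness hands it (roughly)
# the negation of the test assertion, the reported counterexample is
# exactly the execution the test describes.
$ apalache-mc check --length=5 --inv=TestSuccessNegation MC4_4_faulty.tla
```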
The `timeout` command is used to limit the test execution time.
So, for a model-based test to run, the programs `apalache-mc`, `jsonatr`, `tendermint-testgen`, and `timeout` should be present in your `PATH`.
If any of these programs is not found, execution of the model-based test will be skipped.
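As a quick sanity check before running the suite, you can verify that all four programs are discoverable; this snippet is merely a convenience, not part of the test machinery:

```sh
# Report any of the required programs missing from PATH.
for prog in apalache-mc jsonatr tendermint-testgen timeout; do
  command -v "$prog" >/dev/null || echo "missing: $prog"
done
```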
First, download the latest Apalache release from its releases page.
Then unpack it, find the `apalache-pkg-X.Y.Z-full.jar` file, and create an executable Bash script named `apalache-mc` with the following content:
```sh
#!/bin/bash
# Forward all arguments to the Apalache jar; adjust the path to where
# you unpacked the release.
java -jar apalache-pkg-X.Y.Z-full.jar "$@"
```
Please check Apalache's installation instructions for more details and alternative ways of running Apalache.
Note that having an `apalache-mc` executable, as shown above, is required. Having a Bash alias, as recommended in those instructions, won't be enough.
To install Jsonatr:

```sh
$ git clone https://github.com/informalsystems/jsonatr
$ cd jsonatr/
$ cargo install --path .
```

To install Tendermint Testgen:

```sh
$ cargo install tendermint-testgen
```
`timeout` should be present by default on all Linux distributions, but may be absent on macOS; in that case it can be installed using `brew install coreutils`.
To run your model-based tests, use this command:

```sh
$ cargo test -p tendermint-light-client --test model_based -- --nocapture
```
The results will be printed to the screen, and also saved to disk in the `_test_run` directory. Results include:

- a report on which tests were run, and their outcomes;
- for each test, its relevant files, saved in a subdirectory of `_test_run`. E.g., for the test `TestSuccess`, which is referenced in `MC4_4_faulty.json`, the results of its execution will be saved into `_test_run/MC4_4_faulty.json/TestSuccess`. For a successful test, these results include:
  - the log file, where test logs are stored;
  - the TLA+ counterexample, produced by Apalache in TLA+ format;
  - the JSON test fixture, the actual static Light Client test, generated by Jsonatr from the counterexample.
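As an illustration, the on-disk layout for that example would look roughly as follows; the exact file names are indicative, not guaranteed:

```
_test_run/
└── MC4_4_faulty.json/
    └── TestSuccess/
        ├── log               # test logs
        ├── TestSuccess.tla   # counterexample produced by Apalache
        └── TestSuccess.json  # static test fixture generated by Jsonatr
```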
One important feature of model-based testing is that each time you run a model-based test, a different static test could be produced. This happens because the test you describe is very abstract, and its requirements can be fulfilled by a multitude of concrete tests. To see what peculiarities a particular test has, please inspect the test report or logs.
We suggest the following workflow for running model-based tests:

- Experiment with your model-based test by trying different scenarios. For that you might want to run your test in isolation, as counterexample generation by Apalache may take a significant time. To run only your test, create a new test batch containing just your test, and temporarily remove or rename the other test batches (files or directories starting with `_` are ignored).
- After running your model-based test, inspect its report/logs.
- If you find the test details interesting, simply copy the auto-generated `.json` and `.tla` files into the `single_step` directory (see the sketch after this list).
- The next time you run `cargo test`, these static tests will be picked up and executed automatically.
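The promotion step from the workflow above might look like this; the paths are illustrative and should be adjusted to where `_test_run` and `single_step` live in your checkout:

```sh
# Hypothetical paths: promote an auto-generated test to a static test.
$ cp _test_run/MC4_4_faulty.json/TestSuccess/TestSuccess.json single_step/
$ cp _test_run/MC4_4_faulty.json/TestSuccess/TestSuccess.tla single_step/
```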