LTL model checking can be used to determine whether any of the infinite paths starting from some state of a state space satisfies a given LTL formula. For a fixed LTL formula, the answer to this question may differ from state to state, but in any single state it should be independent of the details of any (correct) implementation of the LTL model checking procedure.
Therefore, it is possible to test LTL-to-Büchi translators by comparing the results obtained by model checking an LTL formula in a fixed state space several times, each time using a different translator to construct a Büchi automaton from the LTL formula. Differences in the model checking results then suggest that at least one of the translators failed to translate the LTL formula correctly into an automaton.
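The cross-comparison principle can be sketched as follows. This is an illustrative model only, not lbtt's internal interface: each translator is represented simply by the set of states in which model checking with that translator's automaton reported an accepting path, and any differing pair of sets flags at least one suspect translator.

```python
from itertools import combinations

def cross_compare(results):
    """results maps translator name -> set of states where model
    checking with that translator's automaton found an infinite path
    satisfying the formula.  Returns the pairs of translators whose
    state sets disagree; each such pair contains at least one
    translator that produced an incorrect automaton."""
    return [(a, b) for a, b in combinations(sorted(results), 2)
            if results[a] != results[b]]

# Illustrative run: translators T1 and T2 agree, T3 disagrees.
demo = {
    "T1": {0, 2, 3},
    "T2": {0, 2, 3},
    "T3": {0, 3},        # misses state 2 -> suspect translation
}
print(cross_compare(demo))   # [('T1', 'T3'), ('T2', 'T3')]
```

Note that a disagreement only narrows the fault down to the differing pair; it does not by itself identify which translator is broken.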
To extract as much test data as possible from a state space, lbtt will by default perform the model checking result comparison “globally” in the state space: each LTL-to-Büchi translator is used to find all states of the state space with an infinite path supposedly satisfying some LTL formula, and the resulting state sets are then compared for equality. Alternatively, the test can be performed only “locally” in a single state of each state space (i.e., by choosing some state of the state space and checking that all Büchi automata constructed using the different translators give the same model checking result in that state); this may speed up testing but reduces the number of comparison tests. In addition, lbtt repeats the result cross-comparison test for the negation of each LTL formula, since model checking the negated formula as well permits an additional consistency check (see below) on the results computed using each implementation.
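The consistency check enabled by also model checking the negated formula rests on a simple observation: in a state space where every state has at least one outgoing infinite path, every such path satisfies either the formula or its negation, so every state must belong to at least one of the two computed state sets. A minimal sketch of this check (illustrative function names, not lbtt's API):

```python
def negation_consistency_violations(all_states, sat_phi, sat_not_phi):
    """Return the states claimed to satisfy neither "E phi" nor
    "E not phi".  Since every state (in a deadlock-free state space)
    begins some infinite path, and that path satisfies phi or its
    negation, any such state exposes an error in at least one of the
    two translations."""
    return all_states - (sat_phi | sat_not_phi)

# State 3 is in neither result set -> inconsistency detected.
states = {0, 1, 2, 3}
print(negation_consistency_violations(states, {0, 1}, {1, 2}))  # {3}
```

Unlike the cross-comparison test, this check can catch an error even with a single translator, because it relates two independent translations (of the formula and of its negation) produced by the same implementation.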
Note: If the generated state spaces are paths (either random or systematically enumerated, see Random state spaces), lbtt will include its internal LTL model checking algorithm (a restricted model checking algorithm normally used in test failure analysis, see Analyzing test results) in the model checking result cross-comparison test. This is especially useful when only one translation algorithm implementation is available for testing (in which case the normal model checking result cross-comparison is obviously redundant), but it may also be useful in other cases by providing an additional implementation to include in the tests.
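Restricting the state space to a single lasso-shaped path is what makes direct LTL model checking simple enough to serve as an extra reference implementation: on an ultimately periodic word, the satisfaction of every subformula can be computed over finitely many positions. The following is a minimal illustrative evaluator for this idea, with formulas as nested tuples; it is a sketch of the general technique, not lbtt's internal algorithm.

```python
def sat_positions(formula, word, loop):
    """Set of positions of a lasso word satisfying `formula`.
    `word` is a list of sets of atomic propositions; the successor of
    the last position is the position `loop` (start of the cycle)."""
    n = len(word)
    succ = lambda i: i + 1 if i + 1 < n else loop
    op = formula[0]
    if op == "ap":                       # atomic proposition
        return {i for i in range(n) if formula[1] in word[i]}
    if op == "not":
        return set(range(n)) - sat_positions(formula[1], word, loop)
    if op == "and":
        return sat_positions(formula[1], word, loop) \
             & sat_positions(formula[2], word, loop)
    if op == "X":                        # next
        s = sat_positions(formula[1], word, loop)
        return {i for i in range(n) if succ(i) in s}
    if op == "U":                        # until, as a least fixpoint:
        s1 = sat_positions(formula[1], word, loop)
        s2 = sat_positions(formula[2], word, loop)
        sat = set(s2)                    # sat = s2 | (s1 & X sat)
        while True:
            new = {i for i in s1 if succ(i) in sat} - sat
            if not new:
                return sat
            sat |= new
    raise ValueError("unknown operator: %r" % op)

# "F p" is "true U p"; here "true" is encoded as the negation of an
# atomic proposition ("dead") that occurs nowhere in the word.
true = ("not", ("ap", "dead"))
word = [set(), set(), {"p"}]             # cycle covers positions 1..2
print(0 in sat_positions(("U", true, ("ap", "p")), word, loop=1))  # True
```

Because the word is finite once the loop is folded back, this evaluation always terminates, which is what makes such a restricted algorithm a cheap additional participant in the cross-comparison test.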