Package clintest

A test framework for clingo programs.

clintest is a test framework written in Python that makes it easy to write efficient tests for clingo programs. It provides you with numerous off-the-shelf components that allow you to assemble the most commonly used tests quickly, saving you the time to write them yourself. However, should you require a custom-built test, it will work alongside the others just fine.

In order to avoid time wasted on unnecessary computations, clintest will monitor the outcome of your test while steering the solving process. Once the outcome of your test is certain, it will automatically tell the solver to abort the search for further solutions.

As clintest is focused on the specifics of clingo programs, it works best if you combine it with a general-purpose framework like pytest.

Installation

This framework is guaranteed to work with Python 3.8 or greater. You have several options to install it:

Using conda

A conda package is planned but currently not available.

Using pip

The pip package is hosted at https://pypi.org/project/clintest. It can be installed with

$ pip install clintest

From source

The project is hosted on GitHub at https://github.com/potassco/clintest and can also be installed from source. We recommend this only for development purposes.

$ git clone https://github.com/potassco/clintest
$ cd clintest
$ pip install -e .

Usage

This section is meant to guide you through the most important features of clintest using simple examples.

Inspecting models

Imagine you have written the program a. {b}. and want to ensure that all its models contain the atom a. In order to do so, you first need to create a Test.

from clintest.test import Assert
from clintest.quantifier import All
from clintest.assertion import Contains

test = Assert(All(), Contains("a"))

The test Assert inspects the models of a program. It needs to be initialized with a Quantifier and an Assertion. An assertion (here: Contains) is a statement that may or may not hold for a certain model. A quantifier (here: All) specifies for how many models the assertion must hold in order to pass the test.
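To build some intuition for how quantifiers and assertions interact, here is a stand-alone sketch in plain Python (independent of clintest's actual classes), where a model is simply a set of atom names:

```python
# Schematic illustration only -- not clintest's implementation.
# An assertion is a predicate on a model; a quantifier decides
# for how many models the assertion must hold.

def contains(atom):
    """Assertion: the model contains the given atom."""
    return lambda model: atom in model

def all_(assertion, models):
    """Quantifier All: the assertion must hold for every model."""
    return all(assertion(m) for m in models)

def any_(assertion, models):
    """Quantifier Any: the assertion must hold for at least one model."""
    return any(assertion(m) for m in models)

# The two models of the program "a. {b}."
models = [{"a"}, {"a", "b"}]

print(all_(contains("a"), models))  # True: every model contains a
print(all_(contains("b"), models))  # False: {"a"} does not contain b
```

The real clintest components work incrementally on models as the solver reports them, rather than on a completed list, but the quantification logic is the same.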

Tests retain their own outcome which may be either

  • F? (possibly false),
  • T? (possibly true),
  • F! (certainly false), or
  • T! (certainly true).

Having an uncertain outcome means that a test has not (yet) been completed. Once the outcome is certain, it must no longer change in order for clintest to function properly. The outcome of a test can be queried as follows.

>>> print(test.outcome())
T?
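Conceptually, an outcome pairs a truth value with a certainty flag. The following hypothetical sketch (not clintest's actual Outcome class) illustrates the four combinations:

```python
# Hypothetical sketch of the four outcomes -- not clintest's actual class.
# An outcome pairs a current truth value with a certainty flag.

class Outcome:
    def __init__(self, value, certain):
        self.value = value      # True or False
        self.certain = certain  # certain outcomes must never change again

    def __str__(self):
        return ("T" if self.value else "F") + ("!" if self.certain else "?")

print(Outcome(True, False))   # T?  (possibly true, test not yet completed)
print(Outcome(False, True))   # F!  (certainly false, must not change)
```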

Since this test has not been run yet, its outcome is still uncertain. In order to run the test, we need to create a Solver first.

from clintest.solver import Clingo
solver = Clingo("0", "a. {b}.")

The solver Clingo is a facade around clingo.control.Control. The constructor expects a list of arguments for the solver (here: "0", meaning that the solver should compute all models) followed by the program (here: "a. {b}."). Once a solver is set up, it may solve your test as follows.

>>> solver.solve(test)
>>> print(test.outcome())
T!

As the output shows, every model of a. {b}. does indeed contain the atom a. If you want to ensure this within a framework like pytest, use Test.assert_() to raise an AssertionError with a proper message if the test's outcome is not certainly true (T!).

test.assert_()

Compound tests

Testing real-world programs would still require a lot of boilerplate if clintest had no support for combining simple tests into more complex ones. The following example illustrates how to build a test that simultaneously ensures that

  • any model contains the atom a,
  • all models contain the atom b, and
  • any model contains the atom c.

from clintest.test import Assert, And
from clintest.quantifier import All, Any
from clintest.assertion import Contains

test = And(
    Assert(Any(), Contains("a")),
    Assert(All(), Contains("b")),
    Assert(Any(), Contains("c")),
)

Solving this test with our previous solver leads to the following result.

>>> solver.solve(test)
>>> test.assert_()
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "clintest/clintest/test.py", line 42, in assert_
    raise AssertionError(msg)
AssertionError: The following test has failed.
    [F!] And
        operands:
             0: [T!] Assert
                quantifier: Any
                assertion:  Contains("a")
             1: [F!] Assert
                quantifier: All
                assertion:  Contains("b")
             2: [F?] Assert
                quantifier: Any
                assertion:  Contains("c")
        short_circuit:  True
        ignore_certain: True

Because the test has failed, Test.assert_() produces a rather detailed exception which may be used to explain the cause of the failure: Test #1 failed because there was a model that did not contain the atom b.

But if you look carefully, there is something else to discover: test #2 was never completed. This is a deliberate optimization, as the outcome of test #2 became irrelevant once the outcome of test #1 was certainly false (F!).
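This short-circuit behavior can be illustrated with a small sketch in plain Python (again schematic, not clintest's implementation), where outcomes are (value, certain) pairs:

```python
# Sketch of short-circuit evaluation over test outcomes (not clintest's code).
# Outcomes are (value, certain) pairs; a conjunction is certainly false as
# soon as one operand is, regardless of the remaining operands.

def and_outcome(outcomes):
    for value, certain in outcomes:
        if certain and not value:
            return (False, True)  # short-circuit: remaining operands irrelevant
    if all(certain for _, certain in outcomes):
        return (all(value for value, _ in outcomes), True)
    return (True, False)  # still undecided

# Mirrors the compound test above: T!, F!, F? for tests #0, #1, #2.
print(and_outcome([(True, True), (False, True), (True, False)]))  # (False, True)
```

Because test #1 is certainly false, the conjunction is certainly false, and the undecided outcome of test #2 never needs to be resolved.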

Debugging

Understanding why a test has failed can be challenging if one has no insight into the interaction between the test and the solver. This is where Record comes in handy. Record can be wrapped around any other test to save crucial information.

from clintest.test import Record
record = Record(test)

To the solver or the surrounding compound test, record behaves just like test. Any call to one of its on_*-methods is forwarded to the respective method of test. The only difference is that record also keeps a detailed Recording of these function calls. Using the test from the previous section, the following example shows what a recording may look like. To obtain the same result, make sure test has not been solved before, as a previously solved test cannot be solved again and would therefore lead to a different recording.

>>> solver.solve(record)
>>> print(record)
[F!] Record
    test: [F!] And
        operands:
             0: [T!] Assert
                quantifier: Any
                assertion:  Contains("a")
             1: [F!] Assert
                quantifier: All
                assertion:  Contains("b")
             2: [F?] Assert
                quantifier: Any
                assertion:  Contains("c")
        short_circuit:  True
        ignore_certain: True
    recording:
        0: [T?] __init__
        1: [F!] on_model
            a
        2: [F!] on_statistics
        3: [F!] on_finish

From this recording we learn that the absence of atom b in a model was indeed the reason for the failure of the test.

Human-friendly error messages

As seen in the previous sections, the default string representation of a test can be quite detailed and therefore difficult to understand. While a detailed report might be preferable for a skilled programmer, it is often unwanted for the end user. This problem can be solved by adding a Context.

from clintest.test import Context
context = Context(
    test,
    str_=lambda test: f"[{test.outcome()}] Models need to know their ABC.",
)

Like a Record, a Context wraps around another test and acts like it. However, it enables the user to override the __str__- and __repr__-methods to give more insight into what really went wrong.

Running the above test with our previous solver leads to a more digestible error message:

>>> solver.solve(context)
>>> print(context)
[F!] Models need to know their ABC.

Custom-built tests

In case you are unable to assemble your test from the off-the-shelf components in clintest, you might consider custom-building it. Custom-built tests must, just like any other test, extend Test in order to work with this library. This involves implementing two methods, though a third is often needed:

  1. Test.outcome() may be called anytime and should return the current Outcome of your test. Once the outcome is certain, it must not change anymore.
  2. Test.on_finish() is called once solving comes to an end. The call to this method is your last chance to alter the outcome of your test. After the call, the outcome must be certain.
  3. optional: Test.on_model() is called whenever a model is found. You may inspect the model to change the outcome of your test. This method should return whether additional models are necessary to decide the test. In case it returns False, the outcome of the test must be certain.

There are further on_*-methods you may override. For detailed information, refer to the class level documentation of Test. You may also draw inspiration from the tests implemented in clintest.test.
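As a rough sketch of this life cycle, consider a test that passes as soon as any model contains a given atom. The class below is schematic only: it does not subclass the real clintest.test.Test, and it represents models as plain sets of atom names.

```python
# Schematic custom test (hypothetical; not subclassing clintest.test.Test).
# Passes as soon as any model contains the given atom.

class ContainsAnywhere:
    def __init__(self, atom):
        self._atom = atom
        self._value = False    # current truth value
        self._certain = False  # once True, the outcome must not change

    def outcome(self):
        # May be called at any time and reports the current outcome.
        if self._certain:
            return "T!" if self._value else "F!"
        return "T?" if self._value else "F?"

    def on_model(self, model):
        # Called for every model; here, a model is a set of atom names.
        if self._atom in model:
            self._value = True
            self._certain = True
        return not self._certain  # False once no further models are needed

    def on_finish(self):
        # Last chance to decide: no model contained the atom.
        self._certain = True

test = ContainsAnywhere("a")
test.on_model({"b"})         # atom not found yet, keep searching
test.on_model({"a", "b"})    # atom found, outcome becomes certain
print(test.outcome())        # T!
```

A real implementation would receive clingo.solving.Model objects in on_model() and would typically also handle cancelled or interrupted solve calls; refer to the class-level documentation of Test for the full contract.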

Instead of custom-building a whole test, it is often advisable to implement only the missing components. This approach is usually less labor-intensive and aligns more seamlessly with the modular design of clintest. See Quantifier and Assertion.

If you have an idea for an addition to clintest that could benefit other users as well, we encourage you to submit a pull request or open an issue. We are happy to consider including your idea into this project.

Sub-modules

clintest.assertion

The abstract class Assertion and classes extending it.

clintest.outcome

The Outcome of a test, accessible via Test.outcome().

clintest.quantifier

The abstract class Quantifier and classes extending it.

clintest.solver

The abstract class Solver and off-the-shelf solver implementations.

clintest.test

The abstract class Test and off-the-shelf test implementations.