WASP: European Working Group on Answer Set Programming (funded by the European Commission, FET "Future Emerging Technologies" initiative)
Work package 3: Language Extensions and Software Engineering for ASP
Participating nodes: HUT, UCY, UNILEIPZIG, VUB, BATH, UNIPOTSDAM, TUWIEN, URJC, KULEUVEN, UNICAL, UNIRC, UNINA
The work on ASP systems started from implementations [145] of stable model semantics for normal logic programs. Most of the current ASP systems support a (Prolog-style) logic programming language [137] based on normal rules of the form

    a :- b1, ..., bm, not c1, ..., not cn.

where a and the bi, cj are atoms and ``not'' denotes negation as failure.
Moreover, ASP systems support (deductive) database techniques similar to those of Prolog-style approaches, in which logical variables and recursive definitions are combined. This distinguishes ASP from other frameworks for problem representation, such as constraint satisfaction problems (CSPs) and integer programming, which are essentially propositional and do not allow an effective combination of (deductive) database techniques and search.
While the work on ASP started with normal rules, implementations extending the basic language soon emerged, with disjunctive programs as the first key enhancement [127]. Below we summarize the most important extensions to ASP based on normal programs, with emphasis on the work done within the WASP research groups. We start with the more general-purpose extensions in Section 2, then move to application-oriented extensions (Section 3), and finish with work on logical foundations (Section 4), which aims at widening the basis of ASP systems and has already led to some interesting extensions. For each extension we provide a brief motivation, comparisons, references to relevant literature, and links to available software.
Disjunctive logic programs are logic programs where disjunction is allowed in the heads of rules. They were first studied by Minker [142] in the context of deductive databases, and are nowadays widely recognized as a valuable tool for knowledge representation and commonsense reasoning [7,138,76,95,131,143,9]. One of the attractions of disjunctive logic programming (DLP) is its capability of allowing the natural modeling of incomplete knowledge [7,138].
DLP has always been at the core of ASP. In fact, the term ``Answer Set Programming'' was coined by Gelfond and Lifschitz [95] for disjunctive logic programs with explicit negation.
DLP is the natural tool for programming following the ``Guess&Check'' paradigm [76]. This methodology divides problems into a guessing part, which defines the search space, and a checking part, which tests whether a solution candidate is in fact a solution. In most cases, disjunctions are a natural choice for implementing the guessing part.
As an example which matches this scheme, let us consider the well-known 3-Colorability problem.
3-Colorability is a classical NP-complete problem. Assuming that the set of nodes and the set of edges are specified by means of the predicates node (which is unary) and edge (binary), respectively, it can be encoded by the following ``Guess&Check'' program:

    col(X,r) v col(X,g) v col(X,b) :- node(X).
    :- edge(X,Y), col(X,C), col(Y,C).

The disjunctive rule guesses a color assignment for each node, while the constraint checks that no two adjacent nodes share a color.
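As an illustration, consider a small hypothetical instance (a triangle with one pendant node; the facts below are ours, not from the report). Its answer sets are exactly the valid 3-colorings; one of them, for example, contains col(1,r), col(2,g), col(3,b), and col(4,r):

    node(1). node(2). node(3). node(4).
    edge(1,2). edge(2,3). edge(1,3). edge(3,4).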
System development for DLP began very early, and the implementations available today are mature and competitive. The DLV system [127] (http://www.dlvsystem.com) is considered the state-of-the-art implementation dedicated to DLP. A competitive alternative is the GnT system [112,111] (http://www.tcs.hut.fi/Software/gnt/), which has been built on top of Smodels (http://www.tcs.hut.fi/Software/smodels/).
Recently, disjunctive logic programs have been extended by parametric connectives (OR and AND) [125]. These connectives allow for a compact representation of disjunctions and conjunctions of a set of atoms having a given property.
The syntax of disjunctive programs has been further extended to programs with nested expressions, or nested programs for short [129], where the bodies and heads of rules may contain arbitrary Boolean formulas. A rule of a nested program therefore has the form

    H :- B

where H and B are nested expressions, i.e., formulas built from literals using negation as failure (not), conjunction, and disjunction.
Given an interpretation X and an (arbitrary) program P, the reduct, P^X, of P with respect to X is the basic program obtained from P by replacing every occurrence of an expression not F in P which is not in the scope of any other negation by the constant false if F is true under X, and by the constant true otherwise. X is an answer set of P iff it is a minimal model (with respect to set inclusion) of the reduct P^X.
Consider a logic program P whose only rule is given by

    p :- not not p.

As already mentioned, nested programs properly generalise disjunctive logic programs. One difference is that the so-called anti-chain property does not hold for nested logic programs, while each disjunctive logic program satisfies this property, viz., for all answer sets X, Y of such a program, X ⊆ Y implies X = Y. Let X1 = {} and X2 = {p}. It is easy to see that P violates this anti-chain property, since it possesses the two answer sets X1 and X2. For X1, not p is true under X1 and we have P^X1 = { p :- false }, which has X1 as its minimal model. For X2, not p is false under X2 and we have P^X2 = { p :- true }, which has X2 as its minimal model. Consequently, both X1 and X2 are answer sets of P, and we have X1 ⊆ X2, but X1 ≠ X2.
Nested programs provide a richer syntax for modelling knowledge and declaratively encoding problems. One important property is that so-called weight constraints (see Section 2.3) can be expressed via nested programs [91]. Several concepts and methods from disjunctive logic programming have been extended to nested programs. For instance, acyclic and head-cycle free programs are examined in [136], and the splitting theorem is generalised to nested programs in [87].
Nested logic programs can be encoded in terms of quantified boolean formulas (QBFs) in linear time. Based on the resulting transformations, complexity results related to logic programs with nested expressions can be derived [149]. A polynomial reduction of nested programs to disjunctive programs, where new atoms are admitted into the language, is studied in [148]. These reductions have led to various implementations of nested programs, see [161,148,134,135]. For information on current systems, see nlp (http://www.cs.uni-potsdam.de/~torsten/nlp/) and NoMoRe (http://www.cs.uni-potsdam.de/~linke/nomore).
Many applications require imposing conditions on solutions that go beyond plain rules; important classes of such conditions are cardinality and weight constraints and optimization criteria. For example, in the product configuration domain, typical constraints include conditions of the form
    1 { hd1, hd2, hd3, ..., hdn } 4
    100 [ hd1=23, hd2=25, hd3=102, ..., hdn=44 ]

where the first expression is satisfied in a model if at least 1 and at most 4 of the hdi atoms are true in the model, and the second expression is satisfied in a model if the sum of the weights (e.g. 23 for the atom hd1) of the atoms true in the model is at least 100.
General weight constraint rules are expressions of the form

    C0 :- C1, ..., Cn

where each Ci is a cardinality or weight constraint. The stable model semantics is extended to such rules in a way that keeps derivations well-founded. For example, the program

    r. p :- q. q :- 2 {r,p,q}.

has a unique stable model, {r}: starting from the fact r, the constraint 2 {r,p,q} is not satisfied (only one of its atoms is derivable), so q, and hence p, cannot be derived; in particular, q cannot be used to justify itself.
In order to handle optimization requirements the weight constraint rule language contains minimization and maximization statements to set optimization criteria. For example, the condition on selecting the cheapest set of hard disks can be expressed as
    minimize [ hd1=100, hd2=115, hd3=95, ..., hdn=200 ].

Cardinality and weight constraints have proved to be very useful in developing applications. With them, typical choices and constraints can be expressed in a compact and easily maintainable way, and they are in wide use, as can be seen from the WP5 report on model applications and proofs-of-concept [89].
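Putting the pieces together, the following is a minimal self-contained sketch in Smodels-style syntax (the atom names, capacities, and prices are ours, not from the report): choose at least one of three disks, require a total capacity of at least 100 by forbidding totals of 99 or less, and minimize the price:

    1 { hd1, hd2, hd3 } 3.
    :- [ hd1=23, hd2=85, hd3=102 ] 99.
    minimize [ hd1=100, hd2=115, hd3=95 ].

Under these (invented) numbers, the optimal stable model takes hd3 alone: it is the only single disk meeting the capacity bound, and any pair is more expensive.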
The Smodels system (http://www.tcs.hut.fi/Software/smodels/) implements cardinality and weight constraints as well as optimization. For details on the algorithms and implementation techniques, see, e.g., [164,165]. The Smodels system also supports variables, conditional literals, built-in functions and limited use of function symbols [165,168,169].
ASP systems such as Smodels can be used as stand-alone systems which take a program as input and compute stable models for it. However, in many applications an ASP system is used as a (key) component integrated with other parts of the system. For this purpose, the Smodels system contains a number of APIs so that it can be embedded into a larger system as a software library.
There are some simple properties, often arising in real-world applications, which cannot be encoded in a simple and natural manner using ASP. In particular, properties that require the use of arithmetic operators (like sum, count, or maximum) on a set of elements satisfying some conditions require rather cumbersome encodings (often involving an ``external'' ordering relation over terms) if one is confined to classic ASP. The resulting programs are often surprisingly large, which in many cases has adverse effects on solving time.
Similar observations have been made in related domains, notably database systems, where they led to the definition of aggregate functions. In database systems this concept is by now fully integrated, both theoretically and practically. When ASP systems came to be used in real applications, it became apparent that aggregates are needed here as well. First, cardinality and weight constraints, which are special cases of aggregates, were introduced (see the previous section). In general, however, one might want to use other aggregates as well (like minimum, maximum, or average), and it is not clear how to generalize the framework of cardinality and weight constraints to arbitrary aggregates.
To overcome this deficiency, ASP has been extended with aggregate functions. There have been several attempts at defining a suitable semantics for aggregates [119,97,55,152,153,90]. These semantic definitions typically agree in the non-recursive case, but the picture is less clear for recursion. The DLV system (http://www.dlvsystem.com/) contains an implementation of stratified aggregates.
Consider the following example application: a project team has to be built from a set of employees according to the following specifications: (i) the team consists of a given number of employees; (ii) the team members together possess at least a given number of different skills; (iii) the sum of the salaries of the team members does not exceed the given budget; (iv) no team member's salary exceeds a given limit; (v) the team contains at least a given number of women.
Suppose that our employees are given by a number of facts of the form emp(EmpId,Sex,Skill,Salary) (with Sex being, say, f or m), and that the size of the team, the minimum number of different skills, the budget, the maximum salary, and the minimum number of women are specified by the facts nEmp(N), nSkill(N), budget(B), maxSal(M), and women(W). We then encode each property stated above by an aggregate atom, and enforce it by an integrity constraint.
    in(I) v out(I) :- emp(I,Sx,Sk,Sa).
    :- nEmp(N), not #count{I : in(I)} = N.
    :- nSkill(M), not #count{Sk : emp(I,Sx,Sk,Sa), in(I)} >= M.
    :- budget(B), not #sum{Sa,I : emp(I,Sx,Sk,Sa), in(I)} <= B.
    :- maxSal(M), not #max{Sa : emp(I,Sx,Sk,Sa), in(I)} <= M.
    :- women(W), not #count{I : emp(I,f,Sk,Sa), in(I)} >= W.
Intuitively, the disjunctive rule ``guesses'' whether an employee is included in the team or not, while the five constraints correspond one-to-one to the five requirements. Thanks to the aggregates the translation of the specifications is straightforward.
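For concreteness, here is a small hypothetical instance (all names and numbers invented) on which the encoding can be run; the team {1,3}, for example, satisfies all five requirements:

    emp(1,f,db,1000). emp(2,m,ai,1800). emp(3,f,web,1200).
    nEmp(2). nSkill(2). budget(3000). maxSal(2000). women(1).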
In addition, a partial stable semantics for logic programs with arbitrary aggregate relations has been defined in [152], and a translation to normal logic programs which preserves this semantics is given in [151].
Although ASP systems have been extended in many directions, they still miss features which may be helpful for industrial applications, like capabilities for quickly introducing new predefined constructs or for dealing with compound data structures and modules. The DLP language with templates is an extension of ASP with template constructs. It enables ASP system developers to prototype quickly, making new features rapidly available to the community, and to concentrate on efficient (long-lasting) implementations later. Template predicates allow one to define intensional predicates by means of generic, reusable subprograms, easing coding and improving readability and compactness. For instance, a template definition looks like
    #template max[p(1)](1) {
      exceeded(X) :- p(X), p(Y), Y > X.
      max(X) :- p(X), not exceeded(X).
    }
The statement above defines the predicate max, which computes the maximum value over the domain of a generic unary predicate p. A template definition may be instantiated as many times as necessary through template atoms (or template invocations), as in

    :- max[weight(*)](M), M > 100.

Template definitions may be unified with a template atom in many ways. The above rule contains a plain invocation, while in

    :- max[student(Sex,$,*)](M), M > 25.

there is a compound one.
Semantics is given through a suitable mapping to usual ASP programs. Given a DLP program with templates, the Explode algorithm replaces each template atom with a standard atom referring to a fresh intensional predicate. The subprogram defining this fresh predicate (a template definition may have more than one associated template atom) is computed according to the template definition. The final output of the algorithm is a standard ASP program, and the answer sets of the originating program are constructed, by definition, from the answer sets of this standard program.
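To give the flavor of this mapping, the earlier invocation :- max[weight(*)](M), M > 100. might be exploded along the following lines, with exceeded_weight and max_weight standing for the fresh predicates (the actual naming scheme of the Explode algorithm may differ):

    exceeded_weight(X) :- weight(X), weight(Y), Y > X.
    max_weight(X) :- weight(X), not exceeded_weight(X).
    :- max_weight(M), M > 100.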
The work takes inspiration from similar approaches, such as the HiLog language [37], and from the comprehensive study of generalized quantifiers in [82].
The DLP language with templates has been implemented on top of the DLV system [127], creating the DLT system. The current version is available on the web [69,108,109].
In the context of logic programming, abduction was first proposed by Eshghi and Kowalski [88] and Kakas and Mancarella [113]; in recent years, interest in this subject has been growing rapidly [40,122,117,72,56,160,20,114,58,133].
Unlike most of these previous works on abduction in the logic programming framework, [154] proposes abduction with penalization from logic programs. This form of abductive reasoning, well studied in the setting of classical logics [75], had not previously been analyzed in logic programming, and it turns out to encode several relevant problems from different domains easily and naturally.
In [154], a formal model for abduction with penalization from logic programs has been defined. Roughly, a problem of abduction with penalization (PAP) consists of a logic program, a set of hypotheses, a set of observations, and a function that assigns a penalty to each hypothesis. An admissible solution is a set of hypotheses such that all observations can be derived from the program under the assumption that these hypotheses are true. Each solution is weighted by the sum of the penalties associated with its hypotheses. The optimal solutions are those with minimum weight; they are considered more likely to occur, and are thus preferred over solutions with higher penalties. As an example, consider a diagnosis problem over a computer network represented by a set of facts.
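The concrete network facts of the original example are not recoverable here; the following hypothetical sketch (all names and penalties are ours) illustrates the idea, already in the shape of the weak-constraint translation mentioned below [32]: hypotheses are guessed, observations become constraints, and penalties become DLV weak constraints:

    machine(a). machine(b). machine(c).
    connected(a,b). connected(b,c).
    % hypotheses: each machine may be offline
    offline(X) v online(X) :- machine(X).
    % reachability from machine a through online machines
    reaches(a) :- online(a).
    reaches(Y) :- reaches(X), connected(X,Y), online(Y).
    % observation: machine c cannot be reached
    :- reaches(c).
    % penalties as weak constraints [weight:level]
    :~ offline(a). [3:1]
    :~ offline(b). [1:1]
    :~ offline(c). [2:1]

Here the optimal answer set assumes only offline(b), the cheapest hypothesis that explains the observation.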
The proposed framework for abduction with penalization over logic programs has been implemented as a front-end for the DLV system. The implementation is based on an algorithm that translates an abduction problem with penalties into a logic program with weak constraints [32], which is then evaluated by DLV. This abductive system is available in the current release of the DLV system (www.dlvsystem.com), and can be freely retrieved for experiments.
This work is clearly related to previous studies on semantic and knowledge representation aspects of abduction over logic programs [113,130,114,57,133]. Another approach is given in [1], where abductive reasoning from ground logic programs is based on the well-founded semantics with explicit negation. In other proposals, the semantics is naturally associated with a particular optimality criterion, as in [110], where the authors consider prioritized programs under the preferred answer set semantics. A similar optimization criterion is proposed for the logic programs with consistency-restoring rules (cr-rules) described in [6]. Furthermore, this work is also related to previous work on abductive logic programming systems [171,116].
The predominant methodology in ASP is a generate and test method which works as follows: one part of the program (typically disjunctions or choice constructs) generates the solution candidates, while another part (typically integrity constraints) tests the candidates and eliminates those that do not represent solutions.
This methodology allows the programmer to distinguish between solutions and non-solutions. However, in many realistic applications the possibility to make more fine grained distinctions is required, in particular distinctions between more and less preferred solutions. For a discussion of various types of applications where preferences play an important role (like abduction and diagnosis, revision and inconsistency handling) see [23].
For this reason, there has been a substantial amount of work on extending logic programs with preferences. The different approaches can be categorized according to several criteria, for instance whether preferences are static or dynamic, whether they are expressed among rules or among literals, and whether computing preferred answer sets increases the complexity of the underlying semantics.
A (statically) ordered logic program is a pair (P, <), where P is a logic program and < is a strict partial order on the rules of P. Given rules r1, r2 in P, the relation r1 < r2 expresses that r2 has higher priority than r1. For example, consider a program P consisting of the two rules

    r1: a :- not b.
    r2: b :- not a.

together with the preference r2 < r1, stating that r1 has higher priority than r2.
There have been numerous proposals for expressing preferences in extended logic programs, including [173,92,123,35,29,163,162,52,121,120,53,146,147]. The general approach has been to employ meta-formalisms for characterizing ``preferred answer sets''. For instance, a common approach is to generate all answer sets for a program and then, in one fashion or other, select the most preferred set(s). Consequently, non-preferred as well as preferred answer sets are first generated, and the preferred sets next isolated by a filtering step. Such approaches generally have a higher complexity than the underlying logic programming semantics.
Unlike this, the preference approaches in [52,162,25], referred to as the D-, W-, and B-strategy, respectively, do not increase the complexity. There, preferred answer sets are (standard) answer sets where, for the rules which determine whether a set is an answer set, the ``question of applicability'' does not depend on lower ranked rules. More precisely, in the D-strategy it is not allowed to apply a rule whenever the derivation of its positive prerequisites (so-called groundedness) is done by lower ranked rules; furthermore, rules can never be defeated by lower ranked ones. The W-strategy weakens the D-strategy in that groundedness or defeat by lower ranked rules is allowed whenever the head of the rule in question can be derived in another way. In contrast, the B-strategy pays attention only to the defeat of rules.
In [162], a comparative study of the D-, W-, and B-strategy is given, where all three approaches are described by uniform fixpoint characterizations in which the different ways of deciding the applicability of rules are represented. Furthermore, a hierarchy between these three strategies is shown: every answer set preferred in the D-strategy is also preferred in the W-strategy, and every answer set preferred in the W-strategy is also preferred in the B-strategy. This hierarchy is mainly induced by the decreasing interaction between groundedness and preferences. While the D-strategy requires full compatibility between groundedness and preferences, this interaction is weakened in the W-strategy, before it is fully abandoned in the B-strategy.
In the above example with the preference r2 < r1, all strategies obtain {a} as the preferred answer set. The answer set {b} is preferred in no strategy, since rule r1 is defeated by the lower ranked rule r2 and there is no way to derive the head of rule r1 in another way.
[52] provides, for these strategies, a polynomial transformation from ordered logic programs into extended logic programs in which the preferences are respected, in the sense that the answer sets obtained from the transformed theory correspond to the preferred answer sets of the original theory. This compilation process has been implemented in the plp system [155]. Since the result of the compilation is an extended logic program, one can make use of existing answer set solvers, such as dlv [86] and smodels [166].
In contrast to the compilation, [120] provides an operational
characterization for the integration of preference information into an answer
set solver.
An ordered logic program is represented by a rule dependency graph which
includes the preference information.
Preferred answer sets are characterized by non-standard graph colorings on
this graph.
For the D-strategy, the GCplp system [94,120] has been developed, where preferred answer sets are computed by gradually turning an uncolored rule dependency graph into a totally colored one.
The interaction between groundedness and preference information affects the
computation of deterministic and non-deterministic consequences.
Alternatively, preference information can also be handled by a meta-interpretation [70,73] within answer set programming.
Instead of static preferences, one can also consider dynamic preferences, where the preference information becomes active dynamically. An application of dynamic preferences is, for example, legal reasoning. Gordon [99] formulated the following legal reasoning problem:
``A person wants to find out if her security interest in a certain ship is perfected. She currently has possession of the ship. According to the Uniform Commercial Code (UCC, §9-305) a security interest in goods may be perfected by taking possession of the collateral. However, there is a federal law called the Ship Mortgage Act (SMA) according to which a security interest in a ship may only be perfected by filing a financing statement. Such a statement has not been filed. Now the question is whether the UCC or the SMA takes precedence in this case. There are two known legal principles for resolving conflicts of this kind. The principle of Lex Posterior gives precedence to newer laws. In our case the UCC is newer than the SMA. On the other hand, the principle of Lex Superior gives precedence to laws supported by the higher authority. In our case the SMA has higher authority since it is federal law.''
This can be modeled as an ordered logic program with dynamic preferences, in which a naming predicate is used to refer to the rules, and preference atoms over these rule names are themselves derived by rules encoding the two legal principles.
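The concrete program is not reproduced here; a hypothetical sketch along the lines of the dynamic-preference approach of [52] (the rule names, the prefer predicate, and all facts are ours) could look as follows:

    ucc: perfected :- possession.
    sma: -perfected :- ship, not fin_statement.
    lp:  prefer(U,S) :- newer(U,S).            % Lex Posterior
    ls:  prefer(S,U) :- federal(S), state(U).  % Lex Superior
    possession. ship. newer(ucc,sma). federal(sma). state(ucc).

Both prefer(ucc,sma) and prefer(sma,ucc) are then derivable, so an additional preference between the two legal principles themselves (here, Lex Superior over Lex Posterior) is needed to conclude that the security interest is not perfected.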
Ordered disjunction is a form of disjunction where the order of the disjuncts expresses preference. Logic programs with ordered disjunction [22,26], LPODs for short, use ordered disjunction (x) in the heads of rules to express preferences among the head literals: the rule

    C1 x C2 x ... x Cn :- body.

reads: if body is satisfied, then preferably make C1 true; if C1 is impossible, then C2; and so on; if all of C1, ..., Cn-1 are impossible, then Cn.
An answer set S can satisfy a rule of this form to different degrees, where smaller degrees are better: if body is satisfied in S, then the satisfaction degree of the rule is the smallest index i such that Ci is true in S. Otherwise, the rule is irrelevant. Since there is no reason to blame an answer set for not applying an inapplicable rule, we also define the degree to be 1 in this case. The degree of a rule r in S is denoted deg_S(r).
Based on the satisfaction degrees of single rules, a global preference ordering on answer sets is defined. This can be done through a number of different combination strategies. Let S^i(P) denote the set of rules of an LPOD P satisfied by an answer set S to degree i. For instance, we can use one of the following conditions to define that an answer set S1 is strictly preferred to an answer set S2: (i) S1 satisfies some rule better than, and no rule worse than, S2 does (Pareto preference); (ii) at the smallest degree i on which they differ, S1^i(P) properly contains S2^i(P) (inclusion preference); (iii) at the smallest degree i on which the cardinalities differ, S1 satisfies more rules to degree i than S2 (cardinality preference).
As a simple example, consider an LPOD representing the dinner preferences of a simple agent (colors denote types of wine):

    fish x beef.
    white x red :- fish.
    red x white :- beef.

Answer sets consist of two atoms, combining a dish (fish or beef) with a wine (white or red). The answer set {fish, white} satisfies all three rules to degree 1 and is thus the most preferred one under each of the strategies above; for instance, {fish, red} satisfies the second rule only to degree 2, and {beef, red} satisfies the first rule to degree 2.
LPODs can be implemented using a generate-and-improve strategy: we first generate an arbitrary answer set S and then use it as input for a tester program, generated from S and the original LPOD. The tester program has only answer sets strictly better than S. We can thus iterate until the tester program becomes inconsistent, and we then know that the last answer set found is optimal. For an implementation, see psmodels (http://www.tcs.hut.fi/Software/smodels/priority/).
In the LPOD approach the construction of answer sets is amalgamated with the expression of preferences. Optimization programs [27], on the other hand, strictly separate these two aspects. This allows for greater modularity, flexibility and generality in the expression of preferences -- in certain cases at the cost of somewhat less compact representations.
An optimization program is a pair (Pgen, Ppref). Here, Pgen is an arbitrary logic program (normal, extended, disjunctive, ...) used to generate answer sets; all we require is that it produces sets of literals as its answer sets. Ppref is a preference program, consisting of preference rules of the form

    C1 > C2 > ... > Cn :- body.

where the Ci are boolean combinations of literals and body is a conjunction of literals, possibly with default negation.
As in the case of LPODs, answer sets can satisfy preference rules to different degrees. If the body of a preference rule is satisfied in an answer set S and some boolean combination in its head is also satisfied, then the satisfaction degree of the rule in S is the index of the leftmost satisfied boolean combination. Otherwise, the rule is irrelevant. Since S is not to be blamed for irrelevant rules, irrelevance is considered as good a satisfaction degree as 1.
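A small hypothetical optimization program (all atom names are ours): Pgen generates dinner plans, while Ppref states that red wine is preferred over white when beef is chosen:

    % Pgen
    fish v beef.
    white v red.
    % Ppref
    red > white :- beef.

Among the four answer sets of Pgen, {beef, red} satisfies the preference rule to degree 1, {beef, white} only to degree 2, and the rule is irrelevant for the two fish-based answer sets.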
Again, the combination strategies described in the context of LPODs -- and many others -- can be used to generate the global preference order on answer sets out of the satisfaction degrees of individual rules. It is also possible to introduce meta preferences among the preference rules themselves by grouping them into subsets with different ranks. Preference rules with highest priority are considered first. Intuitively, rules with lower priority are only used to distinguish between answer sets which are of the same quality according to the more preferred rules.
Since none of the different combination strategies is the most adequate one for all applications, and since it may even be useful to apply different combination methods for different aspects of a single problem, a preference specification language has been proposed in [24]. The idea is to replace preference programs by expressions of the new language. The language is based on preference rules similar to the ones discussed above, but additionally allows the user to specify in a flexible manner how the satisfaction degrees of the rules are to be combined to a global preference order.
It turns out that this approach can be implemented in a similar fashion as LPODs, by compiling the current answer set S, the generating program Pgen, and the preference expression (respectively, the preference program) into a tester program which produces only answer sets of Pgen that are strictly better than S.
Once the fundamental issues of how to represent and store information have been addressed, one of the most important requirements for any automated reasoning system is to be able to characterise and make decisions. To make a decision a series of meta questions must first be answered; what are the alternatives, how do they relate to each other, how many are there to choose from and what should we do if alternatives are equally preferred or unrelated? Any logic which seeks to characterise decisions must address these questions.
Ordered Choice Logic Programming (OCLP) [45,44,19,41,49] is a conceptual extension of answer set programming to characterise decisions.
Choice rules [42,43] are a formal way of saying which alternatives are available when a decision has to be made. The ordering between components gives a clear, expressive and extensive system for deciding which is the `best' alternative, while the two decision semantics, skeptical and credulous, say what to do when two options are equally preferred. OCLP has been developed over several years and OCT [19], a tool for computing the answer sets of an OCLP program by providing a front-end to Smodels [145], is available from http://www.cs.bath.ac.uk/~mdv/oct/ licenced under the GPL.
Both types of negation (classical negation and negation as failure) can easily be simulated inside OCLP. Also available is Extended OCLP, which provides the necessary connectives and definitions for the direct use of negation.
The algorithm implemented by OCT is a polynomial translation of OCLP to traditional answer set programs. Furthermore it can be shown that a bi-directional transformation exists between OCLP and generalised extended logic programs.
OCLP is well-suited for situations where knowledge about decisions and the preferences between their alternatives has to be modelled. Although the notion of defeat (sometimes called overruling) is known in many preference/order-based logic programming languages (e.g. [93,124,28,36,34,2,81]), these languages are mostly restricted to comparing literals and their complements, making the ``decisions'' static and predefined. In OCLP, decisions are dynamic and can comprise multiple alternatives. Two systems, dynamic logic programming [2] and update sequences [81], can easily be mapped to OCLP.
So far OCLP has been used to model and extend classical game theory [47,48] and to model the reasoning capabilities of individual agents in a multi-agent environment [46,49].
Reasoning about actions and change is an important field in knowledge representation and reasoning. In this context, the use of logic-based approaches, in particular for planning, dates back many decades.
The use of non-monotonic logic programs in this context, especially for planning, was explored in several early works, including [68,167]. Later, the use of ASP for planning was advocated by Lifschitz, who coined the term Answer Set Planning [132].
Inspired by action languages such as A, B, and C, the action language K, based on answer set semantics, has been introduced in [74,79]. Like these languages, it follows a transition-oriented approach, in which transitions between states (which are described by the values of fluents, i.e., predicates that may change over time) are governed by axioms which, loosely speaking, state when certain fluents are causally explained to be true resp. false in the new state, given the value of other fluents in the new state, in the previous state, and the actions that have been executed (in parallel). Different from similar languages, however, K provides default negation ``not'' for use in axioms, and allows one to consider states in which the values of fluents may also be unknown. This allows, in particular, for representing secure planning problems (that is, conformant planning problems), in which a plan is sought that works under all circumstances, regardless of the states and possible nondeterministic action effects, sometimes in a concise way.
An example which illustrates the flavor of K is the following action description of the Bomb-in-the-Toilet (BT) problem. We have been alarmed that there is a bomb (exactly one) in a lavatory. There are n suspicious packages which could contain the bomb. There is a toilet bowl, and it is possible to dunk a package into it. If the dunked package contained the bomb, then the bomb is disarmed and a safe state is reached. The obvious goal is to reach a safe state. The following action description in K serves this purpose.
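The original encoding from [79] is not reproduced here; the sketch below is written in the spirit of K's syntax as we recall it (declaration keywords and statement shapes may deviate in detail from the actual system), with packages given by a pkg type predicate:

    fluents :  armed(P) requires pkg(P).
               dunked(P) requires pkg(P).
    actions :  dunk(P) requires pkg(P).
    always :   executable dunk(P) if not dunked(P).
               caused dunked(P) after dunk(P).
               caused -armed(P) after dunk(P).
               inertial dunked(P).
               inertial -armed(P).
    goal :     -armed(p1), ..., -armed(pn) ? (n)

Since nothing is known initially about armed, a secure plan must establish -armed(P) for every package, i.e., dunk them all.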
From the knowledge perspective, nothing definite is known about armed(p) (and about -armed(p)) for a particular package p, so the initial situation can be represented by one state in which neither armed(p) nor -armed(p) holds. Every plan for the goal of disarming all packages will lead to the desired state of safety. For example, a feasible plan would be to take the actions dunk(p1), ..., dunk(pn) one after the other. However, K also features parallel action execution; thus, dunking all packages in parallel would be another plan. If undesired, parallel action execution can easily be disabled using suitable constraints.
For precise definitions and more examples, see [79], where a complexity study is also carried out. For an extension accommodating optimal planning using action costs, see [77]. An extended discussion, addressing knowledge representation issues and a comparison to related languages, is given in [80].
A solid prototype of the K planning system has been implemented; it is described and compared against other systems in [78]. A download and online examples are available at http://www.dbai.tuwien.ac.at/proj/dlv/K/.
The work described in [67] further extends the link between action languages and ASP by presenting a translation of the action language E [115] into logic programs under the answer set semantics. The main purpose of language E is to address, within a unifying framework, the three major problems of frame, ramification, and qualification, with emphasis on the modularity and elaboration tolerance of the domain descriptions. The following example illustrates the main features of language E.
    TurnKey initiates Running
    Hit initiates Broken when Fragile
    -Running whenever Broken
    Hit happens-at 1
    TurnKey happens-at 2
    Fragile holds-at 0
    -Running holds-at 0
The first proposition says that the effect of the action TurnKey is that the car is Running, whereas the second proposition states that the execution of the action Hit has the effect that Broken becomes true if, at the time of the execution of Hit, the fluent Fragile is true. The third proposition is a ramification statement expressing that any action that would initiate Broken would also terminate Running. It also expresses the general domain constraint that at no time can Broken and Running hold together. The next four propositions provide a specific narrative in which two actions are known to have been executed at times 1 and 2. The last two propositions are called observations, stating that Fragile was true and that the car was not running at time 0.
The semantics of language E sanctions essentially one model for the above theory. In this model, the car remains not running from time 3 onwards, despite the execution of the TurnKey action. This is because the fact that the car is Broken at time 3 (as a result of the earlier Hit action) prevents the initiation of Running by this action, as otherwise after time 3 we would end up with a state that violates the constraint that Broken and Running cannot hold at the same time. Assume now that, in addition to the above, we are also given the observation Running holds-at 3. If the actions of the above theory are strict, the theory becomes inconsistent. If, however, the actions are default, then the semantics of language E sanctions one model where the Hit action fails to produce its effect Broken.
In [67] we investigated how increasing the expressiveness of a language so that it can capture the problems of frame, ramification, and qualification affects its computational complexity, and how a solution to these problems can be implemented within Answer Set Programming. Our current work [66] focuses on identifying subclasses of action theories in which reasoning is easier than in the general case. Our main interest is in theories that can be computed efficiently by current state-of-the-art ASP solvers.
We are currently developing a prototype system that translates language E theories into Smodels programs. We plan to use this system for experimentation and comparison with the existing E-RES system (http://www2.cs.ucy.ac.cy/~pslogic/) for language E and with other similar systems.
Two interesting approaches to combining description logics and ASP techniques, open answer set programming and dl-programs, developed in the WASP research groups are discussed in this section.
In traditional ASP one has the domain-closure assumption [96], i.e., the relevant domain elements are all supposed to be specified in the program. In many natural problems, however, this assumption may lead to wrong conclusions. For example, a program consisting of the rules

    fail(X) :- not pass(X).
    pass(X) :- study(X).

together with the facts study(john) and student(john) has the single (normal) answer set {study(john), student(john), pass(john)}. Since this answer set contains no fail-atom, one might wrongfully conclude that one can never fail. Listing more students might solve the problem in this case; in general, however, this puts a serious burden on the knowledge engineer, who has to handle ``all'' influential constants.
Using (possibly infinite) open domains, i.e., allowing the universe to be a superset of the constants in the program, ensures that one rightfully concludes that students may fail if they do not study. For example, extending the universe with an anonymous element x yields an open answer set containing fail(x), since pass(x) is not derivable for an x that does not study.
In [104,103,102], one can find the formal semantics of such open answer set programming. Reasoning with open domains is undecidable for arbitrary DLPs. To resolve this, [104,103,102] resort to an expressive subclass of DLPs, namely Conceptual Logic Programs (CLPs). Reasoning with CLPs is shown to be decidable in [104,103] by a reduction to automata theory. The type of CLPs in [102] allows for a reduction to normal, closed-domain, finite answer set programming.
The CLPs in [104] allow for simulating reasoning in an expressive description logic [5]; [103] loosens the syntax of CLPs and is able to simulate reasoning in a more expressive DL. Finally, the CLPs in [102] allow for constants in programs, such that a DL extended with individuals can be simulated. Since ASP is a logic programming paradigm, and thus suited for reasoning with rules, those simulations provide for integrated reasoning with both ontological and rule knowledge.
Hybrid approaches to reasoning with DLs and rules can be found in [71,159] and [128], the former introduces ([159] extends) AL-log, i.e. a language consisting of two subsystems, a DL part and a (disjunctive) Datalog part, where the interaction between both happens through constraints within Datalog clauses. In [128], knowledge bases contain a set of Horn rules and a DL terminology where concepts and roles may appear as predicates in Horn rules.
Another approach is to reduce DL knowledge bases to logic programs. For example, [107] uses an intermediate translation to first-order clauses and then performs resolution-based decision procedures before effectively translating to a disjunctive Datalog program. In [144], a non-recursive DL fragment is translated into disjunctive databases. [3] presents a translation from a DL to answer set programs, using function symbols to accommodate an infinite Herbrand universe. [100] simulates reasoning in DLs through simple Datalog programs; this necessitates heavily restricting the usual DL constructors, e.g., negation or number restrictions cannot be expressed.
In [10], the simulation of a DL with acyclic axioms in open logic programming is shown. An open logic program is a program with possibly undefined predicates, and a FOL-theory; the semantics is the completion semantics, which is only complete for a very restrictive set of programs.
There is currently no implementation for reasoning with CLPs available.
Description logic programs, or dl-programs for short, are presented in [84,85] as a novel method to couple description logics with nonmonotonic logic programs. Roughly speaking, a dl-program consists of a knowledge base L in a description logic and a finite set P of generalized logic program rules, called dl-rules. These are similar to usual rules in logic programs with negation as failure, but they may also contain queries to L in their bodies, which are given by special atoms (to which default negation may be applied).
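For example, a dl-rule might look as follows (a hypothetical rule; the predicate and concept names are ours):

    discount(X) :- customer(X), not DL[Premium](X).

Here the dl-atom DL[Premium](X) queries whether X is an instance of the concept Premium in L, so the rule grants a discount to all known customers that cannot be shown to be premium customers.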
Another important feature of dl-rules is that queries to L also allow for specifying an input from P, and thus for a flow of information from P to L, in addition to the flow of information from L to P given by any query to L. Hence, description logic programs allow for building rules on top of ontologies, but also (to some extent) for building ontologies on top of rules. This is achieved by dynamic update operators, through which the extensional part of L can be modified for subjunctive querying.
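For example, a rule might use the update operator ⊎, which extends a concept by the extension of a program predicate before the query is evaluated (again a hypothetical rule with names of our choosing):

    trusted(X) :- DL[Customer ⊎ registered; Trusted](X).

Here the facts of the program predicate registered are added to the concept Customer in L before L is queried for instances of Trusted.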
The description logic knowledge bases in dl-programs are specified in the well-known description logics SHIF(D) and SHOIN(D), which underlie OWL Lite and OWL DL [105,106], respectively.
Two basic types of semantics have been defined for dl-programs: in [84], a generalization of the answer-set semantics for ordinary logic programs is given, and in [85], a generalization of the well-founded semantics [170,8]. In fact, two versions of the answer-set semantics for dl-programs are introduced, namely the weak answer-set semantics and the strong answer-set semantics. Both semantics coincide with usual answer sets in the case of ordinary normal programs. Every strong answer set is also a weak answer set, but not vice versa. The two notions differ in the way they deal with nonmonotonic dl-queries. While the answer set semantics resolves conflicts by virtue of permitting multiple intended models as alternative scenarios, the well-founded semantics remains agnostic in the presence of conflicting information, assigning the truth value false to a maximal set of atoms that cannot become true during the evaluation of a given program.
An efficient implementation of the answer-set and the well-founded semantics for dl-programs has been realized in a working prototype exploiting the two state-of-the-art solvers DLV [126] and RACER [101]. A major issue in this respect is an efficient interfacing between the two reasoning systems at hand, for which special methods have been devised in [83]. Depending on the type of stratification of the dl-program, a number of different evaluation strategies can be employed, ranging from a refined guess-and-check algorithm to iterative procedures with alternating calls of the external solvers.
In a multi-agent environment it is possible to represent agents' requirements or desires as logic programs. Moreover, in order to pursue its goals, an agent must often be capable of fine-tuning its behavior to that of the other agents. In [30] we have therefore focused on the interactions among logic-program agents by introducing SOcial Logic Programming (SOLP).
SOLP is designed for the description of agent mental states and enables agent social ability, i.e., the ability to take into account (the models of) the other agents in the community when reasoning. The language extends COLP [31], the main extension consisting of constraints over the behavior (i.e., what can be inferred) of either a given number of agents or a specific other agent in the system.
Moreover, the language SOLP allows social constraints to be nested, and it is provided with a suitable fixpoint-like semantics. We have shown that the Social Semantics of SOLP extends the Joint Fixpoint Semantics of COLP. Importantly, we have shown that the language SOLP can be translated, by a polynomial source-to-source transformation, into DLP with aggregate functions, i.e., essentially disjunctive logic programming extended with aggregates. The adoption of the stable model semantics provides a sound formal characterization of the language.
A large and widely accepted body of literature witnesses that logic programming is a suitable framework for representing an agent's subjective perception of the dynamic environment in which it acts. Thus, we assume that the observe-think-act cycle of an agent involves a set of sensors (responsible for scanning the external environment) coupled to a reasoning layer (where the events that may occur in the external environment are represented and causally related to agent actions). It is then possible to encode into an extended logic program both the sensors (by using specific literals) and the reasoning layer (by means of literals for events and actions, and rules possibly using sensor literals).
Unfortunately, such a solution is founded on a simple (yet unrealistic) assumption: that the agent can always rely on its perception, which is in fact susceptible to failure. In this respect, we proposed in [33] an extension of Answer Set Programming, called sensor-dependent (SD) logic programming, which takes possible perception failures of sensors into account. It is shown by examples that the language is very suitable for this purpose. Moreover, a suitable semantics has been defined in an answer-set fashion, and a polynomial algorithm has been given that translates any SD program into an equivalent extended logic program.
Default logic was conceived in [157] as a logical formalism for default reasoning. Since then, it has turned into the best known and most widely studied approach to nonmonotonic reasoning. In particular, it can be seen as the direct ancestor of answer set programming. To see this, observe that a logic programming rule is simply a syntactically restricted default rule that allows for highly efficient implementations.
The very generality of default logic means that it lacks several important properties, including existence of extensions [157] and cumulativity [140]. In addition, differing intuitions concerning the role of default rules have led to differing opinions concerning other properties, including semi-monotonicity [157] and commitment to assumptions [156]. As a result, a number of modifications to the definition of a default extension have been proposed, resulting in a number of variants of default logic. Most notably these variants include constrained default logic [54], cumulative default logic [21], justified default logic [139], and rational default logic [141]. (To be sure, there are other variants of default logic. The variants covered here are arguably the best-known and studied [4].) In each of these variants, the definition of an extension is modified, and a system with properties differing from the original is obtained.
In [51] we have shown how variants of default logic can be expressed in Reiter's original approach. Similarly, we have shown that rational default logic and default logic may be encoded, one into the other. For the most part, the provided transformations have good properties, being (with exceptions) faithful, polynomial, modular, and monotonic. This work complements previous work in nonmonotonic reasoning which has shown links between seemingly disparate approaches; here we show links between seemingly disparate variants of default logic. The translations also clearly illustrate the relationships between alternative approaches to default logic.
As in answer set programming, default logics offer two kinds of reasoning principles: a default conclusion that appears in some extension is called a credulous (or brave) default conclusion, while one that appears in every extension is called a skeptical conclusion. Intuitively, it might seem that skeptical inference is the more useful notion. However, this is not necessarily the case. In diagnosis from first principles [158], for example, in one encoding there is a 1-1 correspondence between diagnoses and extensions of the (encoding) default theory. Hence one may want to carry out further reasoning to determine which diagnosis to pursue. More generally, there may be reasons to prefer some extensions over others, or to somehow synthesize the information found in several extensions. In [50], we have described an approach for encoding default extensions within a single extension. Using constants and functions for naming, we can refer to default rules, sets of defaults, and instances of a rule in a set. Via these names we can, first, determine whether a set of defaults is its own set of generating defaults and, second, consider the application of sets of defaults ordered by set containment. The translated theory requires only a modest increase in space: except for unique-names axioms, only a constant-factor increase is needed.
Finitary programs [12,15] support function symbols and recursion without affecting decidability. This extension is mainly motivated by the need to represent and reason about recursive data structures (such as XML documents) in a natural and uniform way, without resorting to external preprocessors.
Current prototypes [11] comprise a finitary program recognizer and a relevant program constructor. They can be downloaded from http://people.na.infn.it/~bonatti/ricerca/software/index.html. Work on integrating finitary programs into DLV is in progress.
In open programs [13,14], the definition of some predicates is not complete (or exhaustive) and can be extended by further rules. Open programs have been introduced for program module analysis, as well as for reasoning with open domains. A combination of finitary programs and open programs supports abduction over unbounded, possibly open domains [16,17].
This line of work aims at developing an alternative ASP logic called ID-logic. This logic is an extension of classical logic with inductive definitions. The motivation for this work is to build a logic with solid epistemological foundations in mathematics, suitable as a rich and natural knowledge representation language and for which more efficient problem solvers can be built. There are three components of this project: semantics, knowledge representation and problem solving. Below we discuss each of these components.
The epistemological foundation of ID-logic is the notion of inductive definition as found in mathematics. Inductive definitions appearing in mathematical texts show a very strong correspondence with logic programs, on both the syntactical and the semantical level. To formalise this idea, we developed a theory of generalised nonmonotone inductive definitions and demonstrated its relationship with answer set programming and nonmonotonic reasoning. Our theory, called approximation theory, is a fixpoint theory. The standard fixpoint theory is limited to monotone induction and was developed 70 years ago by Alfred Tarski. We have contributed here by extending this theory to the case of general non-monotone induction [62,61]. This result has been obtained in collaboration with Prof. Truszczynski and Prof. Marek from the University of Kentucky. We have also shown that our fixpoint theory uniformly describes the semantics of the three main non-monotonic reasoning formalisms, i.e., logic programming, default logic, and autoepistemic logic [63,61]. Moreover, we have been able to unify default logic and autoepistemic logic; their relation had been an open problem for two decades.
At the knowledge representation level, ID-logic is defined as an extension of classical logic with non-monotone inductive definitions. An inductive definition in an ID-logic theory is represented as a set of rules defining a subset of defined predicates in terms of other, open predicates. As an example, consider a theory containing two definitions, one defining the predicates man and woman by enumeration and one defining person in terms of them, together with an axiom expressing that men and women are disjoint categories:

    { man(john).  woman(mary). }
    { person(X) <- man(X).  person(X) <- woman(X). }
    forall X. not (man(X) and woman(X)).
The syntax and semantics of ID-logic have been defined in [59,65], partly in collaboration with Prof. Ternovska from Simon Fraser University. In [64] we demonstrated the role of ID-logic for knowledge representation by showing how situation calculus, a well-known AI formalism for temporal reasoning, contains hidden forms of inductive definitions and has an elegant formalisation in ID-logic. The methodology of ID-logic and its relationship to answer set programming have been studied in [64,98,60]. We have also studied the computational complexity of inference in ID-logic [61,151], and we have begun studying extensions of definitions with aggregates in the context of the PhD work of Dr. Pelov [152,151,150].
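Inductive definitions over open predicates are the central construct of ID-logic; a standard illustration (our example, not taken from the report) is graph reachability, defined inductively in terms of an open predicate edge:

    { reach(X,Y) <- edge(X,Y).
      reach(X,Y) <- edge(X,Z) and reach(Z,Y). }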
At the computational level, the work on ID-logic solvers is still at an early stage, but some results have been obtained. One important result is the development of the A-system [118,172]. This system implements abductive inference in the context of a sublogic of ID-logic, and it is at the moment one of the best abductive systems available. It has been developed in the context of the PhD work of Dr. Van Nuffelen at K.U.Leuven, in collaboration with Prof. Kakas of the University of Cyprus (a WASP partner). The system integrates CLP techniques, open functions, and aggregates. It is available at http://www.cs.kuleuven.ac.be/~bertv/Asystem/.
We have also started developing a model generator for ID-logic. A first version exploits a translation from ID-logic to answer set programming presented in [98] and uses Smodels to generate models of the input ID-logic theory. It is available at http://www.cs.kuleuven.ac.be/~maartenm. We have started a new project to build a model generator tuned specifically for ID-logic.
In this section we summarize available implementations of language extensions described in the report.
abduction with penalization: implemented as a frontend of DLV (http://www.dlvsystem.com)
plp (http://www.cs.uni-potsdam.de/~torsten/plp)
GCplp (http://www.cs.uni-potsdam.de/~konczak/system/GCplp)
psmodels (http://www.tcs.hut.fi/Software/smodels/priority/)
OCT (http://www.cs.bath.ac.uk/~mdv/oct/)
DLV^K (action language K): implemented as a frontend of DLV (http://www.dbai.tuwien.ac.at/proj/dlv/K/)
E-RES system for the action language E (http://www2.cs.ucy.ac.cy/~pslogic/)
NLP-DL (http://www.kr.tuwien.ac.at/staff/roman/semweblp/)
This section recaps interesting software engineering issues for ASP.
The predominant programming methodology for ASP is the generate and test technique briefly explained in Sections 2.1 and 3.2. This methodology has also motivated many of the language extensions. For example, disjunctions and cardinality and weight constraints have been found very useful for programming the generation of solution candidates, while, e.g., aggregates and preferences are useful for testing solution candidates.
Many of the language extensions have been implemented using a technique where the idea is to exploit an efficient implementation of a basic ASP language and implement new language features by compiling them to this basic language. For example, DLV (http://www.dlvsystem.com) and Smodels (http://www.tcs.hut.fi/Software/smodels/) have been extensively used as such core engines for implementing new language extensions ranging from preferences to action languages.
To complement the core engine approach there is new work on combining ASP engines with other approaches such as satisfiability (SAT) checkers and constraint programming systems. One trend is to use SAT checkers to implement answer set computation as, e.g., in ASSAT (http://assat.cs.ust.hk/) and Cmodels (http://www.cs.utexas.edu/users/tag/cmodels.html). Another interesting line of development is to integrate ASP solvers and constraint programming systems. The Smodels-ag system (http://www.cs.nmsu.edu/~ielkaban/smodels-ag.html) is an example of this.
The need for better programming tools such as debuggers and tracers has been widely recognized. Now there is interesting recent work in this area, for example, in the IDEAS system [18]. Another interesting active area is equivalence testing of logic programs. Equivalence testing tools could provide a useful aid for step-wise development of an ASP program where the correctness of modifications or refinements would be checked against previous versions of the program. Interesting tools for equivalence testing are already emerging such as SELP [38], setest (http://www.kr.tuwien.ac.at/students/prak_setest/) and lpeq (http://www.tcs.hut.fi/Software/lpeq/).