Alternative Approaches to Default Logic

Artificial Intelligence 70 (1994) 167-237

James P. Delgrande a,*, Torsten Schaub b,1, W. Ken Jackson a

a School of Computing Science, Simon Fraser University, Burnaby, British Columbia, Canada V5A 1S6
b Fachgebiet Intellektik, Technische Hochschule Darmstadt, Alexanderstraße 10, D-6100 Darmstadt, Germany

Received July 1992; revised July 1993

* Corresponding author. E-mail: [email protected].
1 Current address: IRISA, Campus de Beaulieu, 35042 Rennes Cedex, France.

Abstract

Reiter's default logic has proven to be an enduring and versatile approach to nonmonotonic reasoning. Subsequent work in default logic has concentrated in two major areas. First, modifications have been developed to extend and augment the approach. Second, there has been ongoing interest in semantic foundations for default logic. In this paper, a number of variants of default logic are developed to address differing intuitions arising from the original and subsequent formulations. First, we modify the manner in which consistency is used in the definition of a default extension. The idea is that a global rather than local notion of consistency is employed in the formation of a default extension. Second, we argue that in some situations the requirement of proving the antecedent of a default is too strong. A second variant of default logic is developed where this requirement is dropped; subsequently these approaches are combined, leading to a final variant. These variants then lead to default systems which conform to alternative intuitions regarding default reasoning. For all of these approaches, a fixed-point and a pseudo-iterative definition are given; as well, a semantic characterisation of these approaches is provided. In the combined approach we argue also that one can now reason about a set of defaults and can determine, for example, if a particular default in a set is redundant. We show the relation of this work to that of Lukaszewicz and Brewka, and to the Theorist system.

1. Introduction

Reiter's default logic [18] is one of the best known approaches to nonmonotonic reasoning. In this approach, domain-specific rules of inference or defaults are added to classical propositional or first-order logic to capture patterns of inference


of the form "in the absence of information to the contrary, conclude that ...". An example of a default representing the assertion "Birds fly" is

Bird(x) : Fly(x) / Fly(x)

This rule is roughly interpreted as "if something can be inferred to be a bird, and if that thing can be consistently assumed to fly, then infer that it does fly". The meaning of a rule then rests on notions of provability and consistency with respect to a given set of beliefs. A set of beliefs sanctioned by a set of defaults, with respect to an initial set of facts, is called an extension of this set of facts. However, as discussed in the next section, this formalism lacks several important properties, including existence of extensions and semi-monotonicity [18], and cumulativity [15]. In addition, differing intuitions concerning the role of default rules lead to differing opinions concerning other properties, most notably that of "commitment to assumptions" [17]. With regard to commitment to assumptions, the general idea is that the notion of consistency can be employed in either a broad or "global" fashion, or in a more limited or "local" fashion. Various proposals have been put forward to address these issues and difficulties; as we show, however, none of these proposals considers the full set of issues.

Section 3 describes a variant of default logic, called constrained default logic. This variant essentially marries and extends the work found in [7] and [21]. In this variant, a default extension is characterised by a set of formulas (as before) giving the set of default conclusions, and a second set of formulas giving the "context of reasoning", determined by the justifications and conclusions of applied defaults. Consistency is treated in a "global" fashion, and the justifications of default rules are taken as expressing underlying assumptions, rather than a notion of possibility. As a result, this variant conforms with the notion of commitment to assumptions. Furthermore, the problems of semi-monotonicity and existence of extensions are addressed, and in a simpler framework than other proposals.

It may also be argued that in certain situations default logic is overly weak, in that desirable conclusions are not forthcoming. We argue that in such cases the requirement that the antecedent be provable is too strong, and in Section 4 present a variant where defaults are replaced by prerequisite-free counterparts. The omission of prerequisite conditions leads to a simpler formulation, wherein one may reason by modus tollens and by cases. This variant is referred to as prerequisite-free default logic. This section also examines the system obtained by joining constrained default logic with prerequisite-free default logic. In this variant, problems that arise in the original formulation are fully addressed, including that of cumulativity, and in a simpler system than other variants. We also show how one can now reason about a default in the metatheory to determine, for example, if it may never be applicable in the formation of an extension.

For each of these variants, fixed-point, iterative, and semantic characterisations are given and shown to be equivalent. As well, in Section 5, the relation to


previous work by Lukaszewicz [13] and Brewka [3] is given; in addition we demonstrate the relation of our modifications to the Theorist system [16]. Hence the results reported in this paper provide links among these systems. Appendix A contains two tables summarising the systems discussed here. The first table lists the names and abbreviations of these systems, together with the sections in which they are discussed. The second table contains a summary of properties of the variants of default logic. Appendix B contains the proofs of the results stated in the body of the paper.

2. Default logic

2.1. Introduction to default logic

Default logic (or DL) extends classical first-order logic by the addition of domain-specific rules of inference of the form

α(x) : β(x) / ω(x)

A rule may be informally interpreted as: "If, for some set of ground instances c, α(c) is provable from what is known and β(c) is consistent, then conclude by default that ω(c)". α(x) is called the prerequisite; β(x) is the justification; and ω(x) is the consequent. For convenience, we denote the prerequisite of a default δ by Prereq(δ), its justification by Justif(δ) and its consequent by Conseq(δ). These projections extend to sets of defaults in the obvious way. A normal default is one where the justification and consequent are equivalent. A semi-normal default is one where the consequent is a logical consequence of the justification. Almost all "naturally occurring" defaults are normal [10], although semi-normal defaults are required for "interacting" defaults [19].

A default theory is a pair (D, W) where D is a set of defaults and W is a set of closed first-order (or propositional) formulas. 2 A closed default theory is one where none of the formulas in the defaults contains a free variable. 3 Also, a default theory is said to be normal or semi-normal if it contains only normal or semi-normal defaults, respectively.

The set of defaults is intended to capture hypothetical or non-strict inferences. However, defaults differ markedly from standard rules of inference in an axiomatic specification of a logic. First, they are "domain-specific" in that a rule makes reference to specific formulas. Second, these rules allow for inferences based not only on what can be proven from the facts W (as given in a rule's prerequisite), but also from what cannot be proven (as given in the justification).

2 In what follows, we simply say formula instead of closed first-order formula.
3 For simplicity we deal only with closed default theories; open theories are described in [18]. Also we restrict ourselves to default rules with only a single justification. In the systems we develop, multiple justifications correspond to their conjunction.
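The following minimal Python sketch is not from the paper; it merely fixes a concrete representation for default rules and the Prereq/Justif/Conseq projections. The class and function names are ours, and is_normal approximates equivalence by a purely syntactic comparison.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Default:
    prereq: str   # Prereq(delta): the prerequisite alpha
    justif: str   # Justif(delta): the justification beta
    conseq: str   # Conseq(delta): the consequent omega

def is_normal(d: Default) -> bool:
    # a normal default has equivalent justification and consequent;
    # here this is approximated by a purely syntactic check
    return d.justif == d.conseq

# "Birds fly" for a particular individual, written propositionally
birds_fly = Default(prereq="Bird", justif="Fly", conseq="Fly")
print(is_normal(birds_fly))   # True
```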


A set of defaults induces one or more extensions of the formulas in W. An extension can be viewed as a maximal acceptable set of beliefs that one may hold about the world W, suitably augmented by the defaults D. Reiter specifies three properties which should hold for an extension: it should contain the initial set of facts W; it should be deductively closed; and for each default rule, if the prerequisite is in an extension, but the negation of the justification is not, then the consequent is in the extension. If Th(S) is used to denote the deductive closure of S, we obtain:

Definition 2.1 (Reiter [18]). Let (D, W) be a default theory. For any set S of formulas, let Γ(S) be the smallest set satisfying the following properties:
(1) W ⊆ Γ(S),
(2) Γ(S) = Th(Γ(S)),
(3) if α : β / ω ∈ D and α ∈ Γ(S) and ¬β ∉ S, then ω ∈ Γ(S).
Then E is an extension of the default theory (D, W) iff Γ(E) = E. That is, E is a fixed point of Γ.

A pseudo-iterative specification of an extension is also given:

Theorem 2.2 (Reiter [18]). Let (D, W) be a default theory and let E be a set of formulas. Define

E0 = W ,   and for i ≥ 0

Ei+1 = Th(Ei) ∪ { ω | α : β / ω ∈ D, α ∈ Ei, ¬β ∉ E } .

Then E is an extension for (D, W) iff E = ∪i≥0 Ei.

The above procedure is not strictly iterative since E appears in the specification of Ei+1.

If we had, for example, that Quakers are typically pacifists while Republicans typically are not, i.e.

D = { Q(x) : P(x) / P(x) ,  R(x) : ¬P(x) / ¬P(x) } ,

then, for W = {Q(sue), R(sue)}, there are two extensions, one in which P(sue) is true and one in which ¬P(sue) is true. If

D' = D ∪ { Q(x) : G(x) / G(x) ,  A(x) : E(x) / E(x) }

(say, Quakers are typically generous and adults are typically employed) then again we obtain two extensions where P(sue) is true in one and ¬P(sue) is true in the other, and where additionally G(sue) is true in both, and E(sue) is true in neither.
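As an illustration of Theorem 2.2 (a sketch of ours, not from the paper), the following Python fragment checks whether a candidate set of formulas generates an extension of a finite propositional default theory. Formulas are written as Python boolean expressions over named atoms, entailment is decided by truth-table enumeration, and the Quaker/Republican theory above (instantiated for sue) is the test case.

```python
from itertools import product

def models(atoms):
    # every truth assignment over the given atoms
    for values in product([False, True], repeat=len(atoms)):
        yield dict(zip(atoms, values))

def entails(premises, formula, atoms):
    # classical entailment by truth tables: every model of the premises satisfies formula
    return all(eval(formula, {}, m)
               for m in models(atoms)
               if all(eval(p, {}, m) for p in premises))

def is_extension(W, D, base, atoms):
    # Theorem 2.2: Th(base) is an extension iff it equals the closure of W
    # under those defaults whose prerequisite is derivable and whose
    # justification is not refuted by the candidate extension itself
    S = list(W)
    changed = True
    while changed:
        changed = False
        for prereq, justif, conseq in D:
            applicable = (entails(S, prereq, atoms)
                          and not entails(base, "not (" + justif + ")", atoms))
            if applicable and conseq not in S:
                S.append(conseq)
                changed = True
    return (all(entails(S, g, atoms) for g in base)
            and all(entails(base, s, atoms) for s in S))

atoms = ["Q", "R", "P"]
W = ["Q", "R"]
D = [("Q", "P", "P"), ("R", "not P", "not P")]
print(is_extension(W, D, W + ["P"], atoms))      # True:  Th({Q, R, P})
print(is_extension(W, D, W + ["not P"], atoms))  # True:  Th({Q, R, not P})
print(is_extension(W, D, W, atoms))              # False: both defaults would fire
```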


While these characterisations of a default theory are clear and intuitive, they nonetheless are essentially syntactic in nature. As a first step in developing semantic underpinnings for DL, Lukaszewicz [12] provides a semantic characterisation of normal default theories. The general idea is that every normal default can be regarded as a mapping from sets of models into sets of models, such that the range of the mapping is the subset of the domain where the consequent is true. In [9, 10] semantic characterisations of general default theories are given, based on a notion of preference between sets of models. A default δ prefers a set of models Π1 in which the consequent of the default holds, over a superset of models Π2 where the prerequisite is true and the justification is consistent but the consequent is not necessarily satisfied. If we let MOD(W) denote the set of all models of W, the preference relation ≥δ is defined as follows:

Definition 2.3 (Etherington [10]). Let δ = α : β / ω be a default rule, Π a set of models and Π1, Π2 ∈ 2^Π. The relation corresponding to δ, ≥δ, is defined as follows: Π1 ≥δ Π2 iff
(1) for every π ∈ Π2 we have π ⊨ α,
(2) there is π ∈ Π2 such that π ⊨ β,
(3) Π1 = { π ∈ Π2 | π ⊨ ω }.

This order is easily extended to a set of defaults D:

Definition 2.4 (Etherington [10]). Let D be a set of default rules, Π a set of models and Π1, Π2 ∈ 2^Π. The relation corresponding to D, ≥D, is the transitive closure of the union of the relations ≥δ for every δ ∈ D: Π1 ≥D Π2 iff
(1) there exists δ ∈ D such that Π1 ≥δ Π2, or else
(2) there is a Π3 ∈ 2^Π such that Π1 ≥D Π3 and Π3 ≥D Π2.

For normal default theories, we need only consider the ≥D-maximal elements of 2^MOD(W) [12]. However, for general default theories, we need to ensure that the justification of each default is consistent with the final extension. Etherington defines a property which he calls stability that ensures this condition:

Definition 2.5 (Etherington [10]). Let (D, W) be a default theory and Π ∈ 2^MOD(W). Π is stable for (D, W) iff there is a set of default rules D' ⊆ D such that
(1) Π ≥D' MOD(W), and
(2) for every α : β / ω ∈ D', we have for some π ∈ Π that π ⊨ β.

Stable, maximal sets of models for a default theory are shown to correspond to extensions of the default theory, thus providing an analogue to soundness and completeness results.


2.2. Properties of default logic

There are a number of properties which are not present in DL. These may be roughly divided into, first, limitations of the formalism itself, and, second, properties that arise due to differing intuitions concerning the nature of a default rule. This division is in some sense an arbitrary one, and some properties that may be problematic to one person or application may appear to be quite reasonable to another. Since the examples illustrating these properties are used throughout the paper, they are developed in some detail here. First, it is not always the case that a default theory has an extension.

Example 2.6 (Existence of extensions). The default theory ({ : A / ¬A }, ∅) has no extension.

Normal default theories guarantee the existence of extensions, which is not the case for semi-normal default theories. The next example illustrates the property of semi-monotonicity and thus concerns increasing the set of defaults in a theory. Intuitively, we would want that if D' ⊆ D for two sets of defaults, then if E' is an extension of D' then there is an extension E of D where E' ⊆ E. While this indeed is the case for normal defaults [18, p. 96], the next example, adapted from [13], illustrates that it is not so in general.

Example 2.7 (Semi-monotonicity).
• The default theory ({ A : B ∧ ¬C / B }, {A}) has a unique extension: Th({A, B}).
• The default theory ({ A : B ∧ ¬C / B , A : C / C }, {A}) has a unique extension: Th({A, C}).

As Reiter [18] discusses, semi-monotonicity has the important practical consequence that it allows for local proof procedures that may discard some of the defaults. Moreover, if we interpret justifications as supplying tentative assumptions (or working hypotheses) then the fact that we obtain only one extension, Th({A, C}), in the second part of the preceding example is arguably unintuitive: if we know only W = {A} originally, then informally it seems that the first default should be "applicable". The argument might run: "Initially I know nothing at all; hence B ∧ ¬C is consistent and I can conclude that B. However, C is inconsistent with my original assumption of B ∧ ¬C, and so I cannot apply the second default". Thus we obtain an extension Th({A, B}) by committing to B ∧ ¬C.


Similar reasoning beginning with the second default yields the second extension Th({A, C}). Under this argument, we would obtain semi-monotonicity, since adding a default would only enlarge or preserve existing extensions. Note that this interpretation of justifications as constituting "working hypotheses" is just that underlying the failure to commit to assumptions, discussed below. However, if we don't treat justifications as constituting working hypotheses, but take them simply as consistency conditions (as in the original formulation), then in other cases this behaviour can be seen to be quite reasonable. Consider an interpretation of Example 2.7 expressing the information that, wherever I am, if a television picks up Channel 19 and it is possible to turn to the theatre programme and the reception is not poor, then I will turn to the theatre programme. That is, A is "The television receives Channel 19"; B is "I turn to the theatre programme"; C is "The reception is poor". Given the first default theory I will turn to the programme by default. However, if we also knew that Channel 19 typically gets poor reception (i.e. A : C / C), then arguably this default overrides the previous one, and by default I don't attempt to watch the programme. Since extensions are intended to represent maximal consistent sets of beliefs, one would expect that distinct extensions would be inconsistent. In a normal default theory this indeed is the case; however, it is not the case for semi-normal default theories.

Example 2.8 (Orthogonality of extensions). The default theory

({ : A ∧ ¬B / ¬B , : ¬A ∧ B / ¬A }, ∅)

has two extensions: Th({¬B}) and Th({¬A}).

In Section 3 we suggest that a more appropriate property is weak orthogonality where, if a theory has distinct extensions, then the extensions together with a "context of reasoning", supplied by the set of justifications of applied defaults, are inconsistent. The next example, dealing with cumulativity, is due to David Makinson [15]. The intuitive idea is that if a theorem is added to the set of premises from which the theorem was derived, then the set of derivable formulas should remain unchanged. Unfortunately this is not the case for DL:

Example 2.9 (Makinson [15], Cumulativity).
• The default theory ({ : A / A , A ∨ B : ¬A / ¬A }, ∅) has one extension: Th({A}). Consequently the extension contains A ∨ B.




• The default theory ({ : A / A , A ∨ B : ¬A / ¬A }, {A ∨ B}) has two extensions: Th({A}) and Th({¬A, B}).

This example suggests that the failure of cumulativity is caused by changing the way in which a default's prerequisite has been derived. In the first part of the example, the prerequisite A ∨ B is derived by default from the application of the rule : A / A. Here the second default is inapplicable. In the second part, A ∨ B constitutes the world knowledge W, so that now we obtain a second extension generated by the second default. However, the failure of cumulativity can also be caused by changing the way in which a justification of a default is refuted, as the next example illustrates.

Example 2.10 (Cumulativity).
• The default theory ({ A : B / B ∧ C , A : ¬B / ¬B , A : ¬C / B }, {A}) has one extension: Th({A, B, C}). Consequently the extension contains C.
• The default theory ({ A : B / B ∧ C , A : ¬B / ¬B , A : ¬C / B }, {A, C}) has two extensions: Th({A, B, C}) and Th({A, ¬B, C}).

In the case of the first default theory, the third default is only blocked whenever the first default applies. This is because the consequent of the first default, B ∧ C, implies C, which contradicts the justification of the third default, ¬C. As a consequence, there is no extension generated by the second default. This changes when the default conclusion C is added to the set of facts, as done in the second default theory. Now, we additionally obtain a second extension generated by the second default.

Another issue is commitment to assumptions [17]:

Example 2.11 (Commitment to assumptions). The default theory ({ : B / C , : ¬B / D }, ∅) has one extension: Th({C, D}).

If we take justifications as specifying strict consistency conditions (as Definition 2.1 suggests) then this is fine. However, an alternative view takes justifications as


providing "implicit assumptions" or "working hypotheses". Thus one might argue that the default conclusion CAD is implausible, since both default rules are applicable even though they have contradictory justifications. That is, the default conclusion C relies on B being assumed as the consistency condition, whereas D relies on -~B. The implausibility arises from the fact that B and -~B cannot jointly hold, and so it would seem that C and D should not jointly hold. This may have further effects, as the following shows: Example 2.11 (continued). The default theory

C'

D '

E

'

F

J' /

has one extension: Th((C, D}). Thus, a family may decide to do one of two things, go to the beach or to a movie (C or D ) on the weekend, depending on whether it is sunny (B) or not. Of the two children, one doesn't like the beach, the other doesn't like going to movies. So Chris will be happy (E) by default if they don't go to the beach, and Leslie will be happy (F) if they don't go to a movie. We would expect to have three extensions, one in which (C, F ) are true; one in which {D, E} are true; and one in which {E, F ) are true. Thus in the first case, assuming that it is sunny, the family goes to the beach and Leslie is happy; in the second case the family goes to a movie and Chris is happy; in the last case, based on the assumption that the family goes to neither the beach nor to a movie, both children are happy. H o w e v e r in D / w e conclude only that {C, D}, and the family goes to both the beach and a movie. The situation is even worse if we assert that going to the beach and going to a movie are mutually exclusive, and so W = { ~ C v - ~ D } . In this case in DL we have no extensions. This p h e n o m e n o n shows up more subtly in the "broken arms" example: Example 2.12 (Poole [17], Commitment to assumptions). The default theory ({ : Z BAA , :CAD}c , {-~B v --qD}) has one extension: Th({A, C)). Since one of B or D must be false, one of the default conditions, it would seem, cannot hold, and so both default conclusions should not jointly hold. Consider, concretely, where we assert that by default a person's left arm is usable (A) unless it is broken (-nB); similarly a person's right arm is usable (C) unless it is broken (-nD). The preceding theory then directs us to conclude that both arms are usable, even if one of them is known to be broken ( - n B v - n D ) . The difficulty, from a technical point of view, is that in Definition 2.1 (and in T h e o r e m 2.2) justifications need only be individually consistent with an extension.
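The point about individual versus joint consistency can be checked mechanically. The following small Python sketch (ours, not from the paper; formulas are written as Python boolean expressions) shows that each justification of the broken-arms theory is separately consistent with the extension Th({A, C}) together with W, while the two justifications together are not.

```python
from itertools import product

def satisfiable(formulas, atoms):
    # classical satisfiability by truth-table enumeration
    for values in product([False, True], repeat=len(atoms)):
        model = dict(zip(atoms, values))
        if all(eval(f, {}, model) for f in formulas):
            return True
    return False

atoms = ["A", "B", "C", "D"]
extension = ["A", "C", "not B or not D"]   # Th({A, C}) together with W
justifs = ["A and B", "C and D"]           # justifications of the two defaults

# each justification is individually consistent with the extension ...
print(all(satisfiable(extension + [j], atoms) for j in justifs))  # True
# ... but the justifications are not jointly consistent with it
print(satisfiable(extension + justifs, atoms))                    # False
```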


However, the above line of argument relies on the view that justifications in default rules are intended to function as underlying assumptions. The original formulation of DL, though, took justifications as straightforward consistency conditions. Consider the interpretation of Example 2.11, where we wish to use the default rules to help us decide what to take on a trip: if it is consistent (i.e. possible) that it will be hot (B) then take a T-shirt (C); if it is consistent that it will not be hot then take a sweater (D). So now if I know nothing about the weather I take a T-shirt and a sweater. In this case our intended interpretation of the justifications in the rules is closer to "is possible". Hence it makes sense to conclude by default {C, D}, even though the justifications are jointly inconsistent. The failure of DL to commit to assumptions then may be seen not as a "bug", but rather as reflecting an alternative intuition concerning default rules. In this paper, as mentioned, we primarily explore alternative formulations in which we obtain commitment to assumptions.

There are a number of further properties that one might expect of default inferences that are not present in DL. These, like commitment to assumptions, are a matter of one's intuitions, but here the intuitions concern the intended interpretation of a default rule as a whole, rather than the justification alone. First, DL provides a notion akin to defeasible modus ponens, in that from W = {A} and A : B / B, if B is consistent with A we can infer by default that B. However, this does not extend to modus tollens, or reasoning by cases.

Example 2.13 (Modus tollens). The default theory

({ B : F / F }, {¬F})

has one extension: Th({¬F}). Hence, for example, if birds fly by default 4 and we know that a given individual does not fly, then we cannot conclude by default that the individual is not a bird.

Example 2.14 (Reasoning by cases). The default theory

({ A : B / B , ¬A : B / B }, ∅)

has one extension: Th(∅). While A ∨ ¬A is a theorem, neither A nor ¬A alone is provable, and so neither default can be applied. This situation also arises where more than one class may have the same default property:

4 For simplicity we use a propositional "gloss" in some of the examples, rather than the more appropriate B(a) : F(a) / F(a).


Example 2.14 (continued). The default theory

({ Q : P / P , V : P / P }, {Q ∨ V})

has one extension: Th({Q ∨ V}). Thus if Quakers and vegetarians are both pacifists by default, and if someone is either a Quaker or a vegetarian, we still conclude nothing about pacifism.

These examples rely on the implicit assumption that a default rule should behave like classical implication, except that it is defeasible. Such an assumption also seems to underlie the work of Reiter and Criscuolo [19]. There, for example, it is assumed that defaults of the form A : B / B and B : C / C should be transitive unless explicitly blocked. Consequently if W = {A} we would conclude C by default, unless the transitivity was explicitly blocked according to the recipe given in that paper. If one accepts the argument that defaults behave like defeasible implications, then it makes sense that one be able to reason using the "contrapositive" of a default, unless it is explicitly blocked. However, again, this argument is not universally applicable. For example, in a diagnostic program, we might want to express that people are normally not diabetic, i.e.

person : ¬diabetic / ¬diabetic ;

clearly in this case we would not want to employ the contrapositive to conclude that a diabetic was by default not a person. In Section 4 we address the issues raised by the last two examples.

Finally, while one can draw default conclusions in DL, one cannot reason about defaults. In classical logic, for example, if α ⊃ ω and β ⊃ ω are true, then α ∨ β ⊃ ω must also be true. In DL, there is no explicit connection between α : ω / ω, β : ω / ω and α ∨ β : ω / ω, even though we know intuitively that the third default can be applied whenever either of the first two can. That is, in DL, in order to determine whether a default is applicable (for some extension) in a given default theory, we must effectively compute the extensions of the theory. In Section 4.3 we address the issue of reasoning about defaults. In particular, we show that, in the final variant of default logic that we develop, it is possible to deductively reason about defaults in the metatheory to determine, for example, whether the applicability conditions of one default are subsumed by those of others.

2.3. Related work

This section introduces subsequent work in default logic. The major approaches described here are discussed in detail in Section 5, where they are compared to the presented framework. Lukaszewicz [13] presents a variant of DL that addresses the issues of semi-monotonicity and existence of extensions. In this approach, a two-place fixed-point operator is used in the definition of an extension. The first argument


accumulates the consequents of applied defaults, while the second accumulates the justifications. For a default to be applicable, Lukaszewicz requires that a consequent be consistent with the consequents of applied defaults, and also consistent with the individual justifications of the other applied defaults. The approach guarantees semi-monotonicity and the existence of extensions. It does not address cumulativity or commitment to assumptions.

These last two properties are considered in [3], where a variant called cumulative default logic is described. Two modifications to the original formulation are introduced. First, the applicability condition for a default is strengthened so that a justification must be consistent with both the set of consequents and the set of justifications of the applied defaults. The set of justifications is kept track of by introducing assertions, where an assertion is a first-order formula (the asserted formula) together with a set of reasons (or support) for believing the formula. An extension consists of a set of assertions, where the first element of each assertion consists of either world knowledge or the consequent of an applied default; in the latter case, the second element is the set of justifications and consequents of the other defaults that enabled the default to be applied. This approach yields a variant that makes explicit the notion that justifications are to be employed as tentative assumptions. This variant is cumulative and commits to assumptions. As well, it is semi-monotonic and guarantees the existence of extensions. Brewka et al. [5] identify a difficulty with this approach called the "floating conclusions" problem. The difficulty essentially is that, since extensions are no longer sets of first-order formulas but are sets of assertions, it is unclear (at best) how the extensions can be joined to yield conclusions common to every extension. While [3] provides a fixed-point and pseudo-iterative specification of an extension in cumulative default logic, it does not provide a semantic characterisation. This omission is addressed in [20] where an extension of Etherington's semantics is presented, along with a semantical appraisal of the approach.

Delgrande and Jackson [7] presents a series of syntactic variants of DL. One of these variants essentially combines the approaches of Lukaszewicz and Brewka, and corresponds to the system described in [21]. In this approach, justifications of applied defaults must be consistent as a group; extensions are as in the original formulation and so the floating conclusions problem does not arise. This approach is further elaborated in [23], where the relationship to the Brewka and Lukaszewicz variants is also investigated.

Examples 2.13 and 2.14 illustrate that there are arguably useful situations where default rules cannot be applied. Besnard [1] describes a transformation of normal default rules which permits modus tollens and reasoning by cases with default rules. Delgrande and Jackson [7] independently proposes the same transformation, but extended to semi-normal defaults.

3. Constrained default logic

The "broken arms" example (Example 2.12) illustrates that, depending on one's intuitions, DL may produce conclusions that are stronger than desired. The


example suggests that the set of justifications used in the specification of an extension should be jointly consistent, rather than each individual justification. In this section we develop and explore this intuition. We introduce the notion of a constrained extension and call the resulting system constrained default logic or ConDL.

A constrained extension is composed of two sets of formulas E and C, where E ⊆ C. The extension E contains all formulas which are assumed to be true and the set of constraints C consists of E and the justifications of all applied defaults. In this approach, we regard the consistency assumptions (i.e. the justifications) as constraints on a given extension. This is illustrated in Fig. 1. The figure illustrates the natural set inclusion between the facts W, their deductive closure Th(W), the extension E, and its constraints C. In this respect Th(W) constitutes a lower bound whereas the constraints C constitute an upper bound for our set of beliefs given by E.

For a default α : β / ω to apply in DL, its prerequisite α must be in E and its justification β must be consistent with E. In ConDL, however, the prerequisite α must be in the extension E whereas the consistency of the justification β is checked with respect to the set of constraints C. The constraints can be regarded as a context established by the premises given in W, the nonmonotonic theorems (i.e. conclusions derived by means of defaults), and the underlying consistency assumptions. In this sense, ConDL naturally extends the intrinsic context-sensitive character of defaults by distinguishing between our set of beliefs given in the extension, and the underlying constraints that form a context guiding our beliefs. While this slightly complicates the definition of an extension, it also means that rules and extensions are now represented uniformly, in that both consist of a consistency condition along with conclusions based on the consistency conditions.

Definition 3.1. Let (D, W) be a default theory. For any set of formulas T let Γ(T) be the pair of smallest sets of formulas (S', T') such that
(1) W ⊆ S' ⊆ T',
(2) S' = Th(S') and T' = Th(T'),
(3) for any α : β / ω ∈ D, if α ∈ S' and T ∪ {β} ∪ {ω} ⊬ ⊥, then ω ∈ S' and β ∧ ω ∈ T'.
A pair of sets of formulas (E, C) is a constrained extension of (D, W) iff Γ(C) = (E, C).

Fig. 1. A constrained extension (E, C) of a default theory (D, W).


The set of constraints is generated by accumulating the justifications from the applied defaults along with the conclusions. Thus, each justification is jointly consistent with the extension and all other justifications of applied defaults. Compared with Definition 2.1, the fixed-point condition relies only on the constraints (T). Intuitively, this means that our context of reasoning has to coincide with our set of accumulated constraints. An immediate consequence of this definition is that semi-normal and general default theories are equivalent. This is seen by noting that the definition coincides for defaults of the form α : β / ω and α : β ∧ ω / ω.

For the default theory ({ A : B / C }, {A}), instead of a "flat" extension Th({A, C}) as in DL, we obtain in ConDL an extension that is embedded in a context, viz. the constrained extension (Th({A, C}), Th({A, B, C})). For the broken arms example (Example 2.12), we now obtain two constrained extensions.

Example 3.2. The default theory

({ : A ∧ B / A , : C ∧ D / C }, {¬B ∨ ¬D})

has two constrained extensions: (Th({A, ¬B ∨ ¬D}), Th({A ∧ B, ¬D})) and (Th({C, ¬B ∨ ¬D}), Th({C ∧ D, ¬B})).

We obtain one constrained extension in which A is true and the value of C is unspecified, and another in which C is true and the value of A is unspecified. In the first constrained extension, the constraints consist of the justification A ∧ B from the first default and ¬D from the world knowledge. In the second constrained extension, the constraints contain the justification C ∧ D from the second default and ¬B from the world knowledge. In Example 2.7 we now also obtain two constrained extensions. This reflects the fact that ConDL is semi-monotonic (cf. Theorem 3.8 below).

Example 3.3. The default theory

({ A : B ∧ ¬C / B , A : C / C }, {A})

has two constrained extensions: (Th({A, B}), Th({A, B ∧ ¬C})) and (Th({A, C}), Th({A, C})).

Thus, in a constrained extension (E, C), the extension E represents what we believe about the world whereas the constraints C tell us what assumptions we have made in order to adopt our beliefs. Hence, an extension is our envisioning of


how things are, whereas the constraints represent our assumptions in drawing these conclusions. Similar to [18], we are able to provide a more intuitive but still "pseudo-iterative" characterisation of constrained extensions:

Theorem 3.4. Let (D, W) be a default theory and let E and C be sets of formulas. Define

E0 = W ,   C0 = W ,

and for i ≥ 0

Ei+1 = Th(Ei) ∪ { ω | α : β / ω ∈ D, α ∈ Ei, C ∪ {β} ∪ {ω} ⊬ ⊥ } ,
Ci+1 = Th(Ci) ∪ { β ∧ ω | α : β / ω ∈ D, α ∈ Ei, C ∪ {β} ∪ {ω} ⊬ ⊥ } .

(E, C) is a constrained extension of (D, W) iff (E, C) = (∪i≥0 Ei, ∪i≥0 Ci).

When computing an extension, reference is made to the previous partial extension Ei whereas the consistency is checked with respect to all constraints. Thus, we obtain that constrained extensions are uniquely determined by their sets of constraints, in that if (E, C) and (E', C') are constrained extensions where C = C', then E = E'.
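To make the pseudo-iterative reading concrete, here is a small Python sketch of ours (not from the paper) that builds one constrained extension of a finite propositional theory by applying defaults greedily, checking the joint consistency of the constraints, the justification and the consequent as in Definition 3.1 and Theorem 3.4; semi-monotonicity (Theorem 3.8 below) is what licenses this one-default-at-a-time construction. Example 3.3 above is used as input, with formulas written as Python boolean expressions.

```python
from itertools import product

def inconsistent(formulas, atoms):
    # True iff the set of formulas has no model (truth-table check)
    for values in product([False, True], repeat=len(atoms)):
        model = dict(zip(atoms, values))
        if all(eval(f, {}, model) for f in formulas):
            return False
    return True

def entails(premises, formula, atoms):
    return inconsistent(premises + ["not (" + formula + ")"], atoms)

def constrained_extension(W, defaults, atoms):
    # apply one applicable default after another: alpha : beta / omega applies
    # if alpha follows from E and C + {beta, omega} stays consistent
    E, C = list(W), list(W)
    changed = True
    while changed:
        changed = False
        for alpha, beta, omega in defaults:
            if (entails(E, alpha, atoms)
                    and not inconsistent(C + [beta, omega], atoms)
                    and omega not in E):
                E.append(omega)
                C.extend([beta, omega])
                changed = True
    return E, C

atoms = ["A", "B", "C"]
W = ["A"]
D = [("A", "B and not C", "B"), ("A", "C", "C")]           # Example 3.3
print(constrained_extension(W, D, atoms))                  # (['A', 'B'], ['A', 'B and not C', 'B'])
print(constrained_extension(W, list(reversed(D)), atoms))  # (['A', 'C'], ['A', 'C', 'C'])
```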

3.1. Properties of constrained default logic

The preceding discussion demonstrates that ConDL commits to assumptions. Constrained extensions are maximal, i.e. for constrained extensions (E, C) and (E', C') of (D, W) we have that if E ⊆ E' and C ⊆ C' then E = E' and C = C' (cf. [22, Theorem 4.3.4]). However, the example demonstrating the failure of cumulativity given for DL carries over to ConDL as well, and so the system is not cumulative. Also, the set of beliefs in an extension alone is not necessarily maximal, as the next example illustrates [13].

Example 3.5. The default theory ({ : B / A , : D / ¬B }, {A}) has two constrained extensions: (Th({A}), Th({A, B})) and (Th({A, ¬B}), Th({A, ¬B, D})). So the actual extension of the first constrained extension is included in the extension of the second constrained extension.

Notably, ConDL guarantees the existence of extensions and possesses the property of semi-monotonicity.


Theorem 3.6 (Existence of extensions). Every default theory has a constrained extension. Consider the default theory of Example 2.6, which illustrates that DL does not guarantee the existence of extensions:

Example 3.7. The default theory ({ : A / ¬A }, ∅) has one constrained extension: (Th(∅), Th(∅)). The default : A / ¬A is not applicable since, according to Definition 3.1, its justification and its consequent must be jointly consistent.

Theorem 3.8 (Semi-monotonicity). Let (D, W) be a default theory and D' a set of defaults such that D ⊆ D'. If (E, C) is a constrained extension of (D, W), then there is a constrained extension (E', C') of (D', W) such that E ⊆ E' and C ⊆ C'.

That is, ConDL is monotonic with respect to the defaults. Consider again Example 2.7.

Example 3.9.
• The default theory ({ A : B ∧ ¬C / B }, {A}) has one constrained extension: (Th({A, B}), Th({A, B ∧ ¬C})).
• The default theory ({ A : B ∧ ¬C / B , A : C / C }, {A}) has two constrained extensions: (Th({A, B}), Th({A, B ∧ ¬C})) and (Th({A, C}), Th({A, C})).

Semi-monotonicity and compactness imply that constrained extensions are constructible in a truly iterative way by applying one applicable default after another. Then, in the case of a finite number of closed defaults, the consistency of each justification can be checked with respect to the previous partial set of constraints induced by the facts and all previously-applied defaults. In the infinite case, one has to employ some sort of diagonalization method. Another property which holds for constrained extensions we refer to as weak orthogonality. This is analogous to the property of orthogonality in DL, which holds for normal default theories, and states that distinct extensions are mutually contradictory.

Theorem 3.10 (Weak orthogonality). Let (D, W) be a default theory. If (E, C)


and (E', C') are distinct constrained extensions of (D, W), then C ∪ C' is inconsistent. That is, given two different constrained extensions, the constraints of the extensions are mutually contradictory. Note that weak orthogonality is inapplicable for DL, since no mention is made of the set of constraints used in constructing an extension.

Example 3.11. The default theory ({ : ¬B / C , : ¬C / B }, ∅) has two constrained extensions: (Th({C}), Th({C, ¬B})) and (Th({B}), Th({B, ¬C})). The two constrained extensions result from incompatible sets of constraints. Thus, and in contrast to standard extensions, which hide their underlying consistency assumptions and therefore lack transparency, constrained extensions exhibit their consistency assumptions.

3.2. Constrained versus standard default logic

We discuss here the relationship between DL and ConDL. In DL, the properties discussed in the previous subsection (for example, semi-monotonicity and existence of extensions) actually hold for normal default theories. Thus one view of ConDL is as a variant of DL that extends the properties found in normal default theories to general default theories. The tight relationship between DL and ConDL in the case of normal default theories is shown by the fact that the approaches coincide in this case. 6

Theorem 3.12. Let (D, W) be a normal default theory and E a set of sentences. Then, E is a standard extension of (D, W) iff (E, E) is a constrained extension of (D, W).

6 We refer to extensions in Reiter's approach as standard extensions, in order to distinguish them from others.

({:BC ' D:--nB :--aC :FD/ ) 'E' ,0 has one standard extension: Th({C, D}), but three constrained extensions: (1) (Th({C, F}), Th({B, C, -TD, F})), 6We refer to extensionsin Reiter's approach as standard extensions, in order to distinguishthem from others.


(2) (Th({D, E}), Th({¬B, ¬C, D, E})),
(3) (Th({E, F}), Th({¬C, ¬D, E, F})).

Hence, ConDL is neither stronger nor weaker than its classical counterpart. In particular, the only standard extension is neither a superset nor a subset of the extensions obtained in ConDL. We can describe the relationship between DL and ConDL by using the justifications of the generating defaults, that is, for a standard extension E,

CE = { β | α : β / ω ∈ D, α ∈ E, ¬β ∉ E } .

Theorem 3.14. Let (D, W) be a default theory and let E be a standard extension of (D, W). If E ∪ CE is consistent, then (E, Th(E ∪ CE)) is a constrained extension of (D, W).

Observe that the converse of the above theorem does not hold since DL does not guarantee the existence of extensions. However, if the extensions coincide we have:

Theorem 3.15. Let (D, W) be a default theory. If (E, C) is a constrained extension of (D, W) and E is a standard extension of (D, W), then C ⊆ Th(E ∪ CE).

Finally, it is interesting to observe how the specification of a problem in ConDL can be represented in DL. 7 The idea is to shift the information given by the constraints C in a constrained extension (E, C) to the justifications of the defaults. Then, each such justification is supplied with an additional but fixed consistency condition representing the constraints in C. Again, this is done by using the set of generating defaults 8 of a constrained extension:

Theorem 3.16. Let (D, W) be a default theory and E and C sets of formulas. Let

ω̂ = ⋀ { Conseq(δ) ∧ Justif(δ) | δ ∈ GD((E, C), D) }

with finite GD((E, C), D), and let

D' = { α : β ∧ ω ∧ ω̂ / ω | α : β / ω ∈ D } .

Then, if (E, C) is a constrained extension of (D, W), then E is a standard extension of (D', W).

7 We would like to thank the anonymous referees for bringing this relationship to our attention.
8 See Definition B.1 for a formal definition of the set of generating defaults.
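As a worked illustration (ours, not from the paper, using the broken-arms theory of Example 3.2): for the first constrained extension, the only generating default is : A ∧ B / A, so

```latex
\[
\hat{\omega} \;=\; \mathit{Conseq}\!\left(\tfrac{:\,A\wedge B}{A}\right)\wedge
                   \mathit{Justif}\!\left(\tfrac{:\,A\wedge B}{A}\right)
           \;=\; A \wedge (A \wedge B),
\qquad
D' \;=\; \left\{\; \frac{:\,(A\wedge B)\wedge A\wedge\hat{\omega}}{A},\;\;
                   \frac{:\,(C\wedge D)\wedge C\wedge\hat{\omega}}{C} \;\right\}.
\]
```

With W = {¬B ∨ ¬D}, the justification of the second translated default entails B ∧ D and is therefore refuted by W, so only the first default applies and (D', W) has the single standard extension Th({A, ¬B ∨ ¬D}), which is exactly the extension E of that constrained extension.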


3.3. The focused models semantics

In order to characterise constrained extensions semantically, we define a preference relation similar to that given in [10]. Instead of sets of models, we consider pairs (Π, Π̂) of sets of models. These pairs admit more structure; we refer to them as focused models structures. The intuition behind a focused models structure is as follows. If we view the justifications of defaults as "working assumptions", then the consistency condition of DL is no longer adequate. A primary difficulty that arises is non-commitment to assumptions (cf. Examples 2.11 and 2.12). Semantically, we also need to consider those models satisfying our implicit assumptions, given in the totality of the justifications of the applied defaults. Since we do not require that the justifications be valid, there may exist models that falsify them. Consequently, we impose more structure on the sets of models under consideration, viewing the second component Π̂, which is a subset of Π, as our focused set of models. We illustrate the corresponding structure of focused models structures in Fig. 2. The set of models Π represents the set of models in which the set of consequents of applied defaults is true, while Π̂ comprises those models in which additionally the justifications are true.

In order to illustrate this, consider again the default theory ({ A : B / C }, {A}). Etherington [10] characterises the standard extension Th({A, C}) by means of a "flat" set of models

Π = { π | π ⊨ A ∧ C } .

Hence, for example, if B and C are logically independent then there are as many models satisfying our "working assumption" B as there are models falsifying it. The approach taken by the focused models semantics yields a pair

(Π, Π̂) = ({ π | π ⊨ A ∧ C }, { π | π ⊨ A ∧ B ∧ C })

that corresponds to a structured set of models including a focus which additionally satisfies our implicit assumptions. This is the set of models satisfying A, C, and B.

Fig. 2. A focused models structure (Π, Π̂).
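For this small theory the two components can be enumerated directly; the following sketch (ours, assuming just the three atoms A, B and C) lists the models of the extension and of the constraints:

```python
from itertools import product

atoms = ["A", "B", "C"]

def mods(constraint):
    # all truth assignments over the atoms satisfying the given formula
    return [m for m in (dict(zip(atoms, v)) for v in product([False, True], repeat=len(atoms)))
            if eval(constraint, {}, m)]

# focused models structure for ({ A : B / C }, {A}) with extension Th({A, C})
Pi     = mods("A and C")         # models of the extension E
Pi_hat = mods("A and B and C")   # focus: models of the constraints C
print(len(Pi), len(Pi_hat))      # 2 1  -> the focus is a proper subset of Pi
```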


In other words, we admit more structured sets of models by focusing on those models that satisfy our assumptions. Semantically, a default δ prefers a focused models structure (Π1, Π̂1) to another (Π2, Π̂2) if its prerequisite α is valid in Π2 and the conjunction of its justification and consequent, β ∧ ω, is satisfiable in some focused model in Π̂2, and lastly if Π1 and Π̂1 entail the consequent ω (in addition to the previous requirements). Formally, we achieve all this by defining an order relating the consistency of the justifications with their satisfiability in the focused models.

Definition 3.17. Let δ = α : β / ω, let Π be a set of models, and (Π1, Π̂1), (Π2, Π̂2) ∈ 2^Π × 2^Π. The relation corresponding to δ, >δ, is defined as follows: (Π1, Π̂1) >δ (Π2, Π̂2) iff
(1) for every π ∈ Π2 we have π ⊨ α,
(2) there is π ∈ Π̂2 such that π ⊨ β ∧ ω,
(3) Π1 = { π ∈ Π2 | π ⊨ ω },
(4) Π̂1 = { π ∈ Π̂2 | π ⊨ β ∧ ω }.

The induced order >D is defined as the transitive closure of all orders >δ such that δ ∈ D:

Definition 3.18. Let D be a set of defaults, Π a set of models, and (Π1, Π̂1), (Π2, Π̂2) ∈ 2^Π × 2^Π. The relation corresponding to D, >D, is the transitive closure of the union of the relations >δ for every δ ∈ D: (Π1, Π̂1) >D (Π2, Π̂2) iff
(1) there exists δ ∈ D such that (Π1, Π̂1) >δ (Π2, Π̂2), or else
(2) there is (Π3, Π̂3) ∈ 2^Π × 2^Π such that (Π1, Π̂1) >D (Π3, Π̂3) and (Π3, Π̂3) >D (Π2, Π̂2).

We will refer to the >D-maximal sets above (MOD(W), MOD(W)) as the preferred focused models structures for a default theory (D, W). Given a preferred focused models structure (Π, Π̂), an extension is formed by all formulas that are valid in Π, whereas the focused models Π̂ express the constraints surrounding the extension. Accordingly, we have the following correctness and completeness theorem establishing the correspondence between constrained extensions and preferred focused models structures for a default theory (D, W).

Theorem 3.19. Let (D, W) be a default theory. Let (Π, Π̂) be a pair of sets of models and E and C deductively closed sets of formulas such that Π = { π | π ⊨ E } and Π̂ = { π | π ⊨ C }. Then, (E, C) is a constrained extension of (D, W) iff (Π, Π̂) is a >D-maximal element above (MOD(W), MOD(W)).

As in [10], we obtain a simpler semantical characterisation in the case of normal


default theories. 9 The larger set of models Π collapses to the focused models Π̂, since normal defaults require their justifications to be valid after they have been shown to be satisfiable. Compared with [10], we have strengthened the notion of consistency in extensions, by requiring that all justifications and consequents be jointly satisfiable by the focused models. In particular, we do not require a stability condition (cf. Definition 2.5). Technically, this is due to the fact that we are dealing with a semi-monotonic default logic. From the viewpoint of the focused models semantics, however, we ensure the continued consistency of the justifications of applied defaults by allowing only those defaults to be applied that are compatible with the established focus. 10

Fig. 3 illustrates why we obtain two constrained extensions in Example 3.2. Once we have "applied" one of the defaults, the other default is no longer applicable: the focus does not satisfy its justification. Applying one of the defaults does not just require the validity of its consequent; it also makes us focus on its underlying assumption (i.e. its justification) in order to preserve its satisfiability. For example, adding A under the assumption that A ∧ B is consistent (by applying the default : A ∧ B / A) prohibits us from assuming that C ∧ D is consistent together with the knowledge that ¬B ∨ ¬D, and vice versa.

The above example shows how focused models structures semantically account for commitment to assumptions. In view of the fact that the focused models semantics also captures Brewka's cumulative default logic (cf. Section 5.2), and the fact that Brewka's variant commits to assumptions as well, we may

Fig. 3. Commitment to assumptions in ConDL.

9 Recall Theorem 3.12.
10 We will see in Section 5.2 that the focus plays a fundamental role in the case of normal assertional default theories in order to capture semantically the notion of cumulativity.


regard the focused models semantics as a general approach to commitment to assumptions in default logics. In addition, the semantics supplies us with several insights into the properties of ConDL. The existence of focused models structures, and so the existence of constrained extensions, is guaranteed since Definition 3.17 ensures that Π̂ never becomes the empty set. The same definition also takes care of semi-monotonicity, since there has to exist a focused model satisfying the prerequisite, consequent, and justification of an added default before it is applied. Weak orthogonality is mirrored by the fact that there never exists a focused model which is common to two different preferred sets of focused models. We will see in Section 5.2 that the focused models semantics also captures the property of cumulativity.

An alternative semantics for ConDL has been proposed in [2]. The basic idea is to employ Kripke structures such that a focused models structure (Π, Π̂) corresponds to a set M of Kripke structures, where Π is captured by the actual worlds in M and Π̂ by the accessible worlds in M. Then, a "preferred" set of Kripke structures M characterises a constrained extension (E, C) such that E = { α non-modal | M ⊨ α } and C = { α non-modal | M ⊨ □α }.

4. Prerequisite-free approaches

This section discusses variants of DL and ConDL, wherein defaults are replaced by prerequisite-free counterparts. This transformation allows for reasoning by cases and default inferences involving the contrapositive. For DL, the resultant system differs little formally from the original system; the major difference is that one no longer needs to prove the antecedent. This variant is discussed in Section 4.1. Surprisingly, the prerequisite-free variant of ConDL has quite different formal properties from regular ConDL. First, the resultant system is cumulative. Second, in a limited but nontrivial manner, one can now reason about a set of defaults to determine, for example, whether the applicability condition of one default is subsumed by others. This variant is discussed in Sections 4.2 and 4.3. For clarity, we assume throughout this section that normal and semi-normal defaults constitute distinct classes of defaults. This is to say, for instance, that a semi-normal default is one where the consequent is a logical consequence of the justification but not vice versa.

4.1. Prerequisite-free default logic

This subsection deals with the translation of normal and semi-normal defaults into their prerequisite-free counterparts in DL. We have noted that in default reasoning it would be useful to be able to reason by cases or reason by modus tollens, unless such properties are explicitly blocked. A rationale is that a default conditional should retain properties of the classical conditional, unless explicitly blocked. The emphasis then shifts to the conditional itself, rather than a rule involving a prerequisite and justification for a conclusion.


Consider first normal defaults. The default α : β / β is read as "if α is provably true and β is consistent, then β is true". The reading we propose is "if it is consistent that α ⊃ β then conclude that α ⊃ β". Thus α ⊃ β is concluded (when consistent) regardless of the provability of α. "Birds fly" then is expressed propositionally as

: B ⊃ F / B ⊃ F .

To be precise, we transform normal defaults into prerequisite-free normal defaults in regular DL in the following way:

α : β / β   becomes   : (α ⊃ β) / (α ⊃ β) .
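As a small illustration (ours, not from the paper), this translation of normal defaults can be written as a purely syntactic operation on rule triples; formulas are plain strings and "->" below is just notation for material implication within those strings.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Default:
    prereq: str    # empty string for a prerequisite-free default
    justif: str
    conseq: str

def drop_prerequisite(d: Default) -> Default:
    # translate a normal default  alpha : beta / beta  into the
    # prerequisite-free default  : (alpha -> beta) / (alpha -> beta)
    conditional = "(" + d.prereq + ") -> (" + d.justif + ")"
    return Default(prereq="", justif=conditional, conseq=conditional)

# "Birds fly":  B : F / F   becomes   : (B -> F) / (B -> F)
print(drop_prerequisite(Default("B", "F", "F")))
```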

The resulting subsystem of DL, namely prerequisite-free default logic, is referred to as PfDL. As a first example, suppose that we have that, implausibly, A's are normally B's and A's are normally not B's:

Example 4.1.
• The default theory ({ A : B / B , A : ¬B / ¬B }, ∅) has one extension: Th(∅).
• The default theory ({ : A ⊃ B / A ⊃ B , : A ⊃ ¬B / A ⊃ ¬B }, ∅) has one extension: Th({¬A}).

Hence in the translated theory we obtain a reductio result: if A's are normally B's and A's are also normally not B's, then by default we can conclude that ¬A. Consider next what this translation means for reasoning by cases. For Example 2.14 we obtain:

Example 4.2.
• The default theory ({ Q : P / P , V : P / P }, {Q ∨ V}) has one extension: Th({Q ∨ V}).
• The default theory ({ : Q ⊃ P / Q ⊃ P , : V ⊃ P / V ⊃ P }, {Q ∨ V}) has one extension: Th({Q ∨ V, P}).

Thus if Quakers are normally pacifists, and vegetarians are normally pacifists,


then if someone is either a Quaker or vegetarian then, since neither Q nor V is provable, in DL neither rule is applicable and we conclude nothing about P. However, intuitively, we know that one of Q or V must be true, and that P follows by default from either Q or V; so arguably it should follow by default that P. This indeed is the case for the translated defaults. A similar argument applies to the contrapositive. Given that birds normally fly, then if something is known to not fly then it seems reasonable to conclude by default that this thing is not a bird. Again, we can accomplish this if the conditional itself is concluded whenever consistent.

Example 4.3.
• The default theory ({ B : F / F }, {¬F}) has one extension: Th({¬F}).
• The default theory ({ : B ⊃ F / B ⊃ F }, {¬F}) has one extension: Th({¬B, ¬F}).

There are however times when we want to block the contrapositive; this issue is addressed using semi-normal defaults. The primary use of semi-normal defaults is to specify that one default takes precedence over others [19]. For example, consider the statements "university students are typically adults", "adults are typically employed", and "students are typically not employed".

Example 4.4. The default theory

({ S : A / A , A : E / E , S : ¬E / ¬E }, {S})

has two extensions: Th({A, ¬E, S}) and Th({A, E, S}). The first extension is obtained by applying the first and last defaults; the second by applying the first two defaults. Intuitively we want only one extension, where ¬E is true; the more specific default that students are not employed should take precedence over the less specific default that adults are employed. One possible fix is to assert that adults are not normally students (viz. A : ¬S / ¬S) and then replace the second default by A ∧ ¬S : E / E. This results in a normal default theory, but requires that we be able to assume that adults are not normally students. In addition, it forces us to conclude that an adult is a non-student, unless we can prove otherwise; it may well be that we would want to remain agnostic about the


fact of a person's studenthood. If we are unwilling or unable to make such an assumption, semi-normal defaults appear to be required.

Example 4.4 (continued). The default theory

({ S : A / A , A : E ∧ ¬S / E , S : ¬E / ¬E }, {S})

has one extension: Th({A, S, ¬E}).

The same considerations apply for PfDL. Since PfDL extensions are generalisations of extensions in regular DL (see Theorem 4.5 below), the problem of controlling interactions is more acute. Consider the preceding example, translated into PfDL:

Example 4.4 (continued). The default theory

({ : S ⊃ A / S ⊃ A , : A ⊃ E / A ⊃ E , : S ⊃ ¬E / S ⊃ ¬E }, {S})

has three extensions: Th({A, S, ¬E}), Th({A, S, E}), and Th({¬A, S, ¬E}).

The third extension results from applying the third and second defaults. We would like to block the transitivities implicit in the second and third extensions. The approach here is to extend that of Reiter and Criscuolo [19] to prerequisitefree defaults. There are two possible translations: replace

~'/3 A3' /3

with

:(~ D/3)A 3' (a D/3 )

or

replace

~:/3 A3' /3

with

:~D(/3 A3') a D/3

The first possibility carries 3' as a "global" consistency condition, while the second uses 3' as a "local" consistency condition, under the assumption of a. However the second alternative fails to block the third extension in the example above. In addition, this alternative cannot be used to satisfactorily block the contrapositive (see below). Hence we adopt the first alternative. That is, we transform semi-normal defaults into prerequisite-free normal defaults in DL in the following way.

a:/3^~ /3

:(~D/3) A~ ~D/3

In the preceding example, the default "adults are typically employed" is not applicable for students and so we have:

192

J.P. Delgrande et al. / Artificial Intelligence 70 (1994) 167-237

Example 4.4 (continued). The default theory

SPA

'

(A D E)

' - S S ~ - - E J '{S}

has one extension: Th({A, S, --aE}). Blocking the contrapositive of a conditional, :fl~o~,z~ is a special case of this where 3' is w. For example, the contrapositive of the default, "A's are typically B ' s " is blocked by writing the rule as :(A=B)ADB If we are given W = {TB}, then clearly the default is inapplicable and so we cannot conclude A D B or, consequently, --qA. Note that the default :(A=R)^A=BB is equivalent to 7-5~. : B This last form was suggested in [4], and so the above translation may be regarded as a generalisation of Brewka's. The following theorem shows that this translation gives a system in which more conclusions are forthcoming than in the standard approach. ^

B

.

Theorem 4.5. Let (D, W) be a semi-normal default theory and let (D', W) be the

theory where ~: (o ^ v) E D iff :~ ~ ) ^ v E D'. If E is an extension of (D, W) then there is an extension of (D', W), E', such that E C E'. However we may also get more extensions from (D', W).

Example 4.6. • The default theory

({A:BACB

, B'-~A^D}TA

, {A})

has one extension: Th({A, B}). • The default theory

({:(ADB)^C (ADB)

:(BD-nA)^D} '

(BDTA)

) ,{A}

has two extensions: Th({A, B}), Th({A, --aB}). Of course, the same result holds for normal default theories. As an example, consider the previous one, where the last conjunct in the justifications of each default is dropped. At first sight, the above distinction between normal and semi-normal seems to be redundant. However, notice that the preceding translations are syntax-dependent. This is why we distinguish between normal and semi-normal defaults. 11 As an example, consider the default A:BB ^ B This is clearly a normal default. 1~ Recall that according to the definition given at start of this section, normal and semi-normal defaults are mutually exclusive.

J.P. Delgrande et al. / Artificial Intelligence 70 (1994) 167-237

193

However, one could be tempted to transform this default according to the recipe given for semi-normal defaults. However, this yields a different result than transforming the default A~Baccording to the recipe given for normal defaults, as can be easily verified.12 As regards general defaults, we follow [10, 13] in arguing that it is reasonable to replace general defaults by semi-normal ones by conjoining the consequent to the justification. In all, this amounts to the following translation in the case of general defaults in Reiter's DL:

~:~

~:/3^~

:(~ ~ ) ^/3

co

to

c~oJ

Finally, let us summarize the differences between Reiter's DE and its fragment PIDL. In general, prerequisite-free normal default theories enjoy the same advantages as standard normal default theories, and general prerequisite-free default theories suffer from the same drawbacks as standard default theories. Apart from cumulativity, both types of normal default theories have some of the difficulties discussed in Section 2.2, while both types of general default theories have all of these difficulties. As we will see in Section 4.2, cumulativity holds for normal default theories in PfDL 13 while this is not the case for normal default theories in DE, as we have seen in Example 2.9. However, transforming defaults into their prerequisite-free counterparts eliminates one source of the failure of cumulativity, that of changing the way prerequisites of defaults are derivable (cf. Example 2.9). However, the transformation from DL into PfDL does not avoid the other source for this failure, that of changing the way justifications are refuted (cf. Example 2.10). Consider the prerequisite-free counterparts of the default theories given in Example 2.10, which shows that PfDI_ is not cumulative.

Example 4.7. • The default theory

(A--D--/~ ' A D -qB '

'

has one extension: T h ( { A , B, C}). Consequently the extension contains C. • The default theory ({:ADBADB' : A D _ q B ' : C C } ' { A ' C ' B D C } ) has two extensions: T h ( { A , B, C}) and T h ( { A , ~ B , C}). a2We thank one of the referees for this example. 13This is obtained as a corollaryof Theorem 3.12 and Theorem 4.10. The cumulativityof prerequisitefree normal default theories has independently been shown in [8].

194

J.P. Delgrande et al. / Artificial Intelligence 70 (1994) 167-237

As in Example 2.10, the second default becomes applicable after adding C to the facts, resulting in an additional extension. For prerequisite-free theories, we do have reasoning by modus tollens and reasoning by cases. The definition for constructing an extension is marginally simpler, since we do not have a prerequisite condition to prove. However, it still relies on a fixed-point construction. Without a fixed-point construction, we could apply one default after another by checking the consistency of each default with respect to the extension obtained so far. This amounts to the formal property of semi-monotonicity, which does not hold for PfDL, as a prerequisite-free variant of Example 2.7 demonstrates.

Example 4.8. • The default theory

({:(A D B) D BA ~ C } ' has one extension: Th({A, B}). • The default theory

A

8

' 5 - 5 - C J' {A)

has one extension: Th({A, C}).

4.2. Prerequisite-free constrained default logic This subsection discusses the variant of DL obtained by joining ConDL with PfDL. We refer to this variant as PfConDL. Clearly we obtain all of the properties discussed in Section 3, as well as those covered in the previous subsection. In addition, we can give a simpler specification of an extension in the combined system. Also the resultant system is cumulative. Lastly, in the next subsection, we show how one can now reason about a set of defaults. The fixed-point definition of an extension for the prerequisite-free version of ConDL, PfConDL, is clearly just that of Definition 3.1, but without the necessity of ensuring that the prerequisite of an applied default is in an extension. The specification corresponding to the pseudo-iterative characterisation is a modification of Theorem 2.2. In this definition, again an extension is composed of two sets of formulas: E, the set of formulas assumed by default to be true, and C, the superset of constraints. Again, as in ConDL, rules and extensions are represented uniformly, in that both consist of a consistency condition along with conclusions based on the consistency conditions.

Theorem 4.9. Let (D, W) be a prerequisite-free default theory. Then (E, C) is a

195

J.P. Delgrande et al. / Artificial Intelligence 70 (1994) 167-237

constrained extension of (D, W) iff there exists some D' C D such that: E = Th(W U Conseq(D')), C = Th(W U Conseq(D') U Justif(D')), and C V ± , but for every D ' C D", C U (Conseq(D")} U {Justif(D")} ~ I . The set C in an extension is again used to accumulate the justifications from the applied defaults. There is clearly no circular reference made in the definition to either E or C. This definition leads easily and immediately to a procedure for forming an extension. Such a procedure is not generally computable though, since it still appeals to nonprovability. It is computable, however, in some restricted cases, notably for propositional theories and in inheritance hierarchies. Moreover, a procedure for inheritance reasoning would obviously be substantially simpler than that given in [19]. For this variant, clearly the broken arms examle is handled in the same manner as in ConDL. Also, PfConDL guarantees the existence of extensions and is semi-monotonic. Again, we can reason by cases and with the contrapositive (unless explicitly blocked). Transitive defaults are blocked using semi-normal defaults if necessary. On the other hand, arguably the system isn't so strong that undesirable results are forthcoming. Also, importantly, we obtain that the resulting system is cumulative: Theorem 4.10. Let (D, W) be a prerequisite-free default theory and let a C E' for all constrained extensions (E', C') of (D, W). Then, (E, C) is a constrained extension of (D, W) iff (E, C) is a constrained extension of (D, W U {a}). Consequently this variant fully address the set of issues identified in Section 2. It immediately follows from Theorem 3.12 that PfDL and PfConDL coincide for normal default theories, and so in this case PfDL also addresses the difficulties discussed in Section 2. Notably, PfDL is cumulative for normal default theories. Lastly the focused models semantics clearly carries over to this variant, with the simplification that again we don't have to worry about the provability of the prerequisite. Clearly PfConDL can be obtained from ConDL according to the recipes given in the preceding subsection for DL. However the translation proposed for semi-normal defaults in DL extends to general ones in ConDL since semi-normal and general defaults are equivalent in ConDL.

4.3. Metatheoretic considerations A consequence of this last variant is that we can now reason about a set of defaults in a straightforward manner. So, given a default theory in PfConDL, we can determine whether there exists an extension for which a default "~ is applicable: If W U {fl} U {to} is consistent, then there must exist such an to

196

J.P. Delgrande et al. / Artificial Intelligence 70 (1994) 167-237

extension) 4 Similarly, if we have a partial extension, say E', we can determine also whether a default is applicable in the further construction of that extension. Consequently, we do not need to compute extensions in order to determine whether a default is applicable. This clearly is not the case in DL or GonDk, since to determine whether a default is applicable (for some extension), we have to effectively compute the extensions of the theory. This in turn is a result of the fact that the fixed-point and pseudo-iterative characterisations of an extension necessarily refer to the final extension. A second consequence of this variant is that, given a default theory in PfConDk, we can determine in the metatheory whether a default ~ = ~ to- is " s u b s u m e d " by other defaults. That is we can determine that, if 6 has been applied in the formation of an extension, whether other defaults must necessarily have been applied in that extension. Thus as a simple example, any time that : Z A B may be used in forming an extension, so may ~:A - and ~-. However, the converse of this last relation does not necessarily hold: :Z and :~z may individually be applied in forming extensions, but ~(A ^^ ~~A) A ) certainly cannot. It is also possible that one may be able to completely determine such subsumption relations in DL. For example, if W ~ (9lI ~ o~2, W ~ / 3 2 ~ / 3 1 , and W ~ o22 3~Ol, then the default -t:~l can be applied any time that "~:~ can) 5 °21 w2 H o w e v e r it is unclear whether this is the only relation that is required for determining subsumption relations in D l or, if not, how one might determine a complete set of such relations. These various notions are made more precise as follows. For simplicity we restrict ourselves to semi-normal defaults, although the extension to general prerequisite-free defaults is straightforwa rd .16 A default 6 = ~ will be identified with its justification/3. The goal is to determine, for default with justification/3, what other defaults are necessarily applicable, given that the default represented b y / 3 is. We use the notation applic(/3) to assert that a default with justification/3 is applicable. Definition 4.11. A default 6 = ~ is applicable, written applic( /3 ), if its justification is applicable./3 is applicable with respect to a set of formulas S if S U {/3 } is consistent.

Given a set of first-order formulas S, the set of applicable defaults is the subset of D that may be applied to S. We obtain: Theorem 4.12.

(1) 1- applic(ol A [3) D applic(o~) A applic(/3). 14 Since consistency is undecidable in first-order logic, this is not to say that such a determination is in any sense "easy". 15 We thank one of the referees for pointing this out to us. 16 Recall that general and semi-normal defaults are equivalent in GonDL; hence, also in PfGonDL.

J.P. Delgrande et al. / Artificial Intelligence 70 (1994) 167-237 (2) (3) (4) (5) (6) (7)

197

~- applic(a v [3) =- applic(a) v applic([3). ~- applic(a ) - applic(--7--7a ). ~- "Tapplic(a) D applic(Ta). ~- a D applic(a).

From ~ a D [3 infer ~ applic(a) D applic([3). From ~-Ta infer ~-Tapplic(a).

We obtain two straightforward characterisation results, the first with classical implication, the second with the modal logic K T [6].

Theorem 4.13. (1) ~- applic(a) Dapplic(fl) iff ~ a D fl. (2) ~ applic(a) iff ~ rT ~a. For example, the default :XA~^ if5 B 2)CC can be employed in the specification of an extension iff one of : A D C or ~: B D C can. That is, from T h e o r e m 4.12 we can show that

~- applic(A ^ B D C) =- applic(A D C) v applic(B D C). This is clearly not to say that the corresponding default theories are equivalent: if W = {A, 7 B } , then even though we can apply :~A ^-B~D Ce , we can conclude nothing of interest by default. However applying ~A: D C allows us to conclude C. What this does allow is the development of strategies (again, in the metatheory) for refining or modifying default theories. For example, the default "Quakers are ~P^^D) o) • This in turn is normally pacifist and devout" could be represented as :oq ~z ~e equivalent to :CQze)^~Ozo) ~o~e)^~ozo)" It may be more useful (given domain-specific considerations) to replace the default by the strictly stronger defaults :_9__2£ and QDP :QDD QDD

"

We have not pursued this line of enquiry, but note that it may prove to be a useful means of reasoning about defaults in the metatheory. This means too that in lafGonD/ we can now regard default reasoning as being composed of two distinct and disjoint parts. First, we have the notion of using the defaults to construct an extension, or acceptable set of beliefs that hold by default. Now in addition we have a means of reasoning about a set of formulas and a set of defaults to determine, for example, whether or not a particular default may be replaced by others.

5. Relationship to other approaches This section deals with two well-known variants of DL as well as the Theorist system [16]. The approaches to DI_ that we discuss are justified default logic or simply JDL [13] and cumulative default logic or GumDL [3]. Historically, Lukaszewicz's approach can be regarded as an ancestor of Brewka's. Also,

198

J.P. Delgrande et al. / Artificial Intelligence 70 (1994) 167-2.37

OumDL shares most of the properties of dDL. That is, both variants enjoy the property of semi-monotonicity and guarantee the existence of extensions. In addition, OurnDL commits to assumptions and is cumulative. However, they differ in the way they enforce their results. Lukaszewicz attaches sets of formulas to extensions whereas Brewka labels formulas with sets of formulas. Thus, both employ constraints but differ basically in the location they put them. In some sense, OonDL can be regarded as an amalgamation of these approaches. Hence, it is well suited as an instrument for comparing the descendents of DL. See Fig. 4 for an illustration of this evolution. In order to facilitate the treatment of the various approaches, we concentrate in this section on how far each approach commits to assumptions. The following default theory will serve as an indicator of how far each variant of DL commits to assumptions. We will use the term "commitment to assumptions" in a broader sense so that it also subsumes the notion of semi-monotonicity which may be interpreted as "weak commitment to assumptions". Consider the following example [2]. Example 5.1. The default theory ({.:B,:-nBD ' "-nC A

,0)

has one standard extension: T h ( { C , D } ) , but three constrained extensions: (Th({C}), Th({C, B})), (Th({D}), Th({D,-~B})), and (Th({E}), Th({E, ~ C ^

The above default theory, which combines several potential conflicts, will be used to reveal the degree of commitment to assumptions for each considered default logic. As we have observed in Example 2.11, the first two defaults indicate

DL [Reiter,80] JDL [Lukaszewicz, 88]

CumDL

/

[Brewka, 91]

ConDL Fig.4. FromDLtowardsConDL.

J.P. Delgrande et al. / Artificial Intelligence 70 (1994) 167-237

199

whether a system "strongly commits to assumptions", i.e. whether it detects inconsistencies among the set of justifications. The third default indicates whether or not the considered variant is semi-monotonic; or in other words whether it "weakly commits to assumptions". As we have already seen, and as the preceding example illustrates, DL does not commit to assumptions, nor does it weakly commit to assumptions. In contrast, GonDl_ commits to assumptions and we obtain three extensions. 5.1. Justified default logic Lukaszewicz [13] modifies DI_ in order to guarantee the existence of extensions and semi-monotonicity for general default theories. He attaches constraints to extensions in order to strengthen the applicability condition of defaults. Formally, a justified extension 17 is defined as follows. Definition 5.2 (Lukaszewicz [13]). Let (D, W) be a default theory. For any pair of sets of formulas (S, T) let F(S, T) be the pair of smallest sets of formulas S' and T' such that (1) W C S ' , (2) Th(S') -- S', (3) for any ~ ~ D , if a E S ' and S U {to} U {~/} ~ / ± whenever V3, E T U {/3} then to E S' and/3 E T'. A set of formulas E is a justified extension of (D, W) with respect to a set of formulas J iff F(E, J) = (E, J). We observe two major differences from Definition 3.1. First, Lukaszewicz employs a weaker consistency check than ConDL. A default ~,~ applies if all justifications of other applying defaults are consistent with the considered extension E and to, and if additionally to and/3 are consistent with E (observe that omitting to in the last part of the condition meets exactly the consistency requirement of DL). Second, the set of constraints J merely consists of the justifications of applied defaults. The constraints have neither to be deductively closed nor consistent. All this prevents JDL from committing to assumptions, as is shown below. Example 5.1 (continued). The default theory ({:Bc,:-IBD ' : ~ C ^ E - 1 D } 'fl) has two justified extensions: ( T h ( { C , D ) ) , ~D}).

~7Lukaszewicz calls his extensions modified extensions.

{B,--nB}) and ( T h ( { E } ) ,

{mCA

J.P. Delgrande et al. / ArtificialIntelligence 70 (1994) 167-237

200

As in DL, the first justified extension is generated by the defaults f i and : ~ 8 and so the extension is justified by an inconsistent set of constraints. The second justified extension stems from the fact that JDL is semi-monotonic. Assume we have applied the default :-~C^~D., in order to apply the default f i say, its consequent C must be consistent with 7 C ^ 7 D . Obviously, this is not the case and the default is inapplicable. For the same reason the default ~ is applicable. While Lukaszewicz avoids inconsistencies between justifications and consequents of individual defaults, he neglects inconsistencies among the justifications. So, even though the set of constraints, J, is consistent, it might be inconsistent together with the extension, E, or even the set of premises, W. Consider again the broken arms example, Example 2.12.

Example 5.3.

The default theory

( { : A ^ B A , :CAD.}c

,{TBv-TD})

has one justified extension: (Th({~B v ~D, A, C}), {A/x B, C ^ D}). Obviously, the set of constraints {A/x B, C ^ D} is inconsistent with the set of facts {-7B v -TD}. However, we have the following relationship between the two approaches in the case of no such inconsistencies. Theorem 5.4. Let (D, W) be a default theory and E a justified extension of (D, W) with respect to J. If E U J is consistent, then (E, Th(E U J)) is a constrained extension of (D, W). Thus, if the set of justifications J is consistent we obtain the same extension in JDL and ConDL. At first sight, Example 5.1 suggests that the stronger the consistency check of each default logic the more extensions are obtained. However, this is not the case; consider the default theory given in Example 2.11.

Example 5.5. The default theory ({'B :~B :-7C :TD} 0) C'

D '

E '

F

'

has four justified extensions: (Th({C, D}), {B, -TB}), (Th({C, F}), {B,-TD}), (Th(D, E}), {TB, ~C}), and (Th({E, F}), {7C, ~D}). In Section 3.2, we used this example to show that ConDL is neither stronger nor weaker than DL. Also, for this example we obtained one standard and three constrained extensions. However, we obtain four justified extensions for this example. The reason for this phenomenon is as follows. Since JDL is semimonotonic (or "weakly commits to assumptions"), it allows for the application of

J.P. Delgrande et al. / Artificial Intelligence 70 (1994) 167-237

201

each default. However, since it discards inconsistencies among the justifications of applying defaults it is not strong enough to exclude the combination of ~-~ and :~B. In other words, although "weak commitment" guarantees the consistency of the justifications of each default, it is not strong enough to prevent inconsistent sets of justifications. JDL allows for the application of more defaults than ConDL, as is shown next. Theorem 5.6. Let (D, W) be a default theory and (E, C) be a constrained extension of (D, W). Then, there is a justified extension (E', J') of (D, W) such that

E C E ' and CC_Th(E'UJ'). Semantically, Lukaszewicz characterises justified extensions by means of pairs (H, J), w h e r e / 7 is a set of first-order interpretations and J is a set of formulas. Then, such a "preferred" pair characterises a justified extension E with respect to J i f f / / i s the set of all models of E. Clearly, the occurrence of a set of formulas inside a semantical structure is unfortunate. However, this is required since justified extensions admit inconsistent sets of constraints; this fact also prevents us from applying the focused models semantics to capture JDL semantically. However, a purely model theoretical semantics for JDL has been given in [2] using Kripke structures.

5.2. Cumulative default logic Brewka [3] describes a variant of DL that commits to assumptions and that is cumulative. This is accomplished by strengthening the applicability condition for defaults and by making the justification for adopting a default conclusion explicit. In order to keep track of implicit assumptions, Brewka introduces assertions, or formulas labeled with the set of justifications and consequents of the defaults which were used for deriving them. Intuitively, assertions represent formulas along with the reasons for believing them. Definition 5.7 (Brewka [3]). Let a, 3'1, • • •, Ymbe formulas. An assertion ~ is any expression of the form (a, {3~t,... ,Ym}), where a =Form(~) is called the asserted formula and the set {Yl . . . . , Ym} = Supp(~) is called the support of a ) 8 To guarantee the proper propagation of the supports, Brewka extends the classical inference relation as follows. Definition 5.8 (Brewka [3]). Let 6e be a set of assertions. Then ~"~(6e), the assertional consequence closure operator, is the smallest set of assertions such that (1) 5e c_ ~"~(Se), 18 The two projections extend to sets of assertions in the obvious way. We sometimes misuse denoting the support of an asserted formula, e.g. (a, Supp(a)).

Supp for

202

J.P. Delgrande et al. / Artificial Intelligence 70 (1994) 167-237

(2) if ~ : l , . . . , ~:, • r/"h(ow) and Form(~l) . . . . , Form(~,) ~- y, then

(% Supp( ~l ) U . . . U Supp( ~n) ) • T'~h(~ ) . An assertional default theory is a pair (D, °W), where D is a set of defaults and is a set of assertions. An assertional default theory (D, °W) is well-based if Form(~W) U Supp(W) is consistent. An assertional extension is defined as follows. Definition 5.9 (Brewka [3]). Let (D, °W) be an assertional default theory. For any set of assertions 5¢ let F(S) be the smallest set of assertions 5e' such that (1) °W C_5e', (2) rh(b °') -- 5°', (3) for any @ C D, if (a, Supp(a)) E 9" and Form(9 ~) U Supp(9 p) U {/3} U {to} ~ / 1 , then (to, Supp(a) U {/3} tO {to}) E 5¢'. A set of assertions ~ is an assertional extension for (D, ~V) iff F ( ~ ) = ~. Comparing the last definition with that of constrained extensions, we see that the justifications and consequents of applied defaults are recorded locally to the default conclusions. However, a closer look reveals that the applicability condition for a default ~: ~ in ConDL and CumDL require both the joint consistency of its justification /3 and its consequent to with the set of justifications and consequents of all other applying defaults. Therefore, assertional extensions share the notion of "global" consistency with constrained extensions, but in a distributed way. Thus, while ConDk deals with constrained extensions, CumDL deals with "formulas with constraints". GumDk commits to assumptions: to

Example 5.1 (continued). The default theory

([" B :-TB :-7C ^ - 7 D } ) C' D ' E ,0 has three assertional extensions: and T"h({(E, { 7 C ^ --qD, E})}).

Tth({(C, { B , C } ) } ) ,

TAh({{D, {-TB, D } ) } ) ,

Let us examine the extension containing the assertion (D, {-TB, D}). This assertion is derived by applying the default ~ . In order to apply another default the corresponding justification and consequent have to consistent with {D} U {-TB, D}, the asserted formula and the support of the assertion. As can be easily verified, neither of the two other defaults is applicable. So, once we have derived a conclusion, we are aware of its underlying assumptions. Therefore, CumDL prevents the derivation of conclusions that contradict previously derived conclusions or their underlying consistency assumption. In all, CumDL and ConDL are quite similar. To characterise this relation directly, we give a kind of equivalence result between our formulation and that of [3].

J.P. Delgrande et al. / Artificial Intelligence 70 (1994) 167-237

203

Theorem 5.10. Let (D, W) be a default theory and (D, °/4/') the assertional default theory, where ~V = {(a, t~> [ a E W}. Then, if (E, C) is a constrained extension of

(D, W) then there is an assertional extension g of (D, 74/') such that E = Form(g) and C = Th(Form(g) U Supp(~)); and, conversely if ~ is an assertional extension of (D, 7g') then (Form(g), Th(Form(g ) U Supp(g ))) is a constrained extension of (D, W). Observe, that we get a one-to-one correspondence between the "real" extensions, E = Form(g). However, the constraints of a constrained extension correspond to the deductive closure of the supports and the asserted formulas of the extension. Thus, we can map assertional extensions onto constrained extensions only modulo equivalent sets of supports. CumDL shares several properties with ConDL: the existence of assertional extensions is guaranteed, CumDL is semi-monotonic and all assertional extensions of a given assertional default theory are weakly orthogonal to each other. Also, as its name indicates, CumDL is cumulative [3, Proposition 2.13]. This is illustrated by means of the following example (cf. Example 2.9). Example 5.11 (Cumulativity). • The assertional default theory

•A AvB:-~A~

O)

has one assertional extension: T'h({}). Consequently the assertional extension contains (A v B, {A} >. • The assertional default theory A '

7A

,{D-maximal element above (MOD(Form(°W)), M O D ( F o r m ( ~ ) U Supp(74V))). If (11, [I) is a >D-maximal element above (MOD(Form(74/')), MOD(Form(~W) U Supp(~W))), then there is an assertional extension ~ of (D, °W) such that 11 = { Tr I ¢r ~ Form( ~ ) } and [I = {re 17r ~ Form(Y) U Supp( ~ ) } . Notably, we do not obtain a simpler semantical characterisation for normal assertional default theories in general, in contrast with ConDL. The focus is necessary in the case of normal assertional default theories wherein Supp(T¢) is non-empty. In particular, this will be the case whenever derived assertions are added to the assertional facts. CumDL runs into the "floating conclusions" problem [5]; this difficulty involves reasoning skeptically by intersecting several extensions. Example 5.13. The assertional default theory

A '

B

, ( ( A D C , O ) , ( B D C , O)}

has two assertional extensions: T-~h({(A,{-qB, A}), T"h({(B, {~A, B}), (C, {~A, B})}).

(C, { ~ B , A } ) } )

and

Reasoning skeptically, we cannot draw any conclusion about C. Although C is asserted in both extensions, the corresponding supports differ and so the assertions as such are different and do not belong to the intersection. Since constrained extensions consist of first-order formulas they do not run into this problem; consider the corresponding default theory and its constrained extensions: Example 5.13 (continued). The default theory ( { : A B, :~A}B, { A D C , B D C } ) has two constrained extensions: (Th({A, C}), Th({A, C, 7 B } ) ) and (Th({B, C}),

rh((B, C,--hA})). Reasoning skeptically by intersecting the above extensions and sets of constraints yields the set of skeptical conclusions: Th({A v B, C}) in the context of Th({C,--n(A ~ B)}). Hence, we obtain C as a skeptical conclusion. The crux of the previous example lies in the proper introduction of the exclusive disjunction --n(A-= B). Using assertions we cannot apply any kind of deduction to the supports, apart from considering them when checking consistency. But by encoding the underlying consistency assumptions as a context guiding our beliefs, we have the whole deductive machinery of classical logic at hand. To conclude, observe that ConDL is closer to CumDL than to ODE. Although

J.P. Delgrande et al. / Artificial Intelligence 70 (1994) 167-237

205

Lukaszewicz attaches constraints to an extension, he employs a weaker consistency check. Similar to DE, justifications need only to be separately consistent with an extension at hand. This is mirrored by the notion of commitment, since CumDk and ConDk (strongly) commit to assumptions, whereas DL and JDL do not. Since additionally every standard extension is also a justified extension (cf. [13]), JDL seems to be closer to DL than to its two constrained descendents.

5.3. Theorist Poole [16] describes a system, called Theorist, for default reasoning based on theory formation from a fixed set of hypotheses. There is a close relationship between PIConDL and the Theorist system. We briefly review Theorist and then show how a Theorist system can be translated to/from a prerequisite-free default theory in GonDk such that the extensions of the Theorist system correspond exactly to the constrained extensions of the corresponding default theory. A Theorist system is a triple (if, A, ~ ) where ~ is a set of closed formulas called the facts, A is a set of formulas called the possible hypotheses, and q¢ is a set of closed formulas called the constraints. Default reasoning is effectively reduced to hypothetical reasoning: any ground instance of a possible hypothesis may be assumed as long as it is consistent with the facts and the constraints. Definition 5.14 (Poole [16]). A scenario of (if, A, q¢) is a set ~ U S where S is a set of ground instances of A such that ~ U S U c¢ is consistent. Definition 5.15 (Poole [16]). A closed formula cr is explainable from (if, za, c¢) iff there is a scenario ~ U S such that ~ U S ~-or. We assume that f f U c¢ is consistent since otherwise there are no scenarios. For default reasoning, za contains conditionals similar to prerequisite-free defaults. For example, the following Theorist system corresponds to the Q u a k e r - R e publican example:

= {a(sue), R(sue)}, A = {Q(x) D P(x), R(x) D "-qP(x)}, P(sue) is explainable from the hypothesis Q(sue)DP(sue) while --qP(sue) is explainable from the hypothesis R(sue)D-qP(sue). For practical reasons, such as printing explanations and the implementation of Theorist, Poole argues for the "naming" of defaults. If to(Y) is a formula intended as a default with free variables ~?= x~, x z . . . . . x, then to(Y) can be named with n,o where no, is a n-ary predicate symbol not appearing in ~, A, or c¢. With named defaults, A contains only the names of defaults and ,~ contains formulas of the

206

J.P. Delgrande et al. / Artificial Intelligence 7(1 (1994) 167 237

form VY(no,(£) D w(£)), for each name n~. The above example may be written as = {Q(sue), R ( s u e ) , Vx ( n e e ( x ) D ( Q ( x ) D P(x))) , Vx (nRe(x) D (R(x) D --qP(x)))},

a

=

(nep(x), nR/x)},

using the names n e e and nRp for the two defaults. Constraints are often used to state when a default is not applicable and are used to deal with interacting defaults by blocking unwanted transitivities and contrapositives. A constraint of the form 3' D-qn states that the default named n is not applicable when y is true. For example, the constraint,

Vx (P(x)

nRAx))

blocks the contrapositive of the Republican-not-Pacifist default by stating that this default is not applicable when P(x) is true. P e e l e has shown that when there are no constraints, naming defaults does not affect what is explainable. However, the example in Fig. 5 demonstrates that when there are constraints, naming defaults does affect what is explainable. Without naming, o~ U {B} is not a scenario because o~ U {B} ~- --q(F D ~ A ) . With naming, however, B is explainable using the hypothesis n B. We have assumed that when a default w is named by no, then any constraints of the form y D m w are replaced by y D 7 n . This translation is not explicitly stated in [16] but seems obvious from the examples given there. Our translation of PfConDL to Theorist produces a system with named defaults. Relation to PfConDL

This section describes the correspondence between Theorist with constraints and constrained extensions of semi-normal prerequisite-free default theories. P e e l e has previously shown that Theorist without constraints is equivalent to

F

\,Vithout, Naming

With Naming

B D /I

I1 D ..I I1DF nR D I3

BDF

nA D A A

C Scena.rios

13 A F D -~A fU{A}

~z8

nA F D ~nA f O { n A } and F U { n B }

Fig. 5. A Theorist example with and without named defaults.

J.P. Delgrande et al. / Artificial Intelligence 70 (1994) 167-237

207

prerequisite-free normal default theories, that is normal default theories in PfDL. This equivalence carries over to PfConDL since DL and ConDL are equivalent for normal default theories. The interesting case is Theorist with constraints. For simplicity, the following assumes that all the formulas in A are closed (otherwise, some formula in A contains a free variable and it would be necessary to use open default theories). Independently, Brewka [4] and Dix [8] have shown that a Theorist system, (~, zl, c~) is equivalent to a prerequisite-free default theory of the following form

20

and vice versa. The above translation also provides a correspondence between a Theorist system and a PfConDL theory as demonstrated by the following theorem. Theorem 5.16. A formula cr is explainable from (,~, zl, ~) iff there exists a constrained extension (E, C) of ({ : - ~ ~ I/3 • A}, ~ ) such that tr C E. It is somewhat unsatisfying to duplicate all the constraints at each default. However this duplication can be avoided by considering a restricted form of a Theorist system that arises frequently in practice. The methodology for naming defaults and using constraints to block defaults often yields Theorist systems that have the following form: (1) zl contains only the names of defaults. (2) For each default name, n, there is exactly one formula in o~ of the form

n~/3. (3) For each default name, n, there is exactly one formula in ~ of the form ~ Y ~ --nn.21

All of the examples in [16] have this form. For Theorist systems in this form there is a translation to PfConDL that avoids duplicating the set of constraints at each default. In addition, the translation eliminates all default names found in the original Theorist system. The function Tr, defined below, translates a Theorist system in the above form to a prerequisite-free default theory.

Tr(~, A, ~) = (D, W ) , where W = {or ] cr E ~ and or does not contain any default name}, D={2-~

nEA, nDflE~and--n~,D--nnE~}.

20 The ~ in the default rules is the conjunction of all formulas in the corresponding (finite) set of constraints. zl & ~ mn or T D-~n can be used for defaults that are always/never applicable.

208

J.P. Delgrande et al. / Artificial Intelligence 70 (1994) 167-237

Theorist

PfConDL

W R

(2 /?,

7~Qp D ((2 D P) nRp ::)

A

(R D ~P) D

n0t,

: (RD~P)A~P

?~,R]9

C

:QDP QDP

(RD~P)

P ~ "711Rp

Fig. 6. A n example translated from Theorist to PIConDL.

Fig. 6 shows a simple Theorist system along with its translation. Anything explainable from the Theorist system will occur in some constrained extension of the corresponding default theory. Theorem 5.17. A formula or is explainable from (,~,A, q¢) iff there exists a constrained extension (E, C) of Tr(o%, A, c¢) such that or E E. This relationship between Theorist and constrained extensions implies that the focused models semantics developed for ConDl_ can be used for Theorist. In addition, the method for reasoning about defaults, developed in Section 4.3, can be applied to Theorist. A prerequisite-free default theory can also be translated into a Theorist system such that the constrained extensions are equivalent to what is explainable. The function Tr-I maps a prerequisite-free default theory to a Theorist system in the above form.

Tr ~(D, W) = (~, A, ~ ) where

~=WU{n~Dfl

J-~

q~ = {--7T D-Tnt~ J - ~ @

E D},

D},

where each n~ is a new name for a default. Theorem 5.18. Let (D, W) be a semi-normal prerequisite-free default theory. A

J.P. Delgrande et al. / Artificial Intelligence 70 (1994) 167-237

209

formula or is in E for some constrained extension (E, C) of (D, W ) iff or is explainable f r o m Tr-I(D, W). 22 Therefore, the implementations for Theorist can be used to determine if a formula is in some constrained extension of a (semi-normal) prerequisite-free default theory.

6. Conclusions

We have developed variants of default logic that address situations not dealt with in the original system, that formalise differing intuitions in approaches to default reasoning, and that rectify difficulties in the original system. In all these variants, fixed-point, iterative, and model-based characterisations of default extensions are given. In DL a default rule is applicable when the prerequisite is provable and the justification is consistent with the final extension. Section 3 develops the major variant, GonOl_. In this formulation the justification must be consistent with the final extension together with the set of justifications of all the other applied rules. Consequently, the set of justifications forms a context of which the default conclusions are a subset. While slightly more complex than the original formulation, GonDL arguably better conforms to intuitions regarding assumption-based default reasoning. This variant commits to assumptions, and possesses the properties of semi-monotonicity and weak orthogonality; as well it guarantees the existence of extensions. With regards to a model-based characterisation of extensions, the stability property, required in Etherington's semantics, is not required here. The variants examined in Section 4 weaken the requirement that the prerequisite be provable. In these variants, the reading of a conditional "typically if a t h e n / 3 " is such that "typically" applies to the conditional as a whole. Thus the conditional is a component of the justification, as well as constituting the conclusion. In the first modification, PIDL, if the conditional can be concluded then one can reason using the conditional, without the need to prove the antecedent true. Thus one obtains contrapositive defeasible inferences, as well as reasoning by cases. Interacting defaults are dealt with by incorporating seminormal defaults where necessary, in the manner described in [19]. Semi-normal defaults are also used to block other possibly-unwanted inferences, such as those involving contrapositives.

22The Theorist system contains new predicate symbolsfor the names of defaults so there are formulas, containing default names, that are explainable but are not in any extension.We consider such formulas to be non-explainable.

210

J.P. Delgrande et al. / Artificial Intelligence 70 (1994) 167-237

In the combination of these approaches, PfConDL, we retain the properties of ConDL and PfDL. Thus this variant fully addresses the set of issues identified in Section 2. From a technical standpoint, several benefits are obtained. First, the formulation is conceptually cleaner and simpler. The modifications result in a uniformity of notation, in that both rules and extensions in a default theory are composed of two parts: a justification and a conclusion. The definition of an extension is simplified and, in particular, we can now give a non-fixed-point characterisation of a default extension. While the overall complexity remains the same for default inferencing, a number of heuristic benefits obtain, including the fact that there are no prerequisites to be proven; and, in the construction of an extension, if a default is inapplicable at some point, it is always inapplicable. Lastly, we can now reason about the defaults. Thus for example we can deductively determine in the metatheory that, given the knowledge that -TB v -TD, the defaults ~ and ~ cannot be jointly applied. That the suggested changes are indeed reasonable is justified by appealing to the intuitions motivating these changes. As well, numerous examples are cited, and difficulties identified in the literature are addressed in these variants. Lastly these variants are extensively compared to the major alternative approaches to default logic, those of Lukaszewicz and Brewka. We have also shown that the resulting system is closely related to the Theorist system of [16] and so provide a link between Theorist and default logics. The discussed default systems and their properties are summarized in the tables given in the following appendix.

Acknowledgements The first author wishes to acknowledge support from the Natural Science and Engineering Research Council of Canada grant A0884, as well as the Institute for Robotics and Intelligent Systems (IRIS) in the Networks of Centres of Excellence Program of the Government of Canada. The second author was supported by the German Federal Ministry for Research and Technology within the project TASSO under grant no. ITW 8900 C2. The third author wishes to acknowledge support from a Natural Science and Engineering Research Council of Canada postgraduate scholarship. We thank the referees for suggesting one of the themes of this paper: that properties such as commitment to assumptions are more or less desirable depending on one's intuitions (and so do not necessarily constitute faults of the formalism). We thank Philippe Besnard, Gerd Brewka, and Aaron Rothschild for their helpful comments and discussions.

Appendix A. Summary of systems Table A.1 lists the names and abbreviations used for the default systems dealt with in the paper, together with the sections in which these systems are discussed.

J.P. Delgrande et al. / Artificial Intelligence 70 (1994) 167-237

211

Table A. 1 Name

Abbreviation

Section(s)

Default logic Constrained default logic Prerequisite-free default logic Prerequisite-free constrained default logic Justified default logic Cumulative default logic Theorist

DL GonDL PIDL PfConDL JDL CumDL

2.1,2.2 3 4.1 4.2 2.3, 5.1 2.3, 5.2 5.3

In T a b l e A . 2 , for c o m p l e t e n e s s we h a v e i n c l u d e d results which w e r e n o t e x p l i c i t l y s t a t e d h e r e (cf. [22]). O t h e r s w e r e o m i t t e d since t h e y d o n o t r e f e r to e n t i r e s u b s y s t e m s o f DL. F o r i n s t a n c e , we h a v e s u p p r e s s e d t h e fact t h a t semin o r m a l d e f a u l t t h e o r i e s which are o r d e r e d in a c e r t a i n w a y g u a r a n t e e the e x i s t e n c e o f e x t e n s i o n s [10]. T h e p r o p e r t i e s o f n o r m a l d e f a u l t t h e o r i e s e x t e n d to n o r m a l d e f a u l t t h e o r i e s having an a d d i t i o n a l b u t fixed c o n s i s t e n c y c o n d i t i o n , as g i v e n in T h e o r e m 5.16 (cf. [8]). A dash i n d i c a t e s t h a t t h e c o n s i d e r e d p r o p e r t y is m e a n i n g l e s s for t h e r e s p e c t i v e d e f a u l t logic. I t e m s in p a r e n t h e s e s i n d i c a t e y e t unproven conjectures.

Table A.2

Maximality Pairwise maximality Existence Semi-monotonicity Orthogonality Weak orthogonality Cumulativity Commitment Reasoning by cases Reasoning by contraposition Reasoning about defaults Skeptical reasoning

DL

ConDL

JDL

CumDL

G -N N N -PfN N Pf Pf PfN G

N G (3 (3 N (3 Pf G Pf Pf Pf (3

N (3 (3 (3 N N (Pf) N Pf Pf (PfN) (3

N G (3 (3 N (3 (3 (3 Pf Pl (Pf) W

G ~ general default theories, N ~ normal default theories, Pf ~ prerequisite-free default theories, PfN ~ prerequisite-free normal default theories, W ~ default theories without defaults.

Appendix B. Proofs of theorems B.1. Proofs o f theorems in Section 3

Theorem 3.4. Let (D, W ) be a default theory and let E and C be sets o f sentences.

J.F. Delgrande et al. / Artificial Intelligence 70 (1994) 167-237

212

Define E0= W ,

C O= W ,

and for i >10

{I C;+,=Th(C~)u{/3^o) E;+,=Th(E;)U

to

to

~D, ~E;,

oo

CU{13}U(to}~±

~D, o,~E.

}

,

CU(/3IU(to}t/±

}

.

(E, C) is a constrained extension of (D, W) iff (E, C) = (U~_o Ei, U~=o Ci).

Proof. First, observe that we have the following properties: (1) W C_U~=o E,, E; C_UT=o C;. (2) UT~o E; = Th(UT=o E,) and U~=o C; = Th(U~=o C;). (3) For any ~ , ~ E D , if a E U ~ _ o E ; and C U { / 3 } U { t o } V ± , U~-o E; and/3 ^ to E U~:o Cr By the minimality of F(C), we have 23

F~(C) C_0 E, , i-O

F2(C) C 0 C, ,

then w C

(B.1)

i-O

regardless of whether (E, C) is a constrained extension or not. Only-if part: Assume (E, C) is a constrained extension. (_D) We have to show that E i C_E and C; C_C for i~>0. Base: Clearly, E o = W C _ E and C o=WC_C. Step: Assume E; C_E and C; C_C and consider r/EE;+~ U C;+ 1. (1) r/E Th(E;). Since E; C_E and E = Th(E) we obtain r/E Th(Ei) C T h ( E ) = E.

(2) B ~ Th(Ci). Analogous to (1). (3) r/E {/3, to} for some ~ such that a E E i and C U {fl} U {to} ~/_L. Since E;C_E we have a C E . Together, a E E and C U { / 3 } U {to} }TZ± imply to ~ E and /3 A to C C, and both cases for ~7 are covered. Thus, we have E;+I C E and C;+~ C C, respectively. (C_) From (B.1) and the fact that (E, C ) = F ( C ) we obtain EC_Uf_oEi and C C U~=o Ci, respectively. We obtain (E, C) = ((-)~=0 Ei, U ~ o Ci). If part: Assume (E, C) = (U~_o Ei, Uf=o Ci). (D) Now, we have to show that E;C_F~(C) and C;C_F2(C) for i~>0. Base: Clearly, E 0 = W C_F1(C) and C o = W C_F2(C ). Step: Assume E; CFI(C ) and C; C F2(C ) and consider t i e Ei+ 1 U Ci+ 1.

23 W e r e f e r to t h e c o m p o n e n t s of F as F t a n d F 2, r e s p e c t i v e l y .

J.P. Delgrande et al. / ArtificialIntelligence 70 (1994) 167-237

213

(1) 7/E Th(Ei). Since E i C Fii(C) and Fl(C ) = Th(FI(C)) we obtain r/E Th(Ei) C_ Th(FI(C)) = F~(C). (2) • E Th(Ci). Analogous to (1). (3) -q E {/3, w} for some ~ such that a E E i and C U {/3} U {o~}~ / ± . Since E i C_F~(C) we have a E FI(C ). Together, a E FI(C ) and C U {/3} U {~o} V ± imply w EFI(C ) and/3 ^ w EF2(C ) and both cases for 7/ are covered. Accordingly, we have Ei+ 1 C_FI(C ) and Ci+ i C F2(C), respectively. (C_) Follows from (B.1). We have shown that (U~=0 Ei, U~=0 ci) = [ ' ( c ) . Together with the assumption (E, C) = (U~=0 Ei, U~=0 Ci), we obtain that (E, C) is a constrained extension of (D, W). [] In the sequel, we frequently use the following definition and results. Definition B.I. Let (D, W) be a default theory and S and T sets of formulas. The set of generating defaults for (S, T) with respect to D is defined as follows:

G D ( ( S ' T ) ' D ) = { a:~o E D

a E S , T U { / 3 } U { w } V - L }.

Theorem B.2. Let (E, C) be a constrained extension of a default theory (D, W).

We have E = Th(W U Conseq(GD((E, C), D))), C = Th(W U Conseq(GD((E, C), D)) U Justif(GD((E, C), O))). Proof. See [22].

[]

Theorem B.3 (Groundedness). Let (E, C) be a constrained extension of (D, W).

Then, there exists an enumeration (6i)ie I of GD((E, C), D) such that for i E I W U Conseq({6o,... , 6i-1}) Proof. See [22].

F Prereq(6~).

[]

Theorem 3.8 (Semi-monotonicity). Let (D, W) be a default theory and D' a set of defaults such that D C_D'. If (E, C) is a constrained extension of (D, W), then there is a constrained extension (E', C') of (D', W) such that E C E' and C C C'.

Proof. The inconsistent case is easily dealt with, so that we prove below the theorem for E and C being consistent. We define a sequence (zl) of subsets of D' as follows. For the sake of simplicity, let us abbreviate Th(WUConseq(A)) by E* and T h ( W U

214

J.P. Delgrande et al. / Artificial Intelligence 70 (1994) 167-237

Conseq(AL) U Justif(A )) by C K. (GD((E, C ) , D , i f ~ = 0 , if ~ is a limit ordinal, A, = {A~ U {S }, if L = K + 1 is a successor ordinal in the case there e x i s t s 6 = ~ E D ' \ A such that ~ E E K

IU 0 E,+,=Th(E,)U

o~

/

69

C,+,=Th(C,)U /3^ o

~ D , o~@E,, CU{/3}U{w}JTZ..L

(.O

cu{/3}u{

}

,

o}t/_L

}

,

Proof by induction on i.

Base: By definition. Step: The induction hypothesis is: E~ C_F~(C') and C~ _CF2(C' ). Consider ~ E E~+~ U C~+~. Then, one of the three following cases holds.

:4 We refer to the components of F as F~ and F2, respectively.

215

J.P. Delgrande et al. / Artificial Intelligence 70 (1994) 167-237

(1)

"oETh(Ei). By the induction hypothesis and the fact that FI(C' ) is deductively closed, we have r/@ E~+ t. (2) r i g Th(C~). By the induction hypothesis and the fact that F2(C' ) is deductively closed, we have , / E Ci+ ~. (3) "OE {/3 A o~ [ e ~ ¢ D, a E E~, C U {/3} U {to} W_L}. That is, rl is either to or /3 such that there is a default " 6O : ~ E D with a C E i and C U { / 3 } U {~o} ~ ' ± . By the induction hypothesis, a ~ F~(C'). By definition, /3 A o~ E C C_C'. Since C' is consistent, we have C' U {/3} U {oJ} ~ ' ± and we obtain together with a ¢ F I ( C ' ) that to EF~(C') and /3 A ~o E F : ( C ' ) and both cases for "O are covered. From the three cases, we obtain El+ 1 C_FI(C' ) and Ci+ 1 C_F2(C' ). []

Hence, we have shown that E i C FI(C' ) and C i C_F2(C' ) for i ~> 0.

Theorem 3.6 (Existence of extensions). Every default theory has a constrained extension. Proof. Let (D, W) be a default theory. Then, there is a default theory (0, W) which has a unique constrained extension ( T h ( W ) , Th(W)). From this and Theorem 3.8 the result follows immediately. [] Theorem 3.10 (Weak orthogonality). Let (D, W ) be a default theory. I f (E, C) and (E', C') are distinct constrained extension of (D, W ) , then C U C' is inconsistent. Proof. The case where W is unsatisfiable is trivial. According to Theorem 3.4, (E, C) = (Ui~o Ei, U °~i=oCi) such that E o = W and C 0 = W , and f o r i / > 0 Ei+ 1 ~-- T h ( E i ) U

Ci+l = T h ( C i ) U

{I o,)

{

o)

/3 Ao9

~ D, a @ Ei, C U { / 3 } U {o.)} ~.J._

o.)

eD,

o~eEi, CU{fl}U{og}~,zI

}

I

.

Also (E', C ' ) = (U~=0 El, U~=0C'i) where E i and C'i are defined analogously. Without loss of generality, we can assume that C and C' are distinct. Then, there exists a least k such that Ck+ 1 # C'k+ 1 in which case Ck = C'k (and E k = E~,). Then, there is a default ~ : 8 ~ D such that a ~ E k = E ' k and C U{/3}U{o2}V_L and oJ t t t /3A~OECk+ 1 but / 3 A w J ~ C , + 1. But a @ E k and /3At0~-Ck+ a implies C ' U {/3} U {0o} ~-J_. Since /3 A w C C and C and C' are consistent, we have by monotonicity that C U C' ~- 1. That is, C U C' is inconsistent. [] Theorem 3.12. Let (D, W ) be a normal default theory and E a set of sentences. Then, E is a standard extension of (D, W ) iff (E, E) is a constrained extension of (D, W).

J.P. Delgrande et al. / Artificial Intelligence 70 (1994) 167-237

216

Proof. In order to prove the claim we reduce the characterisation of constrained extensions given in Theorem 3.4 in the case of normal default theories. By definition, for any normal default 4: ~ we have/3 -= to. Now, according to Theorem 3.4, (E, C) is a constrained extension of the normal default theory (D, W) iff (E, C ) = (U~=0 El, U ~ = o c i ) and E 0 = W and Co = W , and fori~>0 oJ

{L

E~+,= Th(E~)U to Ci+l =

to

{

Th(C3u /3 ^ to

/

ED, o~EE~, CU{/3}U{to)J;/_L

°:,, ~ D, ,~ ~ e~, C U {/3} U {to} 1 / ± } • to

Clearly, since/3 --- to we have Th(Ei) = Th(Ci). So, since U~'=o Ei and U~-0 Ci are deductively closed we also have U~=0 Ei--U[=0 Ci. Notice also that due to the equivalence of /3 and to the condition C U{/3} U { t o } ~ ± reduces to C U {/3} ~ ± , and furthermore, since C is deductively closed, we obtain -n/3~'C. Therefore, (E, E) is a constrained extension of a normal default theory (D, W) iff (E, E) = (ULo E~, U~-o Ei) such that E 0 = W, and for i i> 0 Ei+l= Th(Ei) U

{

to

~:/3 to

ED,

}

o ~ E E i , -q/3y~-_g .

Obviously, this amounts to the same characterisation of standard extensions as given in Theorem 2.2 for standard extensions. [] Theorem 3.14. Let (D, W) be a default theory and let E be a standard extension of (D, W). If E U C E is consistent, then (E, Th(E U CE) ) is a constrained extension of

(D, W). Proof. Let E be a standard extension of (D, W) and C~ = (/31 ~ f E D, a E E, ~ / 3 ~ E } . Define C = Th(E U CE). We show that (E, C) is a constrained extension of (D, W). First, observe the following properties. (1) By definition, W C E C C. (2) Also, by definition, E = Th(E) and C = Th(C). (3) For any ~,~ E D , if a ~ E and C U {/3} U {w} VA_ then w ~ E and /3 ^ to E C. Because, by monotonicity and the fact that E is deductively closed, C U {/3} U {w} V ± implies --n/3,~-.E (since g C_C). Then, by the minimality of F(C), we have 25 Q(C)C_ E and ~(C)C_ C. That is, iv(C) C_2 (E, C). z6 Since E is a standard extension of (D, W) we have according to Theorem 2.2 that E = U~=0 Ei such that E o = W, and for i ~> 0

Ei+ 1 = Th(Ei) U { to a :to/ 3 E D , a E E i , 7 / 3 ~ . E } . zs We refer to the components of F as F~ and Fz, respectively. 26 We sometimes abbreviate F~(C) C_E and F2(C ) C C by F(C) C_2 (E, C).

J.P. Delgrandeet al. / ArtificialIntelligence 70 (1994) 167-237

217

Define C O= 0, and for i i> 0 Ci+1 = {

] a to: /3 E D , a ~ E i , "-l/3 f~ E

Clearly, CE = U~=0 C~. We will show that U~=0 Ei C_F~(C) and U~=o Ci c_ F2(C ), in order to show that E C_F~(C) and C C_F2(C ). Therefore, we show by induction E~ C F~(C) and C~ C_F2(C ) for i>~0.

Base: Clearly, E o = W C_F~(C) and C o = 0 C_72(C). Step: Assume E~ C F~(C) and C~ C_F2(C ) and consider ,7 E E~+, U C~+,. (1) If ,7 E Th(E~) then, by the induction hypothesis and the fact that F~(C) is deductively closed, we obtain ,7 E F~(C). (2) If 7/E Ci then, by the induction hypothesis, also ~/E F2(C ). (3) Otherwise, there exists a default ~:o~~ E D such that a E E~ and ~ / 3 ~ E . By the induction hypothesis, a EFt(C). By assumption, C is consistent. Since E is a standard extension of (D, W), a E E, and ~ / 3 ~ E implies to E E. Also, by definition of CE, we have/3 E C E. Therefore, C U {/3} U {to} ~ ± (since C = Th(E U C~)). From a ~ FI(C ) and C U {/3} U {to} ~ z / we conclude, by Definition 3.1, that to ~ F~(C) and /3 A to ~ F2(C ). Since F~(C) is deductively closed the last membership implies/3 ~ F2(C ). Clearly, both cases for v are covered. Accordingly, Ei+ 1 C FI(C ) and Ci+ 1 C I'2(C ). With this, we have shown that U~=oEiC_F~(C) and U~=0 CiC-F2(C). Since U~=0 Ei = E and U~=0 Ci = Ce, that is E c_ F~(C) and Ce c F2(C). Since F~(C) C_ F2(C ) we have E U Ce C F~(C). So, since F2(C ) is deductively closed, C C_F~(C). Hence, (E, C) C_2F(C). [] Theorem 3.15. Let (D, W) be a default theory and let E and C be sets of sentences. If (E, C) is a constrained extension of (D, W) and E is a standard extension of (D, W), then C C_Th(E U Ce). Proof. Let (E, C) be a constrained extension of (D, W) and let E be a standard extension of (D, W) and C E = {/3 I a ~ ~ D, a E E, -q/3~E}. Then, according to Theorem 3.4, E = (U~=o El, U~=0 C/) such that E o = W and C O= W, and for i >/0 E,+, = Th(e~) U to

{

60

C,+l=rh(C,)U 13,,,o

E D, a U E~, C U {/3} U {to}} / ±

tO

eD,

}

ei, C u (/3} u (to} F z

]

'

We will show U~=0 Ci c_ Th(E U Ce), in order to show C C_Th(E U Ce). Therefore, we show by induction C~ C_Th(E U Ce) for i ~ O.

Base: Clearly, C o = W C E C Th(E U Ce). Step: Assume C~ C_ Th(E U Ce) and consider ~ E C~+1. (1) If '7 E Th(C~) then, by the induction hypothesis, ~ E Th(E U CE).

218

J.P. Delgrande et al. / Artificial Intelligence 70 (1994) 167-237

(2) Otherwise, 71E {/3, oJ} for some default 4:~ ~ D such that a ~ Ei and C U {13} U {w} ~ ± . Clearly, ~o E E. According to the definition of C E we have /3 E C E only if a E E and 7 / 3 ~ E. Clearly, a ~ E since a E E~ and E~ C_E. Since C U {/3} U {~o) ~ ± by monotonicity, E U {/3} V ± . That is, since E is deductively closed -1/3~ E. Thus, both cases for r/ are covered. From the two cases, we obtain C~+~C_ Th(E U CE). Therefore, we have shown that U~-0 Ci c_ Th(E U CE).

[]

Theorem 3.16. Let (D, W) be a default theory and E and C sets of formulas. Let = A aeGD(~E,C),D) Conseq(6) ^ Justif(6) with finite GD((E, C), D) and D'=

{

a:/3 ^,o ^ C

-0)

~:/3 E D } . £0

Then, if (E, C) is a constrained extension of (D, W), then E is a standard extension of (D', W). Proof. Since (E, C) is a constrained extension of (D, W) we have according to Theorem 3.4 that (E, C) = (U~=0 Ei, U~_o Ci), where E 0 = W and C O= W, and for i~>0

{ C,+l=rh(C, l u {/3^~o E/+,=Th(Ei)U

oJ

ED, aEEi, CU{/3}U{og}~/I

(.O

}

~ a , ~ ~ e , , CU {/3} U { ~ o } / ±

}

'

Since (according to Theorem B.2)

E = rh(W U Conseq(GD((e, C), 17))), C = rh(W U Conseq(GD((E, C), 19)) U ].stif(aV((e, O , V))), we have

C = Th(E U {(~}) by definition of C. Consequently, we have

CU{/3}U{w}J:/±

iff

EU{/3/,,wAC'}~"J_.

(8.2)

Consider E~ = W and

Ei+I=Th(E;)U 09

O)

ED',aeE~,EU{/3AwA

JzZ_L

for i >t 0. Now, condition (B.2) implies that E i = E~ for i/> 0. Since E = U~=o Ei this implies according to Theorem 2.2 that E is a standard extension of (D', W). [] Theorem 3.19. Let (D, W) be a default theory. Let (H, [ / ) be a pair of sets of

j.p. Delgrande et al. / Artificial Intelligence 70 (1994) 167-237

219

models and E and C deductively closed sets of sentences such that 11 = { ~r l Tr ~ E} and H = {~ 17r ~ C}. Then, (E, C) is a constrained extension of (D, W ) iff (11, F1) is a >D-maximal element above (MOD(W), MOD(W)).

Proof. First, we need the following definition. Definition B.4. Let (D, W) be a default theory. Given a possibly infinite sequence of defaults a = (80, ~51,~2. . . . ) in D, also denoted (8~)/e J where I is the index set for A, we define a sequence of focused models structures ((1I/, ~ ) } ~1 as follows: (Ho,/~/o) = (MOD(W), MOD(W)),

where 8~-

O~i : ~ i

The unsatisfiable case is easily dealt with, so that we prove below the theorem for E and C being satisfiable. Proof of Theorem 3.19. Correctness: Assume (E, C) is a consistent constrained

extension of (D, W). Then according to Theorem B.3, there exists an enumeration (~Si)i~1 of the set of generating defaults GD((E, C), D) such that for i E I W U C o n s e q ( { ~ o , . . . , ~i-i}) ~- Prereq(tSi).

(B.3)

Let ((//~,/~/~))~ be a sequence of focused models structures obtained from the enumeration (~5~)/cI according to Definition B.4. We will show that (11, [/) coincides with f'-~ic~(Hi, [/~) and is >u-maximal above (MOD(W), MOD(W)). Since (E, C) is a constrained extension, we have according to Theorem B.2 that E = T h ( W U Conseq(GD((E, C), D))), C = T h ( W U Justif(GD((E, C), D)) U Conseq(GD((E, C), D ) ) ) .

Then, since (11, [ / ) = (MOD(E), MOD(C)) we have obviously that (H, [ / ) =

c//).

Firstly, let us show that (///+1, [//+1)>~i (//i,/~//) for i UI. • Since 11~C_MOD(W) and M O D ( W ) ~ W, then by definition of Hi we have 11~ ~ W U Conseq(6~_l) for i E I . Now, //~+1 C Hi for i E I implies that //~ W U C o n s e q ( ( ~ o , . . . , ~_1}). By (B.3), it follows that //~ ~ Prereq(~/) for i@I. • Let us assume that (11i+t,/~/÷1) >~i (//,'/~//) fails for some k E I. By definition of ((~,/~//)) i~/and the fact that we have just proven that//~ ~ Prereq(8/) for i E I , this means that [ / k ~ ( t O k A / 3 k ) for 6k = ~ kto:k~ Let us abbreviate W U Conseq({~o,... , 6k_~}) U Justif({6o,. • •, 6k_~}) by C k. By definition, /Tk = MOD(Ck) • Then, C k ~ -7(% A/3k). That is, C k U (%} U {ilk} }- _L. By monotonicity, CU (a~k} U (ilk} ~- _L, contradictory to the fact that 6k GD((E, C), D).

J.P. Delgrande et al. / Artificial Intelligence 70 (1994) 167-237

220

Therefore, (//i+l,/*/i+l) >6, (///, /~/i) for i E I. As a consequence, N i e , (Hi,/~/s) >ao((e.C),D) ( S O D ( W ) , SOD(W)). That is, (H,/1) >D ( S O D ( W ) , MOD(W)), Secondly, assume (H,/~/) is not >o-maximal. Then, there exists a default ~,~ E D\GD((E, C), D) such that H ~ oz and [/~'m(o~ A /3).27 First, since/7 ~ E we have E ~ a. Second, since [ / = S O D ( C ) , we also have C~-7(to ^/3). Of course, E ~ ~ and C ~e'-l(~o ^/3) implies ~/3 E GD((E, C), D), a contradiction. Completeness: Let (H, [/) be a >D-maximal element above ( S O D ( W ) , S O D ( W ) ) such t h a t / / = {~-Irr ~ E} and [ / = {rr I rr ~ C}. According to Theorem 3.4, (E, C) is a constrained extension iff (E, C ) = (Ui=0 Ei, U,-o Ci) such that E. = W and CO= W, and for i/> 0 o~

/I {

Ei+,=:rh(Ei)u ,o q+l=rh(q)u

o)

/3^~o

GO, , •

O)

eEi,

cu(/3}u{o.,}F±

}

,

}

ED,~EE,,CU{/3}U{~o}~/Z

'

We will show that (E, C ) = (U~=0 Ei, U~-0 Ci). Therefore, we consider the following two cases.

Case 1: U,=o Ei C_E, U,=o Ci _cC. We show by induction that E i C E and CiC_C fori>/0. Base: By definition, S O D ( W ) ~ E o. Since H C_S O D ( W ) , we have E ~ E 0. That is, E 0 C E. Analogously, we obtain C OC_C. Step: Let Ei C_E and C~ C_C. Consider ~7EEi+ I U Ci+l. (a) If r/E Th(Ei) then, by the induction hypothesis and the fact that E is deductively closed, we obtain "0C E. (b) Similarly, if rl E Th(Ci) we obtain rl E C. (c) Otherwise, ~7E {/3, o~} such that there is a default ":o,~ E D where a E E i and C U {/3} U {w} ~ & . By the induction hypothesis a E E. That is, H ~ oz. Also, by definition of C, we have [I y--q(w A /3). Since (H, [/) is >D-maximal we also have H ~ ~o and [ / ~ oJ. That is, co E E and/3 A o~ E C and both cases for r/ are covered. From the three cases, we obtain Ei+ 1 C E and Ci+ 1 C C. Case 2: E cU~[=oEi, c C_U~=o C~. Since (/7, [/) is a >D-maximal element above ( S O D ( W ) , S O D ( W ) ) for (D,W), we have that (H, [ / ) = (n~=o//i, n~=o i/i) where {(///, [/~))i~, is a sequence of focused models structures defined for some (6i)ic , according to Definition B.4 such that (//,+1, ~+1) >¢ (Hi, i/g) for i ~ I . Then, we define E/n = {v 117i ~ v} and C~7 = {v I i / / ~ v} to be the sets of //i-valid and ~-valid sentences, respectively. Clearly, E = U ~ o E~/ and C---

UT=o Hence, we show inductively

that

27 F o r r e a d a b i l i t y , w e a b b r e v i a t e 3~- E / 1 .

Eli , C UT-o Ei and C~ C_UT-o Ci for i ~>0.

7r ~ / 3 A oJ b y /1 ~ / 3 A O~.

J.P. Delgrande et al. / Artificial Intelligence 70 (1994) 167-237

221

Base: Since Eon = C on = MOD(W) and E o = C o = W, the result is obvious. Step: According to the induction hypothesis, E n c_UT=o Ei and C~ c Ui~o Ci. Because (Hi+ 1,/'/,+1) >~ (///, ~ ) we h a v e / / / ~ o~i and ~ ~-a(/3~ ^ toi), and

(n,+,, t ( + , ) = ((# G n, I # p ,o,}, {~r G ~ I zr p/3i A ~,}), where 6i = -d: &. By the induction hypothesis and the fact that Hi ~ a~ we obtain e=o Ei. By compactness and monotonicity, there exists a k such that O~i G U = a~ G E k. By definition, [//+a ~/3~ A o~. Therefore, [/~/3g A to~ since /'/= Oi~=o/'/~. Thus, toi A /3~G C. Since C is satisfiable, C U {/3~} U {toz}~ / ± . Then, E k ~ a,oe and C U {/3~} U {to~}~ ± , implies oJ~G Ek+ 1 and to~ A/3~ G Ck+ 1. Hence, to~G Ui=0 E~ oo and toi A/3i G Ui=0 Ci. By the definition of//~+~ and ~+~, (or E~n+l and C~+1, respectively) and the fact that UT=0E~ and U~=o Ci are deductively closed, we obtain E~n+lC_U ~ o E~ and C//'/+I ~ U i = o Ci. [] B.2. Proofs of theorems in Section 4 Theorem 4.5. Let (D, W) be a semi-normal default theory and let (D', W) be the

theory where ~ G D iff :(~z')^r~Dt3G D'. If E is an extension of (D, W) then there is an extension of (D', W), E', such that E C_E'. Proof. Let E be an extension of (D,W). Let E' be a maximum (under containment) consistent set such that E' = Th(E U F), where F={aD/3

afirE, and :(aD/3) ATaD/3 ED'}.

We show that E' is an extension of (D', W). Clearly (1) W C_E', since W C_E and E C E'. (2) rh(E') = E'. We need then to show that (3) If :(~z~)^~ G D ' and 7 ( ( a D / 3 ) ^ T ) ~ _ E ' , then a D / 3 G E ' . aD/3 This is done as follows. Assume that :(~Do)^v G D ' and --n((a D / 3 ) A T ) ~ E ' . aDO Now -l((a D/3) A y ) g ~ E ' iff (a ^ --1/3) v --ny~ E' iff (a A - n / 3 ) ~ E ' and --nT~E'. Since (a A -n/3)g~E' we have that a ~ . E ' or -n/3g~E'. (1) If a G E ' , then -n/3~E' and so -q/3~E. If a G E also, then by the definition of E, /3 G E. Consequently a D/3 G E and so a D/3 G E'. (2) If a ~ E , then a D/3 G E' by the definition of E'. [] Theorem 4.9. Let (D, W) be a prerequisite-free default theory. Then (E, C) is a

constrained extension of (D, W) iff there exists some D' C D such that: E = Th(W U Conseq(D')), C = Th(W U Conseq(D') U Justif(D')), and C k / ±, but for every D' CD", CU {Conseq(D")} U {Justif(D")} ~- ±.

222

J.P. Delgrande et al. / Artificial Intelligence 70 (1994) 167-237

Proof. From Theorem 3.4 we have that (E, C) is a constrained extension of a prerequisite-free (D, W) iff (E, C) = (Ugh0 E~, U~-0 C~), where E o = W and C o = W, and for i/> 0 Ei+ 1 = T h ( E i ) U

{

- 60 -ED,

w

CU{/3}U{~}J/Z

}

Ci+ 1

Consequently,

Ej =Th(W)U ,o

C1= T h ( W ) U { / 3

--~D,

cu{/3}u{~}l/±

,

A~ --~-eD, C U { / 3 } U { w } ~ Z } .

For E 2 and C2, and for E i and Ci in general, we have conditions identical to those for E l and C~ respectively, and so since (E, C) = (Mr_0 Ei, g f - o Ci), we obtain that

( {~^~o

C=Th Wu

--eD,

CU{~}U{~o}Vz

})

.

Clearly E and C are consistent and maximal in the set of applicable defaults. Clearly also

C--Th(WU{~Aw

-~-ED})

where C U { ~ } U } U { w } ~ / _ l _ .

But W U {3 ^ I ~o C D} is just W U Conseq(D') U Justif(D') for maximal D' C_ D. An analogous argument for E gives the desired result. [] Theorem 4.10. Let (D, W) be a prerequisite-free default theory and let a E E' fbr all constrained extensions (E', C') of (D, W). Then, (E, C) is a constrained extension of (D, W) if./: (E, C) is a constrained extension of (D, W U {a}). Proof. Assume that for every constrained extension (E', C') of (D, W) that aEE'.

Only-if part: If (E, C) is a constrained extension of (D, W), then there is a maximum D' C_D, E = Th(W U Conseq(D')), C = Th(W U Conseq(D') U Justif(D')),

J.P. Delgrande et al. / Artificial Intelligence 70 (1994) 167-237

223

and C is consistent. Since E ~- a, a E T h ( W U Conseq(D')), so

T h ( W U Conseq(D')) = Th((W U {a}) U Conseq(D')) . Since a E E

and EC_C, so a E C . So

T h ( W U Conseq(D') U Justif(D')) = Th((W U {a}) U Conseq(D') U Justif(D')), and so (E, C) is a constrained extension of (D, W U {a}). If part: We have that (E, C) is a constrained extension of (D, W U {a}), and need to show that (E, C) is a constrained extension of (D, W). Let D' be a subset of defaults such that (E, C) is a constrained extension where

E = T h ( W U {a} U Conseq(D')), C = T h ( W U {a} U Conseq(D') U Justif(D')). Consider

E' = T h ( W U Conseq(D')) , C' = T h ( W U Conseq(D') U Justif(D')) . Now E' and C' are consistent (since E and C are); assume that (E', C') is not a constrained extension of (D, W) (otherwise we are finished: E ' } - a by assumption, and so E = E' and C = C'). So (E', C') is not maximal with respect to the set of defaults; also clearly E' ~ a and C' ~ a (since again we would have E = E' and C = C'). Let D* be a subset of D such that D' U D* results in a constrained extension. That is (E*, C*) is a constrained extension, where:

E* = T h ( W U Conseq(D' U D*)) , C* = T h ( W U Conseq(D' U D*) U Justif(D' U D*)) . Now we also have that (1) C* = T h ( W U Conseq(D') U Conseq(D* ) U Justif(D') U Justif(D* )). (2) C* ~-a (by assumption, since (E*, C*) is a constrained extension of

(/9, w)). However, (E, C) is a constrained extension of (D, W U {a}); this means that for every 6 E D where 6 ~ D',

T h ( W U {a} U Conseq(D' U {6}) U Justif(D' U {6})) }- _1_ or

T h ( W U {a} U Conseq(D') U Conseq( {6 } ) U Justif(D') U Conseq({6})) }- ± .

224

J.P. Delgrandeet al. / Artificial Intelligence70 (1994) 167-2_t7

Hence

Th(W U Conseq(D') U Conseq( {6 }) U Justif(D') U Conseq( {6 })) ~- ~c~ for every 6 ~ D ' . But this contradicts (1) and (2) above. Hence (E, C) is a constrained extension of (D, W). [] Theorem 4.12. (1) [- applic(a /x /3 ) D applic(a) ^ applic(/3 ). (2) ~- applic(a v/3) =--applic(a) v applic(/3). (3) ~- applic(t~) ~ applic(-7-Ta). (4) [- -7applic(a ) D applic(-7a ). (5) ~- a D applic(a). (6) From ~ a D fl infer ~ applic(a) D applic(/3). (7) From ~ -la infer ~ -Tapplic(a). Proof. (1) If applic(a /x/3), then {a A /3} U S is consistent for some given (consistent) S. Hence {a} U S and {/3} U S are consistent. Hence applic(a) and

applic( /3 ). (2) applic(av/3) iff { a v / 3 } U S consistent iff applic(a) v applic(/3).

is consistent iff { a } U S

or { f l } U S are

(3) Trivial. (4) If -Tapplic(a) (with respect to some given S), then {a} U S is inconsistent. Consequently {Ta} U S is consistent, assuming that S is consistent. Consequently

applic(-1 a ). (5) If S ~- a, then it follows trivially that applic(a). (6) If ~ a D / 3 , then if { a } U S is consistent then {/3}US is consistent. Consequently ~- applic(a) D applic(/3 ). (7) If ~- -Ta, then {a} U S is inconsistent, and so -7applic(a). [] Theorem 4.13. (1) ~- applic(a) D applic(/3) iff ~ a D/3. (2) ~- applic(a) iff ~ T Oa. Proof. (1) S ~- a D/3 iff S ~- -7/3 D -Ta iff if S [- -7/3 then S ~- -Ta iff if {/3} U S is inconsistent then {a} U S is inconsistent iff if {a} U S is consistent then {/3} U S is consistent iff ~- applic(a) D applic(/3 ). (2) The proof proceeds by showing that for a proof in KT that there is a proof involving applic, and vice versa. We note that the following is adequate as an axiomatisation of KT [6]:

From ~ - ~ : T ~ infer ~ - ~ r T O a . From ~ w a D/3 infer ~-~:TOa D/3 .

J.P. Delgrandeet al. I Artificial Intelligence 70 (1994) 167-237

225

From this it follows that any proof in K T can be transformed to a proof using applic (by direct substitution of applic for O), and vice versa. []

B.3. Proofs of theorems in Section 5 Theorem 5.4. Let (D, W) be a default theory and E a justified extension of (D, W) with respect to J. If E U J is consistent, then (E, Th(E U J)) is a constrained extension of (D, W). Proof. Let E be a justified extension of (D, W) with respect to J. Define C = Th(E U J). We show that (E, C) is a constrained extension of (D, W). At first, let us observe the following properties. (1) By definition, W C E C C. (2) Also, by definition, E = Th(E) and C = Th(C). (3) For any ~ t ~ • D , if a • E and C U{/3}U{to}W_l_, then t o • E and /3 A to • C, since, by monotonicity, C U {/3} U {to} ~ 2 implies V'O • J U {/3}. E U {to} U {7/} W.I_ (since C = Th(E U J)). Then, by the minimality of F(C), we have 28 F~(C)C_ E and F2(C ) C_C. That is,

r ( c ) c 2 (E, c). It remains to be shown that (E, C) C_2 F(C). Since E is a justified extension of (D, W) with respect to J we have according to [13, Theorem 4.4] that (E, J ) = (tJ,= 0 E~, [J,=o J/) where E 0 = W and J0 = it, and for i/> 0

Ei+ 1 = Th(Ei) U to

to • D, a • Ei,

v,o • J u (/3}. E u (to} u ( n } / ± } , J i + l : Ji U

fl

a:fl to

E D , a EEi,

(/3}. Eu (to) u(,o}F±}. We will show that [._.J~=oEi C_I'l(C ) and [,_.Ji~=oJiC_lF2(C), in order to show that E C_F~(C) and C C_F2(C). We show by induction E i C_ FI(C ) and J~ C_F2(C ) for i ~> 0.

Base: Clearly, E 0 = W C FI(C ) and Jo = itC F2(C ). Step: Assume E i C FI(C ) and J~ C_F2(C ) and consider v E El+ 1 U Ji+l. (1) If o E Th(Ei) then, by the induction hypothesis and the fact that F~(C) is deductively closed, we obtain v E FI(C ). (2) If v E Ji then, by the induction hypothesis, also o • F2(C ). (3) Otherwise, v • {/3, to} for some default ~ • D such that a • E i and

28We refer to the components of F as F1 and F2, respectively.

226

J.P. Delgrande et al. / Artificial Intelligence 70 (1994) 167-237

Vr/E J U {/3}. E U {to} U {r/} ~r/&. By the induction hypothesis, a E F~(C). By assumption, E U J ~ Z . Since E is a justified extension of (D, W) with respect to J, aGE, and Vr/~EJU {/3}. EU{to} U{'r/}J;/± imply t o E E and /3 E J . Therefore, C U {/3} U {to}W± (since C = Th(EUJ)). From a ~FI(C ) and C U {/3} U {to} ~/L we conclude, by Definition 3.1, that to C ~ ( C ) and /3 ^ to EF2(C ). Since F2(C ) is deductively closed the last membership implies/3 ~ F2(C ). Clearly, both cases for v are covered. Accordingly, Ei+ ~ C_~ ( C ) and Ji+l C_~(C). Therefore, we have shown that Uf-o Ei C_FI(C ) and U~=o J~ c-F2(C). Since U~=o E~ = E and Uf_0 Ji = J, that is E _CF~(C) and J C_F2(C ). Since F~(C) _CF2(C ) we have E U J C_72(C). So, since 72(C) is deductively closed, C _CF2(C ). Hence,

(E, C) c_2r(c).

[]

Proposition B.5. Let (E, C) be a constrained extension of (D, W). Then, (1) (E, Justif(GO((E, C), D ))) is a justified extension of (GD((E, C), D), E),

and (2) GD((E, C), D) = GD((E, Justif(GD((E, C), D))), GD((E, C), D)). Proof. (1) Obvious. (2) By definition,

GD((E, Justif(GD((E, C), D ))), GD((E, C), O )) C GD((E, C), D ) . Assume,

GD((E, C), D ) ~ GD((E, Justif(GD((E, C), D ))), GD((E, C), D )) . Then, there is a default ~

~ GD((E, C), D) but

a:/3

-----~--~_ GD((E, Justif(GO((E, C), D ))), GD((E, C), D )). By Definition B.1, ~ :~ E GD((E, C), D) implies a E E and C U {/3} U {to} [ J ± . That is, according to Theorem B.2, oJ

Th(E U gustif(GD((E, C), D))) U {/3} U {to} ~,z±. By monotonicity, we obtain

V~7E Justif(GD((E, C), D)) U {/3}. E U {w} U {~} • & . This and a ~ E imply

-a-:0.)E/ 3

GD((E, gustif(GD((E, C), D ))), GD((E, C), D)),

a contradiction. Theorem 5.6.

[]

Let (D, W) be a default theory and (E, C) be a constrained

J.P. Delgrande et al. / Artificial Intelligence 70 (1994) 167-237

227

extension of (D, W). Then, there is a justified extension (E', J') of (D, W) such that E C E ' and C C T h ( E ' U J ' ) . Proof. Let (E, C) be a constrained extension of (D, W). By Proposition B.5, (E, Justif(GD((E, C), D))) is a justified extension of (GD((E, C), D), E). By semi-monotonicity, there is a justified extension (E', J') of (D, E) such that E C E',

(B.4)

Justif(GD((E, C), D )) C J' .

(B.5)

As a consequence, C C_Th(E' U J') since C -- Th(E U Justif(GD((E, C), D ))). It remains to be shown that (E', J ' ) is a justified extension of (D, W). Since (E', J ' ) is a justified extension of (D, E), E' and J' are the smallest sets of sentences such that (1) E C E ' . (2) E ' = Th(E'). (3) For any ~,~ ~ D , if a G E ' and V r l E J ' U {/3}. E' U {to} U {rl} ~_L, then t o E E ' and/3 E J'. Also the following conditions hold. (1) Clearly, W C_E', since W C_E and E C_E'. (2) By definition, E ' = Th(E'). (3) By definition, for any ~,~ E D, if ~ E E' and Vr/E J' U {/3). E' U {to} U {~7} J/_L, then to E E' and/3 E J'. Then, by the minimality of F(E', J'), we have 29 FI(E', J') C E' and F2(E', J') C J'. That is, F(E', J') C 2 (E, C). It remains to be shown that (E',J')C_2F(E',J'). Since E' is a justified extension of (D, E) with respect to J' we have according to [13, Theorem 4.4] that (E', J ' ) = (Ui=0 El, U i = 0 Ji) where E 0 = E and J0 = ~, and for i i> 0 oc

oe

Ei+ 1 = T h ( E i ) U

to

to

E

D, a E g i ,

v,o eJ' u {/3}. E' u (to} u ( n } l / ± } , L+,=Lu

/3

to vn

J' u (/3}. e ' u ( t o } u ( n } V ± } .

We will show that Ui~-0 E i C FI(E' , J') and U /=0 ~ Ji c_ F2(E', J'), in order to show that E' C_F~(E', J') and J' C_Fz(E' , J'). We show by induction E i C_FI(E' , J') and Ji C Fz(E' , J') for i i> 0.

Base: Clearly, Jo = ~ C_F2(E' , J'). By definition, E 0 = E. That is, E 0 = Th(W U

29 We refer to the components of F as F x and F2, respectively.

228

J.P. Delgrandeet al. / Artificial Intelligence70 (1994) 167-237

Conseq(GD((E, C), D)). By definition, W _CFI(E' , J'). By Proposition B.5, GD((E, C), D) = GD((E, Justif(GD((E, C), D ))), GD((E, C), D )) . Also

GD((E, Justif(GD((E, C), D))), GD((E, C), D)) C GD((E', J'), D ) , by semi-monotonicity. Therefore, we have for each ~

E GD((E, C), D)

V n~Eg' U {/3}. E' U {w} U {n} J / ± •

(B.6)

Since (E, C) is a constrained extension there exists according to Theorem B.3 an enumeration (6i)i~ t of GD((E, C), D) such that

W U Conseq({6 o. . . . ,6i x}) ~- Prereq(6i)

for i E l .

We show that Conseq(6i) E I'~(E', J') for i E I.

Base: By definition, W ~ Prereq(6o). Since W C_FI(E',J' ) and the fact that FI(E', J') is deductively closed, we have Prereq(6o)~F~(E', J'). This and (B.6) imply by Definition 5.2 that Conseq(6o) E F~(E', J'). Step: Assume Conseq( {6o, . . . , 6i} ) C_F~(E', J' ). By definition, W

U

Conseq({6o,...

,

6i} ) ~- Prereq(6i+l).

Since W U Conseq({6o,..., 6i) ) C_I](E', J') by the induction hypothesis, and the fact that F~(E', J') is deductively closed, we have Prereq(6i+l)E F~(E', J'). This and (B.6) implies by Definition 5.2 that Conseq(6i+l)E~(E', J'). We have shown that Conseq(6i)C FI(E', J') for i @I; hence,

Conseq(GD((E, C), D )) ~ FI(E', J') . From W C FI(E' , J') and Conseq(GD((E, C), D)) C_F1(E' , J') we conclude that Th(W U Conseq(GD((E, C), D))) C_F~(E', J'), since F~(E', J') is deductively closed. Consequently, E 0 C_F2(E', J'). Step: Assume E i C_F~(E', J') and Ji C_Fz(E', J') and consider v EEi+ I U J~+~. (1) If v E Th(Ei) then, by the induction hypothesis and the fact that F~(E', J') is deductively closed, we obtain v ~ F~(E', J'). (2) If v E Ji then, by the induction hypothesis, also v E F2(E', J'). (3) Otherwise, v E {/3, w} for some default ~:~ E D such that a E E i and W / E J ' U { / 3 } . E ' U { ~ o } U { T / } ~ ± . By the induction hypothesis, a E FI(E', J'). From a E FI(E', J') and W/E J' U {/3}. E' U {w} U {T/} ~ ± we conclude, by Definition 5.2, that w @ F~(E', J') and/3 E F2(E', J'). Clearly, both cases for v are covered. Accordingly, Ei. ~ C_F~(E', J') and J~+~C_F2(E' , J'). ¢o

Therefore, we have shown that Uf=0 Ei C_Fl(E', J') and Uf-o J~ c_ F2(E', J'). Since Ui=o E i = E' and Ui=o Ji = J', that is E' C ~ ( E ' , J') and J' C £2(E', J'). Hence, (E', J') C 2 F(E', J'). [] ov

oc

J.P. Delgrande et al. / Artificial Intelligence 70 (1994) 167-237

229

Theorem 5.10. Let (D, W) be a default theory and (D, ~V) the assertional default

theory, where °/4/"= {(a, 0) I a E W}. Then, if (E, C) is a constrained extension of (D, W) then there is an assertional extension g of (D, °W) such that E = Form(g) and C = Th(Form(g) U Supp(g)); and, conversely if g is an assertional extension of (D, °W) then (Form(g), Th(Form(g ) U Supp(g ))) is a constrained extension of (D, W). Proof.

Only-if part: Assume (E, C) is a constrained extension of (D, W). Let 4 be a set of assertions induced by GD((E, C), D), i.e. 4 = L_J~0 ~/ such that 40 -{(a,l~) [a EW}, and for each i~>0 ~/+1

=

r'h(~/)

U

{(to, Supp(a) U {/3} U {to}) a "/3 ~ GD((E, C), D) to

(a, Supp(a)) ~ ~ , C U {/3} U {to} ~,"±}. Observe that due to our construction of 4 we have E = Form(4) and also that C = Th(Form(4) U Supp(4)), and furthermore 4 = Th(4). It remains to be shown that 4 is an assertional extension of (D, {(a, 0) [a E W}). According to [3, Proposition 1] we have that 4 is an assertional extension iff 4 = U /=o °~ gi such that g0 = {(a,O) i a E W } , a n d f o r e a c h i ~ > 0 gi+l

=

Th(gi) U (to, Supp(a) O {/3} U {to} )

c~:/3 @ D , to

(a, Supp(a) ) E gi, Form(4) O Supp(4) 0 {/3} O {to} V ± } . We have the following two cases.

Case 1: U~=o g/C_ 4. Therefore, we show by induction that gi C_4 for i ~>0. Base: Clearly, we have g0 C_4 since g0 = 40. Step: Let gi C_4. Regard, (to, Supp(to)) E g~+l. (a) If (to, Supp(to))E Th(g,), then by the induction hypothesis and the fact that 4 = ~r"h(4) we also have (to, Supp(to)) E 4. (b) Otherwise, there is a default ~:~ED~ where (a, Supp(a))Eg~ and Form(4) USupp(4) U {/3} U {to} ~ ± . By the induction hypothesis (a, Supp(a)) E 4 . By compactness there exists a k such that (a, Supp(a) ) E 4 k. By definition, C = Th(Form(4) U Supp(4)). Hence, Form(4) U Supp(4) U {fl} U {to) ~ ± implies C U {fl} U {/3} ~ ± . From (a, Supp(a)) ~ 4 k and C U {/3} U {/3} ~/± we conclude (to, Supp(a) U {/3} U (to)) E 4k+ 1. By monotonicity, (to, Supp(a)U {/3} U (to}) @ 4. From the two cases, we obtain gi+l C_4.

J.P. Delgrande et al. / Artificial Intelligence 70 (1994) 167-237

230

Case 2: ~ c_ U;=0 ~/- Therefore, we show by induction that ~ _CU~-0 ~i for i~>0.

Base: Clearly, we have ~-o C_U?=o ~i since ~*o = ~o. Step: Let o~ c_U?=o ~. Regard (w, Supp(to)) E ~+~. (a) If (to, Supp(to))E T"h(o~) then by the induction hypothesis and the fact that U~_0 ~i = T'h(U~_o ~i) we also have (to, Supp(to)) E U ~ 0 Y~. (b) Otherwise, there is a default ~o~ :p E GD((E, C), D) where (a, Supp(a)) E ,~ and C U {/3} U {to} ~ ± . By the induction hypothesis (a, Supp(a)) @ U~=0 ~i. By compactness there exists a k such that (a, Supp(a)) C ~k. By definition, C = Th(Form(o~) U Supp(~)). Hence, C U {/3} U {to} ~ 2 implies Forrn(o~) U Supp(~) U {/3} U {to} ~ 2 . From (a, Supp(a)) ~ ~k and Forrn(~) U Supp(o~) U {/3} U {to} ~ / ± we conclude (to, Supp(a) U {fl} U {to}) ~ ~+~. By monotonicity, (to, Supp(a) U {/3} U {to}) ~ ~. If part: Assume ~g is an assertional extension of (D, {(a, ~) [a ~ W}). We show that (Form(~), Th(Forrn(~)USupp(~))) is a constrained extension of (D, W). Let us abbreviate Form(~) by E ~ and Th(Forrn(~)U Supp(~)) by C ~. Notice, that E ~ is also deductively closed, i.e. E ~ = Th(E ~) since To = ~r'~(To). According to Theorem 3.4 we have that (E ~, C ~) is a constrained extension iff ( E~, C~) = (U,-o Ei, U~=oCi) such that E o = W and C O= W, and for i/> 0 Ei+l =

Th(E,) u {to a :t o3

Ci+l = Th(C,) U

/ /3 A to

~

to

D, ~ ~ e,, C ~ u ~

D, a ~ Ei,

{/3} u {to} l / ±

c

According to [3, Proposition 1] we have that { ( a , O ) [ a ~ W } and, for each i~>O ~i+1 =

U {/3} U {to}

=Ui:o ~

},



}.

such that ~o =

Th(~i) U (w, Supp(a) U {/3} U {w} )

to

~D,

] ( a, Supp( a ) ) E %, Form(To) O Supp( TO) O {/3} U { t o } / 2 ~ . We have to consider the following two cases.

Case 1:U~=0 Ei C_E ~, U~20 (7,. c_ C ~. We show by induction that E~ C_E ~ and C~C_C ~ for i~>0. Base: (a) Clearly, E 0 = W = Form({(a, 0) [a E W}) = E o C E ~. (b) Also, C o = W C _ E ~C_C ~. Step: Let E~ C_ E ~ and C~ C_ C ~. Regard ~ @ El+ 1 U Ci+ 1. (a) If ~ E Th(E~) then, by the induction hypothesis and the fact that E ~ is deductively closed, we obtain ~ E E ~. (b) Analogously, if ~ ~ Th(C~) then ~ ~ C .

231

J.P. Delgrande et al. / ArtificialIntelligence 70 (1994) 167-237

(c) Otherwise, 7 ~ {/3,/3 ^ to) such that there is a default ~ :t~ E D where a E E i and C ~ U {/3} U {to} ~ ± . By the induction hypothesis a E E ". That is, a E Form(g). Hence, (a, Supp(o-)) ~ g. Also, C ~ U {/3} U {/3} ~z± implies Form(g)USupp(Ig)U{/3} U{to}~/±. Since g is an assertional extension ( a, Supp(a)) E g and Form(g) U Supp(g ) U {/3} U {to} V ± imply (to, Supp(a) U {/3} U {to}) E g. Thus, to ~ E ~ and {to,/3} C_C ~. Since C ~ is deductively closed the latter implies to ^ 13 @ C ". From the three cases, we obtain Ei+ I C_E ~ and Ci+l C_C ~. to

Case 2: E" c U?=0 El, C ~ _C[..J~=oC/. We show by induction that Form(gi)C_ U~=0 Ei and Form(~i) U Supp(~i) C_U~=0 Ci for i/> 0. Base: (a) Clearly, Form(go) = W = E o C_U~=o E~. (b) Similarly, Th(Form(go) U Supp(~o) ) = Th(W) C C a C_U~=o Cr Step: Let Form(~i) C_U~=o Ei and Form(gi) U Supp(gi) C U~=0 Ci. Consider (7, SupP(7) ) E ~,+1. (a) If (7, Supp(7)) E T'h(K) then, by the induction hypothesis and the fact that U~=o E~ and LJ~=oC, are deductively closed, we obtain Form(7 ) U~=o E~ and Supp(7 ) C. U ~ 0 C~. (b) Otherwise, there exists a default ~:~ E D where (a, Supp(a))E g~ and Form(g) tO Supp(g) tO {/3} U {to} V_~. Then, Supp(7) = Supp(a) U {/3} U {to}. By the induction hypothesis a E U~=o Ei. Then, by compactness there exists a k such that a E E k. Clearly, Form(g) U Supp(g) U {/3} U {to} ~_L implies C U { / 3 } U { t o } ~ ± . From a E E k and C~U{/3}U{to}~/'± we conclude to ~ Ek+ ~ and /3 ^ to ~ Ck+ ~. Also, by the induction hypothesis, Supp(a) C_U~=o (7,. Therefore, by monotonicity and the fact that Supp(7) = Supp(a) U {/3} U {to}, we have Form(7 ) E U~=0 E~ and Supp(rl) C

UT=oci. [] Theorem 5.12 (Correctness and completeness). Let (D, 74/') be an assertional

default theory and let (1I, F1) be a pair of sets of models. If g is an assertional extension of (D, 74/'), then (MOD(Form(g)), MOD(Form(g ) U Supp(g ))) is a >o-maximal element above (MOD(Form(~W)), MOD(Form(7,ff) U Supp(~W))). If (H, P1) is a >o-maximal element above (MOD(Form(?d/')), MOD(Form(74r) U Supp(°W))), then there is an assertional extension g of (D, 747) such that H = { 7r ] rr ~ Form(g)} and F1 = {11111 ~ Form(g) U Supp( g ) } . Proof. In what follows, we abbreviate (MOD(Form(~W)), MOD(Form(7,ff)U Supp(?4~))) by (H~, [/~¢). First, we need the following definition. Definition B.6. Let (D, 74/') be an assertional default theory. Given a possibly infinite sequence of defaults A = (60, 61,62,. " .) in D, also denoted (6i)i~ 1 where

232

J.P. Delgrande et al. / Artificial Intelligence 70 (1994) 167-237

I is the index set for zl, we define a sequence of focused models structures ((H~i,/~/~))i~, as follows: (//o, no) =

where 6 i -

O~i : J~i

w~

The case where (D, ~V) is not well-based is easily dealt with, so that we prove below the theorem for Form(~V)U Supp(~V) being satisfiable. Proof of Theorem 5.12.

Correctness: Let (D, 74/) be a well-based assertional default theory. Assume is an assertional extension of (D, off.). Then, by assumption, also F o r m ( ~ ) U S u p p ( ~ ) is consistent. Then according to [22, Theorem A.2.4], there exists an enumeration (6 i )iet of the set of generating defaults GD(~, D ) such that for i E 1 Form(~W) U Conseq( {6 o . . . . .

6i_ 1}) ~- Prereq(6i) .

(B.7)

Let ((Hi, [/~))iel be a sequence of focused models structures obtained from the enumeration (6~)~e ~ according to Definition B.2. We will show that (H, [I) coincides with (-'liet (///, f/i) and is >o-maximal above (/Iv, [/~). Since ~ is an assertional extension, we have according to [22, Theorem A.2.2] that Form(~') = Th(Form(°W) U Conseq(GD(~, D ) ) ) , Supp(~ ) = Supp(~V) U Conseq(GD(~, D )) U Justif(GD(~, D )) .

Then, since (//,/~/) = ( M O D ( F o r m ( ~ ) ) , M O D ( F o r m ( ~ ) U S u p p ( ~ ) ) ) we have that (//,/~/) = A i c i (//i,/~,). Firstly, let us show that (//~+1,/~i+l) >~i (//g'/~/~) for i E I . • Since ~ _C//W and H~. ~ F o r m ( ~ V ) , then by definition of //i we have Hi ~ Form(~W) U Conseq(6~_ 1) for i E I. Now,//~+~ C_//~ for i E I implies that 1I~ ~ Form(~V) U Conseq({6 o . . . . . 6~_~}). By (B.7), it follows that //~ Prereq(6i) for i ~ I. • Let us assume that (//i+l, -f/~+~) >~i (//,, f/i) fails for some k C I. By definition of ((//~, ~ ) ) ~ c , and the fact that we have just proven that H i ~ Prereq(6i) for i E I , this means that f/k~--n(Wk/X/3k) for 6k =~k:~k Let us abbreviate ok Form(~V) U C o n s e q ( { 6 o , . . . , 6k ~}) U Justif({6 o, . . . , 6k 1}) by Supp k. By definition, //k = MOD(Suppk) • Then, Supp k ~ --n(o~k ^ ilk)" That is, Supp k U {Wk} U {/3k} ~- I . By monotonicity, Form(~) U S u p p ( ~ ) U {t0k} U (/3k} ~ ±, contradictory to the fact that 6k ~ GD(~, D ). Therefore, (//i+~, ~+1)>~(//~, ~ ) for i E I . As a consequence, [']~ei(//i, ~ ) >GD(~,D)(HW.,[I~). That is, (H, [/) >D (//~r,/I~)Secondly, assume (H, [/) is not >D-maximal. Then, there exists a default

J.P. Delgrandeet al. / ArtificialIntelligence70 (1994) 167-237

233

~:~ 6_D\GD(~,D) such that II[¢a and [/F-m(w ^/3). 30 First, since H~= Form(~) we have Form(~)b a. Second, since /~/= MOD(Form(~)USupp(~)), we also have Form(~)USupp(~)~--n(o)^/3). Of course, Form(~)~a and Form(~) tOSupp(~)[/--n(oo ^/3) implies ~ 6_ GD(~, D), a contradiction. Completeness: Let (H, [/) be a >D-maximal element above (//~, [/~) for (D, °W). Then, we have that (/7, [ / ) = (O~=0//~, O,=0 ~ ) where ((Hi, ~ ) ) , e l is a sequence of focused models structures defined for some (6i)ie ~ according to Definition B.2 such that (~+1, ~ + ~ ) > ~ (//~, ~ ) for i6_I. Let O% be a set of assertions induced by (6~)~e~, i.e. 0%= Uiei ~ such that o%0= °W and for each i/> 0 A+I =

u

{w, Supp(~)U{/3}U{o)})

~i=0l:/309 '

(a'Supp(a))6_A} "

Observe that due to our construction of O%we h a v e / / = {~r I ~r ~ Form(o%)} and we have //i = {~r l Tr

fI = {lr IIr ~ Form(o%) U Supp(o%)}. In particular, Form(A)} and ~ = {~" let ~ Form(A ) U Supp(o~)}.

It remains to be shown that 0% is an assertional extension of (D, {(a, 0) I a 6_ W}). According to [3, Proposition 1] we have that o% is an assertional extension iff O%= U~=o ~ such that ~o = { ( a , O ) [ a 6_W}, and for each i~>O

~'~+~ = Th(~) U (oJ, Supp(a) U {/3} U {o)}) a :~o/ 3 E D ' (a, Supp(a)) E ~, Form(o%)U Supp(o%) U {/3} U {w} V_L}. We have to regard the following two cases.

Case 1:Ui=0 ~i _c o%. Therefore, we show by induction that ~i C_O%for i i> 0. Base: Clearly, we have ~0 C_O%since ~0 = °%0. Step: Let ~ C_O%. Regar.,d (oJ, Supp(w)) E ~i+1. (a) If (o9, Supp(w))E Th(~g~), then by the induction hypothesis and the fact that O%= ~r"h(o%)we also have (o9, Supp(o)))E O%. (b) Otherwise, there is a default ~ E D where (a, Supp(a))6_~gi and Form(o%) U Supp(o%) tO {/3} U {~o}~ ± . By the induction hypothesis (a, Supp(a))6_ O%. That is, H ~ a. Also, by compactness there exists a k such that (a, Supp(a))6_ O%~.By definition, FI = MOD(Form(O%) to Supp(o%)). Thus, since/) is non-empty, Form(o%) to Supp(o%) U {/3} to {oJ} ~Tz± implies/~/~7z±~(/3 ^ w). By the >o-maximali30For readability, we abbreviate 37r E[L "n"~ 13,', w by [/~/3 ^ o).

234

J.P. Delgrande et al. / Artificial Intelligence 70 (1994) 167-237

ty of (/7, [/) we conclude H ~ w and /'/~/3 ^ w. Hence, there is a j/> k such that (/7j+l, ~+1) >~, (//j, ~ ) . Therefore, by definition, (~o, Supp(a) U {/3} U {oJ}) E ~+i- By ~aonotonicity, (o), Supp(a) U {/3} U {w}) E 0%. From the two cases, we obtain ~i+ 1 C_0%.

Case 2 : 0% C_U~-0 ~i. Therefore, we show by induction that o~ C U~=0 ~gi for i~>0. Base: Clearly, we have 0%0C_Uf-0 ~i since 0%0= ~0. Step: Let ~ c_U~_0 ~i. Consider (w, Supp(o))) E o~+~. (a) If (w, Supp(o)))E T-"h(o~) then by the induction hypothesis and the fact that U;-o ~i i T'h(U~=0 ~i) we also have {w, Supp(w)) E U~=o ~. (b) Otherwise, there is a default 6i = ~:¢ where (a, Supp(a))EWI. Then, by definition (Hi+l,/'/i+1) >a, (/~i,/~i)' As a consequence, /)i+, b/3 ^ w. By the induction hypothesis (a, Supp(a)) C Ui~0 ~. Then, by compactness there exists a k such that (a, Supp(a)) E fgk. By definition, [/i+l ~/3/x oJ. Since [ / = A~_0 ~ , we have /'/~/3/~ w. Also, by definition, [ / ~ Form(o%) U Supp(0%). Hence, Form(o%)U Supp(~)U {/3} U {w} V ± since [/ is nonempty. From (a, Supp(a)) E ~ and Form(0%)USupp(0%)U {/3} U {o9} V ± we conclude (w, Supp(a) U {/3} U {w}) E ~k+l' By monotonicity, (w, Supp(a) U {/3} U {w}) E ~. [] to

B.3.1. Theorist proofs The following proofs use a corollary of Theorem 4.9. Corollary B.7. Let (D, W) be a prerequisite-free default of PfConDL. If D' C D such that Th(W U Conseq(D') U Justif(D'))V ± then there exists a constrained extension (E, C) of (D, W) where

E D Th(W U Conseq(D')), C D Th(W U Conseq(D') U Justif(D')). The subset relation (_D) occurs because D' is not necessarily maximal (cf. Theorem 4.9). Theorem 5.16. A formula tr b explainable from (0%,A, ~ ) iff there exists" a

constrained extension (E, C) of ( { ~ Proof. Let (D, W) =

I/3

113 ~ A}, 0%) such that tr E E.

~ a ) , 0%)

If part: Assume ~ is in E for some constrained extension (E, C) of (D, W). Let D' be a subset of D such that: E = Th(W U Conseq(D')), C = Th(W U Conseq(D') U Justif(D')), and C~TZ_l_ (Theorem 4.9). The set S = Conseq(D') is a scenario that explains obecause:

J.P. Delgrandeet al. / ArtificialIntelligence 70 (1994) 167-237

235

(1) cr ~ E and E = Th(W U Conseq(D')) = Th(~ U S) so F U S ~- or, (2) Justif(O')={/3^c~l ~~- E D ' } so T h ( ~ U S U ~ ) C C and therefore U S U ~ is consistent since C V_L. Only-if part: Assume ~r is explainable from (~, A, ~ ) and let S be a scenario that explains or. Let D' = { ~ - ~ ~ I/3 E S}. Now,

C = Th(W U Justif(D') U Conseq(D')) = Th(~ u s u

~).

So C ~ _ L because ~ U S U q~ is consistent. Therefore, there is an extension (E, C) where Th(W U Conseq(D')) C E (Corollary B.7) so Th(~ U S) C_E and therefore o- E E. [] The main difficulty in proving Theorems 5.17 and 5.18 is dealing with the naming of defaults. We start with two lemmas that show that after we have fixed a scenario, naming becomes irrelevant. For a Theorist scenario .~ US with constraints q¢, we use .~.., S.n, and q¢.. to be the corresponding sets of formulas without names (nn is read "no names"). That is, ~ . . = {f I f E ~ and f does not contain any default n a m e } ,

Snn = { / 3 1 n ~ S } , q~.. = {~3, D--q/3 ]n~ E S and ~3, D--qn E c¢} . Note that c¢.n contains only the constraints that are relevant to the defaults in S... Lemma B.8. ~ U S ~ o" iff o~n, U ann F

0", for any formula o- that does not contain

any names. Proof.

If part: ~ U S ~ o ' . Assume that o%.. U S..~,Zo ". Then there must be an interpretation, call it In., that is a model for -~.n U S.. U {-no-}. We can extend In. to handle names by letting, ~True, I(ne) = [False, I(a)=I(a),

if Inn ~/3 , if Inn ~ / 3 ,

i f a is n o t a n a m e .

So I ~ ~ because I ~ n~ D/3 for all names n~. Also, I ~ S and, therefore I is a model for ~ U S U {Tcr}. But this contradicts ~ U S ~- tr, Only-if part: .'~.n U S.. ~ or. Assume .~ U S V ~ , and let I be an interpretation that models ~ U S U {-~o-}. Now, ~ . . C F, so I models ~ . . . Also, ~ U S ~-/3, for each /3 ~ S.. so I models S.n. Therefore, I models ft.. U S.. U {-nor}, but this contradicts . ~ U Sn. ~- tr. [] Lemma B.9. o%U S U ~ is consistent iff ~ , , U Snn U ~,, is consistent.

236

J.P. Delgrande et al. / Artificial Intelligence 70 (1994) 167-237

Proof. If part: ~ U S U ~ is consistent. Note that Th(~,~ U S~, U ~,,,,) C Th(~ U S U ~ ) because o~.~C_off; for all / 3 E S n . , ~ U S t - / 3 ; and for all -3yD-3/3 E~,,n, U S U C~- -33' D 7/3. Therefore ~ U S~. U q~.n is consistent. Only-if part: o ~ U S.~ U qg.. is consistent. Let I.~ be an interpretation that models o ~ U S~. U q¢... Extend I.. to an interpretation I that handles names as in the proof for L e m m a B.8. It is easy to see that I models ~-U S. There are two subsets of C to consider: (1) Consider -37 C -~n~ where nt~ E S. I models n~ and 7 so I models -33, D -3nl3. (2) Consider -33, D-3nt~ where nt~fi(_S. Then I models -3nt~ and so I models -37 D -3nt3. Therefore, I models o%U S U ~ and so ~- U S U ~ is consistent. [] Theorem 5.17. A formula ~r is explainable from (if, A, q~) iff there exists a constrained extension (E, C) of Tr(~, A, ;g) such that o" E E. Proof. Let (D, W) = Tr(~, A, q~).

If-part: o" is in E for some constrained extension (E, C) of (D, W). Let D ' be a subset of D such that E = Th(W U Conseq(D')), C = Th(W U Conseq(D') U Justif(D')), and C ~ / L ( T h e o r e m 4.9). Now W = ff, n and Conseq(D')= Sn,, s o ~nn U ant` ~-(3I". Further, ofnn U Sn, U ~n, is consistent because: for all -37 D -3/3 @ q¢,~, /3 ^ 7 E Justif(D'), Justif(D') C C, and C is consistent. Finally, applying Lemmas B.8 and B.9 gives f f U S ~- o- and ~ U S U q¢ is consistent, so o%U S is a scenario that explains ~r. Only-if part: Let ~ U S be a scenario that explains ~r. First, ~,n U Sn, ~- o" and ff~, U Sn~ U ~ , , is consistent (Lemmas B.8 and B.9). Let D' =

E D and /3 E Sn. and -37 D -3/3 E ~ n

}

-

Now "~n = W and Sn. = Conseq(D'), so o- E Th(W U Conseq(D')). Also,

Th(W U Conseq(D') U Justif(D')) C Th(~nn U S~ U ~n~) because for all /3 D 7 E Justif(D'), ~ g S~n U ~nn ~ [3 A 3`. Hence Th(W U Conseq(D') U Justif(D')) is consistent. Therefore, there must be an extension (E, C) where E D_Th(W U Conseq(D')) (Corollary B.7) and where or ~ E. [] Theorem 5.18. Let (D, W) be a semi-normal prerequisite-free default theory. A formula o" is in E for some constrained extension (E, C) of (D, W) iff ~r is explainable from T r - l ( D , W ) .

J.P. Delgrande et al. / Artificial Intelligence 70 (1994) 167-237

237

Proof. The proof of this theorem is same as for T h e o r e m 5.17 except that we let

(~,A,C~)= Tr-I(D,W).

[]

References [1] P. Besnard, An Introduction to Default Logic, Symbolic Computation (Springer-Verlag, Berlin, 1989). [2] P. Besnard and T. Schaub, Possible worlds semantics for default logics, Fund. Inf. 20 (1-2) (1993). [3] G. Brewka, Cumulative default logic: in defense of nonmonotonic inference rules, Artif. lntell. 50 (2) (1991) 183-205. [4] G. Brewka, Nonmonotonic Reasoning: Logical Foundations of Commonsense (Cambridge University Press, Cambridge, England, 1991). [5] G. Brewka, D. Makinson and K. Schlechta, Cumulative inference relations for JTMS and logic programming, in: J. Dix, K.P. Jantke and P.H. Schmitt, eds., Proceedings of the First International Workshop on Nonmonotonic and Inductive Logic, Lecture Notes in Artificial Intelligence (Springer-Verlag, Berlin, 1991) 1-12. [6] B.F. Chellas, Modal Logic (Cambridge University Press, Cambridge, England, 1980). [7] J.P. Delgrande and W.K. Jackson, Default logic revisited, in: Proceedings KR'91, Cambridge, MA (1991) 118-127. [8] J. Dix, On cumulativity in default logic and its relation to Poole's approach, in: B. Neumann, ed., ECAI-92, Vienna, Austria (Wiley, New York, 1992) 289-293. [9] D.W. Etherington, Formalising nonmonotonic reasoning systems, Artif. lntell. 31 (1987) 41-85. [10] D.W. Etherington, Reasoning with Incomplete Information, Research Notes in Artificial Intelligence (Morgan Kaufmann, Los Altos, CA, 1988). [11] D.W. Etherington and R. Reiter, On inheritance hierarchies with exceptions, in: Proceedings AAAI-83, Washington, DC (1983) 104-108. [12] W. Lukaszewicz, Two results on default logic, in: Proceedings IJCAI-85, Los Angeles, CA (1985) 459-461. [13] W. Lukaszewicz, Considerations on default logic: an alternative approach, Comput. lntell. 4 (1) (1988) 1-16. [14] W. Lukaszewicz, Non-monotonic Reasoning: Formalizations of Commonsense Reasoning, Ellis Horwood Series in Artificial Intelligence (Ellis Horwood, Chichester, England, 1990). [15] D. Makinson, General theory of cumulative inference, in: M. Reinfrank, ed., Proceedings of the Second International Workshop on Non-Monotonic Reasoning, Lecture Notes in Artificial Intelligence 346 (Springer-Verlag, Berlin, 1989) 1-18. [16] D.L. Poole, A logical framework for default reasoning, Artif. lntell. 36 (1) (1988) 27-48. [17] D.L. Poole, What the lottery paradox tells us about default reasoning (extended abstract), in: Proceedings KR'89, Toronto, Ont. (1989). [18] R. Reiter, A logic for default reasoning, Artif. lnteU. 13 (1980) 81-132. [19] R. Reiter and G. Criscuolo, On interacting defaults, in: Proceedings IJCAI-81, Vancouver, BC (1981) 270-276. [20] T. Schaub, Assertional default theories: a semantical view, in: Proceedings KR'91, Cambridge, MA (1991) 496-506. [21] T. Schaub, On commitment and cumulativity in default logics, in: R. Kruse, ed., Proceedings of European Conference on Symbolic and Quantitative Approaches to Uncertainty (Springer-Verlag, Berlin, 1991) 304-309. [22] T. Schaub, Considerations on default logic, Ph.D. Thesis, Technische Hochschule Darmstadt, FB Informatik, FG Intellektik, Darmstadt, Germany (1992). [23] T. Schaub, On constrained default theories, in: B. Neumann, ed., Proceedings ECAI-92,Vienna, Austria (Wiley, New York, 1992).

Lihat lebih banyak...

Comentarios

Copyright © 2017 DATOSPDF Inc.