Learning Assumption-based Argumentation Frameworks


Abstract

We propose a novel approach to logic-based learning which generates assumption-based argumentation (ABA) frameworks from positive and negative examples, using a given background knowledge. These ABA frameworks can be mapped onto logic programs with negation as failure that may be non-stratified. Whereas existing argumentation-based methods learn exceptions to general rules by interpreting the exceptions as rebuttal attacks, our approach interprets them as undercutting attacks. Our learning technique is based on the use of transformation rules, including some adapted from logic program transformation rules (notably folding) as well as others, such as rote learning and assumption introduction. We present a general strategy that applies the transformation rules in a suitable order to learn stratified frameworks, and we also propose a variant that handles the non-stratified case. We illustrate the benefits of our approach with a number of examples, which show that, on the one hand, we are able to easily reconstruct other logic-based learning approaches and, on the other hand, we can solve in a very simple and natural way problems that seem hard for existing techniques.

1 Introduction↩︎

Various forms of computational argumentation, notably abstract argumentation [1] and assumption-based argumentation (ABA) [2]–[4] and [5], have been advocated as unifying frameworks for various forms of non-monotonic reasoning, including logic programming. However, with very few early exceptions (notably [6]), computational argumentation has received little attention as a basis to support logic-based learning. Here, we fill this gap by proposing a novel approach to logic-based learning which generates ABA frameworks from examples, using background knowledge also in ABA format.

In general, ABA frameworks amount to a deductive system (consisting of a logical language to specify sentences and of a set of inference rules built from these sentences), a set of assumptions (which are sentences with a special status) and their contraries (which are sentences). Rules are then used to construct arguments, which support claims by means of assumptions, and attacks between assumptions/arguments are achieved by means of arguments supporting contraries of assumptions. The semantics of ABA frameworks is then defined in terms of various notions of (dialectically) acceptable extensions, amounting to sets of assumptions/arguments that can be deemed to defend themselves against attacks, in some sense (see Sect. 2). Thus, the goal of ABA learning amounts to identifying rules, assumptions and their contraries (adding to those in the given background knowledge) that cover argumentatively the positive examples and none of the negative examples according to some chosen semantics (see Sect. 3).

Logic programs can be seen as restricted instances of ABA frameworks (where assumptions are negation as failure literals, and the contrary of \(not \, p\) is \(p\)) [2]. Moreover, ABA frameworks of a special kind (namely when all sentences are atomic and no assumption occurs in the head of rules) can be mapped to logic programs. These mappings hold under a range of semantics for ABA and for logic programming [2]. Thus, learning ABA frameworks can be seen as learning logic programs. However, given that ABA frameworks admit several other non-monotonic reasoning instances [2], learning ABA frameworks can also in principle provide a way to learn other forms of non-monotonic logics.

Our approach to learning ABA frameworks relies upon transformation rules applied to ABA frameworks (to obtain new ones, see Sect. 4): some of these transformation rules are adapted from logic programming, such as folding [7], whereas others are new, such as rote learning and assumption introduction. We define the transformation rules and outline a general strategy for applying them (see Sect. 6) in order to learn ABA frameworks that achieve the goal of ABA learning. We also illustrate, with the use of examples (see Sects. 5–7), the benefits of ABA learning in comparison with existing methods. We focus on similarly inspired methods, notably [6] and [8]: the former learns argumentation frameworks of a different kind (where arguments are built from clauses and preferences over them), whereas the latter learns logic programs with negation as failure, but can be seen as being guided by argumentative principles (while not being explicitly so). Both these methods learn exceptions to general rules by interpreting the exceptions as rebuttal attacks (namely attacks between arguments with conflicting claims), and result in stratified logic programs [9]. Instead, our approach interprets exceptions as undercutting attacks (from an argument to the support of another) while also accommodating rebuttal attacks, and may give rise to logic programs with non-stratified negation as failure.

1.0.0.1 Related work.

Several approaches to learning logic programs with negation as failure exist, in addition to the aforementioned [6], [8]. These include methods for learning exceptions to default rules [10], Sakama’s inverse entailment [11] and brave induction [12] in non-monotonic logic programs, and induction from answer sets [13]. Answer sets are also the target of other logic-learning methods, notably as in ILASP [14]. These approaches learn non-stratified logic programs, rather than ABA frameworks. ABA can be seen as performing abductive reasoning (in that assumptions are hypotheses open for debate). Other approaches combine abductive and inductive learning [15] but, again, they do not learn ABA frameworks. Some approaches learn abductive logic programs [16], which rely upon assumptions, like ABA frameworks. Overall, providing a formal comparison between our method and existing methods learning instances of ABA frameworks is left for future work. Finally, several existing works use argumentation to perform learning (e.g., see overview in [17]) while others learn argumentation frameworks (e.g., recently, [18]): we differ from the former in that our method learns ABA frameworks rather than using argumentation to aid learning, and from the latter in that we learn a different type of argumentation frameworks.

2 Background: Assumption-based argumentation (ABA)↩︎

An Assumption-based argumentation (ABA) framework (as originally proposed in [2], but presented here following recent accounts in [3], [4] and [5]) is a tuple \(\langle {\cal L}, \, {\cal R}, \, {\cal A},\, \overline{ \vrule height 5pt depth 3.5pt width 0pt \hskip 0.5em\kern 0.4em}\rangle\) where

  • \(\langle \mathcal{L}, \mathcal{R}\rangle\) is a deductive system, where \(\mathcal{L}\) is a language and \(\mathcal{R}\) is a set of (inference) rules of the form \(s_0 \leftarrow s_1,\ldots, s_m\) (\(m \ge 0, s_i \in \mathcal{L}\), for \(1\leq i \leq m\));

  • \(\mathcal{A}\) \(\subseteq\) \(\mathcal{L}\) is a (non-empty) set of assumptions;

  • \(\overline{ \vrule height 5pt depth 3.5pt width 0pt \hskip 0.5em\kern 0.4em}\) is a total mapping from \(\mathcal{A}\) into \(\mathcal{L}\), where \(\overline{a}\) is the contrary of \(a\), for \(a \in \mathcal{A}\).

Given a rule \(s_0 \gets s_1, \ldots, s_m\), \(s_0\) is the head and \(s_1,\ldots, s_m\) is the body; if \(m=0\) then the body is said to be empty (represented as \(s_0 \gets\) or \(s_0 \gets true\)). If assumptions are not heads of rules then the ABA framework is called flat.

Elements of \(\mathcal{L}\) can be any sentences, but in this paper we will focus on (flat) ABA frameworks where \(\mathcal{L}\) is a set of ground atoms. However, in the spirit of logic programming, we will use schemata for rules, assumptions and contraries, using variables to represent compactly all instances over some underlying universe.

Example 1. Let \(\mathcal{L}=\{p(X), q(X), r(X), a(X), b(X) | X\in \{1,2\}\}\). Then, an ABA framework may be \(\langle {\cal L}, \, {\cal R}, \, {\cal A},\, \overline{ \vrule height 5pt depth 3.5pt width 0pt \hskip 0.5em\kern 0.4em}\rangle\) with the other components defined as follows:

\(\mathcal{R}=\{p(X) \gets a(X), \; \;\; q(X)\gets b(X), \;\; \; r(1) \gets true\}\);

\(\mathcal{A}=\{a(X), b(X) \}\); \(\overline{a(X)}=q(X)\), \(\overline{b(X)}=r(X)\).

In ABA, arguments are deductions of claims using rules and supported by assumptions, and attacks are directed at the assumptions in the support of arguments. More formally, following [3]–[5]:

  • An argument for (the claim) \(s\in \mathcal{L}\) supported by \(A\subseteq \mathcal{A}\) and \(R\subseteq \mathcal{R}\) (denoted \(A \vdash_{R} s\)) is a finite tree with nodes labelled by sentences in \(\mathcal{L}\) or by \(true\), the root labelled by \(s\), leaves either \(true\) or assumptions in \(A\), and non-leaves \(s'\) with, as children, the elements of the body of some rule in \(R\) with head \(s'\).

  • Argument \(A_1 \vdash_{R_1} s_1\) attacks argument \(A_2 \vdash_{R_2} s_2\) iff \(s_1=\overline{a}\) for some \(a\in A_2\).

Example 2 (Ex. 1 cntd). Let rules in \(\mathcal{R}\) be named \(\rho_1(X)\), \(\rho_2(X)\), \(\rho_3\), respectively. Then, arguments include \(\{b(1)\} \vdash_{\{\rho_2(1)\}} q(1)\), \(\emptyset \vdash_{\{\rho_3\}} r(1)\), and \(\{a(1)\} \vdash_{\emptyset} a(1)\), where, for example, \(\emptyset \vdash_{\{\rho_3\}} r(1)\) attacks \(\{b(1)\} \vdash_{\{\rho_2(1)\}} q(1)\).

Given an argument \(\alpha=A \vdash_{R} s\), we refer to the single rule \(\rho \in R\) whose body sentences label the children of \(s\) in \(\alpha\) as the top rule of the argument.

Note that, in ABA, attacks against arguments are carried out by undercutting some of their supporting premises (the assumptions). Other forms of attack can also be represented in ABA, including attacks by rebuttal, where arguments disagree on their claims (see [4] for details).

Given a flat ABA framework \(\langle {\cal L}, \, {\cal R}, \, {\cal A},\, \overline{ \vrule height 5pt depth 3.5pt width 0pt \hskip 0.5em\kern 0.4em}\rangle\), let \(Args\) be the set of all arguments and \(Att=\{(\alpha,\beta) \in Args\times Args\mid \alpha\) attacks \(\beta\}\), for ‘arguments’ and ‘attacks’ defined as above. Then \((Args,Att)\) is an Abstract Argumentation (AA) framework [1] and standard semantics for the latter can be used to determine the semantics of the ABA framework [4]. For example, \(S\subseteq Args\) is a stable extension iff (i) \(\nexists \alpha,\beta \!\in \!S\) such that \((\alpha,\beta) \!\in \!Att\) (i.e. \(S\) is conflict-free) and (ii) \(\forall \beta \!\in \!Args\setminus S, \exists \alpha \!\in \!S\) such that \((\alpha,\beta) \!\in \!Att\) (i.e. \(S\) “attacks” all arguments it does not contain).

In the following example and in the remainder we omit the supporting rules of arguments (these rules play no role when determining extensions, as attacks only depend on supporting assumptions and claims). Thus, e.g., argument \(\{b(1)\} \vdash_{\{\rho_2(1)\}} q(1)\) in Ex. 2 is written simply as \(\{b(1)\} \vdash q(1)\).

Example 3 (Ex. 2 cntd). The AA framework is \((Args,Att)\) with \(Args\) as given earlier and \(Att=\{ (\emptyset \vdash \!r(1),\{b(1)\} \vdash q(1)), (\emptyset \vdash r(1),\{b(1)\} \vdash b(1))\} \cup \{(\{b(X)\} \vdash q(X),\{a(X)\} \vdash p(X)), (\{b(X)\} \vdash q(X),\{a(X)\} \vdash a(X)) | X \in \{1,2\}\}\). Then, \(\{\emptyset \vdash r(1), \{a(1)\} \vdash p(1), \{a(1)\} \vdash a(1), \{b(2)\} \vdash q(2), \{b(2)\} \vdash b(2)\}\) is the only stable extension. This captures the “model” \(\{r(1), p(1), a(1), q(2), b(2)\}\), amounting to all the claims of accepted arguments in the stable extension [4].
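The stable-extension computation of this example can be reproduced by brute force. The sketch below is purely illustrative (atoms such as \(r(1)\) are written as strings like `r1`): it enumerates the nine arguments of Ex. 2 as (support, claim) pairs, derives the attacks from the contraries, and tests every set of arguments for stability.

```python
from itertools import chain, combinations

# Arguments of Ex. 2 as pairs (support, claim); e.g. (frozenset({"a1"}), "p1")
# encodes the argument {a(1)} |- p(1).
args = [(frozenset(), "r1")]
for x in ("1", "2"):
    args += [(frozenset({"a" + x}), "p" + x), (frozenset({"a" + x}), "a" + x),
             (frozenset({"b" + x}), "q" + x), (frozenset({"b" + x}), "b" + x)]

# Contraries of Ex. 1: contrary of a(X) is q(X), contrary of b(X) is r(X).
contrary = {"a1": "q1", "a2": "q2", "b1": "r1", "b2": "r2"}

def attacks(a1, a2):
    # a1 attacks a2 iff a1's claim is the contrary of an assumption supporting a2
    return any(contrary[asm] == a1[1] for asm in a2[0])

def is_stable(s):
    conflict_free = not any(attacks(a, b) for a in s for b in s)
    attacks_out = all(any(attacks(a, b) for a in s) for b in args if b not in s)
    return conflict_free and attacks_out

subsets = chain.from_iterable(combinations(args, k) for k in range(len(args) + 1))
stable = [set(s) for s in subsets if is_stable(set(s))]
claims = [{claim for (_, claim) in s} for s in stable]
print(claims)  # the unique stable extension's claims: r1, p1, a1, q2, b2
```

The enumeration confirms that the extension of Ex. 3 is the only stable one.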

Note that, in general, ABA frameworks may admit several stable extensions, in which case one can reason credulously or sceptically, by focusing respectively on (claims of arguments accepted in) a single or all stable extensions.

Example 4. Given \(\langle {\cal L}, \, {\cal R}, \, {\cal A},\, \overline{ \vrule height 5pt depth 3.5pt width 0pt \hskip 0.5em\kern 0.4em}\rangle\) as in Ex. 1, consider \(\langle {\cal L}, \, {\cal R}', \, {\cal A},\, \overline{ \vrule height 5pt depth 3.5pt width 0pt \hskip 0.5em\kern 0.4em}\rangle\), where \(\mathcal{R}'=\mathcal{R}\cup\{r(2) \gets a(2)\}\). Now, there are two stable extensions: the same as in Ex. 3 as well as \(\{\emptyset \vdash r(1), \{a(1)\} \vdash p(1), \{a(1)\} \vdash a(1), \{a(2)\} \vdash p(2), \{a(2)\} \vdash a(2), \{a(2)\} \vdash r(2)\}\), with accepted claims \(\{r(1), a(1),\) \(p(1), a(2), p(2), r(2)\}\). Then, e.g., \(r(1)\) is sceptically accepted whereas \(r(2)\) is only credulously accepted.

Finally, note that ABA frameworks of the form considered here can be naturally mapped onto logic programs where assumptions are replaced by the negation as failure of their contraries, such that, in particular, stable extensions of the former correspond to stable models [19] of the latter.

Example 5 (Ex. 4 cntd). For instance, \(\langle {\cal L}, \, {\cal R}', \, {\cal A},\, \overline{ \vrule height 5pt depth 3.5pt width 0pt \hskip 0.5em\kern 0.4em}\rangle\) from Ex. 4 can be mapped onto the logic program

\(p(X)\gets not\,q(X), \quad q(X)\gets not \, r(X), \quad r(1) \gets, \quad \;\, r(2) \gets not \,q(2).\)

This admits two stable models, identical to the claims accepted in the two stable extensions for the ABA framework.
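This correspondence can be checked computationally. The following sketch (illustrative only; atom names like `p1` are shorthand for \(p(1)\)) grounds the program over \(\{1,2\}\) and enumerates its stable models via the Gelfond–Lifschitz reduct:

```python
from itertools import chain, combinations

# Ground rules as (head, positive body, negative body); "not q1" is ["q1"].
rules = [("p1", [], ["q1"]), ("p2", [], ["q2"]),
         ("q1", [], ["r1"]), ("q2", [], ["r2"]),
         ("r1", [], []),     ("r2", [], ["q2"])]
atoms = sorted({h for (h, _, _) in rules})

def least_model(definite):
    # least Herbrand model of a definite program, by naive fixpoint iteration
    m, changed = set(), True
    while changed:
        changed = False
        for h, pos in definite:
            if set(pos) <= m and h not in m:
                m.add(h)
                changed = True
    return m

def is_stable_model(m):
    # Gelfond-Lifschitz: drop rules whose negative body intersects m, then
    # drop the remaining negative literals and compare the least model with m
    reduct = [(h, pos) for (h, pos, neg) in rules if not (set(neg) & m)]
    return least_model(reduct) == m

candidates = chain.from_iterable(combinations(atoms, k)
                                 for k in range(len(atoms) + 1))
stable_models = [set(c) for c in candidates if is_stable_model(set(c))]
```

The enumeration finds exactly \(\{r(1), p(1), q(2)\}\) and \(\{r(1), p(1), p(2), r(2)\}\), i.e. the accepted claims of the two stable extensions of Ex. 4 with the assumption atoms dropped.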

Thus, learning ABA frameworks of the form we consider amounts to a form of inductive logic programming (ILP).

In the remainder, without loss of generality we leave the language component of all ABA frameworks implicit, and use, e.g., \(\langle\mathcal{R},\mathcal{A},\overline{ \vrule height 5pt depth 3.5pt width 0pt \hskip 0.5em\kern 0.4em}\rangle\) to stand for \(\langle \mathcal{L},\mathcal{R},\mathcal{A},\overline{ \vrule height 5pt depth 3.5pt width 0pt \hskip 0.5em\kern 0.4em}\rangle\) where \(\mathcal{L}\) is the set of all sentences in \(\mathcal{R}\), \(\mathcal{A}\) and in the range of \(\overline{ \vrule height 5pt depth 3.5pt width 0pt \hskip 0.5em\kern 0.4em}\). We will also use \(\langle\mathcal{R},\mathcal{A},\overline{ \vrule height 5pt depth 3.5pt width 0pt \hskip 0.5em\kern 0.4em}\rangle\models s\) to indicate that \(s\in \mathcal{L}\) is the claim of an argument accepted in all or some stable extensions (depending on the choice of reasoning) of \(\langle\mathcal{R},\mathcal{A},\overline{ \vrule height 5pt depth 3.5pt width 0pt \hskip 0.5em\kern 0.4em}\rangle\).

3 The ABA learning problem↩︎

We define the core ingredients of ABA learning, in a similar vein as in ILP.

Definition 1. A positive/negative example (for a predicate \(p\) with arity \(n \geq 0\)) is a ground atom of the form \(p(c)\), for \(c\) a tuple of \(n\) constants.

The predicate amounts to the concept that inductive ABA aims to learn (in the form of an ABA framework whose language component includes the examples amongst its sentences). Positive and negative examples, albeit having the same syntax, play very different roles in ABA learning, as expected. As in standard ILP, ABA learning is driven by a (non-empty) set of positive examples and a (possibly empty) set of negative examples, with the help of some background knowledge compactly providing information about the examples, as follows:

Definition 2. The background knowledge is an ABA framework.

An illustration of possible background knowledge \(\langle\mathcal{R},\mathcal{A},\overline{ \vrule height 5pt depth 3.5pt width 0pt \hskip 0.5em\kern 0.4em}\rangle\) and examples for a simple inductive ABA task is shown in Fig. 1 (adapted from [6]).


Figure 1: A simple illustration of background knowledge as an ABA framework \(\langle\mathcal{R},\mathcal{A},\overline{ \vrule height 5pt depth 3.5pt width 0pt \hskip 0.5em\kern 0.4em}\rangle\) (showing only \(\mathcal{R}\), with \(\mathcal{A}\) consisting of a single bogus assumption), and of positive/negative examples for ABA learning of concept \(\mathit{flies}/1\).

Then, in ABA learning, we establish whether examples are (not) covered argumentatively:

Definition 3. Given an ABA framework \(\langle\mathcal{R},\mathcal{A},\overline{ \vrule height 5pt depth 3.5pt width 0pt \hskip 0.5em\kern 0.4em}\rangle\), an example \(e\) is covered by \(\langle\mathcal{R},\mathcal{A},\overline{ \vrule height 5pt depth 3.5pt width 0pt \hskip 0.5em\kern 0.4em}\rangle\) iff \(\langle\mathcal{R},\mathcal{A},\overline{ \vrule height 5pt depth 3.5pt width 0pt \hskip 0.5em\kern 0.4em}\rangle\models e\) and is not covered by \(\langle\mathcal{R},\mathcal{A},\overline{ \vrule height 5pt depth 3.5pt width 0pt \hskip 0.5em\kern 0.4em}\rangle\) iff \(\langle\mathcal{R},\mathcal{A},\overline{ \vrule height 5pt depth 3.5pt width 0pt \hskip 0.5em\kern 0.4em}\rangle\not\models e\).

In the case of Fig. 1, none of the examples is covered by the background knowledge (no matter what \(\models\) is, as no arguments for any of the examples exist).

Definition 4. Given background knowledge \(\langle\mathcal{R},\mathcal{A},\overline{ \vrule height 5pt depth 3.5pt width 0pt \hskip 0.5em\kern 0.4em}\rangle\), positive examples \(\mathcal{E}^+\) and negative examples \(\mathcal{E}^-\), with \(\mathcal{E}^+\cap \mathcal{E}^- =\emptyset\), the goal of ABA learning is to construct \(\langle \mathcal{R}',\mathcal{A}',\overline{ \vrule height 5pt depth 3.5pt width 0pt \hskip 0.5em\kern 0.4em}'\rangle\) with \(\mathcal{R}\subseteq \mathcal{R}'\), \(\mathcal{A}\subseteq \mathcal{A}'\) and, for all \(\alpha \in \mathcal{A}\), \(\overline{\alpha}'=\overline{\alpha}\), such that:

(Existence) \(\langle \mathcal{R}',\mathcal{A}',\overline{ \vrule height 5pt depth 3.5pt width 0pt \hskip 0.5em\kern 0.4em}'\rangle\) admits at least one extension under the chosen ABA semantics,

(Completeness) for all \(e \in \mathcal{E}^+\), \(\langle \mathcal{R}',\mathcal{A}',\overline{ \vrule height 5pt depth 3.5pt width 0pt \hskip 0.5em\kern 0.4em}'\rangle\models e\), and

(Consistency) for all \(e \in \mathcal{E}^-\), \(\langle \mathcal{R}',\mathcal{A}',\overline{ \vrule height 5pt depth 3.5pt width 0pt \hskip 0.5em\kern 0.4em}'\rangle\not\models e\).

This definition is parametric wrt the choice of semantics, and also whether this is used credulously or sceptically. For choices of semantics other than stable extensions (e.g., grounded or preferred extensions [5]) the Existence requirement may be trivially satisfied (as these extensions always exist).


Figure 2: An ABA framework \(\langle \mathcal{R}',\mathcal{A}',\overline{ \vrule height 5pt depth 3.5pt width 0pt \hskip 0.5em\kern 0.4em}'\rangle\) achieving the goal of ABA learning, given the background knowledge and examples in Fig. 1.

The ABA framework \(\langle \mathcal{R}',\mathcal{A}',\overline{ \vrule height 5pt depth 3.5pt width 0pt \hskip 0.5em\kern 0.4em}'\rangle\) in Fig. 2 admits a single stable extension and covers all positive examples and none of the negative examples in Fig. 1 (for either the sceptical or credulous version of \(\models\)). Thus, this \(\langle \mathcal{R}',\mathcal{A}',\overline{ \vrule height 5pt depth 3.5pt width 0pt \hskip 0.5em\kern 0.4em}'\rangle\) achieves the goal of ABA learning.

But how can this ABA framework be obtained? In the remainder we will outline our method for achieving the goal of ABA learning.

4 Transformation rules for ABA frameworks↩︎

Our method for solving the ABA learning problem makes use of transformation rules, some of which are borrowed from the field of logic program transformation [7], [20]–[22], while others are specific to ABA frameworks.

We assume that, for any ABA framework, the language \(\mathcal{L}\) contains all equalities between elements of the underlying universe and \(\mathcal{R}\) includes all rules \(a=a\leftarrow\), where \(a\) is an element of the universe. We also assume that all rules in \(\mathcal{R}\) (except for the implicit equality rules) are normalised, i.e. they are written as:

\(p_0({X}_0) \leftarrow eq_1, \ldots, eq_k, p_1({X}_1), \ldots, p_n({X}_n)\)

where \(p_i({X}_i)\), for \(0\leq i \leq n\), is an atom (whose ground instances are) in \(\mathcal{L}\) and \(eq_i\), for \(1\leq i \leq k\), is an equality whose variables occur in the tuples \({X}_0,{X}_1, \ldots, {X}_n\). The body of a normalised rule can be freely rewritten by using the standard axioms of equality, e.g., \(Y_1=a, Y_2=a\) can be rewritten as \(Y_1=Y_2, Y_2=a\).

Given an ABA framework \(\langle\mathcal{R},\mathcal{A},\overline{ \vrule height 5pt depth 3.5pt width 0pt \hskip 0.5em\kern 0.4em}\rangle\), a transformation rule constructs a new ABA framework \(\langle \mathcal{R}',\mathcal{A}',\overline{ \vrule height 5pt depth 3.5pt width 0pt \hskip 0.5em\kern 0.4em}'\rangle\). We use transformation rules R1–R5 below, where we mention explicitly only the components of the framework that are modified.

R1. Rote Learning. Given atom \(p({t})\), add \(\rho : p({X}) \leftarrow {X}={t}\) to \(\mathcal{R}\). Thus, \(\mathcal{R}'\!=\!\mathcal{R}\cup \{\rho\}\).

In our learning strategy, Rote Learning is typically applied with \(p({t})\) a positive example. The added rule allows us to obtain an argument for \(p({t})\).

Example 6. Let us consider the learning problem of Fig. 1. By four applications of R1, from the positive examples \(\mathit{flies}(a),\mathit{flies}(b),\mathit{flies}(e),\mathit{flies}(f)\), we get

\(\mathcal{R}_1 = \mathcal{R}\cup \{\mathit{flies}(X) \gets X=a, \mathit{flies}(X) \gets X=b,\)

               \(\mathit{flies}(X) \gets X=e, \mathit{flies}(X) \gets X=f\}\).

R2. Equality Removal. Replace a rule \(\rho_1: H \leftarrow eq_1, Eqs, B\) in \(\mathcal{R}\), where \(eq_1\) is an equality, \(Eqs\) is a (possibly empty) set of equalities, and \(B\) is a (possibly empty) set of atoms, by rule \(\rho_2: H \leftarrow Eqs, B.\) Thus, \(\mathcal{R}'=(\mathcal{R}\setminus \{\rho_1\}) \cup \{\rho_2\}\).

Equality Removal allows us to generalise a rule by deleting an equality in its body. We will see an example of application of this rule in Sect. 5.

R3. Folding. Given rules

\(\rho_1\): \(H \leftarrow Eqs_1, B_1, B_2\) and \(\rho_2\): \(K \leftarrow Eqs_1, Eqs_2, B_1\)

in \(\mathcal{R}\), where \(\textit{vars}(\textit{Eqs}_2) \cap \textit{vars}((H,B_2))=\emptyset\), replace \(\rho_1\) by \(\rho_3\): \(H \leftarrow Eqs_2, K, B_2.\) Thus, \(\mathcal{R}'=(\mathcal{R}\setminus \{\rho_1\})\cup\{\rho_3\}\).

Folding is a form of inverse resolution [23], used for generalising a rule by replacing some atoms in its body with their ‘consequence’ using a rule in \(\mathcal{R}\).

Example 7. By R3 using rule \(bird(X) \gets X=a\) in \(\mathcal{R}\), rule \(\mathit{flies}(X) \gets X=a\) in \(\mathcal{R}_1\) (see Ex. 6) is replaced by \(\mathit{flies}(X) \gets bird(X)\), giving \(\mathcal{R}_2 \!=\! \mathcal{R}\cup \{\mathit{flies}(X) \gets bird(X),\; \mathit{flies}(X) \gets X=b, \mathit{flies}(X) \gets X=e,\; \mathit{flies}(X) \gets X=f\}.\)
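The syntactic nature of Folding can be illustrated with a small sketch. It covers only the restricted instance of R3 used in Ex. 7 (where \(Eqs_2\) and \(B_2\) are empty); rules are encoded as pairs of a head and a set of body literals, and all encodings below are our own illustrative choices:

```python
# Restricted Folding (R3 with Eqs2 = B2 = empty): if a rule K <- Body is
# available, any rule H <- Body with the same body folds into H <- K.
def fold(target, using):
    h, body_t = target
    k, body_u = using
    if body_t == body_u:           # Eqs1, B1 match; Eqs2 and B2 are empty
        return (h, frozenset({k}))
    return target                  # folding not applicable, keep the rule

rote = ("flies(X)", frozenset({"X=a"}))   # learnt by Rote Learning (R1)
bird = ("bird(X)", frozenset({"X=a"}))    # normalised background rule
folded = fold(rote, bird)
print(folded)  # ('flies(X)', frozenset({'bird(X)'}))
```

Applied to the rote-learnt rule of Ex. 6, this returns exactly the generalised rule \(\mathit{flies}(X) \gets bird(X)\) of Ex. 7.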

R4. Subsumption. Suppose that \(\mathcal{R}\) contains rules

\(\rho_1: H \leftarrow Eqs_1, B_1\) and \(\rho_2: H \leftarrow Eqs_2, B_2\)

such that, for every ground instance \(H' \leftarrow Eqs'_2, B'_2\) of \(\rho_2\), there exists a ground instance \(H' \leftarrow Eqs'_1, B'_1\) of \(\rho_1\) (with the same head) such that, if for each atom \(B\) in \(Eqs'_2, B'_2\) there is an argument \(S \vdash_{R} B\), then for each atom \(A\) in \(Eqs'_1, B'_1\) there is an argument \(S' \vdash_{R'} A\) with \(S'\subseteq S\). Thus, \(\rho_2\) is subsumed by \(\rho_1\) and can be deleted from \(\mathcal{R}\), and hence \(\mathcal{R}'=\mathcal{R}\setminus \{\rho_2\}\).

In particular, R4 generalises the usual logic program transformation that allows the removal of a rule which is \(\theta\)-subsumed by another one [7].

Example 8. Rule \(\mathit{flies}(X) \gets X\!=\!b\) in \(\mathcal{R}_2\) (from Ex. 7) is subsumed by \(\mathit{flies}(X) \gets bird(X)\). Indeed, \(b\) is the only ground \(x\) such that \(\emptyset \vdash_{\{x=b \gets \}} x\!=\!b\), and when \(x\) is \(b\), we have \(\emptyset \vdash_{\{bird(x) \gets x=b,\, x=b \gets \}} bird(x)\). Similarly, rules \(\mathit{flies}(X) \gets X\!=\!e\) and \(\mathit{flies}(X) \gets X\!=\!f\) are subsumed by \(\mathit{flies}(X) \gets bird(X).\) Thus, we derive \(\mathcal{R}_3 = \mathcal{R}\cup \{\mathit{flies}(X) \gets bird(X)\}\).

R5. Assumption Introduction. Replace \(\rho_1: H \leftarrow Eqs, B\) in \(\mathcal{R}\) by \(\rho_2: H \leftarrow Eqs, B, \alpha({X})\) where \(X\) is a tuple of variables taken from \(\textit{vars}(H) \cup \textit{vars}(B)\) and \(\alpha({X})\) is a (possibly new) assumption with contrary \(\chi({X})\). Thus, \(\mathcal{R}'=(\mathcal{R}\setminus \{\rho_1\})\cup\{\rho_2\}\), \(\mathcal{A}'= \mathcal{A} \cup \{\alpha({X})\}\), \(\overline{\alpha({X})}'=\chi({X})\), and \(\overline{\beta}'=\overline{\beta}\) for all \(\beta \in \mathcal{A}\).

Example 9. By applying R5, we get \(\mathcal{R}_4 \!\!=\!\! \mathcal{R}\!\cup \{\mathit{flies}(X) \!\!\gets\!\! bird(X),\alpha_1(X)\}\), with \(\overline{\alpha_1(X)}'\!\!=\!c-\alpha_1(X)\). Now, we complete this example to show, informally, that other approaches based on learning exceptions [6], [8] can be recast in our setting. For the newly introduced \(c-\alpha_1(X)\), we have positive examples \(\mathcal{E}^+_1 =\{c-\alpha_1(c),\;c-\alpha_1(d)\}\), corresponding to the exceptions to \(\mathit{flies}(X) \gets bird(X)\), and negative examples \(\mathcal{E}^-_1 = \{c-\alpha_1(a),c-\alpha_1(b),c-\alpha_1(e),c-\alpha_1(f)\}\), corresponding to the positive examples for \(\mathit{flies}(X)\) to which \(\mathit{flies}(X) \gets bird(X)\) correctly applies. Then, starting from these examples, we can learn rules for \(c-\alpha_1(X)\) similarly to what we have done for \(\mathit{flies}\). By Rote Learning, we get \(\mathcal{R}_5 = \mathcal{R}\cup \{\mathit{flies}(X) \gets bird(X),\alpha_1(X),\;c-\alpha_1(X) \gets X=c,\;c-\alpha_1(X)\gets X=d\}\). The rules for \(c-\alpha_1(X)\) can be viewed as rebuttal attacks against \(\mathit{flies}(X) \gets bird(X)\), as \(c\) and \(d\) are birds that do not fly. By Folding and Subsumption, we generalise the rules for \(c-\alpha_1(X)\) and obtain \(\mathcal{R}_6 = \mathcal{R}\cup \{\mathit{flies}(X) \gets bird(X),\alpha_1(X),\;c-\alpha_1(X) \gets penguin(X)\}\). The new rule also covers the negative examples \(c-\alpha_1(e)\) and \(c-\alpha_1(f)\), and hence we introduce a new assumption \(\alpha_2(X)\) with contrary \(c-\alpha_2(X)\) having \(c-\alpha_2(e)\) and \(c-\alpha_2(f)\) as positive examples. By one more sequence of applications of Rote Learning, Folding and Subsumption, we get exactly the ABA framework in Fig. 2.
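The behaviour of the learnt framework can be checked under the logic-programming reading of Sect. 2. In the sketch below, the background facts (birds \(a\)–\(f\), penguins \(c\)–\(f\), superpenguins \(e\) and \(f\)) and the final rule \(c-\alpha_2(X) \gets superpenguin(X)\) are our own assumptions, reconstructed from Exs. 6–9; the resulting program is stratified, so it can be evaluated stratum by stratum:

```python
# Assumed background facts (our reconstruction, not stated explicitly above).
superpenguin = {"e", "f"}
penguin = {"c", "d"} | superpenguin   # superpenguins are penguins
bird = {"a", "b"} | penguin           # penguins are birds

# Stratified evaluation: an assumption alpha_i(x) holds iff its contrary
# c_alpha_i(x) is not derivable.
c_alpha2 = set(superpenguin)                          # c-alpha_2(X) <- superpenguin(X)
c_alpha1 = {x for x in penguin if x not in c_alpha2}  # c-alpha_1(X) <- penguin(X), alpha_2(X)
flies = {x for x in bird if x not in c_alpha1}        # flies(X) <- bird(X), alpha_1(X)

pos = {"a", "b", "e", "f"}   # positive examples for flies
neg = {"c", "d"}             # negative examples for flies
```

Under these assumptions, `flies` evaluates to \(\{a, b, e, f\}\): all positive examples and no negative example are covered.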

In the field of logic program transformation, the goal is to derive new programs that are equivalent, wrt a semantics of choice, to the initial program. Various results guarantee that, under suitable conditions, transformation rules such as Unfolding and Folding indeed enforce equivalence (e.g., wrt the least Herbrand model of definite programs [7] and the stable model semantics of normal logic programs [20]). These results have also been generalised by using argumentative notions [22].

In the context of ABA learning, however, program equivalence is not a desirable objective, as we look for sets of rules that generalise the positive examples and avoid covering the negative examples. In particular, as shown by the examples of this and the next section, the generalisation of positive examples is done by applying the Folding, Equality Removal, and Subsumption transformations, while the exceptions due to negative examples are learned by Assumption Introduction and Rote Learning. A general strategy for applying the transformation rules will be presented in Sect. 6. The requirements that R2–R4 should preserve positive examples, while R1 and R5 should be able to avoid negative examples, are formalised by the following two properties, which we require to hold. These properties are sufficient to show that, when the learning process terminates, it indeed yields a solution of the ABA learning problem given as input.

Property 1. Let \(\langle \mathcal{R}',\mathcal{A}',\overline{ \vrule height 5pt depth 3.5pt width 0pt \hskip 0.5em\kern 0.4em}'\rangle\) be obtained by applying any of Folding, Equality Removal and Subsumption to \(\langle\mathcal{R},\mathcal{A},\overline{ \vrule height 5pt depth 3.5pt width 0pt \hskip 0.5em\kern 0.4em}\rangle\) to modify rules with \(p\) in the head. If \(\langle\mathcal{R},\mathcal{A},\overline{ \vrule height 5pt depth 3.5pt width 0pt \hskip 0.5em\kern 0.4em}\rangle\models p(t)\) then \(\langle \mathcal{R}',\mathcal{A}',\overline{ \vrule height 5pt depth 3.5pt width 0pt \hskip 0.5em\kern 0.4em}'\rangle\models p(t)\).

Property 2. Let \(p(t_1), p(t_2)\) be atoms such that \(p(t_1) \neq p(t_2)\) and \(\langle\mathcal{R},\mathcal{A},\overline{ \vrule height 5pt depth 3.5pt width 0pt \hskip 0.5em\kern 0.4em}\rangle\) be such that \(\langle\mathcal{R},\mathcal{A},\overline{ \vrule height 5pt depth 3.5pt width 0pt \hskip 0.5em\kern 0.4em}\rangle\models p(t_1)\) and \(\langle\mathcal{R},\mathcal{A},\overline{ \vrule height 5pt depth 3.5pt width 0pt \hskip 0.5em\kern 0.4em}\rangle\models p(t_2)\). Then there exists \(\langle \mathcal{R}',\mathcal{A}',\overline{ \vrule height 5pt depth 3.5pt width 0pt \hskip 0.5em\kern 0.4em}'\rangle\) obtained from \(\langle\mathcal{R},\mathcal{A},\overline{ \vrule height 5pt depth 3.5pt width 0pt \hskip 0.5em\kern 0.4em}\rangle\) by applying Assumption Introduction to modify rules with \(p\) in the head and then applying Rote Learning to add rules for the contraries of the assumption atoms, such that \(\langle \mathcal{R}',\mathcal{A}',\overline{ \vrule height 5pt depth 3.5pt width 0pt \hskip 0.5em\kern 0.4em}'\rangle\models p(t_1)\) and \(\langle \mathcal{R}',\mathcal{A}',\overline{ \vrule height 5pt depth 3.5pt width 0pt \hskip 0.5em\kern 0.4em}'\rangle\not\models p(t_2)\).

We leave to future work the identification of conditions that enforce Properties 1–2 wrt ABA semantics. However, note that, even though we cannot directly apply the results about logic program transformations to prove these properties, we can leverage the proof methodology and adapt some partial results, specifically the ones developed by using argumentative notions [22].

5 Learning by rebuttal and undercutting attacks↩︎

Some approaches [6], [8] propose learning techniques for non-monotonic logic programs based on the idea of learning default rules with exceptions. Default rules are learned by generalising positive examples, and exceptions to those rules are learned by generalising negative examples. This process can be iterated by learning exceptions to the exceptions. From an argumentative perspective, these approaches can be viewed as learning rules that attack each other by rebuttal, because their heads are ‘opposite’. We have illustrated through the \(\mathit{flies}\) example in Sect. 4 that our transformation rules can easily reconstruct learning by rebuttal. Through another example, in this section we show that our transformation rules can also learn undercutting attacks, which in contrast seem hard for other approaches that learn by rebuttal.

Suppose that a robot moves through locations \(1,\ldots,6\). Let atom \(step(X,Y)\) mean that the robot can take a step from location \(X\) to location \(Y\). Some locations are busy, and the robot cannot occupy them. Consider the background knowledge with \(\mathcal{R}\) as follows (and a single bogus assumption):

\(\mathcal{R} = \{ step(1,2)\leftarrow, step(1,3)\leftarrow, step(2,4)\leftarrow, step(2,5)\leftarrow,\)

\(step(4,6)\leftarrow, step(5,2)\leftarrow, busy(3)\leftarrow, busy(6)\leftarrow\}\).

We would like to learn the concept \(\textit{free}(X)\), meaning that the robot is free to proceed from \(X\) to a non-busy successor location. Let

\(\mathcal{E}^+= \{\textit{free}(1), \textit{free}(2), \textit{free}(5)\},\) \(\mathcal{E}^- = \{\textit{free}(3), \textit{free}(4), \textit{free}(6)\}.\)

be the positive and negative examples, respectively. Let us see how we can solve this ABA learning problem by our transformation rules. We start off by applying Rote Learning and introducing the rule

\(\textit{free}(X) \leftarrow X=1\)(1)     

and hence \(\mathcal{R}_1=\mathcal{R}\cup \{(1)\}\). Then, by Folding using the (normalised) rule \(step(X,Y)\leftarrow X=1, Y=2\) in \(\mathcal{R}\), we learn

\(\textit{free}(X) \leftarrow Y=2, step(X,Y)\)(2)     

and \(\mathcal{R}_2=\mathcal{R}\cup \{(2)\}\). By Equality Removal, we get

\(\textit{free}(X) \leftarrow step(X,Y)\)(3)     

and \(\mathcal{R}_3=\mathcal{R}\cup \{(3)\}\). Rule (3) covers all positive examples, but it also covers the negative example \(\textit{free}(4)\). To exclude it, we construct an undercutting attack on rule (3): we apply Assumption Introduction to obtain \(\alpha(X,Y)\) with contrary \(c-\alpha(X,Y)\), and we replace rule (3) by

\(\textit{free}(X) \leftarrow step(X,Y), \alpha(X,Y)\)(4)     

and \(\mathcal{R}_4=\mathcal{R}\cup \{(4)\}\). The assumption \(\alpha(X,Y)\) represents normal values of \((X,Y)\), for which it is legitimate to use default rule (4), while \(c-\alpha(X,Y)\) represents exceptions to rule (4). Then, we add positive and negative examples for \(c-\alpha(X,Y)\):

\(\mathcal{E}_1^+= \{c-\alpha(4,6)\}\), \(\mathcal{E}_1^- = \{c-\alpha(1,2), c-\alpha(2,4), c-\alpha(2,5), c-\alpha(5,2)\}.\)

By Rote Learning, we get \(\mathcal{R}_5=\mathcal{R}\cup \{(4),(5)\}\) with

\(c-\alpha(X,Y) \leftarrow X=4, Y=6\)(5)     

Finally, by Folding, using \(busy(Y) \gets Y=6\), and Equality Removal, we get

\(c-\alpha(X,Y) \leftarrow busy(Y)\)(6)     

which can be viewed as an undercutting attack on a premise of rule (4). The final learnt set of rules is \(\mathcal{R}\cup \{(4),(6)\}\), which indeed provides a very compact and general definition of the sought concept \(\textit{free}(X)\).
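Since ABA frameworks of this kind map onto normal logic programs (see Sect. 2), the learnt rules can be run directly. The following Python sketch (our encoding, not the paper's) reads the assumption \(\alpha(X,Y)\) as negation as failure on its contrary and checks that rules (4) and (6) cover exactly the positive examples:

```python
# Background facts of the robot example.
step = {(1, 2), (1, 3), (2, 4), (2, 5), (4, 6), (5, 2)}
busy = {3, 6}

# Learnt rules, read as a normal logic program:
#   free(X)      <- step(X,Y), not c_alpha(X,Y)    -- rule (4)
#   c_alpha(X,Y) <- busy(Y)                        -- rule (6)
def c_alpha(x, y):
    return y in busy

def free(x):
    return any(not c_alpha(x, y) for (x0, y) in step if x0 == x)

covered = {x for x in range(1, 7) if free(x)}
print(covered)  # {1, 2, 5}: exactly the positive examples
```

The program is stratified, so this naive evaluation agrees with the stable model semantics.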

Now we will show that the approaches based on rebuttal encounter difficulties in our robot example. More specifically, we consider the approach proposed by Dimopoulos and Kakas in [6]. Their formalism consists of rules of the form \(L_0 \leftarrow L_1, \ldots, L_n\) where \(L_0,L_1,\ldots, L_n\) are positive or explicitly negated literals. These rules are understood as default rules that admit exceptions. An exception is viewed as a rule whose head is the negation of the head of some default rule it attacks, i.e., a rule of the form \(\overline{L}_0 \leftarrow M_1, \ldots, M_k.\) Attacks are formalised by a priority relation between rules, whereby the exception has higher priority than the default rule it attacks. As already mentioned, this type of attack is a rebuttal, in the sense that the default rule and the exception have opposite conclusions. Rules with priorities of this type can easily be translated into an ABA framework, which in turn can be mapped onto a normal logic program (see Sect. 2) with stratified negation, as the priority relation is assumed to be irreflexive and antisymmetric.

Now, from rule (3) the Dimopoulos-Kakas algorithm generates a new learning problem for the concept \(\neg \textit{free}(X)\) with positive and negative examples as follows:

\(\mathcal{E}_1^+= \{\neg \textit{free}(4)\}\), \(\mathcal{E}_1^- = \{\neg \textit{free}(1), \neg \textit{free}(2), \neg \textit{free}(5)\}.\)

The new positive example is the exception to rule (3), and the new negative examples are the positive examples covered by rule (3), which we would like not to be covered by the rules for \(\neg \textit{free}(X)\). The learning algorithm computes the rule7

\(\neg \textit{free}(X) \leftarrow step(X,Y), busy(Y)\)(7)     

which indeed covers the positive example in \(\mathcal{E}_1^+\), but unfortunately also covers the negative example \(\neg \textit{free}(1)\) in \(\mathcal{E}_1^-\), as rule (7) derives \(\neg \textit{free}(X)\) exactly for \(X\in\{1,4\}\). Now, the algorithm should learn the exceptions to the exceptions, which, however, are among the initial positive examples. To avoid entering an infinite loop, the algorithm will just enumerate these exceptions to rule (7) by adding the set of rules \(E = \{\textit{free}(1)\leftarrow, \textit{free}(2)\leftarrow, \textit{free}(5)\leftarrow\}\). The rules in \(E\) attack rule \((7)\), which in turn attacks rule \((3)\).
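As a quick mechanical check (a Python sketch over our encoding of the background facts), we can enumerate the locations for which rule (7) fires:

```python
# Background facts of the robot example.
step = {(1, 2), (1, 3), (2, 4), (2, 5), (4, 6), (5, 2)}
busy = {3, 6}

# Rule (7): -free(X) <- step(X,Y), busy(Y)
neg_free = sorted({x for (x, y) in step if y in busy})
print(neg_free)  # [1, 4]
```

So rule (7) derives \(\neg\textit{free}\) both for the intended exception (location 4) and for location 1.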

We argue that this outcome of the Dimopoulos-Kakas algorithm is due to the fact that, learning by rebuttal, it only looks for values of \(X\), the variable in the head of rule (3), that are exceptions to the rule. In contrast, our approach also looks for exceptions that are instances of \(Y\), which does not occur in the head of the rule.

6 A Learning Strategy↩︎

The transformation rules from Sect. 4 need to be guided to solve an ABA learning problem in a satisfactory way. For instance, by using our transformation rules as shown in Sect. 5 we can learn a very good solution to our robot problem. However, by using the transformation rules in a different way, we could also learn an ABA framework isomorphic to the unsatisfactory Dimopoulos-Kakas solution. In this section we present a template of a general strategy for learning through our transformation rules (see Fig. 3). This strategy is non-deterministic at several points and different choices may lead to different final results.


Figure 3: ABA Learning Strategy

Let us briefly comment on the strategy.

After Step 1, all positive examples for \(p\) are covered, and no negative example for \(p\) is covered, by maximally specific rules that enumerate all positive examples.

At the end of Step 2.2, equalities between variables may be left. After Step 2, by Property 1, all positive examples for \(p\) are still covered, but some negative example might also be covered.

At Step 3, the condition \(X\subseteq vars(B)\) on \(p(X) \gets Eqs, B\) can always be met by adding to its body atoms \(true(X')\) for any missing variables \(X'\). After Step 3, by Property 2, we get a set of rules that still covers all positive examples for \(p\) and, after Rote Learning at the next iteration, covers no negative example.
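For illustration, the Assumption Introduction step applied at Step 3 can be sketched as a purely syntactic operation on rules; the string representation and the `alpha`/`c_alpha` naming below are assumptions of this sketch, not the paper's notation:

```python
def assumption_introduction(head, body, body_vars):
    """Guard a rule's body with a fresh assumption alpha(body_vars);
    its contrary c_alpha(body_vars) will collect the exceptions."""
    args = ", ".join(body_vars)
    alpha = f"alpha({args})"
    contrary = f"c_alpha({args})"
    return (head, body + [alpha]), contrary

# Rule (3) of the robot example: free(X) <- step(X,Y)
new_rule, contrary = assumption_introduction("free(X)", ["step(X,Y)"], ["X", "Y"])
print(new_rule)   # ('free(X)', ['step(X,Y)', 'alpha(X, Y)'])
print(contrary)   # c_alpha(X, Y)
```

The new examples for the contrary then become the inputs of the next iteration of the strategy.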

As it stands, the ABA learning strategy might not terminate, as it could keep introducing new assumptions at each iteration. However, if we assume a finite universe, divergence can be detected by comparing the sets of positive/negative examples (modulo the predicate names) generated for new assumptions wrt those of “ancestor” assumptions. In that case, we can just stop after Step 1 and generate the trivial solution for the predicate at hand by Rote Learning. An alternative way of enforcing termination is to allow the application of Assumption Introduction using assumptions already introduced at previous steps. This variant of the strategy enables learning circular debates, as shown in the next section.

7 Learning circular debates↩︎

Existing argumentation-based approaches (notably [6]) are designed to learn “stratified” knowledge bases. Instead, in some settings, it is useful and natural to learn to conduct circular debates, where opinions may not be conclusively held. As an illustration, we work out a version of the Nixon-diamond example [24].

Background Knowledge. \(\langle\mathcal{R},\mathcal{A},\overline{\;\cdot\;}\rangle\) with bogus assumption and

\(\mathcal{R}= \{quaker(X) \leftarrow X=a, republican(X) \leftarrow X=a,\)

\(quaker(X) \leftarrow X=b, republican(X) \leftarrow X=b\}\).

Positive Examples: \(\mathcal{E}^+ = \{\textit{pacifist}(a)\}\), Negative Examples: \(\mathcal{E}^- = \{\textit{pacifist}(b)\}.\)

Note that this example can be seen as capturing a form of noise, whereby two rows of the same table (one for \(a\) and one for \(b\)) are characterised by exactly the same attributes (amounting to being quakers and republicans) but have different labels (one is pacifist, the other is not). In non-monotonic reasoning terms, this requires reasoning with contradictory rules [24]. In argumentative terms, this boils down to building circular debates. In the remainder, we show how the ABA learning strategy is able to learn these contradictory rules, in a way that supports circular debates.

First iteration of the ABA learning strategy. At Step 1, we consider the positive example \(\textit{pacifist}(a)\) and by Rote Learning we add to \(\mathcal{R}\):

\(\textit{pacifist}(X) \leftarrow X=a\) (1)     

At Step 2, we generalise by Folding using \(quaker(X) \leftarrow X=a\) in the background knowledge, and replace (1) with:

\(\textit{pacifist}(X) \leftarrow quaker(X)\) (2)     

Note that other folds are possible, leading to different ABA frameworks.

At Step 3, there is an argument for the negative example \(\textit{pacifist}(b)\), because \(quaker(X)\) holds for \(X=b\). Thus, we apply the Assumption Introduction rule to replace (2) with:

\(\textit{pacifist}(X) \leftarrow quaker(X), normal\_quaker(X)\) (3)     

with \(\overline{normal\_quaker(X)}=abnormal\_quaker(X)\). The new examples for \(abnormal\_quaker\) are:

\(\mathcal{E}_1^+ = \{\textit{abnormal\_quaker}(b)\},\) \(\mathcal{E}_1^- = \{\textit{abnormal\_quaker}(a)\}.\)

Second iteration of the ABA learning strategy. We learn the \(abnormal\_quaker\) predicate. At Step 1, by Rote Learning we add to \(\mathcal{R}\):

\(abnormal\_quaker(X) \leftarrow X=b\) (4)     

At Step 2, we fold using \(republican(X) \leftarrow X=b\):

\(abnormal\_quaker(X) \leftarrow republican(X)\) (5)     

There is an argument for the negative example \(\textit{abnormal\_quaker}(a)\), and then, at Step 3, we use the Assumption Introduction rule to replace (5) with:

\(abnormal\_quaker(X) \leftarrow republican(X), normal\_republican(X)\) (6)     

with \(\overline{normal\_republican(X)} = abnormal\_republican(X)\).

Third iteration of the ABA learning strategy. We learn the \(abnormal\_republican\) predicate from the examples:

\(\mathcal{E}_2^+ = \{\textit{abnormal\_republican}(a)\}\), \(\mathcal{E}_2^- = \{\textit{abnormal\_republican}(b)\}.\)

By Rote Learning we add:

\(abnormal\_republican(X) \leftarrow X=a\) (7)     

We then fold using \(quaker(X) \leftarrow X=a\) and we get:

\(abnormal\_republican(X) \leftarrow quaker(X)\) (8)     

Since there is an argument for the negative example \(\textit{abnormal\_republican}(b)\), we use the Assumption Introduction rule to obtain:

\(abnormal\_republican(X) \leftarrow quaker(X), normal\_quaker(X)\) (9)     

Note that here we use the same assumption as in (3): this is allowed by the definition of R5 and is useful given that rule (8) has the same body as (2), which was replaced by rule (3) earlier on. Then, by Folding using rule (3), we get:

\(abnormal\_republican(X) \leftarrow \textit{pacifist}(X)\) (10)     

This leads to an ABA framework with final set of learnt rules {(3), (6), (10)}, encompassing a circular debate whereby an argument for being pacifist is attacked by one for being non-pacifist (and vice versa). This ABA framework has (among others) the following stable extension (corresponding to the choice, as the accepted set of assumptions, of \(\{normal\_quaker(a), normal\_republican(b)\}\)):

\(\{\{normal\_quaker(a)\} \vdash \textit{pacifist}(a),\)

\(\{normal\_quaker(a)\} \vdash normal\_quaker(a),\)

\(\{normal\_quaker(a)\} \vdash abnormal\_republican(a),\)

\(\{normal\_republican(b)\} \vdash abnormal\_quaker(b),\)

\(\{normal\_republican(b)\} \vdash normal\_republican(b)\),

\(\emptyset \vdash quaker(a), \emptyset \vdash republican(a), \emptyset \vdash quaker(b), \emptyset \vdash republican(b) \}\).

Here, \(a\) is pacifist and \(b\) is not, thus achieving the goal of ABA learning under credulous reasoning. Note that there are three other stable extensions of the resulting ABA framework (one where \(b\) is pacifist and \(a\) is not, one where both are pacifist and one where neither is), and thus sceptical reasoning would not work.
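The four stable extensions can be checked mechanically by mapping the learnt rules (3), (6) and (10) onto a ground normal logic program (assumptions become negation-as-failure literals on their contraries) and enumerating stable models by brute force. This is only a sketch of the construction, with predicate names abbreviated, not the paper's procedure:

```python
from itertools import combinations

consts = ["a", "b"]
preds = ["quaker", "republican", "pacifist", "abn_quaker", "abn_republican"]
atoms = [f"{p}({x})" for p in preds for x in consts]

# Ground rules as (head, positive body, negative body); facts have empty bodies.
#   (3):  pacifist(X)       <- quaker(X), not abn_quaker(X)
#   (6):  abn_quaker(X)     <- republican(X), not abn_republican(X)
#   (10): abn_republican(X) <- pacifist(X)
rules = []
for x in consts:
    rules += [
        (f"quaker({x})", [], []),
        (f"republican({x})", [], []),
        (f"pacifist({x})", [f"quaker({x})"], [f"abn_quaker({x})"]),
        (f"abn_quaker({x})", [f"republican({x})"], [f"abn_republican({x})"]),
        (f"abn_republican({x})", [f"pacifist({x})"], []),
    ]

def least_model(definite_rules):
    model, changed = set(), True
    while changed:
        changed = False
        for head, pos in definite_rules:
            if head not in model and all(b in model for b in pos):
                model.add(head)
                changed = True
    return model

def is_stable(candidate):
    # Gelfond-Lifschitz reduct: drop rules blocked by the candidate, then
    # drop the remaining negative literals and take the least model.
    reduct = [(h, pos) for h, pos, neg in rules
              if not any(n in candidate for n in neg)]
    return least_model(reduct) == candidate

stable = [set(c) for r in range(len(atoms) + 1)
          for c in combinations(atoms, r) if is_stable(set(c))]
print(len(stable))  # four stable models, as discussed in the text
```

One of the four models contains pacifist(a) but not pacifist(b), matching the extension displayed above.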

8 Conclusions↩︎

We have presented a novel approach to learning ABA frameworks from positive and negative examples, using background knowledge. A notable feature of our method is that it is able to learn exceptions to general rules that can be interpreted as undercutting attacks, besides the more traditional rebuttal attacks. Our learning technique is based on transformation rules, borrowing a well established approach from logic program transformation. We have presented a general strategy that applies these transformation rules in a suitable order and we have shown through some examples that we are able, on one hand, to reconstruct other logic-based learning approaches that learn stratified rules and, on the other hand, to also learn rules enacting non-stratified, circular debates.

Our approach can be expanded in several directions. First of all, we need to study conditions guaranteeing that our transformation rules always derive a correct solution (see Properties 1 and 2). To do so, we may build upon results available in the field of logic program transformation. At present, our learning strategy is a template relying on several non-deterministic choices that need to be realised to obtain a concrete algorithm. We then plan to conduct experiments on existing benchmarks in comparison with other systems. In particular, we plan to compare with FOLD [8], which learns stratified normal logic programs, and ILASP [14], which learns Answer Set Programs. We also plan to explore whether solutions to our ABA learning problems are better suited than existing ILP approaches to provide explanations, leveraging several existing approaches to argumentative XAI [25].

Acknowledgements.↩︎

We thank the anonymous reviewers for useful comments. We also thank Mark Law for advice on the ILASP system. F. Toni was partially funded by the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement No. 101020934) and by J.P. Morgan and the Royal Academy of Engineering under the Research Chairs and Senior Research Fellowships scheme. M. Proietti is a member of the INdAM-GNCS research group.

References↩︎

[1]
P. M. Dung, “On the acceptability of arguments and its fundamental role in nonmonotonic reasoning, logic programming and n-person games,” Artificial Intelligence, vol. 77, no. 2, pp. 321–358, 1995, doi: 10.1016/0004-3702(94)00041-X.
[2]
A. Bondarenko, P. M. Dung, R. A. Kowalski, and F. Toni, “An abstract, argumentation-theoretic approach to default reasoning,” Artificial Intelligence, vol. 93, no. 1–2, pp. 63–101, 1997.
[3]
P. M. Dung, R. A. Kowalski, and F. Toni, “Assumption-based argumentation,” in Argumentation in Artificial Intelligence, Springer, 2009, pp. 199–218.
[4]
F. Toni, “A tutorial on assumption-based argumentation,” Argument & Computation, vol. 5, no. 1, pp. 89–117, 2014.
[5]
K. Cyras, X. Fan, C. Schulz, and F. Toni, “Assumption-based argumentation: Disputes, explanations, preferences,” FLAP, vol. 4, no. 8, 2017, [Online]. Available: http://www.collegepublications.co.uk/downloads/ifcolog00017.pdf.
[6]
Y. Dimopoulos and A. C. Kakas, “Learning non-monotonic logic programs: Learning exceptions,” in ECML, LNCS 912, 1995, pp. 122–137, doi: 10.1007/3-540-59286-5_53.
[7]
A. Pettorossi and M. Proietti, “Transformation of logic programs: Foundations and techniques,” J. Log. Program., vol. 19/20, pp. 261–320, 1994, doi: 10.1016/0743-1066(94)90028-0.
[8]
F. Shakerin, E. Salazar, and G. Gupta, “A new algorithm to automate inductive learning of default theories,” Theory Pract. Log. Program., vol. 17, no. 5–6, pp. 1010–1026, 2017, doi: 10.1017/S1471068417000333.
[9]
K. R. Apt, H. A. Blair, and A. Walker, “Towards a theory of declarative knowledge,” in Foundations of Deductive Databases and Logic Programming, Morgan Kaufmann, 1988, pp. 89–148.
[10]
K. Inoue and Y. Kudoh, “Learning extended logic programs,” in IJCAI, 1997, pp. 176–181, [Online]. Available: http://ijcai.org/Proceedings/97-1/Papers/029.pdf.
[11]
C. Sakama, “Inverse entailment in nonmonotonic logic programs,” in ILP, LNCS 1866, 2000, pp. 209–224, doi: 10.1007/3-540-44960-4_13.
[12]
C. Sakama and K. Inoue, “Brave induction: A logical framework for learning from incomplete information,” Mach. Learn., vol. 76, no. 1, pp. 3–35, 2009, doi: 10.1007/s10994-009-5113-y.
[13]
C. Sakama, “Induction from answer sets in nonmonotonic logic programs,” ACM Trans. Comput. Log., vol. 6, no. 2, pp. 203–231, 2005, doi: 10.1145/1055686.1055687.
[14]
M. Law, A. Russo, and K. Broda, “Inductive learning of answer set programs,” in JELIA, LNCS 8761, 2014, pp. 311–325, doi: 10.1007/978-3-319-11558-0_22.
[15]
O. Ray, “Nonmonotonic abductive inductive learning,” J. Appl. Log., vol. 7, no. 3, pp. 329–340, 2009, doi: 10.1016/j.jal.2008.10.007.
[16]
K. Inoue and H. Haneda, “Learning abductive and nonmonotonic logic programs,” in Abduction and Induction: Essays on their Relation and Integration, Kluwer Academic, 2000, pp. 213–231.
[17]
O. Cocarascu and F. Toni, “Argumentation for machine learning: A survey,” in COMMA, vol. 287, 2016, pp. 219–230, doi: 10.3233/978-1-61499-686-6-219.
[18]
O. Cocarascu, A. Stylianou, K. Cyras, and F. Toni, “Data-empowered argumentation for dialectically explainable predictions,” in ECAI, vol. 325, 2020, pp. 2449–2456, doi: 10.3233/FAIA200377.
[19]
M. Gelfond and V. Lifschitz, “The stable model semantics for logic programming,” in ICLP, 1988, pp. 1070–1080.
[20]
.
[21]
.
[22]
F. Toni and R. A. Kowalski, “An argumentation-theoretic approach to logic program transformation,” in LOPSTR, LNCS 1048, 1996, pp. 61–75.
[23]
S. Muggleton, “Inverse entailment and Progol,” New Generation Computing, vol. 13, no. 3–4, pp. 245–286, 1995.
[24]
R. Reiter and G. Criscuolo, “On interacting defaults,” in IJCAI, 1981, pp. 270–276, [Online]. Available: http://ijcai.org/Proceedings/81-1/Papers/054.pdf.
[25]
K. Cyras, A. Rago, E. Albini, P. Baroni, and F. Toni, “Argumentative XAI: A survey,” in IJCAI, 2021, pp. 4392–4399, doi: 10.24963/ijcai.2021/600.

  1. ILP 2022, 31st International Conference on Inductive Logic Programming, Cumberland Lodge, Windsor, UK↩︎

  2. The non-emptiness requirement can always be satisfied by including in \(\mathcal{A}\) a bogus assumption, with its own contrary, neither occurring elsewhere in the ABA framework. For conciseness, we will not write this assumption and its contrary explicitly.↩︎

  3. The other (ground) arguments are \(\{a(1)\} \vdash_{\{\rho_1(1)\}} p(1)\), \(\{a(2)\} \vdash_{\{\rho_1(2)\}} p(2)\), \(\{b(2)\} \vdash_{\{\rho_2(2)\}} q(2)\), \(\{a(2)\} \vdash_{\emptyset} a(2)\), \(\{b(1)\} \vdash_{\emptyset} b(1)\), and \(\{b(2)\} \vdash_{\emptyset} b(2)\).↩︎

  4. ABA semantics were originally defined in terms of sets of assumptions and attacks between them [2], but can be reformulated, for flat ABA frameworks, in terms of sets of arguments and attacks between them (see [4]), as given here.↩︎

  5. The correspondence also holds under other semantics, omitted here for simplicity.↩︎

  6. We use the same notation for ABA rules and logic programs as, indeed, logic programming is an instance of ABA [2].↩︎

  7. In fact the Dimopoulos-Kakas algorithm is just sketched and we have conjectured what we believe to be a reasonable result of this learning step.↩︎