Recent advancements in algorithms for abstract argumentation now make it possible to solve reasoning problems even on large argumentation frameworks, as demonstrated by the results of the various editions of the International Competition on Computational Models of Argumentation (ICCMA). However, the solvers participating in the competition may be hard to use for non-expert programmers, especially if they need to incorporate these algorithms in their own code instead of simply using the command-line interface. Moreover, some ICCMA solvers focus only on the ICCMA tracks, and do not implement algorithms for other problems. In this paper we describe pygarg, a Python implementation of the SAT-based approach used in the argumentation solver CoQuiAAS. Unlike CoQuiAAS and most participants in the various editions of ICCMA, pygarg covers all problems that have been considered in the main track of any edition of ICCMA. We show how to easily use pygarg via a command-line interface inspired by the ICCMA competitions, and then how it can be used in other Python scripts as a third-party library.
The International Competition on Computational Models of Argumentation (ICCMA) focuses on reasoning tasks in abstract argumentation frameworks. Submitted solvers are tested on a selected collection of benchmark instances, including artificially generated argumentation frameworks and some frameworks formalizing real-world problems. This paper presents the novelties introduced in the organization of the Third (2019) and Fourth (2021) editions of the competition. In particular, we proposed new tracks to competitors: one dedicated to dynamic solvers (i.e., solvers that incrementally compute solutions of frameworks obtained by incrementally modifying original ones) in ICCMA’19, and one dedicated to approximate algorithms in ICCMA’21. From the analysis of the results, we noticed that i) dynamic recomputation of solutions leads to significant performance improvements, ii) approximation provides much faster results with satisfactory accuracy, and iii) classical solvers improved with respect to previous editions, thus revealing an advancement of the state of the art.
In the realm of abstract argumentation, Incomplete Argumentation Frameworks (IAFs), i.e. argumentation frameworks in which some arguments and attacks may be uncertain, have received much attention in the last decade. The most classical reasoning approach consists in using completions, which are argumentation frameworks representing all the possible ways to decide on the actual existence of the uncertain elements. Standard decision problems (like verifying whether a set of arguments is acceptable) can be adapted in two variants: verifying whether the property of interest holds for some completion, or for each completion. In this setting, all completions represent equally plausible scenarios for the agent, which is not always a realistic hypothesis in real-world situations. In this paper, we propose IAFs with Plausibility (pIAFs), a generalization of the IAF model where the agent is able to reason about the relative plausibility of completions. We study the complexity of the usual decision problems of IAFs, adapted to this new model. We also introduce new decision problems concerning the relative plausibility of extensions and provide complexity upper bounds for these new problems as well.
Proceedings of the 18th Journées d’Intelligence Artificielle Fondamentale and the 19th Journées Francophones sur la Planification, la Décision et l’Apprentissage pour la conduite de systèmes (JIAF-JFPDA 2024)
Incomplete Argumentation Frameworks (IAFs) enrich classical abstract argumentation with arguments and attacks whose actual existence is questionable. The usual reasoning approaches rely on the notion of completion, i.e. standard AFs representing “possible worlds” compatible with the uncertain information encoded in the IAF. Recently, extension-based semantics for IAFs that do not rely on the notion of completion have been defined, using instead new versions of conflict-freeness and defense that take into account the (certain or uncertain) nature of arguments and attacks. In this paper, we give new insights into both the “completion-based” and the “direct” reasoning approaches. First, we adapt the well-known grounded semantics to this framework in two different versions that do not rely on completions. After determining that our new semantics are polynomially computable, we provide a principle-based analysis of these semantics, as well as of the “direct” semantics previously defined in the literature, namely the complete, preferred and stable semantics. Finally, we also provide new results regarding the satisfaction of principles by the classical “completion-based” semantics.
In a recently published book, the French writer and comedian François Rollin discusses various aspects of the notion of stupidity, including artificial stupidity, the stupid counterpart of artificial intelligence. His claim is that a system of artificial stupidity is a system that provides wrong answers to any task it should solve, leading to absurd solutions in most cases. We believe that this claim is (at least partially) false and that designing artificial stupidity is not as trivial as it seems. In this article, we discuss why and how one could design a system of artificial stupidity. We believe that such a reflection on (artificial) stupidity can yield some interesting insights into (artificial) intelligence.
Operations like belief change or merging have been adapted to the context of abstract argumentation. However, these operations may require expressing some uncertainty or some disjunction in the result, which is not representable in classical AFs. For this reason, some of these earlier works require a set of AFs or a set of extensions as the outcome of the operation, so as to represent a “disjunction” of AFs or extensions. In parallel, the notion of Incomplete AFs (IAFs) has been developed recently. It corresponds to AFs where the existence of some arguments or attacks may be uncertain. Each IAF can be associated with a set of classical AFs called completions, which correspond to different ways of resolving the uncertainty. While these IAFs could be good candidates for a compact representation of a disjunction of AFs, we prove that this model is not expressive enough. Then we introduce Constrained IAFs (CIAFs), which include a propositional formula that selects the set of completions used for reasoning. We prove that this model is expressive enough for representing any set of AFs, or any set of extensions. Moreover, we study the complexity of various decision problems related to the verification of extensions and the acceptability of arguments. While some of them are one level higher in the polynomial hierarchy (compared to their counterparts with standard IAFs), most of them have the same complexity as in the case of IAFs. Finally, we show that CIAFs can be used to model a new form of extension enforcement, where the possible evolutions of an AF are taken into account and modeled by the completions of the CIAF.
In fair division, local envy-freeness is a desirable property which has been thoroughly studied in recent years. In this paper, we study explanations which can be given to justify that no allocation of items can satisfy this criterion, in the house allocation setting where agents receive a single item. While MUSes are key concepts to extract explanations, they cannot be used as such: (i) they highly depend on the initial encoding of the problem; (ii) they are flat structures which fall short of capturing the dynamics of explanations; (iii) they typically come in large numbers and exhibit great diversity. In this paper we provide two SAT encodings of the problem which enable us to extract MUSes when instances are unsatisfiable. We build a dynamic graph structure which allows the explanation to be followed step by step. Finally, we propose several criteria to select MUSes, some of them being based on the MUS structure, while others rely on this original graphical explanation structure. We give theoretical bounds on these metrics, showing that they can vary significantly for some instances. Experimental results on synthetic data complement these results and illustrate the impact of the encodings and the relevance of our metrics to select among the many MUSes.
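To make the role of MUSes concrete, here is a minimal Python sketch (using the PySAT library) of deletion-based MUS extraction over an arbitrary unsatisfiable clause set. It only illustrates the general technique; the SAT encodings of local envy-freeness and the MUS selection criteria studied in the paper are not reproduced here.

from pysat.solvers import Glucose3

def deletion_based_mus(clauses):
    """Return a minimal unsatisfiable subset of `clauses` (assumed UNSAT)."""
    mus = list(clauses)
    i = 0
    while i < len(mus):
        candidate = mus[:i] + mus[i + 1:]  # try dropping the i-th clause
        with Glucose3(bootstrap_with=candidate) as solver:
            if solver.solve():
                i += 1           # clause is necessary for unsatisfiability: keep it
            else:
                mus = candidate  # still UNSAT without it: remove it
    return mus

# Toy unsatisfiable formula: (x1) & (-x1 | x2) & (-x2) & (x1 | x3)
print(deletion_based_mus([[1], [-1, 2], [-2], [1, 3]]))  # -> [[1], [-1, 2], [-2]]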
Various approaches have been proposed for providing efficient computational methods for abstract argumentation. Among them, neural networks have made it possible to solve various decision problems, notably those related to the (credulous or skeptical) acceptability of arguments. In this work, we push this study further in various ways. First, relying on the state-of-the-art approach AFGCN, we show how we can improve the performance of Graph Convolutional Networks (GCNs) regarding both runtime and accuracy. Then, we show that it is possible to improve the efficiency of the approach even more by modifying the architecture of the network, using Graph Attention Networks (GATs) instead.
In the realm of multi-agent systems, dialogues rooted in argumentation (such as those for persuasion and negotiation) rely on the exchange of arguments among autonomous agents. These agents must continually re-evaluate the acceptability of arguments during each step of the dialogue. The challenge lies in finding efficient computational methods due to the inherent complexity of traditional reasoning tasks in argumentation. In this study, we propose an innovative approach that leverages modern SAT solving techniques to dynamically reassess the (credulous or skeptical) acceptability status of arguments, adhering to various classical semantics found in the existing literature. Our method capitalizes on the assumption mechanism prevalent in contemporary SAT solvers. While previous approaches to argumentation dynamics have also utilized this mechanism, our method stands out by making minimal assumptions, rendering it highly practical. Through a comprehensive experimental evaluation, our approach demonstrates robust scalability and superior performance when compared to existing methods.
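As an illustration of the assumption mechanism mentioned above, the following Python sketch (using the PySAT library, with a hand-written stable-semantics encoding of a three-argument AF) answers credulous and skeptical acceptance queries on a single incremental solver. It is not the encoding or the assumption strategy of the paper, only a minimal example of solving under assumptions.

from pysat.solvers import Glucose3

# AF with arguments a=1, b=2, c=3 and attacks a<->b, b->c.
# Stable encoding: an argument is in the extension iff none of its attackers is.
clauses = [
    [-1, -2], [1, 2],   # a is in iff b is out (and vice versa)
    [-3, -2], [3, 2],   # c is in iff b is out
]

with Glucose3(bootstrap_with=clauses) as solver:
    # Credulous acceptance of c: is there a stable extension containing c?
    print(solver.solve(assumptions=[3]))    # True: {a, c} is stable
    # Is there a stable extension excluding c?
    print(solver.solve(assumptions=[-3]))   # True: {b} is stable, so c is not skeptically accepted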
The growing interest in generalizations of Dung’s abstract argumentation frameworks has recently led to the simultaneous and independent discovery of a combination of two of these generalizations: Bipolar Argumentation Frameworks (BAFs), where a relation representing supports between arguments is added, and Incomplete Argumentation Frameworks (IAFs), where the existence of arguments and attacks may be uncertain, resulting in the so-called Incomplete Bipolar Abstract Argumentation Frameworks (IBAFs). This paper digs deeper into such a combination by: (i) providing a thoughtful analysis of the existing notions of completion (the hypothetical removal of uncertainty used in IBAFs to reason about argument acceptability); (ii) proposing, motivating and studying new notions of completion; (iii) establishing new complexity results on argument acceptability problems associated with IBAFs; (iv) encoding these reasoning problems into a lightweight version of dynamic logic.
Recent advancements in algorithms for abstract argumentation now make it possible to solve reasoning problems even on large argumentation frameworks, as demonstrated by the results of the various editions of the International Competition on Computational Models of Argumentation (ICCMA). However, the solvers participating in the competition may be hard to use for non-expert programmers, especially if they need to incorporate these algorithms in their own code instead of simply using the command-line interface. In this paper we describe pygarg, a Python implementation of the SAT-based approach used in the argumentation solver CoQuiAAS. Unlike CoQuiAAS and most participants in the various editions of ICCMA, pygarg is made to be easy to use even for non-expert programmers. We show how to easily use pygarg in other Python scripts as a third-party library before experimentally demonstrating that it can be used in practice to solve large instances.
Proceedings of the First International Workshop on Argumentation and Applications (Arg&App’23) co-located with the 20th International Conference on Principles of Knowledge Representation and Reasoning (KR’23)
Summary: There were 11 papers submitted for peer review to this workshop. Out of these, 8 papers were accepted for this volume, 5 as regular papers and 3 as short papers. The workshop also included two invited talks: a keynote talk by Antonis Kakas (University of Cyprus), and the announcement of the results of the Fifth International Competition on Computational Models of Argumentation by Tuomo Lehtonen and Andreas Niskanen (University of Helsinki), which is included in the proceedings as an invited paper.
Incomplete Argumentation Frameworks (IAFs) have been defined to incorporate some qualitative uncertainty in abstract argumentation: information such as "I am not sure whether this argument exists" or "I am not sure whether this argument attacks that one" can be expressed. Reasoning with IAFs is classically based on a set of completions, i.e. standard argumentation frameworks that represent the possible worlds encoded in the IAF. The number of these completions may be exponential with respect to the number of arguments in the IAF. This leads, in some cases, to an increase in the complexity of reasoning compared to standard AFs. In this paper, we follow an approach that was initiated for Partial AFs (a subclass of IAFs), which consists in defining new forms of conflict-freeness and defense, the properties that underlie the definition of Dung’s semantics for AFs. We generalize these semantics from PAFs to IAFs. We show that, among three possible types of admissibility, only two of them satisfy some desirable properties. We use them to define two new families of extension-based semantics. We study the properties of these semantics, and in particular we show that their complexity remains the same as in the case of Dung’s AFs. Finally, we propose a logical encoding of these semantics, and we show experimentally that this encoding can be used efficiently to reason with IAFs, thanks to the power of modern SAT solvers.
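The following minimal Python sketch illustrates the general style of such logical encodings: it builds, for a standard AF, the CNF whose models are exactly its stable extensions, which any SAT solver can then process. The actual encoding of the new IAF semantics proposed in the paper is different; this is only meant to show how extensions become models of a propositional formula.

def stable_cnf(arguments, attacks):
    """arguments: list of names; attacks: list of (attacker, target) pairs."""
    var = {a: i + 1 for i, a in enumerate(arguments)}  # one Boolean variable per argument
    clauses = []
    for a in arguments:
        attackers = [x for (x, y) in attacks if y == a]
        # a is in the extension iff none of its attackers is in:
        for b in attackers:
            clauses.append([-var[a], -var[b]])                   # conflict-freeness
        clauses.append([var[a]] + [var[b] for b in attackers])   # a out => some attacker in
    return var, clauses

var, clauses = stable_cnf(["a", "b", "c"], [("a", "b"), ("b", "a"), ("b", "c")])
print(var, clauses)  # feed the clauses to any SAT solver to enumerate stable extensions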
Efficient computation of hard reasoning tasks is a key issue in abstract argumentation. One recent approach consists in defining approximate algorithms, i.e. methods that provide an answer that may not always be correct, but that outperform exact algorithms regarding computation runtime. One such approach proposes to use the grounded semantics, which is polynomially computable, as a starting point for determining whether arguments are (credulously or skeptically) accepted with respect to various semantics. In this paper, we push this idea further by defining various approaches to evaluate the acceptability of arguments which are neither in the grounded extension nor attacked by it. We have implemented our approaches, and we describe the results of their empirical evaluation.
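For illustration, the polynomial starting point mentioned above can be computed as the least fixpoint of the characteristic function. The sketch below is a straightforward (unoptimized) Python implementation of the grounded extension; the approximation heuristics of the paper, which then guess the status of the remaining arguments, are not reproduced here.

def grounded_extension(arguments, attacks):
    attackers = {a: {x for (x, y) in attacks if y == a} for a in arguments}
    extension, defeated = set(), set()
    changed = True
    while changed:
        changed = False
        for a in arguments:
            # a is accepted once all its attackers are known to be defeated
            if a not in extension and attackers[a] <= defeated:
                extension.add(a)
                changed = True
        # every argument attacked by the extension is defeated
        new_defeated = {a for a in arguments if attackers[a] & extension}
        if new_defeated != defeated:
            defeated, changed = new_defeated, True
    return extension

print(grounded_extension(["a", "b", "c"], [("a", "b"), ("b", "c")]))  # -> {'a', 'c'}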
Incomplete Argumentation Frameworks (IAFs) enrich classical abstract argumentation with arguments and attacks whose actual existence is questionable. The usual reasoning approaches rely on the notion of completion, i.e. standard AFs representing "possible worlds" compatible with the uncertain information encoded in the IAF. Recently, extension-based semantics for IAFs that do not rely on the notion of completion have been defined, using instead new versions of conflict-freeness and defense that take into account the (certain or uncertain) nature of arguments and attacks. In this paper, we give new insights into this reasoning approach, by adapting the well-known grounded semantics to this framework in two different versions. After determining the computational complexity of our new semantics, we provide a principle-based analysis of these semantics, as well as of those previously defined in the literature, namely the complete, preferred and stable semantics.
We study the notion of realization of extensions in abstract argumentation. It consists in reversing the usual reasoning process: instead of computing the extensions of an argumentation framework, we want to determine whether a given set of extensions corresponds to some (set of) argumentation framework(s) (AFs); and, more importantly, we want to identify such an AF (or set of AFs) that realizes the set of extensions. While deep theoretical studies have been concerned with the realizability of extension sets, there are few computational approaches for solving this problem. In this paper, we generalize the concept of realizability by introducing two parameters: the number k of auxiliary arguments (i.e. those that do not appear in any extension), and the number m of AFs in the result. We define a translation of k-m-realizability into Quantified Boolean Formula (QBF) solving. We also show that our method allows us to guarantee that the result of the realization is as close as possible to some input AF. Our method can be applied in the context of AF revision operators, where revised extensions must be mapped to a set of AFs while ensuring some notion of proximity with the initial AF.
Graph generators are a powerful tool to provide benchmarks for various subfields of KR (e.g. abstract argumentation, description logics, etc.) as well as other domains of AI (e.g. resource allocation, the gossip problem, etc.). In this paper, we describe a new approach for generating graphs based on the idea of communities, i.e. parts of the graph which are densely connected, but with fewer connections between different communities. We discuss the design of an application named crusti_g2io implementing this idea, and then focus on a use case related to abstract argumentation. We show how crusti_g2io can be used to generate structured hard argumentation instances which are challenging for the solvers of the fourth International Competition on Computational Models of Argumentation (ICCMA’21).
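The community idea can be illustrated with the following toy Python sketch, where attacks are dense inside a community and sparse across communities. This is not the actual algorithm or interface of crusti_g2io, and the parameters p_inner and p_outer are purely illustrative.

import random

def community_graph(n_communities=4, size=10, p_inner=0.3, p_outer=0.01, seed=0):
    rng = random.Random(seed)
    nodes = [(c, i) for c in range(n_communities) for i in range(size)]
    edges = []
    for u in nodes:
        for v in nodes:
            if u == v:
                continue
            # dense inside a community, sparse between communities
            p = p_inner if u[0] == v[0] else p_outer
            if rng.random() < p:
                edges.append((u, v))   # directed attack from u to v
    return nodes, edges

nodes, edges = community_graph()
print(len(nodes), len(edges))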
In this paper, we discuss the application of abstract argumentation mechanisms to resource allocation. We show how such problems can be modeled as abstract argumentation frameworks, such that specific sets of arguments correspond to interesting solutions of the problem. By interesting solutions, here we mean Local Envy-Free (LEF) allocations. Envy-freeness is an important notion of fairness in resource allocation, requiring that no agent should prefer the resource allocated to another agent over its own. We focus on LEF, a generalized form of envy-freeness, and we show that LEF allocations correspond to some specific sets of arguments in our argument-based modeling of the problem. This work in progress paves the way to richer connections between the various models of argumentation and resource allocation problems.
Dung’s Abstract Argumentation Framework has been generalized in various directions. We combine two of these generalizations: Bipolar Argumentation Frameworks (BAFs), where a relation representing supports between arguments is added, and Incomplete Argumentation Frameworks (IAFs), where the existence of arguments and attacks may be uncertain. We discuss how the notion of completion of IAFs can be adapted to take into account the nature of the support relation, providing a couple of alternative definitions. After that, we analyse the impact of choosing among these alternatives on the complexity of argument acceptability problems. Finally, we give a logical encoding of our new framework in the Dynamic Logic of Propositional Assignments.
Proceedings of the Fourth International Workshop on Systems and Algorithms for Formal Argumentation (SAFA’22) co-located with the 9th International Conference on Computational Models of Argument (COMMA’22)
We describe a tool that allows the merging of extensions of argumentation frameworks, following the approach defined in (Proceedings of the Fifteenth International Conference on Principles of Knowledge Representation and Reasoning (KR’16), 2016, pp. 33–42). The tool is implemented in Java, and is highly modular thanks to Object Oriented Programming (OOP) principles. We describe a short experimental study that assesses the scalability of the approach, as well as the impact on runtime of using an integrity constraint. In multi-agent systems, merging the beliefs of several agents in order to have a global view of the group’s beliefs is an important task [16]. When the agents’ beliefs are expressed through argumentation frameworks [11], there are two options: aggregate the graphs (see e.g. [6,7]), or merge the extensions [8]. Here we present a Java tool that follows this second approach, thus allowing the user to reason with the merged beliefs of the group (e.g. for determining whether a given argument is credulously or skeptically accepted w.r.t. the merged beliefs). Note that the original paper also presents so-called generation operators, which allow new argumentation frameworks to be obtained from the merged extensions. This part is not covered by our tool.
Abstract argumentation, as originally defined by Dung, is a model that allows the description of certain information about arguments and the relationships between them: in an abstract argumentation framework (AF), the agent knows for sure whether a given argument or attack exists. This means that the absence of an attack between two arguments can be interpreted as “we know that the first argument does not attack the second one”. But the question of uncertainty in abstract argumentation has received much attention in recent years. In this paper, we survey approaches that allow expressing information like “there may (or may not) be an attack between these arguments”. We describe the main models that incorporate qualitative uncertainty (or ignorance) in abstract argumentation, as well as some applications of these models. We also highlight some open questions that deserve attention in the future.
Recently, Strength-based Argumentation Frameworks (StrAFs) have been proposed to model situations where some quantitative strength is associated with arguments. In this setting, the notion of accrual corresponds to sets of arguments that collectively attack an argument. Some semantics have already been defined, which are sensitive to the existence of accruals that collectively defeat their target, while their individual elements cannot. However, until now, only the surface of this framework and its semantics has been studied. Indeed, the existing literature focuses on the adaptation of the stable semantics to StrAFs. In this paper, we push the study forward and investigate the adaptation of admissibility-based semantics. In particular, we show that the strong admissibility defined in the literature does not satisfy a desirable property, namely Dung’s fundamental lemma. We therefore propose an alternative definition that induces semantics that behave as expected. We then study computational issues for these new semantics; in particular, we show that the complexity of reasoning is similar to that of the corresponding decision problems for standard argumentation frameworks in almost all cases. We then propose a translation into pseudo-Boolean constraints for computing (strong and weak) extensions. We conclude with an experimental evaluation of our approach, which shows in particular that it scales up well for computing one extension as well as for enumerating them all.
Within argumentation dynamics, a major strand of research is concerned with how changing an argumentation framework affects the acceptability of arguments, and how to modify an argumentation framework in order to guarantee that some arguments have a given acceptance status. In this chapter, we overview the main approaches for enforcement in formal argumentation. We mainly focus on extension enforcement, i.e., on how to modify an argumentation framework to ensure that a given set of arguments becomes (part of) an extension. We present different forms of extension enforcement defined in the literature, as well as several possibility and impossibility results. The question of minimal change is also considered, i.e., what is the minimal number of modifications that must be made to the argumentation framework for enforcing an extension. Computational complexity and algorithms based on a declarative approach are discussed. Finally, we briefly describe several notions that do not directly fit our definition of extension enforcement, but are closely related.
Within argumentation dynamics, a major strand of research is concerned with how changing an argumentation framework affects the acceptability of arguments, and how to modify an argumentation framework in order to guarantee that some arguments have a given acceptance status. In this chapter, we overview the main approaches for enforcement in formal argumentation. We mainly focus on extension enforcement, i.e., on how to modify an argumentation framework to ensure that a given set of arguments becomes (part of) an extension. We present different forms of extension enforcement defined in the literature, as well as several possibility and impossibility results. The question of minimal change is also considered, i.e., what is the minimal number of modifications that must be made to the argumentation framework for enforcing an extension. Computational complexity and algorithms based on a declarative approach are discussed. Finally, we briefly describe several notions that do not directly fit our definition of extension enforcement, but are closely related. Note: This journal article is the early open-access version of the Handbook chapter.
Computational argumentation has taken a predominant place in the modeling of negotiation dialogues over the last years. A competent agent participating in a negotiation process is expected to decide its next move taking into account an, often incomplete, model of its opponent. This work provides a complete computational account of argumentation-based negotiation under incomplete opponent profiles. After the agent identifies its best option, in any state of a negotiation, it looks for suitable arguments that support this option in the theory of its opponent. As the knowledge on the opponent is uncertain, the challenge is to find arguments that, ideally, support the selected option despite the uncertainty. We present a negotiation framework based on these ideas, along with experimental evidence that highlights the advantages of our approach.
Argumentation has been an important topic in knowledge representation, reasoning and multi-agent systems during the last twenty years. In this paper, we propose a new abstract framework where arguments are associated with a strength, namely quantitative information which is used to determine whether an attack between arguments succeeds or not. Our Strength-based Argumentation Framework (StrAF) combines ideas of Preference-based and Weighted Argumentation Frameworks in an original way, which permits the definition of acceptability semantics sensitive to the existence of accruals between arguments. The question of accruals arises in situations where several arguments defending the same position (but from different points of view) against another argument are unable to individually defeat this argument, but could do it collectively if they combine their strengths. We investigate some of the theoretical and computational properties of our new framework and semantics, and present a reasoning algorithm that is based on a translation of the problem into pseudo-Boolean constraint satisfaction. This paper proposes an intuitive framework which allows strength compensation in an argumentation context where attacks may not succeed, complemented by an approach which detects accruals throughout the reasoning process without requiring the elicitation of all compensatory combinations of arguments as an input.
In this paper we present Probabilistic Control Argumentation Frameworks (PCAFs), which extend classical Control Argumentation Frameworks (CAFs) to take probabilistic information into account in the reasoning process. We show that probabilities can be used to optimally control CAFs that cannot be controlled otherwise. We introduce the notion of controlling power, which represents the probability that a control configuration reaches its target. A computational method based on Monte Carlo simulations for computing the controlling power of control configurations is defined. We experimentally show that PCAFs outperform classical CAFs with respect to runtime, and that in a large number of situations they can reach the target with a high probability while classical CAFs fail.
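The Monte Carlo idea can be sketched as follows in Python: sample completions of the uncertain part according to their probabilities and count how often the target is reached. The sampling model (independent uncertain elements) and the target_reached function below are illustrative placeholders, not the actual procedure of the paper.

import random

def estimate_controlling_power(uncertain_elements, target_reached, samples=10000, seed=0):
    """uncertain_elements: dict mapping each uncertain argument/attack to its probability."""
    rng = random.Random(seed)
    successes = 0
    for _ in range(samples):
        # draw one completion: keep each uncertain element with its probability
        completion = {e for e, p in uncertain_elements.items() if rng.random() < p}
        if target_reached(completion):
            successes += 1
    return successes / samples

# Toy usage: the target is reached whenever the uncertain attack ('b', 'a') is absent.
print(estimate_controlling_power({('b', 'a'): 0.3}, lambda c: ('b', 'a') not in c))  # roughly 0.7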
Since 2015, the International Competition on Computational Models of Argumentation (ICCMA) has provided a systematic comparison of the different algorithms for solving some classical reasoning problems in the domain of abstract argumentation. This paper discusses the design of the Fourth International Competition on Computational Models of Argumentation. We describe the rules of the competition and the benchmark selection method that we used. After a brief presentation of the competitors, we give an overview of the results.
The recent Control Argumentation Framework (CAF) is a generalization of Dung’s Argumentation Framework which handles argumentation dynamics under uncertainty; in particular, it can be used to model the behavior of an agent who can anticipate future changes in the environment. Here we provide new insights on this model by defining the notion of possible controllability of a CAF. We study the complexity of this new form of reasoning for the four classical semantics, and we provide a logical encoding for reasoning with this framework.
Preference-based argumentation and ranking semantics are two important research topics in the computational argumentation literature. Surprisingly, no study has investigated to what extent preferences over arguments and ranking semantics can interact. This paper fills the gap between the relative priorities that one can express and the evaluation of arguments’ individual acceptability. More precisely, we propose a natural principle that should be satisfied by a ranking semantics for Preference-based Argumentation Frameworks. We show that although existing semantics do not satisfy this desirable principle, they can be used to define new ranking semantics that exhibit the expected behavior. Finally, we discuss an application of these semantics to the modeling of human reasoning.
SAT-based approaches for reasoning with abstract argumentation frameworks (AFs) have been dominating in recent years. Different SAT solvers have been used in the argumentation solvers that performed well at the different occurrences of the International Competition on Computational Models of Argumentation (ICCMA). Based on this observation, the question of the impact of the underlying SAT solver on the efficiency of the argumentation solver has arisen. We have conducted a preliminary study that shows varied results when the SAT solvers Minisat and Glucose are used. Our results suggest interesting research tracks, like the study of the link between the AF instances and the SAT solving algorithms, or the use of a portfolio of SAT solvers for reasoning with AFs.
Since 2015, the International Competition on Computational Models of Argumentation (ICCMA) has made it possible to compare the different algorithms for solving some classical reasoning problems in the domain of (abstract) argumentation. In this paper, we describe the rules of the fourth ICCMA, which will be held in 2021. We introduce some minor modifications regarding the existing tracks (i.e. reasoning with static and dynamic abstract argumentation). Also, for the first time, we present a new track dedicated to structured argumentation, more precisely Assumption-based Argumentation.
The notion of stability in a structured argumentation setup characterizes situations where the acceptance status associated with a given literal will not be impacted by any future evolution of this setup. In this paper, we abstract away from the logical structure of arguments, and we transpose this notion of stability to the context of Dungean argumentation frameworks. In particular, we show how this problem can be translated into reasoning with Argument-Incomplete AFs. Then we provide preliminary complexity results for stability under four prominent semantics, in the case of both credulous and skeptical reasoning. Finally, we illustrate to what extent this notion can be useful with an application to argument-based negotiation.
Recently, qualitative uncertainty in abstract argumentation has received much attention. The first works on this topic introduced uncertainty about the presence of attacks, then about the presence of arguments, and finally combined both kinds of uncertainty, resulting in Incomplete Argumentation Frameworks (IAFs). But another kind of uncertainty was introduced in the context of Control Argumentation Frameworks (CAFs): it consists in a conflict relation with uncertain orientation, i.e. we are sure that there is an attack between two arguments, but the actual direction of the attack is unknown. Here, we formally define Rich IAFs, which combine the three different kinds of uncertainty that were previously introduced in IAFs and CAFs. We show that this new model, although strictly more expressive than IAFs, does not suffer from a blow-up of computational complexity. Also, the existing computational approach based on SAT can be easily adapted to the new framework.
In this paper, we provide an overview of the benchmarks that have been recently employed in Abstract Argumentation. We first describe the benchmark suite from previous editions of the International Competition of Computational Models of Argumentation (ICCMA), and then briefly describe the benchmarks for non-Dung frameworks. This article is a contribution to the new Argument & Computation Community Resources (ACCR) corner.
Computational argumentation has taken a predominant place in the modeling of negotiation dialogues over the last years. A competent agent participating in a negotiation process is expected to decide its next move taking into account an, often incomplete, model of its opponent. This work provides a complete computational account of argumentation-based negotiation under incomplete opponent profiles. After the agent identifies its best option, in any state of a negotiation, it looks for suitable arguments that support this option in the theory of its opponent. As the knowledge on the opponent is uncertain, the challenge is to find arguments that, ideally, support the selected option despite the uncertainty. We present a negotiation framework based on these ideas, along with experimental evidence that highlights the advantages of our approach.
This paper addresses the issue of the dynamic enforcement of a constraint in an argumentation system. The system consists of (1) an argumentation framework, made up notably of a set of arguments and an attack relation, (2) an evaluation semantics, and (3) the evaluation result, computed from (1) and (2). An agent may want another agent to consider a new attack, or to have a given argument accepted, or even to relax the definition of the semantics. A constraint on any of the three components is thus defined, and it has to be enforced in the system. The enforcement may result in changes to components of the system. The paper surveys existing approaches for the dynamic enforcement of a constraint and its consequences, and reveals challenging enforcement cases that remain to be investigated.
Dynamics of argumentation is the family of techniques concerned with the evolution of an argumentation framework (AF), for instance to guarantee that a given set of arguments is accepted. This work proposes Control Argumentation Frameworks (CAFs), a new approach that generalizes existing techniques, namely normal extension enforcement, by accommodating the possibility of uncertainty in dynamic scenarios. A CAF is able to deal with situations where the exact set of arguments is unknown and subject to evolution, and the existence (or direction) of some attacks is also unknown. It can be used by an agent to ensure that a set of arguments is part of one (or every) extension whatever the actual set of arguments and attacks. A QBF encoding of reasoning with CAFs provides a computational mechanism for determining whether and how this goal can be reached. We also provide some results concerning soundness and completeness of the proposed encoding as well as complexity issues.
Argumentation reasoning is a way for agents to evaluate a situation. Given a framework made of conflicting arguments, a semantics allows the evaluation of the acceptability of the arguments. It may happen that the semantics associated with the framework has to be changed. In order to perform the most suitable change, the current semantics and a potential new one have to be compared. Notions of difference measures between semantics have already been proposed, along with application cases where they have to be minimized when a change of semantics is performed. This paper develops these notions: it proposes an additional kind of difference measure, and shows application cases where measures may have to be maximized, and combined.
Change in argumentation frameworks has been widely studied in recent years. Most of the existing works on this topic are concerned with change of the structure of the argumentation graph (addition or removal of arguments and attacks), or change of the outcome of the framework (acceptance statuses of arguments). Change of the acceptability semantics used in the framework has not received much attention so far. Such a change can be motivated by different reasons; in particular, it is a way to change the outcome of the framework. In this paper, it is shown how semantic change can be used as a way to reach a goal about acceptance statuses in a situation of extension enforcement.
CoQuiAAS v2.0: Taking Benefit from Constraint Programming to Solve Argumentation Problems
Properties of argumentation semantics have been widely studied in the last decades. However, there has been no investigation of the question of difference measures between semantics. Such measures prove helpful when the semantics associated with an argumentation framework may have to be changed, in a way that ensures that the new semantics is not too dissimilar from the old one. Three main notions of difference measures between semantics are defined in this paper. Some of these measures are shown to be distances or semi-distances.
In this paper we introduce a new approach for revising and merging consistent Horn formulae under minimal model semantics. Our approach is translation-based in the following sense: we generate a propositional encoding capturing both the syntax of the original Horn formulae (the clauses which appear or not in them) and their semantics (their minimal models). We can then use any classical revision or merging operator to perform belief change on the encoding. The resulting propositional theory is then translated back into a Horn formula. We identify some specific operators which guarantee a particular kind of minimal change. A unique feature of our approach is that it allows us to control whether minimality of change primarily relates to the syntax or to the minimal model semantics of the Horn formula. We give an axiomatic characterization of minimal change on the minimal model for this new setting, and we show that some specific translation-based revision and merging operators satisfy our postulates.
Understanding the behavior of belief change operators for fragments of classical logic has received increasing interest over the last years. Results in this direction are mainly concerned with adapting representation theorems. However, fragment-driven belief change also leads to novel research questions. In this paper we propose the concept of belief distribution, which can be understood as the reverse task of merging. More specifically, we are interested in the following question: given an arbitrary knowledge base K and some merging operator Δ, can we find a profile E and a constraint μ, both from a given fragment of classical logic, such that Δμ(E) yields a result equivalent to K? In other words, we are interested in seeing if K can be distributed into knowledge bases of simpler structure, such that the task of merging allows for a reconstruction of the original knowledge. Our initial results show that merging based on drastic distance allows for an easy distribution of knowledge, while the power of distribution for operators based on Hamming distance relies heavily on the fragment of choice.
Formalizing dynamics of argumentation has received increasing attention over the last years. While AGM-like representation results for revision of argumentation frameworks (AFs) are now available, similar results for the problem of merging are still missing. In this paper, we close this gap and adapt model-based propositional belief merging to define extension-based merging operators for AFs. We state an axiomatic and a constructive characterization of merging operators through a family of rationality postulates and a representation theorem. Then we exhibit merging operators which satisfy the postulates. In contrast to the case of revision, we observe that obtaining a single framework as result of merging turns out to be a more subtle issue. Finally, we establish links between our new results and previous approaches to merging of AFs, which mainly relied on axioms from Social Choice Theory, but lacked AGM-like representation theorems.
Nowadays, argumentation is a salient keyword in artificial intelligence. The use of argumentation techniques is particularly convenient for topics such as multiagent systems, where it allows the description of dialogue protocols (using persuasion, negotiation, ...) or the analysis of on-line discussions; it also allows handling queries where a single agent has to reason with conflicting information (inference in the presence of inconsistency, inconsistency measures). This very rich framework gives numerous reasoning tools, thanks to several acceptability semantics and inference policies. On the other hand, the progress of SAT solvers in recent years, and more generally the progress on Constraint Programming paradigms, has led to some powerful approaches that permit tackling theoretically hard problems. The need for efficient applications to solve the usual reasoning tasks in argumentation, together with the capabilities of modern Constraint Programming solvers, led us to study the encoding of the usual acceptability semantics into logical settings. We propose diverse uses of Constraint Programming techniques to develop a software library dedicated to argumentative reasoning. We present a library which offers the advantage of being generic and easily adaptable. We finally describe an experimental study of our approach for a set of semantics and inference tasks, and we describe the behaviour of our solver during the First International Competition on Computational Models of Argumentation.
Change in abstract argumentation frameworks (AFs) is a very active topic. In particular, the problem of enforcing a set E of arguments, i.e., ensuring that E is an extension (or a subset of an extension) of a given AF F, has received particular attention in recent years. In this paper, we define a new family of enforcement operators, for which enforcement can be achieved by adding new arguments (and attacks) to F (as in previous approaches to enforcement), but also by questioning some attacks (and non-attacks) of F. This family includes previous enforcement operators, but also new ones for which the success of the enforcement operation is guaranteed. We show how the enforcement problem for the operators of the family can be modeled as a pseudo-Boolean optimization problem. Intensive experiments show that the method is practical and that it scales up well.
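As an illustration of this optimization view, the following Python sketch (using the MaxSAT solver RC2 from the PySAT library) enforces a set E as a stable extension over a fixed set of arguments, where each added or removed attack costs 1. The operators of the paper are more general (they also allow adding new arguments), so this is only a simplified variant.

from pysat.formula import WCNF
from pysat.examples.rc2 import RC2

args = ["a", "b", "c"]
original_attacks = {("c", "a")}
E = {"a", "b"}

pairs = [(x, y) for x in args for y in args if x != y]
var = {p: i + 1 for i, p in enumerate(pairs)}   # r_{x,y}: attack (x,y) present in the result

wcnf = WCNF()
# Hard: E must be conflict-free ...
for x in E:
    for y in E:
        if x != y:
            wcnf.append([-var[(x, y)]])
# ... and must attack every argument outside E (stable semantics).
for c in set(args) - E:
    wcnf.append([var[(e, c)] for e in E])
# Soft: keep the attack relation as close as possible to the original one.
for p in pairs:
    wcnf.append([var[p]] if p in original_attacks else [-var[p]], weight=1)

with RC2(wcnf) as rc2:
    model = rc2.compute()
    print("changes needed:", rc2.cost)   # 1 for this toy instance
    print("resulting attacks:", [p for p in pairs if model[var[p] - 1] > 0])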
CoQuiAAS: Application of Constraint Programming for Abstract Argumentation
This thesis tackles the problem of integrating a new piece of information into an abstract argumentation framework. Such a framework is a directed graph whose nodes represent the arguments, and whose directed edges represent the attacks between arguments. There are different ways to decide which arguments are accepted by the agent who uses such a framework to represent her beliefs. An agent may be confronted with a piece of information such as "this argument should be accepted", which is in contradiction with her current beliefs, represented by her argumentation framework. In this thesis, we have studied several approaches to incorporate a piece of information into an argumentation framework. Our first contribution is an adaptation of the AGM framework for belief revision, which has been developed for characterizing the incorporation of a new piece of information when the agent’s beliefs are represented in a logical setting. We have adapted the rationality postulates from the AGM framework to characterize the revision operators suited to argumentation frameworks, and we have identified several ways to generate the argumentation frameworks resulting from the revision. We have also shown how to use AGM revision as a tool for revising argumentation frameworks. Our approach uses a logical encoding of the argumentation framework to take advantage of the classical revision operators for deriving the expected result. At last, we have studied the problem of enforcing a set of arguments (how to change an argumentation framework so that a given set of arguments becomes an extension). We have developed a new family of operators which guarantee the success of the enforcement process, contrary to the existing approaches, and we have shown that a translation of our approaches into satisfaction and optimization problems makes it possible to develop efficient tools for computing the result of the enforcement.
In this paper, we investigate the revision issue for Dung argumentation frameworks. The main idea is that such frameworks can be translated into propositional formulae, allowing the use of propositional revision operators to perform a rational minimal change. Our translation-based approach to revising argumentation frameworks can take advantage of any propositional revision operator ◦. Via a translation, each propositional operator ◦ can be associated with some revision operators ⋆ suited to argumentation frameworks. Some rationality postulates for the ⋆ operators are presented. If the revision formulae are restricted to formulae about acceptance statuses, some ⋆ operators satisfy these postulates provided that the corresponding ◦ operator is AGM.
In this paper, we investigate the revision of argumentation systems à la Dung. We focus on revision as minimal change of the arguments’ statuses. Contrary to most of the previous works on the topic, the addition of new arguments is not allowed in the revision process, so that the revised system has to be obtained by modifying the attack relation only. We introduce a language of revision formulae which is expressive enough for enabling the representation of complex conditions on the acceptability of arguments in the revised system. We show how AGM belief revision postulates can be translated to the case of argumentation systems. We provide a corresponding representation theorem in terms of minimal change of the arguments’ statuses. Several distance-based revision operators satisfying the postulates are also pointed out, along with some methods to build revised argumentation systems. We also discuss some computational aspects of those methods.