Abstract
Argumentation, in the field of Artificial Intelligence, is a formalism for reasoning with contradictory information and for modelling an exchange of arguments between one or several agents. For this purpose, many semantics have been defined, amongst them gradual semantics, which aim to assign an acceptability degree to each argument. Although the number of these semantics continues to increase, there is currently no method for explaining the results they return. In this paper, we study the interpretability of these semantics by measuring, for each argument, the impact of the other arguments on its acceptability degree. We define a new property and show that the score of an argument returned by a gradual semantics satisfying this property can also be computed by aggregating the impact of the other arguments on it. This result makes it possible to provide, for each argument in an argumentation framework, a ranking of the other arguments from the most to the least impacting w.r.t. a given gradual semantics.
Notes
- 1. From a computational point of view, the score of each argument is computed using a fixed-point approach. If the function used in the gradual semantics converges, the number of iterations needed for convergence can also be used to define the maximal depth of the tree-shaped AF.
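The fixed-point computation described in this note can be sketched as follows. The h-categorizer semantics is used here purely as an illustration (the paper's results concern any gradual semantics satisfying the new property, not this one specifically); the function name and AF encoding are hypothetical choices for the sketch.

```python
# Minimal sketch of a fixed-point computation for a gradual semantics
# on an abstract argumentation framework (AF). The AF is given as a
# list of arguments and a set of attack pairs (attacker, target).
# Illustrative only: the h-categorizer update Deg(a) = 1 / (1 + sum of
# attackers' degrees) stands in for whichever semantics is chosen.

def h_categorizer(attacks, arguments, tol=1e-9, max_iter=10_000):
    """Iterate the update until successive degree vectors differ by
    less than `tol`; return the degrees and the iteration count (the
    count that, per the note, bounds the depth of the tree-shaped AF)."""
    deg = {a: 1.0 for a in arguments}
    for it in range(1, max_iter + 1):
        new = {a: 1.0 / (1.0 + sum(deg[b] for (b, tgt) in attacks if tgt == a))
               for a in arguments}
        if max(abs(new[a] - deg[a]) for a in arguments) < tol:
            return new, it
        deg = new
    return deg, max_iter

# Example AF: b and c both attack a; unattacked arguments keep degree 1,
# while a's degree converges to 1 / (1 + 1 + 1) = 1/3.
degrees, iters = h_categorizer({("b", "a"), ("c", "a")}, ["a", "b", "c"])
```

The iteration count returned alongside the degrees is what the note above refers to: when the semantics converges, it can serve as the maximal depth when unfolding the AF into a tree.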
Acknowledgements
This work benefited from the support of the project DGA RAPID CONFIRMA.
Copyright information
© 2019 Springer Nature Switzerland AG
About this paper
Cite this paper
Delobelle, J., Villata, S. (2019). Interpretability of Gradual Semantics in Abstract Argumentation. In: Kern-Isberner, G., Ognjanović, Z. (eds) Symbolic and Quantitative Approaches to Reasoning with Uncertainty. ECSQARU 2019. Lecture Notes in Computer Science(), vol 11726. Springer, Cham. https://doi.org/10.1007/978-3-030-29765-7_3
Publisher Name: Springer, Cham
Print ISBN: 978-3-030-29764-0
Online ISBN: 978-3-030-29765-7