Abstract
Modern Machine Learning (ML) has significantly advanced various fields, yet the challenge of understanding complex models, often referred to as the “black box problem”, remains a barrier to their widespread adoption, particularly in critical domains such as medical diagnosis and financial services. Explainable AI (XAI) addresses this challenge by augmenting ML models’ outputs with interpretable information that facilitates human understanding of their internal decision processes. Despite the proliferation of explainers in recent years, covering a wide range of ML tasks and explanation types, there is no consensus on what constitutes a good explanation, leaving ML practitioners without clear guidance for selecting appropriate explainers. We argue that quantifying explanation quality is the enabling factor for informed explainer choices, but many proposed evaluation criteria are either narrow in scope or closer to desired properties than to quantifiable metrics. This paper addresses this gap by proposing a standardized set of metrics for quantitatively evaluating explanations across diverse explanation types and ML tasks. We describe in detail the metrics of Effective Compactness, Rank Quality Index and Stability, designed to quantitatively assess the quality of various types of explanations (attributions, counterfactuals and rules) across different ML tasks (classification, regression and anomaly detection). We then present an exhaustive benchmarking framework for tabular ML, comprising open datasets, trained models, and state-of-the-art explainers. For each (data, model, explainer) tuple, we measure the time required to produce the explanation, apply our metrics and collect the results, highlighting correlations and trade-offs between desired properties. The resulting framework allows us to quantitatively rank explainers suitable for specific ML scenarios and to select the most appropriate one based on the user’s requirements.
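To make the benchmarking loop described in the abstract concrete, the sketch below shows, in Python, one possible way to iterate over (data, model, explainer) tuples, time explanation production, and collect metric values. It is a minimal illustration only: the function names (run_benchmark, effective_compactness, stability) and the placeholder metric formulas are assumptions for exposition and do not reproduce the paper's actual definitions of Effective Compactness, Rank Quality Index and Stability.

```python
# Minimal sketch of a (data, model, explainer) benchmarking loop.
# All names and metric formulas here are illustrative placeholders,
# not the paper's implementation.
import time
import numpy as np


def effective_compactness(attribution, threshold=0.05):
    """Placeholder compactness score: fraction of features whose normalized
    absolute attribution exceeds a threshold (lower = more compact)."""
    weights = np.abs(attribution) / (np.abs(attribution).sum() + 1e-12)
    return float((weights > threshold).mean())


def stability(explain, x, noise=1e-2, repeats=5, seed=0):
    """Placeholder stability score: mean pairwise cosine similarity between
    attributions of slightly perturbed copies of the same instance."""
    rng = np.random.default_rng(seed)
    atts = [explain(x + rng.normal(0.0, noise, size=x.shape)) for _ in range(repeats)]
    sims = []
    for i in range(repeats):
        for j in range(i + 1, repeats):
            a, b = atts[i], atts[j]
            sims.append(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))
    return float(np.mean(sims))


def run_benchmark(datasets, models, explainers):
    """Iterate over (data, model, explainer) tuples, timing explanation
    production and collecting metric values for later ranking."""
    results = []
    for d_name, (X, y) in datasets.items():
        for m_name, make_model in models.items():
            model = make_model().fit(X, y)
            for e_name, make_explainer in explainers.items():
                # make_explainer is assumed to return a callable mapping an
                # instance to a feature-attribution vector.
                explain = make_explainer(model, X)
                x = X[0]
                t0 = time.perf_counter()
                attribution = explain(x)
                elapsed = time.perf_counter() - t0
                results.append({
                    "dataset": d_name,
                    "model": m_name,
                    "explainer": e_name,
                    "time_s": elapsed,
                    "effective_compactness": effective_compactness(attribution),
                    "stability": stability(explain, x),
                })
    return results
```

In practice, the explainer factories would wrap real attribution methods (e.g., SHAP or LIME), and the per-tuple records would feed the correlation and ranking analyses described above.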
Acknowledgments
The authors would like to thank Laura Li Puma, Paolo Racca, Silvia Ronchiadin, Mauro Giuseppe Ronzano and Mauro Paolo Valorio for their useful comments. The authors would also like to thank Valerio Cencig, Andrea Cosentini, Mario D’Almo and Luigi Ruggerone for supporting the research team.
Ethics declarations
Disclosure of Interests
The research was conducted within the AFC Digital Hub (Anti Financial Crime Digital Hub), a Turin-based consortium established to fight digital financial crime through new technologies and artificial intelligence. AFC Digital Hub’s members are Intesa Sanpaolo, Intesa Sanpaolo Innovation Center, the Polytechnic University of Turin, the University of Turin and CENTAI.
Copyright information
© 2024 The Author(s), under exclusive license to Springer Nature Switzerland AG
About this paper
Cite this paper
Perotti, A., Borile, C., Miola, A., Nerini, F.P., Baracco, P., Panisson, A. (2024). Explainability, Quantified: Benchmarking XAI Techniques. In: Longo, L., Lapuschkin, S., Seifert, C. (eds) Explainable Artificial Intelligence. xAI 2024. Communications in Computer and Information Science, vol 2153. Springer, Cham. https://doi.org/10.1007/978-3-031-63787-2_22
DOI: https://doi.org/10.1007/978-3-031-63787-2_22
Publisher Name: Springer, Cham
Print ISBN: 978-3-031-63786-5
Online ISBN: 978-3-031-63787-2
eBook Packages: Computer Science, Computer Science (R0)