Link to original content: https://api.crossref.org/works/10.1145/3677119
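The record below can be fetched as JSON from the Crossref endpoint linked above. As a minimal sketch of how such a record is consumed programmatically, the snippet parses a trimmed sample whose fields are copied from the full response (the real record carries many more keys, e.g. `reference`, `published-print`):

```python
import json

# Trimmed sample of the Crossref work record returned by
# https://api.crossref.org/works/10.1145/3677119 -- only a few
# fields from the full response are reproduced here.
sample = '''
{
  "status": "ok",
  "message-type": "work",
  "message": {
    "DOI": "10.1145/3677119",
    "type": "journal-article",
    "title": ["Counterfactual Explanations and Algorithmic Recourses for Machine Learning: A Review"],
    "author": [
      {"given": "Sahil", "family": "Verma"},
      {"given": "Varich", "family": "Boonsanong"}
    ]
  }
}
'''

record = json.loads(sample)
assert record["status"] == "ok"      # the API signals success in "status"
work = record["message"]             # the work metadata itself
title = work["title"][0]             # Crossref stores titles as a list
authors = [f'{a["given"]} {a["family"]}' for a in work["author"]]
print(title)
print(", ".join(authors))
```

In a live setting the same dictionary would come from an HTTP GET against the URL above; the parsing logic is unchanged.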
{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2024,10,6]],"date-time":"2024-10-06T01:17:58Z","timestamp":1728177478487},"reference-count":381,"publisher":"Association for Computing Machinery (ACM)","issue":"12","content-domain":{"domain":["dl.acm.org"],"crossmark-restriction":true},"short-container-title":["ACM Comput. Surv."],"published-print":{"date-parts":[[2024,12,31]]},"abstract":"<jats:p>\n Machine learning plays a role in many deployed decision systems, often in ways that are difficult or impossible to understand by human stakeholders. Explaining, in a human-understandable way, the relationship between the input and output of machine learning models is essential to the development of trustworthy machine learning based systems. A burgeoning body of research seeks to define the goals and methods of\n <jats:italic>explainability<\/jats:italic>\n in machine learning. In this article, we seek to review and categorize research on\n <jats:italic>counterfactual explanations<\/jats:italic>\n , a specific class of explanation that provides a link between what could have happened had input to a model been changed in a particular way. Modern approaches to counterfactual explainability in machine learning draw connections to the established legal doctrine in many countries, making them appealing to fielded systems in high-impact areas such as finance and healthcare. Thus, we design a rubric with desirable properties of counterfactual explanation algorithms and comprehensively evaluate all currently proposed algorithms against that rubric. Our rubric provides easy comparison and comprehension of the advantages and disadvantages of different approaches and serves as an introduction to major research themes in this field. 
We also identify gaps and discuss promising research directions in the space of counterfactual explainability.\n <\/jats:p>","DOI":"10.1145\/3677119","type":"journal-article","created":{"date-parts":[[2024,7,9]],"date-time":"2024-07-09T11:17:01Z","timestamp":1720523821000},"page":"1-42","update-policy":"http:\/\/dx.doi.org\/10.1145\/crossmark-policy","source":"Crossref","is-referenced-by-count":10,"title":["Counterfactual Explanations and Algorithmic Recourses for Machine Learning: A Review"],"prefix":"10.1145","volume":"56","author":[{"ORCID":"http:\/\/orcid.org\/0000-0001-7797-926X","authenticated-orcid":false,"given":"Sahil","family":"Verma","sequence":"first","affiliation":[{"name":"Computer Science and Engineering, University of Washington, Seattle, United States"}]},{"ORCID":"http:\/\/orcid.org\/0009-0005-3885-0195","authenticated-orcid":false,"given":"Varich","family":"Boonsanong","sequence":"additional","affiliation":[{"name":"Computer Science and Engineering, University of Washington, Seattle, United States"}]},{"ORCID":"http:\/\/orcid.org\/0000-0002-8918-7566","authenticated-orcid":false,"given":"Minh","family":"Hoang","sequence":"additional","affiliation":[{"name":"Computer Science and Engineering, University of Washington, Seattle, United States"}]},{"ORCID":"http:\/\/orcid.org\/0000-0001-8721-499X","authenticated-orcid":false,"given":"Keegan","family":"Hines","sequence":"additional","affiliation":[{"name":"Arthur AI, Washington DC, United States"}]},{"ORCID":"http:\/\/orcid.org\/0000-0003-2231-680X","authenticated-orcid":false,"given":"John","family":"Dickerson","sequence":"additional","affiliation":[{"name":"Arthur AI, Washington DC, United States"}]},{"ORCID":"http:\/\/orcid.org\/0000-0002-3797-4293","authenticated-orcid":false,"given":"Chirag","family":"Shah","sequence":"additional","affiliation":[{"name":"University of Washington, Seattle, United 
States"}]}],"member":"320","published-online":{"date-parts":[[2024,10,3]]},"reference":[{"key":"e_1_3_4_2_2","first-page":"66","volume-title":"Proceedings of the 39th International Conference on Machine Learning","author":"Abid Abubakar","year":"2022","unstructured":"Abubakar Abid, Mert Yuksekgonul, and James Zou. 2022. Meaningfully debugging model mistakes using conceptual counterfactual explanations. In Proceedings of the 39th International Conference on Machine Learning. PMLR, 66\u201388. https:\/\/proceedings.mlr.press\/v162\/abid22a.html"},{"key":"e_1_3_4_3_2","doi-asserted-by":"publisher","unstructured":"Carlo Abrate and Francesco Bonchi. 2021. Counterfactual graphs for explainable classification of brain networks(KDD \u201921). ACM New York 10. DOI:10.1145\/3447548.3467154","DOI":"10.1145\/3447548.3467154"},{"key":"e_1_3_4_4_2","doi-asserted-by":"publisher","DOI":"10.1109\/ACCESS.2018.2870052"},{"key":"e_1_3_4_5_2","doi-asserted-by":"publisher","DOI":"10.1007\/s11390-010-9337-x"},{"volume-title":"Proceedings of the 36th International Conference on Machine Learning","year":"2019","author":"A\u00efvodji Ulrich","key":"e_1_3_4_6_2","unstructured":"Ulrich A\u00efvodji, Hiromi Arai, Olivier Fortineau, S\u00e9bastien Gambs, Satoshi Hara, and Alain Tapp. 2019. Fairwashing: The risk of rationalization. In Proceedings of the 36th International Conference on Machine Learning. PMLR. https:\/\/proceedings.mlr.press\/v97\/aivodji19a.html"},{"volume-title":"Advances in Neural Information Processing Systems","year":"2021","author":"A\u00efvodji Ulrich","key":"e_1_3_4_7_2","unstructured":"Ulrich A\u00efvodji, Hiromi Arai, S\u00e9bastien Gambs, and Satoshi Hara. 2021. Characterizing the risk of fairwashing. In Advances in Neural Information Processing Systems, Vol. 34. 
Curran Associates, Inc.https:\/\/proceedings.neurips.cc\/paper\/2021\/file\/7caf5e22ea3eb8175ab518429c8589a4-Paper.pdf"},{"key":"e_1_3_4_8_2","article-title":"Model extraction from counterfactual explanations","author":"A\u00efvodji Ulrich","year":"2020","unstructured":"Ulrich A\u00efvodji, Alexandre Bolot, and S\u00e9bastien Gambs. 2020. Model extraction from counterfactual explanations. arXiv:2009.01884 (2020).","journal-title":"arXiv:2009.01884"},{"key":"e_1_3_4_9_2","doi-asserted-by":"publisher","DOI":"10.1609\/aaai.v34i03.5643"},{"key":"e_1_3_4_10_2","doi-asserted-by":"publisher","DOI":"10.1016\/j.isci.2021.103581"},{"key":"e_1_3_4_11_2","doi-asserted-by":"publisher","unstructured":"Emanuele Albini Jason Long Danial Dervovic and Daniele Magazzeni. 2022. Counterfactual shapley additive explanations(FAccT \u201922). ACM New York 17. DOI:10.1145\/3531146.3533168","DOI":"10.1145\/3531146.3533168"},{"key":"e_1_3_4_12_2","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-030-89188-6_7"},{"key":"e_1_3_4_13_2","doi-asserted-by":"publisher","DOI":"10.1007\/s00521-021-06528-z"},{"key":"e_1_3_4_14_2","doi-asserted-by":"publisher","DOI":"10.1002\/ail2.47"},{"key":"e_1_3_4_15_2","doi-asserted-by":"publisher","DOI":"10.1016\/0950-7051(96)81920-4"},{"volume-title":"Proceedings of the International Conference on Learning Representations","year":"2021","author":"Antoran Javier","key":"e_1_3_4_16_2","unstructured":"Javier Antoran, Umang Bhatt, Tameem Adel, Adrian Weller, and Jos\u00e9 Miguel Hern\u00e1ndez-Lobato. 2021. Getting a CLUE: A method for explaining uncertainty estimates. In Proceedings of the International Conference on Learning Representations. https:\/\/openreview.net\/forum?id=XSLF1XFq5h"},{"key":"e_1_3_4_17_2","doi-asserted-by":"publisher","DOI":"10.1111\/rssb.12377"},{"key":"e_1_3_4_18_2","unstructured":"Andr\u00e9 Artelt. 2019-2021. CEML: Counterfactuals for Explaining Machine Learning Models. 
https:\/\/www.github.com\/andreArtelt\/ceml"},{"key":"e_1_3_4_19_2","unstructured":"Andr\u00e9 Artelt and Barbara Hammer. 2019. On the Computation of Counterfactual Explanations \u2013 A Survey. http:\/\/arxiv.org\/abs\/1911.07749"},{"key":"e_1_3_4_20_2","doi-asserted-by":"publisher","unstructured":"Andr\u00e9 Artelt and Barbara Hammer. 2020. Efficient Computation of Contrastive Explanations. DOI:10.48550\/ARXIV.2010.02647","DOI":"10.48550\/ARXIV.2010.02647"},{"key":"e_1_3_4_21_2","doi-asserted-by":"publisher","unstructured":"Andr\u00e9 Artelt and Barbara Hammer. 2021. Convex Optimization for Actionable & Plausible Counterfactual Explanations. DOI:10.48550\/ARXIV.2105.07630","DOI":"10.48550\/ARXIV.2105.07630"},{"key":"e_1_3_4_22_2","doi-asserted-by":"crossref","unstructured":"Andr\u00e9 Artelt and Barbara Hammer. 2022. \u201cEven if ...\u201d \u2013 Diverse Semifactual Explanations of Reject. arxiv:2207.01898","DOI":"10.1109\/SSCI51031.2022.10022139"},{"key":"e_1_3_4_23_2","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-030-85030-2_9"},{"key":"e_1_3_4_24_2","doi-asserted-by":"publisher","DOI":"10.1109\/SSCI50451.2021.9660058"},{"key":"e_1_3_4_25_2","doi-asserted-by":"publisher","DOI":"10.1609\/aaai.v38i21.30390"},{"key":"e_1_3_4_26_2","doi-asserted-by":"publisher","DOI":"10.3390\/make4020014"},{"key":"e_1_3_4_27_2","first-page":"1","volume-title":"Proceedings of the 2021 International Conference on Applied Artificial Intelligence (ICAPAI\u201921)","author":"Ates Emre","year":"2021","unstructured":"Emre Ates, Burak Aksar, Vitus J. Leung, and Ayse K. Coskun. 2021. Counterfactual explanations for multivariate time series. In Proceedings of the 2021 International Conference on Applied Artificial Intelligence (ICAPAI\u201921). 1\u20138. 
DOI:10.1109\/ICAPAI49758.2021.9462056"},{"key":"e_1_3_4_28_2","doi-asserted-by":"publisher","DOI":"10.1109\/TNNLS.2022.3165618"},{"key":"e_1_3_4_29_2","doi-asserted-by":"publisher","unstructured":"Mohit Bajaj Lingyang Chu Zi Yu Xue Jian Pei Lanjun Wang Peter Cho-Ho Lam and Yong Zhang. 2021. Robust Counterfactual Explanations on Graph Neural Networks. DOI:10.48550\/ARXIV.2107.04086","DOI":"10.48550\/ARXIV.2107.04086"},{"key":"e_1_3_4_30_2","doi-asserted-by":"publisher","unstructured":"Rachana Balasubramanian Samuel Sharpe Brian Barr Jason Wittenbach and C. Bayan Bruss. 2020. Latent-CF: A Simple Baseline for Reverse Counterfactual Explanations. DOI:10.48550\/ARXIV.2012.09301","DOI":"10.48550\/ARXIV.2012.09301"},{"volume-title":"Proceedings of the Conference on Fairness, Accountability, and Transparency (FAccT\u201920) (FAT* \u201920)","year":"2020","author":"Barocas Solon","key":"e_1_3_4_31_2","unstructured":"Solon Barocas, Andrew D. Selbst, and Manish Raghavan. 2020. The hidden assumptions behind counterfactual explanations and principal reasons. In Proceedings of the Conference on Fairness, Accountability, and Transparency (FAccT\u201920) (FAT* \u201920). ACM, New York, 10. DOI:10.1145\/3351095.3372830"},{"key":"e_1_3_4_32_2","doi-asserted-by":"publisher","unstructured":"Brian Barr Matthew R. Harrington Samuel Sharpe and C. Bayan Bruss. 2021. Counterfactual Explanations via Latent Space Projection and Interpolation. DOI:10.48550\/ARXIV.2112.00890","DOI":"10.48550\/ARXIV.2112.00890"},{"key":"e_1_3_4_33_2","doi-asserted-by":"crossref","DOI":"10.1093\/0198244274.001.0001","volume-title":"The Scientific Image","author":"Bas C. Van Fraassen","year":"1980","unstructured":"C. Van Fraassen Bas. 1980. The Scientific Image. Oxford University Press."},{"key":"e_1_3_4_34_2","doi-asserted-by":"publisher","DOI":"10.24432\/C5XW20"},{"key":"e_1_3_4_35_2","doi-asserted-by":"publisher","unstructured":"Sander Beckers. 2022. Causal Explanations and XAI. 
DOI:10.48550\/ARXIV.2201.13169","DOI":"10.48550\/ARXIV.2201.13169"},{"key":"e_1_3_4_36_2","doi-asserted-by":"publisher","DOI":"10.1017\/S1471068421000582"},{"volume-title":"Proceedings of CHI 2018","year":"2018","author":"Binns Reuben","key":"e_1_3_4_37_2","unstructured":"Reuben Binns, Max Van Kleek, Michael Veale, Ulrik Lyngs, Jun Zhao, and Nigel Shadbolt. 2018. \u2019It\u2019s reducing a human being to a percentage\u2019: Perceptions of justice in algorithmic decisions. In Proceedings of CHI 2018. ACM, New York, 14. DOI:10.1145\/3173574.3173951"},{"volume-title":"Proceedings of the International Conference on Learning Representations","year":"2022","author":"Black Emily","key":"e_1_3_4_38_2","unstructured":"Emily Black, Zifan Wang, and Matt Fredrikson. 2022. Consistent counterfactuals for deep models. In Proceedings of the International Conference on Learning Representations. https:\/\/arxiv.org\/abs\/2110.03109"},{"key":"e_1_3_4_39_2","doi-asserted-by":"publisher","DOI":"10.24432\/C50K5N"},{"key":"e_1_3_4_40_2","doi-asserted-by":"publisher","unstructured":"Pierre Blanchart. 2021. An Exact Counterfactual-example-based Approach to Tree-ensemble Models Interpretability. DOI:10.48550\/ARXIV.2105.14820","DOI":"10.48550\/ARXIV.2105.14820"},{"key":"e_1_3_4_41_2","article-title":"Fitting a response model for n dichotomously scored items","volume":"35","author":"Boch R. D.","year":"1970","unstructured":"R. D. Boch and M. Lieberman. 1970. Fitting a response model for n dichotomously scored items. Psychometrika 35 (1970), 179\u201397.","journal-title":"Psychometrika"},{"key":"e_1_3_4_42_2","doi-asserted-by":"crossref","unstructured":"Sebastian Bordt Mich\u00e8le Finck Eric Raidl and Ulrike von Luxburg. 2022. Post-Hoc Explanations Fail to Achieve their Purpose in Adversarial Contexts. https:\/\/arxiv.org\/abs\/2201.10295","DOI":"10.1145\/3531146.3533153"},{"key":"e_1_3_4_43_2","doi-asserted-by":"publisher","unstructured":"Zeyd Boukhers Timo Hartmann and Jan J\u00fcrjens. 
2022. COIN: Counterfactual Image Generation for VQA Interpretation. DOI:10.48550\/ARXIV.2201.03342","DOI":"10.48550\/ARXIV.2201.03342"},{"key":"e_1_3_4_44_2","first-page":"299","volume-title":"Proceedings of the 2021 30th IEEE International Conference on Robot & Human Interactive Communication (RO-MAN\u201921)","author":"Brand\u00e3o Martim","year":"2021","unstructured":"Martim Brand\u00e3o, Gerard Canal, Senka Krivi\u0107, Paul Luff, and Amanda Coles. 2021. How experts explain motion planner output: A preliminary user-study to inform the design of explainable planners. In Proceedings of the 2021 30th IEEE International Conference on Robot & Human Interactive Communication (RO-MAN\u201921). 299\u2013306. DOI:10.1109\/RO-MAN50785.2021.9515407"},{"key":"e_1_3_4_45_2","doi-asserted-by":"publisher","DOI":"10.32473\/flairs.v34i1.128795"},{"key":"e_1_3_4_46_2","doi-asserted-by":"publisher","unstructured":"Kieran Browne and Ben Swift. 2020. Semantics and Explanation: Why Counterfactual Explanations Produce Adversarial Examples in Deep Neural Networks. DOI:10.48550\/ARXIV.2012.10076","DOI":"10.48550\/ARXIV.2012.10076"},{"key":"e_1_3_4_47_2","doi-asserted-by":"publisher","unstructured":"Dieter Brughmans and David Martens. 2021. NICE: An Algorithm for Nearest Instance Counterfactual Explanations. DOI:10.48550\/ARXIV.2104.07411","DOI":"10.48550\/ARXIV.2104.07411"},{"key":"e_1_3_4_48_2","doi-asserted-by":"publisher","DOI":"10.1016\/j.eswa.2022.117271"},{"key":"e_1_3_4_49_2","doi-asserted-by":"publisher","unstructured":"Ngoc Bui Duy Nguyen and Viet Anh Nguyen. 2022. Counterfactual Plans under Distributional Ambiguity. DOI:10.48550\/ARXIV.2201.12487","DOI":"10.48550\/ARXIV.2201.12487"},{"key":"e_1_3_4_50_2","doi-asserted-by":"publisher","DOI":"10.1017\/S0140525X07002579"},{"key":"e_1_3_4_51_2","first-page":"6276","volume-title":"Proceedings of the 28th International Joint Conference on Artificial Intelligence (IJCAI-19)","author":"Byrne Ruth M. 
J.","year":"2019","unstructured":"Ruth M. J. Byrne. 2019. Counterfactuals in explainable artificial intelligence (XAI): Evidence from human reasoning. In Proceedings of the 28th International Joint Conference on Artificial Intelligence (IJCAI-19). International Joint Conferences on Artificial Intelligence Organization, California, USA, 6276\u20136282. 10.24963\/ijcai.2019\/876"},{"key":"e_1_3_4_52_2","doi-asserted-by":"publisher","unstructured":"Carrie J. Cai Jonas Jongejan and Jess Holbrook. 2019. The effects of example-based explanations in a machine learning interface(IUI \u201919). ACM New York 258\u2013262. DOI:10.1145\/3301275.3302289","DOI":"10.1145\/3301275.3302289"},{"key":"e_1_3_4_53_2","doi-asserted-by":"publisher","DOI":"10.1609\/aaai.v35i8.16851"},{"key":"e_1_3_4_54_2","doi-asserted-by":"publisher","unstructured":"Emilio Carrizosa Jasone Ramirez-Ayerbe and Dolores Romero Morales. 2021. Generating Collective Counterfactual Explanations in Score-Based Classification via Mathematical Optimization. DOI:10.13140\/RG.2.2.22996.12168\/1","DOI":"10.13140\/RG.2.2.22996.12168\/1"},{"key":"e_1_3_4_55_2","doi-asserted-by":"publisher","unstructured":"Emilio Carrizosa Jasone Ram\u00edrez-Ayerbe and Dolores Romero Morales. 2022. Counterfactual Explanations for Functional Data: A Mathematical Optimization Approach. DOI:10.13140\/RG.2.2.25682.68801","DOI":"10.13140\/RG.2.2.25682.68801"},{"key":"e_1_3_4_56_2","doi-asserted-by":"publisher","DOI":"10.1016\/j.ejor.2024.01.002"},{"key":"e_1_3_4_57_2","doi-asserted-by":"publisher","DOI":"10.3390\/electronics8080832"},{"key":"e_1_3_4_58_2","doi-asserted-by":"publisher","DOI":"10.3758\/s13421-023-01407-5"},{"key":"e_1_3_4_59_2","unstructured":"CFPB. [n. d.]. Adverse Action Notice Requirements Under the ECOA and the FCRA. https:\/\/consumercomplianceoutlook.org\/2013\/second-quarter\/adverse-action-notice-requirements-under-ecoa-fcra\/. Accessed: 2020-10-15."},{"key":"e_1_3_4_60_2","unstructured":"CFPB. [n. d.]. 
Notification of Action Taken ECOA Notice and Statement of Specific Reasons. https:\/\/www.consumerfinance.gov\/policy-compliance\/rulemaking\/regulations\/1002\/9\/. Accessed: 2020-10-15."},{"key":"e_1_3_4_61_2","first-page":"2516","volume-title":"Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing","author":"Chen Qianglong","year":"2021","unstructured":"Qianglong Chen, Feng Ji, Xiangji Zeng, Feng-Lin Li, Ji Zhang, Haiqing Chen, and Yin Zhang. 2021. KACE: Generating knowledge aware contrastive explanations for natural language inference. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing. Association for Computational Linguistics, Online, 2516\u20132527. DOI:10.18653\/v1\/2021.acl-long.196"},{"key":"e_1_3_4_62_2","doi-asserted-by":"publisher","DOI":"10.1145\/3143561"},{"key":"e_1_3_4_63_2","unstructured":"Yatong Chen Jialu Wang and Yang Liu. 2020. Strategic Recourse in Linear Classification. https:\/\/dynamicdecisions.github.io"},{"key":"e_1_3_4_64_2","doi-asserted-by":"publisher","unstructured":"Ziheng Chen Fabrizio Silvestri Jia Wang He Zhu Hongshik Ahn and Gabriele Tolomei. 2021. ReLAX: Reinforcement Learning Agent eXplainer for Arbitrary Predictive Models. DOI:10.48550\/ARXIV.2110.11960","DOI":"10.48550\/ARXIV.2110.11960"},{"key":"e_1_3_4_65_2","unstructured":"Furui Cheng Yao Ming and Huamin Qu. 2020. DECE: Decision Explorer with Counterfactual Explanations for Machine Learning Models. arxiv:cs.LG\/2008.08353"},{"key":"e_1_3_4_66_2","doi-asserted-by":"publisher","unstructured":"Noel Codella Veronica Rotemberg Philipp Tschandl M. Emre Celebi Stephen Dusza David Gutman Brian Helba Aadi Kalloo Konstantinos Liopyris Michael Marchetti Harald Kittler and Allan Halpern. 2019. 
Skin Lesion Analysis Toward Melanoma Detection 2018: A Challenge Hosted by the International Skin Imaging Collaboration (ISIC). DOI:10.48550\/ARXIV.1902.03368","DOI":"10.48550\/ARXIV.1902.03368"},{"key":"e_1_3_4_67_2","doi-asserted-by":"publisher","DOI":"10.1109\/IJCNN.2017.7966217"},{"key":"e_1_3_4_68_2","unstructured":"European Commission. [n. d.]. Artificial Intelligence. https:\/\/ec.europa.eu\/info\/funding-tenders\/opportunities\/portal\/screen\/opportunities\/topic-details\/ict-26-2018-2020. Accessed: 2020-10-15."},{"key":"e_1_3_4_69_2","unstructured":"European Commission. [n. d.]. REGULATION (EU) 2016\/679 OF THE EUROPEAN PARLIAMENT AND OF THE COUNCIL of 27 April 2016 on the Protection of Natural Persons with Regard to the Processing of Personal Data and on the Free Movement of Such Data and Repealing Directive 95\/46\/EC (General Data Protection Regulation). https:\/\/eur-lex.europa.eu\/eli\/reg\/2016\/679\/oj. Accessed: 2020-10-15."},{"key":"e_1_3_4_70_2","doi-asserted-by":"publisher","DOI":"10.14778\/3461535.3461546"},{"key":"e_1_3_4_71_2","doi-asserted-by":"publisher","unstructured":"Michael Correll. 2019. Ethical dimensions of visualization research. In Proceedings ofCHI \u201919. ACM New York 13. DOI:10.1145\/3290605.3300418","DOI":"10.1145\/3290605.3300418"},{"key":"e_1_3_4_72_2","doi-asserted-by":"publisher","DOI":"10.24432\/C5TG7T"},{"key":"e_1_3_4_73_2","first-page":"24","volume-title":"Proceedings of the 8th International Conference on Neural Information Processing Systems (NIPS\u201995)","author":"Craven Mark W.","year":"1995","unstructured":"Mark W. Craven and Jude W. Shavlik. 1995. Extracting tree-structured representations of trained networks. In Proceedings of the 8th International Conference on Neural Information Processing Systems (NIPS\u201995). 
MIT Press, Cambridge, MA, USA, 24\u201330."},{"key":"e_1_3_4_74_2","first-page":"24","volume-title":"Proceedings of the 14th International Conference on Agents and Artificial Intelligence - Volume 2: ICAART,","author":"Crupi Riccardo","year":"2022","unstructured":"Riccardo Crupi, Beatriz San Miguel Gonz\u00e1lez, Alessandro Castelnovo, and Daniele Regoli. 2022. Leveraging causal relations to provide counterfactual explanations and feasible recommendations to end users. In Proceedings of the 14th International Conference on Agents and Artificial Intelligence - Volume 2: ICAART,. SciTePress, 24\u201332. DOI:10.5220\/0010761500003116"},{"volume-title":"Proceedings of the 2022 AAAI\/ACM Conference on AI, Ethics, and Society (AIES \u201922)","year":"2022","author":"Dai Xinyue","key":"e_1_3_4_75_2","unstructured":"Xinyue Dai, Mark T. Keane, Laurence Shalloo, Elodie Ruelle, and Ruth M. J. Byrne. 2022. Counterfactual explanations for prediction and diagnosis in XAI. In Proceedings of the 2022 AAAI\/ACM Conference on AI, Ethics, and Society (AIES \u201922). ACM, New York,, 12. DOI:10.1145\/3514094.3534144"},{"key":"e_1_3_4_76_2","first-page":"448","volume-title":"Proceedings of PPSN XVI","author":"Dandl Susanne","year":"2020","unstructured":"Susanne Dandl, Christoph Molnar, Martin Binder, and Bernd Bischl. 2020. Multi-objective counterfactual explanations. In Proceedings of PPSN XVI. Springer International Publishing, Cham, 448\u2013469. DOI:10.1007\/978-3-030-58112-1_31"},{"key":"e_1_3_4_77_2","unstructured":"DARPA. [n. d.]. Broad Agency Announcement: Explainable Artificial Intelligence (XAI). https:\/\/www.darpa.mil\/attachments\/DARPA-BAA-16-53.pdf. Accessed: 2020-10-15."},{"key":"e_1_3_4_78_2","first-page":"915","volume-title":"Proceedings of the IEEE\/CVF Winter Conference on Applications of Computer Vision (WACV\u201922)","author":"Dash Saloni","year":"2022","unstructured":"Saloni Dash, Vineeth N Balasubramanian, and Amit Sharma. 2022. 
Evaluating and mitigating bias in image classifiers: A causal perspective using counterfactuals. In Proceedings of the IEEE\/CVF Winter Conference on Applications of Computer Vision (WACV\u201922). 915\u2013924. DOI:10.1109\/WACV51458.2022.00393"},{"key":"e_1_3_4_79_2","doi-asserted-by":"crossref","first-page":"598","DOI":"10.1109\/SP.2016.42","volume-title":"Proceedings of 2016 IEEE Symposium on Security and Privacy (SP\u201916)","author":"Datta A.","year":"2016","unstructured":"A. Datta, S. Sen, and Y. Zick. 2016. Algorithmic transparency via quantitative input influence: Theory and experiments with learning systems. In Proceedings of 2016 IEEE Symposium on Security and Privacy (SP\u201916). IEEE, New York,, 598\u2013617. DOI:10.1109\/SP.2016.42"},{"key":"e_1_3_4_80_2","doi-asserted-by":"publisher","unstructured":"Lucas de Lara Alberto Gonz\u00e1lez-Sanz Nicholas Asher and Jean-Michel Loubes. 2021. Transport-based Counterfactual Models. DOI:10.48550\/ARXIV.2108.13025","DOI":"10.48550\/ARXIV.2108.13025"},{"key":"e_1_3_4_81_2","doi-asserted-by":"publisher","unstructured":"Giovanni De Toni Bruno Lepri and Andrea Passerini. 2022. Synthesizing Explainable Counterfactual Policies for Algorithmic Recourse with Program Synthesis. DOI:10.48550\/ARXIV.2201.07135","DOI":"10.48550\/ARXIV.2201.07135"},{"key":"e_1_3_4_82_2","doi-asserted-by":"publisher","unstructured":"Sarah Dean Sarah Rich and Benjamin Recht. 2020. Recommendations and user agency: The reachability of collaboratively-filtered information. In Proceedings ofFAT* \u201920. ACM New York 10. DOI:10.1145\/3351095.3372866","DOI":"10.1145\/3351095.3372866"},{"key":"e_1_3_4_83_2","first-page":"32","volume-title":"Proceedings of the 29th International Conference on Case-Based Reasoning Research and Development (ICCBR 2021), (Salamanca, Spain, September 13\u201316, 2021). ,","author":"Delaney Eoin","year":"2021","unstructured":"Eoin Delaney, Derek Greene, and Mark T. Keane. 2021. 
Instance-based counterfactual explanations for time series classification. In Proceedings of the 29th International Conference on Case-Based Reasoning Research and Development (ICCBR 2021), (Salamanca, Spain, September 13\u201316, 2021). ,. Springer-Verlag, Berlin,, 32\u201347. DOI:10.1007\/978-3-030-86957-1_3"},{"key":"e_1_3_4_84_2","unstructured":"Eoin Delaney Derek Greene and Mark T. Keane. 2021. Uncertainty Estimation and Out-of-Distribution Detection for Counterfactual Explanations: Pitfalls and Solutions. https:\/\/arxiv.org\/abs\/2107.09734"},{"key":"e_1_3_4_85_2","doi-asserted-by":"publisher","DOI":"10.1016\/j.artint.2023.103995"},{"key":"e_1_3_4_86_2","doi-asserted-by":"publisher","DOI":"10.1007\/s41060-018-0144-8"},{"key":"e_1_3_4_87_2","doi-asserted-by":"crossref","first-page":"248","DOI":"10.1109\/CVPR.2009.5206848","volume-title":"Proceedings of the 2009 IEEE Conference on Computer Vision and Pattern Recognition","author":"Deng Jia","year":"2009","unstructured":"Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. 2009. ImageNet: A large-scale hierarchical image database. In Proceedings of the 2009 IEEE Conference on Computer Vision and Pattern Recognition. 248\u2013255. DOI:10.1109\/CVPR.2009.5206848"},{"key":"e_1_3_4_88_2","first-page":"590","volume-title":"Proceedings of the NeurIPS 2018","author":"Dhurandhar Amit","year":"2018","unstructured":"Amit Dhurandhar, Pin-Yu Chen, Ronny Luss, Chun-Chen Tu, Paishun Ting, Karthikeyan Shanmugam, and Payel Das. 2018. Explanations based on the missing: Towards contrastive explanations with pertinent negatives. In Proceedings of the NeurIPS 2018. Curran Associates Inc., 590\u2013601."},{"key":"e_1_3_4_89_2","unstructured":"Amit Dhurandhar Tejaswini Pedapati Avinash Balakrishnan Pin-Yu Chen Karthikeyan Shanmugam and Ruchir Puri. 2019. Model Agnostic Contrastive Explanations for Structured Data. 
http:\/\/arxiv.org\/abs\/1906.00117"},{"issue":"1","key":"e_1_3_4_90_2","doi-asserted-by":"crossref","first-page":"269","DOI":"10.1007\/BF01386390","article-title":"A note on two problems in connexion with graphs","volume":"1","author":"Dijkstra Edsger W","year":"1959","unstructured":"Edsger W Dijkstra. 1959. A note on two problems in connexion with graphs. Numerische Mathematik 1, 1 (1959), 269\u2013271.","journal-title":"Numerische Mathematik"},{"volume-title":"Proceedings of IUI 2019","year":"2019","author":"Dodge Jonathan","key":"e_1_3_4_91_2","unstructured":"Jonathan Dodge, Q. Vera Liao, Yunfeng Zhang, Rachel K. E. Bellamy, and Casey Dugan. 2019. Explaining models: An empirical study of how explanations impact fairness judgment. In Proceedings of IUI 2019. ACM, New York, 11. DOI:10.1145\/3301275.3302310"},{"key":"e_1_3_4_92_2","unstructured":"Carl Doersch. 2016. Tutorial on Variational Autoencoders. arxiv:stat.ML\/1606.05908"},{"issue":"3","key":"e_1_3_4_93_2","doi-asserted-by":"crossref","first-page":"187","DOI":"10.3233\/IDA-1998-2303","article-title":"Knowledge discovery via multiple models","volume":"2","author":"Domingos Pedro","year":"1998","unstructured":"Pedro Domingos. 1998. Knowledge discovery via multiple models. Intell. Data Anal. 2, 3 (May1998), 187\u2013202.","journal-title":"Intell. Data Anal."},{"key":"e_1_3_4_94_2","first-page":"5324","volume-title":"Proceedings of the 39th International Conference on Machine Learning","author":"Dominguez-Olmedo Ricardo","year":"2022","unstructured":"Ricardo Dominguez-Olmedo, Amir H. Karimi, and Bernhard Sch\u00f6lkopf. 2022. On the adversarial robustness of causal algorithmic recourse. In Proceedings of the 39th International Conference on Machine Learning. PMLR, 5324\u20135342. https:\/\/proceedings.mlr.press\/v162\/dominguez-olmedo22a.html"},{"key":"e_1_3_4_95_2","doi-asserted-by":"publisher","unstructured":"Finale Doshi-Velez Mason Kortz Ryan Budish Chris Bavitz Sam Gershman D. 
O\u2019Brien Stuart Schieber J. Waldo D. Weinberger and Alexandra Wood. 2017. Accountability of AI Under the Law: The Role of Explanation. DOI:10.2139\/ssrn.3064761","DOI":"10.2139\/ssrn.3064761"},{"volume-title":"Proceedings of the Workshop on Human Interpretability in Machine Learning (WHI\u201920)","year":"2020","author":"Downs Michael","key":"e_1_3_4_96_2","unstructured":"Michael Downs, Jonathan Chu, Yaniv Yacoby, Finale Doshi-Velez, and Weiwei. Pan. 2020. CRUDS: Counterfactual recourse using disentangled subspaces. In Proceedings of the Workshop on Human Interpretability in Machine Learning (WHI\u201920). https:\/\/finale.seas.harvard.edu\/files\/finale\/files\/cruds-_counterfactual_recourse_using_disentangled_subspaces.pdf"},{"key":"e_1_3_4_97_2","unstructured":"Dheeru Dua and Casey Graff. 2017. UCI Machine Learning Repository - Adult Income. http:\/\/archive.ics.uci.edu\/ml\/datasets\/Adult"},{"key":"e_1_3_4_98_2","unstructured":"Dheeru Dua and Casey Graff. 2017. UCI Machine Learning Repository - Breast Cancer. https:\/\/archive.ics.uci.edu\/ml\/datasets\/Breast+Cancer+Wisconsin+(Diagnostic)"},{"key":"e_1_3_4_99_2","unstructured":"Dheeru Dua and Casey Graff. 2017. UCI Machine Learning Repository - Iris. https:\/\/archive.ics.uci.edu\/ml\/datasets\/iris"},{"key":"e_1_3_4_100_2","unstructured":"Dheeru Dua and Casey Graff. 2017. UCI Machine Learning Repository - Shopping. https:\/\/archive.ics.uci.edu\/ml\/datasets\/Online+Shoppers+Purchasing+Intention+Dataset"},{"key":"e_1_3_4_101_2","unstructured":"Dheeru Dua and Casey Graff. 2017. UCI Machine Learning Repository - Wine. https:\/\/archive.ics.uci.edu\/ml\/datasets\/wine"},{"key":"e_1_3_4_102_2","unstructured":"Jannik Dunkelau and Michael Leuschel. 2019. Fairness-Aware Machine Learning. 60 pages. 
https:\/\/www.phil-fak.uni-duesseldorf.de\/fileadmin\/Redaktion\/Institute\/Sozialwissenschaften\/Kommunikations-_und_Medienwissenschaft\/KMW_I\/Working_Paper\/Dunkelau___Leuschel__2019__Fairness-Aware_Machine_Learning.pdf"},{"key":"e_1_3_4_103_2","doi-asserted-by":"publisher","unstructured":"Tri Dung Duong Qian Li and Guandong Xu. 2021. Prototype-based Counterfactual Explanation for Causal Classification. DOI:10.48550\/ARXIV.2105.00703","DOI":"10.48550\/ARXIV.2105.00703"},{"key":"e_1_3_4_104_2","first-page":"5742","volume-title":"Proceedings of the 39th International Conference on Machine Learning","author":"Dutta Sanghamitra","year":"2022","unstructured":"Sanghamitra Dutta, Jason Long, Saumitra Mishra, Cecilia Tilli, and Daniele Magazzeni. 2022. Robust counterfactual explanations for tree-based ensembles. In Proceedings of the 39th International Conference on Machine Learning. PMLR, 5742\u20135756. https:\/\/proceedings.mlr.press\/v162\/dutta22a.html"},{"volume-title":"Proceedings of the Conference on Computer Vision and Pattern Recognition (CVPR\u201921)","year":"2021","author":"Elliott Andrew","key":"e_1_3_4_105_2","unstructured":"Andrew Elliott, Stephen Law, and Chris Russell. 2021. Explaining classifiers using adversarial perturbations on the perceptual ball. In Proceedings of the Conference on Computer Vision and Pattern Recognition (CVPR\u201921). DOI:10.48550\/ARXIV.1912.09405"},{"key":"e_1_3_4_106_2","doi-asserted-by":"publisher","unstructured":"Lukas Faber Amin K. Moghaddam and Roger Wattenhofer. 2020. Contrastive Graph Neural Network Explanation. DOI:10.48550\/ARXIV.2010.13663","DOI":"10.48550\/ARXIV.2010.13663"},{"key":"e_1_3_4_107_2","unstructured":"Daniel Faggella. 2020. Machine Learning for Medical Diagnostics \u2013 4 Current Applications. https:\/\/emerj.com\/ai-sector-overviews\/machine-learning-medical-diagnostics-4-current-applications\/. 
Accessed: 2020-10-15."},{"key":"e_1_3_4_108_2","doi-asserted-by":"publisher","unstructured":"Jake Fawkes Robin Evans and Dino Sejdinovic. 2022. Selection Ignorability and Challenges with Causal Fairness. DOI:10.48550\/ARXIV.2202.13774","DOI":"10.48550\/ARXIV.2202.13774"},{"key":"e_1_3_4_109_2","doi-asserted-by":"publisher","DOI":"10.1016\/j.neucom.2021.07.097"},{"volume-title":"Proceedings of ECAI","year":"2020","author":"Feghahati Amir H.","key":"e_1_3_4_110_2","unstructured":"Amir H. Feghahati, Christian R. Shelton, Michael J. Pazzani, and Kevin Tang. 2020. CDeepEx: Contrastive deep explanations. In Proceedings of ECAI."},{"key":"e_1_3_4_111_2","doi-asserted-by":"publisher","DOI":"10.1016\/j.inffus.2020.07.001"},{"key":"e_1_3_4_112_2","unstructured":"Carlos Fern\u00e1ndez-Lor\u00eda Foster Provost and Xintian Han. 2020. Explaining Data-Driven Decisions made by AI Systems: The Counterfactual Approach. http:\/\/arxiv.org\/abs\/2001.07417"},{"key":"e_1_3_4_113_2","doi-asserted-by":"publisher","unstructured":"Andrea Ferrario and Michele Loi. 2020. A Series of Unfortunate Counterfactual Events: the Role of Time in Counterfactual Explanations. DOI:10.48550\/ARXIV.2010.04687","DOI":"10.48550\/ARXIV.2010.04687"},{"key":"e_1_3_4_114_2","unstructured":"FICO. 2018. FICO (HELOC) Dataset. https:\/\/community.fico.com\/s\/explainable-machine-learning-challenge?tabset-3158a=2"},{"key":"e_1_3_4_115_2","unstructured":"Giorgos Filandrianos Konstantinos Thomas Edmund Dervakos and Giorgos Stamou. 2022. Conceptual edits as counterfactual explanations (CEUR Workshop Proceedings). CEUR-WS.org. http:\/\/ceur-ws.org\/Vol-3121\/paper6.pdf"},{"key":"e_1_3_4_116_2","doi-asserted-by":"publisher","unstructured":"Maximilian F\u00f6rster Philipp H\u00fchn Mathias Klier and Kilian Kluge. 2021. Capturing users\u2019 reality: A novel approach to generate coherent counterfactual explanations. 
DOI:10.24251\/HICSS.2021.155","DOI":"10.24251\/HICSS.2021.155"},{"key":"e_1_3_4_117_2","unstructured":"Maximilian F\u00f6rster Mathias Klier Kilian Kluge and Irina Sigler. 2020. Evaluating explainable Artificial intelligence\u2013What users really appreciate. (2020). https:\/\/aisel.aisnet.org\/ecis2020_rp\/195"},{"key":"e_1_3_4_118_2","doi-asserted-by":"crossref","unstructured":"Maximilian Becker Nadia Burkart Pascal Birnstill and J\u00fcrgen Beyerer. 2021. A step towards global counterfactual explanations: Approximating the feature space through hierarchical division and graph search. Adv. Artif. Intell. Mach. Learn. 1 2 (2021) 90\u2013110.","DOI":"10.54364\/AAIML.2021.1107"},{"key":"e_1_3_4_119_2","doi-asserted-by":"crossref","first-page":"77","DOI":"10.1007\/s11023-021-09580-9","article-title":"The intriguing relation between counterfactual explanations and adversarial examples","author":"Freiesleben Timo","year":"2022","unstructured":"Timo Freiesleben. 2022. The intriguing relation between counterfactual explanations and adversarial examples. Minds Mach. (Dordr.) (2022), 77\u2013109.","journal-title":"Minds Mach. (Dordr.)"},{"issue":"5","key":"e_1_3_4_120_2","doi-asserted-by":"crossref","first-page":"1189","DOI":"10.1214\/aos\/1013203450","article-title":"Greedy function approximation: A gradient boosting machine","volume":"29","author":"Friedman Jerome H.","year":"2001","unstructured":"Jerome H. Friedman. 2001. Greedy function approximation: A gradient boosting machine. The Annals of Statistics 29, 5 (2001), 1189\u20131232. http:\/\/www.jstor.org\/stable\/2699986","journal-title":"The Annals of Statistics"},{"key":"e_1_3_4_121_2","first-page":"2126","volume-title":"Proceedings of the Language Resources and Evaluation Conference","author":"Frohberg J\u00f6rg","year":"2022","unstructured":"J\u00f6rg Frohberg and Frank Binder. 2022. CRASS: A novel data set and benchmark to test counterfactual reasoning of large language models. 
In Proceedings of the Language Resources and Evaluation Conference. European Language Resources Association, Marseille, France, 2126\u20132140. https:\/\/aclanthology.org\/2022.lrec-1.229"},{"key":"e_1_3_4_122_2","doi-asserted-by":"publisher","DOI":"10.1109\/tvcg.2021.3114807"},{"key":"e_1_3_4_123_2","doi-asserted-by":"publisher","unstructured":"Maximilian F\u00f6rster Philipp H\u00fchn Mathias Klier and Kilian Kluge. 2021. Capturing users\u2019 reality: A novel approach to generate coherent counterfactual explanations. DOI:10.24251\/HICSS.2021.155","DOI":"10.24251\/HICSS.2021.155"},{"volume-title":"Proceedings of the International Conference on Management of Data (SIGMOD \u201921) (Virtual Event, China, June 20\u201325, 2021)","year":"2021","author":"Galhotra Sainyam","key":"e_1_3_4_124_2","unstructured":"Sainyam Galhotra, Romila Pradhan, and Babak Salimi. 2021. Explaining black-box algorithms using probabilistic contrastive counterfactuals. In Proceedings of the International Conference on Management of Data (SIGMOD \u201921) (Virtual Event, China, June 20\u201325, 2021). ACM. DOI:10.1145\/3448016.3458455"},{"key":"e_1_3_4_125_2","first-page":"4064","volume-title":"Proceedings of the 2021 IEEE International Conference on Big Data (Big Data)","author":"Gan Jingwei","year":"2021","unstructured":"Jingwei Gan, Shinan Zhang, Chi Zhang, and Andy Li. 2021. Automated counterfactual generation in financial model risk management. In Proceedings of the 2021 IEEE International Conference on Big Data (Big Data). 4064\u20134068. DOI:10.1109\/BigData52589.2021.9671561"},{"key":"e_1_3_4_126_2","doi-asserted-by":"crossref","first-page":"263","DOI":"10.1007\/s00521-009-0295-6","article-title":"Pattern classification with missing data: A review","volume":"19","author":"Garc\u00eda-Laencina P. J.","year":"2009","unstructured":"P. J. Garc\u00eda-Laencina, J. Sancho-G\u00f3mez, and A. R. Figueiras-Vidal. 2009. Pattern classification with missing data: A review. 
Neural Computing and Applications 19 (2009), 263\u2013282.","journal-title":"Neural Computing and Applications"},{"key":"e_1_3_4_127_2","unstructured":"Gordon Garisch. [n. d.]. Model Lifecycle Transformation: How Banks Are Unlocking Efficiencies. https:\/\/financialservicesblog.accenture.com\/model-lifecycle-transformation-how-banks-are-unlocking-efficiencies. Accessed: 2022-10-15."},{"key":"e_1_3_4_128_2","doi-asserted-by":"publisher","unstructured":"Yingqiang Ge Shuchang Liu Zelong Li Shuyuan Xu Shijie Geng Yunqi Li Juntao Tan Fei Sun and Yongfeng Zhang. 2021. Counterfactual Evaluation for Explainable AI. DOI:10.48550\/ARXIV.2109.01962","DOI":"10.48550\/ARXIV.2109.01962"},{"volume-title":"Proceedings of the International Conference on Learning Representations","year":"2022","author":"Ghandeharioun Asma","key":"e_1_3_4_129_2","unstructured":"Asma Ghandeharioun, Been Kim, Chun-Liang Li, Brendan Jou, Brian Eoff, and Rosalind Picard. 2022. DISSECT: Disentangled simultaneous explanations via concept traversals. In Proceedings of the International Conference on Learning Representations. https:\/\/openreview.net\/forum?id=qY79G8jGsep"},{"key":"e_1_3_4_130_2","doi-asserted-by":"publisher","unstructured":"Azin Ghazimatin Oana Balalau Rishiraj Saha Roy and Gerhard Weikum. 2020. PRINCE: Provider-side interpretability with counterfactual explanations in recommender systems (WSDM \u201920). ACM, New York, 9. DOI:10.1145\/3336191.3371824","DOI":"10.1145\/3336191.3371824"},{"key":"e_1_3_4_131_2","doi-asserted-by":"publisher","DOI":"10.1145\/3450614.3462238"},{"key":"e_1_3_4_132_2","doi-asserted-by":"publisher","DOI":"10.1080\/10618600.2014.907095"},{"key":"e_1_3_4_133_2","doi-asserted-by":"publisher","unstructured":"Oscar Gomez Steffen Holter Jun Yuan and Enrico Bertini. 2020. ViCE: Visual counterfactual explanations for machine learning models. In Proceedings of IUI \u201920. 5. 
DOI:10.1145\/3377325.3377536","DOI":"10.1145\/3377325.3377536"},{"key":"e_1_3_4_134_2","doi-asserted-by":"publisher","unstructured":"Oscar Gomez Steffen Holter Jun Yuan and Enrico Bertini. 2021. AdViCE: Aggregated Visual Counterfactual Explanations for Machine Learning Model Validation. DOI:10.48550\/ARXIV.2109.05629","DOI":"10.48550\/ARXIV.2109.05629"},{"key":"e_1_3_4_135_2","article-title":"EU regulations on algorithmic decision-making and a \u201cRight to Explanation\u201d","volume":"1606","author":"Goodman Bryce","year":"2016","unstructured":"Bryce Goodman and S. Flaxman. 2016. EU regulations on algorithmic decision-making and a \u201cRight to Explanation\u201d. ArXiv abs\/1606.08813 (2016).","journal-title":"ArXiv"},{"key":"e_1_3_4_136_2","first-page":"2376","volume-title":"Proceedings of ICML 2019","author":"Goyal Yash","year":"2019","unstructured":"Yash Goyal, Ziyan Wu, Jan Ernst, Dhruv Batra, Devi Parikh, and Stefan Lee. 2019. Counterfactual visual explanations. In Proceedings of ICML 2019. PMLR, 2376\u20132384. https:\/\/proceedings.mlr.press\/v97\/goyal19a.html"},{"key":"e_1_3_4_137_2","unstructured":"Preston Gralla. 2016. Amazon Prime and the Racist Algorithms. https:\/\/www.computerworld.com\/article\/3068622\/amazon-prime-and-the-racist-algorithms.html"},{"key":"e_1_3_4_138_2","unstructured":"Rory McGrath Luca Costabello Chan Le Van Paul Sweeney Farbod Kamiab Zhao Shen and Freddy Lecue. 2018. Interpretable Credit Application Predictions with Counterfactual Explanations. http:\/\/arxiv.org\/abs\/1811.05245"},{"key":"e_1_3_4_139_2","unstructured":"Home Credit Group. 2018. Home Credit Default Risk. https:\/\/www.kaggle.com\/c\/home-credit-default-risk\/data"},{"key":"e_1_3_4_140_2","unstructured":"Riccardo Guidotti Anna Monreale Salvatore Ruggieri Dino Pedreschi Franco Turini and Fosca Giannotti. 2018. Local Rule-Based Explanations of Black Box Decision Systems. 
http:\/\/arxiv.org\/abs\/1805.10820"},{"key":"e_1_3_4_141_2","doi-asserted-by":"publisher","DOI":"10.1145\/3236009"},{"key":"e_1_3_4_142_2","doi-asserted-by":"publisher","unstructured":"Riccardo Guidotti and Salvatore Ruggieri. 2021. Ensemble of counterfactual explainers. Springer-Verlag Berlin 11. DOI:10.1007\/978-3-030-88942-5_28","DOI":"10.1007\/978-3-030-88942-5_28"},{"key":"e_1_3_4_143_2","doi-asserted-by":"publisher","DOI":"10.1007\/s13735-021-00208-3"},{"key":"e_1_3_4_144_2","doi-asserted-by":"publisher","unstructured":"Hangzhi Guo Thanh Hong Nguyen and Amulya Yadav. 2021. CounterNet: End-to-End Training of Counterfactual Aware Predictions. DOI:10.48550\/ARXIV.2109.07557","DOI":"10.48550\/ARXIV.2109.07557"},{"key":"e_1_3_4_145_2","doi-asserted-by":"publisher","unstructured":"Sharmi Dev Gupta Begum Genc and Barry O\u2019Sullivan. 2022. Finding Counterfactual Explanations through Constraint Relaxations. DOI:10.48550\/ARXIV.2204.03429","DOI":"10.48550\/ARXIV.2204.03429"},{"key":"e_1_3_4_146_2","unstructured":"Vivek Gupta Pegah Nokhiz Chitradeep Dutta Roy and Suresh Venkatasubramanian. 2019. Equalizing Recourse Across Groups. https:\/\/arxiv.org\/abs\/1909.03166"},{"key":"e_1_3_4_147_2","doi-asserted-by":"publisher","unstructured":"Victor Guyomard Fran\u00e7oise Fessant Tassadit Bouadi and Thomas Guyet. 2021. Post-hoc counterfactual generation with supervised autoencoder. DOI:10.1007\/978-3-030-93736-2_10","DOI":"10.1007\/978-3-030-93736-2_10"},{"key":"e_1_3_4_148_2","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-030-93736-2_37"},{"key":"e_1_3_4_149_2","doi-asserted-by":"crossref","first-page":"83","DOI":"10.1145\/3430984.3431015","volume-title":"Proceedings of the 8th ACM IKDD CODS and 26th COMAD","author":"Haldar Swastik","year":"2021","unstructured":"Swastik Haldar, Philips George John, and Diptikalyan Saha. 2021. Reliable counterfactual explanations for autoencoder based anomalies. In Proceedings of the 8th ACM IKDD CODS and 26th COMAD. ACM. 
New York, 83\u201391. DOI:10.1145\/3430984.3431015"},{"key":"e_1_3_4_150_2","first-page":"1","volume-title":"Proceedings of the 2021 International Joint Conference on Neural Networks (IJCNN\u201921)","author":"Han Xing","year":"2021","unstructured":"Xing Han and Joydeep Ghosh. 2021. Model-agnostic explanations using minimal forcing subsets. In Proceedings of the 2021 International Joint Conference on Neural Networks (IJCNN\u201921). 1\u20138. DOI:10.1109\/IJCNN52387.2021.9533992"},{"key":"e_1_3_4_151_2","doi-asserted-by":"publisher","unstructured":"Masoud Hashemi and Ali Fathi. 2020. PermuteAttack: Counterfactual Explanation of Machine Learning Credit Scorecards. DOI:10.48550\/ARXIV.2008.10138","DOI":"10.48550\/ARXIV.2008.10138"},{"key":"e_1_3_4_152_2","doi-asserted-by":"publisher","unstructured":"Lisa Anne Hendricks Ronghang Hu Trevor Darrell and Zeynep Akata. 2018. Generating Counterfactual Explanations with Natural Language. DOI:10.48550\/ARXIV.1806.09809","DOI":"10.48550\/ARXIV.1806.09809"},{"key":"e_1_3_4_153_2","doi-asserted-by":"publisher","DOI":"10.1007\/s10618-014-0368-8"},{"key":"e_1_3_4_154_2","doi-asserted-by":"publisher","unstructured":"Fabian Hinder and Barbara Hammer. 2020. Counterfactual Explanations of Concept Drift. DOI:10.48550\/ARXIV.2006.12822","DOI":"10.48550\/ARXIV.2006.12822"},{"key":"e_1_3_4_155_2","doi-asserted-by":"publisher","DOI":"10.24432\/C5NC77"},{"key":"e_1_3_4_156_2","doi-asserted-by":"crossref","DOI":"10.1145\/3290605.3300809","article-title":"Gamut: A design probe to understand how data scientists understand machine learning models","author":"Hohman Fred","year":"2019","unstructured":"Fred Hohman, Andrew Head, Rich Caruana, Robert DeLine, and Steven Mark Drucker. 2019. Gamut: A design probe to understand how data scientists understand machine learning models. 
In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems (2019).","journal-title":"Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems"},{"key":"e_1_3_4_157_2","doi-asserted-by":"publisher","DOI":"10.1371\/journal.pone.0201016"},{"key":"e_1_3_4_158_2","doi-asserted-by":"publisher","DOI":"10.24432\/C53G6X"},{"key":"e_1_3_4_159_2","unstructured":"The US White House. 2022. Blueprint for an AI Bill of Rights. https:\/\/www.whitehouse.gov\/ostp\/ai-bill-of-rights\/#discrimination"},{"key":"e_1_3_4_160_2","doi-asserted-by":"crossref","first-page":"88","DOI":"10.1109\/ICPM53251.2021.9576881","volume-title":"Proceedings of the 2021 3rd International Conference on Process Mining (ICPM\u201921)","author":"Hsieh Chihcheng","year":"2021","unstructured":"Chihcheng Hsieh, Catarina Moreira, and Chun Ouyang. 2021. DiCE4EL: Interpreting process predictions using a milestone-aware counterfactual approach. In Proceedings of the 2021 3rd International Conference on Process Mining (ICPM\u201921). 88\u201395. DOI:10.1109\/ICPM53251.2021.9576881"},{"key":"e_1_3_4_161_2","doi-asserted-by":"publisher","unstructured":"Tsung-Hao Huang Andreas Metzger and Klaus Pohl. 2022. Counterfactual explanations for predictive business process monitoring. Springer International Publishing Cham 399\u2013413. DOI:10.1007\/978-3-030-95947-0_28","DOI":"10.1007\/978-3-030-95947-0_28"},{"key":"e_1_3_4_162_2","doi-asserted-by":"publisher","unstructured":"Frederik Hvilsh\u00f8j Alexandros Iosifidis and Ira Assent. 2021. ECINN: Efficient Counterfactuals from Invertible Neural Networks. DOI:10.48550\/ARXIV.2103.13701","DOI":"10.48550\/ARXIV.2103.13701"},{"key":"e_1_3_4_163_2","doi-asserted-by":"publisher","unstructured":"Frederik Hvilsh\u00f8j Alexandros Iosifidis and Ira Assent. 2021. On Quantitative Evaluations of Counterfactuals. 
DOI:10.48550\/ARXIV.2111.00177","DOI":"10.48550\/ARXIV.2111.00177"},{"key":"e_1_3_4_164_2","doi-asserted-by":"publisher","unstructured":"Benedikt H\u00f6ltgen Lisa Schut Jan M. Brauner and Yarin Gal. 2021. DeDUCE: Generating Counterfactual Explanations Efficiently. DOI:10.48550\/ARXIV.2111.15639","DOI":"10.48550\/ARXIV.2111.15639"},{"key":"e_1_3_4_165_2","unstructured":"Global Women in Data Science Conference The Global Open Source Severity of Illness Score Consortium. 2020. WiDS Datathon 2020. https:\/\/www.kaggle.com\/c\/widsdatathon2020"},{"key":"e_1_3_4_166_2","unstructured":"Allstate Insurance. 2011. Allstate Claim Prediction Challenge. https:\/\/www.kaggle.com\/c\/ClaimPredictionChallenge"},{"key":"e_1_3_4_167_2","unstructured":"France Intelligence Artificielle. [n. d.]. Rapport de Synthese France Intelligence Artificielle. https:\/\/www.economie.gouv.fr\/files\/files\/PDF\/2017\/Rapport_synthese_France_IA_.pdf. Accessed: 2020-10-15."},{"key":"e_1_3_4_168_2","doi-asserted-by":"publisher","unstructured":"Jeremy Irvin Pranav Rajpurkar Michael Ko Yifan Yu Silviana Ciurea-Ilcus Chris Chute Henrik Marklund Behzad Haghgoo Robyn Ball Katie Shpanskaya Jayne Seekins David A. Mong Safwan S. Halabi Jesse K. Sandberg Ricky Jones David B. Larson Curtis P. Langlotz Bhavik N. Patel Matthew P. Lungren and Andrew Y. Ng. 2019. CheXpert: A Large Chest Radiograph Dataset with Uncertainty Labels and Expert Comparison. DOI:10.48550\/ARXIV.1901.07031","DOI":"10.48550\/ARXIV.1901.07031"},{"key":"e_1_3_4_169_2","doi-asserted-by":"publisher","unstructured":"Paul Jacob \u00c9loi Zablocki H\u00e9di Ben-Younes Micka\u00ebl Chen Patrick P\u00e9rez and Matthieu Cord. [n. d.]. STEEX: Steering Counterfactual Explanations with Semantics. DOI:10.48550\/ARXIV.2111.09094","DOI":"10.48550\/ARXIV.2111.09094"},{"key":"e_1_3_4_170_2","doi-asserted-by":"publisher","unstructured":"Guillaume Jeanneret Lo\u00efc Simon and Fr\u00e9d\u00e9ric Jurie. 2022. Diffusion Models for Counterfactual Explanations. 
DOI:10.48550\/ARXIV.2203.15636","DOI":"10.48550\/ARXIV.2203.15636"},{"key":"e_1_3_4_171_2","unstructured":"Lauren Kirchner Jeff Larson Surya Mattu and Julia Angwin. 2016. ProPublica COMPAS Analysis Dataset. https:\/\/github.com\/propublica\/compas-analysis\/"},{"key":"e_1_3_4_172_2","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-030-77211-6_46"},{"key":"e_1_3_4_173_2","doi-asserted-by":"publisher","unstructured":"Alistair Johnson Lucas Bulgarelli Tom Pollard Steven Horng Leo Anthony Celi and Roger Mark. 2021. MIMIC-IV. DOI:10.13026\/S6N6-XD98","DOI":"10.13026\/S6N6-XD98"},{"key":"e_1_3_4_174_2","doi-asserted-by":"publisher","DOI":"10.1080\/15377938.2014.984045"},{"key":"e_1_3_4_175_2","unstructured":"Shalmali Joshi Oluwasanmi Koyejo Warut Vijitbenjaronk Been Kim and Joydeep Ghosh. 2019. Towards Realistic Individual Recourse and Actionable Explanations in Black-Box Decision Making Systems. http:\/\/arxiv.org\/abs\/1907.09615"},{"key":"e_1_3_4_176_2","doi-asserted-by":"publisher","unstructured":"Hong-Gyu Jung Sin-Han Kang Hee-Dong Kim Dong-Ok Won and Seong-Whan Lee. 2020. Counterfactual Explanation Based on Gradual Construction for Deep Networks. DOI:10.48550\/ARXIV.2008.01897","DOI":"10.48550\/ARXIV.2008.01897"},{"key":"e_1_3_4_177_2","doi-asserted-by":"publisher","unstructured":"Vassilis Kaffes Dimitris Sacharidis and Giorgos Giannopoulos. 2021. Model-agnostic counterfactual explanations of recommendations (UMAP \u201921). ACM, New York, 6. DOI:10.1145\/3450613.3456846","DOI":"10.1145\/3450613.3456846"},{"key":"e_1_3_4_178_2","unstructured":"Kaggle. 2012. Give Me Some Credit. https:\/\/www.kaggle.com\/c\/GiveMeSomeCredit"},{"key":"e_1_3_4_179_2","doi-asserted-by":"crossref","first-page":"136","DOI":"10.1037\/0033-295X.93.2.136","article-title":"Norm theory: Comparing reality to its alternatives.","volume":"93","author":"Kahneman D.","year":"1986","unstructured":"D. Kahneman and D. Miller. 1986. 
Norm theory: Comparing reality to its alternatives. Psychological Review 93 (1986), 136\u2013153.","journal-title":"Psychological Review"},{"volume-title":"Proceedings of the International Joint Conference on Artificial Intelligence (IJCAI\u201920)","year":"2020","author":"Kanamori Kentaro","key":"e_1_3_4_180_2","unstructured":"Kentaro Kanamori, Takuya Takagi, Ken Kobayashi, and Hiroki Arimura. 2020. DACE: Distribution-aware counterfactual explanation by mixed-integer linear optimization. In Proceedings of the International Joint Conference on Artificial Intelligence (IJCAI\u201920). DOI:10.24963\/ijcai.2020\/395"},{"key":"e_1_3_4_181_2","unstructured":"Kentaro Kanamori Takuya Takagi Ken Kobayashi and Yuichi Ike. 2022. Counterfactual explanation trees: Transparent and consistent actionable recourse with decision tree. In Proceedings of Machine Learning Research (PMLR), 1846\u20131870."},{"key":"e_1_3_4_182_2","doi-asserted-by":"publisher","DOI":"10.1609\/aaai.v35i13.17376"},{"key":"e_1_3_4_183_2","unstructured":"A.-H. Karimi G. Barthe B. Balle and I. Valera. 2020. Model-Agnostic Counterfactual Explanations for Consequential Decisions. http:\/\/arxiv.org\/abs\/1905.11190"},{"key":"e_1_3_4_184_2","doi-asserted-by":"publisher","unstructured":"Amir-Hossein Karimi Bernhard Sch\u00f6lkopf and Isabel Valera. 2021. Algorithmic recourse: From counterfactual explanations to interventions. In Proceedings of FAccT \u201921. ACM, New York, 10. DOI:10.1145\/3442188.3445899","DOI":"10.1145\/3442188.3445899"},{"key":"e_1_3_4_185_2","unstructured":"Amir-Hossein Karimi Julius von K\u00fcgelgen Bernhard Sch\u00f6lkopf and Isabel Valera. 2020. Algorithmic Recourse under Imperfect Causal Knowledge: A Probabilistic Approach. 
http:\/\/arxiv.org\/abs\/2006.06831"},{"key":"e_1_3_4_186_2","doi-asserted-by":"publisher","DOI":"10.1007\/s10115-019-01389-4"},{"volume-title":"Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency","year":"2021","author":"Kasirzadeh Atoosa","key":"e_1_3_4_187_2","unstructured":"Atoosa Kasirzadeh and Andrew Smart. 2021. The use and misuse of counterfactuals in ethical machine learning. In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency. ACM, New York, 9. DOI:10.1145\/3442188.3445886"},{"key":"e_1_3_4_188_2","article-title":"If only we had better counterfactual explanations: Five key deficits to rectify in the evaluation of counterfactual XAI techniques","author":"Keane Mark T.","year":"2021","unstructured":"Mark T. Keane, Eoin M. Kenny, Eoin Delaney, and Barry Smyth. 2021. If only we had better counterfactual explanations: Five key deficits to rectify in the evaluation of counterfactual XAI techniques. CoRR (2021). https:\/\/arxiv.org\/abs\/2103.01035","journal-title":"CoRR"},{"key":"e_1_3_4_189_2","doi-asserted-by":"crossref","unstructured":"Mark T. Keane and Barry Smyth. 2020. Good Counterfactuals and Where to Find Them: A Case-Based Technique for Generating Counterfactuals for Explainable AI (XAI). arxiv:cs.AI\/2005.13997","DOI":"10.1007\/978-3-030-58342-2_11"},{"key":"e_1_3_4_190_2","first-page":"52907","volume-title":"Advances in Neural Information Processing Systems","volume":"36","author":"Kenny Eoin","year":"2023","unstructured":"Eoin Kenny and Weipeng Huang. 2023. The utility of \u201cEven if\u201d semifactual explanation to optimise positive outcomes. In Advances in Neural Information Processing Systems, A. Oh, T. Naumann, A. Globerson, K. Saenko, M. Hardt, and S. Levine (Eds.), Vol. 36. Curran Associates, Inc., 52907\u201352935. 
https:\/\/proceedings.neurips.cc\/paper_files\/paper\/2023\/file\/a5e146ca55a2b18be41942cfa677123d-Paper-Conference.pdf"},{"key":"e_1_3_4_191_2","unstructured":"Eoin M. Kenny and Mark T. Keane. 2020. On Generating Plausible Counterfactual and Semi-Factual Explanations for Deep Learning. arxiv:2009.06399"},{"key":"e_1_3_4_192_2","article-title":"On generating plausible counterfactual and semi-factual explanations for deep learning","volume":"35","author":"Kenny Eoin M.","year":"2021","unstructured":"Eoin M. Kenny and Mark T. Keane. 2021. On generating plausible counterfactual and semi-factual explanations for deep learning. In Proceedings of the AAAI Conference on Artificial Intelligence 35 (May 2021), 11. https:\/\/ojs.aaai.org\/index.php\/AAAI\/article\/view\/17377","journal-title":"Proceedings of the AAAI Conference on Artificial Intelligence"},{"volume-title":"Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition (CVPR\u201922)","year":"2022","author":"Khorram Saeed","key":"e_1_3_4_193_2","unstructured":"Saeed Khorram and Li Fuxin. 2022. Cycle-consistent counterfactuals by latent transformations. In Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition (CVPR\u201922). 10."},{"volume-title":"Advances in Neural Information Processing Systems","year":"2016","author":"Kim Been","key":"e_1_3_4_194_2","unstructured":"Been Kim, Rajiv Khanna, and Oluwasanmi O. Koyejo. 2016. Examples are not enough, learn to criticize! criticism for interpretability. In Advances in Neural Information Processing Systems, D. Lee, M. Sugiyama, U. Luxburg, I. Guyon, and R. Garnett (Eds.), Vol. 29. Curran Associates, Inc. https:\/\/proceedings.neurips.cc\/paper_files\/paper\/2016\/file\/5680522b8e2bb01943234bce7bf84534-Paper.pdf"},{"key":"e_1_3_4_195_2","doi-asserted-by":"publisher","DOI":"10.1093\/mind\/fzl261"},{"key":"e_1_3_4_196_2","unstructured":"Will Knight. 2019. 
The Apple Card Didn\u2019t \u2019See\u2019 Gender-and That\u2019s the Problem. https:\/\/www.wired.com\/story\/the-apple-card-didnt-see-genderand-thats-the-problem\/"},{"volume-title":"Towards Unifying Feature Attribution and Counterfactual Explanations: Different Means to the Same End","year":"2021","author":"Mothilal Ramaravind Kommiya","key":"e_1_3_4_197_2","unstructured":"Ramaravind Kommiya Mothilal, Divyat Mahajan, Chenhao Tan, and Amit Sharma. 2021. Towards Unifying Feature Attribution and Counterfactual Explanations: Different Means to the Same End. ACM, New York."},{"key":"e_1_3_4_198_2","doi-asserted-by":"publisher","unstructured":"Jaehoon Koo Diego Klabjan and Jean Utke. 2020. Inverse Classification with Limited Budget and Maximum Number of Perturbed Samples. DOI:10.48550\/ARXIV.2009.14111","DOI":"10.48550\/ARXIV.2009.14111"},{"key":"e_1_3_4_199_2","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-030-86772-0_17"},{"key":"e_1_3_4_200_2","volume-title":"Proceedings of the 27th International Conference on Principles and Practice of Constraint Programming (CP\u201921)","volume":"210","author":"Korikov Anton","year":"2021","unstructured":"Anton Korikov and J. Christopher Beck. 2021. Counterfactual explanations via inverse constraint programming. In Proceedings of the 27th International Conference on Principles and Practice of Constraint Programming (CP\u201921), Vol. 210. Schloss Dagstuhl \u2013 Leibniz-Zentrum f\u00fcr Informatik. DOI:10.4230\/LIPIcs.CP.2021.35"},{"key":"e_1_3_4_201_2","first-page":"4097","volume-title":"Proceedings of IJCAI-21","author":"Korikov Anton","year":"2021","unstructured":"Anton Korikov, Alexander Shleyfman, and J. Christopher Beck. 2021. Counterfactual explanations for optimization-based decisions in the context of the GDPR. In Proceedings of IJCAI-21. 4097\u20134103. 
DOI:10.24963\/ijcai.2021\/564"},{"key":"e_1_3_4_202_2","doi-asserted-by":"publisher","DOI":"10.15388\/21-INFOR468"},{"key":"e_1_3_4_203_2","doi-asserted-by":"publisher","DOI":"10.1016\/S0031-3203(98)00181-2"},{"volume-title":"HILDA\u201917","year":"2017","author":"Krishnan Sanjay","key":"e_1_3_4_204_2","unstructured":"Sanjay Krishnan and Eugene Wu. 2017. PALM: Machine learning explanations for iterative debugging. In Proceedings of HILDA \u201917. ACM, New York, 6. DOI:10.1145\/3077257.3077271"},{"key":"e_1_3_4_205_2","article-title":"Keep your friends close and your counterfactuals closer: Improved learning from closest rather than plausible counterfactual explanations in an abstract setting","volume":"2205","author":"Kuhl Ulrike","year":"2022","unstructured":"Ulrike Kuhl, Andr\u00e9 Artelt, and Barbara Hammer. 2022. Keep your friends close and your counterfactuals closer: Improved learning from closest rather than plausible counterfactual explanations in an abstract setting. ArXiv abs\/2205.05515 (2022).","journal-title":"ArXiv"},{"key":"e_1_3_4_206_2","article-title":"Counterfactual fairness","volume":"30","author":"Kusner Matt J.","year":"2017","unstructured":"Matt J. Kusner, Joshua Loftus, Chris Russell, and Ricardo Silva. 2017. Counterfactual fairness. Advances in Neural Information Processing Systems 30 (2017).","journal-title":"Advances in Neural Information Processing Systems"},{"key":"e_1_3_4_207_2","doi-asserted-by":"publisher","unstructured":"Gunnar K\u00f6nig Timo Freiesleben and Moritz Grosse-Wentrup. 2021. A Causal Perspective on Meaningful and Robust Algorithmic Recourse. DOI:10.48550\/ARXIV.2107.07853","DOI":"10.48550\/ARXIV.2107.07853"},{"key":"e_1_3_4_208_2","doi-asserted-by":"publisher","unstructured":"Jokin Labaien Ekhi Zugasti and Xabier De Carlos. 2021. DA-DGCEx: Ensuring Validity of Deep Guided Counterfactual Explanations with Distribution-Aware Autoencoder Loss. 
DOI:10.48550\/ARXIV.2104.09062","DOI":"10.48550\/ARXIV.2104.09062"},{"key":"e_1_3_4_209_2","first-page":"162","volume-title":"Proceedings of SDM","author":"Lash Michael T.","year":"2017","unstructured":"Michael T. Lash, Qihang Lin, William Nick Street, Jennifer G. Robinson, and Jeffrey W. Ohlmann. 2017. Generalized inverse classification. In Proceedings of SDM. Society for Industrial and Applied Mathematics, Philadelphia, PA, 162\u2013170. DOI:10.1137\/1.9781611974973.19"},{"key":"e_1_3_4_210_2","unstructured":"Thibault Laugel Marie-Jeanne Lesot Christophe Marsala and Marcin Detyniecki. 2019. Issues with Post-hoc Counterfactual Explanations: A Discussion. arxiv:1906.04774"},{"volume-title":"Proceedings of Information Processing and Management of Uncertainty in Knowledge-Based Systems, Theory and Foundations (IPMU\u201918)","year":"2018","author":"Laugel Thibault","key":"e_1_3_4_211_2","unstructured":"Thibault Laugel, Marie-Jeanne Lesot, Christophe Marsala, Xavier Renard, and Marcin Detyniecki. 2018. Comparison-based inverse classification for interpretability in machine learning. In Proceedings of Information Processing and Management of Uncertainty in Knowledge-Based Systems, Theory and Foundations (IPMU\u201918). Springer International Publishing. DOI:10.1007\/978-3-319-91473-2_9"},{"key":"e_1_3_4_212_2","doi-asserted-by":"crossref","unstructured":"Thibault Laugel Marie-Jeanne Lesot Christophe Marsala Xavier Renard and Marcin Detyniecki. 2019. The Dangers of Post-hoc Interpretability: Unjustified Counterfactual Explanations. http:\/\/arxiv.org\/abs\/1907.09294","DOI":"10.24963\/ijcai.2019\/388"},{"key":"e_1_3_4_213_2","unstructured":"Thai Le Suhang Wang and Dongwon Lee. 2019. GRACE: Generating Concise and Informative Contrastive Sample to Explain Neural Network Model\u2019s Prediction. arxiv:cs.LG\/1911.02042"},{"key":"e_1_3_4_214_2","unstructured":"Yann LeCun and Corinna Cortes. 2010. MNIST handwritten digit database. (2010). 
http:\/\/yann.lecun.com\/exdb\/mnist\/"},{"key":"e_1_3_4_215_2","first-page":"1","volume-title":"Proceedings of the 2021 IEEE 8th International Conference on Data Science and Advanced Analytics (DSAA\u201921)","author":"Leung Carson K.","year":"2021","unstructured":"Carson K. Leung, Adam G. M. Pazdor, and Joglas Souza. 2021. Explainable artificial intelligence for data science on customer churn. In Proceedings of the 2021 IEEE 8th International Conference on Data Science and Advanced Analytics (DSAA\u201921). 1\u201310. DOI:10.1109\/DSAA53316.2021.9564166"},{"volume-title":"Counterfactuals","year":"1973","author":"Lewis David","key":"e_1_3_4_216_2","unstructured":"David Lewis. 1973. Counterfactuals. Blackwell Publishers, Oxford."},{"volume-title":"Proceedings of the ICLR Workshop on Privacy, Accountability, Interpretability, Robustness, Reasoning on Structured Data","year":"2022","author":"Ley Dan","key":"e_1_3_4_217_2","unstructured":"Dan Ley, Saumitra Mishra, and Daniele Magazzeni. 2022. Global counterfactual explanations: Investigations, implementations and improvements. In Proceedings of the ICLR Workshop on Privacy, Accountability, Interpretability, Robustness, Reasoning on Structured Data."},{"key":"e_1_3_4_218_2","first-page":"972","volume-title":"Proceedings of BIBM 2021","author":"Li Yan","year":"2021","unstructured":"Yan Li, Shasha Liu, Chunwei Wu, Xidong Xi, Guitao Cao, and Wenming Cao. 2021. DCFG: Discovering directional CounterFactual generation for chest X-rays. In Proceedings of BIBM 2021. 972\u2013979. DOI:10.1109\/BIBM52615.2021.9669770"},{"key":"e_1_3_4_219_2","first-page":"1","volume-title":"Proceedings of the 2019 IEEE Global Conference on Signal and Information Processing (GlobalSIP\u201919)","author":"Liu Shusen","year":"2019","unstructured":"Shusen Liu, Bhavya Kailkhura, Donald Loveland, and Yong Han. 2019. Generative counterfactual introspection for explainable deep learning. 
In Proceedings of the 2019 IEEE Global Conference on Signal and Information Processing (GlobalSIP\u201919). 1\u20135. DOI:10.1109\/GlobalSIP45357.2019.8969491"},{"key":"e_1_3_4_220_2","doi-asserted-by":"publisher","unstructured":"Ziwei Liu Ping Luo Xiaogang Wang and Xiaoou Tang. 2014. Deep learning face attributes in the wild. (2014). DOI:10.1109\/ICCV.2015.425","DOI":"10.1109\/ICCV.2015.425"},{"volume-title":"Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency","year":"2020","author":"Lucic Ana","key":"e_1_3_4_221_2","unstructured":"Ana Lucic, Hinda Haned, and Maarten de Rijke. 2020. Why does my model fail? Contrastive local explanations for retail forecasting. In Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency. ACM, New York, 9. DOI:10.1145\/3351095.3372824"},{"key":"e_1_3_4_222_2","doi-asserted-by":"publisher","unstructured":"Ana Lucic Harrie Oosterhuis Hinda Haned and Maarten de Rijke. 2019. FOCUS: Flexible Optimizable Counterfactual Explanations for Tree Ensembles. DOI:10.48550\/ARXIV.1911.12199","DOI":"10.48550\/ARXIV.1911.12199"},{"key":"e_1_3_4_223_2","unstructured":"Ana Lucic Harrie Oosterhuis Hinda Haned and Maarten de Rijke. 2020. Actionable Interpretability through Optimizable Counterfactual Explanations for Tree Ensembles. http:\/\/arxiv.org\/abs\/1911.12199"},{"key":"e_1_3_4_224_2","unstructured":"Ana Lucic Maartje ter Hoeve Gabriele Tolomei Maarten de Rijke and Fabrizio Silvestri. 2021. CF-GNNExplainer: Counterfactual Explanations for Graph Neural Networks. arxiv:cs.LG\/2102.03322"},{"key":"e_1_3_4_225_2","first-page":"4765","volume-title":"Advances in Neural Information Processing Systems 30","author":"Lundberg Scott M.","year":"2017","unstructured":"Scott M. Lundberg and Su-In Lee. 2017. A unified approach to interpreting model predictions. In Advances in Neural Information Processing Systems 30.
Curran Associates, Inc., 4765\u20134774."},{"key":"e_1_3_4_226_2","unstructured":"Freddie Mac. 2019. Single Family Loan-level Dataset. https:\/\/www.freddiemac.com\/research\/datasets\/sf-loanlevel-dataset"},{"key":"e_1_3_4_227_2","doi-asserted-by":"crossref","first-page":"13516","DOI":"10.1609\/aaai.v35i15.17594","article-title":"Generate your counterfactuals: Towards controlled counterfactual generation for text","volume":"35","author":"Madaan Nishtha","year":"2021","unstructured":"Nishtha Madaan, Inkit Padhi, Naveen Panwar, and Diptikalyan Saha. 2021. Generate your counterfactuals: Towards controlled counterfactual generation for text. In Proceedings of the AAAI Conference on Artificial Intelligence 35 (May 2021), 13516\u201313524. https:\/\/ojs.aaai.org\/index.php\/AAAI\/article\/view\/17594","journal-title":"Proceedings of the AAAI Conference on Artificial Intelligence"},{"key":"e_1_3_4_228_2","unstructured":"Fannie Mae. 2020. Fannie Mae Dataset. https:\/\/www.fanniemae.com\/portal\/funding-the-market\/data\/loan-performance-data.html"},{"key":"e_1_3_4_229_2","doi-asserted-by":"publisher","DOI":"10.1515\/bile-2017-0002"},{"key":"e_1_3_4_230_2","unstructured":"Divyat Mahajan Chenhao Tan and Amit Sharma. 2020. Preserving Causal Constraints in Counterfactual Explanations for Machine Learning Classifiers. http:\/\/arxiv.org\/abs\/1912.03277"},{"key":"e_1_3_4_231_2","doi-asserted-by":"publisher","DOI":"10.1007\/s10618-021-00818-9"},{"key":"e_1_3_4_232_2","doi-asserted-by":"publisher","DOI":"10.25300\/MISQ\/2014\/38.1.04"},{"key":"e_1_3_4_233_2","first-page":"1","volume-title":"Proceedings of the International Workshop on Fair, Effective and Sustainable Talent Management using Data Science","author":"Mazzine Raphael","year":"2021","unstructured":"Raphael Mazzine, Sofie Goethals, Dieter Brughmans, and David Martens. 2021. Counterfactual explanations for employment services.
In Proceedings of the International Workshop on Fair, Effective and Sustainable Talent Management using Data Science. 1\u20137."},{"key":"e_1_3_4_234_2","doi-asserted-by":"publisher","unstructured":"Raphael Mazzine and David Martens. 2021. A Framework and Benchmarking Study for Counterfactual Generating Methods on Tabular Data. DOI:10.48550\/ARXIV.2107.04680","DOI":"10.48550\/ARXIV.2107.04680"},{"key":"e_1_3_4_235_2","doi-asserted-by":"publisher","unstructured":"Marcos Medeiros Raimundo Luis Nonato and Jorge Poco. 2021. Mining Pareto-Optimal Counterfactual Antecedents with a Branch-And-Bound Model-Agnostic Algorithm. DOI:10.21203\/rs.3.rs-551661\/v1","DOI":"10.21203\/rs.3.rs-551661\/v1"},{"key":"e_1_3_4_236_2","doi-asserted-by":"publisher","DOI":"10.32473\/flairs.v35i.130705"},{"key":"e_1_3_4_237_2","doi-asserted-by":"publisher","DOI":"10.32473\/flairs.v35i.130711"},{"key":"e_1_3_4_238_2","doi-asserted-by":"publisher","DOI":"10.3389\/frai.2022.825565"},{"key":"e_1_3_4_239_2","doi-asserted-by":"publisher","DOI":"10.1016\/j.artint.2018.07.007"},{"key":"e_1_3_4_240_2","doi-asserted-by":"publisher","unstructured":"Saumitra Mishra Sanghamitra Dutta Jason Long and Daniele Magazzeni. 2021. A Survey on the Robustness of Feature Importance and Counterfactual Explanations. DOI:10.48550\/ARXIV.2111.00358","DOI":"10.48550\/ARXIV.2111.00358"},{"key":"e_1_3_4_241_2","article-title":"MEGEX: Data-free model extraction attack against gradient-based explainable AI","volume":"2107","author":"Miura Takayuki","year":"2021","unstructured":"Takayuki Miura, Satoshi Hasegawa, and Toshiki Shibahara. 2021. MEGEX: Data-free model extraction attack against gradient-based explainable AI. ArXiv abs\/2107.08909 (2021).","journal-title":"ArXiv"},{"key":"e_1_3_4_242_2","first-page":"177","volume-title":"ACM Conference on AI, Ethics, and Society","author":"Mohammadi Kiarash","year":"2021","unstructured":"Kiarash Mohammadi, Amir-Hossein Karimi, Gilles Barthe, and Isabel Valera. 2021. 
Scaling guarantees for nearest counterfactual explanations. In Proceedings of the ACM Conference on AI, Ethics, and Society. ACM, New York, 177\u2013187. DOI:10.1145\/3461702.3462514"},{"key":"e_1_3_4_243_2","unstructured":"Wellington Rodrigo Monteiro and Gilberto Reynoso-Meza. 2022. Counterfactual generation through multi-objective constrained optimisation. (2022) 23. https:\/\/www.researchsquare.com\/article\/rs-1325730\/v1"},{"key":"e_1_3_4_244_2","doi-asserted-by":"publisher","DOI":"10.1016\/j.dss.2014.03.001"},{"volume-title":"Proceedings of the Conference on Fairness, Accountability, and Transparency (FAccT\u201920) (FAT* \u201920)","year":"2020","author":"Mothilal Ramaravind K.","key":"e_1_3_4_245_2","unstructured":"Ramaravind K. Mothilal, Amit Sharma, and Chenhao Tan. 2020. Explaining machine learning classifiers through diverse counterfactual explanations. In Proceedings of the Conference on Fairness, Accountability, and Transparency (FAccT\u201920) (FAT* \u201920). ACM, New York. DOI:10.1145\/3351095.3372850"},{"key":"e_1_3_4_246_2","doi-asserted-by":"publisher","DOI":"10.1007\/978-0-387-72076-0_18"},{"key":"e_1_3_4_247_2","doi-asserted-by":"publisher","unstructured":"Chelsea M. Myers Evan Freed Luis Fernando Laris Pardo Anushay Furqan Sebastian Risi and Jichen Zhu. 2020. Revealing Neural Network Bias to Non-Experts Through Interactive Counterfactual Examples. DOI:10.48550\/ARXIV.2001.02271","DOI":"10.48550\/ARXIV.2001.02271"},{"key":"e_1_3_4_248_2","doi-asserted-by":"crossref","unstructured":"Philip Naumann and Eirini Ntoutsi. 2021. Consequence-aware Sequential Counterfactual Generation. arxiv:cs.LG\/2104.05592","DOI":"10.1007\/978-3-030-86520-7_42"},{"key":"e_1_3_4_249_2","unstructured":"Guillermo Navas-Palencia. 2021. Optimal Counterfactual Explanations for Scorecard Modelling.
https:\/\/arxiv.org\/abs\/2104.08619"},{"volume-title":"Proceedings of WSDM 2021","year":"2021","author":"Nemirovsky Daniel","key":"e_1_3_4_250_2","unstructured":"Daniel Nemirovsky, Nicolas Thiebaut, Ye Xu, and Abhishek Gupta. 2021. Providing actionable feedback in hiring marketplaces using generative adversarial networks. In Proceedings of WSDM 2021. ACM, New York, 4. DOI:10.1145\/3437963.3441705"},{"key":"e_1_3_4_251_2","first-page":"1488","volume-title":"Proceedings of UAI 2022","author":"Nemirovsky Daniel","year":"2022","unstructured":"Daniel Nemirovsky, Nicolas Thiebaut, Ye Xu, and Abhishek Gupta. 2022. CounteRGAN: Generating counterfactuals for real-time recourse and interpretability using residual GANs. In Proceedings of UAI 2022. PMLR, 1488\u20131497. https:\/\/proceedings.mlr.press\/v180\/nemirovsky22a.html"},{"key":"e_1_3_4_252_2","unstructured":"Tri Minh Nguyen Thomas P. Quinn Thin Nguyen and Truyen Tran. 2021. Counterfactual Explanation with Multi-Agent Reinforcement Learning for Drug Target Prediction. arxiv:cs.AI\/2103.12983"},{"key":"e_1_3_4_253_2","doi-asserted-by":"publisher","unstructured":"Danilo Numeroso and Davide Bacciu. 2021. MEG: Generating molecular counterfactual explanations for deep graph networks. In 2021 International Joint Conference on Neural Networks (IJCNN). 1\u20138. DOI:10.1109\/IJCNN52387.2021.9534266","DOI":"10.1109\/IJCNN52387.2021.9534266"},{"key":"e_1_3_4_254_2","doi-asserted-by":"publisher","unstructured":"Andrew O\u2019Brien and Edward Kim. 2021. Multi-Agent Algorithmic Recourse. DOI:10.48550\/ARXIV.2110.00673","DOI":"10.48550\/ARXIV.2110.00673"},{"key":"e_1_3_4_255_2","unstructured":"House of Commons. [n. d.]. Algorithms in Decision Making. https:\/\/publications.parliament.uk\/pa\/cm201719\/cmselect\/cmsctech\/351\/351.pdf. Accessed: 2020-10-15."},{"key":"e_1_3_4_256_2","doi-asserted-by":"publisher","unstructured":"Kwanseok Oh Jee Seok Yoon and Heung-Il Suk. 2020. 
Born Identity Network: Multi-way Counterfactual Map Generation to Explain a Classifier\u2019s Decision. DOI:10.48550\/ARXIV.2011.10381","DOI":"10.48550\/ARXIV.2011.10381"},{"key":"e_1_3_4_257_2","doi-asserted-by":"publisher","unstructured":"Kwanseok Oh Jee Seok Yoon and Heung-Il Suk. 2021. Learn-Explain-Reinforce: Counterfactual Reasoning and Its Guidance to Reinforce an Alzheimer\u2019s Disease Diagnosis Model. DOI:10.48550\/ARXIV.2108.09451","DOI":"10.48550\/ARXIV.2108.09451"},{"key":"e_1_3_4_258_2","doi-asserted-by":"publisher","DOI":"10.1016\/j.artint.2021.103455"},{"key":"e_1_3_4_259_2","unstructured":"Axel Parmentier and Thibaut Vidal. 2021. Optimal Counterfactual Explanations in Tree Ensembles. https:\/\/arxiv.org\/abs\/2106.06631"},{"key":"e_1_3_4_260_2","first-page":"4574","volume-title":"Proceedings of the 25th International Conference on Artificial Intelligence and Statistics","author":"Pawelczyk Martin","year":"2022","unstructured":"Martin Pawelczyk, Chirag Agarwal, Shalmali Joshi, Sohini Upadhyay, and Himabindu Lakkaraju. 2022. Exploring counterfactual explanations through the lens of adversarial examples: A theoretical and empirical analysis. In Proceedings of the 25th International Conference on Artificial Intelligence and Statistics. PMLR, 4574\u20134594. https:\/\/proceedings.mlr.press\/v151\/pawelczyk22a.html"},{"key":"e_1_3_4_261_2","unstructured":"Martin Pawelczyk Sascha Bielawski Johannes van den Heuvel Tobias Richter and Gjergji Kasneci. 2021. CARLA: A Python Library to Benchmark Algorithmic Recourse and Counterfactual Explanation Algorithms. arxiv:cs.LG\/2108.00783"},{"volume-title":"Proceedings of Machine Learning Research","year":"2020","author":"Pawelczyk Martin","key":"e_1_3_4_262_2","unstructured":"Martin Pawelczyk, Klaus Broelemann, and Gjergji Kasneci. 2020. On counterfactual explanations under predictive multiplicity. In Proceedings of Machine Learning Research. PMLR, Virtual, 9.
http:\/\/proceedings.mlr.press\/v124\/pawelczyk20a.html"},{"key":"e_1_3_4_263_2","doi-asserted-by":"publisher","unstructured":"Martin Pawelczyk Teresa Datta Johannes van-den Heuvel Gjergji Kasneci and Himabindu Lakkaraju. 2022. Probabilistically Robust Recourse: Navigating the Trade-offs between Costs and Robustness in Algorithmic Recourse. DOI:10.48550\/ARXIV.2203.06768","DOI":"10.48550\/ARXIV.2203.06768"},{"key":"e_1_3_4_264_2","doi-asserted-by":"publisher","unstructured":"Martin Pawelczyk Klaus Broelemann and Gjergji Kasneci. 2020. Learning model-agnostic counterfactual explanations for tabular data. In Proceedings of The Web Conference. Association for Computing Machinery New York NY USA. DOI:10.1145\/3366423.3380087","DOI":"10.1145\/3366423.3380087"},{"volume-title":"Causality: Models, Reasoning, and Inference","year":"2000","author":"Pearl Judea","key":"e_1_3_4_265_2","unstructured":"Judea Pearl. 2000. Causality: Models, Reasoning, and Inference. Cambridge University Press, Cambridge, MA, USA."},{"volume-title":"Proceedings of NeurIPS 2020","year":"2020","author":"Pedapati Tejaswini","key":"e_1_3_4_266_2","unstructured":"Tejaswini Pedapati, Avinash Balakrishnan, Karthikeyan Shanmugan, and Amit Dhurandhar. 2020. Learning global transparent models consistent with local contrastive explanations. In Proceedings of NeurIPS 2020. Curran Associates Inc., 11."},{"key":"e_1_3_4_267_2","doi-asserted-by":"publisher","unstructured":"Oana-Iuliana Popescu Maha Shadaydeh and Joachim Denzler. 2021. Counterfactual Generation with Knockoffs. DOI:10.48550\/ARXIV.2102.00951","DOI":"10.48550\/ARXIV.2102.00951"},{"key":"e_1_3_4_268_2","doi-asserted-by":"publisher","unstructured":"Rafael Poyiadzi Kacper Sokol Raul Santos-Rodriguez Tijl De Bie and Peter Flach. 2020. FACE: Feasible and Actionable Counterfactual Explanations. 
DOI:10.1145\/3375627.3375850. arXiv:1909.09369.","DOI":"10.1145\/3375627.3375850"},{"key":"e_1_3_4_269_2","doi-asserted-by":"publisher","unstructured":"Mario Alfonso Prado-Romero Bardh Prenkaj Giovanni Stilo and Fosca Giannotti. 2022. A Survey on Graph Counterfactual Explanations: Definitions Methods Evaluation. DOI:10.48550\/ARXIV.2210.12089","DOI":"10.48550\/ARXIV.2210.12089"},{"key":"e_1_3_4_270_2","first-page":"1102","volume-title":"Proceedings of the 2021 IEEE International Conference on Big Data (Big Data\u201921)","author":"Qi Wenting","year":"2021","unstructured":"Wenting Qi and Charalampos Chelmis. 2021. Improving algorithmic decision-making in the presence of untrustworthy training data. In Proceedings of the 2021 IEEE International Conference on Big Data (Big Data\u201921). 1102\u20131108. DOI:10.1109\/BigData52589.2021.9671677"},{"volume-title":"Proceedings of the Conference on Artificial Intelligence (AAAI\u201920)","year":"2020","author":"Ramakrishnan Goutham","key":"e_1_3_4_271_2","unstructured":"Goutham Ramakrishnan, Y. C. Lee, and Aws Albarghouthi. 2020. Synthesizing action sequences for modifying model decisions. In Proceedings of the Conference on Artificial Intelligence (AAAI\u201920). AAAI press, California, USA, 16. http:\/\/arxiv.org\/abs\/1910.00057"},{"key":"e_1_3_4_272_2","doi-asserted-by":"publisher","unstructured":"Yanou Ramon David Martens Foster Provost and Theodoros Evgeniou. 2020. A comparison of instance-level counterfactual explanation algorithms for behavioral and textual data: SEDC LIME-C and SHAP-C. Advances in Data Analysis and Classification 14 4 (2020) 801\u2013819.
DOI:10.1007\/s11634-020-00418-3","DOI":"10.1007\/s11634-020-00418-3"},{"key":"e_1_3_4_273_2","doi-asserted-by":"publisher","DOI":"10.1007\/s41060-022-00365-6"},{"key":"e_1_3_4_274_2","first-page":"1286","volume-title":"Proceedings of the 2021 20th IEEE International Conference on Machine Learning and Applications (ICMLA\u201921)","author":"Rasouli Peyman","year":"2021","unstructured":"Peyman Rasouli and Ingrid Chieh Yu. 2021. Analyzing and improving the robustness of tabular classifiers using counterfactual explanations. In Proceedings of the 2021 20th IEEE International Conference on Machine Learning and Applications (ICMLA\u201921). 1286\u20131293. DOI:10.1109\/ICMLA52953.2021.00209"},{"key":"e_1_3_4_275_2","unstructured":"Shubham Rathi. 2019. Generating Counterfactual and Contrastive Explanations using SHAP. http:\/\/arxiv.org\/abs\/1906.09293. arXiv:1906.09293."},{"key":"e_1_3_4_276_2","doi-asserted-by":"crossref","first-page":"194","DOI":"10.18653\/v1\/2021.conll-1.15","volume-title":"Proceedings of the 25th Conference on Computational Natural Language Learning","author":"Ravfogel Shauli","year":"2021","unstructured":"Shauli Ravfogel, Grusha Prasad, Tal Linzen, and Yoav Goldberg. 2021. Counterfactual interventions reveal the causal effect of relative clause representations on agreement prediction. In Proceedings of the 25th Conference on Computational Natural Language Learning. Association for Computational Linguistics, 194\u2013209. DOI:10.18653\/v1\/2021.conll-1.15"},{"key":"e_1_3_4_277_2","first-page":"1","volume-title":"Proceedings of the 2021 IEEE International Conference on Autonomous Systems (ICAS\u201921)","author":"Ravi Ambareesh","year":"2021","unstructured":"Ambareesh Ravi, Xiaozhuo Yu, Iara Santelices, Fakhri Karray, and Baris Fidan. 2021. General frameworks for anomaly detection explainability: Comparative study. In Proceedings of the 2021 IEEE International Conference on Autonomous Systems (ICAS\u201921). 1\u20135.
DOI:10.1109\/ICAS49788.2021.9551129"},{"key":"e_1_3_4_278_2","unstructured":"Kaivalya Rawal Ece Kamar and Himabindu Lakkaraju. 2021. Algorithmic Recourse in the Wild: Understanding the Impact of Data and Model Shifts. arxiv:cs.LG\/2012.11788"},{"key":"e_1_3_4_279_2","first-page":"12187","volume-title":"Advances in Neural Information Processing Systems","author":"Rawal Kaivalya","year":"2020","unstructured":"Kaivalya Rawal and Himabindu Lakkaraju. 2020. Beyond individualized recourse: Interpretable and interactive summaries of actionable recourses. In Advances in Neural Information Processing Systems, Vol. 33. Curran Associates, Inc., 12187\u201312198. https:\/\/proceedings.neurips.cc\/paper\/2020\/file\/8ee7730e97c67473a424ccfeff49ab20-Paper.pdf"},{"key":"e_1_3_4_280_2","doi-asserted-by":"publisher","unstructured":"Annabelle Redelmeier Martin Jullum Kjersti Aas and Anders L\u00f8land. 2021. MCCE: Monte Carlo Sampling of Realistic Counterfactual Explanations. DOI:10.48550\/ARXIV.2111.09790","DOI":"10.48550\/ARXIV.2111.09790"},{"volume-title":"Queen Mary Law Research Paper No. 370\/2021","year":"2021","author":"Reed Chris","key":"e_1_3_4_281_2","unstructured":"Chris Reed, Keri Grieman, and Joseph Early. 2021. Non-Asimov explanations regulating AI through transparency. In Queen Mary Law Research Paper No. 370\/2021. https:\/\/ssrn.com\/abstract=3970518"},{"key":"e_1_3_4_282_2","doi-asserted-by":"publisher","unstructured":"Marco Tulio Ribeiro Sameer Singh and Carlos Guestrin. 2016. \u201cWhy Should I Trust You?\u201d: Explaining the predictions of any classifier. In Proceedings of KDD \u201916. ACM New York 10. DOI:10.1145\/2939672.2939778","DOI":"10.1145\/2939672.2939778"},{"volume-title":"Proceedings of the Conference on Artificial Intelligence (AAAI\u201918)","year":"2018","author":"Ribeiro Marco Tulio","key":"e_1_3_4_283_2","unstructured":"Marco Tulio Ribeiro, Sameer Singh, and Carlos Guestrin. 2018. Anchors: High-precision model-agnostic explanations.
In Proceedings of the Conference on Artificial Intelligence (AAAI\u201918). AAAI Press, California, USA, 9. https:\/\/www.aaai.org\/ocs\/index.php\/AAAI\/AAAI18\/paper\/view\/16982"},{"key":"e_1_3_4_284_2","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/2021.findings-emnlp.306"},{"key":"e_1_3_4_285_2","doi-asserted-by":"publisher","unstructured":"Pau Rodriguez Massimo Caccia Alexandre Lacoste Lee Zamparo Issam Laradji Laurent Charlin and David Vazquez. 2021. Beyond Trivial Counterfactual Explanations with Diverse Valuable Explanations. DOI:10.48550\/ARXIV.2103.10226","DOI":"10.48550\/ARXIV.2103.10226"},{"key":"e_1_3_4_286_2","first-page":"18734","volume-title":"Advances in Neural Information Processing Systems","author":"Ross Alexis","year":"2021","unstructured":"Alexis Ross, Himabindu Lakkaraju, and Osbert Bastani. 2021. Learning models for actionable recourse. In Advances in Neural Information Processing Systems, Vol. 34. Curran Associates, Inc., 18734\u201318746. https:\/\/proceedings.neurips.cc\/paper\/2021\/file\/9b82909c30456ac902e14526e63081d4-Paper.pdf"},{"volume-title":"Counterfactuals","year":"1992","author":"Ruben David-Hillel","key":"e_1_3_4_287_2","unstructured":"David-Hillel Ruben. 1992. Counterfactuals. Routledge Publishers. https:\/\/philarchive.org\/archive\/RUBEE-3"},{"volume-title":"Proceedings of the Conference on Fairness, Accountability, and Transparency (FAccT\u201919) (FAT* \u201919)","year":"2019","author":"Russell Chris","key":"e_1_3_4_288_2","unstructured":"Chris Russell. 2019. Efficient search for diverse coherent explanations. In Proceedings of the Conference on Fairness, Accountability, and Transparency (FAccT\u201919) (FAT* \u201919). ACM, New York, 9. DOI:10.1145\/3287560.3287569"},{"volume-title":"GEM: Graph Embedding and Mining (ECML-PKDD 2021 Workshop+Tutorial)","year":"2021","author":"Sadler Sophie","key":"e_1_3_4_289_2","unstructured":"Sophie Sadler, Derek Greene, and Daniel W. Archambault. 2021. 
A study of explainable community-level features. In GEM: Graph Embedding and Mining (ECML-PKDD 2021 Workshop+Tutorial)."},{"key":"e_1_3_4_290_2","unstructured":"Surya Shravan Kumar Sajja Sumanta Mukherjee Satyam Dwivedi and Vikas C. Raykar. 2021. Semi-supervised Counterfactual Explanations. https:\/\/openreview.net\/forum?id=o6ndFLB1DST"},{"key":"e_1_3_4_291_2","doi-asserted-by":"publisher","unstructured":"Robert-Florian Samoilescu Arnaud Van Looveren and Janis Klaise. 2021. Model-agnostic and Scalable Counterfactual Explanations via Reinforcement Learning. DOI:10.48550\/ARXIV.2106.02597","DOI":"10.48550\/ARXIV.2106.02597"},{"key":"e_1_3_4_292_2","doi-asserted-by":"publisher","unstructured":"Pedro Sanchez and Sotirios A. Tsaftaris. 2022. Diffusion Causal Models for Counterfactual Estimation. DOI:10.48550\/ARXIV.2202.10166","DOI":"10.48550\/ARXIV.2202.10166"},{"key":"e_1_3_4_293_2","doi-asserted-by":"crossref","unstructured":"Maximilian Schleich Zixuan Geng Yihong Zhang and Dan Suciu. 2021. GeCo: Quality Counterfactual Explanations in Real Time. arxiv:cs.LG\/2101.01292","DOI":"10.14778\/3461535.3461555"},{"key":"e_1_3_4_294_2","doi-asserted-by":"publisher","unstructured":"Lisa Schut Oscar Key Rory McGrath Luca Costabello Bogdan Sacaleanu Medb Corcoran and Yarin Gal. 2021. Generating Interpretable Counterfactual Explanations By Implicit Minimisation of Epistemic and Aleatoric Uncertainties. DOI:10.48550\/ARXIV.2103.08951","DOI":"10.48550\/ARXIV.2103.08951"},{"key":"e_1_3_4_295_2","first-page":"618","volume-title":"Proceedings of the IEEE International Conference on Computer Vision","author":"Selvaraju R. R.","year":"2017","unstructured":"R. R. Selvaraju, M. Cogswell, A. Das, R. Vedantam, D. Parikh, and D. Batra. 2017. Grad-CAM: Visual explanations from deep networks via gradient-based localization. In Proceedings of the IEEE International Conference on Computer Vision. 618\u2013626. DOI:10.1109\/ICCV.2017.74"},{"key":"e_1_3_4_296_2","unstructured":"Kumba Sennaar. 
2019. Machine Learning for Recruiting and Hiring \u2013 6 Current Applications. https:\/\/emerj.com\/ai-sector-overviews\/machine-learning-for-recruiting-and-hiring\/. Accessed: 2020-10-15."},{"key":"e_1_3_4_297_2","doi-asserted-by":"publisher","unstructured":"Ruoxi Shang K. J. Kevin Feng and Chirag Shah. 2022. Why am I not seeing it? Understanding users\u2019 needs for counterfactual explanations in everyday recommendations. In Proceedings of FAccT \u201922. ACM New York 11. DOI:10.1145\/3531146.3533189","DOI":"10.1145\/3531146.3533189"},{"key":"e_1_3_4_298_2","doi-asserted-by":"publisher","unstructured":"Xiaoting Shao and Kristian Kersting. 2022. Gradient-based Counterfactual Explanations using Tractable Probabilistic Models. DOI:10.48550\/ARXIV.2205.07774","DOI":"10.48550\/ARXIV.2205.07774"},{"key":"e_1_3_4_299_2","doi-asserted-by":"crossref","unstructured":"Shubham Sharma Jette Henderson and Joydeep Ghosh. 2019. CERTIFAI: Counterfactual Explanations for Robustness Transparency Interpretability and Fairness of Artificial Intelligence models. http:\/\/arxiv.org\/abs\/1905.07857","DOI":"10.1145\/3375627.3375812"},{"volume-title":"Proceedings of the 2021 AAAI\/ACM Conference on AI, Ethics, and Society","year":"2021","author":"Shokri Reza","key":"e_1_3_4_300_2","unstructured":"Reza Shokri, Martin Strobel, and Yair Zick. 2021. On the privacy risks of model explanations. In Proceedings of the 2021 AAAI\/ACM Conference on AI, Ethics, and Society. ACM, New York, 11. DOI:10.1145\/3461702.3462533"},{"key":"e_1_3_4_301_2","doi-asserted-by":"publisher","unstructured":"Ronal Rajneshwar Singh Paul Dourish Piers Howe Tim Miller Liz Sonenberg Eduardo Velloso and Frank Vetere. 2021. Directive Explanations for Actionable Explainability in Machine Learning Applications. DOI:10.1145\/3579363","DOI":"10.1145\/3579363"},{"key":"e_1_3_4_302_2","unstructured":"Saurav Singla. 2020. Machine Learning to Predict Credit Risk in Lending Industry.
https:\/\/www.aitimejournal.com\/@saurav.singla\/machine-learning-to-predict-credit-risk-in-lending-industry. Accessed: 2020-10-15."},{"key":"e_1_3_4_303_2","unstructured":"Dylan Slack Sophie Hilgard Himabindu Lakkaraju and Sameer Singh. 2021. Counterfactual Explanations Can Be Manipulated. arxiv:cs.LG\/2106.02666"},{"key":"e_1_3_4_304_2","first-page":"261","volume-title":"Proceedings of the Annual Symposium on Computer Application in Medical Care","author":"Smith J. W.","year":"1988","unstructured":"J. W. Smith, J. Everhart, W. C. Dickson, W. Knowler, and R. Johannes. 1988. Using the ADAP learning algorithm to forecast the onset of diabetes mellitus. In Proceedings of the Annual Symposium on Computer Application in Medical Care. American Medical Informatics Association, Washington, D.C., 261\u2013265."},{"key":"e_1_3_4_305_2","first-page":"1","volume-title":"Proceedings of the 2020 Joint IEEE 10th International Conference on Development and Learning and Epigenetic Robotics (ICDL-EpiRob\u201920)","author":"Smith Sim\u00f3n C.","year":"2020","unstructured":"Sim\u00f3n C. Smith and Subramanian Ramamoorthy. 2020. Counterfactual explanation and causal inference in service of robustness in robot control. In Proceedings of the 2020 Joint IEEE 10th International Conference on Development and Learning and Epigenetic Robotics (ICDL-EpiRob\u201920). 1\u20138. DOI:10.1109\/ICDL-EpiRob48136.2020.9278061"},{"key":"e_1_3_4_306_2","doi-asserted-by":"publisher","unstructured":"Kacper Sokol and Peter Flach. 2018. Glass-Box: Explaining AI decisions with counterfactual statements through conversation with a voice-enabled virtual assistant. In Proceedings of IJCAI\u201918. AAAI Press 5868\u20135870. DOI:10.24963\/ijcai.2018\/865","DOI":"10.24963\/ijcai.2018\/865"},{"key":"e_1_3_4_307_2","doi-asserted-by":"publisher","DOI":"10.1609\/aaai.v33i01.330110035"},{"key":"e_1_3_4_308_2","unstructured":"Thomas Spooner Danial Dervovic Jason Long Jon Shepard Jiahao Chen and Daniele Magazzeni.
2021. Counterfactual Explanations for Arbitrary Regression Models. https:\/\/arxiv.org\/abs\/2106.15212"},{"key":"e_1_3_4_309_2","volume-title":"Proceedings of the International Conference on Logic Programming 2021 Workshops (ICLP\u201921)","volume":"2970","author":"State Laura","year":"2021","unstructured":"Laura State. 2021. Logic programming for XAI: A technical perspective. In Proceedings of the International Conference on Logic Programming 2021 Workshops (ICLP\u201921), Vol. 2970. http:\/\/ceur-ws.org\/Vol-2970\/meepaper1.pdf"},{"key":"e_1_3_4_310_2","first-page":"17493","volume-title":"Advances in Neural Information Processing Systems","author":"Stein Gregory","year":"2021","unstructured":"Gregory Stein. 2021. Generating high-quality explanations for navigation in partially-revealed environments. In Advances in Neural Information Processing Systems, Vol. 34. Curran Associates, Inc., 17493\u201317506. https:\/\/proceedings.neurips.cc\/paper\/2021\/file\/926ec030f29f83ce5318754fdb631a33-Paper.pdf"},{"key":"e_1_3_4_311_2","doi-asserted-by":"publisher","unstructured":"Deborah Sulem Michele Donini Muhammad Bilal Zafar Francois-Xavier Aubet Jan Gasthaus Tim Januschowski Sanjiv Das Krishnaram Kenthapadi and Cedric Archambeau. 2022. Diverse Counterfactual Explanations for Anomaly Detection in Time Series. DOI:10.48550\/ARXIV.2203.11103","DOI":"10.48550\/ARXIV.2203.11103"},{"key":"e_1_3_4_312_2","doi-asserted-by":"publisher","unstructured":"Ezzeldin Tahoun and Andre Kassis. 2020. Beyond Explanations: Recourse via Actionable Interpretability - Extended. DOI:10.13140\/RG.2.2.19076.14729","DOI":"10.13140\/RG.2.2.19076.14729"},{"volume-title":"Proceedings of the 2nd Workshop on Human-In-the-Loop Data Analytics","year":"2017","author":"Tamagnini Paolo","key":"e_1_3_4_313_2","unstructured":"Paolo Tamagnini, Josua Krause, Aritra Dasgupta, and Enrico Bertini. 2017. Interpreting black-box classifiers using instance-level visual explanations. 
In Proceedings of the 2nd Workshop on Human-In-the-Loop Data Analytics. ACM, New York, 6. DOI:10.1145\/3077257.3077260"},{"volume-title":"Proceedings of the 30th ACM International Conference on Information & Knowledge Management","year":"2021","author":"Tan Juntao","key":"e_1_3_4_314_2","unstructured":"Juntao Tan, Shuyuan Xu, Yingqiang Ge, Yunqi Li, Xu Chen, and Yongfeng Zhang. 2021. Counterfactual explainable recommendation. In Proceedings of the 30th ACM International Conference on Information & Knowledge Management. ACM, New York, 10. DOI:10.1145\/3459637.3482420"},{"key":"e_1_3_4_315_2","doi-asserted-by":"publisher","unstructured":"Sarah Tan Rich Caruana Giles Hooker and Yin Lou. 2018. Distill-and-compare: Auditing black-box models using transparent model distillation. In Proceedings of AIES \u201918. ACM New York 8. DOI:10.1145\/3278721.3278725","DOI":"10.1145\/3278721.3278725"},{"key":"e_1_3_4_316_2","unstructured":"Jason Tashea. 2017. Courts Are Using AI to Sentence Criminals. That Must Stop Now. https:\/\/www.wired.com\/2017\/04\/courts-using-ai-sentence-criminals-must-stop-now\/. Accessed: 2020-10-15."},{"key":"e_1_3_4_317_2","doi-asserted-by":"publisher","unstructured":"Mohammed Temraz and Mark T. Keane. 2021. Solving the Class Imbalance Problem Using a Counterfactual Method for Data Augmentation. DOI:10.48550\/ARXIV.2111.03516","DOI":"10.48550\/ARXIV.2111.03516"},{"key":"e_1_3_4_318_2","doi-asserted-by":"crossref","first-page":"216","DOI":"10.1007\/978-3-030-86957-1_15","volume-title":"Case-Based Reasoning Research and Development","author":"Temraz Mohammed","year":"2021","unstructured":"Mohammed Temraz, Eoin M. Kenny, Elodie Ruelle, Laurence Shalloo, Barry Smyth, and Mark T. Keane. 2021. Handling climate change using counterfactuals: Using counterfactuals in data augmentation to predict crop growth in an uncertain climate future. In Case-Based Reasoning Research and Development.
Springer International Publishing, Cham, 216\u2013231."},{"key":"e_1_3_4_319_2","first-page":"2709","volume-title":"Proceedings of the 2022 IEEE 38th International Conference on Data Engineering (ICDE\u201922)","author":"Teofili T.","year":"2022","unstructured":"T. Teofili, D. Firmani, N. Koudas, V. Martello, P. Merialdo, and D. Srivastava. 2022. Effective explanations for entity resolution models. In Proceedings of the 2022 IEEE 38th International Conference on Data Engineering (ICDE\u201922). IEEE Computer Society, Los Alamitos, CA, USA, 2709\u20132721. DOI:10.1109\/ICDE53745.2022.00248"},{"key":"e_1_3_4_320_2","doi-asserted-by":"publisher","DOI":"10.1017\/S0140525X00057046"},{"key":"e_1_3_4_321_2","first-page":"16873","volume-title":"Advances in Neural Information Processing Systems","author":"Thiagarajan Jayaraman","year":"2021","unstructured":"Jayaraman Thiagarajan, Vivek Sivaraman Narayanaswamy, Deepta Rajan, Jia Liang, Akshay Chaudhari, and Andreas Spanias. 2021. Designing counterfactual generators using deep model inversion. In Advances in Neural Information Processing Systems, Vol. 34. Curran Associates, Inc., 16873\u201316884. https:\/\/proceedings.neurips.cc\/paper\/2021\/file\/8ca01ea920679a0fe3728441494041b9-Paper.pdf"},{"key":"e_1_3_4_322_2","unstructured":"Erico Tjoa and Cuntai Guan. 2019. A Survey on Explainable Artificial Intelligence (XAI): Towards Medical XAI. arxiv:cs.LG\/1907.07374"},{"key":"e_1_3_4_323_2","doi-asserted-by":"crossref","first-page":"113","DOI":"10.18653\/v1\/2022.acl-short.14","volume-title":"Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)","author":"Tolkachev George","year":"2022","unstructured":"George Tolkachev, Stephen Mell, Stephan Zdancewic, and Osbert Bastani. 2022. Counterfactual explanations for natural language interfaces. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers). 
Association for Computational Linguistics, Dublin, Ireland, 113\u2013118. https:\/\/aclanthology.org\/2022.acl-short.14"},{"volume-title":"Proceedings of the International Conference on Knowledge Discovery and Data Mining (KDD\u201917).","year":"2017","author":"Tolomei Gabriele","key":"e_1_3_4_324_2","unstructured":"Gabriele Tolomei, Fabrizio Silvestri, Andrew Haines, and Mounia Lalmas. 2017. Interpretable predictions of tree-based ensembles via actionable feature tweaking. In Proceedings of the International Conference on Knowledge Discovery and Data Mining (KDD\u201917). ACM, New York, 10. DOI:10.1145\/3097983.3098039"},{"key":"e_1_3_4_325_2","doi-asserted-by":"publisher","DOI":"10.1145\/3404835.3463005"},{"volume-title":"Proceedings of the 2021 25th International Conference on Circuits, Systems, Communications and Computers (CSCC\u201921)","year":"2021","author":"Tsiakmaki Maria","key":"e_1_3_4_326_2","unstructured":"Maria Tsiakmaki and Omiros Ragos. 2021. A case study of interpretable counterfactual explanations for the task of predicting student academic performance. In Proceedings of the 2021 25th International Conference on Circuits, Systems, Communications and Computers (CSCC\u201921). DOI:10.1109\/CSCC53858.2021.00029"},{"key":"e_1_3_4_327_2","first-page":"30127","volume-title":"Advances in Neural Information Processing Systems","author":"Tsirtsis Stratis","year":"2021","unstructured":"Stratis Tsirtsis, Abir De, and Manuel Rodriguez. 2021. Counterfactual explanations in sequential decision making under uncertainty. In Advances in Neural Information Processing Systems, Vol. 34. Curran Associates, Inc., 30127\u201330139. https:\/\/proceedings.neurips.cc\/paper\/2021\/file\/fd0a5a5e367a0955d81278062ef37429-Paper.pdf"},{"key":"e_1_3_4_328_2","unstructured":"Stratis Tsirtsis and Manuel Gomez-Rodriguez. 2020. Decisions Counterfactual Explanations and Strategic Behavior. 
arxiv:cs.LG\/2002.04333"},{"key":"e_1_3_4_329_2","article-title":"A model explanation system: Latest updates and extensions","volume":"1606","author":"Turner Ryan","year":"2016","unstructured":"Ryan Turner. 2016. A model explanation system: Latest updates and extensions. ArXiv abs\/1606.09517 (2016).","journal-title":"ArXiv"},{"key":"e_1_3_4_330_2","unstructured":"Aalto University. [n. d.]. The European Commission Offers Significant Support to Europe\u2019s AI Excellence. https:\/\/www.eurekalert.org\/pub_releases\/2020-03\/au-tec031820.php. Accessed: 2020-10-15."},{"key":"e_1_3_4_331_2","unstructured":"Sohini Upadhyay Shalmali Joshi and Himabindu Lakkaraju. 2021. Towards Robust and Reliable Algorithmic Recourse. arxiv:cs.LG\/2102.13620"},{"volume-title":"Proceedings of the Conference on Fairness, Accountability, and Transparency (FAccT\u201919) (FAT* \u201919)","year":"2019","author":"Ustun Berk","key":"e_1_3_4_332_2","unstructured":"Berk Ustun, Alexander Spangher, and Yang Liu. 2019. Actionable recourse in linear classification. In Proceedings of the Conference on Fairness, Accountability, and Transparency (FAccT\u201919) (FAT* \u201919). ACM, New York, 10. DOI:10.1145\/3287560.3287566"},{"key":"e_1_3_4_333_2","unstructured":"Arnaud Van Looveren and Janis Klaise. 2020. Interpretable Counterfactual Explanations Guided by Prototypes. http:\/\/arxiv.org\/abs\/1907.02584"},{"key":"e_1_3_4_334_2","doi-asserted-by":"publisher","unstructured":"Arnaud Van Looveren Janis Klaise Giovanni Vacanti and Oliver Cobb. 2021. Conditional Generative Models for Counterfactual Explanations. DOI:10.48550\/ARXIV.2101.10123","DOI":"10.48550\/ARXIV.2101.10123"},{"volume-title":"Proceedings of ECCV 2022","year":"2022","author":"Vandenhende Simon","key":"e_1_3_4_335_2","unstructured":"Simon Vandenhende, Dhruv Mahajan, Filip Radenovic, and Deepti Ghadiyaram. 2022. Making heads or tails: Towards semantically consistent visual counterfactuals. In Proceedings of ECCV 2022. 
DOI:10.1007\/978-3-031-19775-8_16"},{"key":"e_1_3_4_336_2","doi-asserted-by":"publisher","unstructured":"Sahil Verma John Dickerson and Keegan Hines. 2020. Counterfactual Explanations for Machine Learning: A Review. DOI:10.48550\/ARXIV.2010.10596","DOI":"10.48550\/ARXIV.2010.10596"},{"key":"e_1_3_4_337_2","doi-asserted-by":"publisher","unstructured":"Sahil Verma John Dickerson and Keegan Hines. 2021. Counterfactual Explanations for Machine Learning: Challenges Revisited. DOI:10.48550\/ARXIV.2106.07756","DOI":"10.48550\/ARXIV.2106.07756"},{"key":"e_1_3_4_338_2","unstructured":"Sahil Verma Keegan Hines and John P. Dickerson. 2021. Amortized Generation of Sequential Counterfactual Explanations for Black-box Models. arxiv:cs.LG\/2106.03962"},{"key":"e_1_3_4_339_2","first-page":"1","volume-title":"Proceedings of the International Workshop on Software Fairness (FairWare \u201918)","author":"Verma Sahil","year":"2018","unstructured":"Sahil Verma and Julia Rubin. 2018. Fairness definitions explained. In Proceedings of the International Workshop on Software Fairness (FairWare \u201918). ACM, New York, 1\u20137. DOI:10.1145\/3194770.3194776"},{"key":"e_1_3_4_340_2","unstructured":"Sahil Verma Chirag Shah John P. Dickerson Anurag Beniwal Narayanan Sadagopan and Arjun Seshadri. 2023. RecXplainer: Amortized Attribute-based Personalized Explanations for Recommender Systems. arxiv:cs.IR\/2211.14935"},{"key":"e_1_3_4_341_2","doi-asserted-by":"publisher","DOI":"10.1007\/s10044-021-01055-y"},{"key":"e_1_3_4_342_2","unstructured":"C\u00e9dric Villani. [n. d.]. For a Meaningful Artificial Intelligence. https:\/\/www.aiforhumanity.fr\/pdfs\/MissionVillani_Report_ENG-VF.pdf. Accessed: 2020-10-15."},{"key":"e_1_3_4_343_2","doi-asserted-by":"publisher","unstructured":"Marco Virgolin and Saverio Fracaros. 2022. On the Robustness of Sparse Counterfactual Explanations to Adverse Perturbations. 
DOI:10.48550\/ARXIV.2201.09051","DOI":"10.48550\/ARXIV.2201.09051"},{"volume-title":"Proceedings of the ICML 2021 Workshop on Algorithmic Recourse","year":"2021","author":"K\u00fcgelgen J. von","key":"e_1_3_4_344_2","unstructured":"J. von K\u00fcgelgen, N. Agarwal, J. Zeitler, A. Mastouri, and B. Sch\u00f6lkopf. 2021. Algorithmic recourse in partially and fully confounded settings through bounding counterfactual effects. In Proceedings of the ICML 2021 Workshop on Algorithmic Recourse. https:\/\/sites.google.com\/view\/recourse21\/home"},{"key":"e_1_3_4_345_2","first-page":"9584","volume-title":"Proceedings of the 36th AAAI Conference on Artificial Intelligence","volume":"9","author":"K\u00fcgelgen J. von","year":"2022","unstructured":"J. von K\u00fcgelgen, A.-H. Karimi, U. Bhatt, I. Valera, A. Weller, and B. Sch\u00f6lkopf. 2022. On the fairness of causal algorithmic recourse. In Proceedings of the 36th AAAI Conference on Artificial Intelligence, Vol. 9. AAAI Press, Palo Alto, CA, 9584\u20139594. DOI:10.1609\/aaai.v36i9.21192"},{"key":"e_1_3_4_346_2","doi-asserted-by":"publisher","DOI":"10.1093\/idpl\/ipx005"},{"key":"e_1_3_4_347_2","doi-asserted-by":"publisher","DOI":"10.2139\/ssrn.3063289"},{"volume-title":"Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition (CVPR\u201920)","year":"2020","author":"Wang Pei","key":"e_1_3_4_348_2","unstructured":"Pei Wang and Nuno Vasconcelos. 2020. SCOUT: Self-aware discriminant counterfactual explanations. In Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition (CVPR\u201920). DOI:10.1109\/CVPR42600.2020.00900"},{"volume-title":"Proceedings of CVPR","year":"2017","author":"Wang Xiaosong","key":"e_1_3_4_349_2","unstructured":"Xiaosong Wang, Yifan Peng, Le Lu, Zhiyong Lu, Mohammadhadi Bagheri, and Ronald M. Summers. 2017. ChestX-ray8: Hospital-scale chest X-Ray database and benchmarks on weakly-supervised classification and localization of common thorax diseases. 
In Proceedings of CVPR. DOI:10.1007\/978-3-030-13969-8_18"},{"volume-title":"Proceedings of CIKM","year":"2021","author":"Wang Yongjie","key":"e_1_3_4_350_2","unstructured":"Yongjie Wang, Qinxu Ding, Ke Wang, Yue Liu, Xingyu Wu, Jinglong Wang, Yong Liu, and Chunyan Miao. 2021. The skyline of counterfactual explanations for machine learning decision models. In Proceedings of CIKM. ACM, New York, 10. DOI:10.1145\/3459637.3482397"},{"key":"e_1_3_4_351_2","doi-asserted-by":"publisher","unstructured":"Yongjie Wang Hangwei Qian and Chunyan Miao. 2022. DualCF: Efficient model extraction attack from counterfactual explanations. In Proceedings of FAccT \u201922. ACM New York. 12. DOI:10.1145\/3531146.3533188","DOI":"10.1145\/3531146.3533188"},{"key":"e_1_3_4_352_2","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-030-88942-5_29"},{"key":"e_1_3_4_353_2","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-030-77211-6_38"},{"key":"e_1_3_4_354_2","doi-asserted-by":"publisher","unstructured":"Greta Warren Mark T. Keane and Ruth M. J. Byrne. 2022. Features of Explainability: How Users Understand Counterfactual and Causal Explanations for Categorical and Continuous Features in XAI. DOI:10.48550\/ARXIV.2204.10152","DOI":"10.48550\/ARXIV.2204.10152"},{"key":"e_1_3_4_355_2","unstructured":"Greta Warren Mark T. Keane Christophe Gueret and Eoin Delaney. 2023. Explaining Groups of Instances Counterfactually for XAI: A Use Case Algorithm and User Study for Group-Counterfactuals. arxiv:cs.AI\/2303.09297"},{"key":"e_1_3_4_356_2","doi-asserted-by":"publisher","DOI":"10.1039\/D1SC05259D"},{"key":"e_1_3_4_357_2","doi-asserted-by":"publisher","DOI":"10.1109\/TVCG.2019.2934619"},{"key":"e_1_3_4_358_2","unstructured":"Adam White and Artur d\u2019Avila Garcez. 2019. Measurable Counterfactual Local Explanations for Any Classifier. http:\/\/arxiv.org\/abs\/1908.03020"},{"key":"e_1_3_4_359_2","doi-asserted-by":"publisher","unstructured":"Adam White and Artur d\u2019Avila Garcez. 2021. 
Counterfactual Instances Explain Little. DOI:10.48550\/ARXIV.2109.09809","DOI":"10.48550\/ARXIV.2109.09809"},{"key":"e_1_3_4_360_2","doi-asserted-by":"publisher","unstructured":"Adam White Kwun Ho Ngan James Phelan Saman Sadeghi Afgeh Kevin Ryan Constantino Carlos Reyes-Aldasoro and Artur d\u2019Avila Garcez. 2021. Contrastive Counterfactual Visual Explanations with Overdetermination. DOI:10.48550\/ARXIV.2106.14556","DOI":"10.48550\/ARXIV.2106.14556"},{"key":"e_1_3_4_361_2","unstructured":"Anjana Wijekoon Nirmalie Wiratunga Ikechukwu Nkisi-Orji Kyle Martin Chamath Palihawadana and David Corsar. 2021. Counterfactual explanations for student outcome prediction with Moodle footprints. In Proceedings of the CEUR Workshop 1\u20138. https:\/\/rgu-repository.worktribe.com\/output\/1395861"},{"key":"e_1_3_4_362_2","first-page":"1466","volume-title":"Proceedings of the 2021 IEEE 33rd International Conference on Tools with Artificial Intelligence (ICTAI\u201921)","author":"Wiratunga Nirmalie","year":"2021","unstructured":"Nirmalie Wiratunga, Anjana Wijekoon, Ikechukwu Nkisi-Orji, Kyle Martin, Chamath Palihawadana, and David Corsar. 2021. DisCERN: Discovering counterfactual explanations using relevance features from neighbourhoods. In Proceedings of the 2021 IEEE 33rd International Conference on Tools with Artificial Intelligence (ICTAI\u201921). 1466\u20131473. DOI:10.1109\/ICTAI52525.2021.00233"},{"volume-title":"Making Things Happen: A Theory of Causal Explanation","year":"2003","author":"Woodward James","key":"e_1_3_4_363_2","unstructured":"James Woodward. 2003. Making Things Happen: A Theory of Causal Explanation. Oxford University Press."},{"key":"e_1_3_4_364_2","unstructured":"Xintao Xiang and Artem Lenskiy. 2022. Realistic Counterfactual Explanations by Learned Relations. https:\/\/arxiv.org\/abs\/2202.07356"},{"key":"e_1_3_4_365_2","unstructured":"Shuyuan Xu Yunqi Li Shuchang Liu Zuohui Fu Yingqiang Ge Xu Chen and Yongfeng Zhang. 2021. 
Learning causal explanations for recommendation. CEUR Workshop Proceedings 2911 (2021) 13\u201325."},{"key":"e_1_3_4_366_2","doi-asserted-by":"publisher","unstructured":"Yaniv Yacoby Ben Green Christopher L. Griffin and Finale Doshi-Velez. 2022. \u201cIf it didn\u2019t happen why would I Change my Decision?\u201d: How Judges Respond to Counterfactual Explanations for the Public Safety Assessment. DOI:10.48550\/ARXIV.2205.05424","DOI":"10.48550\/ARXIV.2205.05424"},{"key":"e_1_3_4_367_2","doi-asserted-by":"publisher","unstructured":"Prateek Yadav Peter Hase and Mohit Bansal. 2021. Low-Cost Algorithmic Recourse for Users with Uncertain Cost Functions. DOI:10.48550\/ARXIV.2111.01235","DOI":"10.48550\/ARXIV.2111.01235"},{"key":"e_1_3_4_368_2","doi-asserted-by":"publisher","unstructured":"Fan Yang Sahan Suresh Alva Jiahao Chen and Xia Hu. 2021. Model-based counterfactual synthesizer for interpretation. In Proceedings of KDD \u201921. ACM New York 1964\u20131974. DOI:10.1145\/3447548.3467333","DOI":"10.1145\/3447548.3467333"},{"key":"e_1_3_4_369_2","doi-asserted-by":"publisher","DOI":"10.1145\/3468507.3468517"},{"key":"e_1_3_4_370_2","first-page":"6150","volume-title":"Proceedings of ICCL","author":"Yang Linyi","year":"2020","unstructured":"Linyi Yang, Eoin Kenny, Tin Lok James Ng, Yi Yang, Barry Smyth, and Ruihai Dong. 2020. Generating plausible counterfactual explanations for deep transformers in financial text classification. In Proceedings of ICCL. 6150\u20136160. DOI:10.18653\/v1\/2020.coling-main.541"},{"key":"e_1_3_4_371_2","first-page":"1730","volume-title":"Proceedings of ICASSP 2022","author":"Yang Nakyeong","year":"2022","unstructured":"Nakyeong Yang, Taegwan Kang, and Kyomin Jung. 2022. Deriving explainable discriminative attributes using confusion about counterfactual class. In Proceedings of ICASSP 2022. 1730\u20131734. 
DOI:10.1109\/ICASSP43922.2022.9747693"},{"key":"e_1_3_4_372_2","doi-asserted-by":"publisher","unstructured":"Yuanshun Yao Chong Wang and Hang Li. 2022. Counterfactually Evaluating Explanations in Recommender Systems. DOI:10.48550\/ARXIV.2203.01310","DOI":"10.48550\/ARXIV.2203.01310"},{"key":"e_1_3_4_373_2","doi-asserted-by":"publisher","DOI":"10.24432\/C55S3H"},{"key":"e_1_3_4_374_2","unstructured":"Roozbeh Yousefzadeh and Dianne P. O\u2019Leary. 2019. Debugging Trained Machine Learning Models using Flip Points. https:\/\/debug-ml-iclr2019.github.io\/cameraready\/DebugML-19_paper_11.pdf"},{"key":"e_1_3_4_375_2","doi-asserted-by":"publisher","unstructured":"Zixuan Yuan Yada Zhu Wei Zhang Ziming Huang Guangnan Ye and Hui Xiong. 2021. Multi-Domain Transformer-Based Counterfactual Augmentation for Earnings Call Analysis. DOI:10.48550\/ARXIV.2112.00963","DOI":"10.48550\/ARXIV.2112.00963"},{"key":"e_1_3_4_376_2","doi-asserted-by":"publisher","unstructured":"Wencan Zhang and Brian Y Lim. 2022. Towards relatable explainable AI with the perceptual process. ACM New York DOI:10.1145\/3491102.3501826","DOI":"10.1145\/3491102.3501826"},{"key":"e_1_3_4_377_2","first-page":"518","volume-title":"Proceedings of ICAART","author":"Zhang Yuhao","year":"2022","unstructured":"Yuhao Zhang, Kevin McAreavey, and Weiru Liu. 2022. Developing and experimenting on approaches to explainability in AI systems. In Proceedings of ICAART. SciTePress, 518\u2013527. DOI:10.5220\/0010900300003116"},{"key":"e_1_3_4_378_2","doi-asserted-by":"publisher","unstructured":"Yunxia Zhao. 2020. Fast Real-time Counterfactual Explanations. DOI:10.48550\/ARXIV.2007.05684","DOI":"10.48550\/ARXIV.2007.05684"},{"key":"e_1_3_4_379_2","doi-asserted-by":"crossref","first-page":"1365","DOI":"10.1145\/3477314.3507029","volume-title":"Proceedings of the 37th ACM\/SIGAPP Symposium on Applied Computing","author":"Zhong Jinfeng","year":"2022","unstructured":"Jinfeng Zhong and Elsa Negre. 2022. 
Shap-enhanced counterfactual explanations for recommendations. In Proceedings of the 37th ACM\/SIGAPP Symposium on Applied Computing. ACM, New York, 1365\u20131372. DOI:10.1145\/3477314.3507029"},{"key":"e_1_3_4_380_2","first-page":"2921","volume-title":"Proceedings of CVPR","author":"Zhou B.","year":"2016","unstructured":"B. Zhou, A. Khosla, A. Lapedriza, A. Oliva, and A. Torralba. 2016. Learning deep features for discriminative localization. In Proceedings of CVPR. IEEE, New York, USA, 2921\u20132929. DOI:10.1109\/CVPR.2016.319"},{"key":"e_1_3_4_381_2","doi-asserted-by":"publisher","unstructured":"Yao Zhou Haonan Wang Jingrui He and Haixun Wang. 2021. From Intrinsic to Counterfactual: On the Explainability of Contextualized Recommender Systems. DOI:10.48550\/ARXIV.2110.14844","DOI":"10.48550\/ARXIV.2110.14844"},{"key":"e_1_3_4_382_2","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-642-04174-7_45"}],"container-title":["ACM Computing Surveys"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/dl.acm.org\/doi\/pdf\/10.1145\/3677119","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2024,10,3]],"date-time":"2024-10-03T12:44:05Z","timestamp":1727959445000},"score":1,"resource":{"primary":{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3677119"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2024,10,3]]},"references-count":381,"journal-issue":{"issue":"12","published-print":{"date-parts":[[2024,12,31]]}},"alternative-id":["10.1145\/3677119"],"URL":"https:\/\/doi.org\/10.1145\/3677119","relation":{},"ISSN":["0360-0300","1557-7341"],"issn-type":[{"type":"print","value":"0360-0300"},{"type":"electronic","value":"1557-7341"}],"subject":[],"published":{"date-parts":[[2024,10,3]]},"assertion":[{"value":"2023-07-25","order":0,"name":"received","label":"Received","group":{"name":"publication_history","label":"Publication 
History"}},{"value":"2024-07-05","order":2,"name":"accepted","label":"Accepted","group":{"name":"publication_history","label":"Publication History"}},{"value":"2024-10-03","order":3,"name":"published","label":"Published","group":{"name":"publication_history","label":"Publication History"}}]}}