Article title

A multi-criteria approach for selecting an explanation from the set of counterfactuals produced by an ensemble of explainers

Publication languages
EN
Abstract
EN
Counterfactuals are widely used to explain ML model predictions by providing alternative scenarios for obtaining more desired predictions. They can be generated by a variety of methods that optimize various, sometimes conflicting, quality measures and produce quite different solutions. However, choosing the most appropriate explanation method and one of the generated counterfactuals is not an easy task. Instead of forcing the user to test many different explanation methods and analyse their conflicting solutions, in this paper we propose a multi-stage ensemble approach that selects a single counterfactual based on multiple-criteria analysis. It offers a compromise solution that scores well on several popular quality measures. This approach exploits the dominance relation and the ideal point decision aid method, which selects one counterfactual from the Pareto front. The conducted experiments demonstrate that the proposed approach generates fully actionable counterfactuals with attractive compromise values of the considered quality measures.
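
To make the selection step concrete, the following is a minimal, illustrative sketch (in Python) of the two ideas named above: the dominance relation keeps only the Pareto-optimal candidate counterfactuals, and the ideal point method then picks the single candidate closest to the vector of component-wise best criterion values. This is not the authors' implementation; the quality measures, the min-max normalisation and the toy scores are assumptions made for the example, and all criteria are treated as minimized.

import numpy as np

def pareto_front(scores):
    """Return indices of non-dominated candidates (all criteria minimized)."""
    n = scores.shape[0]
    keep = np.ones(n, dtype=bool)
    for i in range(n):
        # candidate i is dropped if some other candidate is no worse on every
        # criterion and strictly better on at least one
        dominators = np.all(scores <= scores[i], axis=1) & np.any(scores < scores[i], axis=1)
        if dominators.any():
            keep[i] = False
    return np.flatnonzero(keep)

def ideal_point_choice(scores):
    """Select the Pareto-optimal candidate closest (Euclidean distance after
    min-max normalisation) to the ideal point of component-wise best values."""
    front = pareto_front(scores)
    sub = scores[front]
    rng = sub.max(axis=0) - sub.min(axis=0)
    rng[rng == 0] = 1.0                      # constant criterion: avoid division by zero
    norm = (sub - sub.min(axis=0)) / rng
    ideal = norm.min(axis=0)                 # best achievable value of each criterion
    dist = np.linalg.norm(norm - ideal, axis=1)
    return front[int(np.argmin(dist))]

# Toy example: rows are candidate counterfactuals produced by different
# explainers, columns are hypothetical quality measures to minimize
# (e.g. proximity, sparsity, implausibility).
candidates = np.array([
    [0.20, 3.0, 0.10],
    [0.35, 2.0, 0.05],
    [0.50, 4.0, 0.40],   # dominated by the first candidate
    [0.15, 5.0, 0.30],
])
print("selected counterfactual:", ideal_point_choice(candidates))

The multi-stage ensemble described in the paper also involves ensuring that the returned counterfactual is actionable and uses its own set of quality measures; the sketch only shows the dominance/ideal-point core of the selection.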
Pages
119--133
Physical description
Bibliography: 36 items, figures, tables
Authors
  • Institute of Computing Sciences, Poznan University of Technology, ul. Piotrowo 2, 60-965 Poznań, Poland
  • Institute of Computing Sciences, Poznan University of Technology, ul. Piotrowo 2, 60-965 Poznań, Poland
  • Institute of Computing Sciences, Poznan University of Technology, ul. Piotrowo 2, 60-965 Poznań, Poland
Bibliography
  • [1] Bodria, F., Giannotti, F., Guidotti, R., Naretto, F., Pedreschi, D. and Rinzivillo, S. (2021). Benchmarking and survey of explanation methods for black box models, Data Mining and Knowledge Discovery 37(5): 1719-1778.
  • [2] Branke, J., Deb, K., Miettinen, K. and Słowiński, R. (2008). Multiobjective Optimization: Interactive and Evolutionary Approaches, Springer, Berlin/Heidelberg.
  • [3] Chapman-Rounds, M., Bhatt, U., Pazos, E., Schulz, M.-A. and Georgatzis, K. (2021). FIMAP: Feature importance by minimal adversarial perturbation, Proceedings of the AAAI Conference on Artificial Intelligence, pp. 11433-11441, (virtual).
  • [4] Dandl, S., Molnar, C., Binder, M. and Bischl, B. (2020). Multi-objective counterfactual explanations, in T. Bäck et al. (Eds), Parallel Problem Solving from Nature, PPSN XVI, Springer, Cham, pp. 448-469.
  • [5] Dhurandhar, A., Chen, P.-Y., Luss, R., Tu, C.-C., Ting, P., Shanmugam, K. and Das, P. (2018). Explanations based on the missing: Towards contrastive explanations with pertinent negatives, 32nd International Conference Neural Information Processing Systems, Montreal, Canada, pp. 590-601.
  • [6] Ehrgott, M. (2005). Multicriteria Optimization, Springer-Verlag.
  • [7] Ehrgott, M. and Tenfelde-Podehl, D. (2003). Computation of ideal and nadir values and implications for their use in MCDM methods, European Journal of Operational Research 151(1): 119-139.
  • [8] Falbogowski, M., Stefanowski, J., Trafas, Z. and Wojciechowski, A. (2022). The impact of using constraints on counterfactual explanations, Proceedings of the 3rd Polish Conference on Artificial Intelligence, PP-RAI 2022, Gdynia, Poland, pp. 81-84.
  • [9] Förster, M., Hühn, P., Klier, M. and Kluge, K. (2021). Capturing users’ reality: A novel approach to generate coherent counterfactual explanations, Proceedings of the 54th Hawaii International Conference on System Sciences, Maui, USA, pp. 1274-1284.
  • [10] Guidotti, R. (2022). Counterfactual explanations and how to find them: Literature review and benchmarking, Data Mining and Knowledge Discovery, DOI: 10.1007/s10618-022-00831-6.
  • [11] Guidotti, R. and Ruggieri, S. (2021). Ensemble of counterfactual explainers, 24th International Conference on Discovery Science, Halifax, Canada, pp. 358-368.
  • [12] Inbar, Y., Botti, S. and Hanko, K. (2011). Decision speed and choice regret: When haste feels like waste, Journal of Experimental Social Psychology 47(3): 533-540.
  • [13] Iyengar, S. and Lepper, M.R. (2000). When choice is demotivating: Can one desire too much of a good thing?, Journal of Personality and Social Psychology 79(6): 995-1006.
  • [14] Klaise, J., Van Looveren, A., Vacanti, G. and Coca, A. (2021). Alibi Explain: Algorithms for explaining machine learning models, Journal of Machine Learning Research 22(1): 1-7.
  • [15] Kuncheva, L.I. (2004). Combining Pattern Classifiers: Methods and Algorithms, Wiley, Hoboken.
  • [16] Laugel, T., Lesot, M.-J., Marsala, C., Renard, X. and Detyniecki, M. (2018). Comparison-based inverse classification for interpretability in machine learning, in J. Medina et al. (Eds), Information Processing and Management of Uncertainty in Knowledge-Based Systems: Theory and Foundations, Springer, Cham, pp. 100-111.
  • [17] Mertes, S., Huber, T., Weitz, K., Heimerl, A. and André, E. (2022). GANterfactual - Counterfactual explanations for medical non-experts using generative adversarial learning, Frontiers in Artificial Intelligence 5.
  • [18] Miller, T. (2019). Explanation in artificial intelligence: Insights from the social sciences, Artificial Intelligence 267: 1-38.
  • [19] Moore, J., Hammerla, N. and Watkins, C. (2019). Explaining deep learning models with constrained adversarial examples, in A.C. Nayak and A. Sharma (Eds), PRICAI 2019: Trends in Artificial Intelligence, Springer, Cham, pp. 43-56.
  • [20] Mothilal, R.K., Sharma, A. and Tan, C. (2020). Explaining machine learning classifiers through diverse counterfactual explanations, Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency, Barcelona, Spain, pp. 607-617.
  • [21] Pawelczyk, M., Bielawski, S., van den Heuvel, J., Richter, T. and Kasneci, G. (2021). CARLA: A Python library to benchmark algorithmic recourse and counterfactual explanation algorithms, arXiv 2108.00783.
  • [22] Pearl, J., Glymour, M. and Jewell, N. (2016). Causal Inference in Statistics: A Primer, Wiley, Hoboken.
  • [23] Poyiadzi, R., Sokol, K., Santos-Rodriguez, R., De Bie, T. and Flach, P. (2020). FACE: Feasible and actionable counterfactual explanations, Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, Association for Computing Machinery, New York, USA, pp. 344-350.
  • [24] Rasouli, P. and Chieh Yu, I. (2022). CARE: Coherent actionable recourse based on sound counterfactual explanations, International Journal of Data Science and Analytics 17(1): 1-26.
  • [25] Skulimowski, A. (1990). Applicability of ideal points in multicriteria decision-making, Proceedings of the 9th International Conference on Multiple Criteria Decision-Making, Fairfax, USA, pp. 5-8.
  • [26] Spreitzer, N., Haned, H. and van der Linden, I. (2022). Evaluating the practicality of counterfactual explanations, Workshop on Trustworthy and Socially Responsible Machine Learning, NeurIPS 2022, New Orleans, USA.
  • [27] Stefanowski, J. (2023). Multi-criteria approaches to explaining black box machine learning models, Asian Conference on Intelligent Information and Database Systems ACIIDS 2023, Phuket, Thailand, pp. 195-208.
  • [28] Stepka, I., Lango, M. and Stefanowski, J. (2023). On usefulness of dominance relation for selecting counterfactuals from the ensemble of explainers, Proceedings of the 4th Polish Conference on Artificial Intelligence, PP-RAI 2023, Łódź, Poland, pp. 125-130.
  • [29] Steuer, R. (1986). Multiple Criteria Optimization: Theory, Computation, and Application, Wiley, Hoboken.
  • [30] Tolomei, G., Silvestri, F., Haines, A. and Lalmas, M. (2017). Interpretable predictions of tree-based ensembles via actionable feature tweaking, Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, Halifax, Canada.
  • [31] Ustun, B., Spangher, A. and Liu, Y. (2019). Actionable recourse in linear classification, Proceedings of the Conference on Fairness, Accountability, and Transparency, Atlanta, USA, pp. 10-19.
  • [32] Van Looveren, A. and Klaise, J. (2021). Interpretable counterfactual explanations guided by prototypes, in N. Oliver et al. (Eds), Machine Learning and Knowledge Discovery in Databases: Research Track, Springer, Cham, pp. 650-665.
  • [33] Verma, S., Boonsanong, V., Hoang, M., Hines, K.E., Dickerson, J.P. and Shah, C. (2020). Counterfactual explanations and algorithmic recourses for machine learning: A review, arXiv 2010.10596.
  • [34] Wachter, S., Mittelstadt, B. and Russell, C. (2017). Counterfactual explanations without opening the black box: Automated decisions and the GDPR, Harvard Journal of Law & Technology 31(2): 841-887.
  • [35] Wellawatte, G.P., Seshadri, A. and White, A.D. (2022). Model agnostic generation of counterfactual explanations for molecules, Chemical Science 13(13): 3697-3705.
  • [36] Wilson, D.R. and Martinez, T.R. (1997). Improved heterogeneous distance functions, Journal of Artificial Intelligence Research 6: 1-34.
Notes
Record developed with funds from the Ministry of Science and Higher Education (MNiSW), agreement no. SONP/SP/546092/2022, under the programme "Społeczna odpowiedzialność nauki" (Social Responsibility of Science), module: Popularisation of Science and Promotion of Sport (2024).
Document type
Bibliography
YADDA identifier
bwmeta1.element.baztech-9279cf11-a3c0-45f0-af11-69b12b31882b