Counterfactuals are widely used to explain ML model predictions by providing alternative scenarios that would lead to more desired predictions. They can be generated by a variety of methods that optimize different, sometimes conflicting, quality measures and therefore produce quite different solutions. Choosing the most appropriate explanation method, and then one of the counterfactuals it generates, is not an easy task. Instead of forcing the user to test many explanation methods and analyse their conflicting solutions, in this paper we propose a multi-stage ensemble approach that selects a single counterfactual based on multiple-criteria analysis. It offers a compromise solution that scores well on several popular quality measures. The approach exploits the dominance relation and the ideal point decision aid method, which selects one counterfactual from the Pareto front. The conducted experiments demonstrate that the proposed approach generates fully actionable counterfactuals with attractive compromise values of the considered quality measures.
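As an illustration only, and not the authors' implementation, the following Python sketch shows how the two ingredients named above could be combined: a dominance test that keeps only Pareto-optimal counterfactuals, followed by an ideal-point rule that picks the candidate closest to the per-criterion optimum. The quality measures, their number, and the assumption that all of them are minimized are hypothetical.

    # Hedged sketch: Pareto-front filtering plus ideal-point selection over
    # candidate counterfactuals scored on several quality measures.
    # Assumes (for illustration) that every measure is to be minimized.
    import numpy as np

    def dominates(a, b):
        """a dominates b if a is no worse on every criterion and better on at least one."""
        return np.all(a <= b) and np.any(a < b)

    def pareto_front(scores):
        """Return indices of non-dominated rows in a (n_candidates, n_criteria) array."""
        idx = []
        for i, s in enumerate(scores):
            if not any(dominates(scores[j], s) for j in range(len(scores)) if j != i):
                idx.append(i)
        return idx

    def ideal_point_choice(scores):
        """Select the Pareto-optimal candidate closest (Euclidean) to the ideal point."""
        front = pareto_front(scores)
        ideal = scores[front].min(axis=0)          # best value per criterion
        dists = np.linalg.norm(scores[front] - ideal, axis=1)
        return front[int(np.argmin(dists))]

    # Example: 4 counterfactuals scored on, say, proximity, sparsity, implausibility.
    scores = np.array([[0.2, 3, 0.5],
                       [0.4, 2, 0.3],
                       [0.9, 5, 0.9],
                       [0.3, 4, 0.2]])
    print("selected counterfactual index:", ideal_point_choice(scores))

In practice the criteria would typically be normalized to a common scale before computing distances to the ideal point, so that no single measure dominates the selection.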
The techniques of explainability and interpretability are not alternatives for many real-world problems, as recent studies often suggest. Interpretable machine learning is not a subset of explainable artificial intelligence, or vice versa. While the former aims to build glass-box predictive models, the latter seeks to understand a black box using an explanatory model, a surrogate model, an attribution approach, relevance importance, or other statistics. There is concern that definitions, approaches, and methods do not match, leading to the inconsistent classification of deep learning systems and models for interpretation and explanation. In this paper, we attempt to systematically evaluate and classify the various basic methods of interpretability and explainability used in the field of deep learning. One goal of this paper is to provide specific definitions for interpretability and explainability in deep learning. Another goal is to spell out the various research methods for interpretability and explainability through the lens of the literature, in order to create a systematic classifier for interpretability and explainability in deep learning. We present a classifier that summarizes the basic techniques and methods of explainability and interpretability models. The evaluation of the classifier provides insights into the challenges of developing a complete and unified deep learning framework for interpretability and explainability concepts, approaches, and techniques.
Electronic Commerce (E-Commerce) has become one of the most significant consumer-facing tech industries in recent years. It has considerably enhanced people's lives by allowing them to shop online from the comfort of their own homes. Although many people are accustomed to online shopping, e-commerce merchants face a significant problem: a high percentage of checkout abandonment. In this study, we propose an end-to-end Machine Learning (ML) system that assists the merchant in minimizing the rate of checkout abandonment through proper decision making and strategy. As part of the system, we developed a robust machine learning model that predicts whether a customer will check out the products added to the cart based on the customer's activity. Our system also gives merchants the opportunity to explore the underlying reasons behind each individual prediction. This will indisputably help online merchants with business growth and effective stock management.
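Purely as a hedged sketch of the kind of pipeline such a system might contain (the paper does not specify its components), the example below trains a gradient-boosting classifier on synthetic session-activity features to predict checkout and uses SHAP values to surface per-prediction reasons. The feature names, the model choice, and the use of SHAP are assumptions made for illustration.

    # Illustrative sketch only: checkout prediction from session activity plus
    # a per-prediction explanation. Features, model, and explainer are assumed,
    # not taken from the paper.
    import numpy as np
    import pandas as pd
    from sklearn.ensemble import GradientBoostingClassifier
    from sklearn.model_selection import train_test_split
    import shap

    # Hypothetical session features (pages viewed, time on site, cart value, ...).
    rng = np.random.default_rng(0)
    X = pd.DataFrame({
        "pages_viewed": rng.integers(1, 30, 1000),
        "session_minutes": rng.uniform(0.5, 60, 1000),
        "cart_value": rng.uniform(5, 500, 1000),
        "returning_customer": rng.integers(0, 2, 1000),
    })
    # Synthetic label: did the session end in a checkout?
    y = (0.02 * X["pages_viewed"] + 0.01 * X["session_minutes"]
         + 0.3 * X["returning_customer"] + rng.normal(0, 0.3, 1000)) > 0.6

    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
    model = GradientBoostingClassifier().fit(X_train, y_train)

    # Per-prediction reasons for a single session via SHAP values on the tree model.
    explainer = shap.TreeExplainer(model)
    contributions = explainer.shap_values(X_test.iloc[[0]])
    print("checkout probability:", model.predict_proba(X_test.iloc[[0]])[0, 1])
    print(dict(zip(X.columns, np.ravel(contributions))))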