Results found: 4

Search results
Searched for: keyword "XAI"
EN
Our study aimed to develop an explanatory method for predicting Coronary Artery Disease (CAD) classification from SPECT images. Deep neural networks usually consist of many layers connected through interlocking network nodes, and even if we inspect the classes and describe their relationships, it is difficult to fully understand how a trained network arrives at its predictions. Deep learning is therefore still considered a "black box". Existing XAI (eXplainable Artificial Intelligence) approaches can provide insight into the inside of a deep learning model, allowing for transparency and interpretation. Our previous research helped doctors diagnose CAD by developing deep learning models within a multi-stage transfer learning framework; the model achieved 0.955 accuracy, 0.932 AUC, 0.944 sensitivity, and 0.889 specificity, showing effective performance. Our dataset includes 218 SPECT images from 218 patients collected at 108 Hospital in Hanoi, Vietnam. In this paper, we propose an explainable deep learning framework using three popular XAI approaches: LIME, Grad-CAM, and RISE. These XAI approaches are effective tools for interpreting the predictions of deep learning models. We evaluate the effectiveness of the interpretation by visualizing the explained regions and by using improved deletion and insertion metrics with a threshold limit suited to binary classification. The experimental results show that our model effectively diagnoses CAD and provides a medical interpretation. Furthermore, the proposed way of computing the deletion and insertion metrics is more efficient for binary classification than the traditional metrics.
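To make the thresholded deletion metric concrete, the following is a minimal single-image sketch, assuming the classifier is exposed as a `predict_proba` callable that returns the positive-class probability for a batch of images; the 5% removal step, the 0.5 early-stop threshold, and the zero baseline are illustrative assumptions rather than the authors' exact procedure (the companion insertion metric would gradually reveal pixels instead of removing them).

```python
import numpy as np

def deletion_score(image, saliency, predict_proba, step=0.05, threshold=0.5, baseline=0.0):
    """Progressively blank the most salient pixels and track the model's score.

    image         : (H, W, C) float array
    saliency      : (H, W) importance map from LIME / Grad-CAM / RISE
    predict_proba : callable mapping a batch (N, H, W, C) to positive-class probabilities
    step          : fraction of pixels removed per iteration
    threshold     : stop early once the score drops below this value (binary-class variant)
    baseline      : value used to blank removed pixels
    """
    h, w = saliency.shape
    order = np.argsort(saliency.ravel())[::-1]            # most important pixels first
    per_step = max(1, int(step * h * w))
    perturbed = image.astype(float).copy()
    scores = [float(predict_proba(perturbed[None])[0])]   # score with nothing removed

    for start in range(0, h * w, per_step):
        rows, cols = np.unravel_index(order[start:start + per_step], (h, w))
        perturbed[rows, cols, :] = baseline
        score = float(predict_proba(perturbed[None])[0])
        scores.append(score)
        if score < threshold:                             # thresholded early stop
            break

    # Lower area under the deletion curve = more faithful explanation.
    return np.trapz(scores, dx=1.0 / max(1, len(scores) - 1))
```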
PL
The recent development of artificial intelligence in information technology is remarkable. These developments have led to claims that artificial intelligence could be used in courts to replace judges. In the article, the author addresses the crux of these problems using the concept of interpretable artificial intelligence (XAI, Explainable Artificial Intelligence). The analysis examines how regulation can ensure that artificial intelligence is ethical, and how this ethicality is closely tied to XAI. It concludes that, at present, the contribution of artificial intelligence to the decision-making process is limited by the lack of sufficient explainability and interpretability, although these aspects are adequately addressed and discussed. It is also crucial to consider the impact of artificial intelligence on the legal authority that underpins the justice system. It is further suggested to consider conducting an experimental study in which artificial intelligence is incorporated into the arbitration process.
EN
The recent development of artificial intelligence (AI) in information technology (IT) is remarkable. These developments have led to claims that AI can be used in courts to replace judges. In the article, the author addresses the crux of these issues using the concept of explainable AI (XAI). The article examines how regulation can ensure that AI is ethical, and how this ethicality is closely related to XAI. It concludes that, in the current context, the contribution of AI to the decision-making process is limited by the lack of sufficient explainability and interpretability of AI, although these aspects are adequately addressed and discussed. In addition, it is crucial to consider the impact of AI's contribution on the legal authority that forms the foundation of the justice system, and it is suggested to consider conducting an experimental study incorporating AI into the arbitration process.
3
94%
EN
Electronic commerce (e-commerce) has become one of the most significant consumer-facing tech industries in recent years. This industry has considerably enhanced people's lives by allowing them to shop online from the comfort of their own homes. Although many people are accustomed to online shopping, e-commerce merchants face a significant problem: a high rate of checkout abandonment. In this study, we propose an end-to-end machine learning (ML) system that helps merchants minimize the checkout abandonment rate through proper decision making and strategy. As part of the system, we developed a robust machine learning model that predicts whether a customer will check out the products added to the cart based on the customer's activity. Our system also gives merchants the opportunity to explore the underlying reasons behind each individual prediction. This will help online merchants with business growth and effective stock management.
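As an illustration of this kind of per-prediction explanation, the sketch below trains a toy checkout-completion classifier and reports each feature's signed contribution for a single session. The feature names, the synthetic data, and the choice of a logistic regression explained through its local linear attributions are all assumptions; the abstract does not name the authors' model or explainer.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# Hypothetical session-level features; real systems would use many more.
FEATURES = ["items_in_cart", "session_minutes", "pages_viewed", "is_returning_customer"]

rng = np.random.default_rng(0)
X = rng.normal(size=(500, len(FEATURES)))                 # stand-in for real session data
y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(scale=0.5, size=500) > 0).astype(int)

model = make_pipeline(StandardScaler(), LogisticRegression())
model.fit(X, y)

def explain(session):
    """Return the predicted checkout probability and each feature's signed contribution."""
    scaler = model.named_steps["standardscaler"]
    clf = model.named_steps["logisticregression"]
    z = scaler.transform(session[None])[0]
    contributions = clf.coef_[0] * z                      # local linear attribution
    proba = model.predict_proba(session[None])[0, 1]
    return proba, dict(zip(FEATURES, contributions.round(3)))

proba, reasons = explain(X[0])
print(f"checkout probability: {proba:.2f}", reasons)
```

A linear model is used here only because its per-feature contributions are directly readable; a tree ensemble paired with a dedicated local explainer would be a natural drop-in replacement.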
4
Explainable Spark-based PSO clustering for intrusion detection
84%
EN
Given the exponential growth of available data in large networks, rapid, transparent, and explainable intrusion detection systems have become highly necessary for effectively discovering attacks in such huge networks. To deal with this challenge, we propose a novel explainable intrusion detection system based on Spark, Particle Swarm Optimization (PSO) clustering, and eXplainable Artificial Intelligence (XAI) techniques. Spark is used as a parallel processing model for the effective processing of large-scale data; PSO is integrated to improve the quality of the intrusion detection system by avoiding sensitive initialization and premature convergence of the clustering algorithm; and finally, XAI techniques are used to enhance the interpretability and explainability of intrusion recommendations by providing both micro and macro explanations of detected intrusions. Experiments are conducted on large collections of real datasets to show the effectiveness of the proposed intrusion detection system in terms of explainability, scalability, and accuracy. The proposed system has shown high transparency in assisting security experts and decision-makers to understand and interpret attack behavior.
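To show the core idea of PSO-based clustering in isolation, here is a toy, single-machine NumPy sketch in which each particle encodes a full set of k centroids and fitness is the within-cluster sum of squared errors. The Spark parallelization and the XAI explanation layer described in the abstract are omitted, and every constant (swarm size, inertia, acceleration coefficients, k) is an illustrative assumption.

```python
import numpy as np

def pso_cluster(X, k=3, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Cluster X (n_samples, n_features) by letting a PSO swarm search for k centroids."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    # Each particle encodes a full candidate solution: a (k, d) set of centroids.
    pos = X[rng.integers(0, n, size=(n_particles, k))].astype(float)
    vel = np.zeros_like(pos)

    def sse(centroids):
        # Within-cluster sum of squared errors: the fitness to be minimised.
        dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        return float((dists.min(axis=1) ** 2).sum())

    pbest, pbest_fit = pos.copy(), np.array([sse(p) for p in pos])
    gbest = pbest[pbest_fit.argmin()].copy()

    for _ in range(iters):
        r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
        # Standard PSO update: inertia plus pulls towards personal and global bests.
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = pos + vel
        fit = np.array([sse(p) for p in pos])
        improved = fit < pbest_fit
        pbest[improved], pbest_fit[improved] = pos[improved], fit[improved]
        gbest = pbest[pbest_fit.argmin()].copy()

    labels = np.linalg.norm(X[:, None, :] - gbest[None, :, :], axis=2).argmin(axis=1)
    return gbest, labels

# Example usage on random 2-D data standing in for network traffic features.
centroids, labels = pso_cluster(np.random.default_rng(1).normal(size=(300, 2)), k=3)
```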