Article title

Review of XAI methods for application in heavy industry

Content
Identifiers
Title variants
Publication languages
EN
Abstracts
EN
In recent years, considerable progress has been made in the fields of artificial intelligence and machine learning. This progress makes it possible to solve increasingly complex problems, but it also requires appropriate explanations of the actions taken by AI. For this purpose, research into Explainable Artificial Intelligence (XAI) has been initiated, and interest in the topic is constantly growing. This review of XAI methods justifies the need for solutions that explain artificial intelligence models, describes the differences between the various methods, and presents example methods suited to different cases. The paper also aims to solve a real problem occurring in heavy industry. The third chapter describes the challenges to be faced, the solution developed, and the results of the work. The study concludes with a summary of the research findings.
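For orientation, the following is a minimal sketch, assuming Python with scikit-learn, of what one simple model-agnostic explanation technique (permutation feature importance) produces in practice; the classifier, the synthetic "sensor" data, and the feature names are hypothetical and invented for illustration only. It is not the solution developed in the paper.

    # Illustrative only: a minimal, hypothetical sketch of a model-agnostic
    # explanation (permutation feature importance) computed with scikit-learn.
    # The synthetic "sensor" data and feature names are invented for this example.
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.inspection import permutation_importance
    from sklearn.model_selection import train_test_split

    # Synthetic tabular data standing in for industrial sensor readings.
    X, y = make_classification(n_samples=500, n_features=6, n_informative=3,
                               random_state=0)
    feature_names = [f"sensor_{i}" for i in range(X.shape[1])]

    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
    model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

    # Shuffle one feature at a time and measure the drop in test accuracy:
    # a simple global, model-agnostic view of which inputs the model relies on.
    result = permutation_importance(model, X_test, y_test, n_repeats=10,
                                    random_state=0)
    for name, importance in sorted(zip(feature_names, result.importances_mean),
                                   key=lambda pair: -pair[1]):
        print(f"{name}: {importance:.3f}")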
Publisher
Year
Pages
31–43
Physical description
Bibliography: 38 items, figures
Authors
  • AGH University of Krakow, Department of Applied Computer Science and Modelling, Krakow, Poland
  • AGH University of Krakow, Department of Applied Computer Science and Modelling, Krakow, Poland
  • AGH University of Krakow, Department of Applied Computer Science and Modelling, Krakow, Poland
Notes
Record created with funding from the Ministry of Science and Higher Education (MNiSW), agreement no. POPUL/SP/0154/2024/02, under the programme "Społeczna odpowiedzialność nauki II" (Social Responsibility of Science II), module: Popularyzacja nauki (Science Popularisation) (2025).
Document type
YADDA identifier
bwmeta1.element.baztech-8c73497d-b167-43e2-abea-4a41cf51db42