Article title

Vulnerability to One-Pixel Attacks of Neural Network Architectures in Medical Image Classification

Publication languages
EN
Abstracts
EN
Objective: The use of neural networks for disease classification based on medical imaging is susceptible to variations in results caused by even a single-pixel change, a phenomenon known as a one-pixel attack, which should be examined qualitatively and quantitatively. Methods: For an extended dataset of brain MRI images representing four diagnoses, the networks VGG-16, ResNet-50, DenseNet-121, MobileNetV2, EfficientNet-B0, NASNetMobile, and ViT Base were implemented. Each model was trained three times on 96 × 96 inputs, with the best-performing trial selected for adversarial testing (Phase 1). The three most robust models from Phase 1 (VGG-16, MobileNetV2, EfficientNet-B0) were then retrained on 224 × 224 inputs to assess the effect of higher resolution on susceptibility (Phase 2). The susceptibility of the predicted diagnosis to a single bright-pixel alteration in the input image was assessed, and the average number of vulnerable pixels (ANVP) per image was computed. Results: At 96 × 96 resolution, the least vulnerable model was MobileNetV2 (ANVP: 20.45, susceptibility: 0.22%). This was followed by ViT Base (22.20, 0.24%), EfficientNet-B0 (38.55, 0.42%), DenseNet-121 (43.52, 0.47%), ResNet-50 (69.11, 0.75%), and VGG-16 (78.66, 0.85%). The most vulnerable was NASNetMobile (119.52, 1.30%). At 224 × 224 resolution, robustness further improved for EfficientNet-B0 (37.53, 0.07%) and MobileNetV2 (49.51, 0.10%), while VGG-16 remained less stable (99.44, 0.20%). Conclusions: Implementing disease classification based on medical imaging using neural networks may pose a risk of misinterpretation due to changes in the data that are irrelevant to the examination, even though such changes are clearly noticeable to a human.
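The ANVP measurement described in the Methods can be sketched as follows. This is a minimal illustration, not the authors' implementation: `predict` stands in for any trained classifier, and the `bright` value is an assumed maximum-intensity perturbation.

```python
import numpy as np

def vulnerable_pixel_count(image, predict, bright=1.0):
    """Count pixels whose change to maximum brightness flips the model's
    predicted class (a single-bright-pixel variant of the one-pixel attack).
    `predict` maps an image array to a class index; it is a placeholder
    for any trained classifier, not the authors' code."""
    base = predict(image)
    count = 0
    h, w = image.shape[:2]
    for y in range(h):
        for x in range(w):
            perturbed = image.copy()
            perturbed[y, x] = bright  # alter exactly one pixel
            if predict(perturbed) != base:
                count += 1
    return count

def anvp(images, predict):
    """Average number of vulnerable pixels (ANVP) over a set of images."""
    return float(np.mean([vulnerable_pixel_count(im, predict)
                          for im in images]))
```

Note that the susceptibility percentages quoted in the Results are consistent with ANVP divided by the number of pixels per image (e.g. 20.45 / 96² ≈ 0.22%).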
Year
Pages
58--70
Physical description
Bibliography: 25 items, figures, tables.
Authors
  • AGH University of Krakow, Poland
  • Silesian University of Technology, Gliwice, Poland
  • AGH University of Krakow, Poland
Bibliography
  • 1. Li J, Chen J, Tang Y, Wang C, Landman BA, Zhou SK. Transforming medical imaging with Transformers? A comparative review of key properties, current progresses, and future perspectives. Med. Image Anal. 2023 Apr;85:102762.
  • 2. Sarvamangala DR, Kulkarni RV. Convolutional neural networks in medical image understanding: a survey. Evol Intel. 2022 Mar;15(1):1-22.
  • 3. Marey A, Serdysnki KC, Killeen BD, Unberath M, Umair M. Applications and implementation of generative artificial intelligence in cardiovascular imaging with a focus on ethical and legal considerations: what cardiovascular imagers need to know! BJR Artificial Intelligence. 2024 Mar 4;1(1):ubae008.
  • 4. Ker J, Wang L, Rao J, Lim T. Deep Learning Applications in Medical Image Analysis. IEEE Access. 2018;6:9375-89.
  • 5. Shurrab S, Duwairi R. Self-supervised learning methods and applications in medical imaging analysis: A survey. PeerJ Computer Science. 2022 Jul 19;8:e1045.
  • 6. Bishop CM. Pattern recognition and machine learning. New York: Springer; 2006.
  • 7. Altaf F, Islam SMS, Akhtar N, Janjua NK. Going Deep in Medical Image Analysis: Concepts, Methods, Challenges, and Future Directions. IEEE Access. 2019;7:99540-72.
  • 8. Sindhura DN, Pai RM, Bhat SN, Pai MMM. A review of deep learning and Generative Adversarial Networks applications in medical image analysis. Multimed. Syst. 2024 Jun;30(3):161.
  • 9. Yang X. Medical Image Synthesis: Methods and Clinical Applications [Internet]. 1st ed. Boca Raton: CRC Press; 2023 [cited 2025 Jun 18]. Available from: https://www.taylorfrancis.com/books/9781003243458.
  • 10. Finlayson SG, Bowers JD, Ito J, Zittrain JL, Beam AL, Kohane IS. Adversarial attacks on medical machine learning. Science. 2019 Mar 22;363(6433):1287-9.
  • 11. Hirano H, Minagi A, Takemoto K. Universal adversarial attacks on deep neural networks for medical image classification. BMC Med Imaging. 2021 Dec;21(1):9.
  • 12. Puttagunta MK, Ravi S, Nelson Kennedy Babu C. Adversarial examples: attacks and defences on medical deep learning systems. Multimed Tools Appl. 2023 Sep;82(22):33773-809.
  • 13. Tsai MJ, Lin PY, Lee ME. Adversarial Attacks on Medical Image Classification. Cancers. 2023 Aug 23;15(17):4228.
  • 14. Su J, Vargas DV, Kouichi S. One pixel attack for fooling deep neural networks. IEEE Trans Evol Computat. 2019 Oct;23(5):828-41.
  • 15. Wang Y, Wang W, Wang X, Chen Z. On One-Pixel Attacking Medical Images against Deep Learning Models. In: Proceedings of the 2023 4th International Symposium on Artificial Intelligence for Medicine Science [Internet]. Chengdu China: ACM; 2023 [cited 2025 Jun 9]. p. 248-57. Available from: https://dl.acm.org/doi/10.1145/3644116.3644161.
  • 16. Nickparvar M. Brain Tumor MRI Dataset [Internet]. Kaggle; [cited 2025 Jul 22]. Available from: https://www.kaggle.com/dsv/2645886.
  • 17. Bhuvaji S, Kadam A, Bhumkar P, Dedge S, Kanchan S. Brain Tumor Classification (MRI) [Internet]. Kaggle; [cited 2025 Jun 19]. Available from: https://www.kaggle.com/dsv/1183165.
  • 18. Simonyan K, Zisserman A. Very Deep Convolutional Networks For Large-Scale Image Recognition. 2015; arXiv:1409.1556. doi: https://doi.org/10.48550/arXiv.1409.1556.
  • 19. He K, Zhang X, Ren S, Sun J. Deep Residual Learning for Image Recognition. 2015; arXiv:1512.03385. doi: https://doi.org/10.48550/arXiv.1512.03385.
  • 20. Huang G, Liu Z, Maaten L van der, Weinberger KQ. Densely Connected Convolutional Networks. 2018; arXiv:1608.06993. doi: https://doi.org/10.48550/arXiv.1608.06993.
  • 21. Sandler M, Howard A, Zhu M, Zhmoginov A, Chen LC. MobileNetV2: Inverted Residuals and Linear Bottlenecks. 2019; arXiv:1801.04381. doi: https://doi.org/10.48550/arXiv.1801.04381.
  • 22. Tan M, Le QV. EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks. 2020; arXiv:1905.11946. doi: https://doi.org/10.48550/arXiv.1905.11946.
  • 23. Zoph B, Vasudevan V, Shlens J, Le QV. Learning Transferable Architectures for Scalable Image Recognition. 2018; arXiv:1707.07012. doi: https://doi.org/10.48550/arXiv.1707.07012.
  • 24. Dosovitskiy A, Beyer L, Kolesnikov A, Weissenborn D, Zhai X, Unterthiner T, et al. An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale. 2020; arXiv:2010.11929. doi: https://doi.org/10.48550/arXiv.2010.11929.
  • 25. Vargas DV, Su J. Understanding the One-Pixel Attack: Propagation Maps and Locality Analysis. 2019; arXiv:1902.02947. doi: https://doi.org/10.48550/arXiv.1902.02947.
Document type
YADDA identifier
bwmeta1.element.baztech-1421d89a-df77-4e13-9632-2e7a4a053601