Article title
Authors
Identifiers
Title variants
Publication languages
Abstracts
The paper presents a novel approach to investigating adversarial attacks on machine learning classification models operating on tabular data. The method computes diagnostic parameters on an approximated representation of the model under attack and analyzes how these parameters change over time. The authors' hypothesis is that adversarial attack techniques, even when they attempt only low-profile modifications of the input data, influence these diagnostic attributes in a statistically significant way; changes in the diagnostic attributes can therefore be used to detect attack events. Three attack techniques were investigated on real-world datasets. The experiments confirm that the approach is a promising detection technique and is worth further development.
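The detection idea summarized in the abstract can be illustrated with a short, self-contained example. The sketch below is an assumption-laden illustration, not the authors' implementation: it presumes diagnostic attributes have already been computed for two time windows of model queries, and the function name, array shapes, and significance threshold are all hypothetical. It uses the two-sample Kolmogorov-Smirnov test, which matches reference [27] in the bibliography below.

# Illustrative sketch (not the paper's code): flag a possible adversarial
# attack by testing whether any diagnostic attribute's distribution shifted
# between a reference window and a recent window of model queries.
import numpy as np
from scipy import stats

def attack_suspected(reference: np.ndarray, recent: np.ndarray,
                     alpha: float = 0.01) -> bool:
    """Both arrays have shape (n_samples, n_diagnostic_attributes)."""
    for attr in range(reference.shape[1]):
        _, p_value = stats.ks_2samp(reference[:, attr], recent[:, attr])
        if p_value < alpha:  # statistically significant distribution shift
            return True
    return False

# Synthetic demonstration: a low-profile shift in a single attribute.
rng = np.random.default_rng(0)
reference = rng.normal(size=(500, 3))
recent = rng.normal(size=(500, 3))
recent[:, 1] += 0.3  # simulated effect of an attack on one attribute
print(attack_suspected(reference, recent))  # True

When many attributes are monitored, a multiple-testing correction (e.g. Bonferroni, dividing alpha by the number of attributes) would reduce false alarms.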
Year
Volume
Pages
247–251
Physical description
Bibliography: 29 items, charts, tables
Creators
author
- Silesian University of Technology, Faculty of Automatic Control, Electronics and Computer Science, Akademicka 16, 44-100 Gliwice, Poland
- QED Software sp. z o.o., Mazowiecka 11/49, 00-052 Warsaw, Poland
author
- Łukasiewicz Research Network, Institute of Innovative Technologies EMAG, ul. Leopolda 31, 40-189 Katowice, Poland
Bibliography
- 1. M. Kozielski, M. Sikora, and Ł. Wróbel, “DISESOR - decision support system for mining industry,” in 2015 Federated Conference on Computer Science and Information Systems (FedCSIS). IEEE, 2015, pp. 67–74.
- 2. M. I. Jordan and T. M. Mitchell, “Machine learning: Trends, perspectives, and prospects,” Science, vol. 349, no. 6245, pp. 255–260, 2015. http://dx.doi.org/10.1126/science.aaa8415. [Online]. Available: https://www.science.org/doi/abs/10.1126/science.aaa8415
- 3. N. Akhtar and A. Mian, “Threat of adversarial attacks on deep learning in computer vision: A survey,” IEEE Access, vol. 6, pp. 14410–14430, 2018. http://dx.doi.org/10.1109/ACCESS.2018.2807385
- 4. N. Papernot, P. McDaniel, I. Goodfellow, S. Jha, Z. B. Celik, and A. Swami, “Practical black-box attacks against machine learning,” in Proceedings of the 2017 ACM Asia Conference on Computer and Communications Security (ASIA CCS ’17), 2017, pp. 506–519. http://dx.doi.org/10.1145/3052973.3053009
- 5. N. Carlini and D. Wagner, “Towards evaluating the robustness of neural networks,” in 2017 IEEE Symposium on Security and Privacy (SP). IEEE, 2017, pp. 39–57. http://dx.doi.org/10.1109/SP.2017.49
- 6. N. Papernot, P. McDaniel, S. Jha, M. Fredrikson, Z. B. Celik, and A. Swami, “The limitations of deep learning in adversarial settings,” in 2016 IEEE European Symposium on Security and Privacy (EuroS&P). IEEE, 2016, pp. 372–387.
- 7. M. Barreno, B. Nelson, A. D. Joseph, and J. D. Tygar, “The security of machine learning,” Machine Learning, vol. 81, no. 2, pp. 121–148, 2010.
- 8. B. Biggio and F. Roli, “Wild patterns: Ten years after the rise of adversarial machine learning,” p. 2154–2156, 2018. http://dx.doi.org/10.1145/3243734.3264418. [Online]. Available: https://doi.org/10.1145/3243734.3264418
- 9. Z. Pawlak, Rough sets: Theoretical aspects of reasoning about data. Springer Science & Business Media, 1991.
- 10. A. Skowron and L. Polkowski, Rough sets in knowledge discovery 1: Basic concepts. CRC Press, 1998.
- 11. C. Szegedy, W. Zaremba, I. Sutskever, J. Bruna, D. Erhan, I. Goodfellow, and R. Fergus, “Intriguing properties of neural networks,” in International Conference on Learning Representations, 2014.
- 12. I. J. Goodfellow, J. Shlens, and C. Szegedy, “Explaining and harnessing adversarial examples,” arXiv preprint https://arxiv.org/abs/1412.6572, 2014.
- 13. A. Kurakin, I. Goodfellow, and S. Bengio, “Adversarial examples in the physical world,” arXiv preprint https://arxiv.org/abs/1607.02533, 2016.
- 14. I. Evtimov, K. Eykholt, E. Fernandes, T. Kohno, B. Li, A. Prakash, A. Rahmati, and D. Song, “Robust physical-world attacks on machine learning models,” CoRR, vol. abs/1707.08945, 2017. [Online]. Available: http://arxiv.org/abs/1707.08945
- 15. K. Ren, T. Zheng, Z. Qin, and X. Liu, “Adversarial attacks and defenses in deep learning,” Engineering, vol. 6, no. 3, pp. 346–360, 2020. http://dx.doi.org/10.1016/j.eng.2019.12.012. [Online]. Available: https://www.sciencedirect.com/science/article/pii/S209580991930503X
- 16. K. Kireev, B. Kulynych, and C. Troncoso, “Adversarial robustness for tabular data through cost and utility awareness,” in NeurIPS ML Safety Workshop, 2022. [Online]. Available: https://openreview.net/forum?id=3ieyhWF1Hk
- 17. D. Hendrycks and K. Gimpel, “Early methods for detecting adversarial images,” arXiv preprint https://arxiv.org/abs/1608.00530, 2017.
- 18. L. Li, X. Chen, Z. Bi, X. Xie, S. Deng, N. Zhang, C. Tan, M. Chen, and H. Chen, “Normal vs. adversarial: Salience-based analysis of adversarial samples for relation extraction,” in Proceedings of the 10th International Joint Conference on Knowledge Graphs, ser. IJCKG ’21. New York, NY, USA: Association for Computing Machinery, 2022. http://dx.doi.org/10.1145/3502223.3502237. ISBN 9781450395656, pp. 115–120. [Online]. Available: https://doi.org/10.1145/3502223.3502237
- 19. J. H. Metzen, T. Genewein, V. Fischer, and B. Bischoff, “Detecting adversarial perturbations with neural networks,” in 5th International Conference on Learning Representations, ICLR 2017, Toulon, France, April 24-26, 2017, Conference Track Proceedings, 2017.
- 20. K. Grosse, P. Manoharan, N. Papernot, M. Backes, and P. Mcdaniel, “On the (statistical) detection of adversarial examples,” ArXiv, vol. abs/1702.06280, 2017.
- 21. G. K. Santhanam and P. Grnarova, “Defending against adversarial attacks by leveraging an entire GAN,” arXiv preprint https://arxiv.org/abs/1805.10652, 2018.
- 22. J. Chen, M. I. Jordan, and M. J. Wainwright, “HopSkipJumpAttack: A query-efficient decision-based attack,” arXiv preprint https://arxiv.org/abs/1904.02144, 2019.
- 23. M. Hashemi and A. Fathi, “PermuteAttack: Counterfactual explanation of machine learning credit scorecards,” arXiv preprint https://arxiv.org/abs/2008.10138, 2020.
- 24. P.-Y. Chen, H. Zhang, Y. Sharma, J. Yi, and C.-J. Hsieh, “ZOO: Zeroth order optimization based black-box attacks to deep neural networks without training substitute models,” in Proceedings of the 10th ACM Workshop on Artificial Intelligence and Security, 2017, pp. 15–26.
- 25. A. Janusz, A. Zalewska, Ł. Wawrowski, P. Biczyk, J. Ludziejewski, M. Sikora, and D. Ślęzak, “BrightBox - a rough set based technology for diagnosing mistakes of machine learning models,” Applied Soft Computing, p. 110285, 2023. http://dx.doi.org/10.1016/j.asoc.2023.110285. [Online]. Available: https://www.sciencedirect.com/science/article/pii/S1568494623003034
- 26. A. Skowron and D. Ślęzak, “Rough Sets Turn 40: From Information Systems to Intelligent Systems,” in Proceedings of the 17th Conference on Computer Science and Intelligence Systems, FedCSIS 2022, Sofia, Bulgaria, September 4-7, 2022, ser. Annals of Computer Science and Information Systems, M. Ganzha, L. A. Maciaszek, M. Paprzycki, and D. Ślęzak, Eds., vol. 30, 2022. http://dx.doi.org/10.15439/2022F310 pp. 23–34. [Online]. Available: https://doi.org/10.15439/2022F310
- 27. F. J. Massey Jr, “The Kolmogorov-Smirnov test for goodness of fit,” Journal of the American Statistical Association, vol. 46, no. 253, pp. 68–78, 1951.
- 28. R. C. Blair and J. J. Higgins, “Comparison of the power of the paired samples t test to that of Wilcoxon’s signed-ranks test under various population shapes,” Psychological Bulletin, vol. 97, no. 1, p. 119, 1985.
- 29. A. Gudyś, M. Sikora, and Ł. Wróbel, “RuleKit: A comprehensive suite for rule-based learning,” Knowledge-Based Systems, vol. 194, p. 105480, 2020.
Notes
1. Main Track Short Papers
2. Record created with funding from the Polish Ministry of Education and Science (MEiN), agreement no. SONP/SP/546092/2022, under the "Społeczna odpowiedzialność nauki" (Social Responsibility of Science) programme, module: popularisation of science and promotion of sport (2024).
Document type
Bibliography
YADDA identifier
bwmeta1.element.baztech-e02544a3-8186-4684-8d77-721c437e33c2