Identifiers
Title variants
Languages of publication
Abstracts
Federated learning (FL) involves joint model training across multiple devices while preserving the privacy of their data. However, it presents the challenge of dealing with heterogeneous data located on the participating devices. This issue can be further complicated by the presence of malicious clients that aim to sabotage the training process by poisoning their local data. In this context, the problem arises of differentiating between poisoned and non-independently-and-identically-distributed (non-IID) data. To address it, a technique utilizing data-free synthetic data generation is proposed, based on a reversed concept of the adversarial attack. Adversarial inputs make it possible to improve the training process by measuring the coherence of clients and favoring trustworthy participants. Experimental results, obtained from image classification tasks on the MNIST, EMNIST, and CIFAR-10 datasets, are reported and analyzed.
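The abstract describes the mechanism only at a high level. As a rough illustration of the general idea, the sketch below shows data-free probe generation via a reversed, FGSM-style adversarial step (starting from random noise and increasing the global model's confidence instead of decreasing it), plus a KL-divergence-based coherence score used to derive aggregation weights. This is a minimal sketch under stated assumptions, not the paper's actual algorithm; all names and hyperparameters (generate_probe_inputs, coherence_scores, step_size, and so on) are hypothetical.

```python
# Hypothetical sketch of the reverse-adversarial idea (not the paper's code).
import torch
import torch.nn.functional as F

def generate_probe_inputs(global_model, n, shape, steps=10, step_size=0.05):
    """Data-free probe generation: start from random noise and take
    signed-gradient steps that make the global model MORE confident,
    i.e. a standard FGSM attack run in reverse."""
    global_model.eval()
    x = torch.rand((n, *shape), requires_grad=True)
    for _ in range(steps):
        logits = global_model(x)
        # Push each probe toward the class the model currently prefers.
        loss = F.cross_entropy(logits, logits.argmax(dim=1))
        loss.backward()
        with torch.no_grad():
            x -= step_size * x.grad.sign()  # descend on the loss
            x.clamp_(0.0, 1.0)              # stay in valid image range
        x.grad.zero_()
        global_model.zero_grad()
    return x.detach()

def coherence_scores(client_models, global_model, probes):
    """Score each client by how closely its predictions on the probes
    agree with the global model's (low KL divergence = high coherence)."""
    with torch.no_grad():
        ref = F.log_softmax(global_model(probes), dim=1)
        raw = []
        for m in client_models:
            m.eval()
            logp = F.log_softmax(m(probes), dim=1)
            kl = F.kl_div(logp, ref, reduction="batchmean", log_target=True)
            raw.append(torch.exp(-kl).item())  # map divergence to (0, 1]
    total = sum(raw)
    return [s / total for s in raw]  # normalized aggregation weights

# Toy usage (assumed 10-class, 1x28x28 inputs as in MNIST):
net = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(28 * 28, 10))
probes = generate_probe_inputs(net, n=32, shape=(1, 28, 28))
weights = coherence_scores([net, net], net, probes)  # trivially uniform here
```

Under these assumptions, the resulting weights could replace the uniform or data-size-based weights of FedAvg, so that clients whose predictions are incoherent with the rest of the federation, for example because their data were poisoned, contribute less to the aggregated model.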
Keywords
Year
Volume
Pages
1–13
Physical description
Bibliography: 43 items, figures
Contributors
author
- Faculty of Mathematics and Information Science, Warsaw University of Technology, Koszykowa 75, 00-662 Warsaw, Poland
Bibliography
- [1] H. B. McMahan, E. Moore, D. Ramage, S. Hampson, and B. A. y Arcas. “Communication-efficient learning of deep networks from decentralized data”, 2017.
- [2] X. Li, K. Huang, W. Yang, S. Wang, and Z. Zhang. “On the convergence of FedAvg on non-IID data”, 2020.
- [3] T.-M. H. Hsu, H. Qi, and M. Brown. “Measuring the effects of non-identical data distribution for federated visual classification”, 2019.
- [4] X. Ma, J. Zhu, Z. Lin, S. Chen, and Y. Qin, “A state-of-the-art survey on solving non-IID data in federated learning”, Future Generation Computer Systems, vol. 135, 2022, 244–258, 10.1016/j.future.2022.05.003.
- [5] R. Gosselin, L. Vieu, F. Loukil, and A. Benoit, “Privacy and security in federated learning: A survey”, Applied Sciences, vol. 12, no. 19, 2022.
- [6] P. Erbil and M. E. Gursoy, “Detection and mitigation of targeted data poisoning attacks in federated learning”. In: 2022 IEEE Intl Conf on Dependable, Autonomic and Secure Computing, Intl Conf on Pervasive Intelligence and Computing, Intl Conf on Cloud and Big Data Computing, Intl Conf on Cyber Science and Technology Congress (DASC/PiCom/CBDCom/CyberSciTech), 2022, 1–8, 10.1109/DASC/PiCom/CBDCom/Cy55231.2022.9927914.
- [7] A. Danilenka, “Mitigating the effects of non-iid data in federated learning with a self-adversarial balancing method”, 2023 18th Conference on Computer Science and Intelligence Systems (FedCSIS), 2023, 925–930.
- [8] Y. LeCun and C. Cortes, “MNIST handwritten digit database”, 2010.
- [9] G. Cohen, S. Afshar, J. Tapson, and A. van Schaik. “EMNIST: an extension of MNIST to handwritten letters”, 2017.
- [10] A. Krizhevsky. “Learning multiple layers of features from tiny images”, 2009.
- [11] Z. Zhang, X. Cao, J. Jia, and N. Z. Gong, “FLDetector: Defending federated learning against model poisoning attacks via detecting malicious clients”. In: Proceedings of the 28th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, New York, NY, USA, 2022, 2545–2555, 10.1145/3534678.3539231.
- [12] D. Li, W. E. Wong, W. Wang, Y. Yao, and M. Chau, “Detection and mitigation of label-flipping attacks in federated learning systems with KPCA and K-means”. In: 2021 8th International Conference on Dependable Systems and Their Applications (DSA), 2021, 551–559, 10.1109/DSA52907.2021.00081.
- [13] P. Blanchard, E. M. El Mhamdi, R. Guerraoui, and J. Stainer, “Machine learning with adversaries: Byzantine tolerant gradient descent”. In: I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett, eds., Advances in Neural Information Processing Systems, vol. 30, 2017.
- [14] D. Cao, S. Chang, Z. Lin, G. Liu, and D. Sun, “Understanding distributed poisoning attack in federated learning”. In: 2019 IEEE 25th International Conference on Parallel and Distributed Systems (ICPADS), 2019, 233–239, 10.1109/ICPADS47876.2019.00042.
- [15] X. Cao, M. Fang, J. Liu, and N. Z. Gong. “FLTrust: Byzantine-robust federated learning via trust bootstrapping”, 2022.
- [16] D. Yin, Y. Chen, K. Ramchandran, and P. Bartlett. “Byzantine-robust distributed learning: Towards optimal statistical rates”, 2021.
- [17] C. Xie, O. Koyejo, and I. Gupta. “Generalized Byzantine-tolerant SGD”, 2018.
- [18] C. Fung, C. J. M. Yoon, and I. Beschastnikh, “The limitations of federated learning in sybil settings”. In: 23rd International Symposium on Research in Attacks, Intrusions and Defenses (RAID 2020), San Sebastian, 2020, 301–316.
- [19] Y. Xie, W. Zhang, R. Pi, F. Wu, Q. Chen, X. Xie, and S. Kim. “Robust federated learning against both data heterogeneity and poisoning attack via aggregation optimization”, 2022.
- [20] S. Han, S. Park, F. Wu, S. Kim, B. Zhu, X. Xie, and M. Cha, “Towards attack-tolerant federated learning via critical parameter analysis”. In: Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 2023, 4999–5008.
- [21] S. Park, S. Han, F. Wu, S. Kim, B. Zhu, X. Xie, and M. Cha, “FedDefender: Client-side attack-tolerant federated learning”. In: Proceedings of the 29th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, New York, NY, USA, 2023, 1850–1861, 10.1145/3580305.3599346.
- [22] C. Chen, Y. Liu, X. Ma, and L. Lyu. “CalFAT: Calibrated federated adversarial training with label skewness”, 2023.
- [23] G. Zizzo, A. Rawat, M. Sinn, and B. Buesser. “FAT: Federated adversarial training”, 2020.
- [24] Z. Li, J. Shao, Y. Mao, J. H. Wang, and J. Zhang. “Federated learning with GAN-based data synthesis for non-IID clients”, 2022.
- [25] Y. Lu, P. Qian, G. Huang, and H. Wang. “Personalized federated learning on long-tailed data via adversarial feature augmentation”, 2023.
- [26] X. Li, Z. Song, and J. Yang. “Federated adversarial learning: A framework with convergence analysis”, 2022.
- [27] C. Szegedy, W. Zaremba, I. Sutskever, J. Bruna, D. Erhan, I. Goodfellow, and R. Fergus. “Intriguing properties of neural networks”, 2014.
- [28] O. Suciu, R. Marginean, Y. Kaya, H. Daumé III, and T. Dumitras, “When does machine learning FAIL? generalized transferability for evasion and poisoning attacks”. In: 27th USENIX Security Symposium (USENIX Security 18), Baltimore, MD, 2018, 1299–1316.
- [29] I. J. Goodfellow, J. Shlens, and C. Szegedy. “Explaining and harnessing adversarial examples”, 2015.
- [30] A. Kurakin, I. Goodfellow, and S. Bengio. “Adversarial examples in the physical world”, 2017.
- [31] Y. Dong, F. Liao, T. Pang, H. Su, J. Zhu, X. Hu, and J. Li. “Boosting adversarial attacks with momentum”, 2018.
- [32] G. Xia, J. Chen, C. Yu, and J. Ma, “Poisoning attacks in federated learning: A survey”, IEEE Access, vol. 11, 2023, 10708–10722, 10.1109/ACCESS.2023.3238823.
- [33] A. Shafahi, W. R. Huang, M. Najibi, O. Suciu, C. Studer, T. Dumitras, and T. Goldstein, “Poison frogs! targeted clean-label poisoning attacks on neural networks”. In: Neural Information Processing Systems, 2018.
- [34] V. Shejwalkar, A. Houmansadr, P. Kairouz, and D. Ramage, “Back to the drawing board: A critical evaluation of poisoning attacks on federated learning”, ArXiv, vol. abs/2108.10241, 2021.
- [35] H. Xiao, H. Xiao, and C. Eckert, “Adversarial label flips attack on support vector machines”. In: Proceedings of the 20th European Conference on Artificial Intelligence, NLD, 2012, 870–875.
- [36] V. Tolpegin, S. Truex, M. E. Gursoy, and L. Liu. “Data poisoning attacks against federated learning systems”, 2020.
- [37] A. Paszke, S. Gross, F. Massa, A. Lerer, J. Bradbury, G. Chanan, T. Killeen, Z. Lin, N. Gimelshein, L. Antiga, A. Desmaison, A. Kopf, E. Yang, Z. DeVito, M. Raison, A. Tejani, S. Chilamkurthy, B. Steiner, L. Fang, J. Bai, and S. Chintala. “PyTorch: An imperative style, high-performance deep learning library”. In: Advances in Neural Information Processing Systems 32, 8024–8035. Curran Associates, Inc., 2019.
- [38] S. Marcel and Y. Rodriguez, “Torchvision the machine-vision package of torch”. In: Proceedings of the 18th ACM International Conference on Multimedia, New York, NY, USA, 2010, 1485–1488, 10.1145/1873951.1874254.
- [39] Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner, “Gradient-based learning applied to document recognition”, Proceedings of the IEEE, vol. 86, no. 11, 1998, 2278–2324, 10.1109/5.726791.
- [40] M. Sandler, A. Howard, M. Zhu, A. Zhmoginov, and L.-C. Chen. “MobileNetV2: Inverted residuals and linear bottlenecks”, 2019.
- [41] J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei-Fei, “ImageNet: A large-scale hierarchical image database”. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, 2009, 248–255.
- [42] L. Lyu, H. Yu, X. Ma, C. Chen, L. Sun, J. Zhao, Q. Yang, and P. S. Yu, “Privacy and robustness in federated learning: Attacks and defenses”, IEEE Transactions on Neural Networks and Learning Systems, 2022, 1–21, 10.1109/TNNLS.2022.3216981.
- [43] F. Wilcoxon, “Individual comparisons by ranking methods”, Springer, 1992, 196–202.
Remarks
Record created using MNiSW funds under agreement no. POPUL/SP/0154/2024/02 as part of the "Społeczna odpowiedzialność nauki II" (Social Responsibility of Science II) programme, module: science popularization (2025).
Document type
Bibliography
YADDA identifier
bwmeta1.element.baztech-e1d2d1a4-f86a-41ce-8987-614be09130e1