Article title

FedAssess: analysis for efficient communication and security algorithms over various federated learning frameworks and mitigation of label-flipping attack

Content
Identifiers
Title variants
Publication languages
EN
Abstracts
EN
Federated learning (FL) is an emerging approach used widely in distributed machine learning. FL allows a large number of users to train a single machine learning model together while the training data remains on individual user devices. As a result, federated learning reduces threats to data privacy. Based on iterative model averaging, our study proposes a practical technique for the federated learning of deep networks with improved security and privacy. We also undertake a thorough empirical evaluation covering several FL frameworks and averaging algorithms. Secure multi-party computation, secure aggregation, and differential privacy are implemented to improve security and privacy in a federated learning environment. Despite these advances, privacy concerns remain in FL, as the weights or parameters of a trained model may reveal private information about the data used for training. Our work demonstrates that FL is vulnerable to the label-flipping attack, and a novel method to prevent this attack is proposed. We compare the standard federated model aggregation and optimization methods, FedAvg and FedProx, on benchmark data sets. Experiments are implemented in two different FL frameworks, Flower and PySyft, and the results are analyzed. Our experiments confirm that classification accuracy in the FL framework improves over a centralized model, and that model performance is better after adding all the security and privacy algorithms. Our work shows that deep learning models perform well in FL while remaining secure.
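The abstract centers on two mechanisms: iterative model averaging (FedAvg-style aggregation of client models) and mitigation of a label-flipping attacker. The sketch below is a minimal, self-contained illustration of those ideas only, not the paper's implementation: the model is a NumPy logistic regression rather than the deep networks trained in Flower or PySyft, the clients and data are synthetic, and coordinate-wise median aggregation is used as a generic stand-in defense in place of the paper's proposed mitigation method.

```python
# Minimal sketch (assumptions noted above): FedAvg-style averaging vs. a robust
# coordinate-wise median aggregator, with one simulated label-flipping client.
import numpy as np

rng = np.random.default_rng(0)
d, n_clients, rounds = 10, 5, 20

def local_update(w, X, y, lr=0.5, epochs=5):
    """One client's local training: a few logistic-regression gradient steps."""
    w = w.copy()
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-np.clip(X @ w, -30, 30)))  # sigmoid predictions
        w -= lr * X.T @ (p - y) / len(y)                     # cross-entropy gradient step
    return w

def fedavg(updates):
    """Plain FedAvg: average of equally sized client models."""
    return np.mean(np.stack(updates), axis=0)

def median_agg(updates):
    """Stand-in robust defense: coordinate-wise median of client models."""
    return np.median(np.stack(updates), axis=0)

# Synthetic binary task shared by all clients; client 0 flips its labels.
w_true = rng.normal(size=d)

def make_client(flip=False):
    X = rng.normal(size=(200, d))
    y = (X @ w_true > 0).astype(float)
    return X, (1.0 - y) if flip else y         # label-flipping attacker inverts labels

clients = [make_client(flip=(i == 0)) for i in range(n_clients)]
X_test = rng.normal(size=(1000, d))
y_test = (X_test @ w_true > 0).astype(float)

for agg in (fedavg, median_agg):
    w_global = np.zeros(d)
    for _ in range(rounds):                    # iterative model averaging
        updates = [local_update(w_global, X, y) for X, y in clients]
        w_global = agg(updates)
    acc = np.mean(((X_test @ w_global) > 0) == y_test)
    print(f"{agg.__name__}: test accuracy with one label-flipping client = {acc:.3f}")
```

Running the script compares the global model's test accuracy under the two aggregators; in a real FL deployment the same aggregation step would run on the server side of Flower or PySyft, with secure aggregation and differential privacy applied to the client updates before averaging.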
Year
Pages
art. no. e148944
Physical description
Bibliography: 35 items, figures, tables.
Authors
author
  • Department of Information Technology, PSG College of Technology, Coimbatore, TN 641004, India
  • Department of Information Technology, PSG College of Technology, Coimbatore, TN 641004, India
Bibliography
  • [1] K. Kuźniewski, K. Matusiewicz, and P. Sapiecha, “The high-level practical overview of open-source privacy-preserving machine learning solutions,” International Journal of Electronics and Telecommunications, pp. 741–747, 2022, doi: 10.24425/ijet.2022.143880.
  • [2] X. Zheng and Z. Cai, “Privacy-preserved data sharing towards multiple parties in industrial iots,” IEEE Journal on Selected Areas in Communications, vol. 38, no. 5, pp. 968–979, 2020, doi: 10.1109/JSAC.2020.2980802.
  • [3] Q. Yang, Y. Liu, T. Chen, and Y. Tong, “Federated machine learning: Concept and applications,” ACM Transactions on Intelligent Systems and Technology (TIST), vol. 10, no. 2, pp. 1–19, 2019, doi: 10.1145/3298981.
  • [4] M.H. Ur Rehman and M.M. Gaber, Federated learning systems: Towards next-generation AI. Springer Nature, 2021, vol. 965, doi: 10.1007/978-3-030-70604-3.
  • [5] P. Kairouz et al., “Advances and open problems in federated learning,” Foundations and Trends® in Machine Learning, vol. 14, no. 1–2, pp. 1–210, 2021, doi: 10.1561/2200000083.
  • [6] G. Bao and P. Guo, “Federated learning in cloud-edge collaborative architecture: key technologies, applications and challenges,” J. Cloud Comput., vol. 11, no. 1, p. 94, 2022, doi: 10.1186/s13677-022-00377-4.
  • [7] C. Dwork, “Differential privacy: A survey of results,” in Theory and Applications of Models of Computation: 5th International Conference, TAMC 2008, Proceedings 5 China: Springer, 2008, pp. 1–19, doi: 10.1007/978-3-540-79228-4_1.
  • [8] C. Gentry, “Fully homomorphic encryption using ideal lattices,” in Proceedings of the forty-first annual ACM symposium on Theory of computing, 2009, pp. 169–178, doi: 10.1145/1536414.1536440.
  • [9] R. Johnson and T. Zhang, “Accelerating stochastic gradient descent using predictive variance reduction”, Advances in Neural Information Processing Systems, vol. 26, 2013, [Online]. Available: https://proceedings.neurips.cc/paper_files/paper/2013/file/ac1dd209cbcc5e5d1c6e28598e8cbbe8-Paper.pdf
  • [10] L. Lyu, H. Yu, and Q. Yang, “Threats to federated learning: A survey,” arXiv preprint arXiv:2003.02133, 2020, doi: 10.48550/arXiv.2003.02133.
  • [11] N. Rodríguez-Barroso, D. Jiménez-López, M. V. Luzón, F. Herrera, and E. Martínez-Cámara, “Survey on federated learning threats: Concepts, taxonomy on attacks and defences, experimental study and challenges,” Inf. Fusion, vol. 90, pp. 148–173, 2023, doi: 10.1016/j.inffus.2022.09.011.
  • [12] P. Rieger, T.D. Nguyen, M. Miettinen, and A.-R. Sadeghi, “DeepSight: Mitigating backdoor attacks in federated learning through deep model inspection,” arXiv preprint arXiv:2201.00763, 2022, doi: 10.48550/arXiv.2201.00763.
  • [13] E. Bagdasaryan, A. Veit, Y. Hua, D. Estrin, and V. Shmatikov, “How to backdoor federated learning,” in International Conference on Artificial Intelligence and Statistics. PMLR, 2020, pp. 2938–2948. [Online]. Available: https://proceedings.mlr.press/v108/bagdasaryan20a.html
  • [14] Z. Chen, P. Tian, W. Liao, and W. Yu, “Towards multi-party targeted model poisoning attacks against federated learning systems,” High-Confidence Computing, vol. 1, no. 1, p. 100002, 2021, doi: 10.1016/j.hcc.2021.100002.
  • [15] S. Awan, B. Luo, and F. Li, “Contra: Defending against poisoning attacks in federated learning,” in Computer Security–ESORICS 2021: 26th European Symposium on Research in Computer Security, Darmstadt, Germany, October 4–8, 2021, Proceedings, Part I 26. Springer, 2021, pp. 455–475, doi: 10.1007/978-3-030-88418-5_22.
  • [16] V. Tolpegin, S. Truex, M. E. Gursoy, and L. Liu, “Data poisoning attacks against federated learning systems,” in Computer Security–ESORICS 2020: 25th European Symposium on Research in Computer Security, ESORICS 2020, Proceedings, Part I 25. UK: Springer, 2020, pp. 480–501, doi: 10.1007/978-3-030-58951-6_24.
  • [17] V. Valadi, X. Qiu, P.P.B. de Gusmão, N.D. Lane, and M. Alibeigi, “Fedval: Different good or different bad in federated learning,” arXiv preprint arXiv:2306.04040, 2023, doi: 10.48550/arXiv.2306.04040.
  • [18] R. Anusuya, D. Karthika Renuka, S. Ghanasiyaa, K. Harshini, K. Mounika, and K. Naveena, “Privacy-preserving blockchain-based EHR using zk-SNARKs,” in Computational Intelligence, Cyber Security and Computational Models. Recent Trends in Computational Models, Intelligent and Secure Systems: 5th International Conference, ICC3 2021, Revised Selected Papers. India: Springer, 2022, pp. 109–123, doi: 10.1007/978-3-031-15556-7_8.
  • [19] A. Ziller et al., “Pysyft: A library for easy federated learning,” Federated Learning Systems: Towards Next-Generation AI, pp. 111–139, 2021, doi: 10.1007/978-3-030-70604-3_5.
  • [20] D.J. Beutel et al., “Flower: A friendly federated learning research framework,” arXiv preprint arXiv:2007.14390, 2020, doi: 10.48550/arXiv.2007.14390.
  • [21] P. Blanchard, E.M. El Mhamdi, R. Guerraoui, and J. Stainer, “Machine learning with adversaries: Byzantine tolerant gradient descent,” in Advances in Neural Information Processing Systems 30 (NIPS 2017), vol. 30, 2017. [Online]. Available: https://proceedings.neurips.cc/paper_files/paper/2017/file/f4b9ec30ad9f68f89b29639786cb62ef-Paper.pdf
  • [22] C. Fung, C.J.M. Yoon, and I. Beschastnikh, “The limitations of federated learning in sybil settings,” in 23rd International Symposium on Research in Attacks, Intrusions and Defenses (RAID 2020). San Sebastian, Spain: USENIX Association, 2020, pp. 301–316. [Online]. Available: https://www.usenix.org/conference/raid2020/presentation/fung
  • [23] N.M. Jebreel, J. Domingo-Ferrer, D. Sánchez, and A. Blanco-Justicia, “Defending against the label-flipping attack in federated learning,” arXiv preprint arXiv:2207.01982, 2022, doi: 10.48550/arXiv.2207.01982.
  • [24] A.F. Siegel, “Robust regression using repeated medians,” Biometrika, vol. 69, no. 1, pp. 242–244, 1982.
  • [25] A.K. Singh, A. Blanco-Justicia, J. Domingo-Ferrer, D. Sánchez, and D. Rebollo-Monedero, “Fair detection of poisoning attacks in federated learning,” in 2020 IEEE 32nd International Conference on Tools with Artificial Intelligence (ICTAI). IEEE, 2020, pp. 224–229, doi: 10.1109/ICTAI50040.2020.00044.
  • [26] L. Zhao et al., “Shielding collaborative learning: Mitigating poisoning attacks through client-side detection,” IEEE Trans. Dependable Secur. Comput., vol. 18, no. 5, pp. 2029–2041, 2020, doi: 10.1109/TDSC.2020.2986205.
  • [27] Z. Ma, J. Ma, Y. Miao, Y. Li, and R.H. Deng, “Shieldfl: Mitigating model poisoning attacks in privacy-preserving federated learning,” IEEE Trans. Inf. Forensic Secur., vol. 17, pp. 1639–1654, 2022, doi: 10.1109/TIFS.2022.3169918.
  • [28] S. Shen, S. Tople, and P. Saxena, “Auror: Defending against poisoning attacks in collaborative deep learning systems,” in Proceedings of the 32nd Annual Conference on Computer Security Applications, 2016, pp. 508–519, doi: 10.1145/2991079.2991125.
  • [29] M. Hirt and D. Tschudi, “Efficient general-adversary multi-party computation,” in Advances in Cryptology-ASIACRYPT 2013: 19th International Conference on the Theory and Application of Cryptology and Information Security, Bengaluru, India, December 1-5, 2013, Proceedings, Part II 19. Springer, 2013, pp. 181–200, doi: 10.1007/978-3-642-42045-0_10.
  • [30] S. Wagh, D. Gupta, and N. Chandran, “Securenn: 3-party secure computation for neural network training.” Proc. Priv. Enhancing Technol., vol. 2019, no. 3, pp. 26–49, 2019, doi: 10.2478/popets-2019-0035.
  • [31] I. Damgård, M. Keller, E. Larraia, V. Pastro, P. Scholl, and N.P. Smart, “Practical covertly secure mpc for dishonest majority–or: breaking the spdz limits,” in Computer Security–ESORICS 2013: 18th European Symposium on Research in Computer Security, Egham, UK, September 9-13, 2013. Proceedings 18. Springer, 2013, pp. 1–18, doi: 10.1007/978-3-642-40203-6_1.
  • [32] K. Bonawitz et al., “Practical secure aggregation for privacy-preserving machine learning,” in Proceedings of the 2017 ACM SIGSAC Conference on Computer and Communications Security, 2017, pp. 1175–1191, doi: 10.1145/3133956.3133982.
  • [33] B. McMahan, E. Moore, D. Ramage, S. Hampson, and B.A. y Arcas, “Communication-efficient learning of deep networks from decentralized data,” in Artificial intelligence and statistics. PMLR, 2017, pp. 1273–1282. [Online]. Available: https://proceedings.mlr.press/v54/mcmahan17a.html
  • [34] T. Li, A.K. Sahu, M. Zaheer, M. Sanjabi, A. Talwalkar, and V. Smith, “Federated optimization in heterogeneous networks,” Proceedings of Machine learning and systems, vol. 2, pp. 429–450, 2020. [Online]. Available: https://proceedings.mlsys.org/paper/2020/file/38af86134b65d0f10fe33d30dd76442e-Paper.pdf
  • [35] S.K. Lo, Q. Lu, L. Zhu, H.-y. Paik, X. Xu, and C. Wang, “Architectural patterns for the design of federated learning systems,” arXiv preprint arXiv:2101.02373, 2021, doi: 10.48550/arXiv.2101.02373.
Notes
Record developed with funds from the Ministry of Science and Higher Education (MNiSW), agreement no. SONP/SP/546092/2022, under the programme "Społeczna odpowiedzialność nauki" (Social Responsibility of Science), module: Popularisation of science and promotion of sport (2024).
Document type
Bibliography
YADDA identifier
bwmeta1.element.baztech-a7370005-e872-4baa-811d-51def0921a73