Open Access Article

A Framework for Prevention of Backdoor Attacks in Federated Learning Using Differential Testing and Outlier Detection

C. B. Biragbara1, O.E. Taylor2, D. Matthias3

  1, 2, 3. Computer Science/Science, Rivers State University, Port Harcourt, Nigeria.

Section: Research Paper, Product Type: Journal Paper
Volume-12, Issue-7, Page no. 53-59, Jul-2024

CrossRef-DOI: https://doi.org/10.26438/ijcse/v12i7.5359

Published online on Jul 31, 2024

Copyright © C. B. Biragbara, O.E. Taylor, D. Matthias. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


How to Cite this Paper


IEEE Style Citation: C. B. Biragbara, O.E. Taylor, D. Matthias, “A Framework for Prevention of Backdoor Attacks in Federated Learning Using Differential Testing and Outlier Detection,” International Journal of Computer Sciences and Engineering, Vol.12, Issue.7, pp.53-59, 2024.

MLA Style Citation: C. B. Biragbara, O.E. Taylor, D. Matthias. "A Framework for Prevention of Backdoor Attacks in Federated Learning Using Differential Testing and Outlier Detection." International Journal of Computer Sciences and Engineering 12.7 (2024): 53-59.

APA Style Citation: C. B. Biragbara, O.E. Taylor, D. Matthias (2024). A Framework for Prevention of Backdoor Attacks in Federated Learning Using Differential Testing and Outlier Detection. International Journal of Computer Sciences and Engineering, 12(7), 53-59.

BibTeX Style Citation:
@article{Biragbara_2024,
author = {C. B. Biragbara and O. E. Taylor and D. Matthias},
title = {A Framework for Prevention of Backdoor Attacks in Federated Learning Using Differential Testing and Outlier Detection},
journal = {International Journal of Computer Sciences and Engineering},
issue_date = {July 2024},
volume = {12},
number = {7},
month = {7},
year = {2024},
issn = {2347-2693},
pages = {53-59},
url = {https://www.ijcseonline.org/full_paper_view.php?paper_id=5711},
doi = {10.26438/ijcse/v12i7.5359},
publisher = {IJCSE, Indore, INDIA},
}

RIS Style Citation:
TY  - JOUR
DO  - 10.26438/ijcse/v12i7.5359
UR  - https://www.ijcseonline.org/full_paper_view.php?paper_id=5711
TI  - A Framework for Prevention of Backdoor Attacks in Federated Learning Using Differential Testing and Outlier Detection
T2  - International Journal of Computer Sciences and Engineering
AU  - Biragbara, C. B.
AU  - Taylor, O. E.
AU  - Matthias, D.
PY  - 2024
DA  - 2024/07/31
PB  - IJCSE, Indore, INDIA
SP  - 53
EP  - 59
IS  - 7
VL  - 12
SN  - 2347-2693
ER  -


Abstract

Backdoor attacks pose a serious threat to the integrity and security of federated learning systems, in which multiple participants collaborate to train a shared model while preserving privacy. In this paper, we propose a preventive approach that combines differential testing and outlier detection to identify and mitigate the risks associated with backdoor attacks. Because federated learning aims for high model accuracy and performance, the proposed preventive measures help maintain the integrity and reliability of the collaborative learning process. Differential testing is used to detect deviations or inconsistencies in the distribution of training data across participants: by comparing the performance of models on different subsets of data, the presence of a backdoor attack can be identified. This differential testing framework acts as an early warning system, enabling the detection of introduced model biases or malicious attempts at data manipulation. Anomaly detection methods are also employed to find abnormalities or unusual patterns that may indicate a backdoor attack; an outlier that deviates substantially from the federated learning system's expected behavior is flagged for further investigation. This approach improves the robustness of federated learning models against malicious participants and manipulated data. Object-Oriented Analysis and Design (OOAD) techniques were used to ensure a structured and methodical design process, and the model was implemented and simulated in Python. The suggested defense strategy, which makes use of a federated CNN model, successfully reduces the likelihood of backdoor attacks in federated learning systems. The proposed model was compared against other benchmark systems and outperforms them, achieving an accuracy of 99.95% in training and 99.97% in testing. In conclusion, we detected backdoor attacks with different counts using both outlier and differential tests and prevented backdoor attacks (Differential, Gaussian_Backdoored, Gradient_Backdoored, Integral, Julia_Backdoored, Metaphor_Backdoored, Non-Backdoored, Pixel_Distort, Relu_Backdoored) using federated learning.
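
This page does not reproduce the authors' implementation, but the two screening mechanisms the abstract describes can be illustrated with a short, self-contained Python sketch. The sketch below is a toy under stated assumptions, not the paper's code: client updates are simulated NumPy vectors, a leave-one-out comparison stands in for differential testing, a median/MAD modified z-score stands in for the outlier detector, and every name, threshold, and the injected backdoor shift is hypothetical.

import numpy as np

rng = np.random.default_rng(seed=42)
N_CLIENTS, DIM = 10, 100
POISONED = {3}  # hypothetical backdoored client for the simulation

def simulate_client_update(poisoned):
    # Honest clients return small, similarly distributed weight deltas;
    # a backdoored client adds a large directed perturbation.
    delta = rng.normal(0.0, 0.01, size=DIM)
    return delta + 0.5 if poisoned else delta

def differential_scores(updates):
    # Leave-one-out differential test: compare each client's update with
    # the mean of every other client's update; a large gap flags
    # inconsistent training data or an injected model bias.
    scores = np.empty(len(updates))
    for i in range(len(updates)):
        others = np.delete(updates, i, axis=0).mean(axis=0)
        scores[i] = np.linalg.norm(updates[i] - others)
    return scores

def outlier_scores(updates):
    # Robust modified z-score: distance of each update from the
    # coordinate-wise median, scaled by the median absolute deviation,
    # so one attacker cannot mask itself by inflating the variance.
    center = np.median(updates, axis=0)
    dists = np.linalg.norm(updates - center, axis=1)
    mad = np.median(np.abs(dists - np.median(dists))) + 1e-12
    return 0.6745 * (dists - np.median(dists)) / mad

updates = np.stack([simulate_client_update(i in POISONED)
                    for i in range(N_CLIENTS)])
d = differential_scores(updates)
z = outlier_scores(updates)
flagged = set(np.where(d > 3 * np.median(d))[0]) | set(np.where(z > 3.5)[0])
print("flagged clients:", sorted(int(i) for i in flagged))  # expected: [3]

# Prevention step: aggregate only the updates that pass both screens.
clean = updates[[i for i in range(N_CLIENTS) if i not in flagged]]
new_global_delta = clean.mean(axis=0)

In a real federated CNN deployment, the simulated deltas would be flattened weight updates received from clients, and the surviving updates would feed the usual federated averaging step. The median/MAD scoring is chosen here because with a handful of clients a single large outlier can inflate a mean/standard-deviation threshold enough to hide itself, whereas median-based statistics remain stable.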

Key-Words / Index Term

Framework, Prevention, Backdoor Attacks, Federated Learning, Outlier Detection
