Ph.D. Thesis Proposal Defense Comprehensive Exam – Part Two: Mahdee Jodayree

Date & Time:
Event Contact:
Dr. Franya Frantisek (Chair)
Dr. Fei Chiang
Dr. Wenbo He (Co-Supervisor)
Dr. Ryszard Janicki (Co-Supervisor)

Preventing Data Poisoning Attacks on Federated Machine Learning by an Encrypted Verification Key



Recent studies reveal significant security problems in most federated learning models. These models rest on the false assumption that no participant is an attacker and that none would train on poisoned data. This vulnerability allows attackers to train locally on polluted data and send the resulting model updates to the edge server for aggregation, creating an opportunity for data poisoning. In such a setting, it is challenging for the edge server to thoroughly examine the data used for model training. This paper evaluates existing vulnerabilities, attacks, and defences of federated learning. It proposes a robust prevention scheme that adds an encrypted verification scheme to the federated learning model, allowing the server to eliminate infected participants and backdoor attacks in real time.
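The abstract does not describe the verification scheme's internals, so the following is only an illustrative sketch of one plausible realization: the server issues each registered participant a secret verification key, each client attaches a MAC over its serialized update, and the server discards any submission whose tag fails to verify before aggregating. All function names (`issue_key`, `sign_update`, `aggregate`) and the HMAC construction are assumptions, not the proposal's actual design.

```python
import hmac
import hashlib
import secrets

def issue_key():
    """Server generates a per-participant verification key (assumed design)."""
    return secrets.token_bytes(32)

def sign_update(key, update):
    """Client serializes its model update and attaches a MAC over it."""
    payload = ",".join(f"{w:.6f}" for w in update).encode()
    return payload, hmac.new(key, payload, hashlib.sha256).digest()

def aggregate(keys, submissions):
    """Server keeps only updates whose MAC verifies, then averages them."""
    valid = []
    for client_id, (payload, tag) in submissions.items():
        key = keys.get(client_id)
        if key and hmac.compare_digest(
            tag, hmac.new(key, payload, hashlib.sha256).digest()
        ):
            valid.append([float(x) for x in payload.decode().split(",")])
    n = len(valid)
    return [sum(col) / n for col in zip(*valid)] if n else None

# Two registered clients and one attacker without a valid key.
keys = {"a": issue_key(), "b": issue_key()}
subs = {
    "a": sign_update(keys["a"], [1.0, 2.0]),
    "b": sign_update(keys["b"], [3.0, 4.0]),
    "mallory": (b"9.0,9.0", b"\x00" * 32),  # forged tag, rejected
}
print(aggregate(keys, subs))  # averages only the two verified updates
```

Note that a MAC alone proves only that an update came from a registered participant; detecting a registered participant who trains on poisoned data would require additional checks on the updates themselves, which is the harder problem the proposal targets.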