Bridging the gap between complex scientific research and the curious minds eager to explore it.

Computer Science, Cryptography and Security

Federated Learning Security: Mitigating Poisoning Attacks and Preserving Privacy


Federated learning (FL) is a promising approach for training machine learning models on private data without that data ever being shared, thereby preserving privacy. However, FL faces a significant security threat known as distributed backdoor attacks, in which an attacker poisons the model by manipulating gradients during the training process. In this paper, we propose several defense strategies against these attacks, including PROFL (Poison-Resistant Federated Learning), ShieldFL (Shielding Federated Learning), and PEFL (Privacy-Enhanced Federated Learning).
To understand how these defenses work, imagine a group of people trying to solve a complex math problem together. In FL, each person has their own private data, and they collaborate to train a model that can accurately solve the problem. An attacker, however, might tamper with some participants' data so that the model produces incorrect answers. This is exactly how a distributed backdoor attack works: the attacker secretly modifies certain parts of the data, and thus the updates computed from it, to poison the model.
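To make the analogy concrete, here is a minimal, purely illustrative Python sketch (not taken from the paper) of one federated-averaging round in which a single malicious client scales its update to drag the shared model toward its own target. The function names, vector sizes, and scaling factor are all assumptions made for this toy example.

```python
import numpy as np

def local_update(global_model, data, lr=0.1):
    # Hypothetical one-round client update: a single gradient-style step
    # toward the mean of the client's private data (toy model, not a real network).
    grad = np.mean(data, axis=0) - global_model
    return global_model + lr * grad

def poisoned_update(global_model, target_model, scale=10.0):
    # A poisoning client submits an update pushed toward its own target and
    # scaled up so it dominates the average (a simple model-replacement trick).
    return global_model + scale * (target_model - global_model)

def fed_avg(updates):
    # The server only sees and averages the updates, never the private data.
    return np.mean(updates, axis=0)

# Toy round: 9 honest clients and 1 attacker.
rng = np.random.default_rng(0)
global_model = np.zeros(5)
attacker_target = np.full(5, 3.0)
honest = [local_update(global_model, rng.normal(1.0, 0.1, size=(20, 5)))
          for _ in range(9)]
malicious = [poisoned_update(global_model, attacker_target)]
print(fed_avg(honest + malicious))  # pulled noticeably toward the attacker's target
```

Even with nine honest clients, the single scaled update moves the averaged model close to the attacker's target, which is why plain averaging alone is not enough.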
Our proposed defenses use different techniques to detect and mitigate these attacks. PROFL, for example, adds noise to the gradients to make it difficult for an attacker to manipulate them. ShieldFL places a protective "shield" over the gradients so that the attacker cannot access them. PEFL enhances privacy by applying multiple layers of encryption to the data before it is sent for training.
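The paper does not include code, but the flavor of these defenses can be illustrated with a small, hypothetical sketch: clipping and noising each update (in the spirit of PROFL's noise injection) and then aggregating with a coordinate-wise median, a standard robust statistic that resists a few outlier updates. None of this reproduces the actual PROFL, ShieldFL, or PEFL mechanisms; the functions and parameters below are placeholders.

```python
import numpy as np

def add_dp_noise(update, clip=1.0, sigma=0.5, rng=None):
    # Illustrative noise-based defense: bound each update's norm, then add
    # Gaussian noise so individual manipulations are harder to exploit.
    # The clip and sigma values are placeholders, not the paper's settings.
    rng = rng or np.random.default_rng()
    norm = np.linalg.norm(update)
    clipped = update * min(1.0, clip / (norm + 1e-12))
    return clipped + rng.normal(0.0, sigma, size=update.shape)

def robust_aggregate(updates):
    # Illustrative robust aggregation: the coordinate-wise median is far
    # less sensitive to a few outlier (poisoned) updates than the mean.
    return np.median(np.stack(updates), axis=0)

rng = np.random.default_rng(1)
honest = [rng.normal(1.0, 0.1, 5) for _ in range(9)]
poisoned = [np.full(5, 30.0)]                 # one wildly scaled malicious update
noisy = [add_dp_noise(u, rng=rng) for u in honest + poisoned]
print(robust_aggregate(noisy))                # stays close to the honest updates
```

The design idea is that clipping and noise limit how much any single update can say about (or do to) the model, while the robust aggregator keeps a handful of poisoned updates from steering the result.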
We evaluated these defenses in experiments that simulated varying proportions of malicious users. Our results show that PROFL, ShieldFL, and PEFL can significantly improve the model's accuracy while resisting distributed backdoor attacks; PEFL in particular achieved an accuracy improvement of up to 40% compared to other defenses.
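As a rough illustration of how such an evaluation can be set up (this toy loop uses none of the paper's datasets, models, or defenses, and the numbers it prints are not the paper's results), one can vary the fraction of malicious clients and compare plain averaging against a robust aggregator:

```python
import numpy as np

# Illustrative experiment: sweep the fraction of malicious clients and
# measure how far each aggregation rule lands from the "true" model.
rng = np.random.default_rng(2)
true_model = np.ones(5)
n_clients = 20
for frac in (0.1, 0.2, 0.3, 0.4):
    n_bad = int(frac * n_clients)
    honest = [true_model + rng.normal(0.0, 0.05, 5) for _ in range(n_clients - n_bad)]
    malicious = [np.full(5, 30.0) for _ in range(n_bad)]
    updates = honest + malicious
    err_mean = np.linalg.norm(np.mean(updates, axis=0) - true_model)
    err_median = np.linalg.norm(np.median(np.stack(updates), axis=0) - true_model)
    print(f"malicious={frac:.0%}  mean-error={err_mean:.2f}  median-error={err_median:.2f}")
```

In this toy setting the plain mean degrades quickly as the malicious fraction grows, while the robust aggregator stays close to the honest model, mirroring the kind of comparison the paper's experiments make at full scale.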
In summary, protecting FL against distributed backdoor attacks is crucial for ensuring the security and reliability of machine learning models in various applications. Our proposed defenses demonstrate promising results in resisting these attacks while maintaining model accuracy. By using a combination of privacy-preserving techniques and defense strategies, we can create more secure and reliable FL systems that are resistant to backdoor poisoning attacks.