In this paper, the authors propose a novel defense mechanism called FreqFed to protect federated learning (FL) against poisoning attacks, which are designed to manipulate the model training process by malicious participants. The attackers may intentionally introduce backdoors into the model to compromise its performance or steal sensitive information.
To tackle this problem, FreqFed leverages the frequency domain to identify and eliminate poisoned models. The proposed method first constructs a dictionary of the 50k most frequent words in the dataset and uses it to predict the next word in a sequence. Then, it employs a neural network architecture with two long short-term memory (LSTM) layers and a linear output layer to learn the patterns in the data.
The backdoor for this dataset is designed to insert advertisement by making the model predict a particular word after a trigger sentence. The authors evaluate FreqFed’s performance on the Reddit dataset, which includes blog posts from November 2017. They demonstrate that FreqFed effectively mitigates two distinct attacks while preserving the model accuracy.
To understand how FreqFed works, imagine you have a group of people working together to complete a project. Just like in FL, each person has their own set of data they contribute to the overall model. However, some individuals may try to sabotage the project by secretly adding incorrect information to their part of the data.
FreqFed is like a quality control inspector who checks each person’s work for accuracy. It identifies any inconsistencies or unusual patterns in the data and flags them for further investigation. By catching these errors early on, FreqFed can prevent poisonous models from being trained, ensuring the overall project remains robust and accurate.
In summary, FreqFed is a powerful defense mechanism that protects FL against poisoning attacks by identifying and eliminating backdoored models in the frequency domain. By detecting these subtle inconsistencies, FreqFed helps maintain the accuracy and reliability of the trained model, ensuring a robust federated learning experience.
Computer Science, Cryptography and Security