Federated Learning (FL) is a distributed machine learning framework in which multiple devices collaboratively train a shared model without exchanging their raw data. Its primary goals are to preserve data privacy and to keep communication costs manageable. Several techniques, such as model pruning, client selection, and quantization, help reduce these costs and can also be applied in decentralized scenarios. However, visualizing the progress of FL in decentralized networks is challenging because of the complex communication patterns among many devices.
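To make the training loop concrete, here is a minimal sketch of one FedAvg-style round with random client selection. The `fedavg_round` helper, the linear least-squares local step, and the fixed learning rate are all hypothetical simplifications for illustration, not taken from any specific FL library.

```python
import numpy as np

def fedavg_round(global_w, client_data, num_selected, rng, lr=0.1):
    """One FedAvg round with random client selection.

    client_data: list of (X, y) arrays, one pair per device.
    Only num_selected devices participate each round, which
    bounds the per-round communication cost.
    """
    selected = rng.choice(len(client_data), size=num_selected, replace=False)
    local_models = []
    for cid in selected:
        X, y = client_data[cid]
        # Hypothetical local update: one gradient step on a linear
        # least-squares objective, standing in for local SGD epochs.
        grad = X.T @ (X @ global_w - y) / len(y)
        local_models.append(global_w - lr * grad)
    # The server averages the selected clients' models (FedAvg).
    return np.mean(local_models, axis=0)

# Usage: rng = np.random.default_rng(0); call fedavg_round repeatedly,
# feeding each round's output back in as the next round's global_w.
```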
To address this challenge, Uddin et al. (2019) proposed visualizing the progress of FL by plotting the mutual information between the ground-truth labels and the output logits produced by the global model in each communication round. This technique captures the relationship between the ground truth and the global model at the central server, but it cannot express relationships among more than two devices in decentralized scenarios.
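The text does not specify the exact estimator Uddin et al. use, but a simple discrete version of the idea can be computed from a joint histogram of ground-truth labels and the global model's predicted classes (logits reduced via argmax). The `mutual_information` helper below is an illustrative sketch under those assumptions.

```python
import numpy as np

def mutual_information(labels, preds, num_classes):
    """Discrete estimate of I(Y; Y_hat) from a joint label histogram."""
    joint = np.zeros((num_classes, num_classes))
    for y, p in zip(labels, preds):
        joint[y, p] += 1
    joint /= joint.sum()
    p_y = joint.sum(axis=1, keepdims=True)    # marginal of ground truth
    p_hat = joint.sum(axis=0, keepdims=True)  # marginal of predictions
    nz = joint > 0                            # skip zero cells to avoid log(0)
    return float(np.sum(joint[nz] * np.log(joint[nz] / (p_y @ p_hat)[nz])))

# Per communication round: preds = logits.argmax(axis=1), then record
# mutual_information(labels, preds, num_classes); plotting the recorded
# values over rounds yields the progress curve described above.
```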
In this article, we focus on reducing the number of distilled values transmitted in each communication round to improve the efficiency of FL. We also explore complementary techniques, such as dimensionality reduction and consensus algorithms, to further cut communication costs. With these techniques, FL becomes practical for a wider range of applications, including IoT sensor devices, wearable sensors, smartphones, smart homes, and connected vehicles.
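The article does not define "distilled values" precisely, but one plausible reading, sketched below, is a distillation-based FL setup in which each client summarizes its knowledge as per-class mean logit vectors and optionally truncates each vector to its top-k entries before transmission. The `distill_payload` helper and the `top_k` truncation scheme are hypothetical illustrations of the idea, not the authors' method.

```python
import numpy as np

def distill_payload(logits, labels, num_classes, top_k=None):
    """Compress a client's knowledge into per-class mean logit vectors.

    Transmitting num_classes vectors instead of full model weights is
    the core saving; top_k (if given) zeroes all but the k largest
    entries per vector, so the zeroed entries need not be sent at all.
    """
    dim = logits.shape[1]
    payload = np.zeros((num_classes, dim))
    for c in range(num_classes):
        mask = labels == c
        if mask.any():
            payload[c] = logits[mask].mean(axis=0)
        if top_k is not None:
            # Indices of everything except the top_k largest entries.
            drop = np.argsort(payload[c])[:-top_k]
            payload[c, drop] = 0.0
    return payload
```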
In summary, FL is a promising framework for collaborative learning that preserves data privacy while limiting communication costs. Visualizing its progress in decentralized networks remains difficult because of the complex communication patterns among devices, but techniques such as distillation, dimensionality reduction, and consensus algorithms can improve FL's efficiency and extend it to a broader range of applications.