Academy & Industry Research Collaboration Center (AIRCC)

Volume 11, Number 18, November 2021

Federated Learning with Random Communication and Dynamic Aggregation

  Authors

Ruolin Huang, Ting Lu, Yiyang Luo, Guohua Liu and Shan Chang, Donghua University, China

  Abstract

Federated Learning (FL) is a setting that allows clients to train a joint global model collaboratively while keeping data local. Because FL offers data confidentiality and distributed computation, interest in this area has increased. In this paper, we designed a new FL algorithm named FedRAD, for which we propose random communication and dynamic aggregation methods. The random communication method lets the FL system combine a fixed communication interval with constrained variable intervals within a single task. The dynamic aggregation method reforms the aggregation weights so that they update automatically. Both methods aim to improve model performance. We evaluated the two proposed methods separately, and compared FedRAD with three algorithms across three hyperparameters. Results on CIFAR-10 demonstrate that each method performs well, and that FedRAD achieves higher classification accuracy than state-of-the-art FL algorithms.
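The abstract does not give the interval schedule or the weighting formula, so the following minimal Python sketch only illustrates one plausible reading of the two ideas. The names (next_interval, dynamic_weights), the probability of choosing the fixed interval, and the size-over-loss weighting are illustrative assumptions, not the paper's actual method.

import random

# Hypothetical sketch of the two ideas named in the abstract; the interval
# strategy and the weighting formula are assumptions, not the paper's method.

FIXED_INTERVAL = 5          # fixed number of local epochs between communications
INTERVAL_BOUNDS = (1, 10)   # constraint on the variable intervals

def next_interval(p_fixed=0.5):
    """Random communication: with probability p_fixed use the fixed
    interval, otherwise draw a constrained variable interval."""
    if random.random() < p_fixed:
        return FIXED_INTERVAL
    return random.randint(*INTERVAL_BOUNDS)

def dynamic_weights(client_sizes, client_losses):
    """Dynamic aggregation: recompute weights each round rather than using
    the static data-size weights of FedAvg. Here (hypothetically) weights
    combine data size with inverse training loss, then normalize."""
    raw = [n / (loss + 1e-8) for n, loss in zip(client_sizes, client_losses)]
    total = sum(raw)
    return [r / total for r in raw]

def aggregate(client_models, weights):
    """Weighted average of client parameter vectors (plain lists here)."""
    agg = [0.0] * len(client_models[0])
    for model, w in zip(client_models, weights):
        for i, param in enumerate(model):
            agg[i] += w * param
    return agg

if __name__ == "__main__":
    random.seed(0)
    global_model = [0.0, 0.0]             # toy 2-parameter model
    client_sizes = [500, 1500, 1000]      # toy local dataset sizes
    for rnd in range(3):
        interval = next_interval()        # local epochs before the next sync
        # Stand-in for local training: clients perturb the global model.
        client_models = [[p + random.gauss(0, 0.1) for p in global_model]
                         for _ in client_sizes]
        client_losses = [random.uniform(0.5, 2.0) for _ in client_sizes]
        w = dynamic_weights(client_sizes, client_losses)
        global_model = aggregate(client_models, w)
        print(f"round {rnd}: interval={interval}, "
              f"weights={[round(x, 3) for x in w]}")

The sketch keeps the two mechanisms separable on purpose: the interval draw governs only how often clients synchronize, while the per-round weight recomputation replaces a fixed aggregation rule, matching the abstract's claim that each method can be evaluated on its own.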

  Keywords

Federated Learning, Random Communication, Dynamic Aggregation, Self-learning, Distributed Computing.