Institutional Digital Repository
Shreenivas Deshpande Library, IIT (BHU), Varanasi

Fed-NL: A Federated Learning Approach to Suppress Noise in Participant Datasets to Reduce Communication Rounds for Convergence



Abstract

Federated learning enables multiple participants to collaboratively train machine learning models without sharing their private and limited data, thereby preserving privacy. When the datasets used in federated learning contain noisy labels, performance degrades and more communication rounds are needed to achieve convergence, which in turn requires more time and energy to train the model. This paper proposes a federated learning approach to suppress the effects of noisy labels that are unequally distributed across participants' datasets. The approach first estimates the noise ratio of each participant's dataset and normalizes it using the server dataset. Next, the approach considers the influence of each participant and calculates an optimal weighted contribution for each one. The approach also accounts for bias in the server dataset and minimizes its impact on the participants. Further, the paper provides an expression to estimate the number of communication rounds required for convergence. Results demonstrate the superiority of the proposed approach over baselines in terms of communication rounds and performance. © 2002-2012 IEEE.
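The aggregation step described in the abstract (estimate each participant's noise ratio, normalize it against the server dataset, then weight contributions accordingly) can be sketched as follows. This is a minimal illustration, not the paper's actual algorithm: the function name, the subtraction-based normalization, and the linear weighting scheme are all assumptions for demonstration purposes.

```python
import numpy as np

def noise_weighted_aggregate(client_updates, noise_ratios, server_noise_ratio=0.0):
    """Aggregate flattened client model updates, down-weighting noisier participants.

    client_updates     : list of 1-D numpy arrays (one per participant)
    noise_ratios       : estimated fraction of noisy labels per participant
    server_noise_ratio : noise estimated on the server's own dataset, used here
                         to normalize the client estimates (hypothetical scheme;
                         the paper's exact normalization is not given in the abstract)
    """
    # Normalize each client's noise estimate against the server dataset.
    adjusted = np.clip(np.asarray(noise_ratios, dtype=float) - server_noise_ratio,
                       0.0, 0.99)
    # Cleaner datasets receive larger weights; weights are normalized to sum to 1.
    weights = 1.0 - adjusted
    weights /= weights.sum()
    return sum(w * u for w, u in zip(weights, client_updates))
```

With three participants whose noise ratios are 0.0, 0.5, and 0.5, the first (clean) participant receives half of the total aggregation weight, so its update dominates the global model step.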
