Recent research has addressed the issue of label noise in the FL setting [6, 8, 44, 46, 47, 49, 51]. Some approaches (e.g., [8, 47]) focus on filtering noisy samples. For example, Yang et al. [47] utilize communication of class-wise data centroids among clients to construct decision boundaries across classes, whereas Duan et al. [8] rely on the communication of data features to the server to identify noisy instances. In addition to data filtering, label correction techniques have been investigated in FL [44, 49, 51]. CLC [49] applies consensus-based label correction, in which clients cooperate to correct labels through a shared consensus mechanism. FedCorr [44] employs a multi-stage scheme, where clean samples are first detected using a Gaussian mixture model (GMM) fitted to per-sample loss scores and then used to train a model that provides pseudo-labels for the noisy instances.
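To illustrate the loss-based sample separation underlying such multi-stage schemes, the sketch below fits a two-component GMM to per-sample loss values and treats samples assigned to the low-loss component as clean. This is a minimal illustration rather than FedCorr's exact procedure; the function name, the threshold, and the use of scikit-learn are our own assumptions.

```python
# Minimal sketch (not FedCorr's exact procedure): fit a two-component GMM to
# per-sample losses and treat the low-loss component as the "clean" set.
import numpy as np
from sklearn.mixture import GaussianMixture


def split_clean_noisy(per_sample_losses: np.ndarray, threshold: float = 0.5) -> np.ndarray:
    """Return a boolean mask marking samples deemed clean (hypothetical helper)."""
    losses = per_sample_losses.reshape(-1, 1)
    gmm = GaussianMixture(n_components=2, random_state=0).fit(losses)

    # The mixture component with the smaller mean captures the low-loss (clean) mode.
    clean_component = int(np.argmin(gmm.means_.ravel()))

    # Posterior probability that each sample belongs to the clean component.
    p_clean = gmm.predict_proba(losses)[:, clean_component]
    return p_clean > threshold
```

Samples flagged as clean by such a criterion can then be used to train a model that pseudo-labels the remaining, likely noisy, instances.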
Furthermore, Zhang et al. [51] proposed using meta-learning to jointly learn the underlying recognition task and the noise distribution matrix, which maps noisily labeled instances to their correct counterparts during training. Recently, some works [6, 9, 46] explored using a small clean dataset to quantify the credibility of on-device data and adjust the weighted aggregation process accordingly.
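As a rough illustration of how such credibility scores could enter the aggregation step, the sketch below implements a generic credibility-weighted variant of federated averaging; it is not the exact scheme of [6, 9, 46], and the names and the NumPy-based representation of model parameters are assumptions.

```python
# Generic sketch of credibility-weighted aggregation (hypothetical; not the
# exact scheme of the cited works): each client's model parameters contribute
# to the global model in proportion to its credibility score.
from typing import List
import numpy as np


def credibility_weighted_average(client_params: List[List[np.ndarray]],
                                 credibilities: List[float]) -> List[np.ndarray]:
    """Aggregate per-layer parameters, weighting each client by its credibility."""
    weights = np.asarray(credibilities, dtype=np.float64)
    weights /= weights.sum()  # normalize so the weights sum to one

    aggregated = []
    for layer in range(len(client_params[0])):
        stacked = np.stack([params[layer] for params in client_params], axis=0)
        aggregated.append(np.average(stacked, axis=0, weights=weights))
    return aggregated
```

Here, the credibility scores would come from evaluating each client's data against the small clean dataset, as described above.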
However, a common limitation of these approaches is the significant computational overhead they often require to perform label correction. Alternatively, some approaches (e.g., [8, 47]) necessitate the communication of user-related information and assume that clients share the same noise ratio for effective data filtering. Moreover, to handle label noise efficiently with a low computational footprint on clients, some approaches (e.g., [6, 8, 9, 46]) assume the availability of additional clean data. However, in many practical FL scenarios, depending on additional clean data (e.g., [6, 9, 46]), implementing complex learning schemes (e.g., [44, 49, 51]), or exchanging client-sensitive data (e.g., [8, 47]) can present significant challenges to deploying such approaches. To address the aforementioned problems, we propose FedLN, a framework that provides simple yet effective approaches to accurately estimate label noise on a per-client basis and offers robust learning schemes for training better-generalizing federated models in the presence of label noise, without relying on additional clean data, complex learning schemes, or the communication of client-sensitive data.