Auto-weighted Robust Federated Learning with Corrupted Data Sources

  • Shenghui Li, Edith Ngai, Fanghua Ye, Thiemo Voigt
  • Published 14 January 2021
  • Computer Science
  • ACM Transactions on Intelligent Systems and Technology (TIST)
Federated learning provides a communication-efficient and privacy-preserving training process by enabling statistical models to be learned across massive numbers of participants without accessing their local data. Standard federated learning techniques that naively minimize an average loss function are vulnerable to data corruption from outliers, systematic mislabeling, or even adversaries. In this paper, we address this challenge by proposing Auto-weighted Robust Federated Learning (ARFL), a novel approach… 
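The truncated abstract describes re-weighting clients so that corrupted sources contribute less to the global objective than a naive average would allow. As a minimal sketch of that idea (the names `auto_weight` and `aggregate` and the softmax-over-negative-losses rule are illustrative assumptions, not the actual ARFL weighting scheme):

```python
import numpy as np

def auto_weight(client_losses, temperature=1.0):
    """Softmax over negative client losses: high-loss (likely
    corrupted) clients receive small aggregation weights.
    NOTE: an illustrative rule, not the actual ARFL weighting."""
    logits = -np.asarray(client_losses, dtype=float) / temperature
    logits -= logits.max()                  # numerical stability
    w = np.exp(logits)
    return w / w.sum()

def aggregate(client_models, weights):
    """Weighted average of client parameter vectors (FedAvg-style)."""
    return np.average(np.asarray(client_models, dtype=float),
                      axis=0, weights=weights)

# Third client reports a much higher loss (likely corrupted data),
# so its model barely influences the aggregate.
w = auto_weight([0.2, 0.3, 5.0])
global_model = aggregate([[1.0], [1.2], [9.0]], w)
```

Contrast with the naive average the abstract warns about: an unweighted mean of `[1.0, 1.2, 9.0]` is `3.73`, dragged far toward the corrupted client, while the loss-weighted aggregate stays near the benign models.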


Performance Weighting for Robust Federated Learning Against Corrupted Sources
This work constructs a robust weight aggregation scheme based on the geometric mean, demonstrates its effectiveness under random label shuffling and targeted label flipping attacks, and proposes a class of task-oriented, performance-based methods computed over a distributed validation dataset with the goal of detecting and mitigating corrupted clients.
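The geometric mean makes the weighting sensitive to failure on any single class, which is why the summary pairs it with label-flipping attacks. A hedged sketch (the function name and the normalization are assumptions; the paper's exact scheme may differ):

```python
import numpy as np

def geomean_weights(per_class_accuracy):
    """Client weights from the geometric mean of per-class validation
    accuracies: near-zero accuracy on even one class (e.g. after a
    targeted label flip) collapses that client's weight, unlike an
    arithmetic mean. Illustrative, not the paper's exact rule."""
    acc = np.asarray(per_class_accuracy, dtype=float)  # (clients, classes)
    gm = np.exp(np.log(np.clip(acc, 1e-12, None)).mean(axis=1))
    return gm / gm.sum()

# Client 1 flips the labels of one class, tanking its accuracy there;
# the geometric mean drives its aggregation weight toward zero.
w = geomean_weights([[0.9, 0.9, 0.9],
                     [0.9, 0.9, 0.0]])
```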
Poster: A Good Representation Helps the Robustness of Federated Learning against Backdoor Attack
This work proposes Representation-Guided FedAvg (RG-FedAvg), a novel framework that aims to elevate the robustness of the conventional federated learning approach, and designs each client to perform sample-wise sampling to weed out samples suspected of manipulation.
Robust federated learning based on metrics learning and unsupervised clustering for malicious data detection
This work proposes a novel robust federated learning method that utilizes Metrics Learning to encode the local data and leverages the unsupervised clustering method K-means to preclude malicious data during local training.
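The summary above pairs a learned embedding with K-means to separate benign from malicious samples before local training. A simplified stand-in (plain two-cluster k-means on raw embeddings with a keep-the-larger-cluster heuristic; the metric-learning encoder and this heuristic are assumptions, not the paper's pipeline):

```python
import numpy as np

def filter_suspicious(embeddings, iters=20):
    """Two-cluster k-means over sample embeddings; keep the larger
    cluster and drop the smaller one as likely-malicious.
    Illustrative stand-in for the metric-learning + K-means method."""
    X = np.asarray(embeddings, dtype=float)
    s = X.sum(axis=1)
    # Deterministic seeding: the two points with extreme coordinate sums.
    centers = np.stack([X[s.argmin()], X[s.argmax()]])
    for _ in range(iters):
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for k in range(2):
            if np.any(labels == k):
                centers[k] = X[labels == k].mean(axis=0)
    keep = labels == np.bincount(labels, minlength=2).argmax()
    return X[keep], keep

rng = np.random.default_rng(1)
clean = rng.normal(0.0, 0.1, size=(8, 2))    # benign samples
poison = rng.normal(5.0, 0.1, size=(2, 2))   # manipulated samples
X = np.vstack([clean, poison])
kept, mask = filter_suspicious(X)
```

The majority-cluster heuristic only works when malicious samples are the minority; with a poisoning rate above 50% this filter would discard the benign data instead.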


Dynamic Federated Learning Model for Identifying Adversarial Clients
A dynamic federated learning model is proposed that dynamically discards adversarial clients, which prevents corruption of the global learning model.
Robust Federated Training via Collaborative Machine Teaching using Trusted Instances
This work proposes a collaborative and privacy-preserving machine teaching paradigm with multiple distributed teachers, to improve robustness of the federated training process against local data corruption and is a step toward trustworthy machine learning.
Robust Federated Learning via Collaborative Machine Teaching
This study uses a few trusted instances provided by teachers as benign examples in the teaching process, proposing a collaborative and privacy-preserving machine teaching method that directly produces a robust prediction model even under pervasive, systematic data corruption.
Agnostic Federated Learning
This work proposes a new framework of agnostic federated learning, where the centralized model is optimized for any target distribution formed by a mixture of the client distributions, and shows that this framework naturally yields a notion of fairness.
On the Byzantine Robustness of Clustered Federated Learning
This work investigates the application of CFL to Byzantine settings, where a subset of clients behaves unpredictably or tries to disturb the joint training effort in a directed or undirected way, and demonstrates that CFL (without modifications) is able to reliably detect Byzantine clients and remove them from training.
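CFL's detection is based on clustering clients by the similarity of their parameter updates. A much-simplified sketch of that idea (flagging clients whose updates point away from the majority direction; the function name, threshold, and mean-similarity criterion are illustrative assumptions rather than CFL's actual clustering rule):

```python
import numpy as np

def flag_byzantine(updates, threshold=0.0):
    """Flag clients whose parameter updates disagree with the
    majority: mean pairwise cosine similarity below `threshold`.
    A simplified stand-in for cosine-similarity-based clustering."""
    U = np.asarray(updates, dtype=float)
    norms = np.linalg.norm(U, axis=1, keepdims=True)
    V = U / np.clip(norms, 1e-12, None)     # unit-normalize updates
    sim = V @ V.T                           # pairwise cosine similarity
    np.fill_diagonal(sim, 0.0)              # ignore self-similarity
    mean_sim = sim.sum(axis=1) / (len(U) - 1)
    return mean_sim < threshold

# Three honest clients push in roughly the same direction; the fourth
# pushes in the opposite direction (e.g. a sign-flipping attacker).
flags = flag_byzantine([[1.0, 0.0], [0.9, 0.1], [1.1, -0.1], [-1.0, 0.0]])
```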
Federated Adversarial Domain Adaptation
This work presents a principled approach to the problem of federated domain adaptation, which aims to align the representations learned among the different nodes with the data distribution of the target node.
Robust Aggregation for Federated Learning
The experiments show that RFA is competitive with the classical aggregation when the level of corruption is low, while demonstrating greater robustness under high corruption, and establishes the convergence of the robust federated learning algorithm for the stochastic learning of additive models with least squares.
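RFA's robust aggregate is the geometric median of the client updates, typically computed with smoothed Weiszfeld iterations. A minimal sketch of the plain Weiszfeld scheme (the smoothing and weighting details of the actual RFA algorithm are omitted here):

```python
import numpy as np

def geometric_median(points, iters=100, eps=1e-8):
    """Weiszfeld iterations for the geometric median: the point
    minimizing the sum of Euclidean distances to all inputs, a
    robust substitute for the coordinate-wise mean."""
    X = np.asarray(points, dtype=float)
    z = X.mean(axis=0)                       # start from the mean
    for _ in range(iters):
        d = np.linalg.norm(X - z, axis=1)
        w = 1.0 / np.clip(d, eps, None)      # inverse-distance weights
        z_new = (w[:, None] * X).sum(axis=0) / w.sum()
        if np.linalg.norm(z_new - z) < eps:
            break
        z = z_new
    return z

# One wildly corrupted update barely moves the geometric median,
# whereas the mean of these four points would be near (25, 25).
gm = geometric_median([[0.0, 0.0], [0.1, 0.0], [0.0, 0.1], [100.0, 100.0]])
```

This matches the summary's finding: with few corrupted clients the median stays close to the mean, so little is lost at low corruption levels.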
Learning to Detect Malicious Clients for Robust Federated Learning
This work proposes a new framework for robust federated learning where the central server learns to detect and remove the malicious model updates using a powerful detection model, leading to targeted defense.
Federated Learning: Strategies for Improving Communication Efficiency
Two ways to reduce the uplink communication costs are proposed: structured updates, where the user directly learns an update from a restricted space parametrized using a smaller number of variables, e.g. either low-rank or a random mask; and sketched updates, which learn a full model update and then compress it using a combination of quantization, random rotations, and subsampling.
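A toy version of the sketched-update pipeline summarized above, combining subsampling with a crude sign-plus-shared-scale quantization (the function names, the specific quantizer, and the 1-bit scheme are illustrative assumptions; the paper evaluates several combinations of quantization, random rotations, and subsampling):

```python
import numpy as np

def sketch_update(update, keep_frac=0.1, seed=0):
    """Client side: subsample a fraction of coordinates, then
    quantize the kept values to their sign times a shared scale.
    A toy sketch, not the paper's exact compression scheme."""
    u = np.asarray(update, dtype=float)
    rng = np.random.default_rng(seed)
    idx = rng.choice(u.size, size=max(1, int(keep_frac * u.size)),
                     replace=False)
    vals = u[idx]
    scale = np.abs(vals).mean()              # one shared magnitude
    return idx, np.sign(vals) * scale        # indices + quantized values

def unsketch(idx, vals, size):
    """Server side: scatter the sparse quantized values back into a
    dense update; unsent coordinates are treated as zero."""
    out = np.zeros(size)
    out[idx] = vals
    return out

# 1000-dimensional update compressed to 5% of its coordinates.
idx, vals = sketch_update(np.linspace(-1, 1, 1000), keep_frac=0.05)
recon = unsketch(idx, vals, 1000)
```

The uplink payload here is 50 indices plus 50 signs and one scale instead of 1000 floats, which is the kind of uplink saving the summary describes.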
FedBoost: A Communication-Efficient Algorithm for Federated Learning
This work provides communication-efficient ensemble algorithms for federated learning, where the per-round communication cost is independent of the size of the ensemble, and proves the optimality of ensemble methods for density estimation under both standard empirical risk minimization and agnostic risk minimization.