On the Initial Behavior Monitoring Issues in Federated Learning

@article{Mallah2021OnTI,
  title={On the Initial Behavior Monitoring Issues in Federated Learning},
  author={Ranwa Al Mallah and Godwin Badu-Marfo and Bilal Farooq},
  journal={IEEE Access},
  year={2021},
  volume={PP},
  pages={1-1}
}
In Federated Learning (FL), a group of workers participates in building a global model under the coordination of one node, the chief. Regarding the cybersecurity of FL, some attacks aim at injecting fabricated local model updates into the system. Some defenses are based on malicious worker detection and behavioral pattern analysis. In this context, without timely and dynamic monitoring methods, the chief cannot detect and remove malicious or unreliable workers from the system. Our work…
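To make the setting concrete, here is a minimal sketch of one chief-side aggregation round with a simple screen on incoming worker updates. The function name, the z-score test, and the threshold are illustrative assumptions, not the monitoring method proposed in the paper.

```python
import numpy as np

def aggregate_with_monitoring(global_w, local_ws, z_thresh=3.0):
    """One chief-side FedAvg round with a simple screen on incoming updates.

    global_w : np.ndarray, current global model weights
    local_ws : list of np.ndarray, local models returned by the workers
    z_thresh : updates whose norm deviates from the round's mean by more
               than z_thresh standard deviations are excluded this round
    """
    deltas = [w - global_w for w in local_ws]
    norms = np.array([np.linalg.norm(d) for d in deltas])
    mu, sigma = norms.mean(), norms.std() + 1e-12
    kept = [d for d, n in zip(deltas, norms) if abs(n - mu) / sigma <= z_thresh]
    if not kept:                    # nothing trusted this round: keep the old model
        return global_w
    return global_w + np.mean(kept, axis=0)
```

A per-round screen like this is static; the point of behavior monitoring is to accumulate evidence across rounds, as in the sketches further below.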

References

Showing 1-10 of 17 references

Untargeted Poisoning Attack Detection in Federated Learning via Behavior Attestation

AttestedFL, a defense mechanism that monitors the training of individual nodes through state persistence in order to detect malicious workers, increased model accuracy in different FL settings, under different attack patterns and scenarios, e.g., attacks performed at different stages of convergence, colluding attackers, and continuous attacks.
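A rough sketch of the state-persistence idea, assuming the chief keeps a per-worker history of distances to the global model and flags workers whose local models stop progressing toward it; the window and progress threshold are assumptions, not AttestedFL's exact tests.

```python
import numpy as np
from collections import defaultdict

class BehaviorMonitor:
    """Keeps per-worker state across rounds (distance of the local model to the
    global model) and flags workers that stop progressing toward it."""

    def __init__(self, window=5, min_progress=0.0):
        self.history = defaultdict(list)    # worker_id -> list of distances
        self.window = window                # rounds of evidence before judging
        self.min_progress = min_progress    # required shrinkage over the window

    def record(self, worker_id, local_w, global_w):
        self.history[worker_id].append(float(np.linalg.norm(local_w - global_w)))

    def is_suspicious(self, worker_id):
        d = self.history[worker_id][-self.window:]
        if len(d) < self.window:
            return False                    # not enough evidence yet
        # a reliable worker's distance to the global model should shrink over time
        return (d[0] - d[-1]) < self.min_progress
```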

Chained Anomaly Detection Models for Federated Learning: An Intrusion Detection Case Study

A permissioned blockchain-based federated learning method where incremental updates to an anomaly detection machine learning model are chained together on the distributed ledger, supporting the auditing of machine learning models without the need to centralize the training data.
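A minimal sketch of how chaining incremental updates makes the training history tamper-evident, assuming plain SHA-256 hash links; a permissioned blockchain would additionally handle consensus and access control, which this toy record does not.

```python
import hashlib
import json

def chain_update(prev_hash, round_id, worker_id, update_digest):
    """Append-only audit record linking one model update to the previous record."""
    record = {
        "prev": prev_hash,
        "round": round_id,
        "worker": worker_id,
        "update": update_digest,   # e.g. SHA-256 of the serialized weights
    }
    record["hash"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()
    ).hexdigest()
    return record

# genesis = "0" * 64
# r1 = chain_update(genesis, 1, "worker-3", hashlib.sha256(b"weights").hexdigest())
```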

Mitigating Sybils in Federated Learning Poisoning

Describes FoolsGold, a novel defense to this problem that identifies poisoning sybils based on the diversity of client updates in the distributed learning process and exceeds the capabilities of existing state-of-the-art approaches to countering sybil-based label-flipping and backdoor poisoning attacks.
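A simplified sketch of the update-diversity idea: clients whose accumulated updates point in nearly the same direction get down-weighted. The cosine-similarity scoring below follows the spirit of FoolsGold but not its exact rescaling and learning-rate adaptation.

```python
import numpy as np

def similarity_based_weights(histories):
    """histories : 2-D array, one row per client = sum of that client's past updates
    returns     : per-client aggregation weights in [0, 1]"""
    unit = histories / (np.linalg.norm(histories, axis=1, keepdims=True) + 1e-12)
    sim = unit @ unit.T                          # pairwise cosine similarity
    np.fill_diagonal(sim, -np.inf)               # ignore self-similarity
    max_sim = sim.max(axis=1)                    # most similar other client
    weights = 1.0 - np.clip(max_sim, 0.0, 1.0)   # near-duplicate clients -> ~0
    return weights / (weights.max() + 1e-12)     # honest clients stay near 1
```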

Understanding Distributed Poisoning Attack in Federated Learning

Proposes Sniper, a scheme that eliminates poisoned local models from malicious participants during training: benign local models are identified by solving a maximum clique problem, and suspected (poisoned) local models are ignored during global model updating.
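A small sketch of the clique-based selection, assuming pairwise Euclidean distances between local models and a fixed closeness threshold; the brute-force clique search is only meant for the handful of workers in one round, and the threshold choice in the paper is more involved.

```python
import numpy as np
from itertools import combinations

def clique_filter(local_ws, dist_thresh):
    """Return indices of the largest set of mutually close local models;
    everything outside the clique is treated as a suspected poisoned update."""
    n = len(local_ws)
    close = [[np.linalg.norm(local_ws[i] - local_ws[j]) <= dist_thresh
              for j in range(n)] for i in range(n)]
    for size in range(n, 0, -1):                 # brute force, fine for small n
        for clique in combinations(range(n), size):
            if all(close[i][j] for i, j in combinations(clique, 2)):
                return list(clique)
    return []
```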

Local Model Poisoning Attacks to Byzantine-Robust Federated Learning

Performs the first systematic study of local model poisoning attacks on federated learning, in which an attacker who has compromised some client devices manipulates the local model parameters on those devices during the learning process so that the global model has a large testing error rate.
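For intuition only, a crude sketch of one such fabricated update: the compromised workers report a local model shifted opposite to the direction the benign updates would move the global model. The attacks in the paper are optimized against specific Byzantine-robust aggregation rules; the scale factor here is an arbitrary assumption.

```python
import numpy as np

def fabricated_update(global_w, benign_updates, scale=5.0):
    """Push the reported local model against the average benign direction."""
    benign_dir = np.mean(benign_updates, axis=0)
    return global_w - scale * benign_dir
```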

Federated Machine Learning: Concept and Applications

This work proposes building data networks among organizations based on federated mechanisms as an effective solution to allow knowledge to be shared without compromising user privacy.

Reliable Federated Learning for Mobile Networks

In this article, the concept of reputation is introduced as a metric and a reliable worker selection scheme is proposed to improve the reliability of federated learning tasks in mobile networks.
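A hypothetical sketch of reputation-based worker selection using a Beta-style score over past positive and negative interactions (e.g., updates that passed or failed the chief's checks); the cited article's reputation model and selection scheme may differ.

```python
def reputation_scores(interactions, prior=(1.0, 1.0)):
    """interactions : dict worker_id -> (num_positive, num_negative) past outcomes."""
    a0, b0 = prior
    return {w: (p + a0) / (p + n + a0 + b0) for w, (p, n) in interactions.items()}

def select_reliable_workers(interactions, k):
    """Pick the k workers with the highest reputation for the next task."""
    scores = reputation_scores(interactions)
    return sorted(scores, key=scores.get, reverse=True)[:k]
```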

Privacy and Robustness in Federated Learning: Attacks and Defenses

A comprehensive survey of privacy and robustness in FL over the past five years, providing a concise introduction to the concept of FL and a unique taxonomy covering privacy attacks and defenses.

Manipulating Machine Learning: Poisoning Attacks and Countermeasures for Regression Learning

Presents a theoretically grounded optimization framework specifically designed for linear regression, demonstrates its effectiveness on a range of datasets and models, and provides formal guarantees about its convergence and an upper bound on the effect of poisoning attacks when the defense is deployed.
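For a feel of the threat model, a toy sketch measuring how much a single injected point shifts a closed-form ridge regression fit; the paper instead solves an optimization problem to search for the most damaging poison points, so this is only an illustration under assumed names.

```python
import numpy as np

def ridge_fit(X, y, lam=0.1):
    """Closed-form ridge regression: w = (X^T X + lam * I)^(-1) X^T y."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

def poison_shift(X, y, x_p, y_p, lam=0.1):
    """How far one injected point (x_p, y_p) moves the fitted parameters."""
    w_clean = ridge_fit(X, y, lam)
    w_poison = ridge_fit(np.vstack([X, x_p]), np.append(y, y_p), lam)
    return np.linalg.norm(w_poison - w_clean)
```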

Justinian's GAAvernor: Robust Distributed Learning with Gradient Aggregation Agent

Justinian's GAAvernor (GAA), a Gradient Aggregation Agent that learns to be robust against Byzantine attacks via reinforcement learning, shows desirable robustness as if the system were under no attack, even in cases where 90% of the workers are Byzantine and controlled by the adversary.
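A loose sketch of reward-driven aggregation, assuming the chief holds a small validation set (the `val_loss_fn` below is a hypothetical helper): each worker's gradient is credited by how much applying it alone would reduce validation loss, and the credits become aggregation weights. GAA itself learns these credits with reinforcement learning rather than this one-step heuristic.

```python
import numpy as np

def credit_weighted_aggregate(global_w, grads, val_loss_fn, lr=0.1, temp=1.0):
    """Score each worker's gradient by the validation-loss drop it would cause
    on its own, turn the scores into softmax weights, and take a weighted step."""
    base = val_loss_fn(global_w)
    rewards = np.array([base - val_loss_fn(global_w - lr * g) for g in grads])
    weights = np.exp(rewards / temp)
    weights /= weights.sum()
    return global_w - lr * sum(w * g for w, g in zip(weights, grads))
```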