Corpus ID: 49557410

How To Backdoor Federated Learning

@inproceedings{Bagdasaryan2020HowTB,
  title={How To Backdoor Federated Learning},
  author={Eugene Bagdasaryan and Andreas Veit and Yiqing Hua and Deborah Estrin and Vitaly Shmatikov},
  booktitle={AISTATS},
  year={2020}
}
  • Eugene Bagdasaryan, Andreas Veit, Yiqing Hua, Deborah Estrin, Vitaly Shmatikov
  • Published in AISTATS 2020
  • Computer Science
  • Federated learning enables multiple participants to jointly construct a deep learning model without sharing their private training data with each other. [...] Key Result: We also show how to evade anomaly detection-based defenses by incorporating the evasion into the loss function when training the attack model.
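To make the key result concrete, below is a minimal PyTorch sketch of the general idea the abstract describes: the attacker's local training objective mixes the backdoor task loss with an anomaly term that keeps the malicious model close to the current global model (so the update does not stand out to an anomaly-detection defense), and the final update is scaled up so it survives federated averaging. The toy linear model, the synthetic backdoor batch, and the specific values of `alpha` and `gamma` are illustrative assumptions, not the authors' code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Illustrative models (assumed for this sketch): the attacker's local copy
# and the global weights it received from the server.
model = nn.Linear(32, 10)
global_model = nn.Linear(32, 10)
model.load_state_dict(global_model.state_dict())  # start local training from the global weights

alpha = 0.7    # trade-off between backdoor accuracy and stealth (assumed value)
gamma = 10.0   # scale-up factor for model replacement (assumed value)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

# One poisoned batch: inputs carrying the backdoor pattern, labels set to the attacker's target class.
x_backdoor = torch.randn(16, 32)
y_target = torch.full((16,), 3, dtype=torch.long)

for _ in range(5):  # a few local training steps
    optimizer.zero_grad()
    # Task loss on backdoored data: push the model to predict the attacker's chosen label.
    l_class = F.cross_entropy(model(x_backdoor), y_target)
    # Anomaly term: squared L2 distance to the global model, folding the evasion
    # of outlier-based defenses directly into the training loss.
    l_ano = sum(torch.norm(p - q.detach()) ** 2
                for p, q in zip(model.parameters(), global_model.parameters()))
    loss = alpha * l_class + (1 - alpha) * l_ano
    loss.backward()
    optimizer.step()

# Model replacement: scale the malicious delta so it still dominates after averaging.
with torch.no_grad():
    for p, q in zip(model.parameters(), global_model.parameters()):
        p.copy_(q + gamma * (p - q))
```

In the paper's setting the scaling factor is tied to the number of participants and the server's aggregation rate; the fixed `gamma` above just stands in for that quantity.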

    Citations

    Publications citing this paper.
    SHOWING 1-10 OF 118 CITATIONS, ESTIMATED 94% COVERAGE

    Poisoning Attack in Federated Learning using Generative Adversarial Nets

    • Jiale Zhang, Junjun Chen, Di Wu, Bing Chen, Shui Yu
    • Computer Science
    • 2019 18th IEEE International Conference On Trust, Security And Privacy In Computing And Communications/13th IEEE International Conference On Big Data Science And Engineering (TrustCom/BigDataSE)
    • 2019
    VIEW 3 EXCERPTS
    CITES BACKGROUND & METHODS

    Can You Really Backdoor Federated Learning?

    VIEW 5 EXCERPTS
    CITES METHODS & BACKGROUND
    HIGHLY INFLUENCED

    Eavesdrop the Composition Proportion of Training Labels in Federated Learning

    VIEW 3 EXCERPTS
    CITES BACKGROUND

    A Framework for Evaluating Gradient Leakage Attacks in Federated Learning

    VIEW 1 EXCERPT
    CITES METHODS

    Mitigating Sybils in Federated Learning Poisoning

    VIEW 3 EXCERPTS
    CITES BACKGROUND

    CITATION STATISTICS

    • 10 Highly Influenced Citations

    • Averaged 39 Citations per year from 2018 through 2020

    References

    Publications referenced by this paper.
    SHOWING 1-10 OF 61 REFERENCES

    Mitigating Sybils in Federated Learning Poisoning

    VIEW 4 EXCERPTS
    HIGHLY INFLUENTIAL

    Auror: defending against poisoning attacks in collaborative deep learning systems

    VIEW 6 EXCERPTS
    HIGHLY INFLUENTIAL

    Trojaning Attack on Neural Networks

    VIEW 2 EXCERPTS