Corpus ID: 211678150

Adversarial Attacks and Defenses on Graphs: A Review and Empirical Study

@article{Jin2020AdversarialAA,
  title={Adversarial Attacks and Defenses on Graphs: A Review and Empirical Study},
  author={Wei Jin and Yaxin Li and Han Xu and Yiqi Wang and Jiliang Tang},
  journal={ArXiv},
  year={2020},
  volume={abs/2003.00653}
}
Deep neural networks (DNNs) have achieved significant performance in various tasks. However, recent studies have shown that DNNs can be easily fooled by small perturbations of the input, known as adversarial attacks. As extensions of DNNs to graphs, graph neural networks (GNNs) have been shown to inherit this vulnerability: an adversary can mislead a GNN into wrong predictions by modifying the graph structure, e.g., by manipulating a few edges. This vulnerability has raised tremendous…
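The mechanism the abstract describes, misleading a GNN by manipulating a few edges, can be illustrated with a toy sketch (hypothetical code, not from the paper): flipping a single edge in the adjacency matrix changes what a simplified mean-aggregation layer computes for the target node.

```python
# Hypothetical illustration of a structure-perturbation attack:
# one flipped edge shifts a node's aggregated representation.

def propagate(adj, features):
    """One round of mean-neighbor aggregation (a simplified GNN layer)."""
    n = len(adj)
    out = []
    for i in range(n):
        neigh = [features[j] for j in range(n) if adj[i][j]]
        out.append(sum(neigh) / len(neigh) if neigh else features[i])
    return out

def flip_edge(adj, u, v):
    """Adversarial perturbation: add (or remove) the undirected edge (u, v)."""
    adj = [row[:] for row in adj]
    adj[u][v] = adj[v][u] = 1 - adj[u][v]
    return adj

# Tiny graph: node 0 is linked to node 1 only; node 2 is isolated.
adj = [[0, 1, 0],
       [1, 0, 0],
       [0, 0, 0]]
feats = [1.0, 1.0, -5.0]

clean = propagate(adj, feats)                      # node 0 aggregates to 1.0
attacked = propagate(flip_edge(adj, 0, 2), feats)  # node 0 aggregates to -2.0
```

Adding the single edge (0, 2) drags node 0's aggregated feature from 1.0 to (1.0 + (-5.0)) / 2 = -2.0, which is the kind of representation shift that flips a downstream classifier's prediction.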

    Citations

    Publications citing this paper:

    DeepRobust: A PyTorch Library for Adversarial Attacks and Defenses


    GNNGuard: Defending Graph Neural Networks against Adversarial Attacks


    Graph Structure Learning for Robust Graph Neural Networks


    References

    Publications referenced by this paper (5 of 60 references shown):

    Adversarial Attacks and Defenses in Images, Graphs and Text: A Review


    Adversarial Attacks on Neural Networks for Graph Data

    (highly influential)

    Can Adversarial Network Attack be Defended?


    Adversarial Examples for Graph Data: Deep Insights into Attack and Defense

    (highly influential)

    The General Black-box Attack Method for Graph Neural Networks

    (highly influential)