# Adversarial Attacks and Defenses on Graphs: A Review and Empirical Study

```bibtex
@article{Jin2020AdversarialAA,
  title   = {Adversarial Attacks and Defenses on Graphs: A Review and Empirical Study},
  author  = {Wei Jin and Yaxin Li and Han Xu and Yiqi Wang and Jiliang Tang},
  journal = {ArXiv},
  year    = {2020},
  volume  = {abs/2003.00653}
}
```

Deep neural networks (DNNs) have achieved remarkable performance in various tasks. However, recent studies have shown that DNNs can be easily fooled by small perturbations on the input, known as adversarial attacks. As the extension of DNNs to graphs, Graph Neural Networks (GNNs) have been demonstrated to inherit this vulnerability: an adversary can mislead GNNs into giving wrong predictions by modifying the graph structure, for example by manipulating a few edges. This vulnerability has aroused tremendous…
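As a concrete illustration of the structure perturbations mentioned in the abstract (a minimal sketch on a toy graph, not code from the paper), manipulating a few edges amounts to toggling a handful of entries in the adjacency matrix, which is enough to change a target node's neighborhood:

```python
import numpy as np

def flip_edges(adj, edge_list):
    """Toggle a small set of edges in a symmetric adjacency matrix:
    each (i, j) pair is added if absent and removed if present."""
    perturbed = adj.copy()
    for i, j in edge_list:
        perturbed[i, j] = perturbed[j, i] = 1 - perturbed[i, j]
    return perturbed

# A toy 4-node path graph: 0-1-2-3.
adj = np.array([[0, 1, 0, 0],
                [1, 0, 1, 0],
                [0, 1, 0, 1],
                [0, 0, 1, 0]])

# An attacker's budget of two flips: delete edge (1, 2), insert edge (0, 3).
perturbed = flip_edges(adj, [(1, 2), (0, 3)])
```

Real attacks such as Nettack choose which entries to flip by scoring candidate edges against the victim model's loss; the sketch only shows how small the perturbation itself is.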

## 56 Citations

Structack: Structure-based Adversarial Attacks on Graph Neural Networks

- Computer Science · HT
- 2021

This work studies a new attack strategy on GNNs that is uninformed, where the attacker has access only to the graph structure but no information about node attributes, and shows that structure-based uninformed attacks can approach the performance of informed attacks while being more computationally efficient.

Practical Adversarial Attacks on Graph Neural Networks

- 2020

We study the black-box attacks on graph neural networks (GNNs) under a novel and realistic constraint: attackers have access to only a subset of nodes in the network, and they can only attack a small…

Black-Box Adversarial Attacks on Graph Neural Networks with Limited Node Access

- Computer Science · ArXiv
- 2020

This work shows that common gradient-based white-box attacks can be generalized to the black-box setting via the connection between the gradient and an importance score similar to PageRank, and proposes a greedy procedure to correct the importance score that takes the diminishing-return pattern into account.

Black-box Gradient Attack on Graph Neural Networks: Deeper Insights in Graph-based Attack and Defense

- Computer Science, Mathematics · ArXiv
- 2021

The Black-Box Gradient Attack (BBGA) algorithm is proposed, which achieves stable attack performance without accessing the training sets of the GNNs and remains applicable when attacking various defense methods.

Adversarial Attack on Graph Neural Networks as An Influence Maximization Problem

- Computer Science · ArXiv
- 2021

This work studies the problem of attacking GNNs in a restricted and realistic setup, by perturbing the features of a small set of nodes with no access to model parameters or model predictions, and draws a connection between this type of attack and an influence maximization problem on the graph.

Understanding Structural Vulnerability in Graph Convolutional Networks

- Computer Science · IJCAI
- 2021

This work theoretically and empirically demonstrates that structural adversarial examples can be attributed to the non-robust aggregation scheme (i.e., the weighted mean) of GCNs, and takes advantage of the breakdown point, which can quantitatively measure the robustness of aggregation schemes.
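The fragility of mean aggregation can be seen in a few lines (a toy sketch, not code from the paper): a single adversarially injected neighbor drags a mean arbitrarily far, while a robust aggregator such as the median, with its higher breakdown point, is barely affected:

```python
import numpy as np

# Benign neighbor features cluster around 1.0; one adversarially
# injected neighbor carries an extreme value.
benign = np.array([0.9, 1.0, 1.1, 1.0])
neighbors = np.append(benign, 100.0)

mean_agg = neighbors.mean()        # dragged far away from 1.0
median_agg = np.median(neighbors)  # stays at 1.0
```

The (unweighted) mean has breakdown point 0: one outlier out of n suffices to corrupt it, whereas the median tolerates just under half the neighbors being adversarial.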

Adversarial Immunization for Improving Certifiable Robustness on Graphs

- Computer Science · ArXiv
- 2020

This paper formulates graph adversarial immunization as a bilevel optimization problem, i.e., vaccinating a fraction of node pairs, connected or unconnected, to improve the certifiable robustness of the graph against any admissible adversarial attack. It proposes an efficient meta-gradient algorithm that operates in a discrete way to circumvent the computationally expensive combinatorial optimization involved in solving the immunization problem.

Adversarial Attacks and Defenses: Frontiers, Advances and Practice

- Computer Science · KDD
- 2020

This tutorial provides a comprehensive overview of the frontiers and advances of adversarial attacks and their countermeasures, and introduces DeepRobust, a PyTorch adversarial learning library that aims to provide a comprehensive and easy-to-use platform to foster this research field.

Projective Ranking: A Transferable Evasion Attack Method on Graph Neural Networks

- Computer Science · CIKM
- 2021

The idea is to learn a powerful attack strategy that considers the long-term benefits of perturbations, then adjust it as little as possible to generate adversarial samples under different budgets, so that the learned strategy retains strong attack performance.

Query-free Black-box Adversarial Attacks on Graphs

- Computer Science · ArXiv
- 2020

A query-free black-box adversarial attack on graphs is proposed, in which the attacker has no knowledge of the target model and no query access to it; the attack's impact is shown to be quantifiable through spectral changes and is thus approximated using eigenvalue perturbation theory.

## References

Showing 1–10 of 63 references.

All You Need Is Low (Rank): Defending Against Adversarial Attacks on Graphs

- Computer Science · WSDM
- 2020

This paper explores the properties of Nettack perturbations and proposes LowBlow, a low-rank adversarial attack that affects the classification performance of both GCN and tensor-based node embeddings; it is shown that the low-rank attack is noticeable, and that making it unnoticeable results in a high-rank attack.

Adversarial Attack and Defense on Graph Data: A Survey

- Computer Science · ArXiv
- 2018

This work systematically organizes the considered works based on the features of each topic and provides a unified formulation for adversarial learning on graph data that covers most adversarial learning studies on graphs.

Transferring Robustness for Graph Neural Network Against Poisoning Attacks

- Computer Science, Mathematics · WSDM
- 2020

This work proposes PA-GNN, which relies on a penalized aggregation mechanism that directly restricts the negative impact of adversarial edges by assigning them lower attention coefficients. A meta-optimization algorithm trains PA-GNN to penalize perturbations using clean graphs and their adversarial counterparts, and transfers this ability to improve the robustness of PA-GNN on the poisoned graph.

Adversarial Attacks and Defenses in Images, Graphs and Text: A Review

- Computer Science, Mathematics · Int. J. Autom. Comput.
- 2020

A systematic and comprehensive overview of the main attack threats and the success of the corresponding countermeasures against adversarial examples is provided for the three most popular data types: images, graphs, and text.

Adversarial Attacks on Neural Networks for Graph Data

- Mathematics, Computer Science · KDD
- 2018

This work introduces the first study of adversarial attacks on attributed graphs, specifically focusing on models exploiting ideas of graph convolutions, and generates adversarial perturbations targeting the nodes' features and the graph structure, taking the dependencies between instances into account.

Adversarial Examples for Graph Data: Deep Insights into Attack and Defense

- Computer Science · IJCAI
- 2019

This paper proposes both attack and defense techniques for adversarial attacks and shows that the discreteness problem can be resolved by introducing integrated gradients, which accurately reflect the effect of perturbing certain features or edges while still benefiting from parallel computation.
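Integrated gradients can be approximated with a short Riemann sum (a generic sketch, not the paper's implementation): attributions are the input-minus-baseline difference times the average gradient along the straight path from baseline to input.

```python
import numpy as np

def integrated_gradients(grad_fn, x, baseline, steps=50):
    """Midpoint-rule approximation of integrated gradients:
    (x - baseline) * average of grad_fn along the straight path
    from baseline to x."""
    alphas = (np.arange(steps) + 0.5) / steps
    grads = np.array([grad_fn(baseline + a * (x - baseline)) for a in alphas])
    return (x - baseline) * grads.mean(axis=0)

# For f(x) = sum(x**2), grad f = 2x; with a zero baseline the exact
# attribution of coordinate i is x_i**2.
x = np.array([1.0, 2.0])
attributions = integrated_gradients(lambda v: 2 * v, x, np.zeros_like(x))
```

For graph attacks, `grad_fn` would be the model's gradient with respect to the (binary) adjacency or feature entries, and the averaged gradient gives a better edge/feature importance score than a single gradient at the discrete input.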

Can Adversarial Network Attack be Defended?

- Computer Science, Physics · ArXiv
- 2019

This paper proposes novel adversarial training strategies to improve GNNs' defensibility against attacks, analytically investigates the robustness properties granted to GNNs by the use of smooth defenses, and proposes two special smooth defense strategies: smoothing distillation and a smoothing cross-entropy loss function.

Robust Graph Convolutional Networks Against Adversarial Attacks

- Computer Science · KDD
- 2019

Robust GCN (RGCN) is a novel model that fortifies GCNs against adversarial attacks by adopting Gaussian distributions as the hidden representations of nodes in each convolutional layer, so that the variances of the Gaussian distributions can automatically absorb the effects of adversarial changes.

Link Prediction Adversarial Attack

- Physics, Computer Science · ArXiv
- 2018

It is concluded that most deep-model-based and other state-of-the-art link prediction algorithms cannot escape the adversarial attack any more than GAE can, and that the link prediction attack can serve as a robustness evaluation metric for current link prediction algorithms.

The General Black-box Attack Method for Graph Neural Networks

- Computer Science · ArXiv
- 2019

This paper begins by investigating the theoretical connections between different kinds of GNNs in a principled way, integrates different GNN models into a unified framework dubbed General Spectral Graph Convolution, and proposes a generalized adversarial attacker that does not require any knowledge of the target classifiers used in GNNs.