# Adversarial Attack on Large Scale Graph

```bibtex
@article{Li2020AdversarialAO,
  title   = {Adversarial Attack on Large Scale Graph},
  author  = {Jintang Li and Tao Xie and Liang Chen and Fenfang Xie and Xiangnan He and Zibin Zheng},
  journal = {ArXiv},
  year    = {2020},
  volume  = {abs/2009.03488}
}
```

Recent studies have shown that graph neural networks are vulnerable to perturbations due to a lack of robustness and can therefore be easily fooled. Most current work on attacking graph neural networks uses gradient information to guide the attack and achieves outstanding performance. Nevertheless, their high time and space complexity makes them unmanageable for large-scale graphs. We argue that the main reason is that they have to use the entire graph for attacks…
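The gradient-guided attacks the abstract refers to can be illustrated with a minimal sketch: compute the loss gradient of a target node with respect to the adjacency matrix and greedily flip the edge whose sign-adjusted gradient most increases the loss. Everything below is a synthetic stand-in (a linear one-layer GCN with random weights, a toy 6-node path graph), not the paper's actual method; the gradient is also approximated by ignoring the renormalization terms, a common simplification.

```python
import numpy as np

rng = np.random.default_rng(0)

def normalize(A):
    """Symmetric GCN normalization D^{-1/2} (A + I) D^{-1/2}."""
    A_tilde = A + np.eye(A.shape[0])
    d_inv_sqrt = 1.0 / np.sqrt(A_tilde.sum(axis=1))
    return (A_tilde * d_inv_sqrt).T * d_inv_sqrt

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def gradient_edge_flip(A, X, W, target, label):
    """Flip the single edge incident to `target` whose (approximate)
    loss gradient most increases the target's cross-entropy loss."""
    H = X @ W                       # per-node logits before propagation
    z = normalize(A)[target] @ H    # target node's logits
    gz = softmax(z)
    gz[label] -= 1.0                # dL/dz for cross-entropy
    grad = H @ gz                   # approx. dL/dA_hat[target, :]
    # A 0->1 flip should follow a positive gradient, a 1->0 flip a negative one.
    score = grad * (1.0 - 2.0 * A[target])
    score[target] = -np.inf         # forbid self-loop edits
    j = int(np.argmax(score))
    A = A.copy()
    A[target, j] = A[j, target] = 1.0 - A[target, j]
    return A, j

# Toy graph: a 6-node path, random features, random "trained" weights.
n, d, c = 6, 4, 2
A = np.zeros((n, n))
for u, v in [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5)]:
    A[u, v] = A[v, u] = 1.0
X = rng.normal(size=(n, d))
W = rng.normal(size=(d, c))

A_attacked, flipped = gradient_edge_flip(A, X, W, target=0, label=1)
print("flipped edge (0, %d)" % flipped)
```

Note that the greedy step only touches one row and column of the adjacency matrix; the paper's point is that computing `grad` over the full graph is what becomes prohibitive at scale.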


## 17 Citations

### GraphAttacker: A General Multi-Task Graph Attack Framework

- Computer Science, IEEE Transactions on Network Science and Engineering
- 2022

This work proposes GraphAttacker, a novel general graph attack framework that can flexibly adjust its structures and attack strategies according to the graph analysis task, and introduces a novel similarity modification rate (SMR) to conduct stealthier attacks by accounting for the change in the node similarity distribution.

### A Targeted Universal Attack on Graph Convolutional Network by Using Fake Nodes

- Computer Science, Neural Processing Letters
- 2022

A targeted universal attack (TUA) against graph convolutional networks (GCNs) that uses only a few fake nodes as attack nodes; the work demonstrates its attack capability, raises community awareness of the threat posed by TUA, and draws attention to its future defense.

### DeepInsight: Interpretability Assisting Detection of Adversarial Samples on Graphs

- Computer Science, ArXiv
- 2021

This paper theoretically investigates three well-known adversarial attack methods, i.e., Nettack, Meta Attack, and GradArgmax, and finds that different attack methods have their specific attack preferences on changing network structure, and utilizes the network attributes to design machine learning models for adversarial sample detection and attack method recognition.

### Robustness of Graph Neural Networks at Scale

- Computer Science, NeurIPS
- 2021

This work proposes two sparsity-aware first-order optimization attacks that maintain an efficient representation despite optimizing over a number of parameters which is quadratic in the number of nodes, and designs a robust aggregation function, Soft Median, resulting in an effective defense at all scales.

### Graph Structural Attack by Spectral Distance

- Computer Science, ArXiv
- 2021

Qualitative analysis shows the connection between the attack behavior and the changes imposed on the spectral distribution, providing empirical evidence that maximizing spectral distance is an effective way to change the structural properties of graphs in the spatial domain and to perturb the frequency components in the Fourier domain.

### GUARD: Graph Universal Adversarial Defense

- Computer Science, ArXiv
- 2022

This work presents a simple yet effective method, named GUARD, which improves the robustness of several established GCNs against multiple adversarial attacks and outperforms existing adversarial defense methods by large margins.

### A Comprehensive Survey on Trustworthy Graph Neural Networks: Privacy, Robustness, Fairness, and Explainability

- Computer Science, ArXiv
- 2022

This paper gives a comprehensive survey of GNNs in the computational aspects of privacy, robustness, fairness, and explainability, presents a taxonomy of the related methods, and formulates general frameworks for the multiple categories of trustworthy GNNs.

### Single-Node Attacks for Fooling Graph Neural Networks

- Computer Science, Neurocomputing
- 2022

### Model Inversion Attacks against Graph Neural Networks

- Computer Science, IEEE Transactions on Knowledge and Data Engineering
- 2022

A systematic study of model inversion attacks against Graph Neural Networks (GNNs), one of the state-of-the-art graph analysis tools, which proposes two methods based on gradient estimation and reinforcement learning (RL-GraphMI) and shows that edges with greater inversion risk are more likely to be recovered.

### LRP2A: Layer-wise Relevance Propagation based Adversarial attacking for Graph Neural Networks

- Computer Science, Knowledge-Based Systems
- 2022

## References

Showing 1–10 of 50 references

### Adversarial Attacks on Neural Networks for Graph Data

- Computer Science, KDD
- 2018

This work introduces the first study of adversarial attacks on attributed graphs, specifically focusing on models exploiting ideas of graph convolutions, and generates adversarial perturbations targeting the node's features and the graph structure, taking the dependencies between instances into account.

### Topology Attack and Defense for Graph Neural Networks: An Optimization Perspective

- Computer Science, IJCAI
- 2019

A novel gradient-based attack method is presented that addresses the difficulty of tackling discrete graph data, yielding higher robustness against both gradient-based and greedy attack methods without sacrificing classification accuracy on the original graph.

### Adversarial Attack on Graph Structured Data

- Computer Science, ICML
- 2018

This paper proposes a reinforcement learning based attack method that learns the generalizable attack policy, while only requiring prediction labels from the target classifier, and uses both synthetic and real-world data to show that a family of Graph Neural Network models are vulnerable to adversarial attacks.

### Link Prediction Adversarial Attack Via Iterative Gradient Attack

- Computer Science, IEEE Transactions on Computational Social Systems
- 2020

The results of comprehensive experiments on different real-world graphs indicate that most deep models, and even state-of-the-art link prediction algorithms such as GAE, cannot escape the adversarial attack.

### A Survey of Adversarial Learning on Graphs

- Computer Science, ArXiv
- 2020

This work surveys and unifies the existing works w.r.t. attack and defense in graph analysis tasks, gives appropriate definitions and taxonomies, and emphasizes the importance of related evaluation metrics.

### Adversarial Attacks on Node Embeddings via Graph Poisoning

- Computer Science, ICML
- 2019

This work provides the first adversarial vulnerability analysis on the widely used family of methods based on random walks to derive efficient adversarial perturbations that poison the network structure and have a negative effect on both the quality of the embeddings and the downstream tasks.

### Adversarial Attacks on Graph Neural Networks via Meta Learning

- Computer Science
- 2019

The core principle is to use meta-gradients to solve the bilevel problem underlying training-time attacks on graph neural networks for node classification that perturb the discrete graph structure, essentially treating the graph as a hyperparameter to optimize.

### Towards Evaluating the Robustness of Neural Networks

- Computer Science, 2017 IEEE Symposium on Security and Privacy (SP)
- 2017

It is demonstrated that defensive distillation does not significantly increase the robustness of neural networks, and three new attack algorithms are introduced that succeed on both distilled and undistilled neural networks with 100% probability.

### Towards Deep Learning Models Resistant to Adversarial Attacks

- Computer Science, ICLR
- 2018

This work studies the adversarial robustness of neural networks through the lens of robust optimization, and suggests the notion of security against a first-order adversary as a natural and broad security guarantee.

### Graph Embedding Techniques, Applications, and Performance: A Survey

- Computer Science, Knowledge-Based Systems
- 2018