# Towards More Practical Adversarial Attacks on Graph Neural Networks

@article{Ma2020TowardsMP, title={Towards More Practical Adversarial Attacks on Graph Neural Networks}, author={Jiaqi Ma and Shuangrui Ding and Qiaozhu Mei}, journal={arXiv: Learning}, year={2020} }

We study black-box attacks on graph neural networks (GNNs) under a novel and realistic constraint: attackers have access to only a subset of nodes in the network, and they can attack only a small number of them. A node selection step is therefore essential under this setup. We demonstrate that the structural inductive biases of GNN models can be an effective source of information for this type of attack. Specifically, by exploiting the connection between the backward propagation of GNNs and random walks, we show…
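The node-selection idea in the abstract can be sketched in a few lines; this is a minimal illustration, not the authors' code, and the self-loop normalization and `k`-step score are assumptions: a node's influence under a `k`-layer GNN is approximated by how much `k`-step random-walk mass arrives at it.

```python
import numpy as np

def random_walk_scores(adj: np.ndarray, k: int = 2) -> np.ndarray:
    """Approximate per-node influence by total k-step random-walk mass."""
    a_hat = adj + np.eye(adj.shape[0])              # self-loops, as in GCN propagation
    trans = a_hat / a_hat.sum(axis=1, keepdims=True)  # row-normalised transition matrix
    walk = np.linalg.matrix_power(trans, k)           # k-step transition probabilities
    return walk.sum(axis=0)                           # mass arriving at each node

def select_targets(adj: np.ndarray, budget: int, k: int = 2) -> np.ndarray:
    """Pick the `budget` accessible nodes with the highest influence score."""
    return np.argsort(-random_walk_scores(adj, k))[:budget]

# toy graph: a star centred on node 0, plus an isolated edge between nodes 4 and 5
adj = np.array([
    [0, 1, 1, 1, 0, 0],
    [1, 0, 0, 0, 0, 0],
    [1, 0, 0, 0, 0, 0],
    [1, 0, 0, 0, 0, 0],
    [0, 0, 0, 0, 0, 1],
    [0, 0, 0, 0, 1, 0],
], dtype=float)
print(select_targets(adj, budget=1))  # prints [0]: the hub collects the most walk mass
```

With a real attack budget, the selected nodes would then have their features perturbed; the scoring step above is the part the abstract ties to backward propagation.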

## 26 Citations

Adversarial Diffusion Attacks on Graph-based Traffic Prediction Models

- Computer Science, ArXiv
- 2021

The proposed diffusion attack selects and attacks a small set of nodes to degrade the performance of the entire prediction model, with the ultimate goal of improving the robustness of GCN-based traffic prediction models and better protecting smart mobility systems.

Surrogate Representation Learning with Isometric Mapping for Gray-box Graph Adversarial Attacks

- Computer Science, ArXiv
- 2021

This work investigates how the representation learning of surrogate models affects the transferability of gray-box graph adversarial attacks; the proposed SRLIM constrains the topological structure of nodes from the input layer to the embedding space, maintaining the similarity of nodes throughout the propagation process.

Adversarial Attack on Graph Neural Networks as An Influence Maximization Problem

- Computer Science, ArXiv
- 2021

This work studies the problem of attacking GNNs in a restricted and realistic setup, perturbing the features of a small set of nodes with no access to model parameters or model predictions, and draws a connection between this type of attack and an influence maximization problem on the graph.

Interpretable and Effective Reinforcement Learning for Attacking against Graph-based Rumor Detection

- Computer Science
- 2022

Social networks are polluted by rumors, which can be detected by machine learning models. However, the models are fragile and understanding the vulnerabilities is critical to rumor detection. Certain…

Adversarial Attack Framework on Graph Embedding Models with Limited Knowledge

- Computer Science, ArXiv
- 2021

This paper designs a generalized adversarial attacker, GF-Attack, which can perform an effective attack without knowing the number of layers of the graph embedding model, and proves that GF-Attack can attack the graph filter directly in a black-box fashion.

Adversarial Attacks on Graph Classification via Bayesian Optimisation

- Computer Science, Mathematics, ArXiv
- 2021

A novel Bayesian optimisation-based attack method for graph classification models is proposed that is black-box, query-efficient, and parsimonious with respect to the perturbation applied; its effectiveness and flexibility are empirically validated on a wide range of graph classification tasks involving varying graph properties, constraints, and modes of attack.

Adversarial for Good? How the Adversarial ML Community's Values Impede Socially Beneficial Uses of Attacks

- Computer Science, ICML 2021
- 2021

It is found that most adversarial ML researchers at NeurIPS hold two fundamental assumptions that make it difficult for them to consider socially beneficial uses of attacks: that it is desirable to make systems robust independent of context, and that attackers of systems are normatively bad while defenders of systems are normatively good.

Black-box Gradient Attack on Graph Neural Networks: Deeper Insights in Graph-based Attack and Defense

- Computer Science, Mathematics, ArXiv
- 2021

The Black-Box Gradient Attack (BBGA) algorithm is proposed, which achieves stable attack performance without accessing the training sets of the GNNs and remains applicable when attacking various defense methods.

Enhancing Self-supervised Video Representation Learning via Multi-level Feature Optimization

- Computer Science, ArXiv
- 2021

A multi-level feature optimization framework is proposed to improve the generalization and temporal modeling ability of learned video representations, together with a simple temporal modeling module built on multi-level features to enhance motion pattern learning.

Expressive 1-Lipschitz Neural Networks for Robust Multiple Graph Learning against Adversarial Attacks

- Computer Science, ICML
- 2021

(Scalar case.) According to Definition 1 and Lemma 1, given large enough $M$, the domain $(-\infty, +\infty)$ can be partitioned into $M$ subintervals $P = \{[x_0, x_1], [x_1, x_2], \ldots, [x_{m-1}, x_m], \ldots, [x_{M-1}, x_M]\}$…

## References

Showing 1-10 of 31 references

Topology Attack and Defense for Graph Neural Networks: An Optimization Perspective

- Computer Science, Mathematics, IJCAI
- 2019

A novel gradient-based attack method is presented that addresses the difficulty of tackling discrete graph data, and the corresponding defense yields higher robustness against both gradient-based and greedy attack methods without sacrificing classification accuracy on the original graph.

Adversarial Attacks on Neural Networks for Graph Data

- Mathematics, Computer Science, KDD
- 2018

This work introduces the first study of adversarial attacks on attributed graphs, specifically focusing on models exploiting ideas of graph convolutions, and generates adversarial perturbations targeting both node features and graph structure, taking the dependencies between instances into account.

A Restricted Black-Box Adversarial Framework Towards Attacking Graph Embedding Models

- Computer Science, Mathematics, AAAI
- 2020

This paper investigates the theoretical connections between graph signal processing and graph embedding models in a principled way, and formulates a generalized adversarial attacker, GF-Attack, constructed from the graph filter and feature matrix and validated on several benchmark datasets.

Adversarial Attack on Graph Structured Data

- Computer Science, Mathematics, ICML
- 2018

This paper proposes a reinforcement learning based attack method that learns a generalizable attack policy while requiring only prediction labels from the target classifier, and uses both synthetic and real-world data to show that a family of graph neural network models is vulnerable to adversarial attacks.

Adversarial Attacks on Graph Neural Networks via Meta Learning

- Computer Science, Mathematics
- 2019

The core principle is to use meta-gradients to solve the bilevel problem underlying training-time attacks on graph neural networks for node classification, perturbing the discrete graph structure and essentially treating the graph as a hyperparameter to optimize.

Adversarial Attacks on Node Embeddings via Graph Poisoning

- Computer Science, ICML
- 2019

This work provides the first adversarial vulnerability analysis of the widely used family of random-walk-based embedding methods, deriving efficient adversarial perturbations that poison the network structure and degrade both the quality of the embeddings and the performance of downstream tasks.

Fast Gradient Attack on Network Embedding

- Computer Science, Physics, ArXiv
- 2018

A framework is proposed to generate adversarial networks based on the gradient information in Graph Convolutional Networks (GCN); the proposed FGA outperforms several baseline methods, showing that a network embedding can be easily disturbed by rewiring only a few links and achieving state-of-the-art attack performance.

Node Injection Attacks on Graphs via Reinforcement Learning

- Mathematics, Computer Science, ArXiv
- 2019

This paper describes a reinforcement learning based method, NIPA, that sequentially modifies the adversarial information of the injected nodes, and reports experiments showing the superior performance of the proposed method relative to existing state-of-the-art methods.

Adversarial Examples for Graph Data: Deep Insights into Attack and Defense

- Computer Science, IJCAI
- 2019

This paper proposes both attack and defense techniques for adversarial attacks and shows that the discreteness problem can be resolved by introducing integrated gradients, which accurately reflect the effect of perturbing certain features or edges while still benefiting from parallel computation.
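The integrated-gradients idea this entry refers to can be sketched as follows. This is an illustration only: the finite-difference gradient and the toy linear "model score" are assumptions made here for self-containedness (the paper computes exact model gradients of a GNN loss); the path-integral structure of the attribution is the part being demonstrated.

```python
import numpy as np

def numerical_grad(f, x, eps=1e-5):
    """Central-difference gradient of a scalar function f at x (stand-in for autodiff)."""
    g = np.zeros_like(x, dtype=float)
    for i in np.ndindex(x.shape):
        e = np.zeros_like(x, dtype=float)
        e[i] = eps
        g[i] = (f(x + e) - f(x - e)) / (2 * eps)
    return g

def integrated_gradients(f, x, baseline, steps=50):
    """Average the gradient along the straight path from baseline to x,
    then scale by the displacement: IG_i = (x_i - b_i) * mean_s grad_i(b + s/steps * (x - b))."""
    total = np.zeros_like(x, dtype=float)
    for s in range(1, steps + 1):
        point = baseline + (s / steps) * (x - baseline)
        total += numerical_grad(f, point)
    return (x - baseline) * total / steps

# toy "model score": a fixed weight per adjacency entry (hypothetical stand-in for a GNN loss)
weights = np.array([[0.0, 2.0], [2.0, -1.0]])
score = lambda a: float((weights * a).sum())

adj = np.array([[0.0, 1.0], [1.0, 1.0]])
attributions = integrated_gradients(score, adj, np.zeros_like(adj))
# for a linear score, IG recovers weights * adj exactly; an attacker would
# rank edges by |attribution| to choose which discrete entries to flip
```

Ranking entries of `attributions` by magnitude is what makes the discrete perturbation choice tractable: the attribution is computed over a continuous path even though each edge ultimately flips between 0 and 1.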

Towards Evaluating the Robustness of Neural Networks

- Computer Science, 2017 IEEE Symposium on Security and Privacy (SP)
- 2017

It is demonstrated that defensive distillation does not significantly increase the robustness of neural networks, and three new attack algorithms are introduced that succeed on both distilled and undistilled neural networks with 100% probability.