• Corpus ID: 225086593

Towards More Practical Adversarial Attacks on Graph Neural Networks

@article{Ma2020TowardsMP,
  title={Towards More Practical Adversarial Attacks on Graph Neural Networks},
  author={Jiaqi Ma and Shuangrui Ding and Qiaozhu Mei},
  journal={arXiv: Learning},
  year={2020}
}
We study black-box attacks on graph neural networks (GNNs) under a novel and realistic constraint: attackers have access to only a subset of nodes in the network, and they can attack only a small number of them. A node selection step is essential under this setup. We demonstrate that the structural inductive biases of GNN models can be an effective source for this type of attack. Specifically, by exploiting the connection between the backward propagation of GNNs and random walks, we show… 
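
The abstract's key idea, that the backward propagation of a GCN-style GNN behaves like a random walk on the graph, suggests a simple way to rank the accessible nodes: score each candidate by how much probability mass short random walks route through it, then spend the limited attack budget on the top-scoring nodes. The sketch below is only a minimal illustration of that intuition; the function names, the column-sum score, and the toy graph are assumptions for illustration and do not reproduce the paper's actual selection algorithm or perturbation step.

```python
# Hypothetical sketch (NOT the paper's exact procedure): rank attackable
# candidate nodes by how much probability mass an L-step random walk routes
# through them, mirroring the abstract's link between GNN backward
# propagation and random walks. All names below are illustrative.
import numpy as np

def random_walk_scores(adj: np.ndarray, num_steps: int) -> np.ndarray:
    """Column sums of the L-step random-walk matrix on the graph with self-loops."""
    adj_sl = adj + np.eye(adj.shape[0])                  # self-loops, as in GCN propagation
    trans = adj_sl / adj_sl.sum(axis=1, keepdims=True)   # row-normalized transition matrix
    walk = np.linalg.matrix_power(trans, num_steps)      # L-step transition probabilities
    return walk.sum(axis=0)                              # how often each node is reached

def select_attack_nodes(adj: np.ndarray, candidates, budget: int, num_steps: int = 2):
    """Pick the `budget` accessible candidates with the highest random-walk scores."""
    scores = random_walk_scores(adj, num_steps)
    return sorted(candidates, key=lambda v: scores[v], reverse=True)[:budget]

# Toy usage: a 5-node path graph 0-1-2-3-4; the central candidate (node 2)
# receives the most walk mass and is selected under a budget of one node.
adj = np.zeros((5, 5))
for u, v in [(0, 1), (1, 2), (2, 3), (3, 4)]:
    adj[u, v] = adj[v, u] = 1.0
print(select_attack_nodes(adj, candidates=[0, 2, 4], budget=1))
```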

Citations

Adversarial Diffusion Attacks on Graph-based Traffic Prediction Models
TLDR
The proposed diffusion attack selects and attacks a small set of nodes to degrade the performance of the entire prediction model, with the ultimate goal of improving the robustness of GCN-based traffic prediction models and better protecting smart mobility systems.
Surrogate Representation Learning with Isometric Mapping for Gray-box Graph Adversarial Attacks
TLDR
The effect of surrogate-model representation learning on the transferability of gray-box graph adversarial attacks is investigated, and the proposed SRLIM constrains the topological structure of nodes from the input layer to the embedding space, maintaining the similarity of nodes throughout the propagation process.
Adversarial Attack on Graph Neural Networks as An Influence Maximization Problem
TLDR
This work studies the problem of attacking GNNs in a restricted and realistic setup, perturbing the features of a small set of nodes with no access to model parameters or model predictions, and draws a connection between this type of attack and an influence maximization problem on the graph.
Interpretable and Effective Reinforcement Learning for Attacking against Graph-based Rumor Detection
Social networks are polluted by rumors, which can be detected by machine learning models. However, the models are fragile, and understanding the vulnerabilities is critical to rumor detection. Certain …
Adversarial Attack Framework on Graph Embedding Models with Limited Knowledge
TLDR
This paper designs a generalized adversarial attacker, GF-Attack, which can perform an effective attack without knowing the number of layers of the graph embedding model, and proves that GF-Attack can perform the attack directly on the graph filter in a black-box fashion.
Adversarial Attacks on Graph Classification via Bayesian Optimisation
TLDR
A novel Bayesian optimisation-based attack method for graph classification models is proposed that is black-box, query-efficient, and parsimonious with respect to the perturbation applied; its effectiveness and flexibility are empirically validated on a wide range of graph classification tasks involving varying graph properties, constraints, and modes of attack.
Adversarial for Good? How the Adversarial ML Community's Values Impede Socially Beneficial Uses of Attacks
TLDR
It is found that most adversarial ML researchers at NeurIPS hold two fundamental assumptions that will make it difficult for them to consider socially beneficial uses of attacks: that it is desirable to make systems robust independent of context, and that attackers of systems are normatively bad while defenders of systems are normatively good.
Black-box Gradient Attack on Graph Neural Networks: Deeper Insights in Graph-based Attack and Defense
TLDR
The Black-Box Gradient Attack (BBGA) algorithm is proposed, which achieves stable attack performance without accessing the training sets of the GNNs and remains applicable when attacking models protected by various defense methods.
Enhancing Self-supervised Video Representation Learning via Multi-level Feature Optimization
TLDR
A multi-level feature optimization framework to improve the generalization and temporal modeling ability of learned video representations is proposed, along with a simple temporal modeling module that uses multi-level features to enhance motion pattern learning.
Expressive 1-Lipschitz Neural Networks for Robust Multiple Graph Learning against Adversarial Attacks
(Scalar case.) According to Definition 1 and Lemma 1, given a large enough M, the domain (−∞, +∞) can be partitioned into M subintervals P = {[x_0, x_1], [x_1, x_2], …, [x_{m−1}, x_m], …, [x_{M−1}, x_M]} …

References

SHOWING 1-10 OF 31 REFERENCES
Topology Attack and Defense for Graph Neural Networks: An Optimization Perspective
TLDR
A novel gradient-based attack method is presented that addresses the difficulty of tackling discrete graph data, and the accompanying defense yields higher robustness against both gradient-based and greedy attack methods without sacrificing classification accuracy on the original graph.
Adversarial Attacks on Neural Networks for Graph Data
TLDR
This work introduces the first study of adversarial attacks on attributed graphs, specifically focusing on models exploiting ideas of graph convolutions, and generates adversarial perturbations targeting node features and the graph structure, taking the dependencies between instances into account.
A Restricted Black-Box Adversarial Framework Towards Attacking Graph Embedding Models
TLDR
This paper investigates the theoretical connections between graph signal processing and graph embedding models in a principled way and formulates a generalized adversarial attacker, GF-Attack, which is constructed from the graph filter and feature matrix and validated on several benchmark datasets.
Adversarial Attack on Graph Structured Data
  • H. Dai, Hui Li, +4 authors Le Song · ICML, 2018
TLDR
This paper proposes a reinforcement-learning-based attack method that learns a generalizable attack policy while requiring only prediction labels from the target classifier, and uses both synthetic and real-world data to show that a family of Graph Neural Network models is vulnerable to adversarial attacks.
Adversarial Attacks on Graph Neural Networks via Meta Learning
TLDR
The core principle is to use meta-gradients to solve the bilevel problem underlying training-time attacks on graph neural networks for node classification that perturb the discrete graph structure, essentially treating the graph as a hyperparameter to optimize.
Adversarial Attacks on Node Embeddings via Graph Poisoning
TLDR
This work provides the first adversarial vulnerability analysis of the widely used family of node embedding methods based on random walks, deriving efficient adversarial perturbations that poison the network structure and degrade both the quality of the embeddings and performance on downstream tasks.
Fast Gradient Attack on Network Embedding
TLDR
A framework to generate adversarial networks based on the gradient information in a Graph Convolutional Network (GCN) is proposed; the proposed FGA outperforms several baseline methods, showing that the network embedding can be easily disturbed by rewiring only a few links and achieving state-of-the-art attack performance.
Node Injection Attacks on Graphs via Reinforcement Learning
TLDR
This paper describes a reinforcement-learning-based method, namely NIPA, that sequentially modifies the adversarial information of the injected nodes, and reports experiments showing the superior performance of the proposed method relative to existing state-of-the-art methods.
Adversarial Examples for Graph Data: Deep Insights into Attack and Defense
TLDR
This paper proposes both attack and defense techniques for graph adversarial attacks and shows that the discreteness problem can easily be resolved by introducing integrated gradients, which accurately reflect the effect of perturbing certain features or edges while still benefiting from parallel computation.
Towards Evaluating the Robustness of Neural Networks
TLDR
It is demonstrated that defensive distillation does not significantly increase the robustness of neural networks, and three new attack algorithms are introduced that succeed on both distilled and undistilled neural networks with 100% probability.