Adversarial Attacks on Neural Networks for Graph Data

@inproceedings{Zgner2018AdversarialAO,
  title={Adversarial Attacks on Neural Networks for Graph Data},
  author={Daniel Z{\"u}gner and Amir Akbarnejad and Stephan G{\"u}nnemann},
  booktitle={Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery \& Data Mining},
  year={2018}
}
Deep learning models for graphs have achieved strong performance for the task of node classification. Despite their proliferation, currently there is no study of their robustness to adversarial attacks. Yet, in domains where they are likely to be used, e.g. the web, adversaries are common. Can deep learning models for graphs be easily fooled? In this work, we introduce the first study of adversarial attacks on attributed graphs, specifically focusing on models exploiting ideas of graph… 
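The graph-convolution models the abstract refers to propagate node features through a normalized adjacency matrix: one layer computes ReLU(Â H W) with Â = D̃^(-1/2)(A + I)D̃^(-1/2). A minimal NumPy sketch of that propagation rule (the toy graph and weights are illustrative, not from the paper):

```python
import numpy as np

def normalized_adjacency(A):
    """A_hat = D~^{-1/2} (A + I) D~^{-1/2}, the GCN propagation matrix."""
    A_tilde = A + np.eye(A.shape[0])      # add self-loops
    d = A_tilde.sum(axis=1)               # degrees of A + I
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return D_inv_sqrt @ A_tilde @ D_inv_sqrt

def gcn_layer(A_hat, H, W):
    """One graph-convolutional layer: ReLU(A_hat H W)."""
    return np.maximum(A_hat @ H @ W, 0.0)

# toy graph: 3 nodes in a path 0-1-2, one-hot features
A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], float)
H = np.eye(3)
rng = np.random.default_rng(0)
W = rng.normal(size=(3, 2))
Z = gcn_layer(normalized_adjacency(A), H, W)
print(Z.shape)  # (3, 2)
```

Adversarial perturbations in this setting edit entries of A or H; because Â couples each node to its neighborhood, a change at one node propagates to others' predictions.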

Adversarial Attack on Graph Structured Data
TLDR
This paper proposes a reinforcement learning based attack method that learns the generalizable attack policy, while only requiring prediction labels from the target classifier, and uses both synthetic and real-world data to show that a family of Graph Neural Network models are vulnerable to adversarial attacks.
Adversarial Attacks on Graphs by Adding Fake Nodes
TLDR
This work proposes a reinforcement learning method, g-Faker, for black-box attack, and a gradient-based greedy method, GradAtt, for white-box attack, both of which are designed to add fake nodes to graphs.
Task and Model Agnostic Adversarial Attack on Graph Neural Networks
TLDR
This work investigates the problem of task and model agnostic evasion attacks where adversaries modify the test graph to affect the performance of any unknown downstream task, and designs an effective heuristic through a novel combination of Graph Isomorphism Network with deep Q-learning.
Adversarial Attack on Hierarchical Graph Pooling Neural Networks
TLDR
This paper proposes an adversarial attack framework targeting Hierarchical Graph Pooling (HGP) neural networks, advanced GNNs that achieve strong prediction accuracy in graph classification, and designs a surrogate model consisting of convolutional and pooling operators to generate adversarial samples that fool hierarchical GNN-based graph classification models.
Robust Graph Convolutional Networks Against Adversarial Attacks
TLDR
Robust GCN (RGCN), a novel model that "fortifies" GCNs against adversarial attacks, is proposed; by adopting Gaussian distributions as the hidden representations of nodes in each convolutional layer, it can automatically absorb the effects of adversarial changes in the variances of the Gaussian distributions.
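The Gaussian-hidden-representation idea can be sketched as a layer that propagates a mean and a variance per node and samples via the reparameterization trick. This is a simplified illustration only — the actual RGCN additionally down-weights high-variance neighbors with an attention score — and all names here are hypothetical:

```python
import numpy as np

def gaussian_gcn_layer(A_hat, mu, sigma2, W_mu, W_sigma, rng):
    """One layer propagating a Gaussian N(mu_v, diag(sigma2_v)) per node.
    Simplified sketch: real RGCN also applies variance-based attention."""
    mu_out = np.maximum(A_hat @ mu @ W_mu, 0.0)                    # mean branch
    sigma2_out = np.maximum(A_hat @ sigma2 @ W_sigma, 0.0) + 1e-6  # variance branch, kept positive
    # reparameterization: z = mu + eps * sqrt(sigma2), eps ~ N(0, I)
    z = mu_out + rng.standard_normal(mu_out.shape) * np.sqrt(sigma2_out)
    return mu_out, sigma2_out, z

rng = np.random.default_rng(0)
A_hat = np.full((3, 3), 1.0 / 3.0)        # toy, already-normalized adjacency
mu, sigma2 = np.eye(3), np.ones((3, 3))   # initial per-node means / variances
mu1, sigma2_1, z = gaussian_gcn_layer(A_hat, mu, sigma2,
                                      rng.normal(size=(3, 2)),
                                      rng.normal(size=(3, 2)), rng)
```

The variance channel is what lets the model "absorb" perturbations: an attacked node can be assigned high variance, shrinking its influence downstream.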
Adversarial Attacks on Graph Neural Networks via Meta Learning
TLDR
The core principle is to use meta-gradients to solve the bilevel problem underlying training-time attacks on graph neural networks for node classification that perturb the discrete graph structure, essentially treating the graph as a hyperparameter to optimize.
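The meta-gradient is the derivative of the post-training attack loss with respect to the graph, obtained by differentiating through the training procedure itself. As a hedged illustration only, that quantity can be approximated by central finite differences on a relaxed edge weight of a toy linear surrogate (all names hypothetical; the paper backpropagates through training rather than using finite differences):

```python
import numpy as np

def train_then_loss(A, X, y, steps=50, lr=0.1):
    """Train a one-layer linear surrogate W on H = A @ X by gradient
    descent, then return the post-training loss -- a function of A."""
    rng = np.random.default_rng(0)          # fixed init so loss depends only on A
    W = 0.1 * rng.normal(size=(X.shape[1], 1))
    H = A @ X
    for _ in range(steps):
        W -= lr * H.T @ (H @ W - y) / len(y)
    return float(np.mean((H @ W - y) ** 2))

def meta_grad_edge(A, X, y, i, j, eps=1e-3):
    """Central-difference estimate of the meta-gradient of the
    post-training loss w.r.t. a relaxed symmetric edge weight A[i, j]."""
    Ap, Am = A.copy(), A.copy()
    Ap[i, j] += eps; Ap[j, i] += eps
    Am[i, j] -= eps; Am[j, i] -= eps
    return (train_then_loss(Ap, X, y) - train_then_loss(Am, X, y)) / (2 * eps)

A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], float)  # toy path graph
X, y = np.eye(3), np.array([[1.0], [0.0], [1.0]])
g = meta_grad_edge(A, X, y, 0, 2)   # how inserting edge (0, 2) shifts the loss
```

An attacker greedily flips the edge whose meta-gradient most increases the post-training loss, which is what "treating the graph as a hyperparameter" amounts to.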
Detection and Defense of Topological Adversarial Attacks on Graphs
TLDR
This work proposes a straightforward single-node threshold test and describes a kernel-based two-sample test for detecting whether a given subset of nodes within a graph has been maliciously corrupted, a first step towards detecting adversarial attacks against graph models.
Exploratory Adversarial Attacks on Graph Neural Networks
  • Xixun Lin, Chuan Zhou, Bin Wang
  • Computer Science
    2020 IEEE International Conference on Data Mining (ICDM)
  • 2020
TLDR
A novel exploratory adversarial attack to boost the gradient-based perturbations on graphs, called EpoAtk, which significantly outperforms the state-of-the-art attacks with the same attack budgets.
Node Copying for Protection Against Graph Neural Network Topology Attacks
TLDR
This work proposes an algorithm that uses node copying to mitigate the degradation in classification that is caused by adversarial attacks, which is applied only after the model for the downstream task is trained and the added computation cost scales well for large graphs.

References

SHOWING 1-10 OF 54 REFERENCES
Adversarial Attack on Graph Structured Data
TLDR
This paper proposes a reinforcement learning based attack method that learns the generalizable attack policy, while only requiring prediction labels from the target classifier, and uses both synthetic and real-world data to show that a family of Graph Neural Network models are vulnerable to adversarial attacks.
Adversarial Attacks on Graph Neural Networks via Meta Learning
TLDR
The core principle is to use meta-gradients to solve the bilevel problem underlying training-time attacks on graph neural networks for node classification that perturb the discrete graph structure, essentially treating the graph as a hyperparameter to optimize.
Adversarial Network Embedding
TLDR
An Adversarial Network Embedding (ANE) framework is proposed, which leverages the adversarial learning principle to regularize the representation learning and is competitive with or superior to state-of-the-art approaches on benchmark network embedding tasks.
Certifiable Robustness and Robust Training for Graph Convolutional Networks
  • D. Zügner, Stephan Günnemann
  • Computer Science
    Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining
  • 2019
TLDR
This work proposes the first method for certifying the (non-)robustness of graph convolutional networks with respect to perturbations of the node attributes, along with a robust semi-supervised training procedure that treats the labeled and unlabeled nodes jointly.
Practical Attacks Against Graph-based Clustering
TLDR
This work designs and evaluates two novel graph attacks against a state-of-the-art network-level, graph-based detection system, highlighting areas in adversarial machine learning that have not yet been addressed: specifically, graph-based clustering techniques, and a global feature space where realistic attackers without perfect knowledge must be accounted for in order to be practical.
Deep Gaussian Embedding of Graphs: Unsupervised Inductive Learning via Ranking
TLDR
Graph2Gauss is proposed - an approach that can efficiently learn versatile node embeddings on large scale (attributed) graphs that show strong performance on tasks such as link prediction and node classification and the benefits of modeling uncertainty are demonstrated.
The Limitations of Deep Learning in Adversarial Settings
TLDR
This work formalizes the space of adversaries against deep neural networks (DNNs) and introduces a novel class of algorithms to craft adversarial samples based on a precise understanding of the mapping between inputs and outputs of DNNs.
Data Poisoning Attacks on Multi-Task Relationship Learning
TLDR
This paper focuses on multi-task relationship learning (MTRL) models, a popular subclass of MTL models where task relationships are quantized and are learned directly from training data and proposes an efficient algorithm called PATOM for computing optimal attack strategies.
Inductive Representation Learning on Large Graphs
TLDR
GraphSAGE is presented, a general, inductive framework that leverages node feature information (e.g., text attributes) to efficiently generate node embeddings for previously unseen data and outperforms strong baselines on three inductive node-classification benchmarks.
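GraphSAGE's mean aggregator can be sketched in a few lines: each node combines its own projected features with the mean of its neighbors' features, so the learned weights transfer to nodes unseen during training. A minimal sketch (names hypothetical; the original formulation concatenates the self and neighbor terms, while summing two separate projections is a common equivalent-capacity variant):

```python
import numpy as np

def sage_mean_layer(adj, H, W_self, W_neigh):
    """One GraphSAGE layer with the mean aggregator:
    h_v' = ReLU(h_v W_self + mean_{u in N(v)} h_u W_neigh)."""
    out = np.zeros((H.shape[0], W_self.shape[1]))
    for v, neighbors in adj.items():
        agg = H[neighbors].mean(axis=0) if neighbors else np.zeros(H.shape[1])
        out[v] = np.maximum(H[v] @ W_self + agg @ W_neigh, 0.0)
    return out

adj = {0: [1], 1: [0, 2], 2: [1]}          # toy path graph as adjacency lists
rng = np.random.default_rng(0)
H = rng.normal(size=(3, 4))                # node features
Z = sage_mean_layer(adj, H, rng.normal(size=(4, 2)), rng.normal(size=(4, 2)))
```

Because the weights act on features rather than on node identities, the same layer can embed a previously unseen node given its features and neighbor list — the inductive property the summary refers to.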
node2vec: Scalable Feature Learning for Networks
TLDR
In node2vec, an algorithmic framework for learning continuous feature representations for nodes in networks, a flexible notion of a node's network neighborhood is defined and a biased random walk procedure is designed, which efficiently explores diverse neighborhoods.
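The biased second-order walk in node2vec interpolates between BFS-like and DFS-like exploration via a return parameter p and an in-out parameter q. A minimal sketch of one such walk (names hypothetical):

```python
import random

def node2vec_walk(adj, start, length, p=1.0, q=1.0, rng=None):
    """One second-order biased random walk. Unnormalized weight of
    stepping from v to x, having arrived at v from t:
      1/p if x == t (return), 1 if x neighbors t (BFS-like), else 1/q (DFS-like)."""
    rng = rng or random.Random(0)
    walk = [start]
    while len(walk) < length:
        v, nbrs = walk[-1], adj[walk[-1]]
        if not nbrs:                        # dead end: stop early
            break
        if len(walk) == 1:                  # first step: uniform choice
            walk.append(rng.choice(nbrs))
            continue
        t = walk[-2]
        w = [1.0 / p if x == t else (1.0 if x in adj[t] else 1.0 / q)
             for x in nbrs]
        walk.append(rng.choices(nbrs, weights=w)[0])
    return walk

walk = node2vec_walk({0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}, 0, 6, p=0.25, q=4.0)
```

With small p the walk tends to backtrack (local, BFS-like neighborhoods); with small q it pushes outward (DFS-like exploration). The walks are then fed to a skip-gram objective to learn the embeddings.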