Adversarial Examples for Graph Data: Deep Insights into Attack and Defense

@inproceedings{Wu2019AdversarialEF,
  title={Adversarial Examples for Graph Data: Deep Insights into Attack and Defense},
  author={Huijun Wu and Chen Wang and Y. Tyshetskiy and A. Docherty and Kai Lu and Liming Zhu},
  booktitle={IJCAI},
  year={2019}
}
Graph deep learning models, such as graph convolutional networks (GCNs), achieve remarkable performance on tasks over graph data. Like other types of deep models, graph deep learning models often suffer from adversarial attacks. However, compared with non-graph data, the discrete features, graph connections, and differing definitions of imperceptible perturbation bring unique challenges and opportunities to adversarial attacks and defenses on graph data. In this paper, we propose both…
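The abstract is truncated above, but this paper is widely associated with pairing a gradient-guided attack with a simple preprocessing defense: dropping edges whose endpoints have very dissimilar features, on the observation that adversarial edges tend to connect unlike nodes. Below is a minimal sketch of that defense, assuming binary node features and a symmetric SciPy sparse adjacency matrix; the function names and the default threshold are illustrative, not taken from the paper.

import numpy as np
import scipy.sparse as sp

def jaccard_similarity(x_u, x_v):
    # Jaccard similarity between two binary feature vectors.
    intersection = np.count_nonzero(x_u * x_v)
    union = np.count_nonzero(x_u + x_v)
    return intersection / union if union > 0 else 0.0

def drop_dissimilar_edges(adj, features, threshold=0.01):
    # Return a copy of `adj` with edges between dissimilar nodes removed.
    adj = adj.tolil(copy=True)
    rows, cols = adj.nonzero()
    for u, v in zip(rows, cols):
        # Visit each undirected edge once; remove it symmetrically.
        if u < v and jaccard_similarity(features[u], features[v]) < threshold:
            adj[u, v] = 0
            adj[v, u] = 0
    return adj.tocsr()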
Adversarial Attacks on Deep Graph Matching
TLDR: An adversarial attack model with two novel techniques for perturbing the graph structure and degrading the quality of deep graph matching is proposed, and a meta-learning-based projected gradient descent method is developed to improve the search for effective perturbations.
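Setting the meta-learning component aside, the projected-gradient portion of such structure attacks follows a standard recipe: relax binary edge flips to a continuous perturbation matrix, ascend the attack loss, and project back onto the edit budget. A generic sketch under those assumptions (not the paper's exact method; `model`, `loss_fn`, and a dense torch adjacency are stand-ins, and symmetry handling is omitted for brevity):

import torch

def pgd_structure_attack(model, adj, loss_fn, budget, steps=100, lr=0.1):
    n = adj.shape[0]
    # s[i, j] in [0, 1] is the (relaxed) decision to flip edge (i, j).
    s = torch.zeros(n, n, requires_grad=True)
    for _ in range(steps):
        perturbed = adj + s * (1 - 2 * adj)   # flips 0->1 and 1->0
        loss = loss_fn(model(perturbed))
        grad, = torch.autograd.grad(loss, s)
        with torch.no_grad():
            s += lr * grad                    # gradient *ascent*
            s.clamp_(0, 1)
            if s.sum() > budget:              # crude budget projection
                s *= budget / s.sum()
    # Discretize: keep only the `budget` most promising flips.
    flips = torch.topk(s.detach().flatten(), budget).indices
    mask = torch.zeros(n * n)
    mask[flips] = 1.0
    return adj + mask.view(n, n) * (1 - 2 * adj)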
Adversarial Attack and Defense on Graph Data: A Survey
TLDR: This work systematically organizes the surveyed studies by the features of each topic and provides a unified formulation for adversarial learning on graph data that covers most adversarial learning studies on graphs.
Detection and Defense of Topological Adversarial Attacks on Graphs
TLDR: This work proposes a straightforward single-node threshold test and a kernel-based two-sample test for detecting whether a given subset of nodes within a graph has been maliciously corrupted, a first step toward detecting adversarial attacks against graph models.
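The kernel-based two-sample test referred to here is, in spirit, a maximum mean discrepancy (MMD) test between the features of a suspect node subset and those of a clean reference set. The sketch below is a standard RBF-kernel MMD with a permutation test, not necessarily the exact statistic used in the paper:

import numpy as np

def rbf_kernel(a, b, gamma=1.0):
    sq = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

def mmd2(x, y, gamma=1.0):
    # Biased estimate of the squared MMD between samples x and y.
    return (rbf_kernel(x, x, gamma).mean()
            + rbf_kernel(y, y, gamma).mean()
            - 2 * rbf_kernel(x, y, gamma).mean())

def mmd_permutation_test(x, y, n_perm=1000, gamma=1.0, seed=0):
    # p-value for H0: x and y come from the same distribution.
    rng = np.random.default_rng(seed)
    observed = mmd2(x, y, gamma)
    pooled = np.vstack([x, y])
    count = 0
    for _ in range(n_perm):
        perm = rng.permutation(len(pooled))
        px, py = pooled[perm[:len(x)]], pooled[perm[len(x):]]
        count += mmd2(px, py, gamma) >= observed
    return (count + 1) / (n_perm + 1)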
NetFense: Adversarial Defenses against Privacy Attacks on Neural Networks for Graph Data
TLDR: This work proposes a novel research task, adversarial defenses against GNN-based privacy attacks, and presents a graph-perturbation-based approach, NetFense, that keeps the perturbations to the graph data unnoticeable while reducing the prediction confidence of targeted label classification.
Exploratory Adversarial Attacks on Graph Neural Networks
  • Xixun Lin, Chuan Zhou, +4 authors Bin Wang
  • Computer Science
  • 2020 IEEE International Conference on Data Mining (ICDM)
  • 2020
TLDR: A novel exploratory adversarial attack, EpoAtk, boosts gradient-based perturbations on graphs and significantly outperforms state-of-the-art attacks under the same attack budgets.
Enhancing Robustness of Graph Convolutional Networks via Dropping Graph Connections
TLDR: This paper designs a biased graph-sampling scheme that drops graph connections so that random, sparse, and deformed subgraphs are constructed for training and inference; this strongly regularizes graph learning, reduces sensitivity to edge manipulations, and thus enhances the robustness of GCNs.
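As a rough illustration of this recipe (the paper's exact sampling bias may differ), the sketch below drops each edge with a probability that grows with the degrees of its endpoints, producing a random sparse subgraph per training epoch:

import numpy as np
import scipy.sparse as sp

def biased_drop_edges(adj, keep_prob=0.7, seed=0):
    # adj: symmetric binary CSR adjacency. Returns a sampled subgraph.
    rng = np.random.default_rng(seed)
    deg = np.asarray(adj.sum(axis=1)).ravel()
    rows, cols = sp.triu(adj, k=1).nonzero()  # each undirected edge once
    # Edges between high-degree nodes are dropped more often.
    weight = 1.0 / np.sqrt(deg[rows] * deg[cols])
    p_keep = keep_prob * weight / weight.max()
    kept = rng.random(len(rows)) < p_keep
    new = sp.coo_matrix((np.ones(kept.sum()), (rows[kept], cols[kept])),
                        shape=adj.shape)
    return (new + new.T).tocsr()              # re-symmetrize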
A Survey of Adversarial Learning on Graph
Deep learning models on graphs have achieved remarkable performance in various graph analysis tasks, e.g., node classification, link prediction, and graph clustering. However, they expose uncertainty…
Adversarial Attacks and Defenses on Graphs: A Review and Empirical Study
TLDR: A comprehensive overview of existing graph adversarial attacks and the countermeasures is provided; existing attacks and defenses are categorized, and the corresponding state-of-the-art methods are reviewed.
GraphDefense: Towards Robust Graph Convolutional Networks
TLDR: This paper proposes a method called GraphDefense to defend against adversarial perturbations of graph convolutional networks and shows that, with careful design, the proposed algorithm scales to large graphs such as the Reddit dataset.
A Hard Label Black-box Adversarial Attack Against Graph Neural Networks
  • Jiaming Mu, Binghui Wang, Qi Li, Kun Sun, Mingwei Xu, Zhuotao Liu
  • Computer Science
  • ArXiv
  • 2021
Graph Neural Networks (GNNs) have achieved state-of-the-art performance in various graph-structure-related tasks such as node classification and graph classification. However, GNNs are vulnerable to…

References

Showing 1-10 of 34 references
Adversarial Attack on Graph Structured Data
  • H. Dai, Hui Li, +4 authors Le Song
  • Computer Science, Mathematics
  • ICML
  • 2018
TLDR: This paper proposes a reinforcement-learning-based attack method that learns a generalizable attack policy while requiring only prediction labels from the target classifier, and uses both synthetic and real-world data to show that a family of graph neural network models are vulnerable to adversarial attacks.
Adversarial Examples: Attacks and Defenses for Deep Learning
TLDR: The methods for generating adversarial examples for DNNs are summarized, a taxonomy of these methods is proposed, and three major challenges in adversarial examples are discussed along with potential solutions.
The Limitations of Deep Learning in Adversarial Settings
TLDR: This work formalizes the space of adversaries against deep neural networks (DNNs) and introduces a novel class of algorithms to craft adversarial samples based on a precise understanding of the mapping between inputs and outputs of DNNs.
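This is the paper behind the Jacobian-based saliency map approach (JSMA). A single greedy step of a simplified variant is sketched below, assuming `model` maps one flat input vector in [0, 1]^d to a vector of logits; the full attack iterates until the target class is reached or a distortion limit is hit:

import torch

def jsma_step(model, x, target, eps=0.1):
    # Jacobian of the logits w.r.t. the input: shape [classes, d].
    jac = torch.autograd.functional.jacobian(model, x)
    d_target = jac[target]
    d_others = jac.sum(dim=0) - d_target
    # Salient features push the target logit up and the others down.
    saliency = torch.where((d_target > 0) & (d_others < 0),
                           d_target * d_others.abs(),
                           torch.zeros_like(d_target))
    idx = saliency.argmax()
    x_adv = x.clone()
    x_adv[idx] = (x_adv[idx] + eps).clamp(0, 1)
    return x_adv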
Mitigating adversarial effects through randomization
TLDR: This paper proposes to utilize randomization at inference time to mitigate adversarial effects, using two randomization operations: random resizing, which resizes the input image to a random size, and random padding, which pads zeros around the input image in a random manner.
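Both operations are straightforward to reproduce. A minimal PyTorch sketch, assuming square inputs no larger than the output size (the concrete resolutions are illustrative, not the paper's):

import torch
import torch.nn.functional as F

def randomize_input(x, out_size=331):
    # x: [batch, channels, H, W] with H == W <= out_size.
    new_size = int(torch.randint(x.shape[-1], out_size + 1, (1,)))
    resized = F.interpolate(x, size=(new_size, new_size), mode="nearest")
    # Pad zeros back to a fixed size at a random offset.
    pad_total = out_size - new_size
    left = int(torch.randint(0, pad_total + 1, (1,)))
    top = int(torch.randint(0, pad_total + 1, (1,)))
    return F.pad(resized, (left, pad_total - left, top, pad_total - top))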
Feature Squeezing: Detecting Adversarial Examples in Deep Neural Networks
TLDR: Two feature squeezing methods are explored, reducing the color bit depth of each pixel and spatial smoothing; both are inexpensive, complementary to other defenses, and can be combined in a joint detection framework to achieve high detection rates against state-of-the-art attacks.
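Both squeezers, and the detection rule built on them, fit in a few lines. A sketch, assuming `predict_fn` returns a probability vector and images are float arrays in [0, 1] with shape [H, W, C]:

import numpy as np
from scipy.ndimage import median_filter

def reduce_bit_depth(x, bits=4):
    # Quantize pixel values to 2**bits levels.
    levels = 2 ** bits - 1
    return np.round(x * levels) / levels

def spatial_smooth(x, window=3):
    # Median smoothing over the spatial dimensions only.
    return median_filter(x, size=(window, window, 1))

def squeeze_score(predict_fn, x):
    # L1 gap between predictions on the original and squeezed inputs;
    # flag as adversarial if above a threshold chosen on validation data.
    p = predict_fn(x)
    gap_depth = np.abs(p - predict_fn(reduce_bit_depth(x))).sum()
    gap_smooth = np.abs(p - predict_fn(spatial_smooth(x))).sum()
    return max(gap_depth, gap_smooth)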
Ensemble Adversarial Training: Attacks and Defenses
TLDR: This work finds that adversarial training remains vulnerable to black-box attacks, where perturbations computed on undefended models transfer to the defended model, as well as to a powerful novel single-step attack that escapes the non-smooth vicinity of the input data via a small random step.
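The single-step attack referred to here is usually written as R+FGSM: a small random step to leave the non-smooth vicinity of the input, followed by a gradient-sign step. A minimal sketch:

import torch
import torch.nn.functional as F

def r_fgsm(model, x, y, eps=16/255, alpha=8/255):
    # Random prelude, then an FGSM step from the displaced point.
    x_r = (x + alpha * torch.randn_like(x).sign()).detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_r), y)
    loss.backward()
    x_adv = x_r + (eps - alpha) * x_r.grad.sign()
    return x_adv.clamp(0, 1).detach()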
Attack Graph Convolutional Networks by Adding Fake Nodes
TLDR: This paper proposes a new type of attack, the "fake node attack", which attacks GCNs by adding malicious fake nodes. This is much more realistic than previous attacks: in social network applications, the attacker only needs to register a set of fake accounts and link them to existing ones.
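Only the graph surgery for such an attack is sketched below; choosing the fake nodes' links and features is the attacker's optimization problem and is elided. The function name and signature are illustrative:

import numpy as np
import scipy.sparse as sp

def add_fake_nodes(adj, features, n_fake, links):
    # links: (fake_index, real_index) pairs chosen by the attacker.
    n = adj.shape[0]
    big = sp.lil_matrix((n + n_fake, n + n_fake))
    big[:n, :n] = adj
    for f, r in links:
        big[n + f, r] = 1
        big[r, n + f] = 1
    # Fake node features start at zero and are optimized by the attack.
    fake_feats = np.zeros((n_fake, features.shape[1]))
    return big.tocsr(), np.vstack([features, fake_feats])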
Deep k-Nearest Neighbors: Towards Confident, Interpretable and Robust Deep Learning
TLDR: The DkNN algorithm is evaluated on several datasets; its confidence estimates accurately identify inputs outside the model's training distribution, and the explanations provided by nearest neighbors are intuitive and useful in understanding model failures.
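The core lookup behind DkNN is simple; the full algorithm additionally calibrates neighbor agreement with conformal prediction, which the sketch below replaces with a plain agreement fraction. `layer_reps` stands in for per-layer training activations extracted with hooks:

import numpy as np
from sklearn.neighbors import NearestNeighbors

class SimpleDkNN:
    def __init__(self, layer_reps, train_labels, k=5):
        # One k-NN index per layer, fit on training representations.
        self.indexes = [NearestNeighbors(n_neighbors=k).fit(r)
                        for r in layer_reps]
        self.labels = train_labels

    def credibility(self, test_reps, predicted):
        # Fraction of neighbors, across all layers, agreeing with the
        # predicted label; low values signal suspicious inputs.
        agree, total = 0, 0
        for index, rep in zip(self.indexes, test_reps):
            _, nbrs = index.kneighbors(rep.reshape(1, -1))
            agree += (self.labels[nbrs[0]] == predicted).sum()
            total += nbrs.shape[1]
        return agree / total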
Deep Gaussian Embedding of Attributed Graphs: Unsupervised Inductive Learning via Ranking
TLDR: Graph2Gauss is proposed, an approach that can efficiently learn versatile node embeddings on large-scale (attributed) graphs and shows strong performance on tasks such as link prediction and node classification.
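Graph2Gauss embeds each node as a diagonal Gaussian and trains with a ranking loss so that, in KL divergence, 1-hop neighbors are closer than 2-hop neighbors, and so on. The closed-form dissimilarity it builds on (the ranking loss itself is omitted):

import torch

def kl_diag_gaussians(mu_p, var_p, mu_q, var_q):
    # KL(N(mu_p, diag(var_p)) || N(mu_q, diag(var_q))).
    return 0.5 * (torch.log(var_q / var_p).sum()
                  + (var_p / var_q).sum()
                  + ((mu_q - mu_p) ** 2 / var_q).sum()
                  - mu_p.numel())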
Explaining and Harnessing Adversarial Examples
TLDR: It is argued that the primary cause of neural networks' vulnerability to adversarial perturbation is their linear nature, supported by new quantitative results while giving the first explanation of the most intriguing fact about them: their generalization across architectures and training sets.
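This is the paper that introduced the fast gradient sign method (FGSM): a single step in the direction of the sign of the loss gradient, exploiting the near-linear behavior argued above. A minimal PyTorch sketch:

import torch
import torch.nn.functional as F

def fgsm(model, x, y, eps=8/255):
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    # One signed-gradient step, clipped back to the valid pixel range.
    return (x + eps * x.grad.sign()).clamp(0, 1).detach()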