Corpus ID: 237355107

Blindfolded Attackers Still Threatening: Strict Black-Box Adversarial Attacks on Graphs

@inproceedings{Xu2020BlindfoldedAS,
  title={Blindfolded Attackers Still Threatening: Strict Black-Box Adversarial Attacks on Graphs},
  author={Jiarong Xu and Yizhou Sun and Xin Jiang and Yanhao Wang and Yang Yang and Chunping Wang and Jiangang Lu},
  year={2020}
}
Adversarial attacks on graphs have attracted considerable research interest. Existing works assume the attacker is either (partly) aware of the victim model or able to send queries to it. These assumptions are, however, unrealistic. To bridge the gap between theoretical graph attacks and real-world scenarios, in this work we propose a novel and more realistic setting: the strict black-box graph attack, in which the attacker has no knowledge about the victim model at all and is not allowed to…
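The strict black-box setting rules out both gradient access and queries, so an attacker can act only on the graph itself. As a hedged illustration of that constraint (not the paper's actual algorithm), the sketch below scores candidate edge flips by their first-order effect on the leading adjacency eigenvalues, a purely structural signal computable without touching any model:

```python
import numpy as np

def strict_blackbox_flips(adj, budget, k_eigs=1):
    """Pick edge flips using only the graph: flipping (i, j) perturbs A by
    +/-(e_i e_j^T + e_j e_i^T), so first-order perturbation theory gives
    delta(lambda) ~= 2 * u[i] * u[j] for each eigenvector u of A."""
    n = adj.shape[0]
    _, vecs = np.linalg.eigh(adj)            # A assumed symmetric (undirected)
    scores = np.zeros((n, n))
    for t in range(1, k_eigs + 1):
        u = vecs[:, -t]                      # t-th leading eigenvector
        scores += 2.0 * np.abs(np.outer(u, u))
    iu, ju = np.triu_indices(n, k=1)         # candidate pairs, no self-loops
    order = np.argsort(-scores[iu, ju])[:budget]
    perturbed = adj.copy()
    for i, j in zip(iu[order], ju[order]):
        perturbed[i, j] = perturbed[j, i] = 1 - perturbed[i, j]
    return perturbed
```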

References

Showing 1–10 of 36 references
A Restricted Black-Box Adversarial Framework Towards Attacking Graph Embedding Models
TLDR
This paper investigates the theoretical connections between graph signal processing and graph embedding models in a principled way and formulates a generalized adversarial attacker, GF-Attack, which is constructed from the graph filter and the feature matrix and validated on several benchmark datasets.
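GF-Attack's restricted setting likewise forgoes the victim's parameters. A minimal sketch in its spirit, assuming only a K-hop symmetrically normalized filter and the public feature matrix (function names here are illustrative, not from the paper's code), scores each candidate flip by how far it moves the filtered features:

```python
import numpy as np

def norm_adj(adj):
    # symmetric normalization with self-loops: D^{-1/2} (A + I) D^{-1/2}
    a = adj + np.eye(adj.shape[0])
    d = 1.0 / np.sqrt(a.sum(1))
    return a * d[:, None] * d[None, :]

def gf_style_scores(adj, feats, candidates, K=2):
    """Score each candidate edge flip by how much it moves the output of a
    K-hop graph filter applied to the features (no victim model queried)."""
    base = np.linalg.matrix_power(norm_adj(adj), K) @ feats
    scores = {}
    for (i, j) in candidates:
        pert = adj.copy()
        pert[i, j] = pert[j, i] = 1 - pert[i, j]
        out = np.linalg.matrix_power(norm_adj(pert), K) @ feats
        scores[(i, j)] = np.linalg.norm(out - base)   # Frobenius distance
    return scores
```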
Adversarial Attacks and Defenses on Graphs: A Review and Empirical Study
TLDR
A comprehensive overview of existing graph adversarial attacks and countermeasures is provided; existing attacks and defenses are categorized, and the corresponding state-of-the-art methods are reviewed.
Attack Graph Convolutional Networks by Adding Fake Nodes
TLDR
This paper proposes a new type of "fake node attack" against GCNs that adds malicious fake nodes, which is much more realistic than previous attacks: in social network applications, the attacker only needs to register a set of fake accounts and link them to existing ones.
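The fake-node mechanism is simple to sketch: enlarge the adjacency and feature matrices with attacker-controlled rows and link the new accounts to existing targets. How to choose those links and features is the paper's actual contribution and is not reproduced here; the helper below only shows the graph surgery:

```python
import numpy as np

def add_fake_nodes(adj, feats, links, fake_feats):
    """Append attacker-controlled nodes: `links[k]` lists the existing node
    indices that fake node k connects to; `fake_feats` holds its features."""
    n, m = adj.shape[0], len(links)
    new_adj = np.zeros((n + m, n + m), dtype=adj.dtype)
    new_adj[:n, :n] = adj
    for k, targets in enumerate(links):
        for t in targets:
            new_adj[n + k, t] = new_adj[t, n + k] = 1
    return new_adj, np.vstack([feats, fake_feats])
```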
Adversarial Examples for Graph Data: Deep Insights into Attack and Defense
TLDR
This paper proposes both attack and defense techniques for adversarial attacks on graphs and shows that the discreteness problem can be resolved by introducing integrated gradients, which accurately reflect the effect of perturbing certain features or edges while still benefiting from parallel computation.
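For intuition, here is a hedged PyTorch sketch of integrated gradients over adjacency entries; `model(adj, feats)` stands in for any differentiable GNN surrogate (an assumption, not the paper's code), and a zero baseline approximates the effect of deleting each edge:

```python
import torch

def integrated_gradients_edges(model, adj, feats, labels, steps=20):
    """Integrated gradients of the training loss w.r.t. adjacency entries,
    averaging gradients along a straight path from a zero baseline to the
    observed adjacency; for binary entries this scores edge removals."""
    baseline = torch.zeros_like(adj)
    total = torch.zeros_like(adj)
    for s in range(1, steps + 1):
        interp = baseline + (adj - baseline) * (s / steps)
        interp.requires_grad_(True)
        loss = torch.nn.functional.cross_entropy(model(interp, feats), labels)
        total += torch.autograd.grad(loss, interp)[0]
    return (adj - baseline) * total / steps   # IG attribution per entry
```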
Adversarial Attacks on Neural Networks for Graph Data
TLDR
This work introduces the first study of adversarial attacks on attributed graphs, specifically focusing on models exploiting ideas of graph convolutions, and generates adversarial perturbations targeting the node's features and the graph structure, taking the dependencies between instances into account.
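This attack (Nettack) scores perturbations against a linearized two-layer GCN surrogate. A minimal sketch of that surrogate's classification margin, with `W` an assumed pretrained weight matrix, shows the quantity the attack tries to drive below zero:

```python
import numpy as np

def surrogate_margin(adj, feats, W, target, true_class):
    """Linearized surrogate: logits = Ahat^2 X W (nonlinearities dropped);
    the target node is misclassified once its margin turns negative."""
    a = adj + np.eye(adj.shape[0])
    d = 1.0 / np.sqrt(a.sum(1))
    ahat = a * d[:, None] * d[None, :]
    logits = ahat @ ahat @ feats @ W
    others = np.delete(logits[target], true_class)
    return logits[target, true_class] - others.max()
```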
All You Need Is Low (Rank): Defending Against Adversarial Attacks on Graphs
TLDR
This paper explores the properties of Nettack perturbations and proposes LowBlow, a low-rank adversarial attack that affects the classification performance of both GCN and tensor-based node embeddings; it is shown that the low-rank attack is noticeable, and that making it unnoticeable results in a high-rank attack.
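The defensive counterpart of this low-rank observation is easy to sketch: project the adjacency onto its top singular components, which tends to filter out high-rank Nettack-style perturbations. The rank and re-binarization threshold below are illustrative choices, not values from the paper:

```python
import numpy as np

def low_rank_preprocess(adj, rank=10):
    """Truncated-SVD cleaning: keep only the top singular components of the
    (possibly poisoned) adjacency, then re-binarize."""
    u, s, vt = np.linalg.svd(adj)
    approx = (u[:, :rank] * s[:rank]) @ vt[:rank, :]
    return (approx > 0.5).astype(adj.dtype)
```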
Adversarial Attacks on Node Embeddings via Graph Poisoning
TLDR
This work provides the first adversarial vulnerability analysis of the widely used family of methods based on random walks, deriving efficient adversarial perturbations that poison the network structure and degrade both the quality of the embeddings and the downstream tasks.
Adversarial Attack on Graph Structured Data
  • H. Dai, Hui Li, +4 authors Le Song
  • Computer Science, Mathematics
  • ICML
  • 2018
TLDR
This paper proposes a reinforcement learning based attack method that learns a generalizable attack policy while requiring only prediction labels from the target classifier, and uses both synthetic and real-world data to show that a family of graph neural network models is vulnerable to adversarial attacks.
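A toy version of such a label-only attacker can be written as REINFORCE over single-edge flips; this is far simpler than the paper's structured Q-learning agent, and `query` is a hypothetical oracle returning only the victim's predicted labels, matching the access model described above:

```python
import numpy as np

def reinforce_edge_attack(adj, target, true_label, query, candidates,
                          episodes=200, lr=0.1):
    """REINFORCE over candidate edge flips; reward is 1 iff the flip makes
    the victim mislabel the target node."""
    logits = np.zeros(len(candidates))
    for _ in range(episodes):
        probs = np.exp(logits - logits.max())
        probs /= probs.sum()
        a = np.random.choice(len(candidates), p=probs)
        i, j = candidates[a]
        pert = adj.copy()
        pert[i, j] = pert[j, i] = 1 - pert[i, j]
        reward = 1.0 if query(pert)[target] != true_label else 0.0
        grad = -probs                      # d log pi(a) / d logits ...
        grad[a] += 1.0                     # ... for a softmax policy
        logits += lr * reward * grad
    return candidates[int(np.argmax(logits))]
```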
Topology Attack and Defense for Graph Neural Networks: An Optimization Perspective
TLDR
A novel gradient-based attack method is presented that overcomes the difficulty of tackling discrete graph data; the accompanying optimization-based defense yields higher robustness against both gradient-based and greedy attack methods without sacrificing classification accuracy on the original graph.
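The optimization perspective becomes concrete with a projected-gradient sketch: relax binary edge flips to a continuous variable s in [0, 1], ascend the attack loss, and project back onto the flip budget. The rescaling projection below is a crude stand-in for the paper's bisection step, and `model(adj, feats)` is again an assumed differentiable victim or surrogate:

```python
import torch

def pgd_topology_attack(model, adj, feats, labels, budget,
                        steps=100, lr=0.01):
    """PGD over a relaxed flip variable s: the attacked adjacency is
    A + s * (1 - 2A), so s -> 1 flips the corresponding entry."""
    s = torch.zeros_like(adj, requires_grad=True)
    for _ in range(steps):
        pert = adj + s * (1 - 2 * adj)
        loss = torch.nn.functional.cross_entropy(model(pert, feats), labels)
        grad = torch.autograd.grad(loss, s)[0]
        with torch.no_grad():
            s += lr * grad                 # ascend the attack loss
            s.clamp_(0, 1)
            if s.sum() > budget:           # crude projection onto the
                s *= budget / s.sum()      # L1 flip budget
    return torch.bernoulli(s)              # 0/1 matrix of sampled flips
```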
Fast Gradient Attack on Network Embedding
TLDR
A framework to generate adversarial networks based on the gradient information of a Graph Convolutional Network (GCN) is proposed; the proposed FGA outperforms baseline methods, i.e., the network embedding can be easily disturbed by rewiring only a few links, achieving state-of-the-art attack performance.
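FGA's core step fits in a few lines: one backward pass through a GCN-style model gives a gradient for every potential edge, and the single most promising flip is applied. The `model(adj, feats)` signature is an assumption, and a full attack would repeat this step greedily up to a budget:

```python
import torch

def fast_gradient_flip(model, adj, feats, labels):
    """One FGA-style step: flip the adjacency entry whose gradient promises
    the largest loss increase (positive gradient on a non-edge -> add it;
    negative gradient on an edge -> remove it)."""
    a = adj.clone().requires_grad_(True)
    loss = torch.nn.functional.cross_entropy(model(a, feats), labels)
    grad = torch.autograd.grad(loss, a)[0]
    grad = (grad + grad.T) / 2                   # keep the flip symmetric
    gain = torch.where(adj == 0, grad, -grad)    # loss gain from flipping
    gain.fill_diagonal_(float("-inf"))           # forbid self-loops
    i, j = divmod(int(torch.argmax(gain)), adj.shape[0])
    perturbed = adj.clone()
    perturbed[i, j] = perturbed[j, i] = 1 - perturbed[i, j]
    return perturbed, (i, j)
```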