Corpus ID: 52917305

Link Prediction Adversarial Attack

@article{Chen2018LinkPA,
  title={Link Prediction Adversarial Attack},
  author={Jinyin Chen and Ziqiang Shi and Yangyang Wu and Xuanheng Xu and Haibin Zheng},
  journal={ArXiv},
  year={2018},
  volume={abs/1810.01110}
}
Deep neural networks have shown remarkable performance in solving computer vision and some graph-related tasks, such as node classification and link prediction. However, the vulnerability of deep models has also been revealed by carefully designed adversarial examples generated by various adversarial attack methods. With the wider application of deep models in complex network analysis, in this paper we define and formulate the link prediction adversarial attack problem and put forward a novel…
Citations

Can Adversarial Network Attack be Defended?
This paper proposes novel adversarial training strategies to improve the defensibility of GNNs against attacks, analytically investigates the robustness properties granted to GNNs by smooth defense, and proposes two special smooth defense strategies: smoothing distillation and a smoothing cross-entropy loss function.
Adversarial Attack and Defense on Graph Data: A Survey
This work systematically organizes the surveyed studies by the features of each topic and provides a unified formulation for adversarial learning on graph data that covers most adversarial learning studies on graphs.
Adversarial Attacks and Defenses on Graphs: A Review and Empirical Study
A comprehensive overview of existing graph adversarial attacks and their countermeasures, which categorizes existing attacks and defenses and reviews the corresponding state-of-the-art methods.
Adversarial Attack on Hierarchical Graph Pooling Neural Networks
This paper proposes an adversarial attack framework targeting Hierarchical Graph Pooling (HGP) neural networks, advanced GNNs with very good graph classification accuracy, and designs a surrogate model consisting of convolutional and pooling operators to generate adversarial samples that fool hierarchical GNN-based graph classification models.
GraphAttacker: A General Multi-Task GraphAttack Framework
The results show that GraphAttacker achieves state-of-the-art attack performance on the graph analysis tasks of node classification, graph classification, and link prediction; a novel Similarity Modification Rate (SMR) is proposed to quantify the similarity between nodes and thus constrain the attack budget.
Survey on graph embeddings and their applications to machine learning problems on graphs
This survey describes the core concepts of graph embeddings, provides several taxonomies for their description, presents an in-depth analysis of models based on network types, and overviews a wide range of applications to machine learning problems on graphs.
A Game-Theoretic Algorithm for Link Prediction
A new quasi-local approach is proposed (i.e., one that considers nodes within some radius k) that combines generalised group closeness centrality with semivalue interaction indices and achieves very good results even when given a suboptimal radius k as a parameter.
N2VSCDNNR: A Local Recommender System Based on Node2vec and Rich Information Network
A novel clustering recommender system based on node2vec and a rich information network, namely N2VSCDNNR, is proposed to solve the data sparsity problem in the network, with a two-phase scheme to realize personalized recommendation of items for each user.
Auditing the Sensitivity of Graph-based Ranking with Visual Analytics
A visual analytics framework for explaining and exploring the sensitivity of any graph-based ranking algorithm by performing perturbation-based what-if analysis.
Graph Ranking Auditing: Problem Definition and Fast Solutions
This paper proposes to audit graph ranking by finding the influential graph elements (e.g., edges, nodes, attributes, and subgraphs) regarding their impact on the ranking results, and formulates the graph ranking auditing problem as quantifying the influence of graph elements on those results.

References

Showing 1-10 of 53 references
Fast Gradient Attack on Network Embedding
A framework is proposed to generate adversarial networks based on the gradient information in a Graph Convolutional Network (GCN); the proposed FGA outperforms several baseline methods, as the network embedding can be easily disturbed by rewiring only a few links, achieving state-of-the-art attack performance.
Adversarial Attack on Graph Structured Data
  • H. Dai, Hui Li, +4 authors Le Song
  • Computer Science, Mathematics
  • ICML
  • 2018
This paper proposes a reinforcement-learning-based attack method that learns a generalizable attack policy while requiring only prediction labels from the target classifier, and uses both synthetic and real-world data to show that a family of graph neural network models is vulnerable to adversarial attacks.
Adversarial Attacks on Neural Networks for Graph Data
This work introduces the first study of adversarial attacks on attributed graphs, specifically focusing on models exploiting ideas of graph convolutions, and generates adversarial perturbations targeting a node's features and the graph structure, taking the dependencies between instances into account.
Adversarial Attacks on Node Embeddings
This work provides the first adversarial vulnerability analysis of the widely used family of methods based on random walks, deriving efficient adversarial perturbations that poison the network structure and negatively affect both the quality of the embeddings and the downstream tasks.
Adversarial Attacks on Node Embeddings via Graph Poisoning
This work provides the first adversarial vulnerability analysis of the widely used family of methods based on random walks, deriving efficient adversarial perturbations that poison the network structure and negatively affect both the quality of the embeddings and the downstream tasks.
Explaining and Harnessing Adversarial Examples
It is argued that the primary cause of neural networks' vulnerability to adversarial perturbation is their linear nature; this view is supported by new quantitative results and gives the first explanation of the most intriguing fact about adversarial examples: their generalization across architectures and training sets.
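The attack this paper introduces, the fast gradient sign method (FGSM), perturbs an input by a small step in the direction of the sign of the loss gradient. A minimal sketch follows, applied to a plain logistic-regression "model" so the gradient can be written analytically; the function name and the toy weights are illustrative assumptions, not the paper's code.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, y, w, eps):
    """One FGSM step against p = sigmoid(w . x).

    For cross-entropy loss, the gradient w.r.t. the input x is (p - y) * w,
    so the attack adds eps * sign of that gradient.
    """
    p = sigmoid(w @ x)
    grad_x = (p - y) * w
    return x + eps * np.sign(grad_x)

# Toy demo: a point confidently classified as class 1 (w @ x = 1.0)
# is pushed toward the decision boundary and across it.
w = np.array([1.0, -2.0])
x = np.array([2.0, 0.5])
x_adv = fgsm_perturb(x, y=1.0, w=w, eps=0.5)
# w @ x_adv = -0.5, so the perturbed point is now classified as class 0.
```

Even this two-dimensional example shows the linearity argument: the perturbation per coordinate is tiny, but its inner product with w moves the logit a long way.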
DeepFool: A Simple and Accurate Method to Fool Deep Neural Networks
The DeepFool algorithm is proposed to efficiently compute perturbations that fool deep networks and thus reliably quantify the robustness of these classifiers; it outperforms recent methods at computing adversarial perturbations and at making classifiers more robust.
node2vec: Scalable Feature Learning for Networks
node2vec is an algorithmic framework for learning continuous feature representations for nodes in networks; it defines a flexible notion of a node's network neighborhood and designs a biased random walk procedure that efficiently explores diverse neighborhoods.
Intriguing properties of neural networks
It is found that there is no distinction between individual high-level units and random linear combinations of high-level units according to various methods of unit analysis, suggesting that it is the space, rather than the individual units, that contains the semantic information in the high layers of neural networks.
The Impact of Unlinkability on Adversarial Community Detection: Effects and Countermeasures
It is shown that a privacy-conscious community can substantially disrupt community detection using only local knowledge, even in the face of the asymmetry posed by a completely knowledgeable mobile adversary.