Corpus ID: 218502486

Stealing Links from Graph Neural Networks

@article{He2021StealingLF,
  title={Stealing Links from Graph Neural Networks},
  author={Xinlei He and Jinyuan Jia and Michael Backes and Neil Zhenqiang Gong and Yang Zhang},
  journal={ArXiv},
  year={2021},
  volume={abs/2005.02131}
}
Graph data, such as social networks and chemical networks, contains a wealth of information that can help build powerful applications. To fully unleash the power of graph data, a family of machine learning models, namely graph neural networks (GNNs), has been introduced. Empirical results show that GNNs have achieved state-of-the-art performance in various tasks. Graph data is the key to the success of GNNs. High-quality graphs are expensive to collect and often contain sensitive information, such… 
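The intuition the paper exploits is that a GNN propagates information along edges, so two nodes that are linked tend to receive similar posteriors from the target model. Below is a minimal sketch of a similarity-thresholding link-stealing attack in that spirit; the cosine metric, the `infer_link` helper, and the fixed threshold are illustrative assumptions, not the paper's exact attacks, which are instantiated differently depending on the adversary's background knowledge.

```python
import numpy as np

def cosine_similarity(p, q):
    """Cosine similarity between two posterior (probability) vectors."""
    return float(np.dot(p, q) / (np.linalg.norm(p) * np.linalg.norm(q) + 1e-12))

def infer_link(posterior_u, posterior_v, threshold=0.9):
    """Guess that an edge (u, v) exists when the target GNN's posteriors for
    u and v are sufficiently similar; the threshold value is illustrative."""
    return cosine_similarity(posterior_u, posterior_v) >= threshold

# Toy example: posteriors over 3 classes, hypothetically queried from a target GNN.
p_u = np.array([0.85, 0.10, 0.05])
p_v = np.array([0.80, 0.15, 0.05])
print(infer_link(p_u, p_v))  # True under this illustrative threshold
```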
Model Extraction Attacks on Graph Neural Networks: Taxonomy and Realisation
TLDR
This paper comprehensively investigates and develops model extraction attacks against GNN models, systematically formalises the threat modelling in the context of GNN model extraction, and classifies the adversarial threats into seven categories according to the attacker's background knowledge.
Node-Level Membership Inference Attacks Against Graph Neural Networks
TLDR
This paper systematically defines the threat models and proposes three node-level membership inference attacks against graph neural networks based on an adversary's background knowledge, showing that GNNs are vulnerable to node-level membership inference even when the adversary has minimal background knowledge.
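For intuition, node-level membership inference commonly builds on the observation that a model is more confident on nodes it was trained on. The sketch below scores a node by the entropy of its posterior and applies a threshold; the helper names and the threshold are illustrative assumptions and not the specific attacks proposed in that work.

```python
import numpy as np

def membership_score(posterior):
    """Confidence-based membership signal: training nodes tend to receive
    more confident (lower-entropy) posteriors than non-member nodes."""
    p = np.clip(posterior, 1e-12, 1.0)
    entropy = -np.sum(p * np.log(p))
    return -entropy  # higher score -> more likely a training member

def is_member(posterior, threshold=-0.3):
    # The threshold is illustrative; a real attack would calibrate it,
    # e.g. with a shadow model, as in the usual membership-inference recipe.
    return membership_score(posterior) >= threshold

print(is_member(np.array([0.97, 0.02, 0.01])))  # confident posterior -> True
print(is_member(np.array([0.40, 0.35, 0.25])))  # uncertain posterior -> False
```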
Quantifying Privacy Leakage in Graph Embedding
TLDR
It is shown that the strong correlation between the graph embeddings and node attributes allows the adversary to infer sensitive information (e.g., gender or location) through three inference attacks targeting Graph Neural Networks.
Graph Unlearning
TLDR
GraphEraser is proposed, a novel machine unlearning method tailored to graph data that achieves up to 112% higher F1 score than the majority-vote aggregation and a learning-based aggregation method.
ML-Doctor: Holistic Risk Assessment of Inference Attacks Against Machine Learning Models
TLDR
The extensive experimental evaluation conducted over five model architectures and four datasets shows that the complexity of the training dataset plays an important role in the attacks' performance, and that the effectiveness of model stealing and membership inference attacks is negatively correlated.
Quantifying and Mitigating Privacy Risks of Contrastive Learning
TLDR
The first privacy analysis of contrastive learning through the lens of membership inference and attribute inference is performed, and it is shown that contrastive models trained on image datasets are less vulnerable to membership inference attacks but more vulnerable to attribute inference attacks compared to supervised models.
Dataset-Level Attribute Leakage in Collaborative Learning
TLDR
This work considers settings where each party obtains black-box access to the model computed by their mutually agreed-upon algorithm on their joined data, and shows that information about each party's data can leak to the other parties even when the model is computed with multi-party computation.
Label-Leaks: Membership Inference Attack with Label
TLDR
A systematic investigation of membership inference attacks when the target model only provides the predicted label; it focuses on two adversarial settings and proposes a different attack for each, namely a transfer-based attack and a perturbation-based attack.
LPGNet: Link Private Graph Networks for Node Classification
TLDR
A new neural network architecture called LPGNet is presented, which provides differential privacy (DP) guarantees for edges through a novel design for how the graph's edge structure is used during training, and offers consistently better privacy-utility tradeoffs than DpGCN.
Model Stealing Attacks Against Inductive Graph Neural Networks
TLDR
This paper systematically defines the threat model and proposes six attacks based on the adversary’s background knowledge and the responses of the target models, showing that the proposed model stealing attacks against GNNs achieve promising performance.
...

References

SHOWING 1-10 OF 115 REFERENCES
Backdoor Attacks to Graph Neural Networks
TLDR
This work proposes a subgraph-based backdoor attack against GNNs for graph classification, which predicts an attacker-chosen target label for a testing graph once a predefined subgraph is injected into the testing graph.
How Powerful are Graph Neural Networks?
TLDR
This work characterizes the discriminative power of popular GNN variants, such as Graph Convolutional Networks and GraphSAGE, shows that they cannot learn to distinguish certain simple graph structures, and develops a simple architecture that is provably the most expressive among the class of GNNs.
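For reference, the architecture developed in that work (GIN) updates each node by summing its neighbours' representations, adding the node's own scaled representation, and passing the result through an MLP. A minimal sketch of that update follows; the identity "MLP" and the toy path graph are illustrative.

```python
import numpy as np

def gin_layer(H, A, mlp, eps=0.0):
    """One GIN-style update: each node combines its own features with the
    *sum* of its neighbours' features, then applies an MLP.
    H: (n, d) node features, A: (n, n) binary adjacency matrix, mlp: callable."""
    return mlp((1.0 + eps) * H + A @ H)

# Toy usage: identity "MLP" on a 3-node path graph with one-hot node features.
A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]], dtype=float)
H = np.eye(3)
print(gin_layer(H, A, mlp=lambda X: X))
```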
Adversarial Attacks on Neural Networks for Graph Data
TLDR
This work introduces the first study of adversarial attacks on attributed graphs, specifically focusing on models exploiting ideas of graph convolutions, and generates adversarial perturbations targeting the node's features and the graph structure, taking the dependencies between instances into account.
Graph Neural Networks for Social Recommendation
TLDR
This paper provides a principled approach to jointly capture interactions and opinions in the user-item graph and proposes the framework GraphRec, which coherently models two graphs and heterogeneous strengths for social recommendations.
Adversarial Attacks on Graph Neural Networks via Meta Learning
TLDR
The core principle is to use meta-gradients to solve the bilevel problem underlying training-time attacks on graph neural networks for node classification that perturb the discrete graph structure, essentially treating the graph as a hyperparameter to optimize.
Robust Graph Convolutional Networks Against Adversarial Attacks
TLDR
Robust GCN (RGCN) is proposed, a novel model that "fortifies" GCNs against adversarial attacks by adopting Gaussian distributions as the hidden representations of nodes in each convolutional layer, so that the variances of the Gaussian distributions can automatically absorb the effects of adversarial changes.
Benchmarking Graph Neural Networks
TLDR
A reproducible GNN benchmarking framework is introduced, with the facility for researchers to add new models conveniently for arbitrary datasets, along with a principled investigation of the recent Weisfeiler-Lehman GNNs (WL-GNNs) compared to message-passing-based graph convolutional networks (GCNs).
Attacking Graph-based Classification via Manipulating the Graph Structure
TLDR
This work forms an attack as a graph-based optimization problem, solving which produces the edges that an attacker needs to manipulate to achieve its attack goal, and proposes several approximation techniques to solve the optimization problem.
Adversarial Attack on Graph Structured Data
TLDR
This paper proposes a reinforcement-learning-based attack method that learns a generalizable attack policy while only requiring prediction labels from the target classifier, and uses both synthetic and real-world data to show that a family of Graph Neural Network models is vulnerable to adversarial attacks.
A Fair Comparison of Graph Neural Networks for Graph Classification
TLDR
By comparing GNNs with structure-agnostic baselines, the authors provide convincing evidence that, on some datasets, structural information has not yet been exploited; the work contributes to the graph learning field by providing a much-needed grounding for rigorous evaluation of graph classification models.
...