GAP: Differentially Private Graph Neural Networks with Aggregation Perturbation

@article{Sajadmanesh2022GAPDP,
  title={GAP: Differentially Private Graph Neural Networks with Aggregation Perturbation},
  author={Sina Sajadmanesh and Ali Shahin Shamsabadi and Aur{\'e}lien Bellet and Daniel G{\'a}tica-P{\'e}rez},
  journal={ArXiv},
  year={2022},
  volume={abs/2203.00949}
}
Graph Neural Networks (GNNs) are powerful models designed for graph data that learn node representations by recursively aggregating information from each node’s local neighborhood. However, despite their state-of-the-art performance in predictive graph-based applications, recent studies have shown that GNNs can raise significant privacy concerns when graph data contain sensitive information. As a result, in this paper, we study the problem of learning GNNs with Differential Privacy (DP). We…
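To make the aggregation-perturbation idea concrete, the following is a minimal sketch (not the authors' implementation) of a differentially private sum-aggregation step: node embeddings are row-normalized so that adding or removing a single edge changes the aggregate by at most unit L2 norm, and Gaussian noise calibrated to the privacy budget is added to the result.

import numpy as np

def noisy_sum_aggregate(X, adj, sigma, rng=None):
    # X: (n, d) node embeddings; adj: (n, n) binary adjacency; sigma: noise std.
    # Row-normalizing bounds each node's contribution to any neighborhood sum
    # by unit L2 norm, so the edge-level sensitivity of the aggregate is 1.
    rng = np.random.default_rng() if rng is None else rng
    norms = np.maximum(np.linalg.norm(X, axis=1, keepdims=True), 1e-12)
    agg = adj @ (X / norms)                          # sum of neighbor embeddings
    return agg + rng.normal(0.0, sigma, agg.shape)   # Gaussian mechanism

Once released, such perturbed aggregates are themselves differentially private and can be cached and reused, which is a common motivation for perturbing the aggregation step directly rather than the training gradients.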

Citations

Certified Graph Unlearning
TLDR
This work introduces the first known framework for certified graph unlearning of GNNs, and demonstrates excellent performance-complexity trade-offs when compared to complete retraining methods and approaches that do not leverage graph information.
Differentially Private Subgraph Counting in the Shuffle Model
TLDR
This paper proposes accurate subgraph counting algorithms under the recently studied shuffle model and shows that they significantly outperform one-round local algorithms in terms of accuracy, achieving small estimation errors with a reasonable privacy budget, e.g., smaller than 1 under edge DP.
Group Privacy: An Underrated but Worth Studying Research Problem in the Era of Artificial Intelligence and Big Data
TLDR
The main objective is to highlight the possibility of group privacy breaches when big data meet AI in real-world scenarios, and to argue that group privacy is a genuine problem that is likely to arise when AI-based techniques are applied to high-dimensional, large-scale datasets.

References

SHOWING 1-10 OF 76 REFERENCES
Node-Level Differentially Private Graph Neural Networks
TLDR
This work formally addresses the problem of learning GNN parameters with node-level privacy and provides an algorithmic solution with a strong differential privacy guarantee, employing a careful sensitivity analysis and a non-trivial extension of the privacy amplification by subsampling technique.
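For context, the amplification technique referred to here is, in its standard (non-graph) form, privacy amplification by subsampling; a textbook statement of the bound, not the cited paper's node-level variant, is: if a mechanism \mathcal{M} is (\varepsilon, \delta)-differentially private, then applying \mathcal{M} to a Poisson subsample that includes each record independently with probability q satisfies (\varepsilon', q\delta)-DP with

\varepsilon' = \log\bigl(1 + q\,(e^{\varepsilon} - 1)\bigr) \approx q\,\varepsilon \quad \text{for small } \varepsilon .

The extension is non-trivial at the node level because removing one node also removes its contributions to all of its neighbors' aggregations, so the standard per-record argument does not apply directly.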
Locally Private Graph Neural Networks
TLDR
This paper proposes a privacy-preserving, architecture-agnostic GNN learning framework with formal privacy guarantees based on Local Differential Privacy (LDP), and develops a locally private mechanism to perturb and compress node features, which the server can efficiently collect to approximate the GNN's neighborhood aggregation step.
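As an illustration of the local-DP setting described above, here is a simplified sketch that uses a plain per-feature Laplace mechanism as a stand-in for the paper's multi-bit encoder; the feature range and budget split are assumptions of this sketch, not the paper's design.

import numpy as np

def ldp_perturb_features(x, eps, rng=None):
    # x: one node's d-dimensional feature vector, entries assumed to lie in [0, 1].
    # Splitting the budget evenly over the d coordinates, each with sensitivity 1,
    # Laplace noise of scale d / eps gives eps-LDP for the whole vector.
    rng = np.random.default_rng() if rng is None else rng
    d = x.shape[0]
    return x + rng.laplace(0.0, d / eps, size=d)

Each node perturbs its own features before sending them to the server; averaging the perturbed features over many neighbors during aggregation then cancels much of the injected noise.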
Releasing Graph Neural Networks with Differential Privacy Guarantees
TLDR
A new graph-specific scheme for releasing a student GNN is proposed, which avoids splitting the private training data altogether and is analyzed theoretically in the Rényi differential privacy framework to provide formal privacy guarantees.
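For reference, the Rényi differential privacy (RDP) framework mentioned here is defined as follows (the standard definition, not anything specific to the cited analysis): a mechanism \mathcal{M} satisfies (\alpha, \varepsilon)-RDP if, for all neighboring datasets D and D',

D_{\alpha}\bigl(\mathcal{M}(D) \,\|\, \mathcal{M}(D')\bigr) = \frac{1}{\alpha - 1} \log \mathbb{E}_{x \sim \mathcal{M}(D')}\!\left[\left(\frac{\Pr[\mathcal{M}(D) = x]}{\Pr[\mathcal{M}(D') = x]}\right)^{\alpha}\right] \le \varepsilon,

and an (\alpha, \varepsilon)-RDP guarantee converts to (\varepsilon + \log(1/\delta)/(\alpha - 1), \delta)-DP for any \delta \in (0, 1).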
Information Obfuscation of Graph Neural Networks
TLDR
This paper proposes a framework to locally filter out pre-determined sensitive attributes via adversarial training with the total variation and the Wasserstein distance, creating a strong defense against inference attacks while suffering only a small loss in task performance.
Quantifying Privacy Leakage in Graph Embedding
TLDR
It is shown that the strong correlation between the graph embeddings and node attributes allows the adversary to infer sensitive information (e.g., gender or location) through three inference attacks targeting Graph Neural Networks.
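A toy version of the attribute-inference attack described above, with hypothetical variable names (not the cited paper's code): the adversary observes the sensitive attribute for some nodes and fits a simple classifier from the released embeddings to that attribute.

from sklearn.linear_model import LogisticRegression

def infer_attribute(emb_known, attr_known, emb_target):
    # emb_known : (m, d) embeddings of nodes whose sensitive attribute is known
    # attr_known: (m,)   those nodes' attribute labels (e.g., gender)
    # emb_target: (k, d) embeddings of the victim nodes
    clf = LogisticRegression(max_iter=1000).fit(emb_known, attr_known)
    return clf.predict(emb_target)   # the adversary's guesses for the victims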
LinkTeller: Recovering Private Edges from Graph Neural Networks via Influence Analysis
TLDR
An in-depth understanding of the tradeoff between GCN model utility and robustness against potential privacy attacks is provided; an existing algorithm for differentially private graph convolutional network (DP GCN) training is adapted, and a new DP GCN mechanism, LapGraph, is proposed.
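As a rough sketch of an edge-level DP graph-release mechanism in the spirit of LapGraph (a simplification; the cited paper's actual procedure, including how the edge count is estimated privately, may differ), Laplace noise is added to the upper triangle of the adjacency matrix and the largest noisy entries are kept as edges.

import numpy as np

def laplace_perturb_graph(adj, eps, rng=None):
    # adj: (n, n) symmetric binary adjacency matrix; eps: edge-level DP budget.
    rng = np.random.default_rng() if rng is None else rng
    iu = np.triu_indices(adj.shape[0], k=1)          # each undirected edge once
    noisy = adj[iu] + rng.laplace(0.0, 1.0 / eps, size=iu[0].size)
    m = int(adj[iu].sum())       # true edge count; a full DP variant would spend
                                 # part of the budget to estimate this privately
    keep = np.argsort(noisy)[::-1][:m]               # keep the m largest entries
    out = np.zeros_like(adj)
    out[iu[0][keep], iu[1][keep]] = 1
    return out + out.T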
NetFense: Adversarial Defenses against Privacy Attacks on Neural Networks for Graph Data
TLDR
This work proposes a novel research task, adversarial defenses against GNN-based privacy attacks, and presents a graph perturbation-based approach, NetFense, that preserves the unnoticeability of the graph data while reducing the prediction confidence of targeted label classification.
Node-Level Membership Inference Attacks Against Graph Neural Networks
TLDR
This paper systematically defines the threat models and proposes three node-level membership inference attacks against graph neural networks based on an adversary’s background knowledge, showing that GNNs are vulnerable to node-level membership inference even when the adversary has minimal background knowledge.
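One of the simplest baselines for the node-level membership inference setting above (far simpler than the cited paper's three attacks) is a confidence threshold on the target model's posteriors, sketched here with hypothetical inputs.

import numpy as np

def confidence_mi_attack(posteriors, threshold=0.9):
    # posteriors: (k, c) softmax outputs of the target GNN for k query nodes.
    # Nodes the model is highly confident about are guessed to be training members.
    return np.max(posteriors, axis=1) >= threshold   # True = guessed "member"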
GraphMI: Extracting Private Graph Data from Graph Neural Networks
TLDR
GraphMI, a graph model inversion attack that aims to infer the edges of the training graph by inverting Graph Neural Networks (one of the most popular graph analysis tools), is presented, and it is shown that edges with greater influence are more likely to be recovered.
Membership Inference Attack on Graph Neural Networks
TLDR
To prevent membership inference (MI) attacks on GNNs, two effective defenses are proposed that significantly decrease the attacker's inference accuracy by up to 60% without degrading the target model's performance.
...