Corpus ID: 231979264

Privacy-Preserving Graph Convolutional Networks for Text Classification

@article{Igamberdiev2021PrivacyPreservingGC,
  title={Privacy-Preserving Graph Convolutional Networks for Text Classification},
  author={Timour Igamberdiev and Ivan Habernal},
  journal={ArXiv},
  year={2021},
  volume={abs/2102.09604}
}
Graph convolutional networks (GCNs) are a powerful architecture for representation learning on documents that naturally occur as graphs, e.g., citation or social networks. However, sensitive personal information, such as documents with people's profiles or relationships as edges, is prone to privacy leaks, as the trained model might reveal the original input. Although differential privacy (DP) offers a well-founded privacy-preserving framework, GCNs pose…
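For context, the graph-convolution operation the abstract refers to can be sketched as follows. This is a minimal NumPy illustration of a single GCN layer with symmetric normalization, not the authors' implementation; the toy graph, features, and weights are invented for the example:

```python
import numpy as np

def gcn_layer(A, H, W):
    """One GCN layer: H' = ReLU(D^-1/2 (A + I) D^-1/2 H W).

    A: (n, n) adjacency matrix, H: (n, d) node features, W: (d, k) weights.
    """
    A_hat = A + np.eye(A.shape[0])             # add self-loops
    deg = A_hat.sum(axis=1)                    # node degrees incl. self-loop
    D_inv_sqrt = np.diag(1.0 / np.sqrt(deg))
    A_norm = D_inv_sqrt @ A_hat @ D_inv_sqrt   # symmetric normalization
    return np.maximum(A_norm @ H @ W, 0.0)     # ReLU activation

# toy graph: 3 nodes in a path (0-1-2), 2-dim features, identity weights
A = np.array([[0., 1., 0.],
              [1., 0., 1.],
              [0., 1., 0.]])
H = np.random.default_rng(0).normal(size=(3, 2))
W = np.eye(2)
out = gcn_layer(A, H, W)
print(out.shape)  # (3, 2)
```

Each node's output mixes its own features with those of its neighbors, which is exactly why a trained model can leak information about individual nodes and edges.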


Releasing Graph Neural Networks with Differential Privacy Guarantees
With the increasing popularity of Graph Neural Networks (GNNs) in several sensitive applications like healthcare and medicine, concerns have been raised over the privacy aspects of trained GNNs.
When differential privacy meets NLP: The devil is in the detail
Differential privacy provides a formal approach to privacy of individuals. Applications of differential privacy in various scenarios, such as protecting users' original utterances, must satisfy…

References

Showing 1–10 of 58 references
When Differential Privacy Meets Graph Neural Networks
TLDR: This paper proposes an LDP algorithm in which a central server communicates with graph nodes to privately collect their data and estimate a GCN's graph-convolution layer; it analyzes the method's theoretical characteristics and compares it with state-of-the-art mechanisms.
Inductive Representation Learning on Large Graphs
TLDR: GraphSAGE is presented, a general, inductive framework that leverages node feature information (e.g., text attributes) to efficiently generate node embeddings for previously unseen data; it outperforms strong baselines on three inductive node-classification benchmarks.
Cluster-GCN: An Efficient Algorithm for Training Deep and Large Graph Convolutional Networks
TLDR: Cluster-GCN is proposed, a novel GCN algorithm suitable for SGD-based training that exploits the graph clustering structure; it allows training much deeper GCNs without much time and memory overhead, leading to improved prediction accuracy.
How Powerful are Graph Neural Networks?
TLDR: This work characterizes the discriminative power of popular GNN variants, such as Graph Convolutional Networks and GraphSAGE, shows that they cannot learn to distinguish certain simple graph structures, and develops a simple architecture that is provably the most expressive in the class of GNNs.
Towards Differentially Private Text Representations
TLDR: A new deep learning framework for an untrusted-server setting is proposed; it includes a novel locally differentially private (LDP) protocol that reduces the impact of the privacy parameter ε on accuracy and provides enhanced flexibility in choosing randomization probabilities for LDP.
Question Answering by Reasoning Across Documents with Graph Convolutional Networks
TLDR: A neural model is introduced that integrates and reasons over information spread within and across multiple documents, achieving state-of-the-art results on WikiHop, a multi-document question answering dataset.
Privacy-preserving Neural Representations of Text
TLDR: This article measures the privacy of a hidden representation by an attacker's ability to accurately predict specific private information from it, and characterizes the trade-off between the privacy and the utility of neural representations.
Towards Robust and Privacy-preserving Text Representations
TLDR: This paper proposes an approach to explicitly obscure important author characteristics at training time, such that the learned representations are invariant to these attributes, leading to increased privacy.
Using word embeddings to improve the privacy of clinical notes
TLDR: A privacy technique for anonymizing clinical notes that guarantees all private health information is secured; it can protect clinical texts at low cost and with extremely high recall, at a readability trade-off, while remaining useful for natural language processing classification tasks.
Revisiting Semi-Supervised Learning with Graph Embeddings
TLDR: On a large and diverse set of benchmark tasks, including text classification, distantly supervised entity extraction, and entity classification, the proposed semi-supervised learning framework shows improved performance over many existing models.
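Several references above build on local differential privacy (LDP), where each node randomizes its own data before a central server aggregates it. The following is a generic randomized-response sketch for single private bits, assuming a trusted-aggregator-free setting; it illustrates the standard textbook mechanism, not any specific paper's protocol:

```python
import math
import random

def randomized_response(bit, epsilon):
    """epsilon-LDP randomized response for one private bit:
    report the true bit with probability e^eps / (e^eps + 1), else flip it."""
    p = math.exp(epsilon) / (math.exp(epsilon) + 1.0)
    return bit if random.random() < p else 1 - bit

def debias_mean(reports, epsilon):
    """Server-side unbiased estimate of the true mean of the private bits."""
    p = math.exp(epsilon) / (math.exp(epsilon) + 1.0)
    observed = sum(reports) / len(reports)
    return (observed - (1.0 - p)) / (2.0 * p - 1.0)

random.seed(0)
true_bits = [1] * 700 + [0] * 300                         # true mean = 0.7
reports = [randomized_response(b, epsilon=1.0) for b in true_bits]
est = debias_mean(reports, epsilon=1.0)
print(est)  # close to 0.7, despite every individual report being noisy
```

Each individual report is plausibly deniable, yet the aggregate statistic remains recoverable; extending such mechanisms to graph structure (edges, neighborhoods) is what makes the GCN setting difficult.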