Preserving differential privacy in convolutional deep belief networks

@article{Phan2017PreservingDP,
  title={Preserving differential privacy in convolutional deep belief networks},
  author={Nhathai Phan and Xintao Wu and Dejing Dou},
  journal={Machine Learning},
  year={2017},
  volume={106},
  pages={1681-1704}
}
The remarkable development of deep learning in the medicine and healthcare domains presents obvious privacy issues when deep neural networks are built on users’ personal and highly sensitive data, e.g., clinical records, user profiles, biomedical images, etc. However, only a few scientific studies on preserving privacy in deep learning have been conducted. In this paper, we focus on developing a private convolutional deep belief network (pCDBN), which is essentially a convolutional deep belief… 
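
The abstract is cut off here, but the dPA paper listed under the references makes the underlying idea explicit: enforce ε-differential privacy by perturbing the coefficients of the training objective rather than the trained model, and pCDBN carries that functional-mechanism idea to convolutional deep belief networks. A minimal numpy sketch of coefficient perturbation on a plain quadratic (ridge) objective; the sensitivity constant is illustrative and assumes clipped data, not the paper's actual bound:

```python
import numpy as np

rng = np.random.default_rng(0)

def private_ridge(X, y, epsilon, lam=1.0):
    """Perturb the coefficients of the quadratic objective
    L(w) = w.T (X.T X) w / 2 - (X.T y).T w + lam/2 * ||w||^2
    once with Laplace noise; any minimizer of the noisy objective is then
    epsilon-DP under a suitable sensitivity bound. `sens` below is a
    placeholder that assumes rows of X and values of y are clipped to [-1, 1]."""
    n, d = X.shape
    sens = 2.0 * d * (d + 1)  # illustrative global sensitivity of the coefficients
    A = X.T @ X + rng.laplace(scale=sens / epsilon, size=(d, d))
    b = X.T @ y + rng.laplace(scale=sens / epsilon, size=d)
    A = (A + A.T) / 2.0 + lam * np.eye(d)  # symmetrize, keep it invertible
    return np.linalg.solve(A, b)
```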

Differential Privacy in Deep Learning: An Overview

TLDR
This paper classifies threats and defenses and identifies the points in a deep learning pipeline where random noise can be added, to the input samples, the gradients, or the objective function, in order to protect model privacy, in particular via differential privacy.
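
A toy numpy sketch of those three injection points on a single squared-error SGD step; the noise scales are illustrative and not calibrated to any sensitivity bound:

```python
import numpy as np

rng = np.random.default_rng(1)

def noisy_sgd_step(w, x, y, lr=0.1, epsilon=1.0, where="gradient"):
    """One SGD step on squared error, with Laplace noise injected at one of
    the three points the survey distinguishes."""
    if where == "input":                      # perturb the training sample
        x = x + rng.laplace(scale=1.0 / epsilon, size=x.shape)
    grad = 2.0 * (w @ x - y) * x              # gradient of (w.x - y)^2
    if where == "gradient":                   # perturb the gradient
        grad = grad + rng.laplace(scale=1.0 / epsilon, size=grad.shape)
    w = w - lr * grad
    if where == "output":                     # perturb the learned function
        w = w + rng.laplace(scale=1.0 / epsilon, size=w.shape)
    return w
```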

Analysis of Application Examples of Differential Privacy in Deep Learning

TLDR
This paper comparatively analyzes and classifies several deep learning models under differential privacy, with particular attention to the application of differential privacy in generative adversarial networks (GANs).

A Neuron Noise-Injection Technique for Privacy Preserving Deep Neural Networks

TLDR
A neuron noise-injection technique based on layer-wise buffered contribution-ratio forwarding and the ε-differential privacy technique is presented to preserve privacy in a DNN model; it narrows the existing accuracy gap to close proximity and outperforms state-of-the-art approaches in this context.
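
A speculative numpy reading of the contribution-ratio idea (the paper's buffered forwarding scheme is more involved than this sketch):

```python
import numpy as np

rng = np.random.default_rng(2)

def inject_neuron_noise(activations, epsilon=1.0):
    """Score each neuron by its share of the layer's total absolute
    activation and scale Laplace noise by that ratio; a hypothetical
    simplification of layer-wise contribution-ratio noise injection."""
    a = np.abs(activations)
    ratio = a / (a.sum() + 1e-12)        # per-neuron contribution ratio
    return activations + rng.laplace(scale=(ratio + 1e-12) / epsilon)
```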

Differentially Private Generative Adversarial Network

TLDR
This paper proposes a differentially private GAN (DPGAN) model and demonstrates that the method can generate high-quality data points at a reasonable privacy level by adding carefully designed noise to gradients during the learning procedure.
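
A minimal numpy sketch of the gradient-noising step, assuming the usual clip-then-noise pattern on the discriminator update; constants are illustrative:

```python
import numpy as np

rng = np.random.default_rng(3)

def private_disc_update(w_disc, grad_disc, lr=0.05, clip=1.0, sigma=1.0):
    """DPGAN-style discriminator step: bound the gradient norm, add Gaussian
    noise calibrated to that bound, then update. The generator never touches
    real data directly, so privatizing the discriminator protects the
    training set."""
    norm = np.linalg.norm(grad_disc)
    grad_disc = grad_disc * min(1.0, clip / max(norm, 1e-12))
    noisy = grad_disc + rng.normal(scale=sigma * clip, size=grad_disc.shape)
    return w_disc - lr * noisy
```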

A review of privacy-preserving techniques for deep learning

A layer-wise Perturbation based Privacy Preserving Deep Neural Networks

  • Tosin A. Adesuyi, Byeong-Man Kim
  • Computer Science
    2019 International Conference on Artificial Intelligence in Information and Communication (ICAIIC)
  • 2019
TLDR
This approach determines points of perturbation in a DNN, preserves privacy, and narrows the accuracy gap between privacy-preserving and non-privacy-preserving models.

Privacy in Deep Learning: A Survey

TLDR
This survey reviews the privacy concerns brought by deep learning and the mitigation techniques introduced to tackle them, and shows that there is a gap in the literature regarding test-time inference privacy.

Privacy and Security Issues in Deep Learning: A Survey

TLDR
This paper briefly introduces four types of attacks and the privacy-preserving techniques in deep learning (DL), and summarizes the attack and defense methods associated with DL privacy and security in recent years.
...

References

SHOWING 1-10 OF 79 REFERENCES

Differential Privacy Preservation for Deep Auto-Encoders: an Application of Human Behavior Prediction

TLDR
The main idea is to enforce ε-differential privacy by perturbing the objective functions of the traditional deep auto-encoder, rather than its results.
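
A hedged numpy sketch of that objective-function perturbation for a logistic-style loss, using a second-order Taylor expansion so the data enters only through two coefficient arrays; the sensitivity constant is a placeholder:

```python
import numpy as np

rng = np.random.default_rng(4)

def noisy_loss_coefficients(X, y, epsilon):
    """Functional-mechanism sketch: expand log(1 + exp(x.w)) - y * (x.w)
    to second order around 0 and add Laplace noise to the data-dependent
    coefficients once; `sens` assumes rows of X have norm <= 1 and is
    illustrative."""
    n, d = X.shape
    c1 = X.T @ (0.5 - y)      # coefficient of the linear term in w
    c2 = (X.T @ X) / 8.0      # coefficient of the quadratic term in w
    sens = d + d * d / 4.0    # placeholder sensitivity of (c1, c2)
    c1 = c1 + rng.laplace(scale=sens / epsilon, size=c1.shape)
    c2 = c2 + rng.laplace(scale=sens / epsilon, size=c2.shape)
    return c1, c2             # minimize c1 @ w + w @ c2 @ w privately
```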

Privacy-preserving deep learning

  • R. Shokri, Vitaly Shmatikov
  • Computer Science
    2015 53rd Annual Allerton Conference on Communication, Control, and Computing (Allerton)
  • 2015
TLDR
This paper presents a practical system that enables multiple parties to jointly learn an accurate neural-network model for a given objective without sharing their input datasets; it exploits the fact that the optimization algorithms used in modern deep learning, namely those based on stochastic gradient descent, can be parallelized and executed asynchronously.
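
A small numpy sketch of the selective-sharing step, assuming a plain top-k-by-magnitude selection rule:

```python
import numpy as np

def select_shared_gradients(grad, fraction=0.1):
    """Selective-SGD sketch: each party uploads only the largest-magnitude
    `fraction` of its local gradient entries to the shared parameter server,
    keeping raw data and the remaining gradients local."""
    flat = grad.ravel()
    k = max(1, int(fraction * flat.size))
    idx = np.argpartition(np.abs(flat), -k)[-k:]   # indices of top-k entries
    shared = np.zeros_like(flat)
    shared[idx] = flat[idx]
    return shared.reshape(grad.shape)
```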

Deep Learning with Differential Privacy

TLDR
This work develops new algorithmic techniques for learning and a refined analysis of privacy costs within the framework of differential privacy, demonstrating that deep neural networks can be trained with non-convex objectives, under a modest privacy budget, and at a manageable cost in software complexity, training efficiency, and model quality.
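
A minimal numpy sketch of one such step, i.e. per-example clipping plus Gaussian noise; the moments-accountant bookkeeping is omitted:

```python
import numpy as np

rng = np.random.default_rng(6)

def dp_sgd_step(w, per_example_grads, lr=0.1, clip=1.0, sigma=1.1):
    """One step in the spirit of DP-SGD: clip each per-example gradient to
    L2 norm `clip`, average, and add Gaussian noise scaled to the clip
    bound; the cumulative privacy cost over many steps is what the paper's
    moments accountant tracks (not implemented here)."""
    clipped = [g * min(1.0, clip / max(np.linalg.norm(g), 1e-12))
               for g in per_example_grads]
    noisy_mean = (np.mean(clipped, axis=0)
                  + rng.normal(scale=sigma * clip / len(clipped), size=w.shape))
    return w - lr * noisy_mean
```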

Privacy-preserving logistic regression

TLDR
This paper addresses the important tradeoff between privacy and learnability when designing algorithms for learning from private databases, providing a privacy-preserving regularized logistic regression algorithm based on a new privacy-preserving technique.
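
A numpy sketch of the objective-perturbation ingredient, assuming feature vectors scaled to norm at most 1:

```python
import numpy as np

rng = np.random.default_rng(7)

def random_linear_term(d, n, epsilon):
    """Objective-perturbation sketch for regularized logistic regression:
    draw b with uniform direction and norm ~ Gamma(d, 2/epsilon), i.e.
    density proportional to exp(-epsilon * ||b|| / 2), and add (b @ w) / n
    to the training loss before minimizing."""
    direction = rng.normal(size=d)
    direction /= np.linalg.norm(direction)
    b = rng.gamma(shape=d, scale=2.0 / epsilon) * direction
    return lambda w: (b @ w) / n   # extra term for the training objective
```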

Differential privacy via wavelet transforms

TLDR
This paper develops a data publishing technique that ensures ε-differential privacy while providing accurate answers for range-count queries, i.e., count queries where the predicate on each attribute is a range.
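
A compact numpy sketch of the wavelet idea on a power-of-two histogram; a uniform noise scale stands in for the paper's level-dependent calibration:

```python
import numpy as np

rng = np.random.default_rng(8)

def haar_matrix(n):
    """Orthonormal Haar transform matrix; n must be a power of two."""
    if n == 1:
        return np.array([[1.0]])
    h = haar_matrix(n // 2)
    top = np.kron(h, [1.0, 1.0])                 # averages
    bot = np.kron(np.eye(n // 2), [1.0, -1.0])   # differences
    return np.vstack([top, bot]) / np.sqrt(2.0)

def publish_noisy_histogram(counts, epsilon):
    """Wavelet-mechanism sketch: noise the Haar coefficients of a count
    histogram instead of the counts themselves, so a range-count query
    touches only O(log n) noisy coefficients."""
    n = len(counts)
    H = haar_matrix(n)
    noisy = H @ counts + rng.laplace(scale=(1 + np.log2(n)) / epsilon, size=n)
    return H.T @ noisy   # H is orthonormal, so H.T inverts the transform
```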

Personal privacy vs population privacy: learning to attack anonymization

TLDR
It is demonstrated that even under differential privacy, such classifiers can be used to infer "private" attributes accurately in realistic data, and it is observed that the accuracy of inferring private attributes from differentially private data and from l-diverse data can be quite similar.
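
A toy sklearn sketch of such an attribute-inference attack on synthetic data (the dataset, the noise scale, and the classifier choice are all illustrative):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(9)

# Train a classifier on released (noised) attributes of other people, then
# use it to predict a target's "private" attribute.
X = rng.normal(size=(1000, 5))                 # released quasi-identifiers
secret = (X[:, 0] + 0.5 * X[:, 1] > 0)         # correlated hidden attribute
X_released = X + rng.laplace(scale=0.5, size=X.shape)  # toy DP-style noise
attacker = LogisticRegression().fit(X_released[:800], secret[:800])
print("inference accuracy:", attacker.score(X_released[800:], secret[800:]))
```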

Differentially private recommender systems: building privacy into the net

TLDR
This work considers the problem of producing recommendations from collective user behavior while simultaneously providing privacy guarantees for these users, and finds that several of the leading approaches in the Netflix Prize competition can be adapted to provide differential privacy without significantly degrading their accuracy.
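
A numpy sketch of the publish-noisy-statistics pattern, with a crude stand-in for the paper's per-user sensitivity analysis:

```python
import numpy as np

rng = np.random.default_rng(10)

def noisy_item_similarity(ratings, epsilon):
    """Publish a once-noised item-item statistic and run any neighborhood
    recommender on top of it; `sens` is an illustrative per-user bound,
    assuming ratings clipped to [0, 1]."""
    R = np.clip(ratings, 0.0, 1.0)           # users x items
    stat = R.T @ R                           # item co-rating statistic
    sens = R.shape[1]                        # crude per-user contribution bound
    noisy = stat + rng.laplace(scale=sens / epsilon, size=stat.shape)
    return (noisy + noisy.T) / 2.0           # keep it symmetric
```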

Risk Prediction with Electronic Health Records: A Deep Learning Approach

TLDR
A deep learning approach for phenotyping from patient EHRs: a four-layer convolutional neural network model extracts phenotypes and performs prediction, and the proposed model is validated on a real-world EHR data warehouse under the specific scenario of predictive modeling of chronic diseases.
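
A hypothetical PyTorch sketch of a small four-layer 1-D CNN over an EHR event matrix; channel counts and kernel sizes are illustrative, not the published configuration:

```python
import torch.nn as nn

# Event channels x time goes in; a chronic-disease risk score comes out.
model = nn.Sequential(
    nn.Conv1d(in_channels=256, out_channels=64, kernel_size=5), nn.ReLU(),
    nn.MaxPool1d(kernel_size=2),
    nn.Conv1d(64, 32, kernel_size=3), nn.ReLU(),
    nn.AdaptiveMaxPool1d(1), nn.Flatten(),
    nn.Linear(32, 1), nn.Sigmoid(),
)
```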

Differentially Private Online Learning

TLDR
This paper provides a general framework to convert a given online convex programming (OCP) algorithm into a privacy-preserving OCP algorithm with good (sub-linear) regret, and shows that the framework can also be used to provide differentially private algorithms for offline learning.
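
A toy numpy sketch of one such conversion, a follow-the-regularized-leader learner that releases its gradient sum with fresh Laplace noise each round (the paper's tree-based aggregation achieves better regret):

```python
import numpy as np

rng = np.random.default_rng(11)

def private_ftrl(gradients, epsilon=1.0, lam=1.0):
    """Keep a running gradient sum, release it with fresh Laplace noise each
    round, and play the regularized minimizer."""
    total = np.zeros_like(gradients[0])
    plays = []
    for g in gradients:
        total = total + g
        noisy = total + rng.laplace(scale=1.0 / epsilon, size=total.shape)
        plays.append(-noisy / lam)   # argmin_w noisy.w + (lam/2)||w||^2
    return plays
```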

Deep Patient: An Unsupervised Representation to Predict the Future of Patients from the Electronic Health Records

TLDR
The findings indicate that deep learning applied to EHRs can derive patient representations that offer improved clinical predictions, and could provide a machine learning framework for augmenting clinical decision systems.
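
A numpy sketch of the building block involved, a single denoising-autoencoder layer of the kind such a representation learner stacks; sigmoid units, tied weights, and squared-error reconstruction are simplifying choices:

```python
import numpy as np

rng = np.random.default_rng(12)

def train_dae_layer(X, n_hidden, lr=0.01, corrupt=0.2, epochs=5):
    """Train one denoising-autoencoder layer on patient vectors X and return
    the learned hidden representation."""
    sig = lambda z: 1.0 / (1.0 + np.exp(-z))
    d = X.shape[1]
    W = rng.normal(scale=0.1, size=(n_hidden, d))
    b, c = np.zeros(n_hidden), np.zeros(d)
    for _ in range(epochs):
        for x in X:
            x_t = x * (rng.random(d) > corrupt)   # mask-out corruption
            h = sig(W @ x_t + b)                  # encode
            r = sig(W.T @ h + c)                  # decode (tied weights)
            dr = (r - x) * r * (1 - r)            # output delta
            dh = (W @ dr) * h * (1 - h)           # hidden delta
            W -= lr * (np.outer(dh, x_t) + np.outer(h, dr))
            b -= lr * dh
            c -= lr * dr
    return sig(X @ W.T + b)   # the learned patient representation
```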
...