Preserving differential privacy in convolutional deep belief networks

@article{Phan2017PreservingDP,
  title={Preserving differential privacy in convolutional deep belief networks},
  author={Nhathai Phan and Xintao Wu and Dejing Dou},
  journal={Machine Learning},
  year={2017},
  volume={106},
  pages={1681-1704}
}
The remarkable development of deep learning in the medicine and healthcare domains presents obvious privacy issues when deep neural networks are built on users’ personal and highly sensitive data, e.g., clinical records, user profiles, and biomedical images. However, only a few scientific studies on preserving privacy in deep learning have been conducted. In this paper, we focus on developing a private convolutional deep belief network (pCDBN), which is essentially a convolutional deep belief…

Differential Privacy in Deep Learning: An Overview

This paper classifies threats and defenses, and identifies the points in a deep learning pipeline where random noise can be added to input samples, gradients, or objective functions to protect the model's privacy, with a focus on differential privacy.
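
As a rough illustration (not code from the survey), the three injection points it distinguishes can be sketched as follows in Python; the noise scales here are placeholder values, not calibrated privacy budgets.

import numpy as np

rng = np.random.default_rng(0)

def laplace(shape, scale):
    # Laplace noise; under differential privacy, scale = sensitivity / epsilon.
    return rng.laplace(loc=0.0, scale=scale, size=shape)

x = rng.normal(size=(32, 10))    # a batch of input samples
grad = rng.normal(size=(10,))    # a gradient computed on private data

# 1) Input perturbation: noise added to the samples before training.
x_private = x + laplace(x.shape, scale=0.5)

# 2) Gradient perturbation: noise added to each update (DP-SGD style).
grad_private = grad + laplace(grad.shape, scale=0.5)

# 3) Objective perturbation: noise folded into the training objective,
#    e.g. a random linear term b^T w appended to the loss.
b = laplace((10,), scale=0.5)
def perturbed_loss(w, base_loss):
    return base_loss(w) + b @ w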

A Neuron Noise-Injection Technique for Privacy Preserving Deep Neural Networks

A neuron noise-injection technique based on layer-wise buffered contribution-ratio forwarding and ε-differential privacy is presented to preserve privacy in a DNN model; it substantially narrows the existing accuracy gap and outperforms state-of-the-art approaches in this context.

Differentially Private Generative Adversarial Network

This paper proposes a differentially private GAN (DPGAN) model and demonstrates that it can generate high-quality data points at a reasonable privacy level by adding carefully designed noise to gradients during the learning procedure.
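
A minimal sketch of the gradient-noise idea, assuming a WGAN-style critic trained with clipped, Gaussian-noised gradients; the clip bound and noise level are placeholders, not a calibrated (ε, δ) budget. In DPGAN only the discriminator touches real data, so the generator stays private by post-processing.

import numpy as np

rng = np.random.default_rng(1)

def dp_critic_update(w, grad, lr=0.01, clip=1.0, noise_std=0.5):
    # Bound the gradient's sensitivity by clipping its L2 norm,
    # then add Gaussian noise before the descent step.
    grad = grad / max(1.0, np.linalg.norm(grad) / clip)
    grad = grad + rng.normal(0.0, noise_std * clip, size=grad.shape)
    return w - lr * grad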

A review of privacy-preserving techniques for deep learning

Evaluating Differentially Private Generative Adversarial Networks Over Membership Inference Attack

This paper investigates the resistance of differentially private models to membership inference attacks at varying degrees of privacy guarantee, and analyzes how privacy parameters should be set to prevent the attacks while preserving the utility of the models.
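
For concreteness, the simplest membership inference attack of this kind thresholds the model's loss on a candidate record, since trained models tend to fit members more tightly than non-members; a toy sketch with hypothetical loss distributions:

import numpy as np

rng = np.random.default_rng(2)

def loss_threshold_attack(losses, threshold):
    # Predict "member" when the loss is below the threshold.
    return losses < threshold

# Toy data: members get lower loss than non-members on average.
member_losses = rng.exponential(0.5, size=1000)
nonmember_losses = rng.exponential(1.0, size=1000)
threshold = np.median(np.concatenate([member_losses, nonmember_losses]))
tpr = loss_threshold_attack(member_losses, threshold).mean()
fpr = loss_threshold_attack(nonmember_losses, threshold).mean()
print(f"attack TPR={tpr:.2f}, FPR={fpr:.2f}")  # the gap narrows as epsilon decreases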

A layer-wise Perturbation based Privacy Preserving Deep Neural Networks

  • Tosin A. Adesuyi, Byeong-Man Kim
  • Computer Science
    2019 International Conference on Artificial Intelligence in Information and Communication (ICAIIC)
  • 2019
This approach narrows the accuracy gap between privacy-preserving and non-privacy-preserving DNN models by determining suitable points of perturbation while still preserving privacy.

Privacy in Deep Learning: A Survey

This survey reviews the privacy concerns raised by deep learning and the mitigation techniques introduced to tackle them, and shows that there is a gap in the literature regarding test-time inference privacy.

Privacy and Security Issues in Deep Learning: A Survey

This paper briefly introduces the four types of attacks and privacy-preserving techniques in DL, and summarizes the attack and defense methods associated with DL privacy and security in recent years.
...

References

Showing 1-10 of 79 references

Differential Privacy Preservation for Deep Auto-Encoders: an Application of Human Behavior Prediction

The main idea is to enforce ε-differential privacy by perturbing the objective functions of the traditional deep auto-encoder, rather than its results.

Deep Learning with Differential Privacy

This work develops new algorithmic techniques for learning and a refined analysis of privacy costs within the framework of differential privacy, and demonstrates that deep neural networks can be trained with non-convex objectives, under a modest privacy budget, and at a manageable cost in software complexity, training efficiency, and model quality.
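
The core DP-SGD step from this line of work, as a minimal numpy sketch: clip each per-example gradient, average, and add Gaussian noise scaled to the clip bound. The (ε, δ) bookkeeping via the moments accountant is omitted here.

import numpy as np

rng = np.random.default_rng(3)

def dp_sgd_step(w, per_example_grads, lr=0.1, clip=1.0, noise_multiplier=1.1):
    # Clip each example's gradient to L2 norm <= clip so that one
    # record's influence is bounded, then noise the summed gradient.
    n = len(per_example_grads)
    clipped = [g / max(1.0, np.linalg.norm(g) / clip) for g in per_example_grads]
    noisy_sum = np.sum(clipped, axis=0) + rng.normal(
        0.0, noise_multiplier * clip, size=w.shape)
    return w - lr * noisy_sum / n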

Privacy-preserving logistic regression

This paper addresses the important tradeoff between privacy and learnability when designing algorithms for learning from private databases, and provides a privacy-preserving regularized logistic regression algorithm based on a new privacy-preserving technique.
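
Of the two techniques in that paper, the simpler one, output perturbation, can be sketched as follows, assuming each row of X has L2 norm at most 1 and labels are in {0, 1}: the L2 sensitivity of the regularized minimizer is 2 / (n·λ), so the added noise has a Gamma-distributed norm and a uniformly random direction.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(4)

def private_logreg(X, y, lam=0.1, eps=1.0):
    # sklearn's C is the inverse regularization strength; C = 1/(n*lam)
    # matches the (1/n)*loss + (lam/2)*||w||^2 objective.
    n, d = X.shape
    w = LogisticRegression(C=1.0 / (lam * n)).fit(X, y).coef_.ravel()
    noise_norm = rng.gamma(shape=d, scale=2.0 / (n * lam * eps))
    direction = rng.normal(size=d)
    direction /= np.linalg.norm(direction)
    return w + noise_norm * direction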

Differential privacy via wavelet transforms

This paper develops a data publishing technique that ensures ε-differential privacy while providing accurate answers for range-count queries, i.e., count queries where the predicate on each attribute is a range.
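
A toy version of the idea for one-dimensional histograms, assuming a power-of-two length: take a Haar wavelet transform, add Laplace noise to the coefficients, and invert. The noise scale below is illustrative; the paper weights coefficients more carefully.

import numpy as np

rng = np.random.default_rng(5)

def haar(v):
    # Unnormalized 1-D Haar transform: pairwise averages and differences.
    v = v.astype(float).copy()
    coeffs = []
    while len(v) > 1:
        avg = (v[0::2] + v[1::2]) / 2.0
        coeffs.append(v[0::2] - avg)   # detail coefficients
        v = avg
    coeffs.append(v)                   # overall average
    return coeffs

def inverse_haar(coeffs):
    v = coeffs[-1].copy()
    for detail in reversed(coeffs[:-1]):
        up = np.empty(2 * len(v))
        up[0::2] = v + detail
        up[1::2] = v - detail
        v = up
    return v

counts = np.array([5, 9, 2, 0, 7, 1, 3, 4], dtype=float)  # toy histogram
eps = 1.0
noisy = [c + rng.laplace(0.0, np.log2(len(counts)) / eps, size=c.shape)
         for c in haar(counts)]
private_hist = inverse_haar(noisy)  # range counts see polylog, not linear, noise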

Personal privacy vs population privacy: learning to attack anonymization

It is demonstrated that even under Differential Privacy, such classifiers can be used to infer "private" attributes accurately in realistic data and it is observed that the accuracy of inference of private attributes for differentially private data and ℓ-diverse data can be quite similar.

Differentially private recommender systems: building privacy into the net

This work considers the problem of producing recommendations from collective user behavior while simultaneously providing guarantees of privacy for these users, and finds that several of the leading approaches in the Netflix Prize competition can be adapted to provide differential privacy, without significantly degrading their accuracy.
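
A minimal sketch of the "global effects" piece of that approach: per-item average ratings released with Laplace noise on both the rating sums and the rater counts. The per-user contribution bounding and the noisy item-item covariance used by the full recommender are omitted.

import numpy as np

rng = np.random.default_rng(6)

def private_item_means(ratings, mask, eps=1.0, r_max=5.0):
    # ratings: (users x items) matrix; mask: 1 where a rating exists.
    # A single rating changes a sum by at most r_max and a count by 1.
    sums = (ratings * mask).sum(axis=0)
    counts = mask.sum(axis=0)
    noisy_sums = sums + rng.laplace(0.0, r_max / eps, size=sums.shape)
    noisy_counts = counts + rng.laplace(0.0, 1.0 / eps, size=counts.shape)
    return noisy_sums / np.maximum(noisy_counts, 1.0)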

Risk Prediction with Electronic Health Records: A Deep Learning Approach

A deep learning approach for phenotyping from patient EHRs is presented, building a four-layer convolutional neural network model for extracting phenotypes and performing prediction; the proposed model is validated on a real-world EHR data warehouse under the specific scenario of predictive modeling of chronic diseases.
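
A minimal PyTorch sketch of a four-layer architecture in this spirit, treating an EHR as an (events × visits) matrix; the layer sizes and event vocabulary are placeholders, not the paper's configuration.

import torch.nn as nn

class EHRConvNet(nn.Module):
    # Two 1-D convolutional layers extract local temporal phenotype
    # patterns; two fully connected layers perform risk prediction.
    def __init__(self, n_events=100, n_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(n_events, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(64, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveMaxPool1d(1),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(), nn.Linear(32, 16), nn.ReLU(),
            nn.Linear(16, n_classes),
        )

    def forward(self, x):  # x: (batch, n_events, n_visits)
        return self.classifier(self.features(x))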

Differentially Private Online Learning

This paper provides a general framework to convert a given online convex programming (OCP) algorithm into a privacy-preserving one with good (sub-linear) regret, and shows that the framework can also be used to obtain differentially private algorithms for offline learning.
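
The shape of that reduction, as a heavily simplified sketch: run the base online learner and release noise-perturbed iterates each round. Real private-OCP algorithms calibrate per-round noise and use tighter composition (e.g., tree-based aggregation); this only illustrates where noise enters.

import numpy as np

rng = np.random.default_rng(7)

def private_ocp(grads, d, eta=0.1, noise_std=0.5):
    # grads: per-round gradients of the (private) loss functions.
    w, released = np.zeros(d), []
    for g in grads:
        w = w - eta * g                 # base online gradient step
        released.append(w + rng.normal(0.0, noise_std, size=d))
    return released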

Deep Patient: An Unsupervised Representation to Predict the Future of Patients from the Electronic Health Records

The findings indicate that deep learning applied to EHRs can derive patient representations that offer improved clinical predictions, and could provide a machine learning framework for augmenting clinical decision systems.

Functional Mechanism: Regression Analysis under Differential Privacy

The main idea is to enforce ε-differential privacy by perturbing the objective function of the optimization problem rather than its results; the proposed functional mechanism significantly outperforms existing solutions.
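
For linear regression, the mechanism can be sketched as follows, assuming features and labels rescaled into [-1, 1]: write the squared loss as a polynomial in w with coefficient matrices X^T X and X^T y, noise those coefficients once, and minimize the noisy objective. The sensitivity constant below is illustrative, not the paper's exact bound.

import numpy as np

rng = np.random.default_rng(8)

def functional_mechanism_linreg(X, y, eps=1.0):
    n, d = X.shape
    sens = 2.0 * (d + 1) ** 2           # placeholder coefficient sensitivity
    A = X.T @ X + rng.laplace(0.0, sens / eps, size=(d, d))
    A = (A + A.T) / 2.0                 # keep the quadratic term symmetric
    b = X.T @ y + rng.laplace(0.0, sens / eps, size=d)
    # Minimize w^T A w - 2 b^T w; regularize in case noise breaks definiteness.
    return np.linalg.solve(A + 1e-3 * np.eye(d), b)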
...