Corpus ID: 24865214

Differentially Private Variational Dropout

Beyza Ermis, Ali Taylan Cemgil
Deep neural networks, with their large number of parameters, are highly flexible learning systems. This high flexibility brings with it serious problems such as overfitting, and regularization is used to address them. A currently popular and effective regularization technique for controlling overfitting is dropout. Often, the large data collections required for neural networks contain sensitive information, such as the medical histories of patients, and the privacy of the…
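The dropout regularizer discussed above can be sketched in a few lines; the following is an illustrative inverted-dropout implementation (the function name and interface are assumptions for illustration, not from the paper):

```python
import numpy as np

def dropout(x, p, rng):
    """Inverted dropout: zero each unit with probability p and rescale
    the survivors by 1/(1-p) so the expected activation is unchanged."""
    mask = rng.random(x.shape) >= p
    return x * mask / (1.0 - p)

rng = np.random.default_rng(0)
x = np.ones(10_000)
y = dropout(x, 0.5, rng)  # roughly half zeros, survivors rescaled to 2.0
```

At test time, inverted dropout needs no rescaling, which is why the 1/(1-p) factor is applied during training.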
Security and Privacy Issues in Deep Learning
The vulnerabilities of, and the developed defense methods for, model security and data privacy are reviewed under the notion of secure and private AI (SPAI).
Applying Deep Neural Networks over Homomorphic Encrypted Medical Data
The findings highlight the potential of the proposed privacy-preserving deep learning methods to outperform existing approaches by providing, within a reasonable amount of time, results equivalent to those achieved by unencrypted models.
Reviewing and Improving the Gaussian Mechanism for Differential Privacy
The utilities of the mechanisms improve on those of [1,2] and are close to that of the optimal, yet more computationally expensive, Gaussian mechanism.
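The classical Gaussian mechanism that this line of work refines can be sketched as follows (a minimal illustration; the function name and interface are assumptions, and the papers above propose tighter noise calibrations):

```python
import numpy as np

def gaussian_mechanism(value, sensitivity, eps, delta, rng):
    """Classical (eps, delta) Gaussian mechanism: add N(0, sigma^2) noise
    with sigma = sensitivity * sqrt(2 ln(1.25/delta)) / eps (the standard
    calibration, valid for eps < 1)."""
    sigma = sensitivity * np.sqrt(2.0 * np.log(1.25 / delta)) / eps
    return value + rng.normal(0.0, sigma, size=np.shape(value))

rng = np.random.default_rng(1)
# e.g. releasing a dataset mean whose L2 sensitivity is 1/n = 0.01
released = gaussian_mechanism(0.7, sensitivity=0.01, eps=0.5, delta=1e-5, rng=rng)
```

The improved mechanisms cited above reduce sigma for the same (eps, delta), which is what "utility improvement" means here.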


Differentially Private Dropout
This paper builds on a dropout technique that admits an elegant Bayesian interpretation, and shows that the noise it intrinsically adds, with the primary goal of regularization, can be exploited to obtain a degree of differential privacy.
Differentially Private Variational Inference for Non-conjugate Models
Many machine learning applications are based on data collected from people, such as their tastes and behaviour as well as biological traits and genetic data. Regardless of how important the…
Stochastic Gradient Descent with Differentially Private Updates
Differentially private versions of single-point and mini-batch stochastic gradient descent (SGD) are proposed and used to optimize the objective for logistic regression. It is concluded that the performance of mini-batch differentially private SGD is very close to that of non-private SGD, in contrast to single-point differentially private SGD, which does not converge and has high variance.
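The mini-batch DP-SGD update described above can be sketched as a single step (the function name and interface are illustrative assumptions): clip each per-example gradient, average, and add Gaussian noise before the parameter update.

```python
import numpy as np

def dp_sgd_step(params, per_example_grads, lr, clip_norm, noise_mult, rng):
    """One mini-batch DP-SGD step (sketch): clip each per-example gradient
    to L2 norm clip_norm, average, then add Gaussian noise with standard
    deviation noise_mult * clip_norm / batch_size."""
    clipped = [g * min(1.0, clip_norm / max(np.linalg.norm(g), 1e-12))
               for g in per_example_grads]
    mean_grad = np.mean(clipped, axis=0)
    noise = rng.normal(0.0, noise_mult * clip_norm / len(clipped),
                       size=mean_grad.shape)
    return params - lr * (mean_grad + noise)

rng = np.random.default_rng(2)
params = np.zeros(3)
grads = [np.array([3.0, 4.0, 0.0]),   # norm 5.0, gets clipped to 1.0
         np.array([0.1, 0.0, 0.0])]   # norm 0.1, left alone
new_params = dp_sgd_step(params, grads, lr=0.1,
                         clip_norm=1.0, noise_mult=0.0, rng=rng)
```

Clipping bounds each example's influence (the sensitivity), which is what makes the added Gaussian noise sufficient for a differential privacy guarantee.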
Deep Learning with Differential Privacy
This work develops new algorithmic techniques for learning and a refined analysis of privacy costs within the framework of differential privacy, and demonstrates that deep neural networks can be trained with non-convex objectives, under a modest privacy budget, and at a manageable cost in software complexity, training efficiency, and model quality.
Semi-supervised Knowledge Transfer for Deep Learning from Private Training Data
Private Aggregation of Teacher Ensembles (PATE) is demonstrated: it combines, in a black-box fashion, multiple models trained on disjoint datasets, such as records from different subsets of users, and achieves state-of-the-art privacy/utility trade-offs on MNIST and SVHN.
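The core of the PATE aggregation step can be sketched as a noisy-max vote over the teachers' predictions (a simplified illustration; the function name, interface, and parameter choices are assumptions, not PATE's exact implementation):

```python
import numpy as np

def noisy_label(teacher_votes, num_classes, gamma, rng):
    """PATE-style noisy-max aggregation (sketch): count each teacher's
    predicted class, add Laplace(1/gamma) noise to the counts, and return
    the argmax class. Smaller gamma means more noise and more privacy."""
    counts = np.bincount(teacher_votes, minlength=num_classes)
    noisy = counts + rng.laplace(0.0, 1.0 / gamma, size=num_classes)
    return int(np.argmax(noisy))

rng = np.random.default_rng(3)
votes = np.array([2] * 40 + [0] * 5 + [1] * 5)   # 50 hypothetical teachers
label = noisy_label(votes, num_classes=3, gamma=2.0, rng=rng)
```

Because each training record affects only one teacher, changing one record can shift each vote count by at most one, which bounds the sensitivity of the aggregation.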
Privacy-preserving deep learning
  • R. Shokri, Vitaly Shmatikov
  • Computer Science
  • 2015 53rd Annual Allerton Conference on Communication, Control, and Computing (Allerton)
  • 2015
This paper presents a practical system that enables multiple parties to jointly learn an accurate neural-network model for a given objective without sharing their input datasets, and exploits the fact that the optimization algorithms used in modern deep learning, namely, those based on stochastic gradient descent, can be parallelized and executed asynchronously.
Variational Dropout and the Local Reparameterization Trick
The variational dropout method is proposed: a generalization of Gaussian dropout with a more flexibly parameterized posterior, often leading to better generalization in stochastic gradient variational Bayes.
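The local reparameterization trick named in the title above can be sketched for a single dense layer (the function name and interface are illustrative assumptions): instead of sampling a noisy weight matrix, sample the layer's pre-activations directly from the Gaussian they would follow.

```python
import numpy as np

def gaussian_dropout_layer(x, theta, alpha, rng):
    """Local reparameterization trick (sketch): for multiplicative Gaussian
    weight noise W = theta * (1 + sqrt(alpha) * eps), eps ~ N(0, 1), the
    pre-activations x @ W are Gaussian with
        mean = x @ theta,  var = (x**2) @ (alpha * theta**2),
    so we sample them directly, lowering gradient-estimator variance."""
    mean = x @ theta
    var = (x ** 2) @ (alpha * theta ** 2)
    return mean + np.sqrt(var) * rng.normal(size=mean.shape)

rng = np.random.default_rng(4)
x = np.ones((5, 3))
theta = np.full((3, 2), 0.5)
out = gaussian_dropout_layer(x, theta, alpha=0.0, rng=rng)  # alpha=0: noise-free
```

Sampling per data point rather than per weight matrix also decorrelates the noise across a mini-batch, which is the source of the variance reduction.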
Variational Dropout Sparsifies Deep Neural Networks
Variational dropout is extended to the case where dropout rates are unbounded, a way to reduce the variance of the gradient estimator is proposed, and the first experimental results with individual dropout rates per weight are reported.
Differential Privacy Preservation for Deep Auto-Encoders: an Application of Human Behavior Prediction
The main idea is to enforce ε-differential privacy by perturbing the objective functions of the traditional deep auto-encoder, rather than its results.
Differentially Private Learning with Kernels
This paper derives differentially private learning algorithms with provable "utility" (error) bounds in the standard learning model of releasing a differentially private predictor, using three simple but practical models.