Corpus ID: 211678154

Differentially Private Deep Learning with Smooth Sensitivity

@article{Sun2020DifferentiallyPD,
  title={Differentially Private Deep Learning with Smooth Sensitivity},
  author={Lichao Sun and Yingbo Zhou and Philip S. Yu and Caiming Xiong},
  journal={ArXiv},
  year={2020},
  volume={abs/2003.00505}
}
Ensuring the privacy of sensitive data used to train modern machine learning models is of paramount importance in many areas of practice. One approach to studying these concerns is through the lens of differential privacy. In this framework, privacy guarantees are generally obtained by perturbing models in such a way that specifics of the data used to train the model are made ambiguous. A particular instance of this approach is the "teacher-student" framework, wherein the teacher, who owns the…
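The abstract is truncated here, and only the title mentions smooth sensitivity. For orientation, this is the standard β-smooth sensitivity from the prior literature (Nissim, Raskhodnikova, and Smith), which the paper presumably builds on; it is included as background, not as this paper's definition:

```latex
% Beta-smooth sensitivity of a function f at a dataset x.
% LS_f(y) is the local sensitivity of f at dataset y; d(x, y) is the Hamming distance.
S^{*}_{f,\beta}(x) \;=\; \max_{y} \; LS_f(y)\, e^{-\beta\, d(x, y)}
```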
Citations

Federated Model Distillation with Noise-Free Differential Privacy
TLDR: A novel framework called FedMD-NFDP is proposed, which applies the newly proposed Noise-Free Differential Privacy (NFDP) mechanism to a federated model distillation framework and can effectively protect the privacy of local data with minimal sacrifice of model utility.
Secure Deep Graph Generation with Link Differential Privacy
TLDR: This paper leverages the differential privacy (DP) framework to formulate and enforce rigorous privacy constraints on deep graph generation models, with a focus on edge-DP to guarantee individual link privacy.

References

Showing 1-10 of 34 references
Semi-supervised Knowledge Transfer for Deep Learning from Private Training Data
TLDR: Private Aggregation of Teacher Ensembles (PATE) is demonstrated, combining, in a black-box fashion, multiple models trained with disjoint datasets, such as records from different subsets of users; it achieves state-of-the-art privacy/utility trade-offs on MNIST and SVHN.
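For context, a minimal sketch of the PATE-style noisy label aggregation this summary alludes to, assuming a Laplace "noisy max" over teacher votes; the function and parameter names are illustrative, not the paper's:

```python
import numpy as np

def noisy_max_aggregate(teacher_votes, num_classes, gamma, rng=None):
    """PATE-style noisy-max aggregation (illustrative sketch).

    teacher_votes: per-teacher predicted labels for one unlabeled query.
    gamma: inverse Laplace noise scale; larger gamma means less noise and weaker privacy.
    """
    rng = rng or np.random.default_rng()
    counts = np.bincount(teacher_votes, minlength=num_classes).astype(float)
    counts += rng.laplace(loc=0.0, scale=1.0 / gamma, size=num_classes)
    return int(np.argmax(counts))

# Example: 250 teachers voting over 10 classes for one query.
votes = np.random.default_rng(0).integers(0, 10, size=250)
label = noisy_max_aggregate(votes, num_classes=10, gamma=0.05)
```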
Scalable Private Learning with PATE
TLDR: This work shows how PATE can scale to learning tasks with large numbers of output classes and uncurated, imbalanced training data with errors; it introduces new noisy aggregation mechanisms for teacher ensembles that are more selective and add less noise, and proves tighter differential-privacy guarantees for them.
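A minimal sketch of the kind of selective, Gaussian-noise aggregation described here, in the spirit of the paper's confident aggregator; the threshold and noise scales are illustrative assumptions, not the paper's values:

```python
import numpy as np

def confident_gnmax(counts, threshold, sigma_check, sigma_answer, rng=None):
    """Selective Gaussian-noise aggregation (illustrative sketch).

    counts: per-class teacher vote counts for one query.
    A query is answered only when the noisily checked top vote count clears the
    threshold, so hard, privacy-expensive queries are skipped entirely.
    """
    rng = rng or np.random.default_rng()
    if counts.max() + rng.normal(0.0, sigma_check) < threshold:
        return None  # abstain: teachers disagree too much
    noisy = counts + rng.normal(0.0, sigma_answer, size=counts.shape)
    return int(np.argmax(noisy))
```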
PATE-GAN: Generating Synthetic Data with Differential Privacy Guarantees
TLDR: This paper investigates a method for ensuring (differential) privacy of the generator of the Generative Adversarial Nets (GAN) framework, modifying the Private Aggregation of Teacher Ensembles (PATE) framework and applying it to GANs.
Deep Learning with Differential Privacy
TLDR: This work develops new algorithmic techniques for learning and a refined analysis of privacy costs within the framework of differential privacy, and demonstrates that deep neural networks can be trained with non-convex objectives, under a modest privacy budget, and at a manageable cost in software complexity, training efficiency, and model quality.
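A minimal, framework-agnostic sketch of the per-example gradient clipping plus Gaussian noise step at the core of DP-SGD, as described in this summary; the function name and hyperparameters are placeholders:

```python
import numpy as np

def dp_sgd_step(per_example_grads, clip_norm, noise_multiplier, lr, params, rng=None):
    """One DP-SGD update (illustrative sketch).

    per_example_grads: array of shape (batch, dim), one gradient per example.
    Each gradient is clipped to L2 norm <= clip_norm, the clipped gradients are
    summed, Gaussian noise scaled by noise_multiplier * clip_norm is added, and
    the noisy mean is used for a plain SGD step.
    """
    rng = rng or np.random.default_rng()
    norms = np.linalg.norm(per_example_grads, axis=1, keepdims=True)
    clipped = per_example_grads * np.minimum(1.0, clip_norm / (norms + 1e-12))
    batch = per_example_grads.shape[0]
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=params.shape)
    noisy_mean_grad = (clipped.sum(axis=0) + noise) / batch
    return params - lr * noisy_mean_grad
```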
Differentially Private Empirical Risk Minimization
TLDR: This work proposes a new method, objective perturbation, for privacy-preserving machine learning algorithm design, and shows both theoretically and empirically that this method is superior to the previous state of the art, output perturbation, in managing the inherent trade-off between privacy and learning performance.
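A minimal sketch of objective perturbation for regularized ERM in the style this summary describes: a random linear term is added to the training objective before optimization. The logistic-loss setup and the noise calibration below are simplified for illustration and are not the paper's exact conditions:

```python
import numpy as np
from scipy.optimize import minimize

def objective_perturbation_logreg(X, y, lam, epsilon, rng=None):
    """Train logistic regression with a randomly perturbed objective (sketch).

    X: (n, d) features with rows scaled to norm <= 1; y: labels in {-1, +1}.
    A noise vector b with Gamma-distributed norm and uniform random direction is
    drawn, and the term (b . w) / n is added to the regularized empirical risk.
    """
    rng = rng or np.random.default_rng()
    n, d = X.shape
    direction = rng.normal(size=d)
    direction /= np.linalg.norm(direction)
    b = rng.gamma(shape=d, scale=2.0 / epsilon) * direction  # ||b|| ~ Gamma(d, 2/eps)

    def objective(w):
        margins = y * (X @ w)
        loss = np.mean(np.log1p(np.exp(-margins)))
        return loss + 0.5 * lam * w @ w + (b @ w) / n

    return minimize(objective, np.zeros(d), method="L-BFGS-B").x
```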
Bounds on the sample complexity for private learning and private data release
TLDR: This work examines several private learning tasks and gives tight bounds on their sample complexity, showing strong separations between the sample complexities of proper and improper private learners (no such separation exists for non-private learners) and between the sample complexities of efficient and inefficient proper private learners.
PDLM: Privacy-Preserving Deep Learning Model on Cloud with Multiple Keys
TLDR: This paper proposes a novel privacy-preserving deep learning model, PDLM, to apply deep learning over data encrypted under multiple keys, proves that it preserves users’ privacy, and analyzes the efficiency of PDLM theoretically.
Multiparty Differential Privacy via Aggregation of Locally Trained Classifiers
TLDR: This paper proposes a privacy-preserving protocol for composing a differentially private aggregate classifier from classifiers trained locally by separate, mutually untrusting parties, and presents a proof of differential privacy for the perturbed aggregate classifier and a bound on the excess risk introduced by the perturbation.
Learning Differentially Private Recurrent Language Models
TLDR: This work builds on recent advances in training deep networks on user-partitioned data and in privacy accounting for stochastic gradient descent, adding user-level privacy protection to the federated averaging algorithm, which makes "large-step" updates from user-level data.
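A minimal sketch of the user-level mechanism this summary describes: per-user updates are clipped, summed, and perturbed with Gaussian noise before the average is applied to the global model. The function name and noise calibration are illustrative assumptions:

```python
import numpy as np

def dp_federated_average(user_updates, clip_norm, noise_multiplier, rng=None):
    """Aggregate per-user model updates with user-level DP noise (sketch).

    user_updates: list of flat update vectors, one per participating user.
    Each update is clipped to L2 norm <= clip_norm; Gaussian noise scaled by
    noise_multiplier * clip_norm is added to the sum before averaging.
    """
    rng = rng or np.random.default_rng()
    clipped = []
    for u in user_updates:
        norm = np.linalg.norm(u)
        clipped.append(u * min(1.0, clip_norm / (norm + 1e-12)))
    total = np.sum(clipped, axis=0)
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=total.shape)
    return (total + noise) / len(user_updates)
```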
Learning privately from multiparty data
TLDR: This work proposes to transfer the "knowledge" of a local classifier ensemble by first creating labeled data from auxiliary unlabeled data and then training a global ε-differentially private classifier; it shows that majority voting is too sensitive and proposes a new risk weighted by class probabilities estimated from the ensemble.