DP2-VAE: Differentially Private Pre-trained Variational Autoencoders

@article{Jiang2022DP2VAEDP,
  title={DP2-VAE: Differentially Private Pre-trained Variational Autoencoders},
  author={Dihong Jiang and Guojun Zhang and Mahdi Karami and Xi Chen and Yunfeng Shao and Yaoliang Yu},
  journal={ArXiv},
  year={2022},
  volume={abs/2208.03409}
}
Modern machine learning systems achieve great success when trained on large datasets. However, these datasets usually contain sensitive information (e.g., medical records, face images), leading to serious privacy concerns. Differentially private generative models (DPGMs) emerge as a solution to circumvent such privacy concerns by generating privatized sensitive data. As with other differentially private (DP) learners, the major challenge for DPGMs is how to achieve a subtle balance…
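The standard way to train such DP learners is DP-SGD (Abadi et al.): clip each per-example gradient and add calibrated Gaussian noise before the update. As an illustration of that generic mechanism (not the paper's specific pre-training scheme), here is a minimal NumPy sketch of one DP-SGD step for logistic regression; the function name and hyperparameters are hypothetical:

```python
import numpy as np

def dp_sgd_step(w, X, y, lr=0.1, clip=1.0, noise_multiplier=1.0, rng=None):
    """One DP-SGD step for logistic regression (illustrative sketch):
    clip each per-example gradient to L2 norm <= `clip`, sum, add
    Gaussian noise scaled by `noise_multiplier * clip`, then update."""
    rng = rng if rng is not None else np.random.default_rng(0)
    n = len(X)
    # per-example gradients of the logistic loss w.r.t. w
    p = 1.0 / (1.0 + np.exp(-(X @ w)))
    per_ex_grads = (p - y)[:, None] * X                # shape (n, d)
    # clip each example's gradient to norm at most `clip`
    norms = np.linalg.norm(per_ex_grads, axis=1, keepdims=True)
    per_ex_grads = per_ex_grads / np.maximum(1.0, norms / clip)
    # sum, add calibrated Gaussian noise, and average
    noisy_sum = per_ex_grads.sum(axis=0) + rng.normal(
        0.0, noise_multiplier * clip, size=w.shape)
    return w - lr * noisy_sum / n
```

For a DPGM such as a DP-trained VAE, the same clip-and-noise step would be applied to the gradients of the encoder/decoder parameters during training.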
