Corpus ID: 235624036

When Differential Privacy Meets Interpretability: A Case Study

@article{Naidu2021WhenDP,
  title={When Differential Privacy Meets Interpretability: A Case Study},
  author={Rakshit Naidu and Aman Priyanshu and Aadith Kumar and Sasikanth Kotti and Haofan Wang and FatemehSadat Mireshghallah},
  journal={ArXiv},
  year={2021},
  volume={abs/2106.13203}
}
Given the increase in the use of personal data for training Deep Neural Networks (DNNs) in tasks such as medical imaging and diagnosis, differentially private training of DNNs is surging in importance, and there is a large body of work focusing on providing a better privacy-utility trade-off. However, little attention has been given to the interpretability of these models, and to how the application of DP affects the quality of interpretations. We propose an extensive study into the effects of DP training… 
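As a concrete illustration of the setup the abstract describes, the sketch below trains a toy classifier with DP-SGD and then computes a plain input-gradient saliency map, the kind of interpretation whose quality under DP the study examines. It assumes the Opacus library as one common DP-SGD implementation; the model, data, and every hyperparameter are placeholders, not the paper's actual configuration.

import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset
from opacus import PrivacyEngine

# Toy data and model standing in for a medical-imaging classifier.
x = torch.randn(512, 1, 28, 28)
y = torch.randint(0, 2, (512,))
loader = DataLoader(TensorDataset(x, y), batch_size=64)
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 2))
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
criterion = nn.CrossEntropyLoss()

# Wrap model, optimizer and loader so every step clips per-sample gradients
# and adds Gaussian noise (DP-SGD); noise_multiplier and max_grad_norm are
# illustrative values.
privacy_engine = PrivacyEngine()
model, optimizer, loader = privacy_engine.make_private(
    module=model, optimizer=optimizer, data_loader=loader,
    noise_multiplier=1.1, max_grad_norm=1.0,
)

for xb, yb in loader:                              # one private epoch
    optimizer.zero_grad()
    criterion(model(xb), yb).backward()
    optimizer.step()
print("epsilon spent:", privacy_engine.get_epsilon(delta=1e-5))

# Input-gradient saliency for one example: how the DP-trained model's score
# for class 1 responds to each pixel. Comparing such maps between private
# and non-private models is the kind of question the study asks.
sample = x[:1].clone().requires_grad_(True)
model(sample)[0, 1].backward()
saliency = sample.grad.abs().squeeze()
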
2 Citations


A Comprehensive Analysis of Privacy Protection Techniques Developed for COVID-19 Pandemic

An extensive review of the privacy-protection techniques (PPTs) recently proposed to address the diverse privacy requirements and concerns stemming from the COVID-19 pandemic, as well as the paradigm shifts in personal data handling the pandemic has brought about.

Tensions Between the Proxies of Human Values in AI

It is argued that the AI community needs to consider all the consequences of choosing certain formulations of these pillars—not just the technical incompatibilities, but also the effects within the context of deployment.

References

SHOWING 1-10 OF 20 REFERENCES

Benchmarking Differentially Private Residual Networks for Medical Imagery

This paper compares two robust differential privacy mechanisms, Local-DP and DP-SGD, benchmarks their performance on medical imagery records, and analyzes the trade-off between the model's accuracy and the level of privacy it guarantees.

Deep Learning with Differential Privacy

This work develops new algorithmic techniques for learning and a refined analysis of privacy costs within the framework of differential privacy, and demonstrates that deep neural networks can be trained with non-convex objectives, under a modest privacy budget, and at a manageable cost in software complexity, training efficiency, and model quality.
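The essence of the DP-SGD algorithm this reference introduces is a per-example gradient clip followed by Gaussian noise on the summed gradients. A minimal NumPy sketch of that single step, with illustrative constants and a generic per_example_grads array assumed to be supplied by the caller:

import numpy as np

def dp_sgd_step(params, per_example_grads, lr=0.1, clip_norm=1.0,
                noise_multiplier=1.1, rng=None):
    """One DP-SGD update; per_example_grads has shape (batch_size, num_params)."""
    if rng is None:
        rng = np.random.default_rng()
    # Clip each example's gradient to norm at most clip_norm.
    norms = np.linalg.norm(per_example_grads, axis=1, keepdims=True)
    clipped = per_example_grads * np.minimum(1.0, clip_norm / np.maximum(norms, 1e-12))
    # Add Gaussian noise scaled by the clipping norm and the noise multiplier.
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=params.shape)
    noisy_mean = (clipped.sum(axis=0) + noise) / len(per_example_grads)
    return params - lr * noisy_mean
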

Chasing Your Long Tails: Differentially Private Prediction in Health Care Settings

This paper uses state-of-the-art methods for DP learning to train privacy-preserving models on clinical prediction tasks, including classification of x-ray images and mortality prediction from time-series data, and uses these models to perform a comprehensive empirical investigation of the trade-offs between privacy, utility, robustness to dataset shift, and fairness.

Model Explanations with Differential Privacy

An adaptive differentially private algorithm is designed that finds the minimal privacy budget required to produce accurate feature-based explanations of black-box models; such explanations locally approximate the model around the point of interest using potentially sensitive data.

End-to-end privacy preserving deep learning on multi-institutional medical imaging

PriMIA (Privacy-preserving Medical Image Analysis), a free, open-source software framework for differentially private, securely aggregated federated learning and encrypted inference on medical imaging data, is presented.

Differentially Private Learning Needs Better Features (or Much More Data)

This work introduces simple yet strong baselines for differentially private learning that can inform the evaluation of future progress in this area and shows that private learning requires either much more private data, or access to features learned on public data from a similar domain.

U-Noise: Learnable Noise Masks for Interpretable Image Segmentation

This work introduces a new method for interpreting image segmentation models by learning regions of images in which noise can be applied without hindering downstream model performance, and applies this method to segmentation of the pancreas in CT scans.
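The summary above already contains the recipe: perturb the input with learnable, spatially varying noise and see where a frozen segmentation model can tolerate it. The sketch below is one plausible PyTorch formulation under assumptions of this write-up (sigmoid mask, additive Gaussian noise, a simple coverage bonus weighted by lam); it is not the paper's exact architecture or loss.

import torch
import torch.nn as nn

def noise_mask_interpretation(frozen_model, image, target, steps=200, lam=1.0):
    """image: (1, C, H, W); target: (1, H, W) integer class map."""
    for p in frozen_model.parameters():
        p.requires_grad_(False)                                  # only the mask is trained
    mask_logits = torch.zeros_like(image, requires_grad=True)    # one logit per pixel
    opt = torch.optim.Adam([mask_logits], lr=0.05)
    seg_loss = nn.CrossEntropyLoss()
    for _ in range(steps):
        mask = torch.sigmoid(mask_logits)                        # noise level in (0, 1)
        noisy = image * (1 - mask) + torch.randn_like(image) * mask
        # Keep segmentation accurate while rewarding as much noise as possible.
        loss = seg_loss(frozen_model(noisy), target) - lam * mask.mean()
        opt.zero_grad(); loss.backward(); opt.step()
    # Pixels that cannot tolerate noise (low mask) are the important ones.
    return 1 - torch.sigmoid(mask_logits).detach()
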

Differentially Private Empirical Risk Minimization

This work proposes a new method, objective perturbation, for privacy-preserving machine learning algorithm design, and shows both theoretically and empirically that this method is superior to the previous state of the art, output perturbation, in managing the inherent trade-off between privacy and learning performance.
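Objective perturbation randomizes the training objective itself rather than the trained model's output. Below is a simplified NumPy sketch for L2-regularized logistic regression; the noise-norm distribution follows the standard construction, but the loss conditions and epsilon bookkeeping needed for a formal guarantee are omitted, so treat every constant as illustrative.

import numpy as np

def objective_perturbation_logreg(X, y, eps=1.0, lam=0.1, lr=0.1, steps=500, rng=None):
    """X: (n, d) features; y: labels in {-1, +1}."""
    if rng is None:
        rng = np.random.default_rng()
    n, d = X.shape
    # Noise vector b with density proportional to exp(-(eps/2) * ||b||):
    # uniform direction, norm drawn from Gamma(d, 2/eps).
    direction = rng.normal(size=d)
    direction /= np.linalg.norm(direction)
    b = direction * rng.gamma(shape=d, scale=2.0 / eps)
    w = np.zeros(d)
    for _ in range(steps):                        # plain gradient descent
        margins = y * (X @ w)
        sigm = 1.0 / (1.0 + np.exp(-margins))
        grad_loss = -(X * (y * (1.0 - sigm))[:, None]).mean(axis=0)
        w -= lr * (grad_loss + lam * w + b / n)   # gradient of the perturbed objective
    return w
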

Local Differential Privacy: a tutorial

An overview of LDP algorithms for problems such as locally private heavy-hitter identification and spatial data collection is given, along with an outlook on open problems in LDP.
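Randomized response is the canonical epsilon-LDP primitive such a tutorial builds on: each user perturbs their own bit before reporting it, and the aggregator debiases the noisy counts. A small self-contained sketch (the 30% prevalence and epsilon = 1 are arbitrary example values):

import numpy as np

def randomize_bit(bit, eps, rng):
    # Report the true bit with probability e^eps / (e^eps + 1), else flip it.
    p_truth = np.exp(eps) / (np.exp(eps) + 1)
    return bit if rng.random() < p_truth else 1 - bit

def estimate_fraction(reports, eps):
    # Debias: E[report] = (2p - 1) * f + (1 - p), solve for the true fraction f.
    p = np.exp(eps) / (np.exp(eps) + 1)
    return (np.mean(reports) - (1 - p)) / (2 * p - 1)

rng = np.random.default_rng(0)
true_bits = (rng.random(10_000) < 0.3).astype(int)        # 30% hold the attribute
reports = np.array([randomize_bit(b, eps=1.0, rng=rng) for b in true_bits])
print(estimate_fraction(reports, eps=1.0))                  # close to 0.3
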

Improving the Gaussian Mechanism for Differential Privacy: Analytical Calibration and Optimal Denoising

An optimal Gaussian mechanism is developed whose variance is calibrated directly using the Gaussian cumulative distribution function instead of a tail-bound approximation, and which is equipped with a post-processing step based on adaptive estimation techniques that leverages the fact that the distribution of the perturbation is known.
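The calibration idea can be sketched numerically: rather than applying the classical tail-bound formula sigma = Delta * sqrt(2 ln(1.25/delta)) / eps, search for the smallest sigma whose exact (eps, delta) guarantee, written through the Gaussian CDF, holds. The condition below follows the characterization attributed to this reference but should be checked against it; the search bounds and tolerance are arbitrary.

import numpy as np
from scipy.stats import norm

def gaussian_mech_delta(sigma, eps, sensitivity=1.0):
    """Exact delta achieved by the Gaussian mechanism at a given sigma."""
    a = sensitivity / (2 * sigma)
    b = eps * sigma / sensitivity
    return norm.cdf(a - b) - np.exp(eps) * norm.cdf(-a - b)

def calibrate_sigma(eps, delta, sensitivity=1.0, tol=1e-6):
    """Smallest sigma satisfying the exact (eps, delta) condition, by binary search."""
    lo, hi = 1e-6, 1e6
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if gaussian_mech_delta(mid, eps, sensitivity) > delta:
            lo = mid            # too little noise: increase sigma
        else:
            hi = mid
    return hi

eps, delta = 0.5, 1e-5
classical = np.sqrt(2 * np.log(1.25 / delta)) / eps
print(calibrate_sigma(eps, delta), "vs classical bound", classical)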