Equity and Privacy: More Than Just a Tradeoff

@article{Pujol2021EquityAP,
  title={Equity and Privacy: More Than Just a Tradeoff},
  author={David Pujol and Ashwin Machanavajjhala},
  journal={IEEE Security \& Privacy},
  year={2021},
  volume={19},
  pages={93--97}
}
Organizations large and small collect information about individuals and groups and then want to share insights from this data to inform research and policy making. For instance, hospitals release deidentified data to innovate on disease detection and mitigation. Internet companies train and release machine learning (ML) models as a service for a variety of classification tasks. Government agencies distribute statistical data to enable research and policy making. However, releasing data, even in… 
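
The disparity at the heart of the article can be illustrated in a few lines. The sketch below is a minimal illustration, not code from the paper: it applies the standard Laplace mechanism to two hypothetical group counts (the group sizes and privacy budget are assumptions chosen for illustration) and compares relative error, showing why noise of a fixed scale burdens small groups far more.

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical group counts: one large, one small.
true_counts = {"majority_group": 50_000, "minority_group": 500}
epsilon = 0.1  # privacy budget; a count query has sensitivity 1

for group, count in true_counts.items():
    # Laplace mechanism: noise scale = sensitivity / epsilon,
    # independent of the group's size.
    noise = rng.laplace(loc=0.0, scale=1.0 / epsilon, size=10_000)
    rel_err = np.mean(np.abs(noise)) / count
    print(f"{group}: mean relative error ~ {rel_err:.2%}")

The absolute noise is identical for both groups, so any decision thresholded on the noisy count (for example, a funding allocation) is far less reliable for the smaller group.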

References

Decision Making with Differential Privacy under a Fairness Lens
Shows that, when decisions take differentially private data as input, the noise added to achieve privacy disproportionately impacts some groups over others.
Fair decision making using privacy-protected data
A first-of-its-kind study of how formally private mechanisms (based on differential privacy) affect fair and equitable decision making, examining the resulting tradeoffs in two real-world decisions made using U.S. Census data as well as a classic apportionment problem.
Differentially Private and Fair Deep Learning: A Lagrangian Dual Approach
Studies a model that protects the privacy of individuals' sensitive information while still learning nondiscriminatory predictors, relying on differential privacy and Lagrangian duality to design neural networks that accommodate fairness constraints while guaranteeing the privacy of sensitive attributes.
Differential Privacy Has Disparate Impact on Model Accuracy
Demonstrates that in neural networks trained with differentially private stochastic gradient descent (DP-SGD), accuracy drops much more for underrepresented classes and subgroups, resulting in a disparate reduction of model accuracy.
Differentially Private Fair Learning
Shows that new tradeoffs between fairness, accuracy, and privacy emerge only when all three properties are required, and that these tradeoffs can be milder if group membership may be used at test time.
On the Compatibility of Privacy and Fairness
Investigates whether privacy and fairness can be achieved simultaneously by a single classifier in several different models, and gives an efficient classification algorithm that maintains utility and satisfies both privacy and approximate fairness with high probability.
Revealing information while preserving privacy
Gives a polynomial-time algorithm that reconstructs data from noisy (perturbed) subset sums and shows that, to achieve privacy, one has to add perturbation of magnitude Ω(√n).
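
The Ω(√n) bound above has a constructive side: given many subset-sum answers whose error stays well below √n, most of the private bits can be recovered by linear programming. The following toy sketch illustrates that style of reconstruction attack; it is an assumed, simplified setup (parameter values chosen for illustration), not the paper's algorithm verbatim.

import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(1)
n, m = 32, 256                        # n secret bits, m random subset queries
secret = rng.integers(0, 2, size=n)   # private database of 0/1 values

queries = rng.integers(0, 2, size=(m, n))   # random subsets as 0/1 rows
bound = 0.2 * np.sqrt(n)                    # per-query error, well below sqrt(n)
answers = queries @ secret + rng.uniform(-bound, bound, size=m)

# Find any x in [0,1]^n consistent with every answer up to `bound`:
#   queries @ x <= answers + bound   and   queries @ x >= answers - bound
A_ub = np.vstack([queries, -queries])
b_ub = np.concatenate([answers + bound, -(answers - bound)])
res = linprog(c=np.zeros(n), A_ub=A_ub, b_ub=b_ub, bounds=[(0, 1)] * n)

recovered = (res.x > 0.5).astype(int)
print("fraction of bits recovered:", (recovered == secret).mean())

Rounding the LP solution typically recovers nearly every bit, which is why a mechanism that adds perturbation of magnitude o(√n) cannot preserve privacy.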