Corpus ID: 186980207

Do Different Groups Have Comparable Privacy Tradeoffs

@inproceedings{Joshaghani2018DoDG,
  title={Do Different Groups Have Comparable Privacy Tradeoffs},
  author={Rezvan Joshaghani and Michael D. Ekstrand and Bart P. Knijnenburg and Hoda Mehrpouyan},
  year={2018}
}
Personalized systems increasingly employ Privacy Enhancing Technologies (PETs) to protect the identity of their users. In this paper, we are interested in whether the cost-benefit tradeoff — the underlying economics of the privacy calculus — is fairly distributed, or whether some groups of people experience a lower return on investment for their privacy decisions. 
ReuseKNN: Neighborhood Reuse for Privacy-Aware Recommendations
TLDR
This work introduces ReuseKNN, a novel privacy-aware recommender system that can substantially reduce the number of users who need to be protected with DP while outperforming related approaches in terms of accuracy, and illustrates how to address privacy risks in recommender systems through neighborhood reuse combined with DP.
Formal specification and verification of user-centric privacy policies for ubiquitous systems
TLDR
The concept of contextual integrity is extended to provide mathematical models and algorithms that enable the creation and management of privacy norms for individual users, including the augmentation of environmental variables as part of the privacy norms.

References

Showing 1-10 of 18 references
Privacy? I Can’t Even! Making a Case for User-Tailored Privacy
TLDR
A user-tailored privacy approach makes privacy decisions less burdensome by giving users the right kind of information and the right amount of control so as to be useful but not overwhelming or misleading.
Mechanism Design via Differential Privacy
TLDR
It is shown that the recent notion of differential privacy, in addition to its own intrinsic virtue, can ensure that participants have limited effect on the outcome of the mechanism, and as a consequence have limited incentive to lie.
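To make the idea concrete, below is a minimal sketch of the exponential mechanism from this line of work; the candidate prices, bids, and sensitivity bound are illustrative assumptions, not taken from the paper. A candidate outcome is sampled with probability proportional to exp(epsilon * quality / (2 * sensitivity)), so any single participant has only a bounded influence on the outcome distribution.

import math
import random

def exponential_mechanism(candidates, quality, sensitivity, epsilon):
    """Sample a candidate with probability proportional to
    exp(epsilon * quality(c) / (2 * sensitivity)), so that any single
    participant's data has only a bounded effect on the outcome."""
    scores = [epsilon * quality(c) / (2.0 * sensitivity) for c in candidates]
    max_score = max(scores)  # shift by the max for numerical stability
    weights = [math.exp(s - max_score) for s in scores]
    return random.choices(candidates, weights=weights, k=1)[0]

# Illustrative (hypothetical) use: choose a posted price that roughly
# maximizes revenue over submitted bids; one bidder changes revenue by
# at most the largest candidate price, which bounds the sensitivity.
bids = [1.00, 1.25, 2.00, 2.00, 3.50]
prices = [1.00, 1.50, 2.00, 2.50, 3.00]
revenue = lambda p: p * sum(1 for b in bids if b >= p)
print(exponential_mechanism(prices, revenue, sensitivity=3.0, epsilon=0.5))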
E-privacy in 2nd generation E-commerce: privacy preferences versus actual behavior
TLDR
An experiment in which self-reported privacy preferences of 171 participants were compared with their actual disclosing behavior during an online shopping episode, suggesting that current approaches to protect online users' privacy may face difficulties to do so effectively.
The Algorithmic Foundations of Differential Privacy
TLDR
The preponderance of this monograph is devoted to fundamental techniques for achieving differential privacy, and application of these techniques in creative combinations, using the query-release problem as an ongoing example.
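As a pointer to the kind of fundamental technique the monograph covers, below is a minimal sketch of the Laplace mechanism for releasing a counting query under epsilon-differential privacy; the dataset, threshold, and parameter values are illustrative assumptions.

import numpy as np

def laplace_mechanism(true_answer, sensitivity, epsilon):
    """Add Laplace(sensitivity / epsilon) noise to a query answer, which
    yields epsilon-differential privacy when the answer changes by at most
    `sensitivity` between datasets differing in one record."""
    return true_answer + np.random.laplace(loc=0.0, scale=sensitivity / epsilon)

# Illustrative query release: a counting query has sensitivity 1.
ages = [23, 35, 41, 29, 52, 38]
true_count = sum(1 for a in ages if a >= 40)
print(laplace_mechanism(true_count, sensitivity=1.0, epsilon=0.5))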
Differential Privacy and Machine Learning: a Survey and Review
TLDR
This paper explores the interplay between machine learning and differential privacy, namely privacy-preserving machine learning algorithms and learning-based data release mechanisms, and describes some theoretical results that address what can be learned differentially privately and upper bounds of loss functions for differentially private algorithms.
Fairness through awareness
TLDR
A framework for fair classification is presented, comprising a (hypothetical) task-specific metric for determining the degree to which individuals are similar with respect to the classification task at hand, and an algorithm for maximizing utility subject to the fairness constraint that similar individuals are treated similarly.
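The fairness constraint in that framework is a Lipschitz condition: the statistical distance between the outcome distributions assigned to two individuals must not exceed the task-specific distance between them. Below is a minimal sketch of checking that condition; the distance metric, classifier, and records are hypothetical placeholders.

def total_variation(p, q):
    """Statistical (total variation) distance between two outcome
    distributions given as dicts over the same outcomes."""
    return 0.5 * sum(abs(p[o] - q[o]) for o in p)

def is_individually_fair(individuals, classifier, distance):
    """Check the Lipschitz condition: for every pair, the distance between
    their outcome distributions is at most their task-specific distance."""
    for i, x in enumerate(individuals):
        for y in individuals[i + 1:]:
            if total_variation(classifier(x), classifier(y)) > distance(x, y):
                return False
    return True

# Hypothetical example: credit decisions, with distance scaled from scores.
people = [{"score": 700}, {"score": 710}, {"score": 400}]
distance = lambda x, y: abs(x["score"] - y["score"]) / 300.0
approve_prob = lambda x: min(x["score"] / 800.0, 1.0)
classifier = lambda x: {"approve": approve_prob(x), "deny": 1.0 - approve_prob(x)}
print(is_individually_fair(people, classifier, distance))  # True for this data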
Dimensionality of information disclosure behavior
k-Anonymity: A Model for Protecting Privacy
L. Sweeney, Int. J. Uncertain. Fuzziness Knowl. Based Syst., 2002
TLDR
The solution provided in this paper includes a formal protection model named k-anonymity and a set of accompanying policies for deployment; the paper also examines re-identification attacks that can be realized on releases that adhere to k-anonymity unless the accompanying policies are respected.
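For reference, the k-anonymity condition itself is simple to state and check: every combination of quasi-identifier values in a release must be shared by at least k records. The sketch below is an illustrative check only; the attribute names and generalized values are assumptions, and it does not cover the accompanying deployment policies the paper describes.

from collections import Counter

def satisfies_k_anonymity(records, quasi_identifiers, k):
    """True if every combination of quasi-identifier values occurs in at
    least k records, so each record is indistinguishable from at least
    k - 1 others on those attributes."""
    groups = Counter(tuple(r[a] for a in quasi_identifiers) for r in records)
    return all(count >= k for count in groups.values())

# Illustrative release with generalized ZIP codes and age ranges.
release = [
    {"zip": "837**", "age": "20-29", "diagnosis": "flu"},
    {"zip": "837**", "age": "20-29", "diagnosis": "asthma"},
    {"zip": "837**", "age": "30-39", "diagnosis": "flu"},
    {"zip": "837**", "age": "30-39", "diagnosis": "diabetes"},
]
print(satisfies_k_anonymity(release, ["zip", "age"], k=2))  # True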
On k-Anonymity and the Curse of Dimensionality
TLDR
It is shown that the curse of high dimensionality also applies to the problem of privacy-preserving data mining: when a data set contains a large number of attributes that are open to inference attacks, it becomes difficult to anonymize the data without an unacceptably high amount of information loss.