t-Closeness: Privacy Beyond k-Anonymity and l-Diversity

@inproceedings{Li2007tClosenessPB,
  title={t-Closeness: Privacy Beyond k-Anonymity and l-Diversity},
  author={Ninghui Li and Tiancheng Li and Suresh Venkatasubramanian},
  booktitle={2007 IEEE 23rd International Conference on Data Engineering},
  year={2007},
  pages={106--115}
}
The k-anonymity privacy requirement for publishing microdata requires that each equivalence class (i.e., a set of records that are indistinguishable from each other with respect to certain "identifying" attributes) contains at least k records. [...] Key Method: We choose to use the Earth Mover's Distance measure for our t-closeness requirement. We discuss the rationale for t-closeness and illustrate its advantages through examples and experiments.
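The key method above can be sketched in code. The following is a minimal illustration, not the paper's implementation: for a categorical sensitive attribute with equal ground distances between values, the Earth Mover's Distance reduces to the variational (total variation) distance, which is what this sketch computes; the paper's ordered and hierarchical ground distances for numeric attributes are omitted. The function names and the toy "disease" data are hypothetical.

```python
from collections import Counter

def t_closeness_distance(class_values, table_values):
    """Distance between an equivalence class's sensitive-value
    distribution and the whole-table distribution.

    With equal ground distances between categorical values, EMD
    equals half the L1 distance between the two distributions.
    """
    p, q = Counter(class_values), Counter(table_values)
    n_p, n_q = len(class_values), len(table_values)
    support = set(p) | set(q)  # every sensitive value seen anywhere
    return 0.5 * sum(abs(p[v] / n_p - q[v] / n_q) for v in support)

def satisfies_t_closeness(equivalence_classes, table_values, t):
    """True iff every equivalence class is within distance t of the
    table-wide distribution of the sensitive attribute."""
    return all(
        t_closeness_distance(ec, table_values) <= t
        for ec in equivalence_classes
    )

# Toy example: both classes mirror the table's 2/3-flu, 1/3-cancer mix.
table = ["flu", "flu", "cancer", "flu", "cancer", "flu"]
classes = [["flu", "flu", "cancer"], ["flu", "cancer", "flu"]]
print(satisfies_t_closeness(classes, table, t=0.2))  # → True
```

A class containing only "cancer" would be at distance 2/3 from this table distribution, so it would violate t-closeness for any t below that value.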
Closeness: A New Privacy Measure for Data Publishing
TLDR
It is shown that ℓ-diversity has a number of limitations and is neither necessary nor sufficient to prevent attribute disclosure, and a new notion of privacy called “closeness” is proposed that offers higher utility.
A New Profile Based Privacy Measure for Data Publishing
TLDR
This paper introduces performance-based automatic data publishing to multiple users using a User Profile Category (UPC), and enhances the present flexible privacy model called (n,t)-closeness, which has a number of limitations.
Privacy Preservation Measure using t-closeness with combined l-diversity and k-anonymity
TLDR
This work proposes a unique method combining two of the most widely used privacy-preservation techniques, k-anonymity and l-diversity, and presents a new notion of privacy called "closeness".
Connecting privacy models: synergies between k-anonymity, t-closeness and differential privacy
The usual approach to generate k-anonymous data sets, based on generalization of the quasi-identifier attributes, does not provide any control on the variability of the confidential attributes within
Privacy Preservation Measurement through Diversity and Anonymity using Closeness
TLDR
This work proposes a new notion of privacy combining k-anonymity, l-diversity, and closeness, a comprehensive technique for transforming the dataset to preserve privacy while keeping the original meaning intact.
A new perspective of privacy protection: Unique distinct l-SR diversity
TLDR
A new model, Unique Distinct l-SR diversity based on the sensitivity of private information is proposed, which achieved better performance on minimizing inference of sensitive information and reached the comparable generalization data quality compared with other data publishing algorithms.
Anonymity: A Formalization of Privacy - ℓ-Diversity
Anonymization of published microdata has become a very important topic nowadays. The major difficulty is to publish data of individuals in a manner that the released table both provides enough
Constrained k-Anonymity: Privacy with Generalization Boundaries
In the last few years, due to new privacy regulations, research in data privacy has flourished. A large number of privacy models were developed, most of which are based on the k-anonymity property.
On the comparison of microdata disclosure control algorithms
TLDR
This work rejects the notion that all anonymizations satisfying a particular privacy property, such as k-anonymity, are equally good and advocates the use of vector-based methods for representing privacy and other measurable properties of an anonymization.
Clustering Heuristics for Efficient t-closeness Anonymisation
TLDR
A t-clustering algorithm with an average time complexity of \(O(m^{2} \log n)\), where n and m are the numbers of tuples and attributes respectively, is proposed; the anonymisation problem is addressed by using heuristics based on noise addition to distort the anonymised datasets while minimising information loss.

References

SHOWING 1-10 OF 24 REFERENCES
l-Diversity: Privacy Beyond k-Anonymity
TLDR
This paper shows with two simple attacks that a k-anonymized dataset has some subtle, but severe privacy problems, and proposes a novel and powerful privacy definition called ℓ-diversity, which is practical and can be implemented efficiently.
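The simplest variant of the ℓ-diversity definition above can be checked mechanically. This is a minimal sketch of distinct ℓ-diversity only; the paper also defines stronger entropy and recursive (c, ℓ) variants, which this function does not cover, and the example data is hypothetical.

```python
def is_distinct_l_diverse(equivalence_classes, l):
    """Distinct ℓ-diversity: every equivalence class must contain
    at least ℓ distinct values of the sensitive attribute."""
    return all(len(set(ec)) >= l for ec in equivalence_classes)

# Toy classes over a sensitive "disease" attribute.
classes = [["flu", "cancer", "flu"], ["flu", "hiv", "cancer"]]
print(is_distinct_l_diverse(classes, 2))  # → True
print(is_distinct_l_diverse(classes, 3))  # → False: first class has only 2
```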
Achieving k-Anonymity Privacy Protection Using Generalization and Suppression
  • L. Sweeney
  • Computer Science
    Int. J. Uncertain. Fuzziness Knowl. Based Syst.
  • 2002
TLDR
This paper provides a formal presentation of combining generalization and suppression to achieve k-anonymity, and shows that Datafly can over-distort data and µ-Argus can additionally fail to provide adequate protection.
On the complexity of optimal K-anonymity
TLDR
It is proved that two general versions of optimal k-anonymization of relations are NP-hard, including the suppression version which amounts to choosing a minimum number of entries to delete from the relation.
k-Anonymity: A Model for Protecting Privacy
  • L. Sweeney
  • Computer Science
    Int. J. Uncertain. Fuzziness Knowl. Based Syst.
  • 2002
TLDR
The solution provided in this paper includes a formal protection model named k-anonymity and a set of accompanying policies for deployment, and examines re-identification attacks that can be realized on releases that adhere to k-anonymity unless accompanying policies are respected.
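The k-anonymity property itself is easy to verify once quasi-identifiers are chosen: every combination of quasi-identifier values must occur in at least k records. The following is a minimal sketch, assuming pre-generalized records as dictionaries; the field names and starred values are illustrative, not from the paper.

```python
from collections import Counter

def is_k_anonymous(records, quasi_identifiers, k):
    """True iff every quasi-identifier combination (equivalence
    class) appears in at least k records."""
    groups = Counter(
        tuple(r[a] for a in quasi_identifiers) for r in records
    )
    return all(count >= k for count in groups.values())

# Toy release with generalized zip code and age.
rows = [
    {"zip": "479**", "age": "2*", "disease": "flu"},
    {"zip": "479**", "age": "2*", "disease": "cancer"},
    {"zip": "479**", "age": "2*", "disease": "flu"},
]
print(is_k_anonymous(rows, ["zip", "age"], k=3))  # → True
```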
Data privacy through optimal k-anonymization
  • R. Bayardo, R. Agrawal
  • Computer Science
    21st International Conference on Data Engineering (ICDE'05)
  • 2005
TLDR
This paper proposes and evaluates an optimization algorithm for the powerful de-identification procedure known as k-anonymization, and presents a new approach to exploring the space of possible anonymizations that tames the combinatorics of the problem, and develops data-management strategies to reduce reliance on expensive operations such as sorting.
Aggregate Query Answering on Anonymized Tables
TLDR
A general framework of permutation-based anonymization to support accurate answering of aggregate queries is presented, and it is shown that, for the same grouping, permutation-based techniques can always answer aggregate queries more accurately than generalization-based approaches.
Incognito: efficient full-domain K-anonymity
TLDR
A set of algorithms for producing minimal full-domain generalizations are introduced, and it is shown that these algorithms perform up to an order of magnitude faster than previous algorithms on two real-life databases.
Personalized privacy preservation
TLDR
The authors' technique performs the minimum generalization for satisfying everybody's requirements, and thus, retains the largest amount of information from the microdata, and establishes the superiority of the proposed solutions.
Protecting privacy when disclosing information: k-anonymity and its enforcement through generalization and suppression
TLDR
The concept of minimal generalization is introduced, which captures the property of the release process not to distort the data more than needed to achieve k-anonymity, and possible preference policies to choose among different minimal generalizations are illustrated.
Transforming data to satisfy privacy constraints
TLDR
This paper addresses the important issue of preserving the anonymity of the individuals or entities during the data dissemination process by the use of generalizations and suppressions on the potentially identifying portions of the data.