Desensitized RDCA Subspaces for Compressive Privacy in Machine Learning
@article{Filipowicz2017DesensitizedRS,
  title   = {Desensitized RDCA Subspaces for Compressive Privacy in Machine Learning},
  author  = {Artur Filipowicz and Thee Chanyaswad and S. Y. Kung},
  journal = {ArXiv},
  year    = {2017},
  volume  = {abs/1707.07770}
}
The quest for better data analysis and artificial intelligence has led to more and more data being collected and stored. As a consequence, more data are exposed to malicious entities. This paper examines the problem of privacy in machine learning for classification. We utilize Ridge Discriminant Component Analysis (RDCA) to desensitize data with respect to a privacy label. Based on five experiments, we show that desensitization by RDCA can effectively protect privacy (i.e., low accuracy on…
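To make the idea concrete, the following is a minimal NumPy sketch of one way an RDCA-style desensitization step could look, assuming the usual ridge-regularized generalized eigenproblem between the between-class and within-class scatter matrices computed for the privacy label. The function name rdca_desensitize, the ridge parameter rho, and the rule of keeping the least privacy-discriminant directions are illustrative assumptions, not the authors' exact algorithm.

    import numpy as np
    from scipy.linalg import eigh

    def rdca_desensitize(X, privacy_labels, n_keep, rho=1.0):
        """Sketch of RDCA-style desensitization (assumed formulation).

        Solves the ridge-regularized generalized eigenproblem
        S_B v = lambda * (S_W + rho * I) v for the *privacy* label and keeps
        the n_keep directions with the lowest discriminant power, so that the
        privacy classes become hard to separate in the projected space.
        """
        X = np.asarray(X, dtype=float)
        privacy_labels = np.asarray(privacy_labels)
        X = X - X.mean(axis=0)                 # center the features
        d = X.shape[1]
        S_W = np.zeros((d, d))                 # within-class scatter (privacy label)
        S_B = np.zeros((d, d))                 # between-class scatter (privacy label)
        for c in np.unique(privacy_labels):
            Xc = X[privacy_labels == c]
            mu_c = Xc.mean(axis=0)
            S_W += (Xc - mu_c).T @ (Xc - mu_c)
            S_B += len(Xc) * np.outer(mu_c, mu_c)
        # Generalized eigendecomposition; eigh returns eigenvalues in ascending
        # order, so the first columns are the least privacy-discriminant
        # directions -- the desensitized subspace.
        _, evecs = eigh(S_B, S_W + rho * np.eye(d))
        W = evecs[:, :n_keep]
        return X @ W, W

Under these assumptions, the projected features X @ W would be handed to the utility classifier, while a privacy classifier trained on the same projection should perform close to random guessing, which is the kind of behaviour (low accuracy on the privacy label) that the paper's five experiments measure.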
References
Showing 10 of 22 references.
Collaborative PCA/DCA Learning Methods for Compressive Privacy
- Computer Science, ACM Trans. Embed. Comput. Syst., 2017
Compressive privacy is proposed, a privacy-preserving technique to enable the data creator to compress data via collaborative learning so that the compressed data uploaded onto the Internet will be useful only for the intended utility and not be easily diverted to malicious applications.
Ratio Utility and Cost Analysis for Privacy Preserving Subspace Projection
- Computer Science, ArXiv, 2017
A compressive-privacy based method, namely RUCA (Ratio Utility and Cost Analysis), is proposed, which can not only maximize performance for a privacy-insensitive classification task but also minimize the ability of any classifier to infer private information from the data.
A compressive multi-kernel method for privacy-preserving machine learning
- Computer Science, International Joint Conference on Neural Networks (IJCNN), 2017
The results show that the compression regime is successful in privacy preservation as the privacy classification accuracies are almost at the random-guess level in all experiments, and the novel SNR-based multi-kernel shows utility classification accuracy improvement upon the state-of-the-art in both datasets.
Discriminant-component eigenfaces for privacy-preserving face recognition
- Computer Science, IEEE 26th International Workshop on Machine Learning for Signal Processing (MLSP), 2016
This work leverages a supervised-learning subspace projection method called Discriminant Component Analysis (DCA) for privacy-preserving face recognition, and can serve as a key enabler for real-world deployment of privacy-preserving face recognition applications.
On the design and quantification of privacy preserving data mining algorithms
- Computer Science, PODS '01, 2001
It is proved that the EM algorithm converges to the maximum likelihood estimate of the original distribution based on the perturbed data, and metrics for the quantification and measurement of privacy-preserving data mining algorithms are proposed.
Compressive Privacy: From Information/Estimation Theory to Machine Learning [Lecture Notes]
- Computer Science, IEEE Signal Processing Magazine, 2017
A new paradigm known as online privacy or Internet privacy is becoming a major concern regarding the privacy of personal and sensitive data.
Privacy-preserving data mining
- Computer Science, SIGMOD '00, 2000
This work considers the concrete case of building a decision-tree classifier from training data in which the values of individual records have been perturbed and proposes a novel reconstruction procedure to accurately estimate the distribution of original data values.
Privacy preserving data classification with rotation perturbation
- Computer Science, Fifth IEEE International Conference on Data Mining (ICDM'05), 2005
Several perturbation techniques have been proposed recently, among which the most typical are the randomization approach (Agrawal and Srikant, 2000) and the condensation approach (Aggarwal and Yu, 2004).
Privacy-Preserving Ridge Regression on Hundreds of Millions of Records
- Computer Science, Mathematics, IEEE Symposium on Security and Privacy, 2013
This work implements the complete system and experiments with it on real data-sets, and shows that it significantly outperforms pure implementations based only on homomorphic encryption or Yao circuits.
On the Design and Analysis of the Privacy-Preserving SVM Classifier
- Computer Science, IEEE Transactions on Knowledge and Data Engineering, 2011
This paper proposes an approach that post-processes the SVM classifier to transform it into a privacy-preserving classifier which does not disclose the private content of support vectors, and introduces the Privacy-Preserving SVM Classifier (abbreviated as PPSVC), designed for the commonly used Gaussian kernel function.