Uncertainty-Autoencoder-Based Privacy and Utility Preserving Data Type Conscious Transformation

Bishwas Mandal, George T. Amariucai, Shuangqing Wei
2022 International Joint Conference on Neural Networks (IJCNN)
We propose an adversarial learning framework that addresses the privacy-utility tradeoff problem under two types of conditions: data-type ignorant and data-type aware. Under data-type aware conditions, the privacy mechanism provides a one-hot encoding of categorical features, representing exactly one class, while under data-type ignorant conditions the categorical variables are represented by a collection of scores, one for each class. We use a neural network architecture consisting of a…
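The two release conventions described in the abstract can be sketched as follows; `release_categorical` and its arguments are illustrative names, not from the paper:

```python
import numpy as np

def softmax(logits):
    # Numerically stable softmax over a 1-D score vector.
    z = logits - logits.max()
    e = np.exp(z)
    return e / e.sum()

def release_categorical(logits, data_type_aware):
    # Data-type aware: emit a valid one-hot vector (exactly one class set).
    # Data-type ignorant: emit the raw per-class scores instead.
    scores = softmax(np.asarray(logits, dtype=float))
    if data_type_aware:
        return np.eye(scores.size)[scores.argmax()]
    return scores
```

For example, `release_categorical([2.0, 0.5, -1.0], True)` yields the one-hot vector `[1., 0., 0.]`, while the data-type-ignorant variant returns a full probability vector over the three classes.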

Application-driven Privacy-preserving Data Publishing with Correlated Attributes

This work proposes a novel framework called PR-GAN that offers a privacy-preserving mechanism using generative adversarial networks, and shows that it outperforms conventional methods in effectively hiding the sensitive attributes while guaranteeing high performance in the target application.

Adversarial Learning of Privacy-Preserving and Task-Oriented Representations

This work proposes an adversarial reconstruction learning framework that prevents the latent representations of deep networks from being decoded back into the original input data through model inversion attacks.

Learning Privacy Preserving Encodings Through Adversarial Training

This work presents a framework to learn privacy-preserving encodings of images that inhibit inference of chosen private attributes, while allowing recovery of other desirable information, and finds that it yields encoders that are resilient in maintaining privacy.

Privacy-Preserving Deep Action Recognition: An Adversarial Learning Framework and A New Dataset

An innovative heuristic of cross-dataset training and evaluation is discussed, enabling the use of multiple single-task datasets (one with target-task labels, the other with privacy labels) in privacy-preserving, video-based action recognition with deep learning.

Distributed generation of privacy preserving data with user customization

This work introduces a decoupling of the creation of a latent representation and the privatization of data that allows user-specific privatization to occur in a distributed setting with limited computation and minimal disturbance to the utility of the data.

SensitiveNets: Learning Agnostic Representations with Application to Face Images

A novel privacy-preserving neural network feature representation to suppress the sensitive information of a learned space while maintaining the utility of the data, based on an adversarial regularizer that introduces a sensitive information removal function in the learning objective.

Towards Privacy-Preserving Visual Recognition via Adversarial Training: A Pilot Study

This paper aims to improve privacy-preserving visual recognition, an increasingly demanded feature in smart camera applications, by formulating a unique adversarial training framework.

Censoring Representations with an Adversary

This work formulates the adversarial model as a minimax problem, optimizes that minimax objective using a stochastic gradient alternating min-max optimizer, demonstrates the ability to provide discrimination-free representations for standard test problems, and compares with previous state-of-the-art methods for fairness.
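A minimal sketch of such an alternating stochastic-gradient min-max loop, shown on a toy saddle objective f(x, y) = x² − y² rather than the paper's adversarial model (the objective, learning rate, and step count are illustrative assumptions):

```python
def alternating_minmax(grad_x, grad_y, x, y, lr=0.1, steps=200):
    # Alternate a descent step on x (the censored encoder's role)
    # with an ascent step on y (the adversary's role).
    for _ in range(steps):
        x = x - lr * grad_x(x, y)  # minimize the objective over x
        y = y + lr * grad_y(x, y)  # maximize the objective over y
    return x, y

# Toy saddle objective f(x, y) = x**2 - y**2 with saddle point at (0, 0).
gx = lambda x, y: 2 * x    # df/dx
gy = lambda x, y: -2 * y   # df/dy
x, y = alternating_minmax(gx, gy, 1.0, 1.0)  # both iterates drift toward 0
```

On this objective each update shrinks both coordinates by a constant factor, so the iterates converge to the saddle point; in the adversarial-representation setting the same alternation is applied to network parameters via stochastic gradients.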

An estimation-theoretic view of privacy

  • Hao Wang, F. Calmon
  • Computer Science, Mathematics
    2017 55th Annual Allerton Conference on Communication, Control, and Computing (Allerton)
  • 2017

An estimation-theoretic analysis of the privacy-utility trade-off (PUT) is presented, it is demonstrated how a χ²-based information measure captures the fundamental PUT, and a sharp bound for the PUT is given.
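For reference, the generic χ²-divergence on which such χ²-based information measures are built (this is the standard definition for discrete distributions, not necessarily the paper's exact measure):

```latex
\chi^2(P \,\|\, Q) \;=\; \sum_{x} \frac{\bigl(P(x) - Q(x)\bigr)^2}{Q(x)}
```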

Generative Adversarial Privacy

This work presents a data-driven framework called generative adversarial privacy (GAP), which allows the data holder to learn the privatization mechanism directly from the data and provides privacy guarantees against strong information-theoretic adversaries.