Uncertainty-Autoencoder-Based Privacy and Utility Preserving Data Type Conscious Transformation

@article{Mandal2022UncertaintyAutoencoderBasedPA,
  title={Uncertainty-Autoencoder-Based Privacy and Utility Preserving Data Type Conscious Transformation},
  author={Bishwas Mandal and George T. Amariucai and Shuangqing Wei},
  journal={2022 International Joint Conference on Neural Networks (IJCNN)},
  year={2022},
  pages={1-8}
}
We propose an adversarial learning framework that deals with the privacy-utility tradeoff problem under two types of conditions: data-type ignorant and data-type aware. Under data-type-aware conditions, the privacy mechanism provides a one-hot encoding of categorical features, representing exactly one class, while under data-type-ignorant conditions the categorical variables are represented by a collection of scores, one for each class. We use a neural network architecture consisting of a…
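
The two regimes differ in how the privatized categorical features are emitted: a hard one-hot vector under data-type-aware conditions, and a vector of soft per-class scores under data-type-ignorant conditions. Below is a minimal sketch of such an output layer in PyTorch; the function name and the plain arg-max are illustrative assumptions, not the paper's exact architecture (which would need a differentiable relaxation such as Gumbel-softmax for end-to-end training).

```python
import torch
import torch.nn.functional as F

def privatize_categorical(logits: torch.Tensor, data_type_aware: bool) -> torch.Tensor:
    """Emit a privatized categorical feature from per-class logits.

    data_type_aware=True  -> hard one-hot vector (exactly one class),
                             matching the data-type-aware regime.
    data_type_aware=False -> soft scores, one per class,
                             matching the data-type-ignorant regime.
    """
    if data_type_aware:
        # Hard one-hot: pick the arg-max class (non-differentiable; a
        # relaxation such as Gumbel-softmax would be used in training).
        idx = logits.argmax(dim=-1)
        return F.one_hot(idx, num_classes=logits.shape[-1]).float()
    # Soft scores: one value per class, summing to 1.
    return F.softmax(logits, dim=-1)

# Example: logits for a 4-class categorical feature.
logits = torch.tensor([[0.2, 1.5, -0.3, 0.1]])
print(privatize_categorical(logits, data_type_aware=True))   # [[0., 1., 0., 0.]]
print(privatize_categorical(logits, data_type_aware=False))  # soft score vector
```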

References

Showing 1-10 of 49 references

Application-driven Privacy-preserving Data Publishing with Correlated Attributes

This work proposes a novel framework called PR-GAN that offers a privacy-preserving mechanism using generative adversarial networks, and shows that it outperforms conventional methods in effectively hiding the sensitive attributes while guaranteeing high performance in the target application.

Adversarial Learning of Privacy-Preserving and Task-Oriented Representations

This work proposes an adversarial reconstruction learning framework that prevents the latent representations of deep networks from being decoded into the original input data through model inversion attacks.

Context-Aware Generative Adversarial Privacy

This work introduces a novel context-aware privacy framework called GAP, which leverages recent advancements in generative adversarial networks to allow the data holder to learn privatization schemes from the dataset itself, and demonstrates that the framework can be easily applied in practice, even in the absence of dataset statistics.

Learning Privacy Preserving Encodings Through Adversarial Training

This work presents a framework to learn privacy-preserving encodings of images that inhibit inference of chosen private attributes, while allowing recovery of other desirable information, and finds that it yields encoders that are resilient at maintaining privacy.

Privacy-Preserving Deep Action Recognition: An Adversarial Learning Framework and A New Dataset

An innovative heuristic of cross-dataset training and evaluation is discussed, enabling the use of multiple single-task datasets (one with target-task labels and the other with privacy labels) for privacy-preserving, video-based action recognition with deep learning.

Privacy-utility tradeoff under statistical uncertainty

A. Makhdoumi and N. Fawaz. In 2013 51st Annual Allerton Conference on Communication, Control, and Computing (Allerton), 2013.
We focus on the privacy-accuracy tradeoff encountered by a user who wishes to release to an analyst some data that is correlated with his private data, in the hope of receiving some utility. We rely…
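
This line of work is commonly formalized as choosing a release mechanism that minimizes leakage about the private variable subject to a distortion budget on the released data. A sketch of that standard formulation follows; the notation is assumed here, not copied from the paper:

\[
\min_{p_{Y \mid X}} \; I(S; Y)
\qquad \text{subject to} \qquad
\mathbb{E}\!\left[ d(X, Y) \right] \le \Delta ,
\]

where \(S\) is the private data, \(X\) the correlated data to be released, \(Y\) the released output, \(d\) a distortion measure, and \(\Delta\) the distortion (utility) budget.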

SensitiveNets: Learning Agnostic Representations with Application to Face Images

A novel privacy-preserving neural network feature representation to suppress the sensitive information of a learned space while maintaining the utility of the data, based on an adversarial regularizer that introduces a sensitive information removal function in the learning objective.

Towards Privacy-Preserving Visual Recognition via Adversarial Training: A Pilot Study

This paper aims to improve privacy-preserving visual recognition, an increasingly demanded feature in smart camera applications, by formulating a unique adversarial training framework. The proposed…

Censoring Representations with an Adversary

This work formulates the adversarial model as a minimax problem, optimizes that minimax objective using a stochastic gradient alternating min-max optimizer, demonstrates the ability to provide discrimination-free representations for standard test problems, and compares with previous state-of-the-art methods for fairness.
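
A minimal sketch of such an alternating min-max loop in PyTorch; the module shapes are placeholders, and the utility/reconstruction term the paper also optimizes is omitted for brevity:

```python
import torch
from torch import nn, optim

# Placeholder modules (assumed shapes, not from the paper): the encoder
# produces a representation; the adversary predicts the sensitive
# attribute from it.
encoder = nn.Sequential(nn.Linear(16, 8), nn.ReLU(), nn.Linear(8, 8))
adversary = nn.Linear(8, 2)  # binary sensitive attribute

opt_enc = optim.SGD(encoder.parameters(), lr=1e-2)
opt_adv = optim.SGD(adversary.parameters(), lr=1e-2)
ce = nn.CrossEntropyLoss()

def alternate_minmax_step(x, s):
    # (1) Min step: train the adversary to recover s from the encoding.
    opt_adv.zero_grad()
    adv_loss = ce(adversary(encoder(x).detach()), s)
    adv_loss.backward()
    opt_adv.step()

    # (2) Max step: train the encoder to censor s by maximizing the
    # adversary's loss (i.e., descending its negative).
    opt_enc.zero_grad()
    enc_loss = -ce(adversary(encoder(x)), s)
    enc_loss.backward()
    opt_enc.step()

x = torch.randn(32, 16)           # a batch of inputs
s = torch.randint(0, 2, (32,))    # sensitive attribute labels
alternate_minmax_step(x, s)
```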

Maximal adversarial perturbations for obfuscation: Hiding certain attributes while preserving rest

This paper applies relatively maximal perturbations to raw data to conditionally damage a model's classification of one attribute while preserving the model's performance on another attribute, in order to protect privacy from both human perception and model (machine) based detection.
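
A hedged sketch of this kind of input-space obfuscation in PyTorch; the classifiers, step sizes, and signed-gradient update are illustrative assumptions, not the paper's exact procedure:

```python
import torch
from torch import nn

# Placeholder attribute classifiers (illustrative, not the paper's models).
private_clf = nn.Linear(16, 2)   # attribute to obfuscate
utility_clf = nn.Linear(16, 2)   # attribute to preserve
ce = nn.CrossEntropyLoss()

def obfuscate(x, y_priv, y_util, eps=0.5, steps=10, lr=0.1, lam=1.0):
    """Iteratively perturb x to raise the private classifier's loss while
    penalizing any rise in the utility classifier's loss, keeping the
    perturbation within an L-infinity ball of radius eps."""
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(steps):
        loss = (ce(private_clf(x + delta), y_priv)
                - lam * ce(utility_clf(x + delta), y_util))
        loss.backward()
        with torch.no_grad():
            delta += lr * delta.grad.sign()   # ascend the combined objective
            delta.clamp_(-eps, eps)           # bound the perturbation
            delta.grad.zero_()
    return (x + delta).detach()

x = torch.randn(8, 16)
y_priv = torch.randint(0, 2, (8,))
y_util = torch.randint(0, 2, (8,))
x_obfuscated = obfuscate(x, y_priv, y_util)
```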