Corpus ID: 85502695

Preserving Differential Privacy in Adversarial Learning with Provable Robustness

@article{Phan2019PreservingDP,
  title={Preserving Differential Privacy in Adversarial Learning with Provable Robustness},
  author={N. Phan and Ruoming Jin and M. Thai and Han Hu and D. Dou},
  journal={ArXiv},
  year={2019},
  volume={abs/1903.09822}
}
In this paper, we aim to develop a novel mechanism to preserve differential privacy (DP) in adversarial learning for deep neural networks, with provable robustness to adversarial examples. We leverage sequential composition theory in differential privacy to establish a new connection between differential privacy preservation and provable robustness. To address the trade-off among model utility, privacy loss, and robustness, we design an original, differentially private, adversarial…
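The recipe sketched in the abstract, crafting adversarial examples and then performing differentially private updates on them, can be illustrated roughly as follows. This is a minimal sketch assuming a toy logistic-regression model, a one-step FGSM-style attack, and DP-SGD-style gradient clipping with Gaussian noise; the function names and constants are illustrative, not the paper's actual mechanism.

# Minimal sketch: differentially private adversarial training on a logistic
# regression model (numpy only). Illustrative only -- craft adversarial
# examples, then update with clipped, noised per-example gradients.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def grad_wrt_w(w, x, y):
    # gradient of the logistic loss w.r.t. the weights, for one example
    return (sigmoid(x @ w) - y) * x

def grad_wrt_x(w, x, y):
    # gradient of the logistic loss w.r.t. the input (used to craft the attack)
    return (sigmoid(x @ w) - y) * w

def fgsm(w, x, y, eps=0.1):
    # one-step attack: move the input along the sign of the loss gradient
    return x + eps * np.sign(grad_wrt_x(w, x, y))

def dp_adversarial_step(w, X, Y, lr=0.1, clip=1.0, sigma=1.0):
    noisy_sum = np.zeros_like(w)
    for x, y in zip(X, Y):
        x_adv = fgsm(w, x, y)                        # adversarial example
        g = grad_wrt_w(w, x_adv, y)                  # per-example gradient
        g = g / max(1.0, np.linalg.norm(g) / clip)   # clip to bound sensitivity
        noisy_sum += g
    noisy_sum += rng.normal(0.0, sigma * clip, size=w.shape)  # Gaussian noise
    return w - lr * noisy_sum / len(X)

# toy data: 2D points, label = whether x0 + x1 > 0
X = rng.normal(size=(64, 2))
Y = (X.sum(axis=1) > 0).astype(float)
w = np.zeros(2)
for _ in range(50):
    w = dp_adversarial_step(w, X, Y)
print("learned weights:", w)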
Citations

Gradient Masking and the Underestimated Robustness Threats of Differential Privacy in Deep Learning
This paper experimentally evaluates the impact of training with Differential Privacy, a standard method for privacy preservation, on model vulnerability against a broad range of adversarial attacks, and suggests that private models are less robust than their non-private counterparts.
Robustness Threats of Differential Privacy
This paper empirically observes an interesting trade-off between differential privacy and the security of neural networks, and extensively studies different robustness measurements, including FGSM and PGD adversaries, distance to linear decision boundaries, curvature profile, and performance on a corrupted dataset.
Robustness, Privacy, and Generalization of Adversarial Training
The privacy-robustness trade-off and the generalization-robustness trade-off in adversarial training are established and quantified from both theoretical and empirical aspects.
Differentially Private Lifelong Learning
In this paper, we aim to develop a novel mechanism to preserve differential privacy (DP) in lifelong learning (L2M) for deep neural networks. Our key idea is to employ functional perturbation…
Towards A Guided Perturbation for Privacy Protection through Detecting Adversarial Examples with Provable Accuracy and Precision
A practical mechanism to determine the boundary that can guide the design and implementation of privacy protection through perturbation; the strategy of detecting adversarial examples with a set of detection methods is leveraged to find the "blind corners" of detection and to create a detection mechanism with very high accuracy and precision.
CAPE: Context-Aware Private Embeddings for Private Language Learning
Deep learning-based language models have achieved state-of-the-art results in a number of applications including sentiment analysis, topic labelling, intent classification and others. Obtaining text…
DADI: Dynamic Discovery of Fair Information with Adversarial Reinforcement Learning
A framework for dynamic adversarial discovery of information (DADI), motivated by a scenario where information is used by third parties with unknown objectives, is introduced, and group fairness (demographic parity) is attained by rewarding the agent with the adversary's loss, computed over the final feature set.
Robust Anomaly Detection and Backdoor Attack Detection Via Differential Privacy
It is demonstrated that applying differential privacy can improve the utility of outlier detection and novelty detection, with an extension to detect poisoning samples in backdoor attacks.
When Machine Learning Meets Privacy: A Survey and Outlook
The state of the art in privacy issues and solutions for machine learning is surveyed and future research directions in this field are pointed out.
Artificial Neural Networks in Public Policy: Towards an Analytical Framework
This dissertation assesses how artificial neural networks and other machine learning systems should be devised, built, and implemented in US governmental organizations (i.e., public agencies) and develops an analytical framework that public agency managers and analysts can utilize.

References

Showing 1–10 of 42 references
Adaptive Laplace Mechanism: Differential Privacy Preservation in Deep Learning
A novel mechanism to preserve differential privacy in deep neural networks, such that the privacy budget consumption is independent of the number of training steps, and which adaptively injects noise into features based on each feature's contribution to the output.
Certified Robustness to Adversarial Examples with Differential Privacy
This paper presents the first certified defense that both scales to large networks and datasets and applies broadly to arbitrary model types, based on a novel connection between robustness against adversarial examples and differential privacy, a cryptographically inspired privacy formalism.
Towards Deep Learning Models Resistant to Adversarial Attacks
This work studies the adversarial robustness of neural networks through the lens of robust optimization, and suggests the notion of security against a first-order adversary as a natural and broad security guarantee.
DeepMask: Masking DNN Models for robustness against adversarial samples
By identifying and removing unnecessary features in a DNN model, DeepCloak limits the capacity an attacker can use to generate adversarial samples and therefore increases the robustness against such inputs.
Discovering Adversarial Examples with Momentum
A strong attack algorithm named momentum iterative fast gradient sign method (MI-FGSM) is proposed to discover adversarial examples and can serve as a benchmark attack algorithm for evaluating the robustness of various models and defense methods.
Differential Privacy Preservation for Deep Auto-Encoders: an Application of Human Behavior Prediction
The main idea is to enforce ε-differential privacy by perturbing the objective functions of the traditional deep auto-encoder, rather than its results.
Provable defenses against adversarial examples via the convex outer adversarial polytope
A method to learn deep ReLU-based classifiers that are provably robust against norm-bounded adversarial perturbations; it is shown that the dual of the underlying linear program can itself be represented as a deep network similar to the backpropagation network, leading to very efficient optimization approaches that produce guaranteed bounds on the robust loss.
Distillation as a Defense to Adversarial Perturbations Against Deep Neural Networks
The study shows that defensive distillation can reduce the effectiveness of adversarial sample creation from 95% to less than 0.5% on a studied DNN, and analytically investigates the generalizability and robustness properties granted by the use of defensive distillation when training DNNs.
Preserving differential privacy in convolutional deep belief networks
This work proposes the use of Chebyshev expansion to derive the approximate polynomial representation of objective functions of traditional CDBNs, and shows that the pCDBN is highly effective and significantly outperforms existing solutions.
Extending Defensive Distillation
This work revisits defensive distillation, one of the mechanisms proposed to mitigate adversarial examples, to address its limitations, and views the results not only as an effective way of addressing some of the recently discovered attacks but also as reinforcing the importance of improved training techniques.