Defending against Machine Learning based Inference Attacks via Adversarial Examples: Opportunities and Challenges

@inproceedings{Jia2020DefendingAM,
  title={Defending against Machine Learning based Inference Attacks via Adversarial Examples: Opportunities and Challenges},
  author={Jinyuan Jia and Neil Zhenqiang Gong},
  booktitle={Adaptive Autonomous Secure Cyber Systems},
  year={2020}
}
  • Jinyuan Jia, N. Gong
  • Published in Adaptive Autonomous Secure Cyber Systems, 17 September 2019
  • Computer Science
As machine learning (ML) becomes more and more powerful and easily accessible, attackers increasingly leverage ML to perform automated large-scale inference attacks in various domains. Our key observation is that attackers rely on ML classifiers in inference attacks. The adversarial machine learning community has demonstrated that ML classifiers have various vulnerabilities. Therefore, we can turn the vulnerabilities of ML into defenses against inference attacks. For example, ML classifiers are…
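To make the key observation concrete, the following is a minimal sketch, assuming the attacker's inference classifier can be approximated by a differentiable surrogate (a plain logistic model in numpy): the user adds a small FGSM-style perturbation to their public data so the surrogate no longer predicts the private attribute. The function names, the logistic surrogate, and the budget epsilon are illustrative, not the paper's method.

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_defense(x, w, b, epsilon=0.05):
    """One FGSM-style step against a binary surrogate inference classifier.
    x: the user's public feature vector; w, b: surrogate weights/bias;
    epsilon: per-feature perturbation budget (the utility-loss knob)."""
    p = sigmoid(w @ x + b)                 # surrogate's confidence in the private attribute
    grad = p * w                           # gradient of the log-loss toward label 0 w.r.t. x
    x_def = x - epsilon * np.sign(grad)    # move against the attacker's classifier
    return np.clip(x_def, 0.0, 1.0)        # keep features in a valid range

# Illustrative usage with random numbers (not real data):
rng = np.random.default_rng(0)
w, b = rng.normal(size=20), 0.1
x = rng.uniform(size=20)
x_def = fgsm_defense(x, w, b)
print(sigmoid(w @ x + b), sigmoid(w @ x_def + b))   # attacker's confidence before/after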
MemGuard: Defending against Black-Box Membership Inference Attacks via Adversarial Examples
TLDR
This work proposes MemGuard, the first defense with formal utility-loss guarantees against black-box membership inference attacks, and is the first to show that adversarial examples can be used as defensive mechanisms against membership inference attacks.
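A rough, hedged sketch of the MemGuard idea: perturb the confidence vector returned for a query so that a surrogate membership-inference classifier is pushed toward a random guess, while the predicted label (the argmax) is preserved as the utility constraint. The real MemGuard solves a constrained optimization with formal utility-loss guarantees; the greedy random search and the toy surrogate below are only illustrative.

import numpy as np

def memguard_like(conf, mem_clf, budget=0.3, trials=200, seed=0):
    """conf: softmax confidence vector; mem_clf: maps a confidence vector to P(member)."""
    rng = np.random.default_rng(seed)
    label = int(np.argmax(conf))
    best, best_gap = conf, abs(mem_clf(conf) - 0.5)
    for _ in range(trials):
        cand = np.clip(conf + rng.normal(scale=budget, size=conf.shape), 1e-6, None)
        cand = cand / cand.sum()                  # stay on the probability simplex
        if int(np.argmax(cand)) != label:         # utility constraint: predicted label unchanged
            continue
        gap = abs(mem_clf(cand) - 0.5)            # distance from a random guess
        if gap < best_gap:
            best, best_gap = cand, gap
    return best

# Toy surrogate attacker: guesses "member" when the model looks confident.
mem_clf = lambda c: float(np.max(c))
print(memguard_like(np.array([0.85, 0.10, 0.05]), mem_clf))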
BAAAN: Backdoor Attacks Against Autoencoder and GAN-Based Machine Learning Models
TLDR
This work proposes the first backdoor attack against autoencoders and GANs, where the adversary can control what the decoded or generated images are when the backdoor is activated, and shows that the adversary can build a backdoored autoencoder that returns a target output for all backdoored inputs while behaving perfectly normally on clean inputs.
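A sketch of that stated training objective, with illustrative names (autoencoder and backdoor_loss are not from the paper): the loss rewards faithful reconstruction on clean inputs and rewards decoding triggered inputs to an attacker-chosen target.

import numpy as np

def backdoor_loss(autoencoder, x_clean, x_triggered, target, lam=1.0):
    """Combined objective: normal reconstruction on clean data plus the target output on triggered data."""
    clean_term = np.mean((autoencoder(x_clean) - x_clean) ** 2)       # behave normally on clean inputs
    trigger_term = np.mean((autoencoder(x_triggered) - target) ** 2)  # decode triggered inputs to the target
    return clean_term + lam * trigger_term

# Toy usage with an identity "autoencoder" and random data:
rng = np.random.default_rng(0)
x_clean, x_trig = rng.uniform(size=(4, 8)), rng.uniform(size=(4, 8))
print(backdoor_loss(lambda x: x, x_clean, x_trig, target=np.zeros((4, 8))))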
Adversarial for Good? How the Adversarial ML Community's Values Impede Socially Beneficial Uses of Attacks
TLDR
It is found that most adversarial ML researchers at NeurIPS hold two fundamental assumptions that will make it difficult for them to consider socially beneficial uses of attacks: that it is desirable to make systems robust independent of context, and that attackers of systems are normatively bad while defenders of systems are normatively good.
WF-GAN: Fighting Back Against Website Fingerprinting Attack Using Adversarial Learning
TLDR
This paper designs WF-GAN, a GAN with an additional WF classifier component, to generate adversarial examples for WF classifiers through adversarial learning; it achieves over a 90% targeted defense success rate when the target website set is twice as large as the source website set.
Federated Learning With Highly Imbalanced Audio Data
TLDR
This paper investigates using FL for a sound event detection task using audio from the FSD50K dataset, and shows that FL models trained using the high-volume clients can perform similarly to a centrally-trained model, though there is much more noise in the results than would typically be expected for a centrally-trained model.
Applications of Game Theory and Advanced Machine Learning Methods for Adaptive Cyberdefense Strategies in the Digital Music Industry
  • Jing Jing
  • Computer Science
    Computational intelligence and neuroscience
  • 2022
TLDR
This study presents an innovative hybrid model that combines game theory and advanced machine learning methods for adaptive cyberdefense: the model predicts the opponent's next steps in the game, produces the appropriate countermeasures, and implements the best cyberdefense strategies for an organization.
Face-Off: Adversarial Face Obfuscation
TLDR
This paper implements and evaluates Face-Off, a privacy-preserving framework that introduces strategic perturbations to images of the user’s face to prevent it from being correctly recognized, and finds that it deceives three commercial face recognition services from Microsoft, Amazon, and Face++.
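One hedged way to picture such strategic perturbations, assuming a linear surrogate embedding so the gradient stays analytic (the real Face-Off attacks deep face-recognition models; all names and constants below are illustrative): nudge the image so its embedding moves away from the user's reference embedding while staying within a small per-pixel budget.

import numpy as np

def obfuscate(x, W, ref_embedding, epsilon=0.03, steps=10, lr=0.01, seed=0):
    """x: flattened image in [0, 1]; W: surrogate embedding matrix;
    ref_embedding: the embedding a recognizer would match against."""
    rng = np.random.default_rng(seed)
    x_adv = np.clip(x + rng.uniform(-1e-3, 1e-3, size=x.shape), 0.0, 1.0)  # tiny offset to define a direction
    for _ in range(steps):
        diff = W @ x_adv - ref_embedding
        grad = W.T @ diff                          # gradient of 0.5 * ||W x - ref||^2 w.r.t. x
        x_adv = x_adv + lr * np.sign(grad)         # ascend: push the embedding away from the reference
        x_adv = np.clip(x_adv, x - epsilon, x + epsilon)   # stay close to the original image
        x_adv = np.clip(x_adv, 0.0, 1.0)
    return x_adv

# Illustrative usage: the perturbed image's embedding drifts away from the reference.
rng = np.random.default_rng(1)
W, x = rng.normal(size=(16, 64)), rng.uniform(size=64)
ref = W @ x
print(np.linalg.norm(W @ x - ref), np.linalg.norm(W @ obfuscate(x, W, ref) - ref))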

References

SHOWING 1-10 OF 93 REFERENCES
ML-Leaks: Model and Data Independent Membership Inference Attacks and Defenses on Machine Learning Models
TLDR
This is the most comprehensive study so far of this emerging and developing threat: using eight diverse datasets, it shows the viability of the proposed attacks across domains and proposes the first effective defense mechanisms against this broader class of membership inference attacks that maintain a high level of utility of the ML model.
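The simplest attack flavor in this line of work can be sketched as a confidence-threshold test: overfitted models tend to be more confident on training members, so the maximum posterior alone already leaks membership. The threshold below is illustrative.

import numpy as np

def membership_guess(posteriors, threshold=0.9):
    """posteriors: (n, k) confidence vectors from the target model.
    Returns 1 (member) when the max posterior exceeds the threshold, else 0."""
    return (np.max(posteriors, axis=1) > threshold).astype(int)

# Toy example: a very confident prediction vs. an uncertain one.
print(membership_guess(np.array([[0.97, 0.02, 0.01],
                                 [0.40, 0.35, 0.25]])))   # -> [1 0]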
When Does Machine Learning FAIL? Generalized Transferability for Evasion and Poisoning Attacks
TLDR
This work designs StingRay, a targeted poisoning attack that is broadly applicable: it is practical against 4 machine learning applications that use 3 different learning algorithms, and it can bypass 2 existing defenses.
Manipulating Machine Learning: Poisoning Attacks and Countermeasures for Regression Learning
TLDR
A theoretically-grounded optimization framework specifically designed for linear regression is proposed, its effectiveness is demonstrated on a range of datasets and models, and formal guarantees about its convergence and an upper bound on the effect of poisoning attacks when the defense is deployed are provided.
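A hedged sketch of a trimmed-regression defense in the spirit of this work: alternately fit the linear model and keep only the fraction of points it explains best, so poisoned points with large residuals are iteratively excluded. Initialization and convergence details are simplified.

import numpy as np

def trimmed_least_squares(X, y, keep_frac=0.8, iters=20):
    n = len(y)
    keep = np.arange(n)                                      # start with all points
    for _ in range(iters):
        w, *_ = np.linalg.lstsq(X[keep], y[keep], rcond=None)
        residuals = (X @ w - y) ** 2
        keep = np.argsort(residuals)[: int(keep_frac * n)]   # keep the best-explained points
    return w

# Toy data with a few poisoned responses:
rng = np.random.default_rng(1)
X = np.column_stack([rng.uniform(size=100), np.ones(100)])
y = 2.0 * X[:, 0] + 0.5 + rng.normal(scale=0.05, size=100)
y[:5] += 10.0                                                # poisoning
print(trimmed_least_squares(X, y))                           # close to [2.0, 0.5]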
Distillation as a Defense to Adversarial Perturbations Against Deep Neural Networks
TLDR
The study shows that defensive distillation can reduce the effectiveness of adversarial sample creation from 95% to less than 0.5% on a studied DNN, and analytically investigates the generalizability and robustness properties granted by the use of defensive distillation when training DNNs.
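A minimal sketch of the distillation recipe, showing only the soft-label step (teacher_logits and student_logits stand in for real networks): the teacher's softmax is computed at a high temperature T, and its softened outputs become the training targets for the distilled network, which is later deployed at temperature 1.

import numpy as np

def softmax_T(logits, T=20.0):
    z = logits / T
    z = z - z.max(axis=-1, keepdims=True)          # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, T=20.0):
    """Cross-entropy of the student against the teacher's temperature-softened labels."""
    soft_targets = softmax_T(teacher_logits, T)
    log_probs = np.log(softmax_T(student_logits, T) + 1e-12)
    return -np.sum(soft_targets * log_probs, axis=-1).mean()

print(distillation_loss(np.array([[4.0, 2.0, -1.0]]), np.array([[5.0, 1.0, -2.0]])))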
Privacy Risk in Machine Learning: Analyzing the Connection to Overfitting
TLDR
This work examines the effect that overfitting and influence have on the ability of an attacker to learn information about the training data from machine learning models, either through training-set membership inference or attribute inference attacks.
Mockingbird: Defending Against Deep-Learning-Based Website Fingerprinting Attacks With Adversarial Traces
TLDR
This work explores Mockingbird, a novel defense that generates traces which resist adversarial training by moving randomly in the space of viable traces rather than following more predictable gradients, while incurring lower bandwidth overheads.
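A hedged sketch of that random-movement idea: instead of following the defended classifier's gradient, repeatedly pick a random target trace from another site and take a small noisy step toward it, stopping once the classifier's confidence in the true site is low. The classifier interface, step sizes, and the add-only constraint below are assumptions for illustration.

import numpy as np

def randomized_trace_defense(trace, pool, confidence_fn, true_site,
                             step=0.05, max_steps=200, stop_conf=0.2, seed=0):
    """trace: feature vector of the real page load; pool: traces of other sites;
    confidence_fn(trace, site) -> the classifier's confidence for that site."""
    rng = np.random.default_rng(seed)
    current = trace.copy()
    for _ in range(max_steps):
        if confidence_fn(current, true_site) < stop_conf:
            break
        target = pool[rng.integers(len(pool))]                   # randomly chosen target trace
        noise = rng.normal(scale=0.01, size=current.shape)
        current = current + step * (target - current) + noise    # random, non-gradient move
        current = np.maximum(current, trace)                     # only add traffic, never remove it
    return current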
Machine Learning with Membership Privacy using Adversarial Regularization
TLDR
It is shown that the min-max strategy can mitigate the risks of membership inference attacks (near random guess), and can achieve this with a negligible drop in the model's prediction accuracy (less than 4%).
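A rough formal reading of that min-max strategy (symbols are illustrative rather than quoted from the paper): the classifier parameters are chosen to minimize the prediction loss plus lambda times the best gain any inference adversary can achieve,

  \min_{\theta} \Big[ L_{task}(f_{\theta}) \;+\; \lambda \cdot \max_{\phi} \, G_{infer}(h_{\phi}, f_{\theta}) \Big]

where L_{task} is the model's prediction loss and G_{infer} measures how well the adversary h_{\phi} separates members from non-members given the outputs of f_{\theta}.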
AttriGuard: A Practical Defense Against Attribute Inference Attacks via Adversarial Machine Learning
TLDR
This work is the first to show that evasion attacks can be used as defensive techniques for privacy protection, and the proposed defense substantially outperforms existing methods.
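A hedged sketch of the two-phase structure: phase I searches, for each candidate attribute value, for a small perturbation that makes a surrogate attack classifier output that value; phase II picks one target value and releases the corresponding perturbed data. The random search and the uniform choice below are crude stand-ins for the optimization problems the actual method solves.

import numpy as np

def find_perturbation(x, predict, target_value, budget=0.2, trials=500, seed=0):
    """Random search for a small delta with predict(x + delta) == target_value."""
    rng = np.random.default_rng(seed)
    best = None
    for _ in range(trials):
        delta = rng.uniform(-budget, budget, size=x.shape)
        if predict(x + delta) == target_value:
            if best is None or np.linalg.norm(delta) < np.linalg.norm(best):
                best = delta
    return best

def attriguard_like(x, predict, attribute_values, seed=0):
    rng = np.random.default_rng(seed)
    deltas = {v: find_perturbation(x, predict, v, seed=seed) for v in attribute_values}
    feasible = [v for v in attribute_values if deltas[v] is not None]
    if not feasible:
        return x
    target = feasible[rng.integers(len(feasible))]        # phase II: pick a target attribute value
    return x + deltas[target]

# Toy attribute classifier and usage:
predict = lambda v: int(v.sum() > 0.5 * len(v))
x = np.full(10, 0.45)
print(predict(x), predict(attriguard_like(x, predict, attribute_values=[0, 1])))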
Mitigating Evasion Attacks to Deep Neural Networks via Region-based Classification
TLDR
This work proposes region-based classification and uses it to develop new DNNs that are robust to state-of-the-art evasion attacks and adversarial examples.
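A minimal sketch of region-based classification: instead of classifying a single point, sample points from a small hypercube centered at the input and return the majority vote of the base classifier, so adversarial examples that sit close to the decision boundary are outvoted. The base_predict function, radius, and sample count are illustrative.

import numpy as np

def region_based_predict(x, base_predict, radius=0.05, n_samples=100, seed=0):
    rng = np.random.default_rng(seed)
    samples = x + rng.uniform(-radius, radius, size=(n_samples,) + x.shape)  # hypercube around x
    votes = np.array([base_predict(s) for s in samples])
    labels, counts = np.unique(votes, return_counts=True)
    return labels[np.argmax(counts)]                        # majority vote

# Toy base classifier: label 1 when the mean feature value exceeds 0.5.
base_predict = lambda s: int(s.mean() > 0.5)
print(region_based_predict(np.full(10, 0.51), base_predict))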
On Detecting Adversarial Perturbations
TLDR
It is shown empirically that adversarial perturbations can be detected surprisingly well even though they are quasi-imperceptible to humans.
...