Corpus ID: 220525609

A Survey of Privacy Attacks in Machine Learning

@article{Rigaki2020ASO,
  title={A Survey of Privacy Attacks in Machine Learning},
  author={Maria Rigaki and Sebasti{\'a}n Garc{\'i}a},
  journal={ArXiv},
  year={2020},
  volume={abs/2007.07646}
}
As machine learning becomes more widely used, the need to study its implications in security and privacy becomes more urgent. Research on the security aspects of machine learning, such as adversarial attacks, has received a lot of focus and publicity, but privacy-related attacks have received less attention from the research community. Although there is a growing body of work in the area, there is yet no extensive analysis of privacy-related attacks. To contribute to this line of research, we…
Membership Inference Attacks on Machine Learning: A Survey
TLDR
This paper presents the first comprehensive survey of membership inference attacks, summarizes and categorizes existing membership inference attacks and defenses, and explicitly presents how to implement attacks in various settings.
Revolutionizing Medical Data Sharing Using Advanced Privacy-Enhancing Technologies: Technical, Legal, and Ethical Synthesis
TLDR
It is argued that multiparty homomorphic encryption fulfills legal requirements for medical data sharing under the European Union’s General Data Protection Regulation, which has set a global benchmark for data protection.
Optimal Private Median Estimation under Minimal Distributional Assumptions
TLDR
This work studies the fundamental task of estimating the median of an underlying distribution from a finite number of samples under pure differential privacy constraints, and designs a polynomial-time differentially private algorithm that provably achieves optimal performance.
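The paper's optimal estimator is not reproduced here; as a point of reference, the sketch below shows the standard exponential-mechanism baseline for a private median on a finite candidate grid. The grid bounds, epsilon value, and sample data are illustrative assumptions, not values from the paper.

```python
import numpy as np

def dp_median(data, candidates, epsilon, rng=None):
    """Exponential-mechanism baseline for a differentially private median.

    The utility of a candidate m is minus the distance of its rank from n/2,
    which changes by at most 1 when a single record is replaced, so the
    mechanism below satisfies epsilon-differential privacy.
    """
    rng = rng or np.random.default_rng()
    data = np.asarray(data)
    n = len(data)
    ranks = np.array([(data <= m).sum() for m in candidates])
    utility = -np.abs(ranks - n / 2)
    # P(m) ∝ exp(epsilon * u(m) / (2 * sensitivity)); subtract the max for stability.
    weights = np.exp(epsilon * (utility - utility.max()) / 2.0)
    return rng.choice(candidates, p=weights / weights.sum())

# Example with made-up data: 1000 samples from N(5, 2), epsilon = 1.
samples = np.random.default_rng(1).normal(loc=5.0, scale=2.0, size=1000)
grid = np.linspace(-5.0, 15.0, 401)
print(dp_median(samples, grid, epsilon=1.0))
```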
FedPara: Low-rank Hadamard Product Parameterization for Efficient Federated Learning
TLDR
The proposed FedPara method re-parameterizes the model’s layers using low-rank matrices or tensors combined through the Hadamard product, to overcome the communication burden of frequent model uploads and downloads during federated learning (FL).
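As a rough illustration of what such a re-parameterization can look like, here is a minimal sketch for a plain linear layer; the dimensions, rank, and initialization are hypothetical and not taken from the paper.

```python
import torch
import torch.nn as nn

class LowRankHadamardLinear(nn.Module):
    """Sketch of a low-rank Hadamard-product parameterization of a linear layer.

    The full weight W (out_dim x in_dim) is never stored directly; it is formed
    as the element-wise product of two rank-r factorizations, so only
    2 * r * (out_dim + in_dim) parameters need to be communicated per round.
    """

    def __init__(self, in_dim: int, out_dim: int, rank: int):
        super().__init__()
        # Two independent low-rank factor pairs (illustrative initialization).
        self.x1 = nn.Parameter(torch.randn(out_dim, rank) * 0.02)
        self.y1 = nn.Parameter(torch.randn(in_dim, rank) * 0.02)
        self.x2 = nn.Parameter(torch.randn(out_dim, rank) * 0.02)
        self.y2 = nn.Parameter(torch.randn(in_dim, rank) * 0.02)
        self.bias = nn.Parameter(torch.zeros(out_dim))

    def forward(self, inputs: torch.Tensor) -> torch.Tensor:
        # W = (X1 @ Y1^T) ⊙ (X2 @ Y2^T): Hadamard product of two rank-r matrices.
        weight = (self.x1 @ self.y1.T) * (self.x2 @ self.y2.T)
        return inputs @ weight.T + self.bias

# Usage with made-up sizes: a 256 -> 128 layer with rank-8 factors.
layer = LowRankHadamardLinear(in_dim=256, out_dim=128, rank=8)
out = layer(torch.randn(4, 256))
```

The appeal of the Hadamard construction is that the element-wise product of two rank-r matrices can have rank up to r², so the layer can be considerably more expressive than a single rank-r factorization at the same communication cost.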
Privacy Inference Attacks and Defenses in Cloud-based Deep Neural Network: A Survey
TLDR
This survey presents the most recent findings on privacy attacks and defenses that have appeared in cloud-based neural network services, and introduces a new theory, the cloud-based ML privacy game, distilled from recently published literature to provide a deep understanding of state-of-the-art research.
Property Inference Attacks on Convolutional Neural Networks: Influence and Implications of Target Model's Complexity
TLDR
This paper investigates the influence of the target model’s complexity on the accuracy of this type of attack, focusing on convolutional neural network classifiers, and examines the implications of property inference on personal data in light of data protection regulations and guidelines.
R-GAP: Recursive Gradient Attack on Privacy
TLDR
This research provides a closed-form recursive procedure to recover data from gradients in deep neural networks and proposes a rank analysis method, which can be used to estimate a network architecture's risk of a gradient attack.
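The recursive procedure itself is not reproduced here; as background, the sketch below shows only the textbook single-layer case of closed-form recovery from gradients that such attacks generalize, using made-up shapes and values.

```python
import numpy as np

# For a fully connected layer z = W @ x + b evaluated on a single example,
#   dL/dW = dL/dz * x^T   and   dL/db = dL/dz,
# so any row i of dL/dW with a nonzero dL/db[i] reveals the private input:
#   x = (dL/dW)[i, :] / (dL/db)[i].

rng = np.random.default_rng(0)
x_true = rng.normal(size=4)         # private input the client never shares
dL_dz = rng.normal(size=3)          # upstream gradient at the layer output

grad_W = np.outer(dL_dz, x_true)    # gradients a client would share in federated learning
grad_b = dL_dz

i = int(np.argmax(np.abs(grad_b)))  # pick a row with a nonzero bias gradient
x_recovered = grad_W[i] / grad_b[i]

assert np.allclose(x_recovered, x_true)
```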
Recent advances in adversarial machine learning: status, challenges and perspectives
TLDR
A survey of adversarial machine learning and associated countermeasures is presented, along with a taxonomy of ML/AI system attacks that share common properties and characteristics, allowing them to be linked with different defensive approaches.
Survey: Leakage and Privacy at Inference Time
TLDR
A taxonomy spanning involuntary and malevolent leakage is proposed, together with available defences, current assessment metrics, and applications, focusing on inference-time leakage as the most likely scenario for publicly available models.
System Optimization in Synchronous Federated Training: A Survey
  • Zhifeng Jiang, Wei Wang
  • Computer Science
  • ArXiv
  • 2021
TLDR
This paper surveys highly relevant attempts in the FL literature, organizes them by the related training phases in the standard workflow (selection, configuration, and reporting), and reviews exploratory work, including measurement studies and benchmarking tools, that supports FL developers.

References

Showing 1-10 of 109 references
Privacy Risks of Securing Machine Learning Models against Adversarial Examples
TLDR
This paper measures the success of membership inference attacks against six state-of-the-art defense methods that mitigate the risk of adversarial examples, and proposes two new inference methods that exploit structural properties of robust models on adversarially perturbed data.
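The paper's two structural attacks are not reproduced here; the sketch below only shows the generic confidence-threshold baseline that membership inference work typically starts from, with made-up confidence values and an illustrative threshold.

```python
import numpy as np

def confidence_attack(conf_on_true_label, threshold=0.9):
    """Flag an example as a training member when the target model's confidence
    on its true label exceeds a threshold. Overfit (and, per the paper above,
    robustly trained) models are typically far more confident on training data,
    which is the signal this baseline exploits."""
    return conf_on_true_label >= threshold

# Hypothetical confidences returned by a target model (illustrative numbers only).
member_conf = np.array([0.99, 0.97, 0.95, 0.88])     # training examples
nonmember_conf = np.array([0.62, 0.91, 0.55, 0.70])  # held-out examples

guesses = confidence_attack(np.concatenate([member_conf, nonmember_conf]))
truth = np.array([1, 1, 1, 1, 0, 0, 0, 0])
print("attack accuracy:", np.mean(guesses.astype(int) == truth))  # 0.75 here
```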
The security of machine learning in an adversarial setting: A survey
TLDR
This work presents a comprehensive overview of the investigation of the security properties of ML algorithms under adversarial settings, and analyzes the ML security model to develop a blueprint for this interdisciplinary research area.
Membership Inference Attack against Differentially Private Deep Learning Model
TLDR
The experimental results show that differentially private deep models may keep their promise to provide privacy protection against strong adversaries only by offering poor model utility, while exhibiting moderate vulnerability to membership inference attacks when they offer acceptable utility.
SoK: Security and Privacy in Machine Learning
TLDR
It is apparent that constructing a theoretical understanding of the sensitivity of modern ML algorithms to the data they analyze, à la PAC theory, will foster a science of security and privacy in ML.
Machine Learning with Membership Privacy using Adversarial Regularization
TLDR
It is shown that the min-max strategy can mitigate the risks of membership inference attacks (reducing them to near random guessing), and can achieve this with a negligible drop in the model's prediction accuracy (less than 4%).
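A simplified alternating-update sketch of such a min-max regularizer is shown below; the model sizes, the reference (non-member) batch, the lam weight, and the single-term privacy penalty are illustrative assumptions rather than the paper's exact objective.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical 20-feature, 5-class task; sizes are illustrative only.
classifier = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 5))
attacker = nn.Sequential(nn.Linear(5, 32), nn.ReLU(), nn.Linear(32, 1))
opt_f = torch.optim.Adam(classifier.parameters(), lr=1e-3)
opt_h = torch.optim.Adam(attacker.parameters(), lr=1e-3)
lam = 1.0  # weight of the privacy penalty in the classifier's loss

def train_step(x_member, y_member, x_reference):
    # Inner maximization: the attack model learns to separate the classifier's
    # predictions on training members from those on reference (non-member) data.
    with torch.no_grad():
        p_member = F.softmax(classifier(x_member), dim=1)
        p_reference = F.softmax(classifier(x_reference), dim=1)
    scores = torch.cat([attacker(p_member), attacker(p_reference)])
    labels = torch.cat([torch.ones(len(x_member), 1), torch.zeros(len(x_reference), 1)])
    loss_h = F.binary_cross_entropy_with_logits(scores, labels)
    opt_h.zero_grad()
    loss_h.backward()
    opt_h.step()

    # Outer minimization: the classifier minimizes task loss plus lam times the
    # attacker's membership score on its own training predictions.
    logits = classifier(x_member)
    membership_score = torch.sigmoid(attacker(F.softmax(logits, dim=1))).mean()
    loss_f = F.cross_entropy(logits, y_member) + lam * membership_score
    opt_f.zero_grad()
    loss_f.backward()
    opt_f.step()

# One illustrative step on random data.
train_step(torch.randn(32, 20), torch.randint(0, 5, (32,)), torch.randn(32, 20))
```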
ML-Leaks: Model and Data Independent Membership Inference Attacks and Defenses on Machine Learning Models
TLDR
This is the most comprehensive study so far on this emerging and developing threat, using eight diverse datasets that show the viability of the proposed attacks across domains, and it proposes the first effective defense mechanisms against this broader class of membership inference attacks that maintain a high level of utility in the ML model.
Beyond Inferring Class Representatives: User-Level Privacy Leakage From Federated Learning
TLDR
This paper makes the first attempt to explore user-level privacy leakage in federated learning through an attack from a malicious server, using a framework that incorporates a GAN with a multi-task discriminator, which simultaneously discriminates the category, reality, and client identity of input samples.
PRADA: Protecting Against DNN Model Stealing Attacks
TLDR
The first step towards generic and effective detection of DNN model extraction attacks is proposed: PRADA, which analyzes the distribution of consecutive API queries and raises an alarm when this distribution deviates from benign behavior; it is shown that PRADA can detect all prior model extraction attacks with no false positives.
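One plausible way to realize such a detector is sketched below; the distance metric, normality test, window size, and thresholds are illustrative assumptions rather than the exact mechanism described in the paper.

```python
import numpy as np
from scipy import stats

class QueryDistributionDetector:
    """Stateful detector in the spirit of PRADA: track the minimum distance of
    each new query to previously seen queries and raise an alarm when those
    distances stop looking normally distributed, since benign clients tend to
    issue natural, well-spread inputs while extraction attacks often generate
    synthetic, overly regular query patterns."""

    def __init__(self, min_queries=30, p_threshold=0.05):
        self.history = []        # past query vectors from this client
        self.min_dists = []      # minimum distance of each query to the history
        self.min_queries = min_queries
        self.p_threshold = p_threshold

    def observe(self, query: np.ndarray) -> bool:
        """Record one query; return True if the stream looks anomalous so far."""
        if self.history:
            self.min_dists.append(min(np.linalg.norm(query - q) for q in self.history))
        self.history.append(query)
        if len(self.min_dists) < self.min_queries:
            return False
        # Shapiro-Wilk normality test: a low p-value means the distance
        # distribution deviates from the expected benign shape.
        _, p_value = stats.shapiro(np.array(self.min_dists))
        return p_value < self.p_threshold
```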
Towards Deep Learning Models Resistant to Adversarial Attacks
TLDR
This work studies the adversarial robustness of neural networks through the lens of robust optimization, and suggests the notion of security against a first-order adversary as a natural and broad security guarantee.
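The inner maximization of that robust-optimization view is the projected gradient descent (PGD) attack; a minimal L-infinity sketch is shown below, with step size, budget, and iteration count chosen purely for illustration.

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8 / 255, alpha=2 / 255, steps=10):
    """Approximately solve  max_{||delta||_inf <= eps} loss(model(x + delta), y)
    by iterated signed-gradient ascent with projection back onto the eps-ball."""
    x_adv = (x.detach() + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1)
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)
        with torch.no_grad():
            x_adv = x_adv + alpha * grad.sign()        # ascend the loss
            x_adv = x + (x_adv - x).clamp(-eps, eps)   # project onto the eps-ball
            x_adv = x_adv.clamp(0, 1)                  # stay in the valid pixel range
    return x_adv.detach()

# Adversarial training then simply minimizes the loss on pgd_attack(model, x, y)
# instead of on the clean batch x.
```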
Wild Patterns: Ten Years After the Rise of Adversarial Machine Learning
TLDR
This tutorial introduces the fundamentals of adversarial machine learning to the security community, and presents recently proposed techniques to assess the performance of pattern classifiers and deep learning algorithms under attack, evaluate their vulnerabilities, and implement defense strategies that make learning algorithms more robust to attacks.