Corpus ID: 233231471

Towards Causal Federated Learning For Enhanced Robustness and Privacy

@article{Francis2021TowardsCF,
  title={Towards Causal Federated Learning For Enhanced Robustness and Privacy},
  author={Sreya Francis and Irene Tenison and Irina Rish},
  journal={ArXiv},
  year={2021},
  volume={abs/2104.06557}
}
Federated Learning is an emerging privacy-preserving distributed machine learning approach to building a shared model by performing distributed training locally on participating devices (clients) and aggregating the local models into a global one. As this approach prevents data collection and aggregation, it helps in reducing associated privacy risks to a great extent. However, the data samples across all participating clients are usually not independent and identically distributed (non-i.i.d.)…
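
The abstract describes the standard federated averaging loop: clients train locally on their private data and the server aggregates the resulting models into a global one. A minimal sketch of that loop is given below; the two-layer model, synthetic client datasets, and hyperparameters are illustrative placeholders, not taken from the paper.

```python
# Minimal FedAvg sketch (not the paper's code): each client trains locally,
# the server averages client weights into the global model.
import copy
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

def local_train(model, loader, epochs=1, lr=0.1):
    """Run a few epochs of SGD on one client's private data."""
    model = copy.deepcopy(model)
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for x, y in loader:
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()
    return model.state_dict()

def fed_avg(global_model, client_loaders, rounds=5):
    """Aggregate client models by (unweighted) parameter averaging."""
    for _ in range(rounds):
        client_states = [local_train(global_model, dl) for dl in client_loaders]
        avg_state = {
            k: torch.stack([s[k].float() for s in client_states]).mean(dim=0)
            for k in client_states[0]
        }
        global_model.load_state_dict(avg_state)
    return global_model

if __name__ == "__main__":
    # Synthetic, shifted client datasets standing in for real non-i.i.d. data.
    clients = []
    for shift in (0.0, 1.5, -1.5):
        x = torch.randn(256, 10) + shift
        y = (x.sum(dim=1) > shift * 10).long()
        clients.append(DataLoader(TensorDataset(x, y), batch_size=32, shuffle=True))
    model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 2))
    fed_avg(model, clients)
```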

References

Showing 1–10 of 15 references
Differentially Private Federated Learning: A Client Level Perspective
Proposes an algorithm aiming to hide clients' contributions during training, balancing the trade-off between privacy loss and model performance; empirical studies suggest that, given a sufficiently large number of participating clients, the procedure can maintain client-level differential privacy at only a minor cost in model performance.
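
As a rough illustration of the client-level mechanism this summary points to (a sketch under simplifying assumptions, not the authors' implementation), the server can clip each client's model update to an L2 bound and add Gaussian noise to the average. The `clip_norm` and `noise_mult` values below are placeholders, and for brevity the clipping is applied per parameter tensor rather than to the full flattened update.

```python
# Hedged sketch of client-level DP aggregation: clip each client's update
# to an L2 bound, average, and add Gaussian noise (placeholder constants).
import torch

def dp_aggregate(global_state, client_states, clip_norm=1.0, noise_mult=0.5):
    """Average clipped client updates and perturb them with Gaussian noise."""
    n = len(client_states)
    new_state = {}
    for key, g in global_state.items():
        clipped = []
        for state in client_states:
            delta = state[key].float() - g.float()
            factor = min(1.0, clip_norm / (delta.norm().item() + 1e-12))
            clipped.append(delta * factor)  # bound each client's contribution
        avg = torch.stack(clipped).mean(dim=0)
        noise = torch.randn_like(avg) * (noise_mult * clip_norm / n)
        new_state[key] = g.float() + avg + noise
    return new_state
```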
Exploiting Unintended Feature Leakage in Collaborative Learning
Shows that an adversarial participant can infer the presence of exact data points (for example, specific locations) in others' training data, and develops passive and active inference attacks to exploit this leakage.
Federated Learning with Non-IID Data
Presents a strategy to improve training on non-IID data by creating a small subset of data that is globally shared between all the edge devices, and shows that accuracy can be increased by 30% for the CIFAR-10 dataset with only 5% globally shared data.
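
The shared-subset idea can be sketched as follows (an illustration, not the paper's code): every client trains on its own non-i.i.d. data concatenated with a small dataset held by all devices. The tensor shapes and the roughly 5% subset size below are placeholders.

```python
# Hedged sketch of the globally-shared-subset strategy for non-IID clients.
import torch
from torch.utils.data import ConcatDataset, DataLoader, TensorDataset

def make_client_loader(local_ds, shared_ds, batch_size=32):
    """Mix a client's local (non-i.i.d.) data with the globally shared subset."""
    return DataLoader(ConcatDataset([local_ds, shared_ds]),
                      batch_size=batch_size, shuffle=True)

# Example: a shared subset roughly 5% the size of a client's local data.
shared = TensorDataset(torch.randn(50, 10), torch.randint(0, 2, (50,)))
local = TensorDataset(torch.randn(1000, 10), torch.randint(0, 2, (1000,)))
loader = make_client_loader(local, shared)
```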
Comprehensive Privacy Analysis of Deep Learning: Passive and Active White-box Inference Attacks against Centralized and Federated Learning
Investigates why deep learning models may leak information about their training data, and designs new white-box inference attacks that exploit privacy vulnerabilities of the stochastic gradient descent algorithm used to train deep neural networks.
Comprehensive Privacy Analysis of Deep Learning: Stand-alone and Federated Learning under Passive and Active White-box Inference Attacks
Shows, by analyzing state-of-the-art pre-trained and publicly available models for the CIFAR dataset, that even well-generalized models are significantly susceptible to white-box membership inference attacks.
Privacy Risk in Machine Learning: Analyzing the Connection to Overfitting
Examines the effect that overfitting and influence have on an attacker's ability to learn information about the training data from machine learning models, through either training set membership inference or attribute inference attacks.
Stealing Machine Learning Models via Prediction APIs
Demonstrates simple, efficient attacks that extract target ML models with near-perfect fidelity for popular model classes, including logistic regression, neural networks, and decision trees, against the online services of BigML and Amazon Machine Learning.
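
For logistic regression, the extraction idea can be sketched as an equation-solving attack (an illustration under simplifying assumptions, not the paper's full attack suite): because the service returns confidence scores, querying d + 1 independent inputs yields a linear system in the unknown weights via the logit transform. The `query_api` function below is a hypothetical stand-in for a remote prediction endpoint.

```python
# Sketch of equation-solving extraction for a binary logistic regression model
# that returns class-1 probabilities; query_api is a hypothetical stand-in
# for the remote prediction API being attacked.
import numpy as np

rng = np.random.default_rng(0)
d = 5
true_w, true_b = rng.normal(size=d), 0.3        # the service's secret parameters

def query_api(x):
    """Pretend remote API: returns P(y=1 | x) for one input vector."""
    return 1.0 / (1.0 + np.exp(-(true_w @ x + true_b)))

# Query d + 1 random inputs and convert the returned probabilities to logits.
X = rng.normal(size=(d + 1, d))
logits = np.array([np.log(p / (1.0 - p)) for p in (query_api(x) for x in X)])

# Solve [X | 1] @ [w; b] = logits for the unknown weights and bias.
A = np.hstack([X, np.ones((d + 1, 1))])
recovered = np.linalg.solve(A, logits)
print("recovered w:", recovered[:-1], "recovered b:", recovered[-1])
```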
Membership Inference Attacks Against Machine Learning Models
Quantitatively investigates how machine learning models leak information about the individual data records on which they were trained, and empirically evaluates the inference techniques on classification models trained by commercial "machine learning as a service" providers such as Google and Amazon.
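
The paper's attack trains shadow models, but the signal it exploits can be illustrated with a much simpler confidence-thresholding baseline (a hedged stand-in, not the authors' method): training-set members tend to receive higher predicted confidence on their true label than non-members.

```python
# Simplified membership-inference baseline: flag an example as a training-set
# member if the model's confidence in its true label exceeds a threshold.
# This is a stand-in for the shadow-model attack, not the paper's method.
import numpy as np

def membership_guess(confidences, true_labels, threshold=0.9):
    """confidences: (n, num_classes) softmax outputs; returns boolean guesses."""
    conf_on_true = confidences[np.arange(len(true_labels)), true_labels]
    return conf_on_true > threshold

# Toy usage: overconfident predictions are guessed to be training members.
probs = np.array([[0.97, 0.03], [0.55, 0.45]])
labels = np.array([0, 0])
print(membership_guess(probs, labels))  # [ True False]
```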
Learning Differentially Private Language Models Without Losing Accuracy
Builds on recent advances in training deep networks on user-partitioned data and in privacy accounting for stochastic gradient descent, and finds that private LSTM language models are quantitatively and qualitatively similar to un-noised models when trained on a large dataset.
Model Inversion Attacks that Exploit Confidence Information and Basic Countermeasures
Develops a new class of model inversion attack that exploits confidence values revealed along with predictions, and shows it can estimate whether a respondent in a lifestyle survey admitted to cheating on their significant other and can recover recognizable images of people's faces given only their name.
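
One common way to realize this kind of confidence-exploiting inversion (sketched here under assumptions, not reproduced from the paper) is gradient ascent on the input: starting from a blank input, repeatedly adjust it to maximize the model's confidence for the target class, such as a particular person's identity. The placeholder classifier and image shape below are illustrative only.

```python
# Sketch of confidence-based model inversion: gradient ascent on the input to
# maximize the target class score of a trained classifier (placeholder model).
import torch
import torch.nn as nn

def invert(model, target_class, input_shape, steps=200, lr=0.1):
    x = torch.zeros(1, *input_shape, requires_grad=True)
    opt = torch.optim.SGD([x], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        # Negative log-confidence as loss, so descent maximizes confidence.
        loss = -torch.log_softmax(model(x), dim=1)[0, target_class]
        loss.backward()
        opt.step()
        x.data.clamp_(0.0, 1.0)  # keep the reconstruction in a valid pixel range
    return x.detach()

# Toy usage with a placeholder classifier over 28x28 "images".
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
reconstruction = invert(model, target_class=3, input_shape=(28, 28))
```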