• Publications
Advances and Open Problems in Federated Learning
TLDR: Federated learning (FL) is a machine learning setting where many clients (e.g., mobile devices or whole organizations) collaboratively train a model under the orchestration of a central server, while keeping the training data decentralized.
  • 493 citations, 46 highly influential
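The orchestration pattern this TLDR describes can be sketched as a federated-averaging loop. This is a minimal illustration, not the paper's implementation: the linear model, client data, and hyperparameters below are all hypothetical.

```python
import numpy as np

def client_update(w, X, y, lr=0.1, epochs=5):
    """Local gradient steps on one client's data (linear model, squared loss).
    Raw training data never leaves the client."""
    w = w.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

def federated_round(w_global, clients):
    """One orchestration round: broadcast the global model, run local
    training on each client, then aggregate by dataset-size-weighted average."""
    updates, sizes = [], []
    for X, y in clients:
        updates.append(client_update(w_global, X, y))
        sizes.append(len(y))
    return np.average(updates, axis=0, weights=np.array(sizes, dtype=float))

# Toy setup: four clients, each holding samples from the same linear task
rng = np.random.default_rng(0)
w_true = np.array([2.0, -1.0])
clients = []
for _ in range(4):
    X = rng.normal(size=(50, 2))
    clients.append((X, X @ w_true))

w = np.zeros(2)
for _ in range(20):
    w = federated_round(w, clients)
print(np.round(w, 2))
```

The server only ever sees model parameters, never the clients' `(X, y)` pairs, which is the decentralization property the summary emphasizes.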
Analyzing Federated Learning through an Adversarial Lens
TLDR: We explore the threat of model poisoning attacks on federated learning, initiated by a single non-colluding malicious agent whose objective is to cause the model to misclassify a set of chosen inputs with high confidence.
  • 170 citations, 24 highly influential
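The core of a single-agent model poisoning attack in this setting can be illustrated with explicit boosting: the malicious client scales its poisoned update so that it survives the server's averaging step. A toy sketch under assumed conditions (plain averaging, benign clients contributing near-zero updates this round; the target weights are hypothetical):

```python
import numpy as np

def aggregate(updates):
    """Server-side plain averaging of client model updates."""
    return np.mean(updates, axis=0)

def malicious_update(w_global, w_target, n_clients):
    """Explicit boosting: scale the poisoned direction by the number of
    clients, so that after averaging with the other clients' near-zero
    updates the global model lands on the adversary's target."""
    return n_clients * (w_target - w_global)

n_clients = 10
w_global = np.zeros(3)
w_target = np.array([0.5, -0.2, 1.0])      # weights that misclassify chosen inputs (hypothetical)
benign = [np.zeros(3) for _ in range(n_clients - 1)]
updates = benign + [malicious_update(w_global, w_target, n_clients)]
w_global = w_global + aggregate(updates)
print(w_global)
```

A single attacker thus steers the averaged model to its target in one round, which is why the paper's threat model needs only one non-colluding malicious agent.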
Practical Black-Box Attacks on Deep Neural Networks Using Efficient Query Mechanisms
TLDR: In this paper, we propose novel Gradient Estimation black-box attacks for adversaries with query access to the target model's class probabilities, which do not rely on transferability.
  • 84 citations, 13 highly influential
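The Gradient Estimation idea can be sketched with finite differences: the attacker queries only class probabilities and approximates the gradient numerically, then takes an FGSM-style step. A minimal illustration with a hypothetical linear-softmax target model (the real attacks use query-efficient variants of this):

```python
import numpy as np

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def query(model_w, x):
    """Black-box query: the attacker observes only class probabilities."""
    return softmax(model_w @ x)

def estimate_gradient(f, x, delta=1e-4):
    """Central finite-difference estimate of df/dx using 2 * dim queries."""
    grad = np.zeros_like(x)
    for i in range(x.size):
        e = np.zeros_like(x)
        e[i] = delta
        grad[i] = (f(x + e) - f(x - e)) / (2 * delta)
    return grad

rng = np.random.default_rng(1)
W = rng.normal(size=(3, 5))               # target model, hidden from the attacker
x = rng.normal(size=5)
true_label = int(np.argmax(query(W, x)))

# Descend the true class's probability using only the *estimated* gradient
loss = lambda x_: query(W, x_)[true_label]
x_adv = x - 0.5 * np.sign(estimate_gradient(loss, x))
print(np.argmax(query(W, x_adv)) != true_label)
```

Because the gradient comes from probability queries alone, no substitute model or transferability assumption is needed, which is the point the TLDR makes.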
Exploring the Space of Black-box Attacks on Deep Neural Networks
TLDR: We propose novel Gradient Estimation black-box attacks for adversaries with query access to the target model's class probabilities, which do not rely on transferability.
  • 61 citations, 7 highly influential
DARTS: Deceiving Autonomous Cars with Toxic Signs
TLDR: We propose and examine realistic security attacks against sign recognition systems for Deceiving Autonomous caRs with Toxic Signs (the proposed attacks are collectively called DARTS).
  • 103 citations, 6 highly influential
Dimensionality Reduction as a Defense against Evasion Attacks on Machine Learning Classifiers
TLDR: We propose a strategy for incorporating dimensionality reduction via Principal Component Analysis to enhance the resilience of machine learning, targeting both the classification and training phases.
  • 109 citations, 6 highly influential
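The defense this TLDR describes can be sketched as project-and-reconstruct: map inputs onto the top-k principal components of the training data, discarding perturbation energy that lies off the data manifold. A toy illustration, assuming data concentrated in a low-dimensional subspace (the dimensions and noise scale are hypothetical):

```python
import numpy as np

def fit_pca(X, k):
    """Top-k principal directions of the training data."""
    mu = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
    return mu, Vt[:k]

def project(x, mu, components):
    """Map an input onto the top-k subspace and reconstruct it;
    perturbation components outside the subspace are discarded."""
    z = (x - mu) @ components.T
    return mu + z @ components

rng = np.random.default_rng(2)
# Training data lying in a 2-D subspace of 10-D input space
basis = rng.normal(size=(2, 10))
X = rng.normal(size=(200, 2)) @ basis
mu, comps = fit_pca(X, k=2)

x = X[0]
x_adv = x + rng.normal(size=10) * 0.5     # off-manifold adversarial noise
# After projection, most of the perturbation energy is gone
print(np.linalg.norm(project(x_adv, mu, comps) - x),
      np.linalg.norm(x_adv - x))
```

Applying the same projection at training time (the "training phase" in the summary) makes the classifier operate entirely in the reduced space.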
Lower Bounds on Adversarial Robustness from Optimal Transport
TLDR: In this paper, we use optimal transport to characterize the maximum achievable accuracy in an adversarial classification scenario, which is often referred to as adversarial robustness.
  • 28 citations, 5 highly influential
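An empirical flavor of the optimal-transport characterization can be sketched as follows: pair up points from the two classes whose eps-balls overlap, since no classifier can robustly separate such a pair. This is a loose illustration of the idea, not the paper's exact bound; the data and budget are hypothetical.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def transport_accuracy_bound(X0, X1, eps):
    """Empirical sketch: 0-1 transport cost between the two empirical
    class distributions, where a pair costs 0 if their eps-balls overlap
    (||x0 - x1|| <= 2*eps).  With minimal cost D, robust accuracy is
    capped at (1 + D) / 2."""
    dists = np.linalg.norm(X0[:, None, :] - X1[None, :, :], axis=2)
    cost = (dists > 2 * eps).astype(float)    # 0 = confusable pair
    rows, cols = linear_sum_assignment(cost)  # min-cost coupling
    D = cost[rows, cols].mean()
    return (1 + D) / 2

rng = np.random.default_rng(3)
X0 = rng.normal(loc=-1.0, size=(100, 2))   # class 0 samples
X1 = rng.normal(loc=+1.0, size=(100, 2))   # class 1 samples

b_clean = transport_accuracy_bound(X0, X1, eps=0.0)   # no adversary
b_large = transport_accuracy_bound(X0, X1, eps=2.0)   # large budget
print(b_clean, b_large)
```

As the budget grows, more cross-class pairs become confusable, the transport cost drops, and the achievable accuracy of any classifier is pushed toward chance.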
PAC-learning in the presence of evasion adversaries
TLDR: In this paper, we step away from the attack-defense arms race and seek to understand the limits of what can be learned in the presence of an evasion adversary.
  • 26 citations, 5 highly influential
PAC-learning in the presence of adversaries
TLDR: The existence of evasion attacks during the test phase of machine learning algorithms represents a significant challenge to both their deployment and understanding.
  • 33 citations, 4 highly influential
Enhancing robustness of machine learning systems via data transformations
TLDR: We present and investigate strategies for incorporating a variety of data transformations, including dimensionality reduction via Principal Component Analysis, to enhance the resilience of machine learning, targeting both the classification and training phases.
  • 124 citations, 1 highly influential