Publications
Membership Inference Attacks Against Machine Learning Models
TLDR
This work quantitatively investigates how machine learning models leak information about the individual data records on which they were trained and empirically evaluates the inference techniques on classification models trained by commercial "machine learning as a service" providers such as Google and Amazon.
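A rough sketch of the underlying idea (not the paper's shadow-model attack): a confidence-thresholding membership test, where records the target model is unusually confident about are guessed to be training members. The dataset, model choice, and 0.9 threshold below are arbitrary assumptions for illustration.

```python
# Minimal sketch of a confidence-thresholding membership inference test.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)
X_in, X_out, y_in, y_out = train_test_split(X, y, test_size=0.5, random_state=0)

# "Target" model trained only on the in-set.
target = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_in, y_in)

def guess_member(model, x, label, threshold=0.9):
    # Records the model assigns high confidence to on their true label are
    # guessed to be training members (an arbitrary heuristic threshold).
    return model.predict_proba([x])[0][label] >= threshold

in_rate = np.mean([guess_member(target, x, l) for x, l in zip(X_in, y_in)])
out_rate = np.mean([guess_member(target, x, l) for x, l in zip(X_out, y_out)])
print(f"flagged as members: training records={in_rate:.2f}, held-out records={out_rate:.2f}")
```

A gap between the two rates is exactly the kind of leakage the paper measures and exploits.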
Privacy-preserving deep learning
TLDR
This paper presents a practical system that enables multiple parties to jointly learn an accurate neural-network model for a given objective without sharing their input datasets, and exploits the fact that the optimization algorithms used in modern deep learning, namely, those based on stochastic gradient descent, can be parallelized and executed asynchronously.
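A toy sketch of the selective parameter-sharing idea: parties take turns downloading the shared parameters, computing a local gradient, and uploading only a fraction of its largest entries to a parameter server. The data, share rate, and learning rate are made up, and asynchrony is omitted.

```python
# Toy sketch of selective gradient sharing for a jointly trained linear model.
import numpy as np

rng = np.random.default_rng(0)
true_w = rng.normal(size=5)
# Three parties, each with a private dataset drawn from the same linear model.
data = []
for _ in range(3):
    X = rng.normal(size=(100, 5))
    y = X @ true_w + 0.1 * rng.normal(size=100)
    data.append((X, y))

global_w = np.zeros(5)      # parameter-server state
share_fraction = 0.4        # each party uploads only 40% of its gradient entries

for _ in range(200):
    for X, y in data:                         # parties act in turn
        w = global_w.copy()                   # download current parameters
        grad = 2 * X.T @ (X @ w - y) / len(y)
        k = int(share_fraction * len(grad))
        top = np.argsort(-np.abs(grad))[:k]   # keep only the largest entries
        update = np.zeros_like(grad)
        update[top] = grad[top]
        global_w -= 0.05 * update             # server applies the partial gradient

print("learned:", np.round(global_w, 2))
print("true:   ", np.round(true_w, 2))
```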
De-anonymizing Social Networks
TLDR
A framework for analyzing privacy and anonymity in social networks is presented and a new re-identification algorithm targeting anonymized social-network graphs is developed, showing that a third of the users who can be verified to have accounts on both Twitter and Flickr can be re-identified in the anonymous Twitter graph.
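A toy sketch of the seed-and-propagate flavor of graph re-identification: starting from a few known seed mappings between an auxiliary graph and an anonymized graph, the mapping is extended by matching nodes whose already-mapped neighbors overlap most. The two small graphs and seeds below are fabricated for illustration and omit the paper's scoring refinements.

```python
# Seed-based propagation over two small isomorphic graphs (illustrative only).
aux = {    # auxiliary graph, e.g. crawled from a public network
    "alice": {"bob", "carol"}, "bob": {"alice", "dave"},
    "carol": {"alice", "dave"}, "dave": {"bob", "carol"},
}
anon = {   # anonymized graph with the same structure under numeric labels
    1: {2, 3}, 2: {1, 4}, 3: {1, 4}, 4: {2, 3},
}
mapping = {"alice": 1, "bob": 2}   # seeds the adversary already knows

changed = True
while changed:
    changed = False
    for u in aux:
        if u in mapping:
            continue
        # Score each unmapped anonymous node by how many of u's mapped
        # neighbours it is connected to.
        scores = {v: sum(1 for n in aux[u] if mapping.get(n) in anon[v])
                  for v in anon if v not in mapping.values()}
        best = max(scores, key=scores.get)
        if scores[best] > 0:
            mapping[u] = best
            changed = True

print(mapping)   # every auxiliary user is re-identified in the anonymous graph
```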
Robust De-anonymization of Large Sparse Datasets
TLDR
This work applies the de-anonymization methodology to the Netflix Prize dataset, which contains anonymous movie ratings of 500,000 subscribers of Netflix, the world's largest online movie rental service, and demonstrates that an adversary who knows only a little bit about an individual subscriber can easily identify this subscriber's record in the dataset.
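A toy sketch of the scoring step behind such record linkage: the adversary knows a handful of (item, rating) pairs about a target and picks the anonymized record that best matches them. The ratings table and auxiliary knowledge below are fabricated, and the paper's eccentricity test for rejecting ambiguous matches is omitted.

```python
# Similarity-scoring de-anonymization of a sparse ratings table (illustrative only).
records = {
    "user_17": {"MovieA": 5, "MovieB": 3, "MovieC": 4},
    "user_42": {"MovieA": 5, "MovieD": 2, "MovieE": 4},
    "user_99": {"MovieB": 1, "MovieF": 5},
}
auxiliary = {"MovieA": 5, "MovieE": 4}   # what the adversary knows about the target

def score(record):
    # Count approximately matching ratings between the auxiliary knowledge
    # and a candidate anonymized record.
    return sum(1 for item, rating in auxiliary.items()
               if item in record and abs(record[item] - rating) <= 1)

best = max(records, key=lambda r: score(records[r]))
print("best match:", best, "with score", score(records[best]))
```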
How To Backdoor Federated Learning
TLDR
This work designs and evaluates a new model-poisoning methodology based on model replacement and demonstrates that any participant in federated learning can introduce hidden backdoor functionality into the joint global model, e.g., to ensure that an image classifier assigns an attacker-chosen label to images with certain features.
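A toy sketch of the model-replacement arithmetic: the attacker scales its update so that, after the server averages the round's updates, the global model lands (approximately) on the attacker's backdoored weights. Plain vectors stand in for model parameters, and the benign updates are assumed to be small.

```python
# Model replacement in one round of federated averaging (toy parameter vectors).
import numpy as np

n = 10                                   # participants averaged per round
global_w = np.zeros(4)                   # current global model
benign_updates = [np.random.default_rng(i).normal(0, 0.01, 4) for i in range(n - 1)]
backdoored_w = np.array([1.0, -2.0, 0.5, 3.0])   # model the attacker wants installed

# The attacker boosts its update by a factor of n so that averaging undoes the
# dilution; benign updates are assumed to be near zero at convergence.
attacker_update = n * (backdoored_w - global_w)

new_global = global_w + (sum(benign_updates) + attacker_update) / n
print("new global model: ", np.round(new_global, 3))
print("attacker's target:", backdoored_w)
```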
Exploiting Unintended Feature Leakage in Collaborative Learning
TLDR
This work shows that an adversarial participant can infer the presence of exact data points -- for example, specific locations -- in others' training data and develops passive and active inference attacks to exploit this leakage.
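One concrete form this leakage takes is in sparse layers: a shared gradient is nonzero only on the rows corresponding to features that actually appeared in the other participant's batch. The sketch below is a fabricated illustration of that effect, not the paper's full passive/active attack machinery.

```python
# Exact-membership leakage from a shared gradient of a sparse linear layer.
import numpy as np

vocab = ["home", "office", "gym", "airport", "hospital"]   # e.g. location tokens
W = np.zeros((len(vocab), 1))                              # shared model weights

# The victim computes a gradient on a private batch containing "gym" and "hospital".
batch = np.zeros((len(vocab), 1))
batch[[2, 4]] = 1.0
grad = batch * 0.37         # any loss gradient is supported only on the used rows

# The adversarial participant observes the shared gradient and reads off the rows.
leaked = [vocab[i] for i in np.flatnonzero(np.abs(grad).sum(axis=1) > 0)]
print("data points present in the victim's batch:", leaked)
```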
The most dangerous code in the world: validating SSL certificates in non-browser software
TLDR
It is demonstrated that SSL certificate validation is completely broken in many security-critical applications and libraries, and the badly designed APIs of SSL implementations and data-transport libraries, which present developers with a confusing array of settings and options, are analyzed.
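In the spirit of those findings, a small sketch contrasting a broken TLS client configuration with a correct one in Python's standard ssl module; "example.com" is a placeholder host.

```python
# Broken vs. correct certificate validation with the standard library.
import socket
import ssl

host = "example.com"

# Broken: certificate and hostname checks are disabled, so any man-in-the-middle
# certificate is silently accepted. (Shown for contrast; not used below.)
insecure = ssl._create_unverified_context()

# Correct: the default context verifies the certificate chain against the
# system trust store and checks that the certificate matches the hostname.
secure = ssl.create_default_context()

with socket.create_connection((host, 443)) as sock:
    with secure.wrap_socket(sock, server_hostname=host) as tls:
        print("negotiated", tls.version(), "with verified peer", host)
```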
Airavat: Security and Privacy for MapReduce
TLDR
Airavat, a MapReduce-based system, is a novel integration of mandatory access control and differential privacy that provides strong security and privacy guarantees for distributed computations on sensitive data.
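A minimal sketch of the Laplace mechanism that such systems apply to aggregate outputs; the sensitivity and epsilon values below are illustrative, not Airavat's actual configuration.

```python
# Laplace mechanism applied to a single aggregate query (illustrative values).
import numpy as np

def noisy_sum(values, sensitivity, epsilon, seed=0):
    # Add Laplace noise with scale sensitivity/epsilon to the exact sum,
    # giving epsilon-differential privacy for this one query.
    rng = np.random.default_rng(seed)
    return sum(values) + rng.laplace(scale=sensitivity / epsilon)

per_user_counts = [3, 7, 2, 9, 4]   # each user's contribution bounded by the declared sensitivity
print("noisy sum:", noisy_sum(per_user_counts, sensitivity=10, epsilon=0.5))
```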
Fast dictionary attacks on passwords using time-space tradeoff
TLDR
It is demonstrated that as long as passwords remain human-memorable, they are vulnerable to "smart-dictionary" attacks even when the space of potential passwords is large, calling into question the viability of human-memorable character-sequence passwords as an authentication mechanism.
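A toy sketch of the "smart dictionary" ingredient: candidate passwords are ranked by character-bigram probabilities learned from a sample of real passwords, so human-memorable strings are tried first. The training list, candidate alphabet, and crude add-one smoothing below are made up for illustration and omit the time-space tradeoff itself.

```python
# Rank candidate passwords by a character-bigram Markov model (illustrative only).
from collections import Counter
from itertools import product
import math

training = ["password", "letmein", "dragon", "sunshine", "monkey"]
bigrams = Counter(w[i:i + 2] for w in training for i in range(len(w) - 1))
total = sum(bigrams.values())

def score(candidate):
    # Log-probability of the candidate under the bigram model (add-one smoothing).
    return sum(math.log((bigrams[candidate[i:i + 2]] + 1) / (total + 1))
               for i in range(len(candidate) - 1))

candidates = ["".join(p) for p in product("adenorsw", repeat=4)]
for guess in sorted(candidates, key=score, reverse=True)[:5]:
    print(guess, round(score(guess), 2))
```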
Constraint solving for bounded-process cryptographic protocol analysis
The reachability problem for cryptographic protocols with non-atomic keys can be solved via a simple constraint satisfaction procedure.
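As a much-simplified illustration of what such procedures build on, a sketch of the ground (variable-free) deducibility check: can an intruder derive a target term from its knowledge using pairing, projection, and symmetric encryption and decryption? The term representation and examples below are assumptions for illustration, not the paper's constraint-solving procedure.

```python
# Ground Dolev-Yao deducibility check over terms ("pair", a, b) and ("enc", msg, key).
def deducible(target, knowledge):
    known = set(knowledge)
    grown = True
    while grown:                                  # analysis: break known terms apart
        grown = False
        for t in list(known):
            new = set()
            if isinstance(t, tuple) and t[0] == "pair":
                new = {t[1], t[2]}
            elif isinstance(t, tuple) and t[0] == "enc" and t[2] in known:
                new = {t[1]}                       # decrypt with a known key
            if not new <= known:
                known |= new
                grown = True
    return synthesizable(target, known)

def synthesizable(t, known):                      # synthesis: build the target up
    if t in known:
        return True
    if isinstance(t, tuple) and t[0] in ("pair", "enc"):
        return synthesizable(t[1], known) and synthesizable(t[2], known)
    return False

# The intruder holds a ciphertext and its key, so it can recover the secret inside.
print(deducible("secret", {("enc", "secret", "k"), "k"}))    # True
print(deducible("secret", {("enc", "secret", "k2"), "k"}))   # False
```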