• Publications
ML-Leaks: Model and Data Independent Membership Inference Attacks and Defenses on Machine Learning Models
TLDR
This is the most comprehensive study so far of this emerging threat; using eight diverse datasets, it shows the viability of the proposed attacks across domains and proposes the first effective defense mechanisms against this broader class of membership inference attacks that maintain a high level of ML-model utility.
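The core membership-inference idea can be illustrated with a confidence-threshold rule: overfitted models tend to be more confident on samples they were trained on. This is a minimal sketch under that assumption; the function name and threshold are illustrative, and the paper's actual attacks are more elaborate (e.g. involving shadow models).

```python
def max_posterior_attack(posteriors, threshold=0.9):
    # Guess that a sample was a training member when the model's top
    # predicted-class probability exceeds the threshold, exploiting
    # the higher confidence models typically show on training data.
    return [max(p) >= threshold for p in posteriors]

# Toy example: one confident prediction, one uncertain prediction.
guesses = max_posterior_attack([[0.98, 0.01, 0.01],
                                [0.40, 0.35, 0.25]])
# guesses -> [True, False]
```

Defenses in this space aim to break exactly this link between output confidence and training membership.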
Decentralized Privacy-Preserving Proximity Tracing
TLDR
This system, referred to as DP3T, provides a technological foundation to help slow the spread of SARS-CoV-2 by simplifying and accelerating the process of notifying people who might have been exposed to the virus so that they can take appropriate measures to break its transmission chain.
On the (Statistical) Detection of Adversarial Examples
TLDR
It is shown that statistical properties of adversarial examples are essential to their detection: adversarial examples are not drawn from the same distribution as the original data and can thus be detected using statistical tests.
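A detection scheme of this kind reduces to a two-sample test on some feature of the inputs. The sketch below uses a hand-rolled two-sample Kolmogorov-Smirnov statistic as a stand-in for such a test; the specific statistic, the feature values, and the samples are illustrative assumptions, not the paper's exact construction.

```python
def ks_statistic(sample_a, sample_b):
    # Two-sample Kolmogorov-Smirnov statistic: the maximum gap between
    # the empirical CDFs of the two samples. Values near 1 suggest the
    # samples come from different distributions.
    a, b = sorted(sample_a), sorted(sample_b)

    def ecdf(sorted_sample, x):
        return sum(1 for v in sorted_sample if v <= x) / len(sorted_sample)

    return max(abs(ecdf(a, x) - ecdf(b, x)) for x in a + b)

# Benign feature values vs. values shifted by a perturbation: the two
# empirical distributions are fully separated, so the statistic is large.
benign = [0.1, 0.2, 0.3, 0.4, 0.5]
shifted = [0.6, 0.7, 0.8, 0.9, 1.0]
gap = ks_statistic(benign, shifted)
```

A threshold on such a statistic (calibrated on clean data) then flags batches of suspected adversarial inputs.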
Adversarial Examples for Malware Detection
TLDR
This paper presents adversarial examples derived from regular inputs by introducing minor, yet carefully selected, perturbations, and uses them to evaluate the robustness of machine learning models against inputs crafted by an adversary.
Adversarial Perturbations Against Deep Neural Networks for Malware Classification
TLDR
This paper shows how to construct highly effective adversarial sample crafting attacks on neural networks used as malware classifiers, and evaluates to what extent potential defensive mechanisms against adversarial crafting can be carried over to the setting of malware classification.
You Get Where You're Looking for: The Impact of Information Sources on Code Security
TLDR
Analyzing how the use of information resources impacts code security confirms that official API documentation leads to secure but hard-to-use code, while informal resources such as Stack Overflow are more accessible but often lead to insecure code.
Reliable Third-Party Library Detection in Android and its Security Applications
TLDR
This paper proposes a library detection technique that is resilient against common code obfuscations and capable of pinpointing the exact library version used in apps, and is the first to quantify the security impact of third-party libraries on the Android ecosystem.
MemGuard: Defending against Black-Box Membership Inference Attacks via Adversarial Examples
TLDR
This work proposes MemGuard, the first defense with formal utility-loss guarantees against black-box membership inference attacks, and is the first to show that adversarial examples can be used as a defensive mechanism against membership inference attacks.
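The defensive idea can be sketched as perturbing the model's confidence vector without changing its predicted label, so accuracy is preserved while the membership signal in the scores is blunted. The uniform-mixing rule below is a deliberate simplification chosen for illustration; MemGuard itself searches for a carefully crafted (adversarial-example-style) noise vector rather than mixing with the uniform distribution.

```python
def noised_confidences(confidences, noise=0.3):
    # Mix the confidence vector toward the uniform distribution.
    # The mixing is affine and monotone, so the argmax (predicted
    # label) is unchanged and the vector still sums to 1, but the
    # scores leak less about training-set membership.
    k = len(confidences)
    return [(1 - noise) * c + noise / k for c in confidences]

# The top class stays the top class; only the margins shrink.
raw = [0.7, 0.2, 0.1]
defended = noised_confidences(raw)
```

With `noise=0.3` the example vector becomes roughly [0.59, 0.24, 0.17]: same ranking, flatter scores.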
Oxymoron: Making Fine-Grained Memory Randomization Practical by Allowing Code Sharing
TLDR
Oxymoron is the first solution that is secure against just-in-time code-reuse attacks; it demonstrates that fine-grained memory randomization is feasible without forfeiting the enormous memory savings of shared libraries.
Automatic Discovery and Quantification of Information Leaks
TLDR
This work presents the first automatic method for information-flow analysis that discovers what information is leaked and computes a comprehensive quantitative interpretation of it, covering all established information-theoretic measures in quantitative information flow.