ML-Leaks: Model and Data Independent Membership Inference Attacks and Defenses on Machine Learning Models
- A. Salem, Yang Zhang, Mathias Humbert, Mario Fritz, M. Backes
- Network and Distributed System Security Symposium
- 4 June 2018
This is the most comprehensive study so far of this emerging threat; it uses eight diverse datasets to show the viability of the proposed attacks across domains, and it proposes the first effective defense mechanisms against this broader class of membership inference attacks that maintain a high level of utility of the ML model.
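In the spirit of the paper's simplest, data- and model-independent adversary, a membership decision can be reduced to thresholding the target model's maximum posterior. The sketch below is only an illustration of that intuition; the threshold value and the function names are assumptions, not the paper's exact procedure.

```python
# Illustrative sketch of a confidence-threshold membership inference attack:
# predict "member" when the target model is unusually confident on an input.
import numpy as np

def max_confidence(posteriors: np.ndarray) -> np.ndarray:
    """Highest predicted class probability per sample (shape: [n, classes])."""
    return posteriors.max(axis=1)

def membership_attack(posteriors: np.ndarray, threshold: float = 0.9) -> np.ndarray:
    """Return 1 (member) where confidence exceeds the threshold, else 0."""
    return (max_confidence(posteriors) >= threshold).astype(int)

# Toy usage: training members tend to receive sharper posteriors than non-members.
member_posteriors = np.array([[0.97, 0.02, 0.01], [0.95, 0.03, 0.02]])
nonmember_posteriors = np.array([[0.55, 0.30, 0.15], [0.40, 0.35, 0.25]])
print(membership_attack(member_posteriors))      # -> [1 1]
print(membership_attack(nonmember_posteriors))   # -> [0 0]
```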
Decentralized Privacy-Preserving Proximity Tracing
- C. Troncoso, Mathias Payer, J. Pereira
- IEEE Data Engineering Bulletin
- 25 May 2020
This system, referred to as DP3T, provides a technological foundation to help slow the spread of SARS-CoV-2 by simplifying and accelerating the process of notifying people who might have been exposed to the virus so that they can take appropriate measures to break its transmission chain.
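The core privacy mechanism is a rotating daily secret key that is expanded into short, unlinkable ephemeral identifiers broadcast over Bluetooth. The sketch below captures only that rotating-key idea and is not the DP3T specification: the identifier length, the number of identifiers per day, the "broadcast key" label, and the SHAKE-based expansion are all illustrative assumptions.

```python
# Rough sketch of a rotating-key / ephemeral-ID scheme in the spirit of DP3T.
import hashlib
import hmac

EPHID_LEN = 16          # bytes per ephemeral identifier (assumption)
EPHIDS_PER_DAY = 96     # e.g. one identifier per 15-minute epoch (assumption)

def next_day_key(sk: bytes) -> bytes:
    """Derive the next day's secret key by hashing the previous one."""
    return hashlib.sha256(sk).digest()

def ephemeral_ids(sk_day: bytes) -> list[bytes]:
    """Expand a day key into short identifiers to broadcast over BLE."""
    seed = hmac.new(sk_day, b"broadcast key", hashlib.sha256).digest()
    stream = hashlib.shake_256(seed).digest(EPHID_LEN * EPHIDS_PER_DAY)
    return [stream[i:i + EPHID_LEN] for i in range(0, len(stream), EPHID_LEN)]

sk = hashlib.sha256(b"initial secret").digest()
ids_today = ephemeral_ids(sk)
sk = next_day_key(sk)   # rotate; identifiers from different days stay unlinkable
print(len(ids_today), ids_today[0].hex())
```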
Adversarial Examples for Malware Detection
- Kathrin Grosse, Nicolas Papernot, Praveen Manoharan, M. Backes, P. McDaniel
- European Symposium on Research in Computer Security
- 11 September 2017
This paper presents adversarial examples derived from regular inputs by introducing minor yet carefully selected perturbations, and evaluates the robustness of machine learning models for malware detection against such adversarially crafted inputs.
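A minimal sketch of the crafting idea follows, under the assumption that malware features are binary and that only feature additions (0 to 1) are allowed so functionality is plausibly preserved. The linear toy scorer and the greedy weight-based choice are illustrative stand-ins for the paper's neural network and its saliency-based selection rule.

```python
# Sketch: evade a binary-feature malware scorer by adding the features that
# most strongly push the score toward the benign side.
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=50)          # weights of a toy linear malware scorer
b = -0.5

def malware_score(x: np.ndarray) -> float:
    """Positive score -> classified as malware."""
    return float(w @ x + b)

def craft(x: np.ndarray, max_changes: int = 10) -> np.ndarray:
    x = x.copy()
    for _ in range(max_changes):
        if malware_score(x) <= 0:          # already looks benign
            break
        # Flip the unset feature whose weight lowers the score the most.
        candidates = np.where((x == 0) & (w < 0))[0]
        if candidates.size == 0:
            break
        x[candidates[np.argmin(w[candidates])]] = 1
    return x

x = (rng.random(50) < 0.3).astype(float)
print(malware_score(x), malware_score(craft(x)))
```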
On the (Statistical) Detection of Adversarial Examples
- Kathrin Grosse, Praveen Manoharan, Nicolas Papernot, M. Backes, P. McDaniel
- arXiv
- 21 February 2017
It is shown that adversarial examples are not drawn from the same distribution as the original data and can thus be detected using statistical tests; these statistical properties are essential to their detection.
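The detection idea can be illustrated with a two-sample kernel test between clean and suspected adversarial inputs. The sketch below uses a biased squared-MMD estimate with an RBF kernel and a permutation test; the bandwidth, sample sizes, and data are illustrative assumptions rather than the paper's experimental setup.

```python
# Sketch: distribution-level detection of adversarial inputs via MMD + permutation test.
import numpy as np

def mmd(x: np.ndarray, y: np.ndarray, gamma: float = 0.5) -> float:
    """Biased estimate of squared Maximum Mean Discrepancy with an RBF kernel."""
    def k(a, b):
        d = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * d)
    return k(x, x).mean() + k(y, y).mean() - 2 * k(x, y).mean()

def permutation_pvalue(x, y, trials=200, seed=0):
    rng = np.random.default_rng(seed)
    observed = mmd(x, y)
    pooled = np.vstack([x, y])
    count = 0
    for _ in range(trials):
        rng.shuffle(pooled)                      # shuffle rows in place
        if mmd(pooled[:len(x)], pooled[len(x):]) >= observed:
            count += 1
    return count / trials

rng = np.random.default_rng(1)
clean = rng.normal(0.0, 1.0, size=(50, 5))
adversarial = rng.normal(0.3, 1.0, size=(50, 5))   # shifted distribution
print(permutation_pvalue(clean, adversarial))       # small p-value -> distributions differ
```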
Reliable Third-Party Library Detection in Android and its Security Applications
- M. Backes, Sven Bugiel, Erik Derr
- Conference on Computer and Communications Security
- 24 October 2016
This paper proposes a library detection technique that is resilient against common code obfuscations and capable of pinpointing the exact library version used in apps, and it is the first to quantify the security impact of third-party libraries on the Android ecosystem.
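The resilience against renaming-based obfuscation comes from fingerprinting name-independent structure rather than identifiers. The toy sketch below hashes only per-class method and field counts; the real technique in the paper is considerably richer, and the data structures here are purely illustrative.

```python
# Toy sketch of obfuscation-resilient library fingerprinting: identifier names
# are dropped and only name-independent structure is hashed, so renaming does
# not change the fingerprint.
import hashlib

def class_descriptor(num_methods: int, num_fields: int) -> str:
    return f"{num_methods}:{num_fields}"

def library_fingerprint(classes: list[tuple[int, int]]) -> str:
    """Hash the multiset of class descriptors, ignoring all identifier names."""
    descriptors = sorted(class_descriptor(m, f) for m, f in classes)
    return hashlib.sha256("|".join(descriptors).encode()).hexdigest()

original   = [(12, 3), (5, 0), (7, 2)]   # (methods, fields) per class
obfuscated = [(7, 2), (12, 3), (5, 0)]   # same structure, renamed and reordered
print(library_fingerprint(original) == library_fingerprint(obfuscated))  # True
```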
MemGuard: Defending against Black-Box Membership Inference Attacks via Adversarial Examples
- Jinyuan Jia, Ahmed Salem, M. Backes, Yang Zhang, N. Gong
- Conference on Computer and Communications Security
- 23 September 2019
This work proposes MemGuard, the first defense with formal utility-loss guarantees against black-box membership inference attacks, and is the first to show that adversarial examples can be used as a defense mechanism against membership inference attacks.
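The core idea is to perturb the released confidence vector so that the attacker's membership classifier is misled, while keeping the predicted label unchanged and the vector a valid probability distribution. The real defense solves an optimization problem against the attack model; the sketch below only smooths the vector toward uniform under those two constraints, and the budget parameter is an assumption.

```python
# Illustrative constraint-preserving perturbation of a confidence vector.
import numpy as np

def defend(confidences: np.ndarray, budget: float = 0.4) -> np.ndarray:
    """Mix with the uniform distribution as long as the argmax is preserved."""
    n = confidences.size
    uniform = np.full(n, 1.0 / n)
    label = confidences.argmax()
    alpha = budget
    while alpha > 0:
        noisy = (1 - alpha) * confidences + alpha * uniform
        if noisy.argmax() == label:
            return noisy
        alpha -= 0.05
    return confidences

scores = np.array([0.96, 0.03, 0.01])
print(defend(scores))   # still argmax at class 0, but with a flatter distribution
```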
You Get Where You're Looking for: The Impact of Information Sources on Code Security
- Y. Acar, M. Backes, S. Fahl, Doowon Kim, Michelle L. Mazurek, Christian Stransky
- IEEE Symposium on Security and Privacy
- 22 May 2016
Analyzing how the use of information resources impacts code security confirms that official API documentation leads to secure code but is hard to use, while informal documentation such as Stack Overflow is more accessible but often leads to insecure code.
Adversarial Perturbations Against Deep Neural Networks for Malware Classification
- Kathrin Grosse, Nicolas Papernot, Praveen Manoharan, M. Backes, P. McDaniel
- arXiv
- 14 June 2016
This paper shows how to construct highly effective adversarial sample crafting attacks against neural networks used as malware classifiers, and evaluates to what extent potential defensive mechanisms against adversarial crafting can be leveraged in the setting of malware classification.
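One of the defensive mechanisms examined in this line of work is adversarial training: augment the training set with crafted evasive samples, labeled as malware, and refit the classifier. The sketch below assumes scikit-learn is available; the data, labels, and the crude stand-in perturbation are illustrative, not the paper's crafting routine.

```python
# Sketch of adversarial (re)training for a toy binary-feature malware classifier.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = (rng.random((200, 30)) < 0.3).astype(float)   # binary app features (toy)
y = rng.integers(0, 2, size=200)                   # 1 = malware (toy labels)

clf = LogisticRegression(max_iter=1000).fit(X, y)

# Evasive variants of the malware rows, e.g. produced by a crafting routine
# like the earlier sketch; here a crude stand-in perturbation is used.
adversarial_X = X[y == 1].copy()
adversarial_X[:, rng.integers(0, 30, size=3)] = 1.0

X_aug = np.vstack([X, adversarial_X])
y_aug = np.concatenate([y, np.ones(len(adversarial_X), dtype=int)])
hardened = LogisticRegression(max_iter=1000).fit(X_aug, y_aug)
```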
Comparing the Usability of Cryptographic APIs
- Y. Acar, M. Backes, Christian Stransky
- IEEE Symposium on Security and Privacy
- 22 May 2017
This paper is the first to examine both how and why the design and resulting usability of different cryptographic libraries affect the security of code written with them, with the goal of understanding how to build effective future libraries.
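The usability gap the study is about can be illustrated with the pyca/cryptography package (an assumption of this sketch, not code from the paper): a high-level API bundles the secure choices, while a lower-level API leaves nonce management and its pitfalls to the developer.

```python
# High-level vs. lower-level symmetric encryption APIs, assuming pyca/cryptography.
import os
from cryptography.fernet import Fernet
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# High-level: key handling, IV generation, and authentication are built in.
key = Fernet.generate_key()
token = Fernet(key).encrypt(b"attack at dawn")
assert Fernet(key).decrypt(token) == b"attack at dawn"

# Lower-level: still sound, but the developer must generate a fresh nonce per
# message and never reuse it, an easy mistake to make in practice.
aes_key = AESGCM.generate_key(bit_length=256)
nonce = os.urandom(12)
ciphertext = AESGCM(aes_key).encrypt(nonce, b"attack at dawn", None)
assert AESGCM(aes_key).decrypt(nonce, ciphertext, None) == b"attack at dawn"
```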
Fairwalk: Towards Fair Graph Embedding
- Tahleen A. Rahman, Bartlomiej Surma, M. Backes, Yang Zhang
- International Joint Conference on Artificial Intelligence
- 1 August 2019
This paper proposes a fairness-aware embedding method, namely Fairwalk, which extends node2vec, and demonstrates that Fairwalk reduces bias under multiple fairness metrics while still preserving the utility.
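The fairness-aware walk modifies the transition step: instead of choosing the next node uniformly over all neighbors, it first picks a sensitive-attribute group uniformly among the groups present in the neighborhood, then a node uniformly within that group. The sketch below shows that step on a toy graph; the graph, attribute values, and walk length are illustrative assumptions.

```python
# Minimal sketch of a group-balanced ("fair") random walk step.
import random
from collections import defaultdict

graph = {                       # adjacency list (toy graph)
    "a": ["b", "c", "d"],
    "b": ["a", "c"],
    "c": ["a", "b", "d"],
    "d": ["a", "c"],
}
group = {"a": 0, "b": 0, "c": 1, "d": 1}   # sensitive attribute per node

def fair_step(node: str) -> str:
    by_group = defaultdict(list)
    for neigh in graph[node]:
        by_group[group[neigh]].append(neigh)
    chosen_group = random.choice(list(by_group))     # groups get equal chance
    return random.choice(by_group[chosen_group])     # then uniform within the group

def fair_walk(start: str, length: int = 10) -> list[str]:
    walk = [start]
    for _ in range(length):
        walk.append(fair_step(walk[-1]))
    return walk

print(fair_walk("a"))
```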
...