The Limitations of Deep Learning in Adversarial Settings
- Nicolas Papernot, P. McDaniel, S. Jha, Matt Fredrikson, Z. B. Celik, A. Swami
- European Symposium on Security and Privacy
- 24 November 2015
This work formalizes the space of adversaries against deep neural networks (DNNs) and introduces a novel class of algorithms to craft adversarial samples based on a precise understanding of the mapping between inputs and outputs of DNNs.
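The crafting algorithms exploit the network's forward derivative (the input-output Jacobian) to build a saliency map over input features. Below is a minimal sketch of that idea, assuming a hypothetical `jacobian_fn` that returns the (n_classes, n_features) Jacobian at a point; the paper's full JSMA algorithm perturbs feature pairs, while this sketch perturbs one feature at a time:

```python
import numpy as np

def saliency_map(jac, target):
    """Saliency of each input feature for pushing the model toward `target`.

    jac: (n_classes, n_features) Jacobian of model outputs w.r.t. the input.
    A feature is salient when increasing it raises the target class score
    while lowering the combined score of all other classes.
    """
    d_target = jac[target]                     # dF_target / dx_i
    d_others = jac.sum(axis=0) - d_target      # sum over j != target of dF_j / dx_i
    mask = (d_target > 0) & (d_others < 0)
    return np.where(mask, d_target * np.abs(d_others), 0.0)

def craft(x, jacobian_fn, target, theta=0.1, max_steps=50):
    """Greedily perturb the most salient feature until no candidate remains."""
    x = x.copy()
    for _ in range(max_steps):
        s = saliency_map(jacobian_fn(x), target)
        if s.max() <= 0:                       # no admissible feature left
            break
        i = int(np.argmax(s))
        x[i] = np.clip(x[i] + theta, 0.0, 1.0)
    return x
```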
Practical Black-Box Attacks against Machine Learning
- Nicolas Papernot, P. McDaniel, Ian J. Goodfellow, S. Jha, Z. B. Celik, A. Swami
- ACM Asia Conference on Computer and…
- 8 February 2016
This work introduces the first practical demonstration of an attacker controlling a remotely hosted DNN without knowledge of its architecture, parameters, or training data, and finds that this black-box attack strategy is capable of evading defense strategies previously found to make adversarial example crafting harder.
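The attack trains a local substitute model by using the remote DNN as a labeling oracle, then transfers adversarial examples crafted against the substitute. A sketch of that loop with Jacobian-based dataset augmentation; `query_oracle`, `fit_model`, and `jacobian_fn` are assumed interfaces, not APIs from the paper:

```python
import numpy as np

def train_substitute(seed_X, query_oracle, fit_model, jacobian_fn,
                     rounds=5, lam=0.1):
    """Train a local substitute for a remote black-box classifier.

    query_oracle(X)          -> labels from the remote model (assumed)
    fit_model(X, y)          -> a trained local substitute (assumed)
    jacobian_fn(model, x, y) -> gradient of the substitute's score for
                                label y w.r.t. x (assumed)
    """
    X, model = seed_X, None
    for _ in range(rounds):
        y = query_oracle(X)                 # label the current set via the oracle
        model = fit_model(X, y)             # imitate the oracle locally
        # Jacobian-based augmentation: step each point toward the
        # substitute's decision boundary to better probe the oracle.
        X_new = np.array([x + lam * np.sign(jacobian_fn(model, x, yi))
                          for x, yi in zip(X, y)])
        X = np.vstack([X, X_new])
    return model
```

Adversarial examples crafted on the returned substitute (e.g., with a saliency-map method like the sketch above) then tend to transfer to the remote model.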
Distillation as a Defense to Adversarial Perturbations Against Deep Neural Networks
- Nicolas Papernot, P. McDaniel, Xi Wu, S. Jha, A. Swami
- IEEE Symposium on Security and Privacy
- 14 November 2015
The study shows that defensive distillation can reduce the success rate of adversarial sample crafting from 95% to less than 0.5% on a studied DNN, and analytically investigates the generalizability and robustness properties granted by the use of defensive distillation when training DNNs.
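Defensive distillation trains the network with a softened softmax at temperature T and later deploys it at T = 1. A minimal, numerically stabilized sketch of that temperature softmax (the full defense also retrains a second network on the first network's soft labels):

```python
import numpy as np

def softmax_at_temperature(logits, T=20.0):
    """Softmax with temperature T. A high T during training smooths the
    output distribution; deploying the distilled model at T = 1 then
    saturates the softmax, dampening the gradients that adversarial
    crafting relies on."""
    z = (logits - logits.max()) / T      # shift for numerical stability
    e = np.exp(z)
    return e / e.sum()
```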
Counterexample-guided abstraction refinement for symbolic model checking
An automatic iterative abstraction-refinement methodology that extends symbolic model checking to large hardware designs, with new symbolic techniques that analyze spurious counterexamples and refine the abstract model accordingly.
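A language-agnostic sketch of the CEGAR loop, written here in Python; all five callables are assumed interfaces standing in for the paper's symbolic techniques:

```python
def cegar(system, spec, abstract, model_check, is_spurious, refine):
    """Counterexample-guided abstraction refinement (sketch).

    abstract(system)         -> initial abstract model
    model_check(A, spec)     -> (holds?, counterexample or None)
    is_spurious(cex, system) -> True if cex has no concrete counterpart
    refine(A, cex)           -> abstraction that rules out cex
    """
    A = abstract(system)
    while True:
        holds, cex = model_check(A, spec)
        if holds:
            return ("verified", None)    # abstraction preserves the property
        if not is_spurious(cex, system):
            return ("violated", cex)     # genuine concrete counterexample
        A = refine(A, cex)               # eliminate the spurious trace, retry
```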
Model Inversion Attacks that Exploit Confidence Information and Basic Countermeasures
- Matt Fredrikson, S. Jha, T. Ristenpart
- Conference on Computer and Communications…
- 12 October 2015
A new class of model inversion attack is developed that exploits the confidence values revealed alongside predictions; it can estimate whether a respondent in a lifestyle survey admitted to cheating on their significant other, and can recover recognizable images of people's faces given only their name.
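The image-recovery attack runs gradient ascent on the confidence the model assigns to the target class. A minimal sketch, assuming a hypothetical `grad_confidence(x, target)` that returns that score's gradient with respect to the input; whether such gradients are available depends on the access model, so treat this as a white-box simplification:

```python
import numpy as np

def invert_class(grad_confidence, target, shape, steps=500, lr=0.1):
    """Reconstruct a representative input for `target` by gradient ascent
    on the model's confidence score for that class."""
    x = np.zeros(shape)                        # start from a blank input
    for _ in range(steps):
        x += lr * grad_confidence(x, target)   # climb the confidence surface
        x = np.clip(x, 0.0, 1.0)               # keep features in a valid range
    return x
```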
Privacy Risk in Machine Learning: Analyzing the Connection to Overfitting
- Samuel Yeom, Irene Giacomelli, Matt Fredrikson, S. Jha
- IEEE Computer Security Foundations Symposium
- 5 September 2017
This work examines the effect that overfitting and influence have on an attacker's ability to learn information about the training data from machine learning models, through either training-set membership inference or attribute inference attacks.
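One attack the paper formalizes simply thresholds the per-example loss: members of an overfit model tend to incur lower loss than unseen points. A minimal sketch, with the average training loss as an illustrative threshold choice:

```python
import numpy as np

def membership_infer(per_example_loss, avg_train_loss):
    """Predict training-set membership for each example.

    per_example_loss: losses of the model on the queried examples.
    avg_train_loss:   threshold; here the model's mean training loss
                      (an illustrative choice for this sketch).
    Returns a boolean array: True = predicted member.
    """
    losses = np.asarray(per_example_loss)
    return losses < avg_train_loss
```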
Locally Differentially Private Protocols for Frequency Estimation
- Tianhao Wang, Jeremiah Blocki, Ninghui Li, S. Jha
- USENIX Security Symposium
- 1 August 2017
This paper introduces a framework that generalizes several LDP protocols proposed in the literature and yields a simple and fast aggregation algorithm whose accuracy can be precisely analyzed, resulting in two new protocols that provide better utility than previously proposed ones.
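One protocol the framework covers is generalized randomized response, paired with the standard debiased frequency estimator; a sketch with illustrative parameter names:

```python
import numpy as np

def grr_report(v, d, eps, rng):
    """Report the true value v in {0..d-1} with probability p, otherwise a
    uniformly random other value; this satisfies eps-local differential
    privacy."""
    p = np.exp(eps) / (np.exp(eps) + d - 1)
    if rng.random() < p:
        return v
    r = int(rng.integers(d - 1))
    return r if r < v else r + 1        # uniform over the d-1 values != v

def grr_estimate(reports, d, eps):
    """Unbiased estimates of how many users hold each value."""
    n = len(reports)
    p = np.exp(eps) / (np.exp(eps) + d - 1)
    q = 1.0 / (np.exp(eps) + d - 1)
    counts = np.bincount(reports, minlength=d)
    return (counts - n * q) / (p - q)   # debias each raw count

# Usage: rng = np.random.default_rng(0)
#        reports = [grr_report(v, d=8, eps=1.0, rng=rng) for v in true_values]
#        estimates = grr_estimate(reports, d=8, eps=1.0)
```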
Automated generation and analysis of attack graphs
- Oleg Sheyner, J. Haines, S. Jha, R. Lippmann, Jeannette M. Wing
- Proceedings IEEE Symposium on Security and…
- 12 May 2002
This paper presents a technique based on symbolic model checking algorithms that constructs and analyzes attack graphs automatically and efficiently.
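The paper derives attack graphs from model-checker counterexamples; as a toy stand-in for that machinery, here is a bounded breadth-first search that enumerates attack paths, with `successors` and `is_goal` as assumed callbacks:

```python
from collections import deque

def attack_paths(initial, successors, is_goal, max_depth=10):
    """Enumerate attacker paths from `initial` to any goal state (BFS).

    successors(state) -> iterable of (action, next_state) pairs (assumed)
    is_goal(state)    -> True when the attacker objective holds (assumed)
    The union of the returned paths forms a (toy) attack graph.
    """
    paths, queue = [], deque([(initial, [])])
    while queue:
        state, trace = queue.popleft()
        if is_goal(state):
            paths.append(trace)             # record one complete attack
            continue
        if len(trace) < max_depth:          # bound the search depth
            for action, nxt in successors(state):
                queue.append((nxt, trace + [action]))
    return paths
```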
Static Analysis of Executables to Detect Malicious Patterns
An architecture for detecting malicious patterns in executables that is resilient to common obfuscation transformations is presented, and experimental results demonstrate the efficacy of the prototype tool, SAFE (a static analyzer for executables).
Privacy in Pharmacogenetics: An End-to-End Case Study of Personalized Warfarin Dosing
- Matt Fredrikson, Eric Lantz, S. Jha, Simon M. Lin, David Page, T. Ristenpart
- USENIX Security Symposium
- 20 August 2014
The study concludes that current DP mechanisms cannot simultaneously improve genomic privacy and retain desirable clinical efficacy, highlighting the need for new mechanisms, which should be evaluated in situ using the general methodology introduced by this work.