Towards Evaluating the Robustness of Neural Networks
- Nicholas Carlini, D. Wagner
- Computer Science, IEEE Symposium on Security and Privacy (SP)
- 16 August 2016
It is demonstrated that defensive distillation does not significantly increase the robustness of neural networks, and three new attack algorithms are introduced that succeed on both distilled and undistilled neural networks with 100% probability.
Obfuscated Gradients Give a False Sense of Security: Circumventing Defenses to Adversarial Examples
This work identifies obfuscated gradients, a kind of gradient masking, as a phenomenon that leads to a false sense of security in defenses against adversarial examples, and develops attack techniques to overcome this effect.
MixMatch: A Holistic Approach to Semi-Supervised Learning
- David Berthelot, Nicholas Carlini, Ian J. Goodfellow, Nicolas Papernot, A. Oliver, Colin Raffel
- Computer Science, NeurIPS
- 6 May 2019
This work unifies the current dominant approaches to semi-supervised learning into a new algorithm, MixMatch, which works by guessing low-entropy labels for data-augmented unlabeled examples and mixing labeled and unlabeled data using MixUp.
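The two mechanisms named in the abstract, low-entropy label guessing and MixUp mixing, can be sketched as below. This is an illustrative sketch only, not the authors' implementation; the temperature `T=0.5` and beta parameter `alpha=0.75` are the defaults reported for MixMatch, and the function names are my own.

```python
import numpy as np

def sharpen(p, T=0.5):
    """Lower the entropy of a guessed label distribution by temperature scaling."""
    p = p ** (1.0 / T)
    return p / p.sum(axis=-1, keepdims=True)

def mixup(x1, y1, x2, y2, alpha=0.75, rng=np.random.default_rng(0)):
    """MixUp: convex combination of two examples and their labels.

    MixMatch additionally takes lam = max(lam, 1 - lam) so the mixed
    example stays closer to its first argument.
    """
    lam = rng.beta(alpha, alpha)
    lam = max(lam, 1 - lam)
    return lam * x1 + (1 - lam) * x2, lam * y1 + (1 - lam) * y2
```

In the full algorithm, `sharpen` is applied to the model's average prediction over several augmentations of each unlabeled example, and `mixup` is then applied across the combined labeled and unlabeled batch.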
FixMatch: Simplifying Semi-Supervised Learning with Consistency and Confidence
This paper demonstrates the power of a simple combination of two common SSL methods: consistency regularization and pseudo-labeling, and shows that FixMatch achieves state-of-the-art performance across a variety of standard semi-supervised learning benchmarks.
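The pseudo-labeling half of the combination can be sketched as follows: predictions on weakly augmented unlabeled data become hard pseudo-labels, and only confident ones contribute to the loss on the strongly augmented view. A minimal sketch, assuming softmax outputs and the commonly reported threshold of 0.95; the function name is hypothetical.

```python
import numpy as np

def fixmatch_pseudo_labels(weak_probs, threshold=0.95):
    """From weak-augmentation softmax outputs, return hard pseudo-labels
    and a mask selecting only the confident predictions."""
    pseudo = weak_probs.argmax(axis=-1)           # hard label per example
    mask = weak_probs.max(axis=-1) >= threshold   # keep confident examples only
    return pseudo, mask
```

The masked pseudo-labels are then used as cross-entropy targets for the model's predictions on strongly augmented versions of the same examples, which is the consistency-regularization half.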
Adversarial Examples Are Not Easily Detected: Bypassing Ten Detection Methods
It is concluded that adversarial examples are significantly harder to detect than previously appreciated, and that the properties believed to be intrinsic to adversarial examples are in fact not.
Audio Adversarial Examples: Targeted Attacks on Speech-to-Text
A white-box iterative optimization-based attack on Mozilla's implementation of the end-to-end DeepSpeech model achieves a 100% success rate, and the feasibility of this attack introduces a new domain for the study of adversarial examples.
ReMixMatch: Semi-Supervised Learning with Distribution Alignment and Augmentation Anchoring
A variant of AutoAugment which learns the augmentation policy while the model is being trained, and is significantly more data-efficient than prior work, requiring between 5× and 16× less data to reach the same accuracy.
Hidden Voice Commands
- Nicholas Carlini, Pratyush Mishra, Wenchao Zhou
- Computer Science, USENIX Security Symposium
- 10 August 2016
This paper explores how voice interfaces can be attacked with hidden voice commands that are unintelligible to human listeners but are interpreted as commands by devices.
ROP is Still Dangerous: Breaking Modern Defenses
This paper introduces three new attack methods that break many existing ROP defenses and shows how to break kBouncer and ROPecker, two recent low-overhead defenses that can be applied to legacy software on existing hardware.