Publications
DARTS: Deceiving Autonomous Cars with Toxic Signs
TLDR: We propose and examine realistic security attacks against traffic sign recognition systems; we call the proposed attacks Deceiving Autonomous caRs with Toxic Signs (DARTS).
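The core primitive behind such attacks is a targeted adversarial perturbation of the sign image. Below is a minimal sketch of targeted projected gradient descent in PyTorch; the model interface, perturbation budget, and step sizes are illustrative assumptions, not the DARTS pipeline itself.

```python
import torch
import torch.nn.functional as F

def targeted_pgd(model, x, target, eps=8/255, alpha=2/255, steps=40):
    """Targeted PGD sketch: nudge image x toward class `target` while
    staying within an L-infinity ball of radius eps around x."""
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), target)
        grad, = torch.autograd.grad(loss, x_adv)
        x_adv = x_adv.detach() - alpha * grad.sign()   # descend: targeted
        x_adv = x + (x_adv - x).clamp(-eps, eps)       # project to eps-ball
        x_adv = x_adv.clamp(0, 1)                      # stay a valid image
    return x_adv
```

An attacker would set `target` to the index of, say, the stop-sign class and print the resulting image.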
On the Robustness of Deep K-Nearest Neighbors
TLDR: We propose a heuristic attack that allows us to use gradient descent to find adversarial examples for kNN classifiers, and then apply it to attack the DkNN defense as well.
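A kNN vote has no gradient, so a gradient attack needs a smooth surrogate. One way to build one, sketched below under assumptions (this is the general soft-nearest-neighbor trick, not necessarily the paper's exact heuristic), is to weight training points by a softmax over negative distances and descend on the target class's total weight.

```python
import torch

def soft_knn_loss(x, train_x, train_y, target, temp=10.0):
    """Differentiable stand-in for a kNN vote: weight each training point
    by softmax(-temp * distance) and sum the weight on class `target`."""
    d = torch.cdist(x.flatten(1), train_x.flatten(1)).squeeze(0)
    w = torch.softmax(-temp * d, dim=0)
    return -w[train_y == target].sum()   # lower loss = more target weight

def attack_knn(x, train_x, train_y, target, eps=0.1, alpha=0.01, steps=100):
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        grad, = torch.autograd.grad(
            soft_knn_loss(x_adv, train_x, train_y, target), x_adv)
        x_adv = x_adv.detach() - alpha * grad.sign()
        x_adv = x + (x_adv - x).clamp(-eps, eps)   # L-infinity constraint
    return x_adv
```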
Beyond Grand Theft Auto V for Training, Testing and Enhancing Deep Learning in Self Driving Cars
TLDR: We train a CNN to detect multiple affordance variables from an unlabeled image of a highway, including the angle between the car and the road and the distances to the lane markings and to cars in front.
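Affordance prediction is direct regression from pixels to a few continuous driving variables. A minimal sketch follows; the ResNet-18 trunk, output count, and loss are illustrative assumptions rather than the paper's architecture.

```python
import torch.nn as nn
import torchvision.models as models

class AffordanceNet(nn.Module):
    """Regress driving affordances (heading angle, distances to lane
    markings and to the lead car) from a single highway image."""
    def __init__(self, n_affordances=6):
        super().__init__()
        trunk = models.resnet18(weights=None)
        trunk.fc = nn.Identity()               # keep only conv features
        self.trunk = trunk
        self.head = nn.Linear(512, n_affordances)

    def forward(self, x):
        return self.head(self.trunk(x))        # continuous affordances

# Training pairs images with simulator ground truth, e.g.:
# loss = nn.functional.mse_loss(model(images), affordance_targets)
```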
Defending Against Adversarial Examples with K-Nearest Neighbor
TLDR: We propose a defense against adversarial examples based on a k-nearest neighbor (kNN) search over the intermediate activations of neural networks.
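Instead of trusting the network's softmax, the prediction comes from nearest neighbors in a hidden-layer feature space. A sketch with scikit-learn follows; the feature hook, layer choice, and k are assumptions to be adapted to the actual model.

```python
import torch
from sklearn.neighbors import KNeighborsClassifier

@torch.no_grad()
def hidden_features(model, x):
    """Assumes the model exposes a feature extractor before its final
    linear layer (e.g. `model.features`); adapt to the real architecture."""
    return model.features(x).flatten(1).cpu().numpy()

knn = KNeighborsClassifier(n_neighbors=5)
# Fit on hidden activations of the clean training set:
#   knn.fit(hidden_features(model, train_images), train_labels)
# Classify test inputs by their neighbors in activation space:
#   preds = knn.predict(hidden_features(model, test_images))
```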
Analyzing the Robustness of Open-World Machine Learning
TLDR: We present the first analysis of the robustness of open-world learning frameworks in the presence of adversaries by introducing and designing out-of-distribution (OOD) adversarial examples.
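An OOD adversarial example starts from an input belonging to none of the classifier's classes and is optimized until the closed-world model labels it confidently. The sketch below starts from uniform noise; the starting point, objective, and step sizes are assumptions, not the paper's exact construction.

```python
import torch
import torch.nn.functional as F

def ood_adv_example(model, target, shape=(1, 3, 32, 32), steps=200, alpha=0.01):
    """Optimize pure noise (clearly out-of-distribution) until the model
    confidently assigns it to class `target`."""
    x = torch.rand(shape)
    for _ in range(steps):
        x.requires_grad_(True)
        loss = F.cross_entropy(model(x), torch.tensor([target]))
        grad, = torch.autograd.grad(loss, x)
        x = (x.detach() - alpha * grad.sign()).clamp(0, 1)
    return x
```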
Improving Adversarial Robustness Through Progressive Hardening
TLDR: We propose Adversarial Training with Early Stopping (ATES), guided by principles from curriculum learning: start "easy" and gradually ramp up the "difficulty" of training.
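The curriculum idea is to cap how hard the inner attack is allowed to get and raise the cap over training. The sketch below stops the PGD loop once the adversarial loss crosses a threshold; the particular stopping rule and schedule are illustrative assumptions, not the paper's exact criterion.

```python
import torch
import torch.nn.functional as F

def pgd_early_stop(model, x, y, eps, alpha, max_steps, loss_cap):
    """Untargeted PGD that stops as soon as the example is 'hard enough'
    (loss >= loss_cap), so early epochs only see easy adversarial examples."""
    x_adv = x.clone().detach()
    for _ in range(max_steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        if loss.item() >= loss_cap:
            break
        grad, = torch.autograd.grad(loss, x_adv)
        x_adv = x_adv.detach() + alpha * grad.sign()   # ascend: untargeted
        x_adv = (x + (x_adv - x).clamp(-eps, eps)).clamp(0, 1)
    return x_adv.detach()

# Per-epoch curriculum: raise loss_cap as training progresses, e.g.
#   loss_cap = cap_schedule(epoch)   # hypothetical increasing schedule
# then take a standard training step on (pgd_early_stop(...), y).
```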
Enhancing robustness of machine learning systems via data transformations
TLDR: We present and investigate strategies for incorporating a variety of data transformations, including dimensionality reduction via Principal Component Analysis, to enhance the resilience of machine learning systems, targeting both the training and classification phases.
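One of these transformations, PCA, projects inputs onto their top principal components and reconstructs them, discarding the small off-manifold directions attacks tend to exploit. A minimal sketch with scikit-learn (the component count and stand-in data are assumptions):

```python
import numpy as np
from sklearn.decomposition import PCA

train = np.random.rand(1000, 784)   # stand-in for real flattened images
pca = PCA(n_components=50)          # keep only the top components
pca.fit(train)

def defend(x):
    """Project inputs onto the top principal components and back,
    stripping the low-variance directions the PCA discards."""
    return pca.inverse_transform(pca.transform(x))

# The classifier is then trained and evaluated on defend(x) instead of x.
```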
Rogue Signs: Deceiving Traffic Sign Recognition with Malicious Ads and Logos
TLDR: We propose a new real-world attack against the computer-vision systems of autonomous vehicles that exploits adversarial examples to modify innocuous signs and advertisements in the environment so that they are classified as the adversary's desired traffic sign with high confidence.
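To survive real-world capture (viewing angle, scale, lighting), such a perturbation is usually optimized over random transformations of the sign, in the spirit of expectation over transformation. The sketch below makes that concrete; the transformation set, sample count, and loop are assumptions rather than the paper's exact pipeline.

```python
import torch
import torch.nn.functional as F
import torchvision.transforms as T

augment = T.Compose([
    T.RandomRotation(15),
    T.RandomResizedCrop(32, scale=(0.7, 1.0)),
    T.ColorJitter(brightness=0.3),
])

def robust_targeted_attack(model, x, target, steps=200, alpha=0.005):
    """Optimize a targeted perturbation that keeps fooling the model
    under random physical-style transformations (EOT-style sketch)."""
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        # Average the targeted loss over a few random transforms.
        loss = sum(F.cross_entropy(model(augment(x_adv)), target)
                   for _ in range(4)) / 4
        grad, = torch.autograd.grad(loss, x_adv)
        x_adv = (x_adv.detach() - alpha * grad.sign()).clamp(0, 1)
    return x_adv
```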
Inverse designed photonic fibers and metasurfaces for nonlinear frequency conversion
Typically, photonic waveguides designed for nonlinear frequency conversion rely on intuitive and established principles, including index guiding and band gap engineering, and are based on simple…
Better the Devil you Know: An Analysis of Evasion Attacks using Out-of-Distribution Adversarial Examples
TLDR: A large body of recent work has investigated evasion attacks using adversarial examples against deep learning systems, where the addition of norm-bounded perturbations to test inputs leads to incorrect output classification.