DARTS: Deceiving Autonomous Cars with Toxic Signs
TLDR
A novel attack against vehicular sign recognition systems is proposed: signs are created that change appearance when viewed from different angles, and can therefore be interpreted differently by a human driver and by the sign recognition system.
Enhancing robustness of machine learning systems via data transformations
TLDR
The use of data transformations as a defense against evasion attacks on ML classifiers is effective against the best known evasion attacks from the literature, resulting in a two-fold increase in the resources required by a white-box adversary with knowledge of the defense.
On the Robustness of Deep K-Nearest Neighbors
TLDR
This work proposes a heuristic attack that uses gradient descent to find adversarial examples for kNN classifiers and then applies it to attack the DkNN defense; the results suggest that this attack is moderately stronger than naive attacks on kNN and significantly outperforms other attacks on DkNN.
Analyzing the Robustness of Open-World Machine Learning
TLDR
The first analysis of the robustness of open-world learning frameworks in the presence of adversaries is presented by introducing and designing OOD adversarial examples; the experimental results show that current OOD detectors can be easily evaded by slightly perturbing benign OOD inputs, revealing a severe limitation of current open-world learning frameworks.
Improving Adversarial Robustness Through Progressive Hardening
TLDR
Adversarial Training with Early Stopping (ATES) stabilizes network training even for a large perturbation norm and allows the network to operate at a better clean-accuracy versus robustness trade-off curve compared to standard adversarial training (AT).
Beyond Grand Theft Auto V for Training, Testing and Enhancing Deep Learning in Self Driving Cars
TLDR
The efficacy and flexibility of a "GTA-V"-like virtual environment are expected to provide an efficient, well-defined foundation for training and testing Convolutional Neural Networks for safe driving.
SAT: Improving Adversarial Training via Curriculum-Based Loss Smoothing
TLDR
It is demonstrated that SAT stabilizes network training even for a large perturbation norm and allows the network to operate at a better clean-accuracy versus robustness trade-off curve compared to standard adversarial training (AT).
Defending Against Adversarial Examples with K-Nearest Neighbor
TLDR
A defense against adversarial examples based on a k-nearest neighbor (kNN) classifier applied to the intermediate activations of neural networks is proposed, which surpasses state-of-the-art defenses on MNIST and CIFAR-10 against l2-perturbations by a significant margin.
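The core idea of this defense, classifying by nearest neighbors in activation space rather than pixel space, can be illustrated with a minimal sketch. Everything here is a stand-in: the "feature extractor" is a fixed random projection with a ReLU, not a trained network layer as in the paper, and the data is a synthetic two-cluster toy set.

```python
# Minimal sketch (assumptions: toy data, a random-projection stand-in
# for an intermediate network layer): classify inputs by running kNN
# on intermediate activations instead of raw inputs.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)

# Toy "training set": two well-separated classes in input space.
X_train = np.vstack([rng.normal(0, 0.1, (20, 8)),
                     rng.normal(1, 0.1, (20, 8))])
y_train = np.array([0] * 20 + [1] * 20)

# Stand-in for an intermediate layer: a fixed linear map + ReLU.
W = rng.normal(size=(8, 4))
def activations(x):
    return np.maximum(x @ W, 0.0)

# Fit kNN in activation space, as the defense proposes.
knn = KNeighborsClassifier(n_neighbors=5)
knn.fit(activations(X_train), y_train)

# A clean test point drawn near class 1 is labeled by its neighbors
# in activation space.
x_test = rng.normal(1, 0.1, (1, 8))
pred = knn.predict(activations(x_test))
```

The design point is that an adversary must now move the input's *activations* toward another class's neighborhood, which is harder to do with a small input perturbation than flipping a single decision boundary.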
Inverse designed photonic fibers and metasurfaces for nonlinear frequency conversion
Typically, photonic waveguides designed for nonlinear frequency conversion rely on intuitive and established principles, including index guiding and band gap engineering, and are based on simple
Rogue Signs: Deceiving Traffic Sign Recognition with Malicious Ads and Logos
TLDR
This work proposes a new real-world attack against the computer vision based systems of autonomous vehicles (AVs) that exploits the concept of adversarial examples to modify innocuous signs and advertisements in the environment such that they are classified as the adversary's desired traffic sign with high confidence.
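Several of the attacks above build on the basic adversarial-example recipe: perturb an input along the gradient of the model's loss until the prediction flips. A hedged FGSM-style sketch on a toy linear classifier (the weights, input, and class names here are invented for illustration; the papers attack real vision pipelines):

```python
# FGSM-style sketch on a toy linear model (all values hypothetical).
import numpy as np

# Toy linear classifier: score > 0 -> class 1 ("sign"), else class 0.
w = np.array([1.0, -2.0, 0.5])
b = 0.1
def predict(x):
    return int(x @ w + b > 0)

x = np.array([0.5, -0.5, 1.0])   # initially classified as class 1

# FGSM step: perturb against the score using the sign of the gradient.
# For a linear model, the gradient of the score w.r.t. x is just w.
eps = 1.2
x_adv = x - eps * np.sign(w)     # pushes the score below zero

flipped = (predict(x), predict(x_adv))
```

The bounded step size `eps` is what keeps the perturbation small enough to be inconspicuous, which is the property the sign and logo attacks exploit in the physical world.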
...