On the Robustness to Adversarial Examples of Neural ODE Image Classifiers

@inproceedings{Carrara2019OnTR,
  title={On the Robustness to Adversarial Examples of Neural ODE Image Classifiers},
  author={Fabio Carrara and Roberto Caldelli and Fabio Falchi and Giuseppe Amato},
  booktitle={2019 IEEE International Workshop on Information Forensics and Security (WIFS)},
  year={2019},
  pages={1--6}
}
The vulnerability of deep neural networks to adversarial attacks is currently one of the most challenging open problems in deep learning. The NeurIPS 2018 work that received the best paper award proposed a new paradigm for defining deep neural networks with continuous internal activations. In this kind of network, dubbed Neural ODE Networks, a continuous hidden state is defined via parametric ordinary differential equations, and its dynamics can be adjusted to build…
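
To make the ODE Net idea concrete, the sketch below is a minimal PyTorch image classifier with a continuous hidden state: a convolutional stem, an ODE block whose dynamics f(h, t; theta) are given by a small parametric network, and a linear head. This is an illustration only, not the paper's architecture; the fixed-step RK4 integrator stands in for the adaptive solver used in the original Neural ODE work, and names such as ODENetClassifier are hypothetical.

    import torch
    import torch.nn as nn

    class ODEFunc(nn.Module):
        """Parametric dynamics f(h, t; theta) of the continuous hidden state."""
        def __init__(self, channels=64):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(channels, channels, 3, padding=1),
                nn.ReLU(),
                nn.Conv2d(channels, channels, 3, padding=1),
            )

        def forward(self, t, h):
            # t is unused here, but keeping it in the signature allows
            # time-dependent dynamics and matches common ODE-solver APIs.
            return self.net(h)

    def odeint_rk4(func, h0, t0=0.0, t1=1.0, steps=10):
        """Fixed-step Runge-Kutta 4 integration of dh/dt = func(t, h)."""
        h, t = h0, t0
        dt = (t1 - t0) / steps
        for _ in range(steps):
            k1 = func(t, h)
            k2 = func(t + dt / 2, h + dt / 2 * k1)
            k3 = func(t + dt / 2, h + dt / 2 * k2)
            k4 = func(t + dt, h + dt * k3)
            h = h + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
            t = t + dt
        return h

    class ODENetClassifier(nn.Module):
        """Stem -> ODE block -> linear head (MNIST-like 1x28x28 input assumed)."""
        def __init__(self, num_classes=10, channels=64, steps=10):
            super().__init__()
            self.stem = nn.Conv2d(1, channels, 4, stride=2, padding=1)  # 28x28 -> 14x14
            self.odefunc = ODEFunc(channels)
            self.steps = steps
            self.head = nn.Linear(channels, num_classes)

        def forward(self, x):
            h0 = self.stem(x)
            h1 = odeint_rk4(self.odefunc, h0, steps=self.steps)
            return self.head(h1.mean(dim=(2, 3)))  # global average pooling

    model = ODENetClassifier()
    logits = model(torch.randn(8, 1, 28, 28))  # batch of 8 grayscale images
    print(logits.shape)  # torch.Size([8, 10])

Integrating for more steps (or with a tighter tolerance when an adaptive solver is used) changes the classifier's forward computation without changing its parameters, a degree of freedom that standard residual networks do not expose.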
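
On the attack side, a common baseline for probing such classifiers is the Fast Gradient Sign Method (FGSM), sketched below against the model defined above. This is a generic illustration of an adversarial attack, not necessarily the attack configuration evaluated in the paper; the eps value and the [0, 1] pixel range are assumptions.

    import torch
    import torch.nn.functional as F

    def fgsm_attack(model, x, y, eps=0.1):
        """One signed-gradient ascent step on the loss, bounded by eps per pixel."""
        x_adv = x.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        loss.backward()
        with torch.no_grad():
            x_adv = x_adv + eps * x_adv.grad.sign()
            x_adv = x_adv.clamp(0.0, 1.0)  # keep pixels in a valid range
        return x_adv.detach()

    # Usage with the ODENetClassifier sketched above (random stand-in data):
    x = torch.rand(8, 1, 28, 28)
    y = torch.randint(0, 10, (8,))
    x_adv = fgsm_attack(model, x, y, eps=0.1)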