What do AI algorithms actually learn? - On false structures in deep learning
@article{Thesing2019WhatDA,
  title   = {What do AI algorithms actually learn? - On false structures in deep learning},
  author  = {L. Thesing and Vegard Antun and A. Hansen},
  journal = {ArXiv},
  year    = {2019},
  volume  = {abs/1906.01478}
}
There are two big unsolved mathematical questions in artificial intelligence (AI): (1) Why is deep learning so successful in classification problems? (2) Why are neural networks based on deep learning at the same time universally unstable, with instabilities that make the networks vulnerable to adversarial attacks? We present a solution to these questions that can be summed up in two words: false structures. Indeed, deep learning does not learn the original structures that humans use when…
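The instability the abstract refers to can be illustrated with a minimal sketch (not from the paper): a one-step gradient-sign ("FGSM"-style) perturbation against a toy logistic classifier. The weights, inputs, and step size below are illustrative assumptions, not anything from the authors' experiments.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, w, b, y, eps):
    """Move x by eps in the sign of the gradient of the logistic
    (cross-entropy) loss with respect to the input x."""
    p = sigmoid(w @ x + b)   # predicted probability of class 1
    grad_x = (p - y) * w     # d(loss)/dx for a logistic model
    return x + eps * np.sign(grad_x)

# Toy model and input (illustrative only).
rng = np.random.default_rng(0)
w = rng.normal(size=20)
b = 0.0
x = rng.normal(size=20)
y = 1.0 if sigmoid(w @ x + b) >= 0.5 else 0.0  # the model's own label

x_adv = fgsm_perturb(x, w, b, y, eps=0.3)
print(sigmoid(w @ x + b), sigmoid(w @ x_adv + b))
```

Even though `x_adv` differs from `x` by at most `eps` per coordinate, the model's confidence in its own label drops; with deep networks the same kind of small, structured perturbation can flip the predicted class entirely.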
Supplemental Code
GitHub repository (via Papers with Code): code related to the paper "What do AI algorithms actually learn? - On False Structures in Deep Learning".
4 Citations
- Representation Quality Of Neural Networks Links To Adversarial Attacks and Defences. 2019. (1 citation)
- Invariance, encodings, and generalization: learning identity effects with neural networks. ArXiv, 2021.
References
Showing 1-10 of 45 references
- Towards Deep Learning Models Resistant to Adversarial Attacks. ICLR, 2018. (2,940 citations)
- Deep neural networks are easily fooled: High confidence predictions for unrecognizable images. 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2015. (1,840 citations)
- One Pixel Attack for Fooling Deep Neural Networks. IEEE Transactions on Evolutionary Computation, 2019. (852 citations)
- On instabilities of deep learning in image reconstruction and the potential costs of AI. Proceedings of the National Academy of Sciences, 2020. (74 citations)
- DeepFool: A Simple and Accurate Method to Fool Deep Neural Networks. 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016. (2,059 citations)
- Distillation as a Defense to Adversarial Perturbations Against Deep Neural Networks. 2016 IEEE Symposium on Security and Privacy (SP), 2016. (1,574 citations)
- Universal Adversarial Perturbations. 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017. (1,105 citations)