Arguments for the Unsuitability of Convolutional Neural Networks for Non-Local Tasks

@article{Stabinger2021ArgumentsFT,
  title={Arguments for the Unsuitability of Convolutional Neural Networks for Non-Local Tasks},
  author={Sebastian Stabinger and David Peer and Antonio Rodr{\'i}guez-S{\'a}nchez},
  journal={Neural Networks},
  year={2021},
  volume={142},
  pages={171--179}
}

Evaluating Attention in Convolutional Neural Networks for Blended Images

The findings showed that adding a self-attention mechanism reliably increases a model's representational similarity to area V4 of the human ventral stream, an area where attention has a large influence on the processing of visual stimuli.
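As a point of reference, the sketch below shows what a minimal single-head self-attention layer over a CNN feature map computes: attention weights over all spatial positions, followed by a residual add. The class name, layer sizes, and the residual connection are illustrative assumptions, not details taken from the cited paper.

import torch
import torch.nn as nn
import torch.nn.functional as F

class SelfAttention2d(nn.Module):
    # Single-head self-attention over the spatial positions of a feature map.
    def __init__(self, channels):
        super().__init__()
        self.query = nn.Conv2d(channels, channels, kernel_size=1)
        self.key = nn.Conv2d(channels, channels, kernel_size=1)
        self.value = nn.Conv2d(channels, channels, kernel_size=1)

    def forward(self, x):                                 # x: (B, C, H, W)
        b, c, h, w = x.shape
        q = self.query(x).flatten(2).transpose(1, 2)      # (B, HW, C)
        k = self.key(x).flatten(2)                        # (B, C, HW)
        v = self.value(x).flatten(2).transpose(1, 2)      # (B, HW, C)
        attn = F.softmax(q @ k / c ** 0.5, dim=-1)        # (B, HW, HW): every position attends to every other
        out = (attn @ v).transpose(1, 2).reshape(b, c, h, w)
        return x + out                                    # residual connection

feats = torch.randn(1, 64, 8, 8)
print(SelfAttention2d(64)(feats).shape)                   # torch.Size([1, 64, 8, 8])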

Is current research on adversarial robustness addressing the right problem?

It is argued that the current formulation of the problem serves short-term goals and needs to be revised if we are to achieve larger gains and ultimately solve the problem of adversarial vulnerability.
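For context, the formulation usually meant here is the min-max (robust optimization) objective of adversarial training: the model is optimized against the worst-case perturbation δ inside a small p-norm ball of radius ε. The notation below is the standard one, not quoted from the paper:

\[
\min_{\theta} \; \mathbb{E}_{(x, y) \sim \mathcal{D}}
\Big[ \max_{\|\delta\|_{p} \le \epsilon}
\mathcal{L}\big(f_{\theta}(x + \delta),\, y\big) \Big]
\]

It is this kind of norm-bounded, worst-case framing that such critiques call into question.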

A brain-inspired object-based attention network for multi-object recognition and visual reasoning

An encoder-decoder model, inspired by the interacting bottom-up and top-down visual pathways that make up the brain's recognition-attention system, achieves near-perfect accuracy and significantly outperforms larger models in generalizing to unseen stimuli.

Automatic Classification and Consistency Verification of Digital Drawings Using Deep Learning

Two deep neural networks are proposed for the automatic classification of construction drawings and for object detection, using alternative strategies for feature extraction and training based on two well-known CNN architectures.
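As an illustration of the two training strategies such comparisons typically contrast, the sketch below sets up feature extraction (frozen pretrained backbone, new classification head) versus full fine-tuning. ResNet-18 and the class count are stand-ins; the paper's actual architectures and heads are not specified here.

import torch.nn as nn
from torchvision import models

N_CLASSES = 5  # placeholder for the number of drawing categories

# Strategy 1: feature extraction -- freeze the pretrained backbone and
# train only a newly attached classification head.
extractor = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for p in extractor.parameters():
    p.requires_grad = False
extractor.fc = nn.Linear(extractor.fc.in_features, N_CLASSES)  # new head stays trainable

# Strategy 2: fine-tuning -- start from the same pretrained weights,
# but update every layer during training.
finetuned = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
finetuned.fc = nn.Linear(finetuned.fc.in_features, N_CLASSES)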

Evaluating the progress of deep learning for visual relational concepts

It is hypothesised that iterative processing of the input, together with shifts of attention between iterations, will be needed to solve real-world relational concept learning efficiently and reliably.

References


The Notorious Difficulty of Comparing Human and Machine Perception

It is shown that, despite their ability to solve closed-contour tasks, the authors' neural networks use different decision-making strategies than humans, and that neural networks do experience a "recognition gap" on minimal recognizable images.

Deep Residual Learning for Image Recognition

This work presents a residual learning framework to ease the training of networks that are substantially deeper than those used previously, and provides comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth.
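The core idea fits in a few lines: each block learns a residual function F(x) and adds its input back, so an identity mapping is always easy to represent and gradients can flow through the shortcut. A minimal sketch of the basic block (omitting the projection shortcut used when the number of channels or the resolution changes):

import torch
import torch.nn as nn
import torch.nn.functional as F

class BasicResidualBlock(nn.Module):
    # Computes y = relu(F(x) + x), where F is conv-bn-relu-conv-bn.
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)

    def forward(self, x):
        out = F.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return F.relu(out + x)    # skip connection: add the input back

x = torch.randn(1, 32, 16, 16)
print(BasicResidualBlock(32)(x).shape)   # torch.Size([1, 32, 16, 16])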

Not-So-CLEVR: learning same–different relations strains feedforward neural networks

It is shown that feedforward neural networks struggle to learn abstract visual relations that are effortlessly recognized by non-human primates, birds, rodents and even insects, and that feedback mechanisms such as attention, working memory and perceptual grouping may be the key components underlying human-level abstract visual reasoning.

Testing Deep Neural Networks on the Same-Different Task

This work tries to understand to what extent state-of-the-art convolutional neural networks for image classification are able to deal with a challenging abstract problem, the so-called same-different task.
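To make the task concrete, here is a hypothetical stimulus generator in the spirit of same-different benchmarks: each image contains two small random patches, and the binary label says whether the two patches are identical. Canvas size, patch size, and placement are illustrative, not the actual protocol of the benchmark.

import numpy as np

rng = np.random.default_rng(0)

def same_different_sample(size=32, patch=5, p_same=0.5):
    # Place two binary patches on a blank canvas; label is 1 iff they are identical.
    img = np.zeros((size, size), dtype=np.float32)
    a = rng.integers(0, 2, (patch, patch)).astype(np.float32)
    if rng.random() < p_same:
        b = a.copy()
    else:
        b = rng.integers(0, 2, (patch, patch)).astype(np.float32)
    img[2:2 + patch, 2:2 + patch] = a            # first patch, top-left region
    img[-2 - patch:-2, -2 - patch:-2] = b        # second patch, bottom-right region
    return img, int(np.array_equal(a, b))

img, label = same_different_sample()
print(img.shape, label)                          # (32, 32) and 0 or 1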

CORnet: Modeling the Neural Mechanisms of Core Object Recognition

The current best ANN model derived from this approach (CORnet-S) is among the top models on Brain-Score, a composite benchmark for comparing models to the brain, but is simpler than other deep ANNs in terms of the number of convolutions performed along the longest path of information processing in the model.

Comparing machines and humans on a visual categorization test

This work compares the efficiency of human and machine learning in assigning an image to one of two categories determined by the spatial arrangement of constituent parts, and demonstrates that human subjects grasp the separating principles from a handful of examples, whereas the error rates of computer programs fluctuate wildly and their performance remains far behind that of humans even after exposure to thousands of examples.

Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification

This work proposes a Parametric Rectified Linear Unit (PReLU) that generalizes the traditional rectified unit and derives a robust initialization method that particularly considers the rectifier nonlinearities.
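The activation itself is a one-liner: PReLU passes positive inputs through unchanged and scales negative inputs by a coefficient a, which in the paper is learned (per channel) rather than fixed. The sketch below uses a fixed scalar a purely for illustration; a = 0 recovers the plain ReLU.

import numpy as np

def prelu(x, a):
    # Parametric ReLU: x for x > 0, a * x otherwise.
    return np.where(x > 0, x, a * x)

print(prelu(np.array([-2.0, -0.5, 0.0, 1.5]), 0.25))   # [-0.5 -0.125 0. 1.5]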

Lower bounds for sorting networks

We establish a lower bound of (1.12 − o(1)) n log n on the size of any n-input sorting network; this is the first lower bound that improves upon the trivial information-theoretic bound by more than a …
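For context, the "trivial information-theoretic bound" comes from a counting argument: a network with s comparators realizes at most 2^s distinct swap patterns, yet it must be able to sort all n! input orderings, so (in standard notation, not quoted from the paper)

\[
s \;\ge\; \log_2(n!) \;=\; n \log_2 n - n \log_2 e + O(\log n) \;\approx\; n \log_2 n - 1.44\, n .
\]

The cited result raises the leading constant of this n log n bound from 1 to 1.12 − o(1).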

Same-different problems strain convolutional neural networks

It is argued that feedback mechanisms including attention and perceptual grouping may be the key computational components underlying abstract visual reasoning in modern machine vision algorithms.

A ‘complexity level’ analysis of immediate vision

  • John K. Tsotsos
  • International Journal of Computer Vision, 2004
This paper shows how the seemingly intractable problem of visual perception can be converted into a much simpler problem by the application of several physical and biological constraints and argues strongly for the validity of the computational approach to modeling the human visual system.