An Inference Network Model for Goal-directed Attentional Selection

@article{Chu2019AnIN,
  title={An Inference Network Model for Goal-directed Attentional Selection},
  author={Yang Chu and Dan F. M. Goodman},
  journal={2019 Conference on Cognitive Computational Neuroscience},
  year={2019}
}
“Listen to the cello in this symphony!” How can we direct selective attention according to different goals, even in distracting environments we have never experienced before? This is an essential cognitive ability of the brain, but it remains challenging for machines. We developed a computational model that can identify individual digits in images containing multiple overlapping digits, without ever having seen overlapping digits during training. The goal-driven attentional selection is modelled…
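
The abstract is truncated, so the model itself is not reproduced here; the sketch below only illustrates the overlapping-digit test setting it describes. The composition rule (pixel-wise max of two MNIST test digits) and the function name are assumptions for illustration, not the paper's exact protocol.

```python
# Minimal sketch: composing overlapping-digit test images from MNIST.
# The pixel-wise max rule and function names are illustrative assumptions.
import numpy as np
import torchvision

mnist = torchvision.datasets.MNIST(root="./data", train=False, download=True)

def overlap_pair(i, j):
    """Overlay two MNIST test digits into one image by pixel-wise max."""
    a = np.array(mnist[i][0], dtype=np.float32) / 255.0
    b = np.array(mnist[j][0], dtype=np.float32) / 255.0
    return np.maximum(a, b), (mnist[i][1], mnist[j][1])

image, labels = overlap_pair(0, 1)   # a 28x28 image containing two digits
print(image.shape, labels)
```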

References

Auto-Encoding Variational Bayes

Introduces a stochastic variational inference and learning algorithm that scales to large datasets and, under some mild differentiability conditions, even works in the intractable case.
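
As a rough illustration of the algorithm this reference introduces, here is a minimal VAE sketch in PyTorch showing the reparameterization trick and the (negative) evidence lower bound; the layer sizes and Bernoulli likelihood are arbitrary choices, not details taken from this paper or from Chu & Goodman's model.

```python
# Minimal VAE sketch (PyTorch): reparameterization trick + negative ELBO.
import torch
import torch.nn as nn

class VAE(nn.Module):
    def __init__(self, x_dim=784, h_dim=400, z_dim=20):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(x_dim, h_dim), nn.ReLU())
        self.mu = nn.Linear(h_dim, z_dim)
        self.logvar = nn.Linear(h_dim, z_dim)
        self.dec = nn.Sequential(nn.Linear(z_dim, h_dim), nn.ReLU(),
                                 nn.Linear(h_dim, x_dim), nn.Sigmoid())

    def forward(self, x):
        h = self.enc(x)
        mu, logvar = self.mu(h), self.logvar(h)
        eps = torch.randn_like(mu)                 # reparameterization:
        z = mu + torch.exp(0.5 * logvar) * eps     # z = mu + sigma * eps
        return self.dec(z), mu, logvar

def neg_elbo(x, x_hat, mu, logvar):
    # Reconstruction term plus KL(q(z|x) || N(0, I))
    recon = nn.functional.binary_cross_entropy(x_hat, x, reduction="sum")
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return recon + kl
```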

Fully convolutional networks for semantic segmentation

The key insight is to build “fully convolutional” networks that take input of arbitrary size and produce correspondingly-sized output with efficient inference and learning.
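
A minimal sketch of the "fully convolutional" idea: with no fully connected layers, the same network maps inputs of arbitrary spatial size to correspondingly sized score maps. Channel counts, depth, and the bilinear upsampling choice are illustrative assumptions, not the architecture from the paper.

```python
# Minimal fully convolutional sketch (PyTorch): no fully connected layers,
# so any input size yields a correspondingly sized per-pixel score map.
import torch
import torch.nn as nn

class TinyFCN(nn.Module):
    def __init__(self, num_classes=21):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                         # downsample by 2
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
        )
        self.score = nn.Conv2d(64, num_classes, 1)   # 1x1 conv "classifier"
        self.upsample = nn.Upsample(scale_factor=2, mode="bilinear",
                                    align_corners=False)

    def forward(self, x):
        return self.upsample(self.score(self.features(x)))

# Works for arbitrary (even-sized) spatial dimensions:
for size in [(64, 64), (100, 180)]:
    out = TinyFCN()(torch.randn(1, 3, *size))
    print(out.shape)   # (1, num_classes, H, W)
```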

Dynamic Routing Between Capsules

It is shown that a discriminatively trained, multi-layer capsule system achieves state-of-the-art performance on MNIST and is considerably better than a convolutional net at recognizing highly overlapping digits.
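
A rough sketch of the routing-by-agreement step described in that paper, under assumed tensor shapes; it omits the convolutional front end, the margin loss, and the reconstruction decoder of the full CapsNet.

```python
# Sketch of dynamic routing between capsule layers (shapes are illustrative).
import torch

def squash(s, dim=-1, eps=1e-8):
    """Shrink a vector's length into [0, 1) while preserving its direction."""
    norm2 = (s ** 2).sum(dim=dim, keepdim=True)
    return (norm2 / (1.0 + norm2)) * s / torch.sqrt(norm2 + eps)

def dynamic_routing(u_hat, num_iters=3):
    """u_hat: (batch, n_in, n_out, d_out) prediction vectors."""
    b = torch.zeros(u_hat.shape[:3], device=u_hat.device)   # routing logits
    for _ in range(num_iters):
        c = torch.softmax(b, dim=2)                          # coupling coefficients
        s = (c.unsqueeze(-1) * u_hat).sum(dim=1)             # weighted sum of predictions
        v = squash(s)                                        # output capsule vectors
        b = b + (u_hat * v.unsqueeze(1)).sum(dim=-1)         # agreement update
    return v

v = dynamic_routing(torch.randn(2, 1152, 10, 16))
print(v.shape)   # (2, 10, 16)
```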

EMNIST: an extension of MNIST to handwritten letters

Introduces Extended MNIST (EMNIST), a variant of the full NIST dataset that follows the same conversion paradigm used to create MNIST and shares the same image structure and parameters as the original MNIST task, allowing direct compatibility with existing classifiers and systems.
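
Because EMNIST keeps MNIST's 28x28 grayscale format, an existing MNIST pipeline can switch datasets with essentially one line. The torchvision loader below is a convenience assumption, not something used in the original EMNIST paper.

```python
# Loading EMNIST with torchvision; the image format matches MNIST (28x28).
import torchvision

emnist = torchvision.datasets.EMNIST(root="./data", split="letters",
                                     train=True, download=True)
image, label = emnist[0]
print(image.size, label)   # (28, 28) and an integer class label
```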

Gradient-based learning applied to document recognition

This paper reviews various methods applied to handwritten character recognition, compares them on a standard handwritten digit recognition task, and shows that convolutional neural networks outperform all other techniques.
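
A LeNet-5-style sketch in PyTorch, loosely following the architecture reviewed in that paper for 32x32 digit images; the activations and pooling details here are simplified for illustration.

```python
# LeNet-5-style convolutional network for 32x32 grayscale digit images.
import torch.nn as nn

lenet = nn.Sequential(
    nn.Conv2d(1, 6, 5), nn.Tanh(), nn.AvgPool2d(2),    # 32x32 -> 28x28 -> 14x14
    nn.Conv2d(6, 16, 5), nn.Tanh(), nn.AvgPool2d(2),   # 14x14 -> 10x10 -> 5x5
    nn.Flatten(),
    nn.Linear(16 * 5 * 5, 120), nn.Tanh(),
    nn.Linear(120, 84), nn.Tanh(),
    nn.Linear(84, 10),                                  # 10 digit classes
)
```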

Learning Deep Features for Discriminative Localization

This work revisits the global average pooling layer proposed in [13] and sheds light on how it explicitly enables a convolutional neural network (CNN) to achieve remarkable localization ability despite being trained on image-level labels.
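
A small sketch of the class activation mapping idea: the weights of the classifier that follows global average pooling are projected back onto the final convolutional feature maps, M_c(x, y) = Σ_k w_k^c f_k(x, y). Shapes and names here are illustrative, not tied to a particular backbone.

```python
# Class activation map from final conv features and post-GAP classifier weights.
import torch

def class_activation_map(features, fc_weight, class_idx):
    """
    features:  (C, H, W) final convolutional feature maps f_k
    fc_weight: (num_classes, C) weights of the layer after global average pooling
    Returns an (H, W) map  M_c(x, y) = sum_k w_k^c * f_k(x, y).
    """
    w = fc_weight[class_idx]                     # (C,)
    return torch.einsum("c,chw->hw", w, features)

cam = class_activation_map(torch.randn(64, 7, 7), torch.randn(10, 64), 3)
print(cam.shape)   # (7, 7); upsample to image size to localize the class
```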