Central and peripheral vision for scene recognition: A neurocomputational modeling exploration.

@article{Wang2017CentralAP,
  title={Central and peripheral vision for scene recognition: A neurocomputational modeling exploration},
  author={Panqu Wang and G. Cottrell},
  journal={Journal of Vision},
  year={2017},
  volume={17},
  number={4},
  pages={9}
}
What are the roles of central and peripheral vision in human scene recognition? Larson and Loschky (2009) showed that peripheral vision contributes more than central vision to obtaining maximum scene recognition accuracy. However, central vision is more efficient for scene recognition than peripheral vision, based on the amount of visual area needed for accurate recognition. In this study, we model and explain the results of Larson and Loschky (2009) using a neurocomputational modeling approach. We…
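The behavioral paradigm being modeled here presents scenes through a "window" (only central vision visible) or behind a "scotoma" (only peripheral vision visible). A minimal sketch of that masking setup, not the authors' code and with illustrative function names, could look like this:

```python
# Sketch (assumption: not the paper's implementation) of the window /
# scotoma masking paradigm from Larson and Loschky (2009): a window
# keeps only central vision, a scotoma keeps only peripheral vision.
import numpy as np

def radial_mask(size, radius):
    """Boolean mask, True inside a central disk of `radius` pixels."""
    half = size // 2
    y, x = np.ogrid[-half:size - half, -half:size - half]
    return x * x + y * y <= radius * radius

def window(image, radius):
    """Central vision only: zero out everything outside the disk."""
    return np.where(radial_mask(image.shape[0], radius), image, 0)

def scotoma(image, radius):
    """Peripheral vision only: zero out everything inside the disk."""
    return np.where(radial_mask(image.shape[0], radius), 0, image)

# At the same radius, the window exposes far less image area than the
# complementary scotoma, which is the basis of the efficiency argument:
# central vision needs less visual area per unit of accuracy.
img = np.ones((256, 256))
r = 64
print(window(img, r).mean())   # fraction of pixels visible in the window
print(scotoma(img, r).mean())  # fraction of pixels visible around the scotoma
```

At radius 64 on a 256-pixel image the window exposes roughly a fifth of the pixels while the scotoma condition exposes the remaining four fifths, so equal recognition accuracy in the two conditions would imply much higher per-pixel efficiency for central vision.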


Objects and scenes classification with selective use of central and peripheral image content

The contributions of central and peripheral vision to scene gist recognition with a 180° visual field.

TLDR
This study investigated the relative contributions of central versus peripheral vision in scene gist acquisition, while testing how well the conclusions of Larson and Loschky (2009) generalize to more realistic viewing conditions.

Towards The Deep Model: Understanding Visual Recognition Through Computational Models

TLDR
This thesis describes how a neurocomputational model can be applied to explain the modulation of visual experience on the performance of subordinate-level face and object recognition, and shows a biologically-inspired model can develop realistic features of the early visual cortex, while performing well on object recognition datasets.

Vision at A Glance: Interplay between Fine and Coarse Information Processing Pathways

TLDR
A computational model is built to elucidate the computational advantages of the interactions between the two pathways, finding that FineNet can teach CoarseNet through imitation and improve its performance considerably, and that CoarseNet can improve the noise robustness of FineNet through association.

A Gated Peripheral-Foveal Convolutional Neural Network for Unified Image Aesthetic Prediction

TLDR
A gated peripheral-foveal convolutional neural network is proposed that mimics the functions of peripheral vision to encode holistic information and provide attended regions for the fovea, together with a gated information fusion network to weigh the contributions of the two pathways.

Peripheral vision in real-world tasks: A systematic review.

TLDR
Three ways in which basic, sport, and applied science can benefit each other's methodology are recommended, furthering the understanding of peripheral vision more generally.

Learning Foveated Reconstruction to Preserve Perceived Image Statistics

TLDR
The primary goal is to make training procedure less sensitive to the distortions that humans cannot detect and focus on penalizing perceptually important artifacts, which aims to preserve perceived image statistics rather than natural image statistics.

What and Where: Location-Dependent Feature Sensitivity as a Canonical Organizing Principle of the Visual System

TLDR
It is proposed that location-dependent feature sensitivity is a fundamental organizing principle of the visual system that achieves efficient representation of positional regularities in visual experience, and reflects the evolutionary selection of sensory and motor circuits to optimally represent behaviorally relevant information.

Approximating the Architecture of Visual Cortex in a Convolutional Network

  • B. Tripp
  • Neural Computation, 2019
TLDR
A cortex-like CNN architecture is developed via three components: a loss function that quantifies the consistency of a CNN architecture with neural data from tract-tracing, cell-reconstruction, and electrophysiology studies; a hyperparameter-optimization approach for reducing this loss; and heuristics for organizing units into convolutional-layer grids.

Illusory Motion Reproduced by Deep Neural Networks Trained for Prediction

TLDR
Surprisingly, the computer model accurately predicted the magnitude and direction of motion of a rotating propeller in unlearned videos and represented the rotational motion for illusion images that were not moving physically, much like human visual perception.

References

SHOWING 1-10 OF 127 REFERENCES

The contributions of central versus peripheral vision to scene gist recognition.

TLDR
Results indicated the periphery was more useful than central vision for maximal performance (i.e., equal to seeing the entire image) but central vision was more efficient for scene gist recognition than the periphery on a per-pixel basis.

Deep Neural Networks Rival the Representation of Primate IT Cortex for Core Visual Object Recognition

TLDR
These evaluations show that, unlike previous bio-inspired models, the latest DNNs rival the representational performance of IT cortex on this visual object recognition task; an extension of "kernel analysis" is also proposed that measures generalization accuracy as a function of representational complexity.

Pixels to Voxels: Modeling Visual Representation in the Human Brain

TLDR
The fit models provide a new platform for exploring the functional principles of human vision, and they show that modern methods of computer vision and machine learning provide important tools for characterizing brain function.

Pinpointing the peripheral bias in neural scene-processing networks during natural viewing.

TLDR
Functional MRI results show a fine-scale relationship between eccentricity biases and functional correlation during natural perception, giving new insight into the structure of the scene-perception network.

Learning Deep Features for Scene Recognition using Places Database

TLDR
A new scene-centric database called Places with over 7 million labeled pictures of scenes is introduced with new methods to compare the density and diversity of image datasets and it is shown that Places is as dense as other scene datasets and has more diversity.

A general account of peripheral encoding also predicts scene perception performance.

TLDR
It is shown that an encoding model previously shown to predict performance in crowded object recognition and visual search does a reasonably good job of predicting performance on scene perception tasks as well, suggesting that the same peripheral encoding may underlie these tasks and that scene tasks may not be so special.

Peripheral pooling is tuned to the localization task.

TLDR
It is found that peripheral pooling, but not reduced acuity, affects localization performance positively, whereas it is detrimental to object recognition performance.

A space-variant model for motion interpretation across the visual field.

TLDR
A neural model for the estimation of the focus of radial motion (FRM) at different retinal locations is implemented and validated by comparing its results with the precision with which human observers can estimate the FRM in naturalistic motion stimuli, supporting the model's potential application to neuromimetic robotic architectures.

Neural mechanisms of rapid natural scene categorization in human visual cortex

TLDR
Findings indicate that the rapid detection of categorical information in natural scenes is mediated by a category-specific biasing mechanism in object-selective cortex that operates in parallel across the visual field, and biases information processing in favour of objects belonging to the target object category.
...