A new framework for understanding vision from the perspective of the primary visual cortex

Li Zhaoping · Current Opinion in Neurobiology · Published 28 May 2019 · Biology

Priority coding in the visual system.
It is proposed that the brain combines different types of priority into a unified priority signal while also retaining the ability to differentiate between them, and that this happens by leveraging partially overlapping low-dimensional neural subspaces for each type of priority that are shared with the downstream neural populations involved in decision-making.
A review of interactions between peripheral and foveal vision
Findings illustrate that peripheral and foveal processing are closely connected, balancing the trade-off between a large peripheral visual field and high resolution at the fovea.
Visual search in naturalistic scenes from foveal to peripheral vision: A comparison between dynamic and static displays
How important foveal, parafoveal, and peripheral vision are depends on the task: for object search and letter search in static images of real-world scenes, peripheral vision is crucial for efficient search.
The effect of target salience and size in visual search within naturalistic scenes under degraded vision
The results support search models that incorporate salience to predict eye-movement behavior, and show how important different regions of the visual field are for different subprocesses of search (target localization and verification).
Correspondence between Monkey Visual Cortices and Layers of a Saliency Map Model Based on a Deep Convolutional Neural Network for Representations of Natural Images
Insight is provided into the mechanism of the trained DCNN saliency map model and it is suggested that the neural representations in V1 play an important role in computing the saliency that mediates attentional selection, which supports the V1 saliency hypothesis.
Human visual search follows a suboptimal Bayesian strategy revealed by a spatiotemporal computational model and experiment
A continuous-time eye movement model predicts both eye fixation location and duration; applied to real data, it shows that humans may use an eye-movement strategy that balances task performance and costs when searching for a target.
Feature blindness: a challenge for understanding and modelling visual object recognition
It is argued that the reason underlying human behaviour may be a bias to look for features that are less hungry for cognitive resources and generalise better to novel instances, which may be why human vision overly relies on global features, such as shape, and glosses over many other features that are perfectly diagnostic.
Intracranial Recordings Demonstrate Both Cortical and Medial Temporal Lobe Engagement in Visual Search in Humans
Intracranial recordings are used to delineate the neural correlates of Search and Pop-out with an unprecedented combination of spatiotemporal resolution and coverage across cortical and subcortical structures and affirm a central role for the right lateral frontal cortex in Search.
The central-peripheral dichotomy and metacontrast masking
According to the central-peripheral dichotomy (CPD), feedback from higher to lower cortical areas along the visual pathway to aid recognition is weaker in the more peripheral visual field.

Superior colliculus encodes visual saliency before the primary visual cortex
Although the response latency to visual stimulus onset was shorter for V1 neurons than for superior colliculus superficial visual-layer neurons (SCs), the saliency representation emerged earlier in SCs than in V1. This is consistent with the hypothesis that SCs neurons pool the inputs from multiple V1 neurons to form a feature-agnostic saliency map, which may be relayed to other brain areas.
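The pooling step this summary describes can be illustrated with a minimal sketch: under the V1 saliency hypothesis, saliency at each location is the strongest response across feature-tuned neurons there, so taking a maximum over per-feature response maps yields a map that signals *where* is salient without encoding *which* feature made it so. The function name and toy data below are illustrative assumptions, not anything from the cited papers.

```python
import numpy as np

def feature_agnostic_saliency(v1_feature_maps):
    """Pool feature-tuned V1 response maps into one saliency map.

    Each input map holds the responses of neurons tuned to one feature
    (e.g. one orientation). The pooled map takes the maximum response
    across features at every location, so it is feature-agnostic.
    """
    stacked = np.stack(v1_feature_maps, axis=0)  # (n_features, H, W)
    return stacked.max(axis=0)                   # (H, W)

# Toy example: an orientation singleton among uniform distractors.
horizontal = np.full((3, 3), 0.2)  # weak responses to background bars
vertical = np.zeros((3, 3))
vertical[1, 1] = 1.0               # strong response at the singleton
saliency = feature_agnostic_saliency([horizontal, vertical])
# The singleton location carries the highest saliency.
assert saliency[1, 1] == saliency.max()
```

The maximum (rather than a sum) is what makes the map blind to feature identity: a location is salient if *any* feature channel responds strongly there.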
A saliency map in primary visual cortex
A summary-statistic representation in peripheral vision explains visual crowding.
It is shown that the difficulty of performing an identification task within a single pooling region using this representation of the stimuli correlates with peripheral identification performance under crowding, providing evidence that a unified neuronal mechanism may underlie peripheral vision, ordinary pattern recognition in central vision, and texture perception.
Theoretical understanding of the early visual processes by data compression and data selection
Two lines of theoretical work that understand processes in the retina and primary visual cortex within this framework are reviewed, together with the hypothesis that neural activities in V1 represent the bottom-up saliencies of visual inputs, such that information can be selected for, or discarded from, detailed or attentive processing.
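The data-compression side of this framework treats early vision as removing redundancy from correlated inputs. A standard toy illustration (an assumption for exposition, not the reviewed models themselves) is covariance whitening: correlated signals, like nearby pixels in natural images, are linearly transformed so their components become decorrelated with equal power.

```python
import numpy as np

def whiten(samples):
    """Decorrelate (whiten) input samples via the symmetric
    inverse-square-root of their covariance -- a toy model of
    redundancy reduction in early vision."""
    x = samples - samples.mean(axis=0)
    cov = np.cov(x, rowvar=False)
    eigval, eigvec = np.linalg.eigh(cov)
    w = eigvec @ np.diag(1.0 / np.sqrt(eigval)) @ eigvec.T
    return x @ w

rng = np.random.default_rng(0)
# Strongly correlated 2-D inputs (nearby pixels in natural scenes correlate).
base = rng.normal(size=(2000, 1))
samples = np.hstack([base, base + 0.1 * rng.normal(size=(2000, 1))])
whitened = whiten(samples)
# After whitening, the covariance is (numerically) the identity:
assert np.allclose(np.cov(whitened, rowvar=False), np.eye(2), atol=1e-6)
```

Whitening captures only the compression half of the framework; the selection half (saliency) is a distinct operation, which is precisely the distinction the reviewed work draws between retina and V1.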
Feedback of Visual Object Information to Foveal Retinotopic Cortex
It is found that the pattern of functional magnetic resonance imaging responses in human foveal retinotopic cortex contains information about objects presented in the periphery, far from the fovea, a finding not predicted by prior theories of feedback.
Bottom-up saliency and top-down learning in the primary visual cortex of monkeys
V1’s early responses are directly linked with behavior and represent the bottom-up saliency signals, likely serving as the basis for making the detection task more reflexive and less top-down driven.
Gaze capture by eye-of-origin singletons: interdependence with awareness.
In visual searches for an orientation singleton target bar among uniformly oriented background bars, an ocular singleton non-target bar, at the same eccentricity as the target from the center of the search display, often captured the first search saccade.
Selectivity and tolerance for visual texture in macaque V2
Evidence is presented that neurons in area V2 are selective for local statistics that occur in natural visual textures, and tolerant of manipulations that preserve these statistics.