A new framework for understanding vision from the perspective of the primary visual cortex

@article{Zhaoping2019ANF,
  title={A new framework for understanding vision from the perspective of the primary visual cortex},
  author={Li Zhaoping},
  journal={Current Opinion in Neurobiology},
  year={2019},
  volume={58},
  pages={1-10}
}
  • L. Zhaoping
  • Published 1 October 2019
  • Biology
  • Current Opinion in Neurobiology
A new discovery on visual information dynamic changes from V1 to V2: corner encoding
TLDR
A visual network model of the retina-lateral geniculate nucleus (LGN)-V1-V2 pathway is established that quantitatively accounts for the scarcity of visual information and the encoding rules, based on the principle of neural mapping from V1 to V2.
Automatic encoding of a view-centered background image in the macaque temporal lobe
TLDR
The results suggest that a combination of two distinct visual signals on relational space and retinotopic space may provide the first-person perspective serving perception and, presumably, subsequent episodic memory.
Early Visual Saliency Based on Isolated Optimal Features
TLDR
The optimal features predicted by the reference model turn out to be more salient than others, despite the lack of any clues coming from a global meaningful structure, suggesting that active vision is efficiently adapted to maximize information in natural visual scenes.
Priority coding in the visual system.
TLDR
It is proposed that the brain combines different types of priority into a unified priority signal while also retaining the ability to differentiate between them, and that this happens by leveraging partially overlapping low-dimensional neural subspaces for each type of priority that are shared with the downstream neural populations involved in decision-making.
Late disruption of central visual field disrupts peripheral perception of form and color
TLDR
This work employed a recently developed behavioral paradigm to explore whether late disruption to central visual space impaired perception of color, and showed a behavioral effect consistent with disrupting feedback to the fovea, in line with the foveal feedback suggested by previous neuroimaging studies.
Parallel Advantage: Further Evidence for Bottom-up Saliency Computation by Human Primary Visual Cortex
Finding a target among uniformly oriented non-targets is typically faster when the target is perpendicular, rather than parallel, to the non-targets. The V1 Saliency Hypothesis (V1SH) holds that neural activities in V1 represent the bottom-up saliency of visual inputs.
Central-peripheral dichotomy: color-motion and luminance-motion binding show stronger top-down feedback in central vision.
TLDR
It is found that top-down feedback is more directed to central vision, which can resolve ambiguities in feature binding at more central visual locations.
The Flip Tilt Illusion: Visible in Peripheral Vision as Predicted by the Central-Peripheral Dichotomy
TLDR
The flip tilt illusion arises because top-down feedback from higher to lower visual cortical areas is too weak or absent in the periphery to veto confounded feedforward signals from the primary visual cortex (V1).
Human Visual Search Follows Suboptimal Bayesian Strategy Revealed by a Spatiotemporal Computational Model
TLDR
This work measured the temporal course of the human visibility map and recorded the eye movements of human subjects performing a visual search task; the results suggest that the human visual search strategy is not strictly optimal in the sense of fully utilizing the visibility map, but instead balances search performance against the cost of performing the task.
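As a reading aid only (my own toy sketch in Python, not the authors' model): detectability falls off with eccentricity from the current fixation, each glimpse updates a Bayesian posterior over the target location, and the next fixation follows a simple heuristic. The fall-off function, numbers, and policy below are all hypothetical.

import numpy as np

rng = np.random.default_rng(0)
n_loc = 12
target = rng.integers(n_loc)

def visibility(fixation):
    # Hypothetical fall-off of target detectability (d') with distance from fixation.
    return 3.0 * np.exp(-0.4 * np.abs(np.arange(n_loc) - fixation))

log_post = np.full(n_loc, -np.log(n_loc))      # uniform prior over target locations
fixation = n_loc // 2
for _ in range(5):
    d = visibility(fixation)
    x = rng.normal(0.0, 1.0, n_loc)            # unit-variance noise at every location
    x[target] += d[target]                     # signal present only at the true target
    # If the target were at location j, only x[j] would have mean d[j], so the
    # Bayesian log-likelihood update reduces to d*x - d**2/2 at each location.
    log_post += d * x - d ** 2 / 2
    log_post -= np.logaddexp.reduce(log_post)  # renormalize the posterior
    # Heuristic policy: fixate the currently most probable location; the paper's
    # point is that human fixation choices do not fully exploit the visibility map.
    fixation = int(np.argmax(log_post))

print("true target:", int(target), "final guess:", int(np.argmax(log_post)))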

References

Showing 1-10 of 68 references
Superior colliculus encodes visual saliency before the primary visual cortex
TLDR
While the response latency to visual stimulus onset was earlier for V1 neurons than for superior colliculus superficial visual-layer neurons (SCs), the saliency representation emerged earlier in SCs than in V1, which is consistent with the hypothesis that SCs neurons pool the inputs from multiple V1 neurons to form a feature-agnostic saliency map, which may be relayed to other brain areas.
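As an illustration only (my own toy, not the recorded data or the paper's model), the pooling idea can be sketched as taking, at each location, the maximum over feature-tuned response maps to obtain a single feature-agnostic saliency map; the array shapes and values below are hypothetical.

import numpy as np

rng = np.random.default_rng(1)
n_orientations, height, width = 4, 10, 10
v1_responses = rng.random((n_orientations, height, width))  # feature-tuned response maps
v1_responses[2, 6, 3] += 1.0                                 # an orientation singleton pops out

saliency_map = v1_responses.max(axis=0)                      # feature-agnostic pooling (max rule)
row, col = np.unravel_index(saliency_map.argmax(), saliency_map.shape)
print("most salient location:", (int(row), int(col)))        # -> (6, 3)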
A saliency map in primary visual cortex
A summary-statistic representation in peripheral vision explains visual crowding.
TLDR
It is shown that the difficulty of performing an identification task within a single pooling region using this representation of the stimuli is correlated with peripheral identification performance under conditions of crowding, providing evidence that a unified neuronal mechanism may underlie peripheral vision, ordinary pattern recognition in central vision, and texture perception.
Theoretical understanding of the early visual processes by data compression and data selection
TLDR
Two lines of theoretical work that understand processes in the retina and primary visual cortex within this framework are reviewed, together with the hypothesis that neural activities in V1 represent the bottom-up saliencies of visual inputs, such that information can be selected for, or discarded from, detailed or attentive processing.
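As an illustration of the data-compression idea only (a standard whitening example, not the reviewed models themselves): nearby input channels are redundant because they are correlated, and a whitening transform removes that redundancy; the channel count and correlation profile below are made up.

import numpy as np

rng = np.random.default_rng(0)
n_channels, n_samples = 8, 20000
# Correlated inputs: covariance between channels falls off with their separation.
idx = np.arange(n_channels)
true_cov = 0.9 ** np.abs(idx[:, None] - idx[None, :])
inputs = rng.multivariate_normal(np.zeros(n_channels), true_cov, size=n_samples)

# Whitening transform W = C^(-1/2), estimated from the input covariance.
cov = np.cov(inputs, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)
whitener = eigvecs @ np.diag(eigvals ** -0.5) @ eigvecs.T
outputs = inputs @ whitener.T

print(np.round(np.cov(inputs, rowvar=False)[:3, :3], 2))   # strongly correlated
print(np.round(np.cov(outputs, rowvar=False)[:3, :3], 2))  # ~identity: redundancy removed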
Feedback of Visual Object Information to Foveal Retinotopic Cortex
TLDR
It is found that the pattern of functional magnetic resonance imaging responses in human foveal retinotopic cortex contained information about objects presented in the periphery, far away from the fovea, a result not predicted by prior theories of feedback.
Bottom-up saliency and top-down learning in the primary visual cortex of monkeys
TLDR
V1’s early responses are directly linked with behavior and represent the bottom-up saliency signals, likely serving as the basis for making the detection task more reflexive and less top-down driven.
Gaze capture by eye-of-origin singletons: interdependence with awareness.
TLDR
In visual searches for an orientation singleton target bar among uniformly oriented background bars, an ocular singleton non-target bar, at the same eccentricity as the target from the center of the search display, often captured the first search saccade.
Selectivity and tolerance for visual texture in macaque V2
TLDR
Evidence is presented that neurons in area V2 are selective for local statistics that occur in natural visual textures, and tolerant of manipulations that preserve these statistics.