Foreground-Background Segmentation Revealed during Natural Image Viewing

Paolo Papale, Andrea Leo, Luca Cecchetti, Giacomo Handjaras, Kendrick Norris Kay, Pietro Pietrini, Emiliano Ricciardi
Abstract One of the major challenges in visual neuroscience is foreground-background segmentation. Data from nonhuman primates show that segmentation involves two distinct but associated processes: the enhancement of neural activity during figure processing (i.e., foreground enhancement) and the suppression of background-related activity (i.e., background suppression). To study foreground-background segmentation in ecological conditions, we introduce a novel method based on…

Resolving the Spatial Profile of Figure Enhancement in Human V1 through Population Receptive Field Modeling

The detection and segmentation of meaningful figures from their background is one of the primary functions of vision. While work in nonhuman primates has implicated early visual mechanisms in this…
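The population receptive field (pRF) approach named in the title models each voxel's visual-field sensitivity as a 2-D Gaussian and predicts its response as the overlap between that Gaussian and the stimulus aperture. A minimal sketch of this idea (all parameter values and the bar stimulus are illustrative, not from the paper):

```python
import numpy as np

def prf_response(stimulus, x0, y0, sigma):
    """Predicted response of a voxel with a 2-D Gaussian pRF.

    stimulus: (H, W) binary aperture (1 where the stimulus is present).
    x0, y0, sigma: pRF center and size, in pixel coordinates.
    """
    h, w = stimulus.shape
    ys, xs = np.mgrid[0:h, 0:w]
    prf = np.exp(-((xs - x0) ** 2 + (ys - y0) ** 2) / (2 * sigma ** 2))
    prf /= prf.sum()  # normalize so the response is a weighted stimulus overlap
    return float((stimulus * prf).sum())

# A bar covering the left half of the field drives a left-centered pRF
# far more than a right-centered one.
stim = np.zeros((64, 64))
stim[:, :32] = 1.0
left = prf_response(stim, x0=10, y0=32, sigma=5)
right = prf_response(stim, x0=54, y0=32, sigma=5)
```

Fitting such a model per voxel (estimating x0, y0, sigma from responses to many apertures) is what lets studies like this one resolve where, relative to a figure's border, enhancement occurs.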

Feedback brings scene information to the representation of occluded image regions in area V1 of monkeys and humans

The results reveal that contextual influences alter the spiking activity of monkey V1 across large distances on a rapid time scale, carry information about scene identity, and resemble those found in human V1.

Shape coding in occipito-temporal cortex relies on object silhouette, curvature and medial-axis

It is found that object shape is encoded in a multi-dimensional fashion and thus defined by the interaction of multiple features and that the relevance of shared representations linearly increases moving from posterior to anterior regions.

Results indicate that the visual cortex encodes shared relations between different features in a topographic fashion and that object shape is encoded along different dimensions, each representing orthogonal features.

Common spatiotemporal processing of visual features shapes object representation

The temporal dynamics of feature processing in human subjects attending to objects from six semantic categories are revealed, showing that low-level properties, shape and category are represented within the same spatial locations early in time: 100–150 ms after stimulus onset.

SMU-Net: Saliency-Guided Morphology-Aware U-Net for Breast Lesion Segmentation in Ultrasound Image

A saliency-guided, morphology-aware U-Net (SMU-Net) is proposed for lesion segmentation in breast ultrasound (BUS) images, achieving higher performance and greater robustness to dataset scale than several state-of-the-art deep learning approaches.

The Contribution of Shape Features and Demographic Variables to Disembedding Abilities

Humans naturally perceive visual patterns in a global manner and are remarkably capable of extracting object shapes based on properties such as proximity, closure, symmetry, and good continuation.

Emotionotopy: Gradients encode emotion dimensions in right temporo-parietal territories

Right TPJ activity is explained by orthogonal and spatially overlapping gradients encoding the polarity, complexity and intensity of emotional experiences, and emotionotopy is proposed as the underlying principle of emotion perception in TPJ.

Object segmentation controls image reconstruction from natural scenes

This work describes a novel paradigm that enabled the authors to selectively evaluate the relative roles played by these two feature classes in reconstructing signals from corrupted images, and suggests that the two modes are best viewed as an integrated perceptual mechanism.

Texture Segregation Causes Early Figure Enhancement and Later Ground Suppression in Areas V1 and V4 of Visual Cortex

This work compared texture-defined figures with homogeneous textures and found an early enhancement of the figure representation, and a later suppression of the background, which provides new insights into the mechanisms for figure–ground organization.

Texture segregation in the human visual cortex: A functional MRI study.

Using functional MRI, this work investigates the level at which neural correlates of texture segregation can be found in the human visual cortex and provides evidence that higher order areas with large receptive fields play an important role in the segregation of visual scenes based on texture-defined boundaries.

Object-related activity revealed by functional magnetic resonance imaging in human occipital cortex.

Evidence for an intermediate link in the chain of processing stages leading to object recognition in human visual cortex is reported, which suggests that the enhanced responses to objects were not a manifestation of low-level visual processing.

Identifying natural images from human brain activity

A decoding method based on quantitative receptive-field models that characterize the relationship between visual stimuli and fMRI activity in early visual areas is developed and it is suggested that it may soon be possible to reconstruct a picture of a person’s visual experience from measurements of brain activity alone.
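The identification scheme described above can be sketched in a few lines: fit a receptive-field (encoding) model per voxel, predict the activity pattern each candidate image would evoke, and pick the candidate whose prediction best correlates with the measured pattern. The following toy example (voxel counts, feature dimensions, and noise level are all illustrative assumptions, and the linear feature model stands in for the quantitative receptive-field models of the paper) shows the selection step:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 50 voxels, each with a linear receptive-field model
# (weights over 100 stimulus features), assumed already fit on training data.
n_voxels, n_features, n_images = 50, 100, 20
weights = rng.normal(size=(n_voxels, n_features))

# Candidate images, described by their feature vectors.
images = rng.normal(size=(n_images, n_features))

# Predicted fMRI activity pattern for every candidate image.
predicted = images @ weights.T              # shape: (n_images, n_voxels)

# Simulated measurement: the response to image 7, plus noise.
true_idx = 7
measured = predicted[true_idx] + 0.1 * rng.normal(size=n_voxels)

# Identify the viewed image as the candidate whose predicted pattern
# best correlates with the measured pattern.
corrs = [np.corrcoef(measured, p)[0, 1] for p in predicted]
identified = int(np.argmax(corrs))
```

The design choice that makes this scale is that the encoding model is fit once, after which any number of novel candidate images can be scored without new training data.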

Feedforward and Recurrent Processing in Scene Segmentation: Electroencephalography and Functional Magnetic Resonance Imaging

Electroencephalography (EEG) and functional magnetic resonance imaging data are presented with a paradigm that makes it possible to differentiate between boundary detection and scene segmentation in humans and conclude that texture boundaries are detected in a feedforward fashion and are represented at increasing latencies in higher visual areas.

Local figure-ground cues are valid for natural images.

This work quantified the extent to which figural regions locally tend to be smaller, more convex, and to lie below ground regions, and developed a simple bottom-up computational model of figure-ground assignment that takes image contours as input.
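The three local cues quantified above can be sketched as simple region statistics. In this illustrative example (not the paper's model, which operates on contours), convexity is crudely approximated by the region's fill ratio within its bounding box, and "lower" is read off the centroid row:

```python
import numpy as np

def local_cues(mask):
    """Crude versions of three local figure cues: size, convexity, lower region.

    mask: boolean (H, W) array marking one side of a contour.
    Convexity is approximated by the bounding-box fill ratio, a
    simplification of contour-based convexity measures.
    """
    rows, cols = np.nonzero(mask)
    area = int(mask.sum())
    bbox = (rows.max() - rows.min() + 1) * (cols.max() - cols.min() + 1)
    return {
        "size": float(area),
        "convexity": float(area / bbox),
        "centroid_row": float(rows.mean()),  # larger = lower in the image
    }

def figural_votes(cues_a, cues_b):
    """Count how many cues favor side A as the figure: smaller, more convex, lower."""
    votes = 0
    votes += cues_a["size"] < cues_b["size"]
    votes += cues_a["convexity"] > cues_b["convexity"]
    votes += cues_a["centroid_row"] > cues_b["centroid_row"]
    return votes

# A small compact blob in the lower half (figure) vs. the surrounding ground.
img = np.zeros((40, 40), dtype=bool)
img[22:32, 15:25] = True
votes = figural_votes(local_cues(img), local_cues(~img))
```

Here all three cues agree that the blob is the figure; the paper's contribution was measuring how often such local cues give the correct assignment in natural images.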