Nonie J. Finlayson

In visual search, target detection times are relatively insensitive to set size when targets and distractors differ on a single feature dimension. Search can be confined to only those elements sharing a single feature, such as color (Egeth, Virzi, & Garbart, 1984). These findings have been taken as evidence that elementary feature dimensions support a…
Many activities necessitate the deployment of attention to specific distances and directions in our three-dimensional (3D) environment. However, most research on how attention is deployed is conducted with two-dimensional (2D) computer displays, leaving a large gap in our understanding of the deployment of attention in 3D space. We report how each of…
The stereoscopic fusion limit denotes the largest binocular disparity for which a single fused image is perceived. Several criteria can be employed when judging whether or not a stereoscopic display is fused, and this may be a factor contributing to a discrepancy in the literature. Schor, Wood, and Ogawa (1984, Vision Research, 24, 661-665) reported that…
Are objects moving in depth searched for efficiently? Previous studies have reported conflicting results, with some finding efficient search for only approaching motion (Franconeri & Simons, 2003), and others reporting that both approaching and receding motion are found more efficiently than static targets (Skarratt, Cole, & Gellatly, 2009). This may be due…
How we perceive the environment is not stable and seamless. Recent studies have found that how a person qualitatively experiences even simple visual stimuli varies dramatically across different locations in the visual field. Here we use a method we developed recently, which we call multiple alternatives perceptual search (MAPS), for efficiently mapping such…
We live in a 3D world, and yet the majority of vision research is restricted to 2D phenomena. Previous research has shown that neural representations of 2D visual space are present throughout visual cortex. Many of these visual areas are also known to be sensitive to depth information (including V3, V3A, V3B/KO, V7, LO, and MT): how does this depth…
Visual information is initially represented as 2D images on the retina, but our brains are able to transform this input to perceive our rich 3D environment. While many studies have explored 2D spatial representations or depth perception in isolation, it remains unknown if or how these processes interact in human visual cortex. Here we used functional MRI…
A fundamental aspect of human visual perception is the ability to recognize and locate objects in the environment. Importantly, our environment is predominantly three-dimensional (3D), but while there is considerable research exploring the binding of object features and location, it is unknown how depth information interacts with features in the object…
Depth is a frequently overlooked aspect of vision research, despite the fact that perceiving depth cues is essential for interacting appropriately with our surroundings. Behavioral and physiological studies have provided a solid framework for understanding depth perception, but we have yet to establish the precise neural…