Depth perception in disparity-defined objects: finding the balance between averaging and segregation
Neural mechanisms underlying depth perception are reviewed with respect to three computational goals: determining surface depth order, gauging depth intervals, and representing 3D surface geometry and object shape. Accumulating evidence suggests that these three computational steps correspond to distinct stages of cortical processing. Early visual areas appear to be involved in depth ordering, while depth intervals, expressed in terms of relative disparities, are likely represented at intermediate stages. Finally, 3D surfaces appear to be processed in higher cortical areas, including an area in which individual neurons encode 3D surface geometry; a population of such neurons may therefore represent 3D object shape. How these processes are integrated to form a coherent 3D percept of the world remains to be understood.
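The notion of depth intervals "expressed in terms of relative disparities" can be sketched numerically: a feature's absolute disparity is the difference between its horizontal positions in the left- and right-eye images, and the relative disparity between two features is the difference of their absolute disparities. The feature positions below are hypothetical values chosen purely for illustration:

```python
# Minimal sketch of absolute vs. relative horizontal disparity.
# All positions are in degrees of visual angle; values are hypothetical.

def absolute_disparity(x_left: float, x_right: float) -> float:
    """Absolute disparity of one feature: left-eye minus right-eye position."""
    return x_left - x_right

def relative_disparity(disp_a: float, disp_b: float) -> float:
    """Disparity of feature A relative to feature B."""
    return disp_a - disp_b

# Two features on a disparity-defined surface (hypothetical positions).
disp_near = absolute_disparity(x_left=1.20, x_right=1.00)
disp_far = absolute_disparity(x_left=0.95, x_right=0.90)

# The depth interval between the two features is signalled by their
# relative disparity, here roughly 0.15 deg.
print(round(relative_disparity(disp_near, disp_far), 2))
```

A useful property of this quantity, consistent with its proposed role at intermediate cortical stages, is that a change in eye vergence shifts both absolute disparities by the same amount and therefore leaves the relative disparity unchanged.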