The visual encoding of purely proprioceptive intermanual tasks is due to the need of transforming joint signals, not to their inter-hemispheric transfer.

Abstract

To perform goal-oriented hand movements, humans combine multiple sensory signals (e.g., vision and proprioception) that can be encoded in various reference frames (body-centered and/or exo-centered). In a previous study we showed that, when aligning a hand to a remembered target orientation, the brain encodes both target and response in visual space when the target is sensed by one hand and the response is performed by the other, even though both are sensed only through proprioception. Here we ask whether such visual encoding is due i) to the necessity of transferring sensory information across the brain hemispheres or ii) to the necessity, arising from the arms' anatomical mirror symmetry, of transforming the joint signals of one limb into the reference frame of the other. To answer this question, we asked subjects to perform purely proprioceptive tasks in three conditions: Intra - the same arm sensing the target and performing the movement; Inter/Parallel - one arm sensing the target and the other reproducing its orientation; and Inter/Mirror - one arm sensing the target and the other mirroring its orientation. Performance was very similar between Intra and Inter/Mirror (the conditions not requiring joint-signal transformations), while both differed from Inter/Parallel. Manipulation of the visual scene in a virtual-reality paradigm showed visual encoding of proprioceptive information only in the Inter/Parallel condition. These results suggest that the visual encoding of purely proprioceptive tasks is not due to inter-hemispheric transfer of the proprioceptive information per se, but to the necessity of transforming joint signals between mirror-symmetric limbs.

DOI: 10.1152/jn.00140.2017