Coordinating multiple sensory modalities while learning to reach
Edited by: Robert M. French, Jacques P. Sougné
By the onset of reaching, young infants are already able to coordinate vision of a target with the felt position of their arm. How is this coordination achieved? To investigate the hypothesis that infants learn to link vision and proprioception via the sense of touch, we implemented a recent computational model of reaching. The model employs a genetic algorithm as a proxy for sensorimotor development in young infants. The three principal findings of our simulations were that tactile perception: (1) facilitates learning to coordinate vision and proprioception, (2) promotes an efficient reaching strategy, and (3) accelerates the remapping of vision and proprioception after perturbation of the multimodal map. Follow-up analyses of the model provide additional support for our hypothesis, and suggest that touch helps to coordinate vision and proprioception by providing a third, correlated information channel.
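To make the "genetic algorithm as a proxy for development" idea concrete, the sketch below is a minimal toy version of that setup, not the authors' published model: a GA evolves a linear mapping from a visual target position to the joint angles of a 2-link planar arm, and fitness rewards ending the reach with the hand on the target, standing in for the tactile contact signal. The arm geometry, genome encoding, and fitness function are all illustrative assumptions.

```python
# Toy GA for reaching (illustrative sketch; arm model, genome, and
# fitness are assumptions, not the model described in the abstract).
import numpy as np

rng = np.random.default_rng(0)
L1, L2 = 1.0, 1.0  # link lengths of the toy 2-link arm (assumed)

def hand_position(angles):
    """Forward kinematics of a 2-link planar arm."""
    a1, a2 = angles
    x = L1 * np.cos(a1) + L2 * np.cos(a1 + a2)
    y = L1 * np.sin(a1) + L2 * np.sin(a1 + a2)
    return np.array([x, y])

def fitness(genome, targets):
    """Negative mean reach error: touching the target maximizes fitness."""
    W = genome.reshape(2, 3)  # maps visual input [x, y, 1] to joint angles
    err = 0.0
    for t in targets:
        angles = W @ np.append(t, 1.0)  # "seen" target drives the arm
        err += np.linalg.norm(hand_position(angles) - t)
    return -err / len(targets)

targets = rng.uniform(-1.5, 1.5, size=(20, 2))  # reachable visual targets
pop = rng.normal(0.0, 0.5, size=(60, 6))        # population of genomes

for gen in range(200):
    scores = np.array([fitness(g, targets) for g in pop])
    elite = pop[np.argsort(scores)[-10:]]           # keep the 10 best genomes
    children = elite[rng.integers(0, 10, size=50)]  # resample parents
    children += rng.normal(0.0, 0.05, children.shape)  # mutate
    pop = np.vstack([elite, children])

best = max(pop, key=lambda g: fitness(g, targets))
print("best mean reach error:", -fitness(best, targets))
```

In this toy version, generations of the GA play the role of developmental time: reach error falls across generations because genomes whose hand lands on the visually specified target (i.e., that "touch" it) are preferentially retained, which is one simple way a contact signal can couple visual and proprioceptive coordinates.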