When we knock on a door, we perceive the impact as a collection of simultaneous events combining sound, sight, and tactile sensation. In reality, information from the different modalities, although originating from a single source, flows through the brain along different pathways and reaches processing centers at different times. Interpreting sensory modalities that seem to occur simultaneously therefore requires information processing that accounts for these different delays. Does the brain, like a computer-based robotic system, use an explicit estimate of the time delay to realign the sensory streams? Or does it compensate for temporal delays by representing them as changes in body/environment mechanics? Using delayed-state or approximated delayed-state manipulations between visual and proprioceptive feedback during a tracking task, we show that tracking errors, grip forces, and learning curves are consistent with the predictions of a representation based on an approximation of the delay, refuting an explicit delayed-state representation. A delayed-state representation rests on estimating the time elapsed between movement commands and their observed consequences. In contrast, an approximated-delay representation results from estimating the instantaneous relation between expected and observed motion variables, without explicit reference to time.
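The distinction between the two candidate representations can be illustrated with a minimal numerical sketch (not the authors' model; the trajectory, delay value, and first-order Taylor form are illustrative assumptions): a delayed-state representation explicitly time-shifts the state by an estimated delay, whereas a state-based approximation maps current position and velocity to the delayed feedback without any reference to elapsed time.

```python
import numpy as np

# Illustrative sketch: a 1-D sinusoidal tracking movement with a
# hypothetical 100 ms visuomotor delay (values chosen for illustration).
dt = 0.001
t = np.arange(0.0, 2.0, dt)
delay = 0.1                        # assumed delay estimate (s)
x = np.sin(2 * np.pi * 0.5 * t)    # intended hand trajectory
v = np.gradient(x, dt)             # its velocity

# Delayed-state representation: an explicit time-shift of the state,
# i.e., predicting the delayed feedback as x(t - delay).
shift = int(delay / dt)
delayed_state = np.roll(x, shift)

# Approximation for delay: an instantaneous, state-based mapping,
# here a first-order Taylor expansion x(t - d) ~ x(t) - d * v(t),
# which uses only the current position and velocity, never time itself.
state_approx = x - delay * v

# For slow movements the two predictions nearly coincide, so the
# approximation can stand in for an explicit delayed state.
err = np.max(np.abs(delayed_state[shift:] - state_approx[shift:]))
```

For this slow (0.5 Hz) movement the maximum discrepancy between the two predictions is a few percent of the movement amplitude; the sketch only shows why behavior alone must be probed carefully, as in the tracking task above, to tell the representations apart.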