Scientific article
OA Policy
English

Prediction across sensory modalities: A neurocomputational model of the McGurk effect

Published in Cortex, vol. 68, p. 61-75
Publication date: 2015
Abstract

The McGurk effect is a textbook illustration of the automaticity with which the human brain integrates audio-visual speech. It shows that even incongruent audiovisual (AV) speech stimuli can be combined into percepts that correspond neither to the auditory nor to the visual input, but to a mix of both. Typically, when presented with, e.g., visual /aga/ and acoustic /aba/, we perceive an illusory /ada/. In the inverse situation, however, when acoustic /aga/ is paired with visual /aba/, we perceive a combination of both stimuli, i.e., /abga/ or /agba/. Here we assessed the role of dynamic cross-modal predictions in the outcome of AV speech integration using a computational model that processes continuous audiovisual speech sensory inputs in a predictive coding framework. The model involves three processing levels: sensory units, units that encode the dynamics of stimuli, and multimodal recognition/identity units. The model exhibits a dynamic prediction behavior because evidence about speech tokens can be asynchronous across sensory modalities, allowing the activity of the recognition units to be updated from one modality while top-down predictions are sent to the other modality. We explored the model's response to congruent and incongruent AV stimuli and found that, in the two-dimensional feature space spanned by the speech second formant and lip aperture, fusion stimuli are located in the neighborhood of congruent /ada/, which therefore provides a valid match. Conversely, stimuli that lead to combination percepts do not have a unique valid neighbor. In that case, acoustic and visual cues are both highly salient and generate conflicting predictions in the other modality that cannot be fused, forcing the elaboration of a combinatorial solution. We propose that dynamic predictive mechanisms play a decisive role in the dichotomous perception of incongruent audiovisual inputs.
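To make the idea concrete, the sketch below is a minimal, hypothetical illustration of cross-modal predictive matching in the two-dimensional feature space named in the abstract (speech second formant and lip aperture). It is not the authors' published model: the prototype trajectories, the Gaussian consonant "gate", the precision values and the evidence-accumulation rule are all illustrative assumptions, and the asynchrony mechanism is ignored. It only shows how precision-weighted prediction errors in both modalities can make congruent /ada/ the best joint explanation of acoustic /aba/ paired with visual /aga/.

```python
# Hypothetical sketch (not the authors' implementation) of cross-modal predictive
# matching in a 2-D feature space: speech second formant (F2) and lip aperture.
import numpy as np

T = 50
t = np.linspace(0.0, 1.0, T)
gate = np.exp(-((t - 0.5) / 0.15) ** 2)        # consonant influence peaks mid-utterance

def trajectory(f2_target, lip_target):
    """Toy prototype trajectory: column 0 = F2 (Hz), column 1 = lip aperture (0-1)."""
    f2 = 1500.0 + (f2_target - 1500.0) * gate   # vowel /a/ baseline, consonant excursion
    lip = 1.0 + (lip_target - 1.0) * gate       # open for /a/, narrows for the consonant
    return np.stack([f2, lip], axis=1)

prototypes = {
    "aba": trajectory(1100.0, 0.0),   # bilabial: low F2 transition, full lip closure
    "ada": trajectory(1700.0, 0.6),   # alveolar: mid F2, lips stay fairly open
    "aga": trajectory(2300.0, 0.8),   # velar: high F2, lips open
}
tokens = list(prototypes)

def recognise(audio, visual, lr=0.05, sigma_f2=600.0, sigma_lip=0.15):
    """Accumulate evidence for each identity unit from per-modality prediction errors.

    At every time step each candidate token issues top-down predictions for the
    auditory channel (F2) and the visual channel (lip aperture); precision-weighted
    prediction errors then update the (log) activity of the recognition units.
    """
    log_evidence = np.zeros(len(tokens))
    for step in range(T):
        for i, k in enumerate(tokens):
            err_a = ((audio[step, 0] - prototypes[k][step, 0]) / sigma_f2) ** 2
            err_v = ((visual[step, 1] - prototypes[k][step, 1]) / sigma_lip) ** 2
            log_evidence[i] -= lr * (err_a + err_v)
    act = np.exp(log_evidence - log_evidence.max())
    return dict(zip(tokens, act / act.sum()))

# McGurk fusion case: acoustic /aba/ paired with visual /aga/.
# With these toy settings the best joint explanation is /ada/, the classic fusion percept.
print(recognise(audio=prototypes["aba"], visual=prototypes["aga"]))
```

With only three candidate tokens, the sketch cannot produce combination percepts such as /abga/; it merely illustrates the abstract's point that fusion stimuli have a single valid neighbor in this feature space, whereas combination stimuli would not.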

Citation (ISO format)
OLASAGASTI RODRIGUEZ, Miren Itsaso, BOUTON, Sophie, GIRAUD MAMESSIER, Anne-Lise. Prediction across sensory modalities: A neurocomputational model of the McGurk effect. In: Cortex, 2015, vol. 68, p. 61–75. doi: 10.1016/j.cortex.2015.04.008
Main files (1)
Article (Published version)
Access level: Public
Identifiers
Journal ISSN: 0010-9452
594 views
951 downloads

Technical information

Creation: 09/01/2015 12:41:00 PM
First validation: 09/01/2015 12:41:00 PM
Update time: 03/14/2023 11:41:15 PM
Status update: 03/14/2023 11:41:15 PM
Last indexation: 10/31/2024 1:29:39 AM
All rights reserved by Archive ouverte UNIGE and the University of Geneva