Scientific article
Open access
English

Integrating prediction errors at two time scales permits rapid recalibration of speech sound categories

Published in eLife, vol. 9, e44516
Publication date 2020
Abstract

Speech perception presumably arises from internal models of how specific sensory features are associated with speech sounds. These features change constantly (e.g. different speakers, articulation modes, etc.), and listeners need to recalibrate their internal models by appropriately weighting new against old evidence. Models of speech recalibration classically ignore this volatility. The effects of volatility in tasks where sensory cues were associated with arbitrary, experimenter-defined categories were well described by models that continuously adapt the learning rate while keeping a single representation of the category. Using neurocomputational modelling, we show that recalibration of natural speech sound categories is better described by representing those categories at different time scales. We illustrate our proposal by modelling the fast recalibration of speech sounds that follows exposure to the McGurk effect. We propose that working representations of speech categories are driven both by the current environment and by long-term memory representations.
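The abstract's core computational claim is that rapid recalibration follows from integrating prediction errors at a fast and a slow time scale, with the working category representation pulled both by current input and by long-term memory. A minimal delta-rule sketch of that idea follows; it is an illustrative toy, not the authors' published model, and the learning rates ETA_FAST and ETA_SLOW, the memory-pull term LAMBDA, and the function recalibrate are hypothetical names and values chosen for the example.

import numpy as np

# Toy two-time-scale delta-rule sketch (illustrative, not the paper's model):
# a fast "working" category representation tracks recent evidence, while a slow
# long-term representation anchors it, so the category can recalibrate quickly
# (e.g. after McGurk-style exposure) yet stay tied to its stable memory value.

ETA_FAST = 0.5    # learning rate of the working representation (assumed value)
ETA_SLOW = 0.01   # learning rate of the long-term representation (assumed value)
LAMBDA = 0.1      # pull of the working representation toward long-term memory

def recalibrate(mu_fast, mu_slow, observations):
    """Update fast and slow category means from a stream of 1-D sensory cues."""
    for x in observations:
        err_fast = x - mu_fast            # prediction error at the fast time scale
        err_slow = x - mu_slow            # prediction error at the slow time scale
        mu_fast += ETA_FAST * err_fast + LAMBDA * (mu_slow - mu_fast)
        mu_slow += ETA_SLOW * err_slow
    return mu_fast, mu_slow

# Example: a category centered at 0 exposed to shifted cues (e.g. an odd speaker).
rng = np.random.default_rng(0)
shifted_cues = rng.normal(loc=1.0, scale=0.2, size=20)
print(recalibrate(0.0, 0.0, shifted_cues))

With these assumed values, the fast mean moves most of the way toward the shifted cues within a few trials while the slow mean moves far less, which is the qualitative signature of rapid recalibration on top of a stable long-term category.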

Citation (ISO format)
OLASAGASTI RODRIGUEZ, Miren Itsaso, GIRAUD MAMESSIER, Anne-Lise. Integrating prediction errors at two time scales permits rapid recalibration of speech sound categories. In: eLife, 2020, vol. 9, p. e44516. doi: 10.7554/eLife.44516
Main files (1)
Article (Published version)
Identifiers
ISSN of the journal 2050-084X
185 views
77 downloads

Technical information

Creation 10/13/2020 5:03:00 PM
First validation 10/13/2020 5:03:00 PM
Update time 03/15/2023 10:50:03 PM
Status update 03/15/2023 10:50:03 PM
Last indexation 02/12/2024 11:56:32 AM
All rights reserved by Archive ouverte UNIGE and the University of Geneva