This fMRI study investigated the effect of seeing a speaker's articulatory movements while listening to a naturalistic narrative stimulus. Our goal was to identify regions of the language network showing multisensory enhancement under synchronous audiovisual conditions. We expected this enhancement to emerge in regions known to underlie the integration of auditory and visual information, such as the posterior superior temporal gyrus, as well as in parts of the broader language network, including the semantic system. To this end, we presented 53 participants with a continuous narration of a story in auditory-alone, visual-alone, and both synchronous and asynchronous audiovisual speech conditions while recording brain activity using BOLD fMRI. We found multisensory enhancement in an extensive network of regions underlying multisensory integration and in parts of the semantic network, as well as in extralinguistic regions not usually associated with multisensory integration, namely the primary visual cortex and the bilateral amygdala. Analysis also revealed involvement of thalamic regions along the visual and auditory pathways more commonly associated with early sensory processing. We conclude that under natural listening conditions, multisensory enhancement involves not only sites of multisensory integration but also many regions of the wider semantic network, including regions associated with extralinguistic sensory, perceptual, and cognitive processing.



Eunice Kennedy Shriver National Institute of Child Health and Human Development (P50 HD103536 – to JJF); Albert Einstein College of Medicine; Rose F. Kennedy Intellectual and Developmental Disabilities Research Center (RFK-IDDRC), Eunice Kennedy Shriver National Institute of Child Health and Human Development (U54 HD090260 – to SM).

Creative Commons License

This work is licensed under a Creative Commons Attribution 4.0 International License.