The supramodal brain: implications for auditory perception

LD Rosenblum, JW Dias, J Dorsi - Journal of Cognitive Psychology, 2017 - Taylor & Francis
The perceptual brain is designed around multisensory input. Areas once thought dedicated
to a single sense are now known to work with multiple senses. It has been argued that the …

Acoustic–phonetic and auditory mechanisms of adaptation in the perception of sibilant fricatives

E Chodroff, C Wilson - Attention, Perception, & Psychophysics, 2020 - Springer
Listeners are highly proficient at adapting to contextual variation when perceiving speech. In
the present study, we examined the effects of brief speech and nonspeech contexts on the …

Selective adaptation in speech: Measuring the effects of visual and lexical contexts

J Dorsi, LD Rosenblum, AG Samuel… - Journal of Experimental …, 2021 - psycnet.apa.org
Speech selective adaptation is a phenomenon in which repeated presentation of a speech
stimulus alters subsequent phonetic categorization. Prior work has reported that lexical, but …

Synthetic faces generated with the facial action coding system or deep neural networks improve speech-in-noise perception, but not as much as real faces

Y Yu, A Lado, Y Zhang, JF Magnotti… - Frontiers in …, 2024 - frontiersin.org
The prevalence of synthetic talking faces in both commercial and academic environments is
increasing as the technology to generate them grows more powerful and available. While it …

Repetitive exposure to orofacial somatosensory inputs in speech perceptual training modulates vowel categorization in speech perception

T Ito, R Ogane - Frontiers in Psychology, 2022 - frontiersin.org
Orofacial somatosensory inputs may play a role in the link between speech perception and
production. Given the fact that speech motor learning, which involves paired auditory and …

Repeatedly experiencing the McGurk effect induces long-lasting changes in auditory speech perception

JF Magnotti, A Lado, Y Zhang, A Maasø… - Communications …, 2024 - nature.com
In the McGurk effect, presentation of incongruent auditory and visual speech evokes a fusion
percept different than either component modality. We show that repeatedly experiencing the …

The Effect on Speech-in-Noise Perception of Real Faces and Synthetic Faces Generated with either Deep Neural Networks or the Facial Action Coding …

Y Yu, A Lado, Y Zhang, JF Magnotti, MS Beauchamp - bioRxiv, 2024 - ncbi.nlm.nih.gov
The prevalence of synthetic talking faces in both commercial and academic environments is
increasing as the technology to generate them grows more powerful and available. While it …

The impact and status of Carol Fowler's supramodal theory of multisensory speech perception

LD Rosenblum, J Dorsi, JW Dias - Ecological Psychology, 2016 - Taylor & Francis
One important contribution of Carol Fowler's direct approach to speech perception is its
account of multisensory perception. This supramodal account proposes a speech function …

Tolerance for audiovisual asynchrony is enhanced by the spectrotemporal fidelity of the speaker's mouth movements and speech

AJ Shahin, S Shen, JR Kerlin - Language, Cognition and …, 2017 - Taylor & Francis
We examined the relationship between tolerance for audiovisual onset asynchrony (AVOA)
and the spectrotemporal fidelity of the spoken words and the speaker's mouth movements. In …

Attentional resources contribute to the perceptual learning of talker idiosyncrasies in audiovisual speech

A Jesse, E Kaplan - Attention, Perception, & Psychophysics, 2019 - Springer
To recognize audiovisual speech, listeners evaluate and combine information obtained from
the auditory and visual modalities. Listeners also use information from one modality to adjust …