P41, Session 1 (Thursday 11 January 2024, 15:35-18:00)
Perceptual learning of dysarthric speech requires phonological processing: A dual-task study
Rationale: Listeners can adapt to speech signals that are initially difficult to understand, including artificially degraded signals such as noise-vocoded speech. We recently demonstrated (Wang et al., 2023) that listeners can perceptually adapt to noise-vocoded speech under divided attention, using a dual-task design. Here, we evaluated the role of divided attention in perceptual learning of naturally (neurologically) degraded speech, i.e., dysarthric speech (Borrie & Lansford, 2021). We conducted an online between-subject experiment with four groups (N = 192) and examined whether perceptual learning of dysarthric speech relies on attention, to establish whether perceptual adaptation to degraded speech qualifies as an automatic cognitive process.
Methods: Participants completed a speech recognition task in which they repeated forty sentences spoken by a male dysarthric speaker. In a between-group design, participants either completed this speech-only task on its own or performed it concurrently with a dual task designed to recruit domain-specific (lexical or phonological) or domain-general (visuomotor) processes. If perceptual learning of distorted speech qualifies as a largely automatic process, we expected no difference in the rate or shape of adaptation across the four groups. However, if perceptual learning of speech requires domain-specific processes that match the type of variation present in the speech signal, we expected a lower rate of adaptation for the phonological group.
Results: We observed perceptual learning in all groups except the phonological group. Speech recognition improvement in the speech-only, lexical, and visuomotor groups was around 10-11%, while the improvement in the phonological group (5%) was not significant.
Conclusions: Perceptual learning of dysarthric speech can occur under divided attention, as long as the dual task does not require phonological processing. Perceptual learning of speech is thus a largely automatic process, but concurrent engagement of domain-specific phonological processes disrupts learning.
References:
- Borrie, S. A., & Lansford, K. L. (2021). A perceptual learning approach for dysarthria remediation: An updated review. Journal of Speech, Language, and Hearing Research, 64(8), 3060-3073, doi:10.1044/2021_JSLHR-21-00012.
- Wang, H., Chen, R., Yan, Y., McGettigan, C., Rosen, S., & Adank, P. (2023). Perceptual Learning of Noise-Vocoded Speech Under Divided Attention. Trends in Hearing, 27, doi:10.1177/23312165231192297.