The present study used a cross-modal priming paradigm to investigate dimensional information processing in speech. Primes were facial expressions that varied along two dimensions: affect (happy, neutral, or angry) and mouth shape (corresponding to the vowel /a/ or /i/). Targets were CVC words that varied in prosody and vowel identity. In both the phonetic and prosodic conditions, adult participants judged whether the visual and auditory stimuli were congruent or incongruent. Behavioral results showed a congruency effect in both accuracy (percent correct) and reaction time. Two ERP responses, the N400 and a late positive response, showed this congruency effect, with systematic differences between conditions. Source localization and time-frequency analyses indicated that distinct cortical networks support selective processing of the phonetic and emotional information in the words. Overall, the results suggest that cortical processing of phonetic and emotional information engages distinct neural systems, a finding with important implications for further investigation of language processing deficits in clinical populations.
University of Minnesota M.A. thesis. May 2015. Major: Speech-Language Pathology. Advisor: Yang Zhang. 1 computer file (PDF); vii, 62 pages.
Cortical Processing of Phonetic and Emotional Information in Speech: A Cross-Modal Priming Study.
Retrieved from the University of Minnesota Digital Conservancy.