Browsing by Subject "Speech perception"
Now showing 1 - 6 of 6
Item: Auditory stream segregation using cochlear implant simulations. (2010-06) Nie, Yingjiu

This project studies auditory stream segregation as an underlying factor in the poor speech perception skills of cochlear implant (CI) users by testing normal-hearing adults who listened to CI-simulated sounds. Segregation ability was evaluated from behavioral responses to stimulus sequences consisting of two interleaved sets of noise bursts (A and B bursts). The two sets differed in the physical attributes of the noise bursts, including spectrum, amplitude modulation (AM) rate, or both, and the size of the difference between the two sets was varied. Speech perception in noise was measured as the AM rate of the noise varied and at different spectral separations between noise and speech. The statistical correlation between speech understanding and segregation ability was then examined. Results show the following:
1. Stream segregation ability increased with greater spectral separation; no segregation was seen when A and B bursts had the same spectrum or when their spectra overlapped most.
2. Larger AM-rate separations were generally associated with stronger segregation abilities.
3. When A and B bursts differed in both spectrum and AM rate, larger AM-rate separations were associated with stronger stream segregation only in the condition in which the A and B bursts overlapped most in spectrum.
4. Speech perception in noise decreased as the spectral overlap between speech and noise increased.
5. Speech perception did not differ as the AM rate of the noise varied.
6. Speech perception in both steady-state and modulated noise was correlated with stream segregation ability based on both spectral separation and AM-rate separation.
The findings suggest that spectral separation is a primary, stronger cue for CI listeners performing stream segregation, and that AM-rate separation could be a secondary, weaker cue that facilitates segregation. The spectral proximity of noise and speech has a strong effect on CI-simulation listeners' speech perception in noise. Although neither the presence of noise modulation nor the modulation rate affected CI-simulation listeners' speech understanding, the ability to use the AM-rate cue for segregation was correlated with their speech understanding. The results suggest that CI users could segregate different auditory streams if the spectral and modulation-rate differences are large enough, and that their ability to use these cues for stream segregation may be a predictor of their speech perception in noise.
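As an illustration of the kind of stimuli this abstract describes, the sketch below generates an interleaved A-B sequence of band-limited noise bursts in which the two burst types differ in spectral region and AM rate. All parameter values (burst duration, band edges, AM rates, gap length) are hypothetical choices for illustration only and are not taken from the dissertation.

```python
# Hypothetical sketch: interleaved A/B noise-burst sequence (ABAB...) in which
# the two burst types differ in spectral band and amplitude-modulation (AM) rate.
# Parameter values are illustrative only, not those used in the study.
import numpy as np
from scipy.signal import butter, sosfilt

FS = 22050          # sampling rate (Hz)
BURST_DUR = 0.1     # burst duration (s)
GAP_DUR = 0.02      # silent gap between bursts (s)

def noise_burst(band_hz, am_rate_hz, dur=BURST_DUR, fs=FS):
    """Band-limited Gaussian noise burst with sinusoidal amplitude modulation."""
    n = int(dur * fs)
    t = np.arange(n) / fs
    noise = np.random.randn(n)
    sos = butter(4, band_hz, btype="bandpass", fs=fs, output="sos")
    carrier = sosfilt(sos, noise)
    envelope = 0.5 * (1 + np.sin(2 * np.pi * am_rate_hz * t))  # 100% AM depth
    return carrier * envelope

def abab_sequence(n_pairs=6):
    """Interleave A and B bursts that differ in spectral band and AM rate."""
    gap = np.zeros(int(GAP_DUR * FS))
    a = lambda: noise_burst(band_hz=(500, 1500), am_rate_hz=40)    # "A" bursts
    b = lambda: noise_burst(band_hz=(2000, 4000), am_rate_hz=200)  # "B" bursts
    parts = []
    for _ in range(n_pairs):
        parts += [a(), gap, b(), gap]
    return np.concatenate(parts)

sequence = abab_sequence()
```

Shrinking the spectral or AM-rate difference between the A and B definitions above is the kind of manipulation the study used to probe how much separation listeners need before the two sets split into separate streams.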
Item: An Electrophysiological Investigation of Linguistic Pitch Processing in Tonal-language-speaking Children with Autism. (2018-09) Yu, Luodi

Speech perception is a fundamental skill linking sound to meaning; however, a systematic characterization of autism with respect to speech perception is still lacking, presumably because the language-specific nature of speech processing has received insufficient consideration. Although nearly 70% of the world's languages are tonal, tonal-language users have been significantly under-represented in autism research. An overview of the limited literature reveals a trend toward distinct patterns across users of different languages (i.e., tonal vs. non-tonal), indicating potentially disrupted neural specialization for linguistic structures in individuals with autism.

This dissertation examined the rapid cortical processing of pitch patterns varying in linguistic status in native Chinese school-age children with autism and age-matched typically developing (TD) peers using electroencephalography (EEG). The auditory stimuli were nonsense speech and nonspeech sounds presented in passive listening conditions. Compared with the TD group, the autism group displayed atypical neural timing at various levels of information processing, as indexed by neural response latency. Moreover, the autism group displayed not only hyposensitivity to the native vs. nonnative (or prototypical vs. non-prototypical) contrast in the early information processing stage but also hypersensitivity in the later processing stage, accompanied by a diffuse scalp distribution with rightward dominance. The results collectively support the idea of disrupted neural specialization for linguistic structures in autism. The findings underscore the proposition that autism involves auditory and phonological atypicalities in addition to its characteristic social and communication deficits, which has important implications for incorporating language-specific considerations into autism research and clinical practice.

Item: Muscle tension dysphonia as a disorder of motor learning. (2013-04) Urberg-Carlson, Kari Elizabeth

Background: Adaptive learning has been demonstrated in many areas of motor learning. In speech, adaptive responses to auditory perturbation of fundamental frequency, formant frequencies, and the centroid frequencies of fricatives have been demonstrated. This dissertation presents the hypothesis that the motor changes observed in muscle tension dysphonia may be due to adaptive learning. To begin to test this hypothesis, an experiment was designed to look for evidence of an adaptive learning response to imposed auditory perturbation of voice quality.

Methods: Sixteen participants repeated the syllable /ha/ while listening to noise under a number of experimental conditions. The training condition presented a re-synthesized recording of the participants' own voices with an artificially increased noise-to-harmonic ratio intended to simulate breathiness. A control condition presented speech babble at the same intensity. Catch trials in which the noise was turned off were included to test for evidence of motor learning, and trials in which the participants repeated /he/ were included to test for generalization to untrained stimuli. H1-H2, a measure of spectral slant, was the dependent measure. A second experiment compared participants' performance on a task of auditory perception of breathiness with their response to the auditory perturbation.

Results: Twelve of 16 participants showed statistically different values of H1-H2 between the training and control conditions. Because none of the group differences between conditions were significant, the experiment was not able to demonstrate adaptive learning. There was no relationship between performance on the auditory perception task and performance on the adaptive learning task.

Conclusions: Given the large body of evidence supporting adaptive learning in many domains of motor behavior, it is unlikely that the behaviors controlling voice quality are not subject to adaptive learning. Limitations of the experiment are discussed.
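For readers unfamiliar with the dependent measure, H1-H2 is the amplitude difference (in dB) between the first and second harmonics of the voice spectrum. The sketch below is a minimal, hypothetical illustration of how such a measure might be estimated from a recorded vowel when the fundamental frequency is already known; it is not the analysis pipeline used in the dissertation.

```python
# Minimal sketch: estimate H1-H2 (dB difference between the first and second
# harmonic amplitudes) from a vowel segment, assuming the fundamental frequency
# f0 is already known. Illustrative only; not the dissertation's procedure.
import numpy as np

def h1_h2(signal, fs, f0):
    """Return H1-H2 in dB for a vowel segment `signal` sampled at `fs` Hz."""
    windowed = signal * np.hanning(len(signal))
    spectrum = np.abs(np.fft.rfft(windowed))
    freqs = np.fft.rfftfreq(len(windowed), d=1.0 / fs)

    def harmonic_amplitude(h):
        # Take the spectral peak within +/- 10% of the nominal harmonic frequency.
        target = h * f0
        band = (freqs > 0.9 * target) & (freqs < 1.1 * target)
        return spectrum[band].max()

    return 20 * np.log10(harmonic_amplitude(1) / harmonic_amplitude(2))

# Example with a synthetic "vowel": two harmonics of a 200 Hz fundamental.
fs, f0 = 16000, 200.0
t = np.arange(int(0.2 * fs)) / fs
vowel = 1.0 * np.sin(2 * np.pi * f0 * t) + 0.5 * np.sin(2 * np.pi * 2 * f0 * t)
print(round(h1_h2(vowel, fs, f0), 1))  # about 6 dB (2:1 amplitude ratio)
```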
Item: Self-adjustment of Hearing Aid Amplification: Listener Preferences and Speech Recognition Performance. (2019-07) Perry, Trevor

Self-adjustment of amplification parameters is a potential method for improving satisfaction with hearing aids, particularly in noisy environments. People with mild-to-moderate hearing loss adjusted gain parameters in quiet and in several types of noise using a simple touchscreen interface that controlled a research device emulating the basic functionality of a digital hearing aid in real time. Self-adjustment results indicated reliable individual preferences but a great deal of between-listener variability, showing that people have stable preferences for amplification and are able to select preferred parameters consistently. The large individual differences suggest that preferred gain configurations can differ greatly from prescriptive settings in both quiet and noise, and underscore the need for an efficient method of customizing amplification parameters beyond prescribed settings. Audiological listener factors such as age, hearing loss, and experience using hearing aids predicted little of the between-listener variability. It is therefore unlikely that modifications to prescriptive fitting formulae based on the factors examined here would result in amplification parameters similar to user-customized settings. Most self-adjustments were completed in only a minute or two, demonstrating that self-adjustment is a rapid and efficient method for matching hearing aid output to preferred settings. When self-adjustments were made with speech presented at average conversational levels, gain adjustments did not strongly affect speech recognition within the range of signal-to-noise ratios tested. For speech at a lower presentation level, preferences for amplification were related to speech recognition performance, suggesting that listeners include their subjective sense of speech clarity among their criteria for selecting amplification parameters during self-adjustment. Self-adjusted amplification was overwhelmingly rated as satisfactory or very satisfactory and as producing a comfortable loudness. Taken together, the results of these experiments support the conclusion that, for people with mild-to-moderate hearing loss, self-adjustment is likely to produce satisfactory and comfortable amplification that provides speech recognition comparable to that of hearing aids fit according to current clinical best practices.
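The abstract describes listeners adjusting gain in real time on a device that emulates a basic digital hearing aid. The sketch below shows one simple way such a device could apply user-selected per-band gains (in dB) to an input signal; the band edges, gain values, and filter choices are hypothetical and are not taken from the study.

```python
# Hypothetical sketch of a minimal "self-adjustable" amplifier: the listener
# picks a gain (in dB) for each frequency band and the device applies those
# gains to the incoming signal. Band edges and gains are illustrative only.
import numpy as np
from scipy.signal import butter, sosfilt

FS = 16000
BANDS_HZ = [(125, 750), (750, 2000), (2000, 6000)]  # low / mid / high (assumed)

def apply_band_gains(signal, gains_db, fs=FS, bands=BANDS_HZ):
    """Split `signal` into bands, apply per-band gain in dB, and recombine."""
    out = np.zeros_like(signal, dtype=float)
    for (lo, hi), gain_db in zip(bands, gains_db):
        sos = butter(4, (lo, hi), btype="bandpass", fs=fs, output="sos")
        out += sosfilt(sos, signal) * 10 ** (gain_db / 20)
    return out

# Example: one listener's hypothetical touchscreen settings.
user_gains_db = [6.0, 12.0, 18.0]            # low, mid, high bands
x = np.random.randn(FS)                      # stand-in for one second of input audio
y = apply_band_gains(x, user_gains_db)
```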
Item: Socially stratified phonetic variation and perceived identity in Puerto Rican Spanish. (2009-08) Mack, Sara Lynn

This dissertation examines the interaction between phonetic variation and perceptions of speaker identity in Puerto Rican Spanish. Using an interdisciplinary approach, three experiments were designed and carried out: (1) a descriptive study of stereotypes about sexual orientation and male speech, (2) an observational study examining the relationship between acoustic parameters and perceived sexual orientation, perceived height, perceived social class, and perceived age, and (3) an implicit-processing experiment examining the influence of social stereotypes on memory for voices. The study was carried out in the San Juan, Puerto Rico, metropolitan area and included ninety-six participants.

Results of the first experiment indicate considerable uniformity in notions of speech variation associated with the gay male speech stereotype among the participants, and that the most frequently cited stereotypical markers of sexual orientation are related to stereotypical notions of gender. However, a majority of respondents explicitly stated that although they recognize that a stereotype exists, they do not believe there is necessarily a correspondence between stereotypes of gay men's speech and real-life production. Results of the second experiment show that listeners do evaluate speakers' voices differently in terms of perceived sexual orientation, and that perceptions of sexual orientation are most strongly predicted by one acoustic measure of vowel quality: the second resonant frequency of the vowel /e/, which relates to tongue position in the anterior-posterior dimension. An examination of the relationship between perceptions of sexual orientation and perceptions of height, age, and social class revealed that perceived height was correlated with perceived sexual orientation. The third experiment showed that listeners responded more quickly to speakers previously rated as more gay-sounding than to speakers rated as more straight-sounding, and that the slowest mean responses were for the deleted variant of /s/. Most significantly, a d-prime analysis showed the strongest signal detection for the sibilant variant ([s]) when produced by speakers rated as more stereotypically gay-sounding. The results suggest a relationship between /s/ variation and listener perceptions of sexual orientation, as well as a possible effect of perceived sexual orientation on speech processing. Taken together, these results underscore the need for methods that measure both conscious and subconscious effects of stereotypes in speech production and perception.
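The d-prime analysis mentioned above is a standard signal detection measure: d' is the difference between the z-transformed hit rate and the z-transformed false-alarm rate, so higher values indicate better discrimination of previously heard voices from new ones. A minimal illustration follows, using made-up counts rather than the study's data.

```python
# Minimal sketch of a d-prime (d') calculation from signal detection theory:
# d' = z(hit rate) - z(false-alarm rate). The counts below are made up for
# illustration and are not data from the study.
from scipy.stats import norm

def d_prime(hits, misses, false_alarms, correct_rejections):
    """Compute d' with a simple correction to avoid rates of exactly 0 or 1."""
    n_signal = hits + misses
    n_noise = false_alarms + correct_rejections
    hit_rate = (hits + 0.5) / (n_signal + 1)      # log-linear correction
    fa_rate = (false_alarms + 0.5) / (n_noise + 1)
    return norm.ppf(hit_rate) - norm.ppf(fa_rate)

# Example: recognition memory for previously heard voices (hypothetical counts).
print(round(d_prime(hits=40, misses=10, false_alarms=12, correct_rejections=38), 2))
```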
Item: Spectral resolution and speech recognition in noise by cochlear implant users. (2011-07) Anderson, Elizabeth Susan

For cochlear implant (CI) users, the relationship between spectral resolution and speech perception in noise has remained unclear. An even more fundamental question has been how to measure spectral resolution in CI listeners. This dissertation describes work exploring the relationships among different measures of spectral resolution, and between each of those measures and speech recognition in quiet and in noise. Spectral ripple discrimination was found to correlate strongly with spatial tuning curves when the measures were matched in frequency region. Broadband spectral ripple discrimination correlated well with sentence recognition in quiet, but not in background noise. Spectral ripple detection correlated strongly with speech recognition in quiet, but its validity as a measure of spectral resolution was not empirically supported. Spectral ripple discrimination thresholds were also compared with sentence recognition in noise, using spectrally limited maskers that did not overlap with the entire speech spectrum. Speech reception thresholds were measured in the presence of four low- or high-frequency maskers, all bandpass-filtered from speech-shaped noise, and a broadband masker encompassing most of the speech spectrum. The findings revealed substantial between-subject variability both in susceptibility to masking by each of these noises and in spectral release from masking; this variability cannot be explained simply in terms of energetic masking and does not appear to be strongly related to spectral resolution. Better-performing CI users appeared to show stronger relationships between spectral resolution and speech perception than did poorer performers, implying that advanced CI processing strategies designed to maximize the number of spectral channels may not benefit all CI users equally.
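Two of the measures above can be stated concretely: a speech reception threshold (SRT) is commonly estimated as the signal-to-noise ratio yielding 50% correct sentence recognition, and spectral release from masking can be expressed as the improvement in SRT for a spectrally limited masker relative to a broadband masker. The sketch below illustrates both definitions with hypothetical numbers; it is not the dissertation's procedure or data.

```python
# Hypothetical sketch: estimate a speech reception threshold (SRT) as the
# signal-to-noise ratio (SNR) giving 50% correct via linear interpolation over
# a measured psychometric function, then express spectral release from masking
# as the SRT difference between a broadband and a band-limited masker.
# All numbers are made up for illustration.
import numpy as np

def srt_from_psychometric(snrs_db, percent_correct, target=50.0):
    """Interpolate the SNR at `target` percent correct (assumes monotonic data)."""
    return float(np.interp(target, percent_correct, snrs_db))

snrs_db = [-15, -10, -5, 0, 5]

# Percent correct at each SNR for two masker types (hypothetical listener data).
broadband_pc = [5, 20, 45, 75, 95]
band_limited_pc = [15, 40, 70, 90, 98]

srt_broadband = srt_from_psychometric(snrs_db, broadband_pc)
srt_band_limited = srt_from_psychometric(snrs_db, band_limited_pc)

# Positive values indicate better (lower) SRTs with the band-limited masker,
# i.e. spectral release from masking in dB.
spectral_release_db = srt_broadband - srt_band_limited
print(round(srt_broadband, 1), round(srt_band_limited, 1), round(spectral_release_db, 1))
```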