Browsing by Subject "Speech-Language Pathology"
Now showing 1 - 5 of 5
Item Auditory stream segregation using cochlear implant simulations. (2010-06) Nie, Yingjiu

This project studies auditory stream segregation as an underlying factor in the poor speech perception of cochlear implant (CI) users by testing normal-hearing adults listening to CI-simulated sounds. Segregation ability was evaluated from behavioral responses to stimulus sequences consisting of two interleaved sets of noise bursts (A and B bursts). The two sets differed in the physical attributes of the bursts: spectrum, amplitude modulation (AM) rate, or both; the size of the difference between the two sets was varied. Speech perception in noise was measured as the AM rate of the noise varied and at different spectral separations between noise and speech, and speech understanding and segregation ability were tested for statistical correlation. Results showed the following:

1. Stream segregation ability increased with greater spectral separation; no segregation was seen when A and B bursts had the same spectrum or the most overlapping spectra.
2. Larger AM-rate separations were generally associated with stronger segregation.
3. When A and B bursts differed in both spectrum and AM rate, larger AM-rate separations were associated with stronger segregation only in the condition where A and B bursts were most overlapping in spectrum.
4. Speech perception in noise decreased as the spectral overlap between speech and noise increased.
5. Speech perception did not differ, however, as the AM rate of the noise varied.
6. Speech perception in both steady-state and modulated noise was correlated with stream segregation ability based on both spectral separation and AM-rate separation.

These findings suggest that spectral separation is the primary, stronger cue for stream segregation in CI listeners, and that AM-rate separation may be a secondary, weaker cue that facilitates segregation. The spectral proximity of noise and speech strongly affects CI-simulation listeners' speech perception in noise. Although neither the presence of noise modulation nor the modulation rate affected speech understanding, the ability to use the AM-rate cue for segregation was correlated with speech understanding. The results suggest that CI users could segregate different auditory streams if the spectral and modulation-rate differences are large enough, and that their ability to use these cues for stream segregation may be a predictor of their speech perception in noise.

Item Effects of noise on fast mapping and word learning scores in preschool children with and without hearing loss. (2010-01) Blaiser, Kristina M.

This study examines the fast-mapping and word-learning abilities of three- to five-year-old children with and without hearing loss, in quiet and noise conditions. Nineteen children with hearing loss (HL) and 17 normal-hearing peers (NH) participated. Children were introduced to eight novel words in each condition. Children's ability to "fast map" (i.e., comprehend or produce new words after minimal exposure) was measured in the first session (Time 1). "Word learning" (the comprehension or production of previously unfamiliar words following additional exposures) was measured after three individual training sessions (Time 2). Children in the HL group performed similarly to NH peers on fast-mapping and word-learning measures in quiet. In noise, the HL group performed significantly more poorly than the NH group at the fast-mapping time point; at Time 2, however, there were no significant between-group differences in the noise condition. A series of correlation and regression analyses investigated variables associated with fast mapping and novel word learning in quiet and noise conditions.
Age was significantly correlated with fast-mapping and word-learning performance in quiet and in noise for the NH group, but not for the HL group. Age at hearing-aid fitting was the only traditional hearing-loss factor correlated with fast-mapping performance in noise for the HL group. Age was a significant predictor of fast-mapping performance in noise for the NH group, but not for the HL group. Word learning in quiet was a significant predictor of word learning in noise for the NH group; fast mapping in noise was a significant predictor for the HL group. In addition, performance in quiet significantly predicted fast-mapping and word-learning scores in noise for the NH group, whereas there was no significant correlation between performance in quiet and in noise for the HL group.

Item Nasal airflow and oral pressure during speech in Spanish speakers. (2010-06) Holzwart, Stephanie

Perceptual and acoustic measures have indicated that the velopharyngeal mechanism may not close completely during oral speech sounds in native speakers of Spanish (SP); however, there is no direct evidence that this is the case. This lack of evidence makes it difficult for clinicians to differentiate a spoken-language difference from a disorder, velopharyngeal inadequacy (VPI). Using aeromechanical measurements, this study determined whether the velopharyngeal (VP) mechanism was closed during oral-only speech production in SP speakers. Measurements were obtained from seven native American English (AE) speakers (controls) and seven native SP speakers. Results revealed no statistically significant differences between groups on any aeromechanical measurement. However, the SP group tended to speak at a faster rate (syllables per second), and the implications of this observation for nasality are discussed.

Item A Palette of Transmasculine Voices (2023-10-26) Dolquist, Devin V.; Munson, Benjamin; munso005@umn.edu; Munson, Benjamin; Studies in the Applied Sociolinguistics of Speech and Language (SASS) Laboratory, Department of Speech-Language-Hearing Sciences, College of Liberal Arts; Center for Applied and Translational Sensory Sciences

The growing practice of gender-affirming voice work in speech-language pathology often overlooks the voices of transmasculine people. Previous research on this topic has focused primarily on obtaining acoustic information that helps trans people assimilate to cis-sounding voices. This is a new corpus of voices from a diverse set of 20 masculine-identifying people, including transmasculine men, cisgender men, and transmasculine nonbinary people. The corpus includes recordings of materials commonly used in speech-language pathology (the Rainbow Passage [Dietsch et al., 2003] and the CAPE-V sentences [Kempster et al., 2009]) and a set of 27 sentences created for this project. The corpus contains individual audio files for all of the materials, and Praat TextGrids for the novel sentences. It can be used in clinical services to model different male-sounding voices, and in clinical and preprofessional education in speech-language pathology.

Item The role of clinical experience in listening for covert contrasts in children's speech. (2010-06) Johnson, Julie M.

Children acquire speech sounds gradually. This gradual acquisition is reflected in numerous aspects of speech-sound development, from an infant's ability to distinguish between sounds that vary only slightly to the production of sounds that are identifiably adult-like.
Evidence of gradual acquisition is seen in acoustic studies of children's speech-sound production, many of which have shown that children develop contrasts between certain speech sounds gradually, producing intermediate stages as they progress from incorrect to correct productions. It has also been shown that adults can perceive these fine differences in young children's speech. This study examined whether experienced speech-language pathologists perceive children's consonants differently from untrained listeners. The stimulus sets consisted of /t/-/k/ (88 tokens), /s/-/θ/ (200 tokens), and /d/-/g/ (135 tokens). Forty-two participants (21 experienced speech-language clinicians and 21 non-clinician undergraduate students) heard consonant-vowel syllables truncated from words produced by children ages two through five. Listeners rated the initial target sound on a visual-analog scale (VAS): a double-headed arrow labeled with a target sound at each end (for example, "the 't' sound" at one end and "the 'k' sound" at the other). Rating a token involved clicking on the line at the location representing the token's proximity to an ideal /t/ or /k/. Participants' click locations on the VAS line were strongly correlated with the acoustic parameters that differentiate the endpoint categories for a variety of contrasts in the stimulus sets. Results indicated three main differences between how clinicians and laypeople perceived the stimuli: first, clinicians were more willing to click near the ends of each scale, indicating that a token was closer to a perfect representation of the target sound; second, clinicians had higher intra-rater reliability than the naïve listeners; and third, clinicians showed a tighter relationship between the acoustic properties and the VAS ratings than laypeople did.