Browsing by Subject "hearing"
Now showing 1 - 4 of 4
Item
Data supporting "Informational Masking Constrains Vocal Communication in Nonhuman Animals" (2023-01-09)
Gupta, Saumya; Kalra, Lata; Rose, Gary J; Bee, Mark A (contact: gupta333@umn.edu)

Noisy social environments constrain human speech communication in two important ways: spectrotemporal overlap between signals and noise can reduce speech audibility ("energetic masking"), and noise can also interfere with processing the informative features of otherwise audible speech ("informational masking"). To date, informational masking has not been investigated in studies of vocal communication in nonhuman animals, even though many animals make evolutionarily consequential decisions that depend on processing vocal information in noisy social environments. In this study of a treefrog, in which females choose mates in noisy breeding choruses, we investigated whether informational masking disrupts the processing of vocal information in the contexts of species recognition and sexual selection. The associated data are being released prior to publication of the manuscript, in support of peer review.

Item
Data supporting "Lung-to-ear sound transmission does not improve directional hearing in green treefrogs (Hyla cinerea)" (2020-09-04)
Christensen-Dalsgaard, Jakob; Lee, Norman; Bee, Mark A (contact: mbee@umn.edu)

Amphibians are unique among extant vertebrates in having middle ear cavities that are internally coupled to each other and to the lungs. In frogs, the lung-to-ear sound transmission pathway can influence the tympanum's inherent directionality, but what role such effects might play in directional hearing remains unclear. In this study of the American green treefrog (Hyla cinerea), we tested the hypothesis that the lung-to-ear sound transmission pathway functions to improve directional hearing, particularly in the context of intraspecific sexual communication. Using laser vibrometry, we measured the tympanum's vibration amplitude in females in response to a frequency-modulated sweep presented from 12 sound incidence angles in azimuth. Tympanum directionality was determined across three states of lung inflation (inflated, deflated, reinflated), both for a single tympanum, in the form of the vibration amplitude difference (VAD), and for binaural comparisons, in the form of the interaural vibration amplitude difference (IVAD). The state of lung inflation had negligible effects (typically less than 0.5 dB) on both VADs and IVADs at the frequencies emphasized in the advertisement calls produced by conspecific males (834 Hz and 2730 Hz). Directionality at the peak resonance frequency of the lungs (1558 Hz) was improved by approximately 3 dB for a single tympanum when the lungs were inflated versus deflated, but IVADs were not affected by the state of lung inflation. Based on these results, we reject the hypothesis that the lung-to-ear sound transmission pathway functions to improve directional hearing in frogs.
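As an illustration of the directionality metrics named in the preceding record, the short sketch below computes VADs and IVADs (in dB) from a set of tympanum vibration amplitudes. This is not the authors' analysis code: the amplitude values, the 30° angle grid, and the 0° reference convention are invented for the example.

```python
import numpy as np

# Hypothetical laser-vibrometry measurements: tympanum vibration amplitude
# (arbitrary linear units) at 12 sound-incidence angles in azimuth, for the
# left and right tympanum. All values below are invented for illustration.
angles_deg = np.arange(0, 360, 30)                    # 12 azimuth angles
rng = np.random.default_rng(0)
amp_left = 1.0 + 0.3 * np.cos(np.radians(angles_deg - 90)) + 0.02 * rng.standard_normal(12)
amp_right = 1.0 + 0.3 * np.cos(np.radians(angles_deg + 90)) + 0.02 * rng.standard_normal(12)

def vibration_amplitude_difference(amp, angles=angles_deg, ref_angle_deg=0):
    """VAD: single-tympanum directionality, in dB re: the amplitude at a
    reference sound-incidence angle (reference convention assumed here)."""
    ref = amp[np.where(angles == ref_angle_deg)[0][0]]
    return 20 * np.log10(amp / ref)

def interaural_vibration_amplitude_difference(amp_l, amp_r):
    """IVAD: binaural comparison, the dB difference between the two tympana
    at each sound-incidence angle."""
    return 20 * np.log10(amp_l / amp_r)

vad_left = vibration_amplitude_difference(amp_left)
ivad = interaural_vibration_amplitude_difference(amp_left, amp_right)

for theta, v, i in zip(angles_deg, vad_left, ivad):
    print(f"{theta:3d} deg  VAD {v:+5.2f} dB  IVAD {i:+5.2f} dB")
```

Comparing such VAD and IVAD profiles across the inflated, deflated, and reinflated lung states, at the call frequencies (834 and 2730 Hz) versus the lung resonance (1558 Hz), is the kind of contrast the dataset supports.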
Item
The effort of mentally repairing speech in individuals with hearing loss (2023-11)
Gianakas, Steven

Over 460 million people worldwide have hearing loss (HL) that negatively impacts their ability to communicate (Davis & Hoffman, 2019). In the clinic, performance is measured by the percentage of words a listener repeats correctly. However, these scores reflect not only the health of the auditory system but also the listener's ability to mentally repair misperceptions by using knowledge of the language and context ("cognitive repair"). Standard measures of speech perception cannot detect whether a person used cognitive repair or accurately heard the speech (with no need for repair). Detecting a person's reliance on cognitive repair is important because, while taking an extra moment to use context is helpful in the testing booth, it can break down in the real world, where the next sentence arrives before the previous one has been fully processed. We hypothesize that the continual need for cognitive repair is at the heart of what makes listening effortful, and that it ultimately leads to increased fatigue (Edwards, 2017), anxiety (Morata et al., 2005), and social withdrawal (Hughes et al., 2018) for people with HL. The goals of this dissertation are to (1) identify listener reliance on cognitive repair, (2) measure the timeline of cognitive repair and its interference with ongoing processing, and (3) measure the relief from effort resulting from priming. The first study demonstrates a clinically feasible behavioral test that identifies when a listener relies on the moment immediately following a sentence to use context. Importantly, this test will better identify patients with HL who use cognitive repair during clinical testing, which can lead to improved, individualized, patient-centered care. The second study uses a dual-task paradigm to estimate the amount of time needed for cognitive repair after a sentence; during this window, the listener would be susceptible to interference from an upcoming sentence in real-world conversation. The third study uses pupillometry to measure how the effort of repairing speech is affected by giving listeners a preview of the missing word.

Item
Perception of complex sounds at high frequencies (2022-05)
Guest, Daniel

Understanding how the auditory system processes frequency and intensity information is crucial to understanding overall auditory function. Although great progress has been made on this question for simple sounds, such as pure tones, considerable uncertainty remains about how the auditory system processes frequency and intensity information in more complex and naturalistic sounds. Moreover, much of our understanding comes from sounds in the low-frequency range, where phase locking to temporal fine structure is available in the auditory nerve. To address these limitations, this dissertation first presents new data from a variety of psychoacoustical tasks measuring frequency and intensity perception not only at low frequencies but also at high frequencies. Next, the psychophysical results are interpreted with the aid of modern computational models of the auditory system, which capture key features of the complex and nonlinear processing that takes place in the auditory periphery and auditory subcortex. Both the behavioral and computational results demonstrate how perception of complex sound features, such as pitch and spectral shape, reflects a delicate combination of low-level constraints imposed by the peripheral encoding of sound and higher-level influences, such as central processing, familiarity, and context.
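The records above do not specify their psychoacoustical procedures, but frequency and intensity discrimination thresholds of the kind mentioned in the last record are often estimated with adaptive tracking. The sketch below is a generic two-down, one-up staircase with a simulated listener standing in for real trials; the psychometric function, step sizes, and the 10 Hz "true" difference limen are all assumptions for illustration, not values from the work.

```python
import numpy as np

def two_down_one_up(listener, start_delta=50.0, step_factor=2.0,
                    min_step_factor=1.26, n_reversals=8):
    """Generic 2-down/1-up adaptive staircase (converges near 70.7% correct).
    `listener(delta)` returns True for a correct trial; here it stands in for
    a real psychoacoustical trial. All parameter values are illustrative."""
    delta, correct_in_row, reversals, last_direction = start_delta, 0, [], None
    factor = step_factor
    while len(reversals) < n_reversals:
        if listener(delta):
            correct_in_row += 1
            if correct_in_row == 2:                       # two correct: make it harder
                correct_in_row = 0
                if last_direction == "up":                # direction change = reversal
                    reversals.append(delta)
                    factor = max(factor / 1.26, min_step_factor)
                delta /= factor
                last_direction = "down"
        else:                                             # one wrong: make it easier
            correct_in_row = 0
            if last_direction == "down":
                reversals.append(delta)
                factor = max(factor / 1.26, min_step_factor)
            delta *= factor
            last_direction = "up"
    return np.exp(np.mean(np.log(reversals[-6:])))        # geometric mean of last reversals

# Simulated listener whose true frequency-difference limen is 10 Hz (invented).
rng = np.random.default_rng(1)
def sim_listener(delta_hz, dl=10.0):
    p_correct = 0.5 + 0.5 / (1.0 + (dl / delta_hz) ** 2)  # toy psychometric function
    return rng.random() < p_correct

print(f"Estimated threshold: {two_down_one_up(sim_listener):.1f} Hz")
```

The two-down, one-up rule targets the 70.7%-correct point of the psychometric function, which is why it is a common default for threshold estimation in behavioral studies of this kind.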