Browsing by Subject "Cochlear implant"
Now showing 1 - 3 of 3
Item: A Single-Channel Noise Reduction Algorithm for Cochlear-Implant Users (2015-12) Wang, Ningyuan

Despite good performance in quiet environments, there are still significant gaps in speech perception in noise between normal-hearing listeners and hearing-impaired listeners using devices such as hearing aids or cochlear implants (CIs). Much effort has been invested in developing noise reduction algorithms that could close these gaps, but few of them can enhance speech intelligibility without prior knowledge of the speech signal, including both its statistical properties and location information. In this study, a single-channel noise reduction algorithm, based on a noise tracking algorithm and the binary masking (BM) method, was implemented for CI users. The noise tracking algorithm captured detailed spectral information about the noise with a fast noise tracker during noise-like frames and updated the estimated cumulative noise level with a slow noise tracker during speech-like frames. Next, this noise tracking algorithm was used to estimate the signal-to-noise ratio (SNR) of each temporal-spectral region, termed a "time-frequency unit" in the BM method, to determine whether to eliminate or retain each unit. Finally, a sentence perception test was employed to investigate the effects of this noise reduction algorithm across various types of background noise and input SNR conditions. Results showed that the mean percent correct for CI users was improved in most conditions by the noise reduction process. Improvements in speech intelligibility were observed at all input SNRs for the babble and speech-shaped noise conditions; however, challenges remain for non-stationary restaurant noise. (A minimal illustrative sketch of the SNR-based binary-masking step appears after this listing.)

Item: Spectral resolution and speech recognition in noise by cochlear implant users (2011-07) Anderson, Elizabeth Susan

For cochlear implant (CI) users, the relationship between spectral resolution and speech perception in noise has remained ambiguous. An even more fundamental question has been how to measure spectral resolution in CI listeners. This dissertation describes work exploring the relationships among different measures of spectral resolution, and between each of those measures and speech recognition in quiet and in noise. Spectral ripple discrimination was found to correlate strongly with spatial tuning curves when the measures were matched in frequency region. Broadband spectral ripple discrimination correlated well with sentence recognition in quiet, but not in background noise. Spectral ripple detection correlated strongly with speech recognition in quiet, but its validity as a measure of spectral resolution was not empirically supported. Spectral ripple discrimination thresholds were also compared to sentence recognition in noise, using spectrally limited maskers that did not overlap with the entire speech spectrum. Speech reception thresholds were measured in the presence of four low- or high-frequency maskers, all bandpass-filtered from speech-shaped noise, and a broadband masker encompassing most of the speech spectrum. The findings revealed substantial between-subject variability in susceptibility to masking by each of these noises and in spectral release from masking; this variability could not be explained simply in terms of energetic masking and did not appear to be strongly related to spectral resolution. Better-performing CI users appeared to show stronger relationships between spectral resolution and speech perception than did poorer performers, implying that advanced CI processing strategies designed to maximize the number of spectral channels may not benefit all CI users equally.

Item: Understanding Auditory Context Effects and their Implications (2015-12) Wang, Ningyuan

Our perception of sound at any point in time depends not only on the sound itself, but also on the acoustic environment of the recent past. These auditory context effects reflect the adaptation of the auditory system to ambient conditions; they offer the potential to improve coding efficiency and provide the basis for some forms of perceptual invariance in the face of different talkers, room environments, and types of background noise. Despite their obvious importance for auditory perception, the mechanisms underlying auditory context effects remain unclear. The overall goal of this thesis was to investigate different auditory context effects in both normal-hearing listeners and cochlear-implant (CI) users, to shed light on the potential underlying mechanisms, to reveal their implications for auditory perception, and to investigate the effects of hearing loss on these context effects. Chapters 2, 3, and 4 examine different context effects, known respectively as the loudness context effect (LCE), induced loudness reduction (ILR), and the spectral motion contrast effect. Another context effect, known as auditory enhancement, is introduced in Chapter 5 with a vowel enhancement paradigm and is further explored in Chapter 6 by treating it as a process of frequency-selective gain control. Finally, a simplified neural model is proposed in Chapter 7 to explain the basis of auditory enhancement while remaining consistent with the results from the studies of the other context effects. The results reveal both similarities and differences between normal-hearing listeners and CI users in their responses to auditory context effects, suggest a role for peripheral processes in these effects, and point to a potential opportunity to improve current CI speech processing strategies by restoring normal auditory context effects.
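The first item above describes SNR-based binary masking driven by a fast/slow noise tracker. The sketch below illustrates that general approach only, not the thesis implementation: the STFT front end, the simple energy-based frame classification, the smoothing constants (fast_alpha, slow_alpha), and the 0 dB threshold are all illustrative assumptions.

```python
# Minimal illustrative sketch of SNR-based binary masking with a fast/slow
# noise tracker. All parameters and the frame-classification rule are
# assumptions for illustration, not the values used in the thesis.
import numpy as np
from scipy.signal import stft, istft

def binary_mask_denoise(x, fs, snr_threshold_db=0.0,
                        fast_alpha=0.5, slow_alpha=0.98):
    """Discard time-frequency units whose estimated SNR falls below a threshold."""
    f, t, X = stft(x, fs=fs, nperseg=256)
    power = np.abs(X) ** 2
    noise_est = power[:, 0].copy()          # initialize from the first frame
    mask = np.zeros_like(power)

    for i in range(power.shape[1]):
        frame = power[:, i]
        # Crude frame classification: frames well above the current noise
        # estimate are treated as speech-like, otherwise noise-like.
        speech_like = frame.sum() > 2.0 * noise_est.sum()
        # Fast tracking (more weight on the new frame) during noise-like
        # frames, slow updates during speech-like frames.
        alpha = slow_alpha if speech_like else fast_alpha
        noise_est = alpha * noise_est + (1.0 - alpha) * frame
        # Estimate the SNR of each time-frequency unit and retain or discard it.
        snr_db = 10.0 * np.log10(frame / np.maximum(noise_est, 1e-12))
        mask[:, i] = (snr_db > snr_threshold_db).astype(float)

    _, y = istft(X * mask, fs=fs, nperseg=256)
    return y
```

A CI signal path would presumably apply such mask decisions within the device's analysis filterbank channels rather than to a resynthesized waveform; the STFT and inverse STFT here are used only to keep the sketch self-contained.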