Identifying the auditory mechanisms supporting speech-in-noise and accented-speech recognition in middle-aged listeners with and without sensorineural hearing loss

Project type: Research project

Project Details


Communication rarely occurs in pristine listening conditions, so a talker’s message is often degraded before it reaches the listener’s ears. Environmental noise and non-native-accented speech are two common sources of everyday degraded speech. Considerable evidence shows that degraded-speech recognition is challenging for everyone, and even more challenging for individuals with hearing loss. However, it is unknown what mechanisms support recognition of degraded speech, or whether those mechanisms depend on the type of degradation.

Moreover, most of what is known about degraded-speech recognition difficulties comes from studies of older adults, where aging effects can complicate interpretation of findings. Little is known about what happens during middle age, an age span that represents the majority of the workforce and a period when speech-recognition difficulties first emerge. Delineating the mechanisms supporting degraded-speech recognition in middle-aged listeners with and without hearing loss will enable us to provide tailored support for individuals at the point when these difficulties arise.

The long-term goal of this research is to delineate the shared and separate mechanisms that support speech recognition under multiple degraded listening conditions across a wide age range of listeners, and to determine how listener-dependent factors (e.g., hearing acuity, life experiences) lead to individual differences in the mechanisms supporting these listening skills. The first step, and the goal of this project, is to identify the mechanisms of degraded-speech recognition in middle-aged listeners with and without hearing loss. To do this, we will examine the contributions of hearing acuity, via audiometric thresholds and distortion product otoacoustic emissions, and of central auditory processing, via the frequency-following response, a subcortical evoked response that captures processing of discrete sound features.
Effective start/end date: 1/1/21 – 12/31/22

Funding

  • American Hearing Research Foundation (Krizman AGMT 12/15/20)

