WorldWideScience

Sample records for normal hearing listeners

  1. Binaural pitch perception in normal-hearing and hearing-impaired listeners

    DEFF Research Database (Denmark)

    Santurette, Sébastien; Dau, Torsten

    2007-01-01

    The effects of hearing impairment on the perception of binaural-pitch stimuli were investigated. Several experiments were performed with normal-hearing and hearing-impaired listeners, including detection and discrimination of binaural pitch, and melody recognition using different types of binaural pitches. For the normal-hearing listeners, all types of binaural pitches could be perceived immediately and were musical. The hearing-impaired listeners could be divided into three groups based on their results: (a) some perceived all types of binaural pitches, but with decreased salience or musicality compared to normal-hearing listeners; (b) some could only perceive the strongest pitch types; (c) some were unable to perceive any binaural pitch at all. The performance of the listeners was not correlated with audibility. Additional experiments investigated the correlation between performance in binaural…

  2. Factors Affecting Sentence-in-Noise Recognition for Normal Hearing Listeners and Listeners with Hearing Loss.

    Science.gov (United States)

    Hwang, Jung Sun; Kim, Kyung Hyun; Lee, Jae Hee

    2017-07-01

    Despite amplified speech, listeners with hearing loss often report more difficulty understanding speech in background noise than normal-hearing listeners. Various factors such as deteriorated hearing sensitivity, age, suprathreshold temporal resolution, and reduced capacity of working memory and attention can contribute to their sentence-in-noise problems. The present study aims to determine a primary explanatory factor for sentence-in-noise recognition difficulties in adults with or without hearing loss. Forty normal-hearing (NH) listeners (23-73 years) and thirty-four hearing-impaired (HI) listeners (24-80 years) participated in experimental testing. Both the NH and HI groups included younger, middle-aged, and older listeners. The sentence recognition score in noise was measured at 0 dB signal-to-noise ratio. Temporal resolution was evaluated by gap detection performance using the Gaps-In-Noise test. Listeners' short-term auditory working memory span was measured by forward and backward digit spans. Overall, the HI listeners' sentence-in-noise recognition, temporal resolution abilities, and digit forward and backward spans were poorer than those of the NH listeners. Both NH and HI listeners showed substantial variability in performance. For NH listeners, only the digit backward span explained a small proportion of the variance in sentence-in-noise performance. For the HI listeners, performance on all measures was influenced by age, and their sentence-in-noise difficulties were associated with various factors such as high-frequency hearing sensitivity, suprathreshold temporal resolution abilities, and working memory span. For the HI listeners, the critical predictors of sentence-in-noise performance were composite measures of peripheral hearing sensitivity and suprathreshold temporal resolution abilities. The primary explanatory factors for sentence-in-noise recognition performance thus differ between NH and HI listeners. Factors…

  3. Predicting consonant recognition and confusions in normal-hearing listeners

    DEFF Research Database (Denmark)

    Zaar, Johannes; Dau, Torsten

    2017-01-01

    …, Kollmeier, and Kohlrausch [(1997). J. Acoust. Soc. Am. 102, 2892–2905]. The model was evaluated based on the extensive consonant perception data set provided by Zaar and Dau [(2015). J. Acoust. Soc. Am. 138, 1253–1267], which was obtained with normal-hearing listeners using 15 consonant-vowel combinations… confusion groups. The large predictive power of the proposed model suggests that adaptive processes in the auditory preprocessing in combination with a cross-correlation-based template-matching back end can account for some of the processes underlying consonant perception in normal-hearing listeners. The proposed model may provide a valuable framework, e.g., for investigating the effects of hearing impairment and hearing-aid signal processing on phoneme recognition.

  4. Subjective Evaluation of Sound Quality for Normal-hearing and Hearing-impaired Listeners

    DEFF Research Database (Denmark)

    Nielsen, Lars Bramsløw

    1992-01-01

    11 hearing-impaired (HI) and 12 normal-hearing (NH) subjects performed sound quality ratings on 6 perceptual scales (Loudness, Clarity, Sharpness, Fullness, Spaciousness, and Overall judgement). The signals for the rating experiment consisted of running speech and music with or without…, but the normal-hearing group was slightly more reliable. There were significant differences between stimuli and between subjects, with stimuli affecting the ratings the most. Normal-hearing and hearing-impaired subjects showed similar trends, but normal-hearing listeners were generally more sensitive…

  5. Objective Scaling of Sound Quality for Normal-Hearing and Hearing-Impaired Listeners

    DEFF Research Database (Denmark)

    Nielsen, Lars Bramsløw

    A new method for the objective estimation of sound quality for both normal-hearing and hearing-impaired listeners has been presented: OSSQAR (Objective Scaling of Sound Quality and Reproduction). OSSQAR is based on three main parts, which have been carried out and documented separately: 1) Subjective sound quality ratings of clean and distorted speech and music signals, by normal-hearing and hearing-impaired listeners, to provide reference data; 2) An auditory model of the ear, including the effects of hearing loss, based on existing psychoacoustic knowledge, coupled to 3) An artificial neural network, which was trained to predict the sound quality ratings. OSSQAR predicts the perceived sound quality on two independent perceptual rating scales: Clearness and Sharpness. These two scales were shown to be the most relevant for assessment of sound quality, and they were interpreted the same way…

  6. Temporal Fine-Structure Coding and Lateralized Speech Perception in Normal-Hearing and Hearing-Impaired Listeners

    DEFF Research Database (Denmark)

    Locsei, Gusztav; Pedersen, Julie Hefting; Laugesen, Søren

    2016-01-01

    This study investigated the relationship between speech perception performance in spatially complex, lateralized listening scenarios and temporal fine-structure (TFS) coding at low frequencies. Young normal-hearing (NH) and two groups of elderly hearing-impaired (HI) listeners with mild or moderate hearing loss above 1.5 kHz participated in the study. Speech reception thresholds (SRTs) were estimated in the presence of either speech-shaped noise, two-, four-, or eight-talker babble played reversed, or a nonreversed two-talker masker. Target audibility was ensured by applying individualized linear… threshold nor the interaural phase difference threshold tasks showed a correlation with the SRTs or with the amount of masking release due to binaural unmasking, respectively. The results suggest that, although HI listeners with normal hearing thresholds below 1.5 kHz experienced difficulties with speech…

  7. Masker phase effects in normal-hearing and hearing-impaired listeners: evidence for peripheral compression at low signal frequencies

    DEFF Research Database (Denmark)

    Oxenham, Andrew J.; Dau, Torsten

    2004-01-01

    …curvature. Results from 12 listeners with sensorineural hearing loss showed reduced masker phase effects, when compared with data from normal-hearing listeners, at both 250- and 1000-Hz signal frequencies. The effects of hearing impairment on phase-related masking differences were not well simulated… are affected by a common underlying mechanism, presumably related to cochlear outer hair cell function. The results also suggest that normal peripheral compression remains strong even at 250 Hz.

  8. The use of auditory and visual context in speech perception by listeners with normal hearing and listeners with cochlear implants

    Directory of Open Access Journals (Sweden)

    Matthew Winn

    2013-11-01

    There is a wide range of acoustic and visual variability across different talkers and different speaking contexts. Listeners with normal hearing accommodate that variability in ways that facilitate efficient perception, but it is not known whether listeners with cochlear implants can do the same. In this study, listeners with normal hearing (NH) and listeners with cochlear implants (CIs) were tested for accommodation to auditory and visual phonetic contexts created by gender-driven speech differences as well as vowel coarticulation and lip rounding in both consonants and vowels. Accommodation was measured as the shifting of perceptual boundaries between /s/ and /ʃ/ sounds in various contexts, as modeled by mixed-effects logistic regression. Owing to the spectral contrasts thought to underlie these context effects, CI listeners were predicted to perform poorly, but showed considerable success. Listeners with cochlear implants not only showed sensitivity to auditory cues to gender, they were also able to use visual cues to gender (i.e., faces) as a supplement or proxy for information in the acoustic domain, in a pattern that was not observed for listeners with normal hearing. Spectrally degraded stimuli heard by listeners with normal hearing generally did not elicit strong context effects, underscoring the limitations of noise vocoders and/or the importance of experience with electric hearing. Visual cues for consonant lip rounding and vowel lip rounding were perceived in a manner consistent with coarticulation and were generally used more heavily by listeners with CIs. Results suggest that listeners with cochlear implants are able to accommodate various sources of acoustic variability either by attending to appropriate acoustic cues or by inferring them via the visual signal.
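The boundary-shift analysis described in this abstract can be illustrated with a simplified single-listener sketch: fit a logistic psychometric function to binary /s/–/ʃ/ responses along a stimulus continuum and read off the category boundary. This is a hedged toy version in plain Python (the study used mixed-effects logistic regression over many listeners; the data, step values, and function name here are hypothetical):

```python
import math

def fit_logistic_boundary(x, y, lr=0.5, iters=5000):
    """Fit P("sh" | x) = sigmoid(b0 + b1 * x) by gradient ascent on the
    log-likelihood, then return the category boundary -b0 / b1
    (the stimulus value at which both responses are equally likely)."""
    b0 = b1 = 0.0
    n = len(x)
    for _ in range(iters):
        g0 = g1 = 0.0
        for xi, yi in zip(x, y):
            p = 1.0 / (1.0 + math.exp(-(b0 + b1 * xi)))
            g0 += yi - p          # gradient w.r.t. intercept
            g1 += (yi - p) * xi   # gradient w.r.t. slope
        b0 += lr * g0 / n
        b1 += lr * g1 / n
    return -b0 / b1

# Hypothetical 11-step /s/-to-/sh/ continuum (0 = clear /s/, 1 = clear /sh/)
steps = [i / 10 for i in range(11)]
# Simulated listener who labels stimuli above the midpoint as "sh"
responses = [1 if s > 0.5 else 0 for s in steps]
boundary = fit_logistic_boundary(steps, responses)  # between 0.5 and 0.6 here
```

A context effect would then appear as a shift of this boundary between conditions (e.g., male vs. female precursor, or rounded vs. unrounded vowel context).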

  9. Time course of auditory streaming: Do CI users differ from normal-hearing listeners?

    Directory of Open Access Journals (Sweden)

    Martin Böckmann-Barthel

    2014-07-01

    In a complex acoustical environment with multiple sound sources, the auditory system uses streaming as a tool to organize the incoming sounds into one or more streams depending on the stimulus parameters. Streaming is commonly studied with alternating sequences of signals, often tones of different frequencies. The present study investigates stream segregation in cochlear implant (CI) users, in whom hearing is restored by electrical stimulation of the auditory nerve. CI users listened to 30-s-long sequences of alternating A and B harmonic complexes at four different fundamental frequency separations (Δf), ranging from 2 to 14 semitones. They had to indicate, as promptly as possible after sequence onset, whether they perceived one stream or two streams and, in addition, any changes of the percept throughout the rest of the sequence. The conventional view is that the initial percept is always that of a single stream, which may after some time change to a percept of two streams. This general build-up hypothesis has recently been challenged on the basis of a new analysis of data from normal-hearing listeners, which showed a build-up response only for an intermediate frequency separation. Using the same experimental paradigm and analysis, the present study found that the results of CI users agree with those of the normal-hearing listeners: (i) the probability of the first decision being a one-stream percept decreased and that of a two-stream percept increased as Δf increased, and (ii) a build-up was only found for 6 semitones. Only the time elapsed before the listeners made their first decision about the percept was prolonged compared to normal-hearing listeners. The similarity in the data of the CI users and the normal-hearing listeners indicates that the quality of stream formation is similar in these groups of listeners.

  10. Perception of a Sung Vowel as a Function of Frequency-Modulation Rate and Excursion in Listeners with Normal Hearing and Hearing Impairment

    Science.gov (United States)

    Vatti, Marianna; Santurette, Sébastien; Pontoppidan, Niels Henrik; Dau, Torsten

    2014-01-01

    Purpose: Frequency fluctuations in human voices can usually be described as coherent frequency modulation (FM). As listeners with hearing impairment (HI listeners) are typically less sensitive to FM than listeners with normal hearing (NH listeners), this study investigated whether hearing loss affects the perception of a sung vowel based on FM…

  11. Speech intelligibility of normal listeners and persons with impaired hearing in traffic noise

    Science.gov (United States)

    Aniansson, G.; Peterson, Y.

    1983-10-01

    Speech intelligibility (PB words) in traffic-like noise was investigated in a laboratory situation simulating three common listening situations, indoors at 1 and 4 m and outdoors at 1 m. The maximum noise levels still permitting 75% intelligibility of PB words in these three listening situations were also defined. A total of 269 persons were examined. Forty-six had normal hearing, 90 a presbycusis-type hearing loss, 95 a noise-induced hearing loss and 38 a conductive hearing loss. In the indoor situation the majority of the groups with impaired hearing retained good speech intelligibility in 40 dB(A) masking noise. Lowering the noise level to less than 40 dB(A) resulted in a minor, usually insignificant, improvement in speech intelligibility. Listeners with normal hearing maintained good speech intelligibility in the outdoor listening situation at noise levels up to 60 dB(A), without lip-reading (i.e., using non-auditory information). For groups with impaired hearing due to age and/or noise, representing 8% of the population in Sweden, the noise level outdoors had to be lowered to less than 50 dB(A), in order to achieve good speech intelligibility at 1 m without lip-reading.

  12. Effect of musical training on pitch discrimination performance in older normal-hearing and hearing-impaired listeners

    DEFF Research Database (Denmark)

    Bianchi, Federica; Dau, Torsten; Santurette, Sébastien

    2017-01-01

    Hearing-impaired (HI) listeners, as well as elderly listeners, typically have a reduced ability to discriminate the fundamental frequency (F0) of complex tones compared to young normal-hearing (NH) listeners. Several studies have shown that musical training, on the other hand, leads to improved F0-discrimination performance for NH listeners. It is unclear whether a comparable effect of musical training occurs for listeners whose sensory encoding of F0 is degraded. To address this question, F0 discrimination was investigated for three groups of listeners (14 young NH, 9 older NH, and 10 HI listeners), each including musicians and non-musicians, using complex tones that differed in harmonic content. Musical training significantly improved F0 discrimination for all groups of listeners, especially for complex tones containing low-numbered harmonics. In a second experiment, the sensitivity to temporal fine…

  13. Sound localization in noise in hearing-impaired listeners.

    Science.gov (United States)

    Lorenzi, C; Gatehouse, S; Lever, C

    1999-06-01

    The present study assesses the ability of four listeners with high-frequency, bilateral symmetrical sensorineural hearing loss to localize and detect a broadband click train in the frontal-horizontal plane, in quiet and in the presence of a white noise. The speaker array and stimuli are identical to those described by Lorenzi et al. (in press). The results show that: (1) localization performance is only slightly poorer in hearing-impaired listeners than in normal-hearing listeners when noise is at 0 deg azimuth, (2) localization performance begins to decrease at higher signal-to-noise ratios for hearing-impaired listeners than for normal-hearing listeners when noise is at +/- 90 deg azimuth, and (3) the performance of hearing-impaired listeners is less consistent when noise is at +/- 90 deg azimuth than at 0 deg azimuth. The effects of a high-frequency hearing loss were also studied by measuring the ability of normal-hearing listeners to localize the low-pass filtered version of the clicks. The data reproduce the effects of noise on three out of the four hearing-impaired listeners when noise is at 0 deg azimuth. They reproduce the effects of noise on only two out of the four hearing-impaired listeners when noise is at +/- 90 deg azimuth. The additional effects of a low-frequency hearing loss were investigated by attenuating the low-pass filtered clicks and the noise by 20 dB. The results show that attenuation does not strongly affect localization accuracy for normal-hearing listeners. Measurements of the clicks' detectability indicate that the hearing-impaired listeners who show the poorest localization accuracy also show the poorest ability to detect the clicks. The inaudibility of high frequencies, "distortions," and reduced detectability of the signal are assumed to have caused the poorer-than-normal localization accuracy for hearing-impaired listeners.

  14. Examination of the neighborhood activation theory in normal and hearing-impaired listeners.

    Science.gov (United States)

    Dirks, D D; Takayanagi, S; Moshfegh, A; Noffsinger, P D; Fausti, S A

    2001-02-01

    Experiments were conducted to examine the effects of lexical information on word recognition among normal-hearing listeners and individuals with sensorineural hearing loss. The lexical factors of interest were incorporated in the Neighborhood Activation Model (NAM). Central to this model is the concept that words are recognized relationally in the context of other phonemically similar words. NAM suggests that words in the mental lexicon are organized into similarity neighborhoods and the listener is required to select the target word from competing lexical items. Two structural characteristics of similarity neighborhoods that influence word recognition have been identified: "neighborhood density," or the number of phonemically similar words (neighbors) for a particular target item, and "neighborhood frequency," or the average frequency of occurrence of all the items within a neighborhood. A third lexical factor, "word frequency," or the frequency of occurrence of a target word in the language, is assumed to optimize the word recognition process by biasing the system toward choosing a high-frequency over a low-frequency word. Three experiments were performed. In the initial experiments, word recognition for consonant-vowel-consonant (CVC) monosyllables was assessed in young normal-hearing listeners by systematically partitioning the items into the eight possible lexical conditions created by two levels of each of the three lexical factors: word frequency (high and low), neighborhood density (high and low), and average neighborhood frequency (high and low). Neighborhood structure and word frequency were estimated computationally using a large on-line lexicon based on Webster's Pocket Dictionary. From this program, 400 highly familiar monosyllables were selected and partitioned into eight orthogonal lexical groups (50 words/group). The 400 words were presented randomly to normal-hearing listeners in speech-shaped noise (Experiment 1) and in quiet (Experiment 2) as…
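The eight-way partition described here (two levels each of word frequency, neighborhood density, and neighborhood frequency) can be sketched with simple median splits. This is a hedged stand-in for the study's actual computational lexicon; the field names and toy values are hypothetical:

```python
from statistics import median

def partition_lexicon(words):
    """Split word records (dicts with 'freq', 'density', 'nbr_freq') into
    the eight cells formed by high/low median splits on the three factors."""
    factors = ("freq", "density", "nbr_freq")
    cut = {k: median(w[k] for w in words) for k in factors}
    cells = {}
    for w in words:
        key = tuple("high" if w[k] > cut[k] else "low" for k in factors)
        cells.setdefault(key, []).append(w["word"])
    return cells

# Toy lexicon: (word frequency, neighborhood density, neighborhood frequency)
toy = [
    {"word": "cat", "freq": 90, "density": 30, "nbr_freq": 40},
    {"word": "gup", "freq": 2,  "density": 5,  "nbr_freq": 3},
    {"word": "dog", "freq": 80, "density": 4,  "nbr_freq": 2},
    {"word": "pim", "freq": 3,  "density": 25, "nbr_freq": 35},
]
cells = partition_lexicon(toy)
# "cat" lands in the (high, high, high) cell of the 2x2x2 design
```

In the study itself, 50 highly familiar words were drawn per cell from a 400-word pool rather than median-split from a toy list.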

  15. Speech intelligibility for normal hearing and hearing-impaired listeners in simulated room acoustic conditions

    DEFF Research Database (Denmark)

    Arweiler, Iris; Dau, Torsten; Poulsen, Torben

    Speech intelligibility depends on many factors, such as room acoustics, the acoustical properties and location of the signal and the interferers, and the ability of the (normal and impaired) auditory system to process monaural and binaural sounds. In the present study, the effect of reverberation on spatial release from masking was investigated in normal-hearing and hearing-impaired listeners using three types of interferers: speech-shaped noise, an interfering female talker, and speech-modulated noise. Speech reception thresholds (SRTs) were obtained in three simulated environments: a listening room, a classroom, and a church. The data from the study provide constraints for existing models of speech intelligibility prediction (based on the speech intelligibility index, SII, or the speech transmission index, STI), which have shortcomings when reverberation and/or fluctuating noise affect speech…

  16. Detection threshold for sound distortion resulting from noise reduction in normal-hearing and hearing-impaired listeners.

    Science.gov (United States)

    Brons, Inge; Dreschler, Wouter A; Houben, Rolph

    2014-09-01

    Hearing-aid noise reduction should reduce background noise, but not disturb the target speech. This objective is difficult because noise reduction suffers from a trade-off between the amount of noise removed and signal distortion. It is unknown if this important trade-off differs between normal-hearing (NH) and hearing-impaired (HI) listeners. This study separated the negative effect of noise reduction (distortion) from the positive effect (reduction of noise) to allow the measurement of the detection threshold for noise-reduction (NR) distortion. Twelve NH subjects and 12 subjects with mild to moderate sensorineural hearing loss participated in this study. The detection thresholds for distortion were determined using an adaptive procedure with a three-interval, two-alternative forced-choice paradigm. Different levels of distortion were obtained by changing the maximum amount of noise reduction. Participants were also asked to indicate their preferred NR strength. The detection threshold for overall distortion was higher for HI subjects than for NH subjects, suggesting that stronger noise reduction can be applied for HI listeners without affecting the perceived sound quality. However, the preferred NR strength of HI listeners was closer to their individual detection threshold for distortion than in NH listeners. This implies that HI listeners tolerate fewer audible distortions than NH listeners.
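Adaptive procedures like the one in this study typically follow a transformed up–down rule; a 2-down/1-up track, for example, converges near the 70.7%-correct point of the psychometric function. Below is a minimal sketch of the level-update logic only (the starting level, step size, and interpretation as "distortion strength" are assumptions for illustration, not the paper's exact settings):

```python
def staircase_levels(responses, start=20.0, step=2.0):
    """2-down/1-up transformed staircase: the tracked level drops one step
    after two consecutive correct detections and rises one step after every
    miss, converging near the 70.7%-correct point."""
    level, run, levels = start, 0, [start]
    for correct in responses:
        if correct:
            run += 1
            if run == 2:       # two in a row: make the task harder
                level -= step
                run = 0
        else:                  # miss: make the task easier
            level += step
            run = 0
        levels.append(level)
    return levels

# Deterministic example run (True = distortion detected in the 2AFC trial)
track = staircase_levels([True, True, True, True, False, True, True, False])
```

In practice the threshold is usually estimated as the mean level at the last several reversals of such a track.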

  17. Perception of a Sung Vowel as a Function of Frequency-Modulation Rate and Excursion in Normal-Hearing and Hearing-Impaired Listeners

    DEFF Research Database (Denmark)

    Vatti, Marianna; Santurette, Sébastien; Pontoppidan, Niels Henrik

    2014-01-01

    Purpose: Frequency fluctuations in human voices can usually be described as coherent frequency modulation (FM). As listeners with hearing impairment (HI listeners) are typically less sensitive to FM than listeners with normal hearing (NH listeners), this study investigated whether hearing loss affects the perception of a sung vowel based on FM cues. Method: Vibrato maps were obtained in 14 NH and 12 HI listeners with different degrees of musical experience. The FM rate and FM excursion of a synthesized vowel, to which coherent FM was applied, were adjusted until a singing voice emerged. Results: In NH listeners, adding FM to the steady vowel components produced perception of a singing voice for FM rates between 4.1 and 7.5 Hz and FM excursions between 17 and 83 cents on average. In contrast, HI listeners showed substantially broader vibrato maps. Individual differences in map boundaries were…

  18. Effects of Age and Working Memory Capacity on Speech Recognition Performance in Noise Among Listeners With Normal Hearing.

    Science.gov (United States)

    Gordon-Salant, Sandra; Cole, Stacey Samuels

    2016-01-01

    This study aimed to determine whether younger and older listeners with normal hearing who differ in working memory span perform differently on speech recognition tests in noise. Older adults typically exhibit poorer speech recognition scores in noise than younger adults, which is attributed primarily to poorer hearing sensitivity and more limited working memory capacity in older than in younger adults. Previous studies typically tested older listeners with poorer hearing sensitivity and shorter working memory spans than younger listeners, making it difficult to discern the importance of working memory capacity for speech recognition. This investigation controlled for hearing sensitivity and compared speech recognition performance in noise by younger and older listeners who were subdivided into high and low working memory groups. Performance patterns were compared for different speech materials to assess whether the effect of working memory capacity varies with the demands of the specific speech test. The authors hypothesized that (1) normal-hearing listeners with low working memory span would exhibit poorer speech recognition performance in noise than those with high working memory span; (2) older listeners with normal hearing would show poorer speech recognition scores than younger listeners with normal hearing when the two age groups were matched for working memory span; and (3) an interaction between age and working memory would be observed for speech materials that provide contextual cues. Twenty-eight older (61 to 75 years) and 25 younger (18 to 25 years) normal-hearing listeners were assigned to groups based on age and working memory status. Northwestern University Auditory Test No. 6 words and Institute of Electrical and Electronics Engineers sentences were presented in noise using an adaptive procedure to measure the signal-to-noise ratio corresponding to 50% correct performance. Cognitive ability was evaluated with two tests of working memory (Listening…

  19. Effects of Varying Reverberation on Music Perception for Young Normal-Hearing and Old Hearing-Impaired Listeners.

    Science.gov (United States)

    Reinhart, Paul N; Souza, Pamela E

    2018-01-01

    Reverberation enhances music perception and is one of the most important acoustic factors in auditorium design. However, previous research on reverberant music perception has focused on young normal-hearing (YNH) listeners. Old hearing-impaired (OHI) listeners have degraded spatial auditory processing; therefore, they may perceive reverberant music differently. Two experiments were conducted examining the effects of varying reverberation on music perception for YNH and OHI listeners. Experiment 1 examined whether YNH and OHI listeners prefer different amounts of reverberation for classical music listening. Symphonic excerpts were processed at a range of reverberation times using a point-source simulation. Listeners performed a paired-comparison task in which they heard two excerpts with different reverberation times and indicated which they preferred. The YNH group preferred a reverberation time of 2.5 s; however, the OHI group did not demonstrate any significant preference. Experiment 2 examined whether OHI listeners are less sensitive to (i.e., less able to discriminate) differences in reverberation time than YNH listeners. YNH and OHI participants listened to pairs of music excerpts and indicated whether they perceived the same or a different amount of reverberation. Results indicated that the ability of both groups to detect differences in reverberation time improved with increasing reverberation time difference. However, discrimination was poorer for the OHI group than for the YNH group. This suggests that OHI listeners are less sensitive to differences in reverberation when listening to music than YNH listeners, which might explain the OHI group's lack of a reverberation time preference.

  20. Modeling auditory perception of individual hearing-impaired listeners

    DEFF Research Database (Denmark)

    Jepsen, Morten Løve; Dau, Torsten

    …selectivity. Three groups of listeners were considered: (a) normal-hearing listeners; (b) listeners with a mild-to-moderate sensorineural hearing loss; and (c) listeners with a severe sensorineural hearing loss. A fixed set of model parameters was derived for each hearing-impaired listener. The simulations showed that, in most cases, the reduced or absent cochlear compression, associated with outer hair-cell loss, quantitatively accounts for broadened auditory filters, while a combination of reduced compression and reduced inner hair-cell function accounts for decreased sensitivity and slower recovery from…

  21. Lateralized speech perception with small interaural time differences in normal-hearing and hearing-impaired listeners

    DEFF Research Database (Denmark)

    Locsei, Gusztav; Santurette, Sébastien; Dau, Torsten

    2017-01-01

    …SRMs are elicited by small ITDs. Speech reception thresholds (SRTs) and SRM due to ITDs were measured over headphones for 10 young NH and 10 older HI listeners, who had normal or close-to-normal hearing below 1.5 kHz. Diotic target sentences were presented in diotic or dichotic speech-shaped noise or two-talker babble maskers. In the dichotic conditions, maskers were lateralized by delaying the masker waveforms in the left headphone channel. Multiple magnitudes of masker ITDs were tested in both noise conditions. Although deficits were observed in speech perception abilities in speech-shaped noise and two-talker babble in terms of SRTs, HI listeners could utilize ITDs to a similar degree as NH listeners to facilitate the binaural unmasking of speech. A slight difference was observed between the group means when target and maskers were separated from each other by large ITDs, but not when separated…

  22. Listening effort and perceived clarity for normal-hearing children with the use of digital noise reduction.

    Science.gov (United States)

    Gustafson, Samantha; McCreery, Ryan; Hoover, Brenda; Kopun, Judy G; Stelmachowicz, Pat

    2014-01-01

    The goal of this study was to evaluate how digital noise reduction (DNR) impacts listening effort and judgments of sound clarity in children with normal hearing. It was hypothesized that when two DNR algorithms differing in signal-to-noise ratio (SNR) output are compared, the algorithm that provides the greater improvement in overall output SNR will reduce listening effort and receive a better clarity rating from child listeners. A secondary goal was to evaluate the relation between the inversion-method measurements and listening effort with DNR processing. Twenty-four children with normal hearing (ages 7 to 12 years) participated in a speech recognition task in which consonant-vowel-consonant nonwords were presented in broadband background noise. Test stimuli were recorded through two hearing aids with DNR off and DNR on at 0 dB and +5 dB input SNR. Stimuli were presented to listeners, and verbal response time (VRT) and phoneme recognition scores were measured. The underlying assumption was that an increase in VRT reflects an increase in listening effort. Children rated the sound clarity for each condition. The two commercially available hearing aids were chosen based on: (1) an inversion technique, which was used to quantify the magnitude of change in SNR with the activation of DNR, and (2) a measure of magnitude-squared coherence, which was used to ensure that DNR in both devices preserved the spectrum. One device provided a greater improvement in overall output SNR than the other. Both DNR algorithms resulted in minimal spectral distortion as measured using coherence. For both devices, VRT decreased in the DNR-on condition, suggesting that listening effort decreased with DNR in both devices. Clarity ratings were also better in the DNR-on condition for both devices. The device showing the greater improvement in output SNR with DNR engaged also improved phoneme recognition scores. The magnitude of this improvement in phoneme recognition was not accurately predicted with…
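The inversion technique mentioned in this abstract separates the speech and noise portions of a device recording by presenting the mixture twice, the second time with the noise polarity inverted; summing and differencing the two recordings then isolates each part, from which an output SNR follows. This is a hedged sketch for an idealized linear device with synthetic signals (the study's actual measurement setup is not reproduced here):

```python
import math

def output_snr_db(rec_plus, rec_minus):
    """Estimate output SNR from two recordings of a linear device:
    one of speech + noise, one of speech - noise (noise polarity inverted).
    Summing recovers the speech part; differencing recovers the noise part."""
    speech = [(a + b) / 2 for a, b in zip(rec_plus, rec_minus)]
    noise = [(a - b) / 2 for a, b in zip(rec_plus, rec_minus)]
    p_speech = sum(s * s for s in speech) / len(speech)
    p_noise = sum(n * n for n in noise) / len(noise)
    return 10 * math.log10(p_speech / p_noise)

# Synthetic test signals over whole periods: speech amplitude 1.0,
# noise amplitude 0.5, so the true power ratio is 4 (about 6.02 dB)
n_samp = 64
speech = [math.sin(2 * math.pi * i / n_samp) for i in range(n_samp)]
noise = [0.5 * math.sin(2 * math.pi * 2 * i / n_samp) for i in range(n_samp)]
rec_plus = [s + n for s, n in zip(speech, noise)]
rec_minus = [s - n for s, n in zip(speech, noise)]
snr = output_snr_db(rec_plus, rec_minus)  # ≈ 6.02 dB
```

The separation is exact only for linear time-invariant processing; with nonlinear features such as DNR engaged, the recovered parts are approximations, which is why the study paired this measure with a coherence check.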

  3. Selective attention in normal and impaired hearing.

    Science.gov (United States)

    Shinn-Cunningham, Barbara G; Best, Virginia

    2008-12-01

    A common complaint among listeners with hearing loss (HL) is that they have difficulty communicating in common social settings. This article reviews how normal-hearing listeners cope in such settings, especially how they focus attention on a source of interest. Results of experiments with normal-hearing listeners suggest that the ability to selectively attend depends on the ability to analyze the acoustic scene and to form perceptual auditory objects properly. Unfortunately, sound features important for auditory object formation may not be robustly encoded in the auditory periphery of HL listeners. In turn, impaired auditory object formation may interfere with the ability to filter out competing sound sources. Peripheral degradations are also likely to reduce the salience of higher-order auditory cues such as location, pitch, and timbre, which enable normal-hearing listeners to select a desired sound source out of a sound mixture. Degraded peripheral processing is also likely to increase the time required to form auditory objects and focus selective attention so that listeners with HL lose the ability to switch attention rapidly (a skill that is particularly important when trying to participate in a lively conversation). Finally, peripheral deficits may interfere with strategies that normal-hearing listeners employ in complex acoustic settings, including the use of memory to fill in bits of the conversation that are missed. Thus, peripheral hearing deficits are likely to cause a number of interrelated problems that challenge the ability of HL listeners to communicate in social settings requiring selective attention.

  4. Effects of Noise on Speech Recognition and Listening Effort in Children with Normal Hearing and Children with Mild Bilateral or Unilateral Hearing Loss

    Science.gov (United States)

    Lewis, Dawna; Schmid, Kendra; O'Leary, Samantha; Spalding, Jody; Heinrichs-Graham, Elizabeth; High, Robin

    2016-01-01

    Purpose: This study examined the effects of stimulus type and hearing status on speech recognition and listening effort in children with normal hearing (NH) and children with mild bilateral hearing loss (MBHL) or unilateral hearing loss (UHL). Method: Children (5-12 years of age) with NH (Experiment 1) and children (8-12 years of age) with MBHL,…

  5. Prediction of consonant recognition in quiet for listeners with normal and impaired hearing using an auditory model.

    Science.gov (United States)

    Jürgens, Tim; Ewert, Stephan D; Kollmeier, Birger; Brand, Thomas

    2014-03-01

    Consonant recognition was assessed in normal-hearing (NH) and hearing-impaired (HI) listeners in quiet as a function of speech level using a nonsense logatome test. Average recognition scores were analyzed and compared to recognition scores of a speech recognition model. In contrast to commonly used spectral speech recognition models operating on long-term spectra, a "microscopic" model operating in the time domain was used. Variations of the model (accounting for hearing impairment) and different model parameters (reflecting cochlear compression) were tested. Using these model variations this study examined whether speech recognition performance in quiet is affected by changes in cochlear compression, namely, a linearization, which is often observed in HI listeners. Consonant recognition scores for HI listeners were poorer than for NH listeners. The model accurately predicted the speech reception thresholds of the NH and most HI listeners. A partial linearization of the cochlear compression in the auditory model, while keeping audibility constant, produced higher recognition scores and improved the prediction accuracy. However, including listener-specific information about the exact form of the cochlear compression did not improve the prediction further.
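    The "linearization of cochlear compression" manipulated in this model is often idealized as a broken-stick input-output function whose compressive slope is moved back toward 1. A toy sketch of that idea (the knee and slope values are illustrative assumptions, not the model's actual parameters):

```python
import numpy as np

def basilar_io_db(in_db, knee_db=30.0, slope=0.25, linearization=0.0):
    """Broken-stick basilar-membrane I/O curve in dB: linear below the knee,
    compressive (slope < 1) above it. `linearization` in [0, 1] interpolates
    the compressive slope back toward 1, mimicking the loss of compression
    often observed in hearing-impaired listeners."""
    s = slope + linearization * (1.0 - slope)
    x = np.asarray(in_db, dtype=float)
    return np.where(x <= knee_db, x, knee_db + s * (x - knee_db))

print(float(basilar_io_db(70.0)))                     # 40.0: 40 dB above the knee is compressed to 10
print(float(basilar_io_db(70.0, linearization=1.0)))  # 70.0: fully linearized response
```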

  6. Cortical and Sensory Causes of Individual Differences in Selective Attention Ability among Listeners with Normal Hearing Thresholds

    Science.gov (United States)

    Shinn-Cunningham, Barbara

    2017-01-01

    Purpose: This review provides clinicians with an overview of recent findings relevant to understanding why listeners with normal hearing thresholds (NHTs) sometimes suffer from communication difficulties in noisy settings. Method: The results from neuroscience and psychoacoustics are reviewed. Results: In noisy settings, listeners focus their…

  7. Informational Masking and Spatial Hearing in Listeners with and without Unilateral Hearing Loss

    Science.gov (United States)

    Rothpletz, Ann M.; Wightman, Frederic L.; Kistler, Doris J.

    2012-01-01

    Purpose: This study assessed selective listening for speech in individuals with and without unilateral hearing loss (UHL) and the potential relationship between spatial release from informational masking and localization ability in listeners with UHL. Method: Twelve adults with UHL and 12 normal-hearing controls completed a series of monaural and…

  8. Comparison of Reading Literacy in Hearing Impaired and Normal Hearing Students

    Directory of Open Access Journals (Sweden)

    Dr. Ali Asghar Kakojoibari

    2011-06-01

    Background and Aim: Listening, speaking, reading, and writing are considered the language skills. These skills are in direct relation with each other. Listening is the first skill learnt by the individual through development; if damaged by hearing impairment, it can cause serious deficits in the other language skills. The goal of our research was to study the effect of hearing loss on reading literacy in hearing-impaired students in comparison with normal-hearing students. Methods: The study was performed using the examination booklets of the Progress in International Reading Literacy Study (PIRLS) 2001. 119 hearing-impaired students at the 4th grade primary school, last year of guidance school, and last year of high school levels in schools providing exceptional student education were included. These individuals were compared to 46 normal-hearing 4th grade students of ordinary primary schools. Comparative statistical analysis was performed using the t-test. Results: Reading literacy and understanding of literary content differed significantly between normal-hearing and hearing-impaired students (p<0.05), except for those at the high school level with moderate hearing loss. There was also a significant difference between normal-hearing and hearing-impaired students in understanding of informational content (p=0.03). Conclusion: Hearing loss has a negative effect on reading literacy. Consequently, curriculum change and evolution of educational programs in exceptional centers is needed in order to promote reading literacy and to enhance residual hearing

  9. Externalization versus Internalization of Sound in Normal-hearing and Hearing-impaired Listeners

    DEFF Research Database (Denmark)

    Ohl, Björn; Laugesen, Søren; Buchholz, Jörg

    2010-01-01

    The externalization of sound, i.e. the perception of auditory events as being located outside of the head, is a natural phenomenon for normal-hearing listeners when perceiving sound coming from a distant physical sound source. It is potentially useful for hearing in background noise......, but the relevant cues might be distorted by a hearing impairment and also by the processing of the incoming sound through hearing aids. In this project, two intuitive tests in natural real-life surroundings were developed, which capture the limits of the perception of externalization. For this purpose...

  10. Investigation of in-vehicle speech intelligibility metrics for normal hearing and hearing impaired listeners

    Science.gov (United States)

    Samardzic, Nikolina

    The effectiveness of in-vehicle speech communication can be a good indicator of the perception of the overall vehicle quality and customer satisfaction. Currently available speech intelligibility metrics do not account in their procedures for essential parameters needed for a complete and accurate evaluation of in-vehicle speech intelligibility. These include the directivity and the distance of the talker with respect to the listener, binaural listening, hearing profile of the listener, vocal effort, and multisensory hearing. In the first part of this research the effectiveness of in-vehicle application of these metrics is investigated in a series of studies to reveal their shortcomings, including a wide range of scores resulting from each of the metrics for a given measurement configuration and vehicle operating condition. In addition, the nature of a possible correlation between the scores obtained from each metric is unknown. The metrics and the subjective perception of speech intelligibility using, for example, the same speech material have not been compared in literature. As a result, in the second part of this research, an alternative method for speech intelligibility evaluation is proposed for use in the automotive industry by utilizing a virtual reality driving environment for ultimately setting targets, including the associated statistical variability, for future in-vehicle speech intelligibility evaluation. The Speech Intelligibility Index (SII) was evaluated at the sentence Speech Reception Threshold (sSRT) for various listening situations and hearing profiles using acoustic perception jury testing and a variety of talker and listener configurations and background noise. In addition, the effect of individual sources and transfer paths of sound in an operating vehicle to the vehicle interior sound, specifically their effect on speech intelligibility, was quantified in the framework of the newly developed speech intelligibility evaluation method. Lastly
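    At its core, the SII referenced above is a band-importance-weighted audibility sum: each frequency band's SNR is mapped to an audibility between 0 and 1 over a -15 to +15 dB range and weighted by that band's importance. A simplified sketch (the importance weights below are illustrative, not the ANSI S3.5 tables, and the full standard adds level-distortion and masking corrections omitted here):

```python
def sii_core(band_snr_db, band_importance):
    """Simplified SII: clip each band SNR to [-15, +15] dB, map it linearly
    to an audibility in [0, 1], and sum weighted by band importance
    (importance weights must sum to 1)."""
    assert abs(sum(band_importance) - 1.0) < 1e-9
    total = 0.0
    for snr, imp in zip(band_snr_db, band_importance):
        audibility = min(max((snr + 15.0) / 30.0, 0.0), 1.0)
        total += imp * audibility
    return total

weights = [0.10, 0.20, 0.30, 0.25, 0.15]  # illustrative only, not the standard's values
print(round(sii_core([15.0, 5.0, 0.0, -5.0, -15.0], weights), 3))  # 0.467
```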

  11. Modeling Speech Intelligibility in Hearing Impaired Listeners

    DEFF Research Database (Denmark)

    Scheidiger, Christoph; Jørgensen, Søren; Dau, Torsten

    2014-01-01

    speech, e.g. phase jitter or spectral subtraction. Recent studies predict SI for normal-hearing (NH) listeners based on a signal-to-noise ratio measure in the envelope domain (SNRenv), in the framework of the speech-based envelope power spectrum model (sEPSM, [20, 21]). These models have shown good...... agreement with measured data under a broad range of conditions, including stationary and modulated interferers, reverberation, and spectral subtraction. Despite the advances in modeling intelligibility in NH listeners, a broadly applicable model that can predict SI in hearing-impaired (HI) listeners...... is not yet available. As a first step towards such a model, this study investigates to what extent effects of hearing impairment on SI can be modeled in the sEPSM framework. Preliminary results show that, by only modeling the loss of audibility, the model cannot account for the higher speech reception...

  12. Cortical and Sensory Causes of Individual Differences in Selective Attention Ability Among Listeners With Normal Hearing Thresholds.

    Science.gov (United States)

    Shinn-Cunningham, Barbara

    2017-10-17

    This review provides clinicians with an overview of recent findings relevant to understanding why listeners with normal hearing thresholds (NHTs) sometimes suffer from communication difficulties in noisy settings. The results from neuroscience and psychoacoustics are reviewed. In noisy settings, listeners focus their attention by engaging cortical brain networks to suppress unimportant sounds; they then can analyze and understand an important sound, such as speech, amidst competing sounds. Differences in the efficacy of top-down control of attention can affect communication abilities. In addition, subclinical deficits in sensory fidelity can disrupt the ability to perceptually segregate sound sources, interfering with selective attention, even in listeners with NHTs. Studies of variability in control of attention and in sensory coding fidelity may help to isolate and identify some of the causes of communication disorders in individuals presenting at the clinic with "normal hearing." How well an individual with NHTs can understand speech amidst competing sounds depends not only on the sound being audible but also on the integrity of cortical control networks and the fidelity of the representation of suprathreshold sound. Understanding the root cause of difficulties experienced by listeners with NHTs ultimately can lead to new, targeted interventions that address specific deficits affecting communication in noise. http://cred.pubs.asha.org/article.aspx?articleid=2601617.

  13. Binaural pitch perception in hearing-impaired listeners

    DEFF Research Database (Denmark)

    Dau, Torsten; Santurette, Sébastien; Strelcyk, Olaf

    2007-01-01

    When two white noises differing only in phase in a particular frequency range are presented simultaneously, each to one of our ears, a pitch sensation may be perceived inside the head. This phenomenon, called 'binaural pitch' or 'dichotic pitch', can be produced by frequency-dependent interaural...... phase-difference patterns. The evaluation of these interaural phase differences depends on the functionality of the binaural auditory system and the spectro-temporal information at its input. A melody recognition task was performed in the present study using pure-tone stimuli and six different types of noises...... that can generate a binaural pitch sensation. Normal-hearing listeners and hearing-impaired listeners with different kinds of hearing impairment participated in the experiment....

  14. The role of spectral and temporal cues in voice gender discrimination by normal-hearing listeners and cochlear implant users.

    Science.gov (United States)

    Fu, Qian-Jie; Chinchilla, Sherol; Galvin, John J

    2004-09-01

    The present study investigated the relative importance of temporal and spectral cues in voice gender discrimination and vowel recognition by normal-hearing subjects listening to an acoustic simulation of cochlear implant speech processing and by cochlear implant users. In the simulation, the number of speech processing channels ranged from 4 to 32, thereby varying the spectral resolution; the cutoff frequencies of the channels' envelope filters ranged from 20 to 320 Hz, thereby manipulating the available temporal cues. For normal-hearing subjects, results showed that both voice gender discrimination and vowel recognition scores improved as the number of spectral channels was increased. When only 4 spectral channels were available, voice gender discrimination significantly improved as the envelope filter cutoff frequency was increased from 20 to 320 Hz. For all spectral conditions, increasing the amount of temporal information had no significant effect on vowel recognition. Both voice gender discrimination and vowel recognition scores were highly variable among implant users. The performance of cochlear implant listeners was similar to that of normal-hearing subjects listening to comparable speech processing (4-8 spectral channels). The results suggest that both spectral and temporal cues contribute to voice gender discrimination and that temporal cues are especially important for cochlear implant users to identify the voice gender when there is reduced spectral resolution.
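    The acoustic simulation described in this study, channel vocoding with a variable number of spectral channels and variable envelope-filter cutoffs, follows the standard noise-vocoder recipe: split the signal into analysis bands, extract each band's envelope, limit how fast the envelope can vary, and re-impose it on band-limited noise. A minimal sketch using brick-wall FFT filters (the band edges, cutoff, and sampling rate are illustrative assumptions, not the study's processing parameters):

```python
import numpy as np

def noise_vocode(signal, fs, n_channels=4, env_cutoff_hz=160.0,
                 f_lo=200.0, f_hi=7000.0, seed=0):
    """Noise-vocoder sketch: log-spaced analysis bands (brick-wall FFT
    filters), envelopes via rectification + low-pass at env_cutoff_hz,
    re-imposed on noise carriers filtered to the same bands."""
    rng = np.random.default_rng(seed)
    n = len(signal)
    freqs = np.fft.rfftfreq(n, 1.0 / fs)

    def bandpass(x, lo, hi):
        spec = np.fft.rfft(x)
        spec[(freqs < lo) | (freqs >= hi)] = 0.0
        return np.fft.irfft(spec, n)

    def lowpass(x, cutoff):
        spec = np.fft.rfft(x)
        spec[freqs > cutoff] = 0.0
        return np.fft.irfft(spec, n)

    edges = np.geomspace(f_lo, f_hi, n_channels + 1)
    out = np.zeros(n)
    for lo, hi in zip(edges[:-1], edges[1:]):
        env = lowpass(np.abs(bandpass(signal, lo, hi)), env_cutoff_hz)
        env = np.maximum(env, 0.0)  # clip ringing from the envelope low-pass
        carrier = bandpass(rng.standard_normal(n), lo, hi)
        out += env * carrier
    return out

# 1 s of a 500 Hz tone with 4 Hz amplitude modulation as a stand-in signal
fs = 16000
t = np.arange(fs) / fs
speech_like = np.sin(2 * np.pi * 500 * t) * (1 + np.sin(2 * np.pi * 4 * t))
vocoded = noise_vocode(speech_like, fs)
```

Raising `n_channels` restores spectral detail, while raising `env_cutoff_hz` admits faster temporal cues such as periodicity, mirroring the two manipulations in the study.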

  15. Investigating the Role of Working Memory in Speech-in-noise Identification for Listeners with Normal Hearing.

    Science.gov (United States)

    Füllgrabe, Christian; Rosen, Stuart

    2016-01-01

    With the advent of cognitive hearing science, increased attention has been given to individual differences in cognitive functioning and their explanatory power in accounting for inter-listener variability in understanding speech in noise (SiN). The psychological construct that has received most interest is working memory (WM), representing the ability to simultaneously store and process information. Common lore and theoretical models assume that WM-based processes subtend speech processing in adverse perceptual conditions, such as those associated with hearing loss or background noise. Empirical evidence confirms the association between WM capacity (WMC) and SiN identification in older hearing-impaired listeners. To assess whether WMC also plays a role when listeners without hearing loss process speech in acoustically adverse conditions, we surveyed published and unpublished studies in which the Reading-Span test (a widely used measure of WMC) was administered in conjunction with a measure of SiN identification. The survey revealed little or no evidence for an association between WMC and SiN performance. We also analysed new data from 132 normal-hearing participants sampled from across the adult lifespan (18-91 years), for a relationship between Reading-Span scores and identification of matrix sentences in noise. Performance on both tasks declined with age, and correlated weakly even after controlling for the effects of age and audibility (r = 0.39, p ≤ 0.001, one-tailed). However, separate analyses for different age groups revealed that the correlation was only significant for middle-aged and older groups but not for the young (< 40 years) participants.

  16. Signal-to-background-ratio preferences of normal-hearing listeners as a function of music

    Science.gov (United States)

    Barrett, Jillian G.

    2005-04-01

    The primary purpose of speech is to convey a message. Many factors affect the listener's overall reception, several of which have little to do with the linguistic content itself, but rather with the delivery (e.g., prosody, intonation patterns, pragmatics, paralinguistic cues). Music, however, may convey a message either with or without linguistic content. In instances in which music has lyrics, one cannot assume verbal content will take precedence over sonic properties. Lyric emphasis over other aspects of music cannot be assumed. Singing introduces distortion of the vowel-consonant temporal ratio of speech, emphasizing vowels and de-emphasizing consonants. The phonemic production alterations of singing make it difficult for even those with normal hearing to understand the singer. This investigation was designed to identify singer-to-background-ratio (SBR) preferences for normal-hearing adult listeners (as opposed to SBR levels maximizing speech discrimination ability). Stimuli were derived from three different original songs, each produced in two different genres and sung by six different singers. Singer and genre were the two primary contributors to significant differences in SBR preferences, though results clearly indicate genre, style, and singer interact in different combinations for each song, each singer, and for each subject in an unpredictable manner.

  17. Effects of Hearing Impairment and Hearing Aid Amplification on Listening Effort: A Systematic Review.

    Science.gov (United States)

    Ohlenforst, Barbara; Zekveld, Adriana A; Jansma, Elise P; Wang, Yang; Naylor, Graham; Lorens, Artur; Lunner, Thomas; Kramer, Sophia E

    Recommendations Assessment, Development, and Evaluation Working Group guidelines. We tested the statistical evidence across studies with nonparametric tests. The testing revealed only one consistent effect across studies, namely that listening effort was higher for hearing-impaired listeners compared with normal-hearing listeners (Q1) as measured by electroencephalographic measures. For all other studies, the evidence across studies failed to reveal consistent effects on listening effort. In summary, we could only identify scientific evidence from physiological measurement methods, suggesting that hearing impairment increases listening effort during speech perception (Q1). There was no scientific finding across studies indicating that hearing aid amplification decreases listening effort (Q2). In general, there were large differences in the study population, the control groups and conditions, and the outcome measures applied between the studies included in this review. The results of this review indicate that published listening effort studies lack consistency, lack standardization across studies, and have insufficient statistical power. The findings underline the need for a common conceptual framework for listening effort to address the current shortcomings.

  18. Postural control assessment in students with normal hearing and sensorineural hearing loss.

    Science.gov (United States)

    Melo, Renato de Souza; Lemos, Andrea; Macky, Carla Fabiana da Silva Toscano; Raposo, Maria Cristina Falcão; Ferraz, Karla Mônica

    2015-01-01

    Children with sensorineural hearing loss can present with instabilities in postural control, possibly as a consequence of hypoactivity of their vestibular system due to internal ear injury. To assess postural control stability in students with normal hearing (i.e., listeners) and with sensorineural hearing loss, and to compare data between groups, considering gender and age. This cross-sectional study evaluated the postural control of 96 students, 48 listeners and 48 with sensorineural hearing loss, aged between 7 and 18 years, of both genders, through the Balance Error Scoring System scale. This tool assesses postural control in two sensory conditions: stable surface and unstable surface. For statistical data analysis between groups, the Wilcoxon test for paired samples was used. Students with hearing loss showed more instability in postural control than those with normal hearing, with significant differences between groups in both conditions (stable surface, unstable surface) (p < 0.05), demonstrating poorer postural control compared to normal-hearing students of the same gender and age. Copyright © 2014 Associação Brasileira de Otorrinolaringologia e Cirurgia Cérvico-Facial. Published by Elsevier Editora Ltda. All rights reserved.

  19. Complex-Tone Pitch Discrimination in Listeners With Sensorineural Hearing Loss

    DEFF Research Database (Denmark)

    Bianchi, Federica; Fereczkowski, Michal; Zaar, Johannes

    2016-01-01

    estimated in the same listeners. The estimated reduction of cochlear compression was significantly correlated with the increase in the F0DL ratio, while no correlation was found with filter bandwidth. The effects of degraded frequency selectivity and loss of compression were considered in a simplified......-discrimination performance in listeners with SNHL. Pitch-discrimination thresholds were obtained for 14 normal-hearing (NH) and 10 hearing-impaired (HI) listeners for sine-phase (SP) and random-phase (RP) complex tones. When all harmonics were unresolved, the HI listeners performed, on average, worse than NH listeners...... in the RP condition but similarly to NH listeners in the SP condition. The increase in pitch-discrimination performance for the SP relative to the RP condition (F0DL ratio) was significantly larger in the HI as compared with the NH listeners. Cochlear compression and auditory-filter bandwidths were...

  20. Auditory stream segregation with multi-tonal complexes in hearing-impaired listeners

    Science.gov (United States)

    Rogers, Deanna S.; Lentz, Jennifer J.

    2004-05-01

    The ability to segregate sounds into different streams was investigated in normal-hearing and hearing-impaired listeners. Fusion and fission boundaries were measured using 6-tone complexes with tones equally spaced in log frequency. An ABA-ABA- sequence was used in which A represents a multitone complex ranging from either 250-1000 Hz (low-frequency region) or 1000-4000 Hz (high-frequency region). B also represents a multitone complex with the same log spacing as A. Multitonal complexes were 100 ms in duration with 20-ms ramps, and "-" represents a silent interval of 100 ms. To measure the fusion boundary, the first tone of the B stimulus was either 375 Hz (low) or 1500 Hz (high) and shifted downward in frequency with each progressive ABA triplet until the listener pressed a button indicating that a "galloping" rhythm was heard. When measuring the fission boundary, the first tone of the B stimulus was 252 or 1030 Hz and shifted upward with each triplet; listeners then pressed a button when the "galloping" rhythm ended. Data suggest that hearing-impaired subjects have different fission and fusion boundaries than normal-hearing listeners. These data will be discussed in terms of both peripheral and central factors.

  1. Spectral and binaural loudness summation for hearing-impaired listeners.

    Science.gov (United States)

    Oetting, Dirk; Hohmann, Volker; Appell, Jens-E; Kollmeier, Birger; Ewert, Stephan D

    2016-05-01

    Sensorineural hearing loss typically results in a steepened loudness function and a reduced dynamic range from elevated thresholds to uncomfortably loud levels for narrowband and broadband signals. Restoring narrowband loudness perception for hearing-impaired (HI) listeners can lead to overly loud perception of broadband signals and it is unclear how binaural presentation affects loudness perception in this case. Here, loudness perception quantified by categorical loudness scaling for nine normal-hearing (NH) and ten HI listeners was compared for signals with different bandwidth and different spectral shape in monaural and in binaural conditions. For the HI listeners, frequency- and level-dependent amplification was used to match the narrowband monaural loudness functions of the NH listeners. The average loudness functions for NH and HI listeners showed good agreement for monaural broadband signals. However, HI listeners showed substantially greater loudness for binaural broadband signals than NH listeners: on average a 14.1 dB lower level was required to reach "very loud" (range 30.8 to -3.7 dB). Overall, with narrowband loudness compensation, a given binaural loudness for broadband signals above "medium loud" was reached at systematically lower levels for HI than for NH listeners. Such increased binaural loudness summation was not found for loudness categories below "medium loud" or for narrowband signals. Large individual variations in the increased loudness summation were observed and could not be explained by the audiogram or the narrowband loudness functions. Copyright © 2016 Elsevier B.V. All rights reserved.

  2. Headphone listening habits and hearing thresholds in Swedish adolescents

    Directory of Open Access Journals (Sweden)

    Stephen E Widen

    2017-01-01

    Introduction: The aim of this study was to investigate self-reported hearing and portable music listening habits, measured hearing function, and music exposure levels in Swedish adolescents. The study was divided into two parts. Materials and Methods: The first part included 280 adolescents, who were 17 years of age, and focused on self-reported data on subjective hearing problems and listening habits regarding portable music players. From this group, 50 adolescents volunteered to participate in Part II of the study, which focused on audiological measurements and measured listening volume. Results: The results indicated that longer lifetime exposure in years and increased listening frequency were associated with poorer hearing thresholds and more self-reported hearing problems. A tendency for louder listening volumes to be associated with poorer hearing thresholds was also found. Women reported more subjective hearing problems compared with men but exhibited better hearing thresholds. In contrast, men reported more use of personal music devices, and they listened at higher volumes. Discussion: Additionally, the study shows that adolescents listening for ≥3 h at every occasion were more likely to have tinnitus. Those listening at ≥85 dB LAeq,FF and listening every day exhibited poorer mean hearing thresholds, reported more subjective hearing problems, and listened more frequently in school and while sleeping. Conclusion: Although the vast majority listened at moderate sound levels and for shorter periods of time, the study also indicates that there is a subgroup (10%) that listens between 90 and 100 dB for longer periods of time, even during sleep. This group might be at risk for developing future noise-induced hearing impairments.
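    Exposure figures like the ≥85 dB LAeq reported above are energy-equivalent continuous levels; combining several listening episodes into a single LAeq is an energy average over time, not an arithmetic mean of decibel values. A small sketch (the episode levels and durations are made-up examples):

```python
import math

def laeq_db(levels_db, hours):
    """Equivalent continuous level: total A-weighted sound energy across the
    episodes divided by total duration, expressed in dB. Because the average
    is taken over linear energy, loud episodes dominate the result."""
    total_hours = sum(hours)
    energy = sum(t * 10.0 ** (l / 10.0) for l, t in zip(levels_db, hours))
    return 10.0 * math.log10(energy / total_hours)

# e.g. 3 h of music at 95 dB plus 5 h of quiet listening at 75 dB
print(round(laeq_db([95.0, 75.0], [3.0, 5.0]), 1))  # 90.8
```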

  3. Spatial selective auditory attention in the presence of reverberant energy: individual differences in normal-hearing listeners.

    Science.gov (United States)

    Ruggles, Dorea; Shinn-Cunningham, Barbara

    2011-06-01

    Listeners can selectively attend to a desired target by directing attention to known target source features, such as location or pitch. Reverberation, however, reduces the reliability of the cues that allow a target source to be segregated and selected from a sound mixture. Given this, it is likely that reverberant energy interferes with selective auditory attention. Anecdotal reports suggest that the ability to focus spatial auditory attention degrades even with early aging, yet there is little evidence that middle-aged listeners have behavioral deficits on tasks requiring selective auditory attention. The current study was designed to look for individual differences in selective attention ability and to see if any such differences correlate with age. Normal-hearing adults, ranging in age from 18 to 55 years, were asked to report a stream of digits located directly ahead in a simulated rectangular room. Simultaneous, competing masker digit streams were simulated at locations 15° left and right of center. The level of reverberation was varied to alter task difficulty by interfering with localization cues (increasing localization blur). Overall, performance was best in the anechoic condition and worst in the high-reverberation condition. Listeners nearly always reported a digit from one of the three competing streams, showing that reverberation did not render the digits unintelligible. Importantly, inter-subject differences were extremely large. These differences, however, were not significantly correlated with age, memory span, or hearing status. These results show that listeners with audiometrically normal pure tone thresholds differ in their ability to selectively attend to a desired source, a task important in everyday communication. Further work is necessary to determine if these differences arise from differences in peripheral auditory function or in more central function.

  4. The Effects of Hearing Aid Directional Microphone and Noise Reduction Processing on Listening Effort in Older Adults with Hearing Loss.

    Science.gov (United States)

    Desjardins, Jamie L

    2016-01-01

    Older listeners with hearing loss may exert more cognitive resources to maintain a level of listening performance similar to that of younger listeners with normal hearing. Unfortunately, this increase in cognitive load, which is often conceptualized as increased listening effort, may come at the cost of cognitive processing resources that might otherwise be available for other tasks. The purpose of this study was to evaluate the independent and combined effects of a hearing aid directional microphone and a noise reduction (NR) algorithm on reducing the listening effort older listeners with hearing loss expend on a speech-in-noise task. Participants were fitted with study worn commercially available behind-the-ear hearing aids. Listening effort on a sentence recognition in noise task was measured using an objective auditory-visual dual-task paradigm. The primary task required participants to repeat sentences presented in quiet and in a four-talker babble. The secondary task was a digital visual pursuit rotor-tracking test, for which participants were instructed to use a computer mouse to track a moving target around an ellipse that was displayed on a computer screen. Each of the two tasks was presented separately and concurrently at a fixed overall speech recognition performance level of 50% correct with and without the directional microphone and/or the NR algorithm activated in the hearing aids. In addition, participants reported how effortful it was to listen to the sentences in quiet and in background noise in the different hearing aid listening conditions. Fifteen older listeners with mild sloping to severe sensorineural hearing loss participated in this study. Listening effort in background noise was significantly reduced with the directional microphones activated in the hearing aids. However, there was no significant change in listening effort with the hearing aid NR algorithm compared to no noise processing. Correlation analysis between objective and self

  5. Effects of dynamic range compression on spatial selective auditory attention in normal-hearing listeners.

    Science.gov (United States)

    Schwartz, Andrew H; Shinn-Cunningham, Barbara G

    2013-04-01

    Many hearing aids introduce compressive gain to accommodate the reduced dynamic range that often accompanies hearing loss. However, natural sounds produce complicated temporal dynamics in hearing aid compression, as gain is driven by whichever source dominates at a given moment. Moreover, independent compression at the two ears can introduce fluctuations in the interaural level differences (ILDs) important for spatial perception. While independent compression can interfere with spatial perception of sound, it does not always interfere with localization accuracy or speech identification. Here, normal-hearing listeners reported a target message played simultaneously with two spatially separated masker messages. We measured the amount of spatial separation required between the target and maskers for subjects to perform at threshold in this task. Fast, syllabic compression that was independent at the two ears increased the required spatial separation, but linking the compressors to provide identical gain to both ears (preserving ILDs) eliminated much of the deficit caused by fast, independent compression. Effects were less clear for slower compression. Percent-correct performance was lower with independent compression, but only for small spatial separations. These results may help explain differences in previous reports of the effect of compression on spatial perception of sound.
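
    The ILD effect described above can be illustrated with a toy static compressor. The threshold and ratio below are illustrative assumptions, not the study's parameters:

```python
# Toy static compressor: threshold and ratio are illustrative
# assumptions, not the parameters used in the study.
def compress(level_db, threshold=50.0, ratio=3.0):
    """Return the output level (dB) after compressive gain above threshold."""
    if level_db <= threshold:
        return level_db
    return threshold + (level_db - threshold) / ratio

# A source off to the left arrives louder at the left ear; the
# interaural level difference (ILD) signals its location.
left_db, right_db = 70.0, 60.0
ild_in = left_db - right_db  # 10 dB at the input

# Independent compression: each ear computes its own gain, which
# compresses the ILD along with the levels.
ild_independent = compress(left_db) - compress(right_db)

# Linked compression: both ears receive the gain derived from the
# louder ear, so the ILD survives intact.
shared_gain = compress(left_db) - left_db
ild_linked = (left_db + shared_gain) - (right_db + shared_gain)

print(ild_in, ild_independent, ild_linked)
```

    With these assumed settings the 10-dB input ILD shrinks to about 3.3 dB under independent compression but is fully preserved when the compressors are linked, which is the mechanism the study manipulates.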

  6. Hearing aid processing of loud speech and noise signals: Consequences for loudness perception and listening comfort

    DEFF Research Database (Denmark)

    Schmidt, Erik

    2007-01-01

    Hearing aid processing of loud speech and noise signals: Consequences for loudness perception and listening comfort. Sound processing in hearing aids is determined by the fitting rule. The fitting rule describes how the hearing aid should amplify speech and sounds in the surroundings, such that they become audible again for the hearing-impaired person. The general goal is to place all sounds within the hearing aid users' audible range, such that speech intelligibility and listening comfort become as good as possible. Amplification strategies in hearing aids are in many cases based on empirical … sounds, has found that both normal-hearing and hearing-impaired listeners prefer loud sounds to be closer to the most comfortable loudness level than suggested by common non-linear fitting rules. During this project, two listening experiments were carried out. In the first experiment, hearing aid users …

  7. Efficient estimates of cochlear hearing loss parameters in individual listeners

    DEFF Research Database (Denmark)

    Fereczkowski, Michal; Jepsen, Morten Løve; Dau, Torsten

    2013-01-01

    It has been suggested that the level corresponding to the knee-point of the basilar membrane (BM) input/output (I/O) function can be used to estimate the amount of inner- and outer-hair-cell loss (IHL, OHL) in listeners with a moderate cochlear hearing impairment (Plack et al., 2004). According to Jepsen and Dau (2011), IHL + OHL = HLT [dB], where HLT stands for the total hearing loss. Hence, having estimates of the total hearing loss and the OHL, one can estimate the IHL. In the present study, results from forward-masking experiments based on temporal masking curves (TMC; Nelson et al., 2001) … estimates of the knee-point level. Further, it is explored whether it is possible to estimate the compression ratio using only on-frequency TMCs. Ten normal-hearing and ten hearing-impaired listeners (with mild-to-moderate sensorineural hearing loss) were tested at 1, 2, and 4 kHz. The results showed …

  8. Self-masking: Listening during vocalization. Normal hearing.

    Science.gov (United States)

    Borg, Erik; Bergkvist, Christina; Gustafsson, Dan

    2009-06-01

    What underlying mechanisms are involved in the ability to talk and listen simultaneously and what role does self-masking play under conditions of hearing impairment? The purpose of the present series of studies is to describe a technique for assessment of masked thresholds during vocalization, to describe normative data for males and females, and to focus on hearing impairment. The masking effect of vocalized [a:] on narrow-band noise pulses (250-8000 Hz) was studied using the maximum vocalization method. An amplitude-modulated series of sound pulses, which sounded like a steam engine, was masked until the criterion of halving the perceived pulse rate was reached. For masking of continuous reading, a just-follow-conversation criterion was applied. Intra-session test-retest reproducibility and inter-session variability were calculated. The results showed that female voices were more efficient in masking high frequency noise bursts than male voices and more efficient in masking both a male and a female test reading. The male had to vocalize 4 dBA louder than the female to produce the same masking effect on the test reading. It is concluded that the method is relatively simple to apply and has small intra-session and fair inter-session variability. Interesting gender differences were observed.

  9. The effect of hearing aid noise reduction on listening effort in hearing-impaired adults.

    Science.gov (United States)

    Desjardins, Jamie L; Doherty, Karen A

    2014-01-01

    The purpose of the present study was to evaluate the effect of a noise-reduction (NR) algorithm on the listening effort hearing-impaired participants expend on a speech in noise task. Twelve hearing-impaired listeners fitted with behind-the-ear hearing aids with a fast-acting modulation-based NR algorithm participated in this study. A dual-task paradigm was used to measure listening effort with and without the NR enabled in the hearing aid. The primary task was a sentence-in-noise task presented at fixed overall speech performance levels of 76% (moderate listening condition) and 50% (difficult listening condition) correct performance, and the secondary task was a visual-tracking test. Participants also completed measures of working memory (Reading Span test), and processing speed (Digit Symbol Substitution Test) ability. Participants' speech recognition in noise scores did not significantly change with the NR algorithm activated in the hearing aid in either listening condition. The NR algorithm significantly decreased listening effort, but only in the more difficult listening condition. Last, there was a tendency for participants with faster processing speeds to expend less listening effort with the NR algorithm when listening to speech in background noise in the difficult listening condition. The NR algorithm reduced the listening effort adults with hearing loss must expend to understand speech in noise.
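
    The dual-task logic used in studies like this one can be sketched in a few lines. The tracking-error numbers below are hypothetical, chosen only to show how an effort index is derived from secondary-task performance:

```python
# Listening effort indexed as the decline in secondary-task
# performance from single-task baseline to dual-task conditions.
def effort_index(baseline_error, dual_task_error):
    """Percent increase in visual-tracking error under dual-task load."""
    return 100.0 * (dual_task_error - baseline_error) / baseline_error

# Hypothetical visual-tracking errors (arbitrary units): a larger
# dual-task error relative to baseline indicates more effort spent
# on the concurrent listening task.
effort_nr_off = effort_index(10.0, 16.0)  # noise reduction disabled
effort_nr_on = effort_index(10.0, 13.0)   # noise reduction enabled

print(effort_nr_off, effort_nr_on)
```

    A smaller index with the NR algorithm enabled, as in this hypothetical comparison, is the pattern the study reports for the difficult listening condition.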

  10. Relation between temporal envelope coding, pitch discrimination, and compression estimates in listeners with sensorineural hearing loss

    DEFF Research Database (Denmark)

    Bianchi, Federica; Santurette, Sébastien; Fereczkowski, Michal

    2015-01-01

    Recent physiological studies in animals showed that noise-induced sensorineural hearing loss (SNHL) increased the amplitude of envelope coding in single auditory-nerve fibers. The present study investigated whether SNHL in human listeners was associated with enhanced temporal envelope coding, whether this enhancement affected pitch discrimination performance, and whether loss of compression following SNHL was a potential factor in envelope coding enhancement. Envelope processing was assessed in normal-hearing (NH) and hearing-impaired (HI) listeners in a behavioral amplitude- … resolvability. For the unresolved conditions, all five HI listeners performed as well as or better than NH listeners with matching musical experience. Two HI listeners showed lower amplitude-modulation detection thresholds than NH listeners for low modulation rates, and one of these listeners also showed a loss …

  11. Modeling auditory processing and speech perception in hearing-impaired listeners

    DEFF Research Database (Denmark)

    Jepsen, Morten Løve

    A better understanding of how the human auditory system represents and analyzes sounds, and how hearing impairment affects such processing, is of great interest for researchers in the fields of auditory neuroscience, audiology, and speech communication, as well as for applications in hearing-instrument and speech technology. In this thesis, the primary focus was on the development and evaluation of a computational model of human auditory signal processing and perception. The model was initially designed to simulate the normal-hearing auditory system, with particular focus on the nonlinear processing … in a diagnostic rhyme test. The framework was constructed such that discrimination errors originating from the front-end and the back-end were separated. The front-end was fitted to individual listeners with cochlear hearing loss according to non-speech data, and speech data were obtained in the same listeners …

  12. Signal-to-background ratio preferences of normal-hearing listeners as a function of music

    Science.gov (United States)

    Barrett, Jillian Gallant

    The purpose of this study was to identify listeners' signal-to-background-ratio (SBR) preference levels for vocal music and to investigate whether or not SBR differences existed for different music genres. The "signal" was the singer's voice, and the "background" was the accompanying music. Three songs were each produced in two different genres (a total of 6 genres represented). Each song was performed by three male and three female singers. Analyses addressed influences of musical genre, singing style, and singer timbre on listeners' SBR choices. Fifty-three normal-hearing students from California State University, Northridge, ranging in age from 20 to 52 years, participated as subjects. Subjects adjusted the overall music loudness to a comfortable listening level and manipulated a second gain control that affected only the singer's voice. Subjects listened to 72 stimuli and adjusted the singer's voice to the level they felt sounded appropriate in comparison to the background music. Singer and Genre were the two primary contributors to significant differences in subjects' SBR preferences, although the results clearly indicate that Genre, Style, and Singer interact in different combinations under different conditions. SBR differences for each song, each singer, and each subject did not occur in a predictable manner, supporting the hypothesis that SBR preferences are neither fixed nor dependent merely upon music application or setting. Further investigations regarding the psychoacoustical bases responsible for differences in SBR preferences are warranted.
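
    For reference, an SBR of this kind is conventionally quantified as the RMS level difference in dB between the voice and accompaniment tracks; the signals below are synthetic stand-ins, not the study's stimuli:

```python
import numpy as np

def sbr_db(voice, background):
    """Signal-to-background ratio: RMS voice level re RMS background level, in dB."""
    rms = lambda x: float(np.sqrt(np.mean(np.square(x))))
    return 20.0 * np.log10(rms(voice) / rms(background))

# Synthetic example: a "voice" with twice the RMS amplitude of the
# "background" sits about 6 dB above it (20*log10(2) ~= 6.02 dB).
rng = np.random.default_rng(0)
background = rng.standard_normal(48000)
voice = 2.0 * rng.standard_normal(48000)
print(sbr_db(voice, background))
```

    Turning the singer's gain control in the experiment amounts to shifting this ratio up or down until the balance "sounds appropriate".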

  13. Why middle-aged listeners have trouble hearing in everyday settings.

    Science.gov (United States)

    Ruggles, Dorea; Bharadwaj, Hari; Shinn-Cunningham, Barbara G

    2012-08-07

    Anecdotally, middle-aged listeners report difficulty conversing in social settings, even when they have normal audiometric thresholds [1-3]. Moreover, young adult listeners with "normal" hearing vary in their ability to selectively attend to speech amid similar streams of speech. Ignoring age, these individual differences correlate with physiological differences in temporal coding precision present in the auditory brainstem, suggesting that the fidelity of encoding of suprathreshold sound helps explain individual differences [4]. Here, we revisit the conundrum of whether early aging influences an individual's ability to communicate in everyday settings. Although absolute selective attention ability is not predicted by age, reverberant energy interferes more with selective attention as age increases. Breaking the brainstem response down into components corresponding to coding of stimulus fine structure and envelope, we find that age alters which brainstem component predicts performance. Specifically, middle-aged listeners appear to rely heavily on temporal fine structure, which is more disrupted by reverberant energy than temporal envelope structure is. In contrast, the fidelity of envelope cues predicts performance in younger adults. These results hint that temporal envelope cues influence spatial hearing in reverberant settings more than is commonly appreciated and help explain why middle-aged listeners have particular difficulty communicating in daily life. Copyright © 2012 Elsevier Ltd. All rights reserved.

  14. Relationship between spectrotemporal modulation detection and music perception in normal-hearing, hearing-impaired, and cochlear implant listeners.

    Science.gov (United States)

    Choi, Ji Eun; Won, Jong Ho; Kim, Cheol Hee; Cho, Yang-Sun; Hong, Sung Hwa; Moon, Il Joon

    2018-01-15

    The objective of this study was to examine the relationship between spectrotemporal modulation (STM) sensitivity and the ability to perceive music. Ten normal-hearing (NH) listeners, ten hearing aid (HA) users with moderate hearing loss, and ten cochlear implant (CI) users participated in this study. Three different types of psychoacoustic tests, including spectral modulation detection (SMD), temporal modulation detection (TMD), and STM detection, were administered. Performance on these psychoacoustic tests was compared to music perception abilities. In addition, psychoacoustic mechanisms involved in the improvement of music perception through HAs were evaluated. Music perception abilities in unaided and aided conditions were measured for HA users. After that, HA benefit for music perception was correlated with aided psychoacoustic performance. The STM detection study showed that a combination of spectral and temporal modulation cues was more strongly correlated with music perception abilities than spectral or temporal modulation cues measured separately. No correlation was found between music perception performance and SMD threshold or TMD threshold in each group. Also, HA benefits for melody and timbre identification were significantly correlated with a combination of spectral and temporal envelope cues through HAs.

  15. Is it possible to improve hearing by listening training?

    DEFF Research Database (Denmark)

    Reuter, Karen

    2011-01-01

    Different listening training methods exist, which are based on the assumption that people can be trained to process incoming sound more effectively. A distinction is often drawn between hearing (the passive reception of sound) and listening (the active process of tuning in to those sounds we wish to receive). Listening training methods claim to benefit a wide variety of people, e.g. people with learning disabilities, developmental delay, or concentration problems. Sound therapists report improved hearing/listening curves following listening training programs. No independent research study has confirmed these results using standardized hearing test measures. Dr. Alfred Tomatis, a French ear, nose, and throat doctor, developed the Tomatis listening training in the 1950s. The principles of the Tomatis method are described. A literature review has been conducted to investigate whether the Tomatis method …

  16. Talker Differences in Clear and Conversational Speech: Perceived Sentence Clarity for Young Adults with Normal Hearing and Older Adults with Hearing Loss

    Science.gov (United States)

    Ferguson, Sarah Hargus; Morgan, Shae D.

    2018-01-01

    Purpose: The purpose of this study is to examine talker differences for subjectively rated speech clarity in clear versus conversational speech, to determine whether ratings differ for young adults with normal hearing (YNH listeners) and older adults with hearing impairment (OHI listeners), and to explore effects of certain talker characteristics…

  17. Music Listening Behavior, Health, Hearing and Otoacoustic Emission Levels

    Directory of Open Access Journals (Sweden)

    Kathleen Hutchinson Marron

    2014-07-01

    This study examined the relationship between hearing levels, otoacoustic emission levels, and listening habits related to the use of personal listening devices (PLDs) in adults with varying health-related fitness. Duration of PLD use was estimated and volume level was directly measured. Biomarkers of health-related fitness were co-factored into the analyses. 115 subjects aged 18–84 participated in this study. Subjects were divided into two sub-groups: PLD users and non-PLD users. Both groups completed audiological and health-related fitness tests. Due to the mismatch in the mean age of the PLD-user versus the non-PLD-user groups, age-adjusted statistics were performed to determine factors that contributed to hearing levels. Age was the most significant predictor of hearing levels across listening and health-related fitness variables. PLD user status did not impact hearing measures, yet PLD users who listened less than 8 hours per week at intensities of less than 80 dBA were found to have better hearing. Other variables found to be associated with hearing levels included: years listening to PLDs, number of noise environments, and use of ear protection. Finally, a healthy waist-to-hip ratio was a significant predictor of better hearing, while body mass index approached, but did not reach, statistical significance.

  18. Behavioral measures of cochlear compression and temporal resolution as predictors of speech masking release in hearing-impaired listeners

    Science.gov (United States)

    Gregan, Melanie J.; Nelson, Peggy B.; Oxenham, Andrew J.

    2013-01-01

    Hearing-impaired (HI) listeners often show less masking release (MR) than normal-hearing listeners when temporal fluctuations are imposed on a steady-state masker, even when accounting for overall audibility differences. This difference may be related to a loss of cochlear compression in HI listeners. Behavioral estimates of compression, using temporal masking curves (TMCs), were compared with MR for band-limited (500–4000 Hz) speech and pure tones in HI listeners and age-matched, noise-masked normal-hearing (NMNH) listeners. Compression and pure-tone MR estimates were made at 500, 1500, and 4000 Hz. The amount of MR was defined as the difference in performance between steady-state and 10-Hz square-wave-gated speech-shaped noise. In addition, temporal resolution was estimated from the slope of the off-frequency TMC. No significant relationship was found between estimated cochlear compression and MR for either speech or pure tones. NMNH listeners had significantly steeper off-frequency temporal masking recovery slopes than did HI listeners, and a small but significant correlation was observed between poorer temporal resolution and reduced MR for speech. The results suggest either that the effects of hearing impairment on MR are not determined primarily by changes in peripheral compression, or that the TMC does not provide a sufficiently reliable measure of cochlear compression. PMID:24116426

  19. How hearing aids, background noise, and visual cues influence objective listening effort.

    Science.gov (United States)

    Picou, Erin M; Ricketts, Todd A; Hornsby, Benjamin W Y

    2013-09-01

    The purpose of this article was to evaluate factors that influence the listening effort experienced when processing speech for people with hearing loss. Specifically, the change in listening effort resulting from introducing hearing aids, visual cues, and background noise was evaluated. An additional exploratory aim was to investigate the possible relationships between the magnitude of listening effort change and individual listeners' working memory capacity, verbal processing speed, or lipreading skill. Twenty-seven participants with bilateral sensorineural hearing loss were fitted with linear behind-the-ear hearing aids and tested using a dual-task paradigm designed to evaluate listening effort. The primary task was monosyllable word recognition and the secondary task was a visual reaction time task. The test conditions varied by hearing aids (unaided, aided), visual cues (auditory-only, auditory-visual), and background noise (present, absent). For all participants, the signal-to-noise ratio was set individually so that speech recognition performance in noise was approximately 60% in both the auditory-only and auditory-visual conditions. In addition to measures of listening effort, working memory capacity, verbal processing speed, and lipreading ability were measured using the Automated Operational Span Task, a Lexical Decision Task, and the Revised Shortened Utley Lipreading Test, respectively. In general, the effects measured using the objective measure of listening effort were small (~10 msec). Results indicated that background noise increased listening effort and hearing aids reduced listening effort, while visual cues did not influence listening effort. With regard to the individual variables, verbal processing speed was negatively correlated with hearing aid benefit for listening effort; faster processors were less likely to derive benefit. Working memory capacity, verbal processing speed, and lipreading ability were related to benefit from visual cues. No …

  20. Auditory Verbal Working Memory as a Predictor of Speech Perception in Modulated Maskers in Listeners With Normal Hearing.

    Science.gov (United States)

    Millman, Rebecca E; Mattys, Sven L

    2017-05-24

    Background noise can interfere with our ability to understand speech. Working memory capacity (WMC) has been shown to contribute to the perception of speech in modulated noise maskers. WMC has been assessed with a variety of auditory and visual tests, often pertaining to different components of working memory. This study assessed the relationship between speech perception in modulated maskers and components of auditory verbal working memory (AVWM) over a range of signal-to-noise ratios. Speech perception in noise and AVWM were measured in 30 listeners (age range 31-67 years) with normal hearing. AVWM was estimated using forward digit recall, backward digit recall, and nonword repetition. After controlling for the effects of age and average pure-tone hearing threshold, speech perception in modulated maskers was related to individual differences in the phonological component of working memory (as assessed by nonword repetition), but only in the least favorable signal-to-noise ratio. The executive component of working memory (as assessed by backward digit recall) was not predictive of speech perception in any condition. AVWM is predictive of the ability to benefit from temporal dips in modulated maskers: listeners with greater phonological WMC are better able to correctly identify sentences in modulated noise backgrounds.

  1. Role of short-time acoustic temporal fine structure cues in sentence recognition for normal-hearing listeners.

    Science.gov (United States)

    Hou, Limin; Xu, Li

    2018-02-01

    Short-time processing was employed to manipulate the amplitude, bandwidth, and temporal fine structure (TFS) in sentences. Fifty-two native-English-speaking, normal-hearing listeners participated in four sentence-recognition experiments. Results showed that recovered envelope (E) played an important role in speech recognition when the bandwidth was > 1 equivalent rectangular bandwidth. Removing TFS drastically reduced sentence recognition. Preserving TFS greatly improved sentence recognition when amplitude information was available at a rate ≥ 10 Hz (i.e., time segment ≤ 100 ms). Therefore, the short-time TFS facilitates speech perception together with the recovered E and works with the coarse amplitude cues to provide useful information for speech recognition.
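
    The envelope/TFS decomposition this kind of work manipulates is conventionally obtained from the Hilbert transform. A minimal sketch, assuming SciPy is available and using an illustrative AM tone rather than the study's speech materials:

```python
import numpy as np
from scipy.signal import hilbert

# Illustrative decomposition of a signal into envelope (E) and temporal
# fine structure (TFS) via the Hilbert transform.
fs = 16000
t = np.arange(0, 0.5, 1 / fs)
envelope_true = 1.0 + 0.8 * np.sin(2 * np.pi * 10 * t)  # 10-Hz modulation
signal = envelope_true * np.cos(2 * np.pi * 1000 * t)   # 1-kHz carrier

analytic = hilbert(signal)
envelope = np.abs(analytic)          # slowly varying amplitude contour (E)
tfs = np.cos(np.angle(analytic))     # unit-amplitude carrier oscillation (TFS)

# The recovered envelope tracks the imposed 10-Hz modulation, and the
# TFS retains only the rapid carrier structure.
print(float(np.max(np.abs(envelope - envelope_true))))
```

    "Removing TFS" in such experiments corresponds to keeping only the envelope (e.g., imposing it on a noise carrier), while "preserving TFS" keeps the carrier term above.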

  2. Processing Mechanisms in Hearing-Impaired Listeners: Evidence from Reaction Times and Sentence Interpretation.

    Science.gov (United States)

    Carroll, Rebecca; Uslar, Verena; Brand, Thomas; Ruigendijk, Esther

    The authors aimed to determine whether hearing impairment affects sentence comprehension beyond phoneme or word recognition (i.e., on the sentence level), and to distinguish grammatically induced processing difficulties in structurally complex sentences from perceptual difficulties associated with listening to degraded speech. Effects of hearing impairment or speech in noise were expected to reflect hearer-specific speech recognition difficulties. Any additional processing time caused by the sustained perceptual challenges across the sentence may either be independent of or interact with top-down processing mechanisms associated with grammatical sentence structure. Forty-nine participants listened to canonical subject-initial or noncanonical object-initial sentences that were presented either in quiet or in noise. Twenty-four participants had mild-to-moderate hearing impairment and received hearing-loss-specific amplification. Twenty-five participants were age-matched peers with normal hearing status. Reaction times were measured on-line at syntactically critical processing points as well as two control points to capture differences in processing mechanisms. An off-line comprehension task served as an additional indicator of sentence (mis)interpretation, and enforced syntactic processing. The authors found general effects of hearing impairment and speech in noise that negatively affected perceptual processing, and an effect of word order, where complex grammar locally caused processing difficulties for the noncanonical sentence structure. Listeners with hearing impairment were hardly affected by noise at the beginning of the sentence, but were affected markedly toward the end of the sentence, indicating a sustained perceptual effect of speech recognition. Comprehension of sentences with noncanonical word order was negatively affected by degraded signals even after sentence presentation. Hearing impairment adds perceptual processing load during sentence processing …

  3. Hear here: children with hearing loss learn words by listening.

    Science.gov (United States)

    Lew, Joyce; Purcell, Alison A; Doble, Maree; Lim, Lynne H

    2014-10-01

    Early use of hearing devices and family participation in auditory-verbal therapy has been associated with age-appropriate verbal communication outcomes for children with hearing loss. However, there continues to be great variability in outcomes across different oral intervention programmes and little consensus on how therapists should prioritise goals at each therapy session for positive clinical outcomes. This pilot intervention study aimed to determine whether therapy goals that concentrate on teaching preschool children with hearing loss how to distinguish between words in a structured listening programme are effective, and whether gains in speech perception skills impact on vocabulary and speech development without them having to be worked on directly in therapy. A multiple-baseline-across-subjects design was used in this within-subject controlled study. Three children aged between 2:6 and 3:1 with moderate-severe to severe-profound hearing loss were recruited for a 6-week intervention programme. Each participant commenced at a different stage of the 10-staged listening programme depending on their individual listening skills at recruitment. Speech development and vocabulary assessments were conducted before and after the training programme, in addition to speech perception assessments and probes conducted throughout the intervention programme. All participants made gains in speech perception skills as well as vocabulary and speech development. Speech perception skills acquired were noted to be maintained a week after intervention. In addition, all participants were able to generalise speech perception skills learnt to words that had not been used in the intervention programme. This pilot study found that therapy directed at listening alone is promising and that it may have a positive impact on speech and vocabulary development without these goals having to be incorporated into a therapy programme. Although a larger study is necessary for more conclusive findings, the …

  4. Peripheral hearing loss reduces the ability of children to direct selective attention during multi-talker listening.

    Science.gov (United States)

    Holmes, Emma; Kitterick, Padraig T; Summerfield, A Quentin

    2017-07-01

    Restoring normal hearing requires knowledge of how peripheral and central auditory processes are affected by hearing loss. Previous research has focussed primarily on peripheral changes following sensorineural hearing loss, whereas consequences for central auditory processing have received less attention. We examined the ability of hearing-impaired children to direct auditory attention to a voice of interest (based on the talker's spatial location or gender) in the presence of a common form of background noise: the voices of competing talkers (i.e. during multi-talker, or "Cocktail Party" listening). We measured brain activity using electro-encephalography (EEG) when children prepared to direct attention to the spatial location or gender of an upcoming target talker who spoke in a mixture of three talkers. Compared to normally-hearing children, hearing-impaired children showed significantly less evidence of preparatory brain activity when required to direct spatial attention. This finding is consistent with the idea that hearing-impaired children have a reduced ability to prepare spatial attention for an upcoming talker. Moreover, preparatory brain activity was not restored when hearing-impaired children listened with their acoustic hearing aids. An implication of these findings is that steps to improve auditory attention alongside acoustic hearing aids may be required to improve the ability of hearing-impaired children to understand speech in the presence of competing talkers. Copyright © 2017 Elsevier B.V. All rights reserved.

  5. The effects of listening environment and earphone style on preferred listening levels of normal hearing adults using an MP3 player.

    Science.gov (United States)

    Hodgetts, William E; Rieger, Jana M; Szarko, Ryan A

    2007-06-01

    The main objective of this study was to determine the influence of listening environment and earphone style on the preferred-listening levels (PLLs) measured in users' ear canals with a commercially-available MP3 player. It was hypothesized that listeners would prefer higher levels with earbud headphones as opposed to over-the-ear headphones, and that the effects would depend on the environment in which the user was listening. A secondary objective was to use the measured PLLs to determine the permissible listening duration to reach 100% daily noise dose. There were two independent variables in this study. The first, headphone style, had three levels: earbud, over-the-ear, and over-the-ear with noise reduction (the same headphones with a noise reduction circuit). The second, environment, also had 3 levels: quiet, street noise and multi-talker babble. The dependent variable was ear canal A-weighted sound pressure level. A 3 x 3 within-subjects repeated-measures ANOVA was used to analyze the data. Thirty-eight normal hearing adults were recruited from the Faculty of Rehabilitation Medicine at the University of Alberta. Each subject listened to the same song and adjusted the level until it "sounded best" to them in each of the 9 conditions. Significant main effects were found for both the headphone style and environment factors. On average, listeners had higher preferred listening levels with the earbud headphones, than with the over-the-ear headphones. When the noise reduction circuit was used with the over-the-ear headphones, the average PLL was even lower. On average, listeners had higher PLLs in street noise than in multi-talker babble and both of these were higher than the PLL for the quiet condition. The interaction between headphone style and environment was also significant. Details of individual contrasts are explored. Overall, PLLs were quite conservative, which would theoretically allow for extended permissible listening durations. 
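    The permissible-duration figure mentioned in the abstract follows directly from a damage-risk criterion. A minimal sketch is given below, assuming a NIOSH-style criterion of 85 dBA for 8 h with a 3-dB exchange rate; the abstract does not state which criterion the authors applied.

```python
def permissible_duration_hours(level_dba: float,
                               criterion_dba: float = 85.0,
                               exchange_rate_db: float = 3.0,
                               reference_hours: float = 8.0) -> float:
    """Listening time (hours) to reach a 100% daily noise dose.

    Assumes a NIOSH-style criterion: every `exchange_rate_db` increase
    above `criterion_dba` halves the permissible duration.
    """
    return reference_hours / 2.0 ** ((level_dba - criterion_dba) / exchange_rate_db)
```

    Under these assumptions, a measured in-ear PLL of 94 dBA would permit about 1 h of listening per day before the daily dose is reached.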

  6. Dynamic Range Across Music Genres and the Perception of Dynamic Compression in Hearing-Impaired Listeners

    Directory of Open Access Journals (Sweden)

    Martin Kirchberger

    2016-02-01

    Dynamic range compression serves different purposes in the music and hearing-aid industries. In the music industry, it is used to make music louder and more attractive to normal-hearing listeners. In the hearing-aid industry, it is used to map the variable dynamic range of acoustic signals to the reduced dynamic range of hearing-impaired listeners. Hence, hearing-aided listeners will typically receive a dual dose of compression when listening to recorded music. The present study involved an acoustic analysis of dynamic range across a cross section of recorded music as well as a perceptual study comparing the efficacy of different compression schemes. The acoustic analysis revealed that the dynamic range of samples from popular genres, such as rock or rap, was generally smaller than the dynamic range of samples from classical genres, such as opera and orchestra. By comparison, the dynamic range of speech, based on recordings of monologues in quiet, was larger than the dynamic range of all music genres tested. The perceptual study compared the effect of the prescription rule NAL-NL2 with a semicompressive and a linear scheme. Music subjected to linear processing had the highest ratings for dynamics and quality, followed by the semicompressive and the NAL-NL2 setting. These findings advise against NAL-NL2 as a prescription rule for recorded music and recommend linear settings.
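    The abstract does not specify how dynamic range was quantified. One common proxy is the spread between high and low percentiles of short-term RMS levels; the sketch below uses that approach, with the frame length and percentile choices as assumptions rather than the authors' method.

```python
import numpy as np

def dynamic_range_db(signal, fs, frame_ms=50.0, lo_pct=10, hi_pct=95):
    """Spread (dB) between high and low percentiles of short-term RMS levels.

    A rough proxy for the dynamic range of a recording; frame length and
    percentile choices are illustrative assumptions.
    """
    n = max(1, int(fs * frame_ms / 1000.0))
    frames = [signal[i:i + n] for i in range(0, len(signal) - n + 1, n)]
    rms = np.array([np.sqrt(np.mean(f ** 2)) for f in frames])
    rms = rms[rms > 0]  # ignore silent frames before taking the log
    levels = 20.0 * np.log10(rms)
    return float(np.percentile(levels, hi_pct) - np.percentile(levels, lo_pct))
```

    On such a measure, heavily limited pop material yields a small spread while classical recordings with quiet passages yield a large one.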

  9. Speech perception in older listeners with normal hearing: conditions of time alteration, selective word stress, and length of sentences.

    Science.gov (United States)

    Cho, Soojin; Yu, Jyaehyoung; Chun, Hyungi; Seo, Hyekyung; Han, Woojae

    2014-04-01

    Deficits of the aging auditory system negatively affect older listeners in terms of speech communication, resulting in limitations to their social lives. To improve their perceptual skills, the goal of this study was to investigate the effects of time alteration, selective word stress, and varying sentence lengths on the speech perception of older listeners. Seventeen older people with normal hearing were tested for seven conditions of different time-altered sentences (i.e., ±60%, ±40%, ±20%, 0%), two conditions of selective word stress (i.e., no-stress and stress), and three different lengths of sentences (i.e., short, medium, and long) at the most comfortable level for individuals in quiet circumstances. As time compression increased, sentence perception scores decreased statistically. Compared to a natural (or no stress) condition, the selectively stressed words significantly improved the perceptual scores of these older listeners. Long sentences yielded the worst scores under all time-altered conditions. Interestingly, there was a noticeable positive effect for the selective word stress at the 20% time compression. This pattern of results suggests that a combination of time compression and selective word stress is more effective for understanding speech in older listeners than using the time-expanded condition only.

  10. Masking and Partial Masking in Listeners with a High-Frequency Hearing Loss

    NARCIS (Netherlands)

    Smits, J.T.S.; Duifhuis, H.

    1982-01-01

    Three listeners with sensorineural hearing loss, ranging from moderate to moderately severe and starting at frequencies above 1 kHz, participated in two masking experiments and a partial masking experiment. In the first masking experiment, with masker frequency f_M = 1 kHz and masker level L_M = 50 dB SPL, higher than normal masked

  11. [The discrimination of mono-syllable words in noise in listeners with normal hearing].

    Science.gov (United States)

    Yoshida, M; Sagara, T; Nagano, M; Korenaga, K; Makishima, K

    1992-02-01

    The discrimination of monosyllabic words (67S word list) pronounced by a male and a female speaker was investigated in noise in 39 normal-hearing subjects. The subjects listened to the test words at a constant level of 62 dB together with white or weighted noise in four S/N conditions. By processing the data with a logit transformation, S/N-discrimination curves were estimated for each combination of speech material and noise. Regardless of the type of noise, the discrimination scores for the female voice started to decrease gradually at an S/N ratio of +10 dB and reached 10 to 20% at -10 dB. For the male voice in white noise, the discrimination curve was similar to that for the female voice. In contrast, the discrimination score for the male voice in weighted noise declined rapidly from an S/N ratio of +5 dB and went below 10% at -5 dB. The discrimination curves appear to be shaped by the interrelations between the spectrum of the speech material and that of the noise.
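    The logit transformation used to estimate the S/N-discrimination curves can be sketched as a linear fit to logit-transformed proportion-correct scores. The clipping constant below is an assumption to keep the transform finite, not a detail from the paper.

```python
import numpy as np

def fit_logit_curve(snr_db, prop_correct, eps=0.01):
    """Fit a discrimination curve by linear regression on logit-transformed
    proportion-correct scores. Scores are clipped away from 0 and 1 so the
    logit stays finite (the clipping value `eps` is an assumption)."""
    snr_db = np.asarray(snr_db, dtype=float)
    p = np.clip(np.asarray(prop_correct, dtype=float), eps, 1.0 - eps)
    logit = np.log(p / (1.0 - p))                 # logit transform
    slope, intercept = np.polyfit(snr_db, logit, 1)

    def curve(snr):
        z = slope * np.asarray(snr, dtype=float) + intercept
        return 1.0 / (1.0 + np.exp(-z))           # back to proportion correct

    snr_at_50 = -intercept / slope                # logit crosses zero at 50%
    return curve, snr_at_50
```

    The fitted curve can then be read off at any S/N ratio, e.g. to locate the 50%-correct point of a given speaker/noise combination.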

  12. [Relationship between the Mandarin acceptable noise level and the personality traits in normal hearing adults].

    Science.gov (United States)

    Wu, Dan; Chen, Jian-yong; Wang, Shuo; Zhang, Man-hua; Chen, Jing; Li, Yu-ling; Zhang, Hua

    2013-03-01

    To evaluate the relationship between the Mandarin acceptable noise level (ANL) and personality traits in normal-hearing adults. Eighty-five Mandarin speakers, aged from 21 to 27, participated in this study. ANL materials and the Eysenck Personality Questionnaire (EPQ) were used to measure the acceptable noise level and the personality traits of the normal-hearing subjects. SPSS 17.0 was used to analyze the results. The mean ANL was (7.8 ± 2.9) dB in the normal-hearing participants. The P and N scores of the EPQ were significantly correlated with ANL (r = 0.284 and 0.318, respectively; P < 0.05). Listeners with higher ANLs were more likely to be eccentric, hostile, aggressive, and emotionally unstable; no ANL differences were found between listeners who differed in introversion-extraversion or in lying scores.

  13. Speech perception in older hearing impaired listeners: benefits of perceptual training.

    Directory of Open Access Journals (Sweden)

    David L Woods

    Hearing aids (HAs) only partially restore the ability of older hearing impaired (OHI) listeners to understand speech in noise, due in large part to persistent deficits in consonant identification. Here, we investigated whether adaptive perceptual training would improve consonant-identification in noise in sixteen aided OHI listeners who underwent 40 hours of computer-based training in their homes. Listeners identified 20 onset and 20 coda consonants in 9,600 consonant-vowel-consonant (CVC) syllables containing different vowels (/ɑ/, /i/, or /u/) and spoken by four different talkers. Consonants were presented at three consonant-specific signal-to-noise ratios (SNRs) spanning a 12 dB range. Noise levels were adjusted over training sessions based on d' measures. Listeners were tested before and after training to measure (1) changes in consonant-identification thresholds using syllables spoken by familiar and unfamiliar talkers, and (2) sentence reception thresholds (SeRTs) using two different sentence tests. Consonant-identification thresholds improved gradually during training. Laboratory tests of d' thresholds showed an average improvement of 9.1 dB, with 94% of listeners showing statistically significant training benefit. Training normalized consonant confusions and improved the thresholds of some consonants into the normal range. Benefits were equivalent for onset and coda consonants, syllables containing different vowels, and syllables presented at different SNRs. Greater training benefits were found for hard-to-identify consonants and for consonants spoken by familiar than unfamiliar talkers. SeRTs, tested with simple sentences, showed less elevation than consonant-identification thresholds prior to training and failed to show significant training benefit, although SeRT improvements did correlate with improvements in consonant thresholds. We argue that the lack of SeRT improvement reflects the dominant role of top-down semantic processing in

  14. Microscopic prediction of speech intelligibility in spatially distributed speech-shaped noise for normal-hearing listeners.

    Science.gov (United States)

    Geravanchizadeh, Masoud; Fallah, Ali

    2015-12-01

    A binaural and psychoacoustically motivated intelligibility model, based on a well-known monaural microscopic model, is proposed. This model simulates a phoneme recognition task in the presence of spatially distributed speech-shaped noise in anechoic scenarios. In the proposed model, binaural advantage effects are considered by generating a feature vector for a dynamic-time-warping speech recognizer. This vector consists of three subvectors incorporating two monaural subvectors to model the better-ear hearing, and a binaural subvector to simulate the binaural unmasking effect. The binaural unit of the model is based on equalization-cancellation theory. This model operates blindly, which means separate recordings of speech and noise are not required for the predictions. Speech intelligibility tests were conducted with 12 normal-hearing listeners by collecting speech reception thresholds (SRTs) in the presence of single and multiple sources of speech-shaped noise. The comparison of the model predictions with the measured binaural SRTs, and with the predictions of a macroscopic binaural model called extended equalization-cancellation, shows that this approach predicts the intelligibility in anechoic scenarios with good precision. The square of the correlation coefficient (r²) and the mean-absolute error between the model predictions and the measurements are 0.98 and 0.62 dB, respectively.
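    The binaural unit above rests on equalization-cancellation (EC) theory. The toy demonstration below, which is not the authors' implementation and uses arbitrary signal parameters, shows the core idea: subtracting the equalized ear signals cancels interaurally correlated noise while an antiphasic target survives.

```python
import numpy as np

# Toy EC demonstration in an N0S_pi configuration: noise is (almost) identical
# at both ears, the target tone is in antiphase. All parameter values below
# are arbitrary choices for illustration only.
rng = np.random.default_rng(0)
fs = 16000
t = np.arange(fs) / fs                        # 1 s of signal
target = 0.05 * np.sin(2 * np.pi * 500 * t)   # weak 500-Hz tone
common = rng.standard_normal(t.size)          # noise shared by both ears
indep = 0.1 * rng.standard_normal(t.size)     # small uncorrelated component

left = common + target
right = (common + indep) - target             # target in antiphase here

def snr_db(sig_power, noise_power):
    return 10.0 * np.log10(sig_power / noise_power)

monaural = snr_db(np.mean(target ** 2), np.mean(common ** 2))

# EC step: subtracting the ears cancels the correlated noise; what remains
# is twice the target minus the small uncorrelated residual.
ec_out = left - right                         # = 2*target - indep
ec = snr_db(np.mean((2 * target) ** 2), np.mean(indep ** 2))
binaural_advantage_db = ec - monaural
```

    The very large advantage here reflects the almost perfectly correlated noise in this toy setup; empirical binaural masking-level differences are much smaller.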

  15. Neurodynamic evaluation of hearing aid features using EEG correlates of listening effort.

    Science.gov (United States)

    Bernarding, Corinna; Strauss, Daniel J; Hannemann, Ronny; Seidler, Harald; Corona-Strauss, Farah I

    2017-06-01

    In this study, we propose a novel estimate of listening effort using electroencephalographic data. This method is a translation of our past findings, gained from the evoked electroencephalographic activity, to the oscillatory EEG activity. To test this technique, electroencephalographic data from experienced hearing aid users with moderate hearing loss were recorded, wearing hearing aids. The investigated hearing aid settings were: a directional microphone combined with a noise reduction algorithm in a medium and a strong setting, the noise reduction setting turned off, and a setting using omnidirectional microphones without any noise reduction. The results suggest that the electroencephalographic estimate of listening effort seems to be a useful tool to map the exerted effort of the participants. In addition, the results indicate that a directional processing mode can reduce the listening effort in multitalker listening situations.

  16. Seeing the Talker's Face Improves Free Recall of Speech for Young Adults with Normal Hearing but Not Older Adults with Hearing Loss

    Science.gov (United States)

    Rudner, Mary; Mishra, Sushmit; Stenfelt, Stefan; Lunner, Thomas; Rönnberg, Jerker

    2016-01-01

    Purpose: Seeing the talker's face improves speech understanding in noise, possibly releasing resources for cognitive processing. We investigated whether it improves free recall of spoken two-digit numbers. Method: Twenty younger adults with normal hearing and 24 older adults with hearing loss listened to and subsequently recalled lists of 13…

  17. Speech recognition in normal hearing and sensorineural hearing loss as a function of the number of spectral channels

    NARCIS (Netherlands)

    Baskent, Deniz

    Speech recognition by normal-hearing listeners improves as a function of the number of spectral channels when tested with a noiseband vocoder simulating cochlear implant signal processing. Speech recognition by the best cochlear implant users, however, saturates around eight channels and does not

  18. Hearing Handicap and Speech Recognition Correlate With Self-Reported Listening Effort and Fatigue.

    Science.gov (United States)

    Alhanbali, Sara; Dawes, Piers; Lloyd, Simon; Munro, Kevin J

    To investigate the correlations between hearing handicap, speech recognition, listening effort, and fatigue. Eighty-four adults with hearing loss (65 to 85 years) completed three self-report questionnaires: the Fatigue Assessment Scale, the Effort Assessment Scale, and the Hearing Handicap Inventory for the Elderly. Audiometric assessment included pure-tone audiometry and speech recognition in noise. There was a significant positive correlation between handicap and fatigue (r = 0.39, p < 0.05) and between speech recognition and fatigue (r = 0.22, p < 0.05). Hearing handicap and speech recognition both correlate with self-reported listening effort and fatigue, which is consistent with a model of listening effort and fatigue where perceived difficulty is related to sustained effort and fatigue for unrewarding tasks over which the listener has low control. A clinical implication is that encouraging clients to recognize and focus on the pleasure and positive experiences of listening may result in greater satisfaction and benefit from hearing aid use.

  19. Syllabic compression and speech intelligibility in hearing impaired listeners

    NARCIS (Netherlands)

    Verschuure, J.; Dreschler, W. A.; de Haan, E. H.; van Cappellen, M.; Hammerschlag, R.; Maré, M. J.; Maas, A. J.; Hijmans, A. C.

    1993-01-01

    Syllabic compression has not been shown unequivocally to improve speech intelligibility in hearing-impaired listeners. This paper attempts to explain the poor results by introducing the concept of minimum overshoots. The concept was tested with a digital signal processor on hearing-impaired

  20. Processing of Binaural Pitch Stimuli in Hearing-Impaired Listeners

    DEFF Research Database (Denmark)

    Santurette, Sébastien; Dau, Torsten

    2009-01-01

    Binaural pitch is a tonal sensation produced by introducing a frequency-dependent interaural phase shift in binaurally presented white noise. As no spectral cues are present in the physical stimulus, binaural pitch perception is assumed to rely on accurate temporal fine structure coding and intact binaural integration mechanisms. This study investigated to what extent basic auditory measures of binaural processing as well as cognitive abilities are correlated with the ability of hearing-impaired listeners to perceive binaural pitch. Subjects from three groups (1: normal-hearing; 2: cochlear hearing loss; 3: retro-cochlear impairment) were asked to identify the pitch contour of series of five notes of equal duration, ranging from 523 to 784 Hz, played either with Huggins’ binaural pitch stimuli (BP) or perceptually similar, but monaurally detectable, pitches (MP). All subjects from groups 1 and 2…

  1. Fitting and verification of frequency modulation systems on children with normal hearing.

    Science.gov (United States)

    Schafer, Erin C; Bryant, Danielle; Sanders, Katie; Baldus, Nicole; Algier, Katherine; Lewis, Audrey; Traber, Jordan; Layden, Paige; Amin, Aneeqa

    2014-06-01

    Several recent investigations support the use of frequency modulation (FM) systems in children with normal hearing and auditory processing or listening disorders such as those diagnosed with auditory processing disorders, autism spectrum disorders, attention-deficit hyperactivity disorder, Friedreich ataxia, and dyslexia. The American Academy of Audiology (AAA) published suggested procedures, but these guidelines do not cite research evidence to support the validity of the recommended procedures for fitting and verifying nonoccluding open-ear FM systems on children with normal hearing. Documenting the validity of these fitting procedures is critical to maximize the potential FM-system benefit in the above-mentioned populations of children with normal hearing and those with auditory-listening problems. The primary goal of this investigation was to determine the validity of the AAA real-ear approach to fitting FM systems on children with normal hearing. The secondary goal of this study was to examine speech-recognition performance in noise and loudness ratings without and with FM systems in children with normal hearing sensitivity. A two-group, cross-sectional design was used in the present study. Twenty-six typically functioning children, ages 5-12 yr, with normal hearing sensitivity participated in the study. Participants used a nonoccluding open-ear FM receiver during laboratory-based testing. Participants completed three laboratory tests: (1) real-ear measures, (2) speech recognition performance in noise, and (3) loudness ratings. Four real-ear measures were conducted to (1) verify that measured output met prescribed-gain targets across the 1000-4000 Hz frequency range for speech stimuli, (2) confirm that the FM-receiver volume did not exceed predicted uncomfortable loudness levels, and (3 and 4) measure changes to the real-ear unaided response when placing the FM receiver in the child's ear. After completion of the fitting, speech recognition in noise at a -5

  2. Is it possible to improve hearing by listening training?

    OpenAIRE

    Reuter, Karen

    2011-01-01

    Different listening training methods exist, which are based on the assumption that people can be trained to process incoming sound more effectively. It is often distinguished between the terms hearing (=passive reception of sound) and listening (=active process of tuning in to those sounds we wish to receive). Listening training methods claim to benefit a wide variety of people, e.g. people having learning disabilities, developmental delay or concentration problems. Sound therapists report ab...

  3. Impact of stimulus-related factors and hearing impairment on listening effort as indicated by pupil dilation

    DEFF Research Database (Denmark)

    Ohlenforst, Barbara; Zekveld, Adriana A.; Lunner, Thomas

    2017-01-01

    Previous research has reported effects of masker type and signal-to-noise ratio (SNR) on listening effort, as indicated by the peak pupil dilation (PPD) relative to baseline during speech recognition. At about 50% correct sentence recognition performance, increasing SNRs generally results in declining PPDs, indicating reduced effort. However, the decline in PPD over SNRs has been observed to be less pronounced for hearing-impaired (HI) compared to normal-hearing (NH) listeners. The presence of a competing talker during speech recognition generally resulted in larger PPDs as compared… -talker masker) on the PPD during speech perception. Twenty-five HI and 32 age-matched NH participants listened to sentences across a broad range of SNRs, masked with speech from a single talker (-25 dB to +15 dB SNR) or with stationary noise (-12 dB to +16 dB). Correct sentence recognition scores and pupil…

  4. Music preferences with hearing aids: effects of signal properties, compression settings, and listener characteristics.

    Science.gov (United States)

    Croghan, Naomi B H; Arehart, Kathryn H; Kates, James M

    2014-01-01

    Current knowledge of how to design and fit hearing aids to optimize music listening is limited. Many hearing-aid users listen to recorded music, which often undergoes compression limiting (CL) in the music industry. Therefore, hearing-aid users may experience twofold effects of compression when listening to recorded music: music-industry CL and hearing-aid wide dynamic-range compression (WDRC). The goal of this study was to examine the roles of input-signal properties, hearing-aid processing, and individual variability in the perception of recorded music, with a focus on the effects of dynamic-range compression. A group of 18 experienced hearing-aid users made paired-comparison preference judgments for classical and rock music samples using simulated hearing aids. Music samples were either unprocessed before hearing-aid input or had different levels of music-industry CL. Hearing-aid conditions included linear gain and individually fitted WDRC. Combinations of four WDRC parameters were included: fast release time (50 msec), slow release time (1,000 msec), three channels, and 18 channels. Listeners also completed several psychophysical tasks. Acoustic analyses showed that CL and WDRC reduced temporal envelope contrasts, changed amplitude distributions across the acoustic spectrum, and smoothed the peaks of the modulation spectrum. Listener judgments revealed that fast WDRC was least preferred for both genres of music. For classical music, linear processing and slow WDRC were equally preferred, and the main effect of number of channels was not significant. For rock music, linear processing was preferred over slow WDRC, and three channels were preferred to 18 channels. Heavy CL was least preferred for classical music, but the amount of CL did not change the patterns of WDRC preferences for either genre. Auditory filter bandwidth as estimated from psychophysical tuning curves was associated with variability in listeners' preferences for classical music. Fast
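    The WDRC conditions above vary release time and channel count. A minimal single-channel compressor sketch is shown below; all parameter values are illustrative defaults, not the study's fitting targets, and the fast/slow contrast from the abstract corresponds roughly to changing `release_ms`.

```python
import numpy as np

def wdrc(signal, fs, threshold_db=-40.0, ratio=3.0,
         attack_ms=5.0, release_ms=50.0):
    """Minimal single-channel wide dynamic-range compressor (illustrative).

    Levels above `threshold_db` are reduced at the given ratio; the level
    estimate is an envelope follower with separate attack/release constants.
    """
    a_att = np.exp(-1.0 / (fs * attack_ms / 1000.0))
    a_rel = np.exp(-1.0 / (fs * release_ms / 1000.0))
    env = 0.0
    out = np.empty_like(signal)
    for i, x in enumerate(signal):
        mag = abs(x)
        coeff = a_att if mag > env else a_rel   # fast attack, slow release
        env = coeff * env + (1.0 - coeff) * mag
        level_db = 20.0 * np.log10(max(env, 1e-9))
        over = max(0.0, level_db - threshold_db)
        gain_db = -over * (1.0 - 1.0 / ratio)   # static compression curve
        out[i] = x * 10.0 ** (gain_db / 20.0)
    return out
```

    Loud passages are attenuated while soft passages pass essentially unchanged, which is the envelope-contrast reduction the acoustic analyses in the abstract describe.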

  5. Predicting word-recognition performance in noise by young listeners with normal hearing using acoustic, phonetic, and lexical variables.

    Science.gov (United States)

    McArdle, Rachel; Wilson, Richard H

    2008-06-01

    To analyze the 50% correct recognition data that were from the Wilson et al (this issue) study and that were obtained from 24 listeners with normal hearing; also to examine whether acoustic, phonetic, or lexical variables can predict recognition performance for monosyllabic words presented in speech-spectrum noise. The specific variables are as follows: (a) acoustic variables (i.e., effective root-mean-square sound pressure level, duration), (b) phonetic variables (i.e., consonant features such as manner, place, and voicing for initial and final phonemes; vowel phonemes), and (c) lexical variables (i.e., word frequency, word familiarity, neighborhood density, neighborhood frequency). The descriptive, correlational study will examine the influence of acoustic, phonetic, and lexical variables on speech recognition in noise performance. Regression analysis demonstrated that 45% of the variance in the 50% point was accounted for by acoustic and phonetic variables whereas only 3% of the variance was accounted for by lexical variables. These findings suggest that monosyllabic word-recognition-in-noise is more dependent on bottom-up processing than on top-down processing. The results suggest that when speech-in-noise testing is used in a pre- and post-hearing-aid-fitting format, the use of monosyllabic words may be sensitive to changes in audibility resulting from amplification.

  6. Music perception by cochlear implant and normal hearing listeners as measured by the Montreal Battery for Evaluation of Amusia.

    Science.gov (United States)

    Cooper, William B; Tobey, Emily; Loizou, Philipos C

    2008-08-01

    The purpose of this study was to explore the utility/possibility of using the Montreal Battery for Evaluation of Amusia (MBEA) test (Peretz, et al., Ann N Y Acad Sci, 999, 58-75) to assess the music perception abilities of cochlear implant (CI) users. The MBEA was used to measure six different aspects of music perception (Scale, Contour, Interval, Rhythm, Meter, and Melody Memory) by CI users and normal-hearing (NH) listeners presented with stimuli processed via CI simulations. The spectral resolution (number of channels) was varied in the CI simulations to determine: (a) the number of channels (4, 6, 8, 12, and 16) needed to achieve the highest levels of music perception and (b) the number of channels needed to produce levels of music perception performance comparable with that of CI users. CI users and NH listeners performed higher on temporal-based tests (Rhythm and Meter) than on pitch-based tests (Scale, Contour, and Interval)--a finding that is consistent with previous research studies. The CI users' scores on pitch-based tests were near chance. The CI users' (but not NH listeners') scores for the Memory test, a test that incorporates an integration of both temporal-based and pitch-based aspects of music, were significantly higher than the scores obtained for the pitch-based Scale test and significantly lower than the temporal-based Rhythm and Meter tests. The data from NH listeners indicated that 16 channels of stimulation did not provide the highest music perception scores and performance was as good as that obtained with 12 channels. This outcome is consistent with other studies showing that NH listeners listening to vocoded speech are not able to use effectively F0 cues present in the envelopes, even when the stimuli are processed with a large number (16) of channels. The CI user data seem to most closely match with the 4- and 6-channel NH listener conditions for the pitch-based tasks. Consistent with previous studies, both CI users and NH listeners
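    CI simulations of the kind used here are typically noise-band vocoders: the signal is split into bands, each band's envelope is extracted, and the envelopes modulate band-limited noise, discarding fine structure. The numpy-only sketch below is a toy version; band edges, channel count, and envelope smoothing are assumptions, not the authors' processing.

```python
import numpy as np

def noise_vocode(signal, fs, n_channels=8, f_lo=100.0, f_hi=4000.0):
    """Toy noise-band vocoder: FFT-mask band splitting, rectified and
    smoothed envelopes, envelope-modulated band noise (illustrative only)."""
    rng = np.random.default_rng(1)
    n = len(signal)
    freqs = np.fft.rfftfreq(n, 1.0 / fs)
    edges = np.geomspace(f_lo, f_hi, n_channels + 1)  # log-spaced band edges
    spec = np.fft.rfft(signal)
    win = max(1, int(fs * 0.01))                      # ~10 ms envelope smoother
    kernel = np.ones(win) / win
    out = np.zeros(n)
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (freqs >= lo) & (freqs < hi)
        band = np.fft.irfft(spec * mask, n)
        env = np.convolve(np.abs(band), kernel, mode="same")
        noise = rng.standard_normal(n)
        noise_band = np.fft.irfft(np.fft.rfft(noise) * mask, n)
        out += env * noise_band                       # envelope carries the cue
    return out
```

    Because only envelopes survive, temporal cues (rhythm, meter) are conveyed far better than pitch cues, consistent with the pattern of results reported above.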

  8. Listening to Sentences in Noise: Revealing Binaural Hearing Challenges in Patients with Schizophrenia.

    Science.gov (United States)

    Abdul Wahab, Noor Alaudin; Zakaria, Mohd Normani; Abdul Rahman, Abdul Hamid; Sidek, Dinsuhaimi; Wahab, Suzaily

    2017-11-01

    The present case-control study investigated binaural hearing performance of schizophrenia patients for sentences presented in quiet and in noise. Participants were twenty-one healthy controls and sixteen schizophrenia patients with normal peripheral auditory functions. Binaural hearing was examined in four listening conditions using the Malay version of the hearing in noise test. The syntactically and semantically correct sentences were presented via headphones to the randomly selected subjects. In each condition, the adaptively obtained reception thresholds for speech (RTS) were used to determine the RTS noise composite and spatial release from masking. Schizophrenia patients demonstrated a significantly higher mean RTS value relative to healthy controls (p=0.018). The large effect sizes found in three listening conditions, i.e., in quiet (d=1.07), noise right (d=0.88), and noise composite (d=0.90), indicate statistically significant differences between the groups, whereas the noise front and noise left conditions showed medium (d=0.61) and small (d=0.50) effect sizes, respectively. No statistical difference between groups was noted with regard to spatial release from masking on the right (p=0.305) or left (p=0.970) ear. The present findings suggest abnormal unilateral auditory processing in the central auditory pathway in schizophrenia patients. Future studies exploring the role of binaural and spatial auditory processing are recommended.
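    The effect sizes reported above are Cohen's d values. A minimal pooled-standard-deviation computation is sketched below (the standard textbook formula; the paper does not spell out which variant it used).

```python
import math

def cohens_d(mean1, sd1, n1, mean2, sd2, n2):
    """Cohen's d between two groups using the pooled standard deviation."""
    pooled_var = ((n1 - 1) * sd1 ** 2 + (n2 - 1) * sd2 ** 2) / (n1 + n2 - 2)
    return (mean1 - mean2) / math.sqrt(pooled_var)
```

    By the usual convention, d around 0.2, 0.5, and 0.8 is read as a small, medium, and large effect, matching the interpretation in the abstract.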

  9. Objective Prediction of Hearing Aid Benefit Across Listener Groups Using Machine Learning: Speech Recognition Performance With Binaural Noise-Reduction Algorithms.

    Science.gov (United States)

    Schädler, Marc R; Warzybok, Anna; Kollmeier, Birger

    2018-01-01

    The simulation framework for auditory discrimination experiments (FADE) was adopted and validated to predict the individual speech-in-noise recognition performance of listeners with normal and impaired hearing, with and without a given hearing-aid algorithm. FADE uses a simple automatic speech recognizer (ASR) to estimate the lowest achievable speech reception thresholds (SRTs) from simulated speech recognition experiments in an objective way, independently of any empirical reference data. Empirical data from the literature were used to evaluate the model in terms of predicted SRTs and benefits in SRT with the German matrix sentence recognition test when using eight single- and multichannel binaural noise-reduction algorithms. To allow individual predictions of SRTs in binaural conditions, the model was extended with a simple better-ear approach and individualized by taking audiograms into account. In a realistic binaural cafeteria condition, FADE explained about 90% of the variance of the empirical SRTs for a group of normal-hearing listeners and predicted the corresponding benefits with a root-mean-square prediction error of 0.6 dB. This highlights the potential of the approach for the objective assessment of benefits in SRT without prior knowledge of the empirical data. The predictions for the group of listeners with impaired hearing explained 75% of the empirical variance, while the individual predictions explained less than 25%. Possibly, additional individual factors should be considered for more accurate predictions with impaired hearing. A competing-talker condition clearly showed one limitation of current ASR technology, as the empirical performance with SRTs lower than -20 dB could not be predicted.
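The SRT that FADE predicts is the SNR yielding 50% sentence intelligibility, which matrix tests estimate with an adaptive track. A toy sketch, assuming a simple 1-down/1-up rule and a logistic psychometric function standing in for the listener (the slope, step size, and true SRT are illustrative values, not those of the German matrix test):

```python
import math
import random

def estimate_srt(true_srt_db, slope=0.6, trials=200, step_db=1.0, seed=1):
    """Simulate a 1-down/1-up adaptive track converging on the SNR at
    50% intelligibility (the speech reception threshold, SRT)."""
    rng = random.Random(seed)
    snr, track = 0.0, []
    for _ in range(trials):
        # Logistic listener model: probability of a correct response at this SNR.
        p_correct = 1.0 / (1.0 + math.exp(-slope * (snr - true_srt_db)))
        snr += -step_db if rng.random() < p_correct else step_db
        track.append(snr)
    # Average the second half of the track as the SRT estimate.
    return sum(track[trials // 2:]) / (trials - trials // 2)

srt_estimate = estimate_srt(-7.1)
```

Because the 1-down/1-up rule steps down after correct responses and up after errors, the track oscillates around the 50% point of the psychometric function, so the late-track average approximates the SRT.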

  10. Hearing loss impacts neural alpha oscillations under adverse listening conditions

    Directory of Open Access Journals (Sweden)

    Eline Borch Petersen

    2015-02-01

    Degradations in external acoustic stimulation have long been suspected to increase the load on working memory. One neural signature of working-memory load is enhanced power of alpha oscillations (6-12 Hz). However, it is unknown to what extent a common internal auditory degradation, namely hearing impairment, affects the neural mechanisms of working memory when audibility has been ensured via amplification. Using an adapted auditory Sternberg paradigm, we varied the orthogonal factors of memory load and background noise level while the electroencephalogram (EEG) was recorded. In each trial, participants were presented with 2, 4, or 6 spoken digits embedded in one of three levels of background noise. After a stimulus-free delay interval, participants indicated whether a probe digit had appeared in the sequence. Participants were healthy older adults (62-86 years) with normal to moderately impaired hearing. Importantly, the background noise levels were individually adjusted and participants wore hearing aids to equalize audibility across participants. Irrespective of hearing loss, behavioral performance improved with lower memory load and with lower levels of background noise. Interestingly, alpha power in the stimulus-free delay interval depended on the interplay between task demands (memory load and noise level) and hearing loss: while alpha power increased with hearing loss during low and intermediate levels of memory load and background noise, it dropped for participants with the most severe hearing loss under the highest memory load and background noise level. These findings suggest that adaptive neural mechanisms for coping with adverse listening conditions break down at higher degrees of hearing loss, even when adequate hearing aid amplification is in place.
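Alpha power in a delay interval is typically quantified as band-limited spectral power of the EEG. A minimal, dependency-free sketch using a naive DFT; real analyses would use Welch's method or wavelet transforms on epoched, artifact-cleaned data, and the demo signal below is synthetic:

```python
import math

def band_power(signal, fs, f_lo, f_hi):
    """Mean squared DFT magnitude over the bins falling in [f_lo, f_hi].

    A naive O(n^2) DFT keeps the sketch dependency-free; real EEG work
    would use an FFT-based estimator such as Welch's method.
    """
    n = len(signal)
    total, count = 0.0, 0
    for k in range(1, n // 2):
        if f_lo <= k * fs / n <= f_hi:
            re = sum(signal[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
            im = sum(signal[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
            total += (re * re + im * im) / (n * n)
            count += 1
    return total / max(count, 1)

# Demo: a pure 10 Hz tone concentrates its power in the alpha band (8-12 Hz).
fs = 100
sig = [math.sin(2 * math.pi * 10.0 * i / fs) for i in range(200)]
alpha_power = band_power(sig, fs, 8.0, 12.0)
```

Comparing such band power between delay intervals of different load and noise conditions is the core contrast reported in the study.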

  11. Contribution of monaural and binaural cues to sound localization in listeners with acquired unilateral conductive hearing loss: improved directional hearing with a bone-conduction device.

    Science.gov (United States)

    Agterberg, Martijn J H; Snik, Ad F M; Hol, Myrthe K S; Van Wanrooij, Marc M; Van Opstal, A John

    2012-04-01

    Sound localization in the horizontal (azimuth) plane relies mainly on interaural time differences (ITDs) and interaural level differences (ILDs). Both are distorted in listeners with acquired unilateral conductive hearing loss (UCHL), reducing their ability to localize sound. Several studies have demonstrated that UCHL listeners retain some ability to localize sound in azimuth. To test whether listeners with acquired UCHL use strongly perturbed binaural difference cues, we measured localization while they listened with a sound-attenuating earmuff over their impaired ear. We also tested the potential use of monaural pinna-induced spectral-shape cues for localization in azimuth and elevation by filling the cavities of the pinna of the better-hearing ear with a mould. These conditions were tested with the bone-conduction device (BCD), fitted to all UCHL listeners to provide hearing on the impaired side, turned off. We varied stimulus presentation levels to investigate whether UCHL listeners were using sound level as an azimuth cue. Furthermore, we examined whether horizontal sound-localization abilities improved when listeners used their BCD. Ten control listeners without hearing loss demonstrated a significant decrease in localization ability when listening with a monaural plug and muff. In 4 of 13 UCHL listeners we observed good horizontal localization of 65 dB SPL broadband noises with the BCD turned off. Localization was strongly impaired when the impaired ear was covered with the muff. The mould in the good ear of UCHL listeners degraded localization of broadband sounds presented at 45 dB SPL, demonstrating that they used pinna cues to localize sounds presented at low levels. Our data demonstrate that UCHL listeners have learned to adapt their localization strategies under a wide variety of hearing conditions and that sound-localization abilities improved with the BCD turned on.
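The two binaural difference cues named here can be estimated directly from a two-channel recording: the ITD as the lag of the interaural cross-correlation peak, and the ILD as the RMS level ratio in dB. A stdlib-only sketch; the 1-ms lag range and the synthetic demo signal are illustrative assumptions:

```python
import math
import random

def itd_ild(left, right, fs, max_itd_s=0.001):
    """Estimate the interaural time difference (s) and level difference (dB).

    Positive ITD means the right-ear signal lags the left (source on the left
    side); positive ILD means the left-ear signal is more intense.
    """
    n = len(left)
    max_lag = int(max_itd_s * fs)
    best_lag, best_corr = 0, float("-inf")
    for lag in range(-max_lag, max_lag + 1):
        c = sum(left[t] * right[t + lag]
                for t in range(n) if 0 <= t + lag < n)
        if c > best_corr:
            best_corr, best_lag = c, lag
    rms = lambda x: math.sqrt(sum(v * v for v in x) / len(x))
    return best_lag / fs, 20.0 * math.log10(rms(left) / rms(right))

# Demo: the right channel is a copy delayed by 5 samples and attenuated ~6 dB,
# mimicking a source to the listener's left.
rng = random.Random(0)
fs = 10000
left = [rng.uniform(-1.0, 1.0) for _ in range(500)]
right = [0.0] * 5 + [0.5 * v for v in left[:-5]]
itd, ild = itd_ild(left, right, fs)
```

A conductive loss on one side attenuates and effectively delays that ear's input, which is exactly why both estimates become unreliable for UCHL listeners.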

  12. Can a hearing education campaign for adolescents change their music listening behavior?

    Science.gov (United States)

    Weichbold, Viktor; Zorowka, Patrick

    2007-03-01

    This study looked at whether a hearing education campaign would have behavioral effects on the music listening practices of high school students. A total of 1757 students participated in a hearing education campaign. Before the campaign and one year thereafter, they completed a survey asking for: (1) average frequency of discotheque attendance, (2) average duration of stay in the discotheque, (3) use of earplugs in discotheques, (4) frequency of regeneration breaks while at a discotheque, and (5) mean time per week spent listening to music through headphones. On questions (2), (3), and (5) no relevant post-campaign changes were reported. On question (1), students' answers indicated that the frequency of discotheque attendance had actually increased after the campaign. The only change in keeping with the purpose of the campaign was an increase in the number of regeneration breaks taken while at a discotheque. The effect of hearing education campaigns on music listening behavior is therefore questionable. Additional efforts are suggested to encourage adolescents to adopt protective behaviors.

  14. Impact of Hearing Aid Technology on Outcomes in Daily Life II: Speech Understanding and Listening Effort.

    Science.gov (United States)

    Johnson, Jani A; Xu, Jingjing; Cox, Robyn M

    2016-01-01

    Modern hearing aid (HA) devices include a collection of acoustic signal-processing features designed to improve listening outcomes in a variety of daily auditory environments. Manufacturers market these features at successive levels of technological sophistication. The features included in costlier premium hearing devices are designed to yield further improvements in daily listening outcomes compared with the features included in basic hearing devices. However, independent research has not substantiated such improvements. This research was designed to explore differences in speech-understanding and listening-effort outcomes for older adults using premium-feature and basic-feature HAs in their daily lives. In this participant-blinded, repeated crossover trial, 45 older adults (mean age 70.3 years) with mild-to-moderate sensorineural hearing loss wore each of four pairs of bilaterally fitted HAs for 1 month. HAs were premium- and basic-feature devices from two major brands. After each 1-month trial, participants' speech-understanding and listening-effort outcomes were evaluated in the laboratory and in daily life. Three types of speech-understanding and listening-effort data were collected: measures of laboratory performance, responses to standardized self-report questionnaires, and participant diary entries about daily communication. The only statistically significant superiority for the premium-feature HAs occurred for listening effort in the loud laboratory condition and was demonstrated for only one of the tested brands. The predominant complaint of older adults with mild-to-moderate hearing impairment is difficulty understanding speech in various settings. The combined results of all the outcome measures used in this research suggest that, when fitted using scientifically based practices, both premium- and basic-feature HAs are capable of providing considerable, but essentially equivalent, improvements to speech understanding and listening effort in daily

  15. Measuring listening effort: driving simulator versus simple dual-task paradigm.

    Science.gov (United States)

    Wu, Yu-Hsiang; Aksan, Nazan; Rizzo, Matthew; Stangl, Elizabeth; Zhang, Xuyang; Bentler, Ruth

    2014-01-01

    consistent with literature that evaluated younger (approximately 20 years old), normal-hearing adults. Because of this, a follow-up study was conducted. In the follow-up study, the visual reaction-time dual-task experiment using the same speech materials and road noises was repeated with younger adults with normal hearing. Contrary to the findings with older participants, the results indicated that the directional technology significantly improved performance in both the speech recognition and visual reaction-time tasks. Adding a speech listening task to driving undermined driving performance. Hearing aid technologies significantly improved speech recognition while driving but did not significantly reduce listening effort. Listening effort measured by dual-task experiments using a simulated real-world driving task and a conventional laboratory-style task was generally consistent. For a given listening environment, the benefit of hearing aid technologies on listening effort measured in younger adults with normal hearing may not fully translate to older listeners with hearing impairment.

  16. Microscopic prediction of speech recognition for listeners with normal hearing in noise using an auditory model.

    Science.gov (United States)

    Jürgens, Tim; Brand, Thomas

    2009-11-01

    This study compares the phoneme recognition performance in speech-shaped noise of a microscopic model for speech recognition with the performance of normal-hearing listeners. "Microscopic" is defined twofold in terms of this model. First, the speech recognition rate is predicted on a phoneme-by-phoneme basis. Second, microscopic modeling means that the signal waveforms to be recognized are processed by mimicking elementary parts of human auditory processing. The model is based on an approach by Holube and Kollmeier [J. Acoust. Soc. Am. 100, 1703-1716 (1996)] and consists of psychoacoustically and physiologically motivated preprocessing and a simple dynamic-time-warp speech recognizer. The model is evaluated by presenting nonsense speech in a closed-set paradigm. Averaged phoneme recognition rates, specific phoneme recognition rates, and phoneme confusions are analyzed. The influence of different perceptual distance measures and of the model's a priori knowledge is investigated. The results show that human performance can be predicted by this model using an optimal detector, i.e., identical speech waveforms for both training of the recognizer and testing. The best model performance is obtained with distance measures that focus mainly on small perceptual distances and neglect outliers.
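The dynamic-time-warp recognizer named here compares an incoming feature sequence against stored templates by finding the lowest-cost monotonic alignment between them. A minimal sketch of the classic DP recurrence on 1-D feature sequences; the model itself operates on multichannel auditory representations, so this is only the template-matching core:

```python
def dtw_distance(a, b, dist=lambda x, y: abs(x - y)):
    """Dynamic-time-warp distance between two feature sequences.

    d[i][j] holds the cheapest alignment cost of a[:i] with b[:j];
    each cell extends the best of the three predecessor alignments.
    """
    inf = float("inf")
    n, m = len(a), len(b)
    d = [[inf] * (m + 1) for _ in range(n + 1)]
    d[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = dist(a[i - 1], b[j - 1])
            d[i][j] = cost + min(d[i - 1][j], d[i][j - 1], d[i - 1][j - 1])
    return d[n][m]
```

In a closed-set paradigm, the recognizer would label a test token with the phoneme whose template yields the smallest such distance.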

  17. Hearing aid processing strategies for listeners with different auditory profiles: Insights from the BEAR project

    DEFF Research Database (Denmark)

    Wu, Mengfan; El-Haj-Ali, Mouhamad; Sanchez Lopez, Raul

    hearing aid settings that differed in terms of signal-to-noise ratio (SNR) improvement and temporal and spectral speech distortions were selected for testing based on a comprehensive technical evaluation of different parameterisations of the hearing aid simulator. Speech-in-noise perception was assessed...... stimulus comparison paradigm. RESULTS We hypothesize that the perceptual outcomes from the six hearing aid settings will differ across listeners with different auditory profiles. More specifically, we expect listeners showing high sensitivity to temporal and spectral differences to perform best with and....../or to favour hearing aid settings that preserve those cues. In contrast, we expect listeners showing low sensitivity to temporal and spectral differences to perform best with and/or to favour settings that maximize SNR improvement, independent of any additional speech distortions. Altogether, we anticipate...

  18. Effects of sensorineural hearing loss on visually guided attention in a multitalker environment.

    Science.gov (United States)

    Best, Virginia; Marrone, Nicole; Mason, Christine R; Kidd, Gerald; Shinn-Cunningham, Barbara G

    2009-03-01

    This study asked whether listeners with sensorineural hearing loss have an impaired ability to use top-down attention to enhance speech intelligibility in the presence of interfering talkers. Listeners were presented with a target string of spoken digits embedded in a mixture of five spatially separated speech streams. The benefit of providing simple visual cues indicating when and/or where the target would occur was measured in listeners with hearing loss, listeners with normal hearing, and a control group of listeners with normal hearing who were tested at a lower target-to-masker ratio to equate their baseline (no cue) performance with that of the hearing-loss group. All groups received robust benefits from the visual cues. The magnitude of the spatial-cue benefit, however, was significantly smaller in listeners with hearing loss. Results suggest that reduced utility of selective attention for resolving competition between simultaneous sounds contributes to the communication difficulties experienced by listeners with hearing loss in everyday listening situations.

  19. Spectrotemporal modulation sensitivity for hearing-impaired listeners: dependence on carrier center frequency and the relationship to speech intelligibility.

    Science.gov (United States)

    Mehraei, Golbarg; Gallun, Frederick J; Leek, Marjorie R; Bernstein, Joshua G W

    2014-07-01

    Poor speech understanding in noise by hearing-impaired (HI) listeners is only partly explained by elevated audiometric thresholds. Suprathreshold-processing impairments such as reduced temporal or spectral resolution or temporal fine-structure (TFS) processing ability might also contribute. Although speech contains dynamic combinations of temporal and spectral modulation and TFS content, these capabilities are often treated separately. Modulation-depth detection thresholds for spectrotemporal modulation (STM) applied to octave-band noise were measured for normal-hearing and HI listeners as a function of temporal modulation rate (4-32 Hz), spectral ripple density [0.5-4 cycles/octave (c/o)] and carrier center frequency (500-4000 Hz). STM sensitivity was worse than normal for HI listeners only for a low-frequency carrier (1000 Hz) at low temporal modulation rates (4-12 Hz) and a spectral ripple density of 2 c/o, and for a high-frequency carrier (4000 Hz) at a high spectral ripple density (4 c/o). STM sensitivity for the 4-Hz, 4-c/o condition for a 4000-Hz carrier and for the 4-Hz, 2-c/o condition for a 1000-Hz carrier were correlated with speech-recognition performance in noise after partialling out the audiogram-based speech-intelligibility index. Poor speech-reception and STM-detection performance for HI listeners may be related to a combination of reduced frequency selectivity and a TFS-processing deficit limiting the ability to track spectral-peak movements.
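The STM stimuli described here impose a spectral ripple that drifts in time on an octave-band noise carrier. A rough stdlib-only sketch, summing random-phase tones whose envelopes follow the joint temporal (rate, Hz) and spectral (density, cycles/octave) modulation; the component count and all parameter defaults are illustrative assumptions, not the study's synthesis recipe:

```python
import math
import random

def stm_noise(dur=0.25, fs=16000, f_lo=2000.0, f_hi=4000.0,
              rate_hz=4.0, density_cpo=2.0, depth_m=1.0, n_comp=100, seed=0):
    """Octave-band noise carrying a drifting sinusoidal spectro-temporal ripple.

    Each random-phase component's envelope depends jointly on time (rate_hz)
    and on its position in log-frequency (density_cpo, cycles/octave).
    """
    rng = random.Random(seed)
    comps = []
    for _ in range(n_comp):
        f = f_lo * (f_hi / f_lo) ** rng.random()      # log-uniform frequency
        comps.append((f, rng.uniform(0.0, 2.0 * math.pi)))
    out = []
    for i in range(int(dur * fs)):
        t = i / fs
        s = 0.0
        for f, ph in comps:
            env = 1.0 + depth_m * math.sin(
                2.0 * math.pi * (rate_hz * t
                                 + density_cpo * math.log2(f / f_lo)))
            s += env * math.sin(2.0 * math.pi * f * t + ph)
        out.append(s / n_comp)
    return out
```

Setting `density_cpo=0` yields purely temporal modulation and `rate_hz=0` a static spectral ripple, which is how the joint STM task relates to the separate temporal and spectral measures mentioned above.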

  20. Listening Effort With Cochlear Implant Simulations

    NARCIS (Netherlands)

    Pals, Carina; Sarampalis, Anastasios; Başkent, Deniz

    2013-01-01

    Purpose: Fitting a cochlear implant (CI) for optimal speech perception does not necessarily optimize listening effort. This study aimed to show that listening effort may change between CI processing conditions for which speech intelligibility remains constant. Method: Nineteen normal-hearing

  1. Spectral Ripple Discrimination in Normal-Hearing Infants.

    Science.gov (United States)

    Horn, David L; Won, Jong Ho; Rubinstein, Jay T; Werner, Lynne A

    Spectral resolution is a correlate of open-set speech understanding in postlingually deaf adults and prelingually deaf children who use cochlear implants (CIs). To apply measures of spectral resolution to assess device efficacy in younger CI users, it is necessary to understand how spectral resolution develops in normal-hearing children. In this study, spectral ripple discrimination (SRD) was used to measure listeners' sensitivity to a shift in phase of the spectral envelope of a broadband noise. Both resolution of peak to peak location (frequency resolution) and peak to trough intensity (across-channel intensity resolution) are required for SRD. SRD was measured as the highest ripple density (in ripples per octave) for which a listener could discriminate a 90° shift in phase of the sinusoidally-modulated amplitude spectrum. A 2 × 3 between-subjects design was used to assess the effects of age (7-month-old infants versus adults) and ripple peak/trough "depth" (10, 13, and 20 dB) on SRD in normal-hearing listeners (experiment 1). In experiment 2, SRD thresholds in the same age groups were compared using a task in which ripple starting phases were randomized across trials to obscure within-channel intensity cues. In experiment 3, the randomized starting phase method was used to measure SRD as a function of age (3-month-old infants, 7-month-old infants, and young adults) and ripple depth (10 and 20 dB in repeated measures design). In experiment 1, there was a significant interaction between age and ripple depth. The infant SRDs were significantly poorer than the adult SRDs at 10 and 13 dB ripple depths but adult-like at 20 dB depth. This result is consistent with immature across-channel intensity resolution. In contrast, the trajectory of SRD as a function of depth was steeper for infants than adults suggesting that frequency resolution was better in infants than adults. However, in experiment 2 infant performance was significantly poorer than adults at 20 d

  2. Improving Mobile Phone Speech Recognition by Personalized Amplification: Application in People with Normal Hearing and Mild-to-Moderate Hearing Loss.

    Science.gov (United States)

    Kam, Anna Chi Shan; Sung, John Ka Keung; Lee, Tan; Wong, Terence Ka Cheong; van Hasselt, Andrew

    In this study, the authors evaluated the effect of personalized amplification on mobile phone speech recognition in people with and without hearing loss. This prospective study used double-blind, within-subjects, repeated-measures, controlled trials to evaluate the effectiveness of applying personalized amplification based on the hearing level captured on the mobile device. The personalized amplification settings were created using modified one-third gain targets. The participants in this study included 100 adults between 20 and 78 years of age (60 with age-adjusted normal hearing and 40 with hearing loss). The performance of the participants with the personalized amplification and standard settings was compared using both subjective and speech-perception measures. Speech recognition was measured in quiet and in noise using Cantonese disyllabic words. Subjective ratings of the quality, clarity, and comfort of the mobile signals were measured on an 11-point visual analog scale. Subjective preferences between the settings were also obtained using a paired-comparison procedure. The personalized amplification application provided better speech recognition via the mobile phone both in quiet and in noise for people with hearing impairment (improvements of 8 to 10%) and people with normal hearing (improvements of 1 to 4%). The improvement in speech recognition was significantly larger for people with hearing impairment. When the average device output level was matched, more participants preferred to have the individualized gain than not to have it. The personalized amplification application has the potential to improve speech recognition for people with mild-to-moderate hearing loss, as well as for people with normal hearing, particularly when listening in noisy environments.
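A one-third gain rule prescribes insertion gain proportional to the audiometric threshold at each frequency. A minimal sketch of the idea; the one-third factor follows the rule's name, while the output cap and the example thresholds are assumptions for illustration, not the study's actual modified targets:

```python
def one_third_gain(thresholds_db_hl, cap_db=30.0):
    """Prescribe per-frequency insertion gain as one third of the hearing
    threshold (dB HL), capped to keep output within a device's limits.

    `thresholds_db_hl` maps audiometric frequency (Hz) to threshold (dB HL);
    the 30 dB cap is a hypothetical device constraint, not from the study.
    """
    return {f: min(hl / 3.0, cap_db) for f, hl in thresholds_db_hl.items()}

# A 45 dB HL threshold at 1 kHz prescribes 15 dB of gain at that frequency.
gains = one_third_gain({1000: 45.0, 2000: 120.0})
```

The "modified" targets used in the study would further shape this gain-frequency response, e.g. to suit the phone's bandwidth and output limits.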

  3. Is it possible to improve hearing by listening training?

    DEFF Research Database (Denmark)

    Reuter Andersen, Karen

    2011-01-01

    confirmed these results using standardized hearing test measures. Dr. Alfred Tomatis, a French ear, nose, and throat doctor, developed the Tomatis listening training in the 1950s. The principles of the Tomatis method are described. A literature review was conducted to investigate whether the Tomatis method...

  4. Binaural speech discrimination under noise in hearing-impaired listeners

    Science.gov (United States)

    Kumar, K. V.; Rao, A. B.

    1988-01-01

    This paper presents the results of an assessment of speech discrimination by hearing-impaired listeners (sensorineural, conductive, and mixed groups) under binaural free-field listening in the presence of background noise. Subjects with pure-tone thresholds greater than 20 dB at 0.5, 1.0, and 2.0 kHz were presented with a version of the W-22 list of phonetically balanced words under three conditions: (1) 'quiet', with the chamber noise below 28 dB and speech at 60 dB; (2) at a constant S/N ratio of +10 dB, with background white noise at 70 dB; and (3) the same as condition (2), but with the background noise at 80 dB. The mean speech discrimination scores decreased significantly with noise in all groups. However, the decrease in binaural speech discrimination scores with increasing hearing impairment was smaller for material presented under the noise conditions than for material presented in quiet.

  5. Enhancing Auditory Selective Attention Using a Visually Guided Hearing Aid

    Science.gov (United States)

    2017-01-01

    Purpose Listeners with hearing loss, as well as many listeners with clinically normal hearing, often experience great difficulty segregating talkers in a multiple-talker sound field and selectively attending to the desired “target” talker while ignoring the speech from unwanted “masker” talkers and other sources of sound. This listening situation forms the classic “cocktail party problem” described by Cherry (1953) that has received a great deal of study over the past few decades. In this article, a new approach to improving sound source segregation and enhancing auditory selective attention is described. The conceptual design, current implementation, and results obtained to date are reviewed and discussed in this article. Method This approach, embodied in a prototype “visually guided hearing aid” (VGHA) currently used for research, employs acoustic beamforming steered by eye gaze as a means for improving the ability of listeners to segregate and attend to one sound source in the presence of competing sound sources. Results The results from several studies demonstrate that listeners with normal hearing are able to use an attention-based “spatial filter” operating primarily on binaural cues to selectively attend to one source among competing spatially distributed sources. Furthermore, listeners with sensorineural hearing loss generally are less able to use this spatial filter as effectively as are listeners with normal hearing especially in conditions high in “informational masking.” The VGHA enhances auditory spatial attention for speech-on-speech masking and improves signal-to-noise ratio for conditions high in “energetic masking.” Visual steering of the beamformer supports the coordinated actions of vision and audition in selective attention and facilitates following sound source transitions in complex listening situations. Conclusions Both listeners with normal hearing and with sensorineural hearing loss may benefit from the acoustic

  6. A comparison of the effects of filtering and sensorineural hearing loss on patterns of consonant confusions.

    Science.gov (United States)

    Wang, M D; Reed, C M; Bilger, R C

    1978-03-01

    It has been found that listeners with sensorineural hearing loss who show similar patterns of consonant confusions also tend to have similar audiometric profiles. The present study determined whether normal listeners, presented with filtered speech, would produce consonant confusions similar to those previously reported for hearing-impaired listeners. Consonant confusion matrices were obtained from eight normal-hearing subjects for four sets of CV and VC nonsense syllables presented under six high-pass and six low-pass filtering conditions. Patterns of consonant confusion for each condition were described using phonological features in a sequential information analysis. Severe low-pass filtering produced consonant confusions comparable to those of listeners with high-frequency hearing loss. Severe high-pass filtering gave results comparable to those of patients with flat or rising audiograms. Mild filtering resulted in confusion patterns comparable to those of listeners with essentially normal hearing. An explanation in terms of the spectrum and level of the speech and the configuration of the individual listener's audiogram is given.

  7. Audiometric Testing With Pulsed, Steady, and Warble Tones in Listeners With Tinnitus and Hearing Loss.

    Science.gov (United States)

    Lentz, Jennifer J; Walker, Matthew A; Short, Ciara E; Skinner, Kimberly G

    2017-09-18

    This study evaluated the American Speech-Language-Hearing Association's recommendation that audiometric testing for patients with tinnitus should use pulsed or warble tones. Using listeners with varied audiometric configurations and tinnitus statuses, we asked whether steady, pulsed, and warble tones yielded similar audiometric thresholds, and which tone type was preferred. Audiometric thresholds (octave frequencies from 0.25-16 kHz) were measured using steady, pulsed, and warble tones in 61 listeners, who were divided into 4 groups on the basis of hearing and tinnitus status. Participants rated the appeal and difficulty of each tone type on a 1-5 scale and selected a preferred type. For all groups, thresholds were lower for warble than for pulsed and steady tones, with the largest effects above 4 kHz. Appeal ratings did not differ across tone type, but the steady tone was rated as more difficult than the warble and pulsed tones. Participants generally preferred pulsed and warble tones. Pulsed tones provide advantages over steady and warble tones for patients regardless of hearing or tinnitus status. Although listeners preferred pulsed and warble tones to steady tones, pulsed tones are not susceptible to the effects of off-frequency listening, a consideration when testing listeners with sloping audiograms.
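The three tone types compared in this study differ only in their gating and frequency modulation. A sketch assuming typical clinical parameters (200-ms on/off gating for pulsed tones; 5-Hz, plus/minus 5% frequency modulation for warble tones); these parameter values are conventional assumptions, not taken from the study itself:

```python
import math

def audiometric_tone(kind, f=1000.0, dur=0.8, fs=16000):
    """Generate a 'steady', 'pulsed' (200 ms on / 200 ms off), or 'warble'
    (5 Hz FM, +/-5% excursion) test tone as a list of samples."""
    out = []
    for i in range(int(dur * fs)):
        t = i / fs
        if kind == "warble":
            # Phase of a sinusoid whose frequency swings +/-5% at a 5 Hz rate.
            phase = (2.0 * math.pi * f * t
                     - (0.05 * f / 5.0) * math.cos(2.0 * math.pi * 5.0 * t))
            out.append(math.sin(phase))
        elif kind == "pulsed":
            gate = 1.0 if (t % 0.4) < 0.2 else 0.0
            out.append(gate * math.sin(2.0 * math.pi * f * t))
        else:  # steady
            out.append(math.sin(2.0 * math.pi * f * t))
    return out
```

The warble tone spreads energy over a narrow frequency range, which is why it can engage off-frequency listening in sloping losses, whereas the pulsed tone keeps a fixed frequency and only gates the amplitude.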

  8. Using ILD or ITD Cues for Sound Source Localization and Speech Understanding in a Complex Listening Environment by Listeners with Bilateral and with Hearing-Preservation Cochlear Implants

    Science.gov (United States)

    Loiselle, Louise H.; Dorman, Michael F.; Yost, William A.; Cook, Sarah J.; Gifford, Rene H.

    2016-01-01

    Purpose: To assess the role of interaural time differences and interaural level differences in (a) sound-source localization, and (b) speech understanding in a cocktail party listening environment for listeners with bilateral cochlear implants (CIs) and for listeners with hearing-preservation CIs. Methods: Eleven bilateral listeners with MED-EL…

  9. Fine-structure processing, frequency selectivity and speech perception in hearing-impaired listeners

    DEFF Research Database (Denmark)

    Strelcyk, Olaf; Dau, Torsten

    2008-01-01

    Hearing-impaired people often experience great difficulty with speech communication when background noise is present, even if reduced audibility has been compensated for. Other impairment factors must be involved. In order to minimize confounding effects, the subjects participating in this study...... consisted of groups with homogeneous, symmetric audiograms. The perceptual listening experiments assessed the intelligibility of full-spectrum as well as low-pass filtered speech in the presence of stationary and fluctuating interferers, the individual's frequency selectivity and the integrity of temporal...... modulation were obtained. In addition, these binaural and monaural thresholds were measured in a stationary background noise in order to assess the persistence of the fine-structure processing to interfering noise. Apart from elevated speech reception thresholds, the hearing impaired listeners showed poorer...

  10. Looking Behavior and Audiovisual Speech Understanding in Children With Normal Hearing and Children With Mild Bilateral or Unilateral Hearing Loss.

    Science.gov (United States)

    Lewis, Dawna E; Smith, Nicholas A; Spalding, Jody L; Valente, Daniel L

    Visual information from talkers facilitates speech intelligibility for listeners when audibility is challenged by environmental noise and hearing loss. Less is known about how listeners actively process and attend to visual information from different talkers in complex multi-talker environments. This study tracked looking behavior in children with normal hearing (NH), mild bilateral hearing loss (MBHL), and unilateral hearing loss (UHL) in a complex multi-talker environment to examine the extent to which children look at talkers and whether looking patterns relate to performance on a speech-understanding task. It was hypothesized that performance would decrease as perceptual complexity increased and that children with hearing loss would perform more poorly than their peers with NH. Children with MBHL or UHL were expected to demonstrate greater attention to individual talkers during multi-talker exchanges, indicating that they were more likely to attempt to use visual information from talkers to assist in speech understanding in adverse acoustics. It also was of interest to examine whether MBHL, versus UHL, would differentially affect performance and looking behavior. Eighteen children with NH, eight children with MBHL, and 10 children with UHL participated (8-12 years). They followed audiovisual instructions for placing objects on a mat under three conditions: a single talker providing instructions via a video monitor, four possible talkers alternately providing instructions on separate monitors in front of the listener, and the same four talkers providing both target and nontarget information. Multi-talker background noise was presented at a 5 dB signal-to-noise ratio during testing. An eye tracker monitored looking behavior while children performed the experimental task. Behavioral task performance was higher for children with NH than for either group of children with hearing loss. There were no differences in performance between children with UHL and children
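    The 5 dB signal-to-noise ratio used in this task is obtained by scaling the masker relative to the target before mixing. A minimal sketch of that computation (illustrative only; the study's actual calibration procedure is not described in the abstract, and the signals below are random stand-ins):

```python
import numpy as np

def mix_at_snr(target, masker, snr_db):
    """Scale `masker` so that 10*log10(P_target / P_masker) equals `snr_db`,
    then return the mixture."""
    p_target = np.mean(target ** 2)
    p_masker = np.mean(masker ** 2)
    scale = np.sqrt(p_target / (p_masker * 10.0 ** (snr_db / 10.0)))
    return target + scale * masker

rng = np.random.default_rng(0)
speech = rng.standard_normal(16000)  # stand-in for a target speech signal
babble = rng.standard_normal(16000)  # stand-in for multi-talker background noise
mixture = mix_at_snr(speech, babble, 5.0)  # 5 dB SNR, as in the study
```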

  11. Development of a music perception test for adult hearing-aid users

    Directory of Open Access Journals (Sweden)

    Marinda Uys

    2011-11-01

    Full Text Available The purpose of this research was two-fold. Firstly to develop a music perception test for hearing aid users and secondly to evaluate the influence of non-linear frequency compression (NFC on music perception with the use of the self-compiled test. This article focuses on the description of the development and validation of a music perception test. To date, the main direction in frequency lowering hearing aid studies has been in relation to speech perception abilities. With improvements in hearing aid technology, interest in musical perception as a dimension that could improve hearing aid users’ quality of life grew. The Music Perception Test (MPT was designed to evaluate different aspects of rhythm, timbre, pitch and melody. The development of the MPT could be described as design based. Phase 1 of the study included test development and recording while Phase 2 entailed presentation of stimuli to normal hearing listeners (n=15 and hearing aid users (n=4. Based on the findings of Phase 2, item analysis was performed to eliminate or change stimuli that resulted in high error rates. During Phase 3 the adapted version of the test was performed on a smaller group of normal hearing listeners (n=4 and twenty hearing aid users. Results proved that normal hearing adults as well as adults using hearing aids were able to complete all the sub-tests of the MPT although hearing aid users scored less on the various sub-tests than normal hearing listeners. For the rhythm section of the MPT normal hearing listeners scored on average 93.8% versus 75.5% of hearing aid users and 83% for the timbre section compared to 62.3% by hearing aid users. Normal hearing listeners obtained an average score of 86.3% for the pitch section and 88.2% for the melody section compared to the 70.8% and 61.9% respectively obtained by hearing aid users. 
This indicates that the MPT can be used successfully for assessment of music perception in hearing aid users within the South African

  12. Hearing Impairment and Cognitive Energy: The Framework for Understanding Effortful Listening (FUEL).

    Science.gov (United States)

    Pichora-Fuller, M Kathleen; Kramer, Sophia E; Eckert, Mark A; Edwards, Brent; Hornsby, Benjamin W Y; Humes, Larry E; Lemke, Ulrike; Lunner, Thomas; Matthen, Mohan; Mackersie, Carol L; Naylor, Graham; Phillips, Natalie A; Richter, Michael; Rudner, Mary; Sommers, Mitchell S; Tremblay, Kelly L; Wingfield, Arthur

    2016-01-01

    The Fifth Eriksholm Workshop on "Hearing Impairment and Cognitive Energy" was convened to develop a consensus among interdisciplinary experts about what is known on the topic, gaps in knowledge, the use of terminology, priorities for future research, and implications for practice. The general term cognitive energy was chosen to facilitate the broadest possible discussion of the topic. It goes back to early work describing the effects of attention on perception, in which the term psychic energy was used for the notion that limited mental resources can be flexibly allocated among perceptual and mental activities. The workshop focused on three main areas: (1) theories, models, concepts, definitions, and frameworks; (2) methods and measures; and (3) knowledge translation. We defined effort as the deliberate allocation of mental resources to overcome obstacles in goal pursuit when carrying out a task, with listening effort applying more specifically when tasks involve listening. We adapted Kahneman's seminal (1973) Capacity Model of Attention to listening and proposed a heuristically useful Framework for Understanding Effortful Listening (FUEL). Our FUEL incorporates the well-known relationship between cognitive demand and the supply of cognitive capacity that is the foundation of cognitive theories of attention. Our FUEL also incorporates a motivation dimension based on complementary theories of motivational intensity, adaptive gain control, and optimal performance, fatigue, and pleasure. Using a three-dimensional illustration, we highlight how listening effort depends not only on hearing difficulties and task demands but also on the listener's motivation to expend mental effort in the challenging situations of everyday life.

  13. Spatial Release From Masking in Simulated Cochlear Implant Users With and Without Access to Low-Frequency Acoustic Hearing

    Directory of Open Access Journals (Sweden)

    Ben Williges

    2015-12-01

    For normal-hearing listeners, speech intelligibility improves if speech and noise are spatially separated. While this spatial release from masking has already been quantified in normal-hearing listeners in many studies, it is less clear how spatial release from masking changes in cochlear implant listeners with and without access to low-frequency acoustic hearing. Spatial release from masking depends on differences in access to speech cues due to hearing status and hearing device. To investigate the influence of these factors on speech intelligibility, the present study measured speech reception thresholds in spatially separated speech and noise for 10 different listener types. A vocoder was used to simulate cochlear implant processing and low-frequency filtering was used to simulate residual low-frequency hearing. These forms of processing were combined to simulate cochlear implant listening, listening based on low-frequency residual hearing, and combinations thereof. Simulated cochlear implant users with additional low-frequency acoustic hearing showed better speech intelligibility in noise than simulated cochlear implant users without acoustic hearing and had access to more spatial speech cues (e.g., higher binaural squelch). Cochlear implant listener types showed higher spatial release from masking with bilateral access to low-frequency acoustic hearing than without. A binaural speech intelligibility model with normal binaural processing showed overall good agreement with measured speech reception thresholds, spatial release from masking, and spatial speech cues. This indicates that differences in speech cues available to listener types are sufficient to explain the changes of spatial release from masking across these simulated listener types.
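    The simulation approach described here (vocoded cochlear implant processing plus low-pass filtering for residual acoustic hearing) can be illustrated with a minimal noise vocoder. The channel count, frequency range, filter orders, and cutoff below are assumptions chosen for illustration, not the parameters used in the study:

```python
import numpy as np
from scipy.signal import butter, hilbert, sosfiltfilt

def noise_vocoder(x, fs, n_channels=8, f_lo=100.0, f_hi=7000.0):
    """Minimal channel vocoder: split the input into log-spaced bands, extract
    each band's Hilbert envelope, and use it to modulate band-limited noise.
    This discards temporal fine structure, as cochlear implant processing does."""
    edges = np.geomspace(f_lo, f_hi, n_channels + 1)
    rng = np.random.default_rng(0)
    out = np.zeros(len(x))
    for lo, hi in zip(edges[:-1], edges[1:]):
        sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
        band = sosfiltfilt(sos, x)                    # analysis band
        envelope = np.abs(hilbert(band))              # temporal envelope
        carrier = sosfiltfilt(sos, rng.standard_normal(len(x)))
        out += envelope * carrier                     # envelope-modulated noise
    return out

def residual_hearing(x, fs, cutoff=500.0):
    """Low-pass filtering as a stand-in for residual low-frequency hearing."""
    sos = butter(4, cutoff, btype="lowpass", fs=fs, output="sos")
    return sosfiltfilt(sos, x)

fs = 16000
t = np.arange(fs) / fs
signal = np.sin(2 * np.pi * 440 * t)  # stand-in for a speech signal
simulated_ci = noise_vocoder(signal, fs)
simulated_combined = simulated_ci + residual_hearing(signal, fs)
```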

  14. The Danish hearing in noise test

    DEFF Research Database (Denmark)

    Nielsen, Jens Bo; Dau, Torsten

    2010-01-01

    Objective: A Danish version of the hearing in noise test (HINT) has been developed and evaluated in normal-hearing (NH) and hearing-impaired (HI) listeners. The speech material originated from Nielsen & Dau (2009) where a sentence-based intelligibility equalization method was presented. Design...

  15. Modeling consonant perception in normal-hearing listeners

    DEFF Research Database (Denmark)

    Zaar, Johannes; Jørgensen, Søren; Dau, Torsten

    2014-01-01

    Speech perception is often studied in terms of natural meaningful speech, i.e., by measuring the intelligibility of a given set of single words or full sentences. However, when trying to understand how background noise, various sorts of transmission channels (e.g., mobile phones) or hearing...... perception data: (i) an audibility-based approach, which corresponds to the Articulation Index (AI), and (ii) a modulation-masking based approach, as reflected in the speech-based Envelope Power Spectrum Model (sEPSM). For both models, the internal representations of the same stimuli as used...

  16. Sensorineural hearing loss degrades behavioral and physiological measures of human spatial selective auditory attention

    Science.gov (United States)

    Dai, Lengshi; Best, Virginia; Shinn-Cunningham, Barbara G.

    2018-01-01

    Listeners with sensorineural hearing loss often have trouble understanding speech amid other voices. While poor spatial hearing is often implicated, direct evidence is weak; moreover, studies suggest that reduced audibility and degraded spectrotemporal coding may explain such problems. We hypothesized that poor spatial acuity leads to difficulty deploying selective attention, which normally filters out distracting sounds. In listeners with normal hearing, selective attention causes changes in the neural responses evoked by competing sounds, which can be used to quantify the effectiveness of attentional control. Here, we used behavior and electroencephalography to explore whether control of selective auditory attention is degraded in hearing-impaired (HI) listeners. Normal-hearing (NH) and HI listeners identified a simple melody presented simultaneously with two competing melodies, each simulated from different lateral angles. We quantified performance and attentional modulation of cortical responses evoked by these competing streams. Compared with NH listeners, HI listeners had poorer sensitivity to spatial cues, performed more poorly on the selective attention task, and showed less robust attentional modulation of cortical responses. Moreover, across NH and HI individuals, these measures were correlated. While both groups showed cortical suppression of distracting streams, this modulation was weaker in HI listeners, especially when attending to a target at midline, surrounded by competing streams. These findings suggest that hearing loss interferes with the ability to filter out sound sources based on location, contributing to communication difficulties in social situations. These findings also have implications for technologies aiming to use neural signals to guide hearing aid processing. PMID:29555752

  17. Sources of variability in consonant perception of normal-hearing listeners

    DEFF Research Database (Denmark)

    Zaar, Johannes; Dau, Torsten

    2015-01-01

    between responses. The speech-induced variability across and within talkers and the across-listener variability were substantial and of similar magnitude. The noise-induced variability, obtained with time-shifted realizations of the same random process, was smaller but significantly larger than the amount......Responses obtained in consonant perception experiments typically show a large variability across stimuli of the same phonetic identity. The present study investigated the influence of different potential sources of this response variability. It was distinguished between source-induced variability......, referring to perceptual differences caused by acoustical differences in the speech tokens and/or the masking noise tokens, and receiver-related variability, referring to perceptual differences caused by within- and across-listener uncertainty. Consonant-vowel combinations consisting of 15 consonants...

  18. Enhancing Auditory Selective Attention Using a Visually Guided Hearing Aid

    Science.gov (United States)

    Kidd, Gerald, Jr.

    2017-01-01

    Purpose: Listeners with hearing loss, as well as many listeners with clinically normal hearing, often experience great difficulty segregating talkers in a multiple-talker sound field and selectively attending to the desired "target" talker while ignoring the speech from unwanted "masker" talkers and other sources of sound. This…

  19. Music to whose ears? The effect of social norms on young people's risk perceptions of hearing damage resulting from their music listening behavior.

    Science.gov (United States)

    Gilliver, Megan; Carter, Lyndal; Macoun, Denise; Rosen, Jenny; Williams, Warwick

    2012-01-01

    Professional and community concerns about the potentially dangerous noise levels of common leisure activities have led to increased interest in providing hearing health information to participants. However, noise reduction programmes aimed at leisure activities (such as music listening) face a unique difficulty. The noise source that is earmarked for reduction by hearing health professionals is often the same one that is viewed as pleasurable by participants. Furthermore, these activities often exist within a social setting, with additional peer influences that may influence behavior. The current study aimed to gain a better understanding of social-based factors that may influence an individual's motivation to engage in positive hearing health behaviors. Four hundred and eighty-four participants completed questionnaires examining their perceptions of the hearing risk associated with music listening and asking for estimates of their own and their peers' music listening behaviors. Participants were generally aware of the potential risk posed by listening to personal stereo players (PSPs) and the volumes likely to be most dangerous. Approximately one in five participants reported using listening volumes at levels perceived to be dangerous, an incidence rate in keeping with other studies measuring actual PSP use. However, participants showed less awareness of peers' behavior, consistently overestimating the volumes at which they believed their friends listened. Misperceptions of social norms relating to listening behavior may decrease individuals' perceptions of susceptibility to hearing damage. The consequences for hearing health promotion are discussed, along with suggestions relating to the development of new programs.

  20. An environment-adaptive management algorithm for hearing-support devices incorporating listening situation and noise type classifiers.

    Science.gov (United States)

    Yook, Sunhyun; Nam, Kyoung Won; Kim, Heepyung; Hong, Sung Hwa; Jang, Dong Pyo; Kim, In Young

    2015-04-01

    In order to provide more consistent sound intelligibility for the hearing-impaired person, regardless of environment, it is necessary to adjust the setting of the hearing-support (HS) device to accommodate various environmental circumstances. In this study, a fully automatic HS device management algorithm that can adapt to various environmental situations is proposed; it is composed of a listening-situation classifier, a noise-type classifier, an adaptive noise-reduction algorithm, and a management algorithm that can selectively turn on/off one or more of the three basic algorithms (beamforming, noise reduction, and feedback cancellation) and can also adjust internal gains and parameters of the wide-dynamic-range compression (WDRC) and noise-reduction (NR) algorithms in accordance with variations in environmental situations. Experimental results demonstrated that the implemented algorithms can classify both listening-situation and ambient-noise-type situations with high accuracies (92.8-96.4% and 90.9-99.4%, respectively), and the gains and parameters of the WDRC and NR algorithms were successfully adjusted according to variations in environmental situation. The average values of signal-to-noise ratio (SNR), frequency-weighted segmental SNR, Perceptual Evaluation of Speech Quality, and mean opinion test scores of 10 normal-hearing volunteers for the adaptive multiband spectral subtraction (MBSS) algorithm were improved by 1.74 dB, 2.11 dB, 0.49, and 0.68, respectively, compared to the conventional fixed-parameter MBSS algorithm. These results indicate that the proposed environment-adaptive management algorithm can be applied to HS devices to improve sound intelligibility for hearing-impaired individuals in various acoustic environments. Copyright © 2014 International Center for Artificial Organs and Transplantation and Wiley Periodicals, Inc.
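    The noise-reduction component here, multiband spectral subtraction (MBSS), builds on the basic spectral-subtraction idea: estimate the noise magnitude spectrum, subtract it from the noisy spectrum, and resynthesize with the noisy phase. A single-band, single-frame sketch follows; the over-subtraction factor, spectral floor, and flat noise estimate are illustrative assumptions, not the paper's parameters:

```python
import numpy as np

def spectral_subtract(frame, noise_mag, alpha=2.0, beta=0.02):
    """Basic magnitude spectral subtraction for one windowed frame:
    subtract an over-estimate (`alpha`) of the noise magnitude spectrum,
    floor the result at `beta` times the noisy magnitude to avoid negative
    magnitudes, and resynthesize using the noisy phase."""
    spec = np.fft.rfft(frame)
    mag, phase = np.abs(spec), np.angle(spec)
    clean_mag = np.maximum(mag - alpha * noise_mag, beta * mag)  # spectral floor
    return np.fft.irfft(clean_mag * np.exp(1j * phase), n=len(frame))

rng = np.random.default_rng(0)
n = 512
tone = np.sin(2 * np.pi * 0.05 * np.arange(n))          # stand-in for speech
noisy = tone + 0.3 * rng.standard_normal(n)
noise_mag = 0.3 * np.sqrt(n / 2) * np.ones(n // 2 + 1)  # assumed flat noise estimate
denoised = spectral_subtract(noisy, noise_mag)
```

MBSS applies this subtraction independently in several frequency bands with band-specific over-subtraction factors, which is what the adaptive algorithm above adjusts per environment.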

  1. Comparison of Gated Audiovisual Speech Identification in Elderly Hearing Aid Users and Elderly Normal-Hearing Individuals

    Science.gov (United States)

    Lidestam, Björn; Rönnberg, Jerker

    2016-01-01

    The present study compared elderly hearing aid (EHA) users (n = 20) with elderly normal-hearing (ENH) listeners (n = 20) in terms of isolation points (IPs, the shortest time required for correct identification of a speech stimulus) and accuracy of audiovisual gated speech stimuli (consonants, words, and final words in highly and less predictable sentences) presented in silence. In addition, we compared the IPs of audiovisual speech stimuli from the present study with auditory ones extracted from a previous study, to determine the impact of the addition of visual cues. Both participant groups achieved ceiling levels in terms of accuracy in the audiovisual identification of gated speech stimuli; however, the EHA group needed longer IPs for the audiovisual identification of consonants and words. The benefit of adding visual cues to auditory speech stimuli was more evident in the EHA group, as audiovisual presentation significantly shortened the IPs for consonants, words, and final words in less predictable sentences; in the ENH group, audiovisual presentation only shortened the IPs for consonants and words. In conclusion, although the audiovisual benefit was greater for the EHA group, this group had inferior performance compared with the ENH group in terms of IPs when supportive semantic context was lacking. Consequently, EHA users needed the initial part of the audiovisual speech signal to be longer than did their counterparts with normal hearing to reach the same level of accuracy in the absence of a semantic context. PMID:27317667

  2. Everyday listening questionnaire: correlation between subjective hearing and objective performance.

    Science.gov (United States)

    Brendel, Martina; Frohne-Buechner, Carolin; Lesinski-Schiedat, Anke; Lenarz, Thomas; Buechner, Andreas

    2014-01-01

    Clinical experience has demonstrated that speech understanding by cochlear implant (CI) recipients has improved over recent years with the development of new technology. The Everyday Listening Questionnaire 2 (ELQ 2) was designed to collect information regarding the challenges faced by CI recipients in everyday listening. The aim of this study was to compare self-assessment of CI users using ELQ 2 with objective speech recognition measures and to compare results between users of older and newer coding strategies. During their regular clinical review appointments a group of representative adult CI recipients implanted with the Advanced Bionics implant system were asked to complete the questionnaire. The first 100 patients who agreed to participate in this survey were recruited independent of processor generation and speech coding strategy. Correlations between subjectively scored hearing performance in everyday listening situations and objectively measured speech perception abilities were examined relative to the speech coding strategies used. When subjects were grouped by strategy there were significant differences between users of older 'standard' strategies and users of the newer, currently available strategies (HiRes and HiRes 120), especially in the categories of telephone use and music perception. Significant correlations were found between certain subjective ratings and the objective speech perception data in noise. There is a good correlation between subjective and objective data. Users of more recent speech coding strategies tend to have fewer problems in difficult hearing situations.

  3. Experience Changes How Emotion in Music Is Judged: Evidence from Children Listening with Bilateral Cochlear Implants, Bimodal Devices, and Normal Hearing.

    Directory of Open Access Journals (Sweden)

    Sara Giannantonio

    Children using unilateral cochlear implants abnormally rely on tempo rather than mode cues to distinguish whether a musical piece is happy or sad. This led us to question how this judgment is affected by the type of experience in early auditory development. We hypothesized that judgments of the emotional content of music would vary by the type and duration of access to sound in early life due to deafness, altered perception of musical cues through new ways of using auditory prostheses bilaterally, and formal music training during childhood. Seventy-five participants completed the Montreal Emotion Identification Test. Thirty-three had normal hearing (aged 6.6 to 40.0 years) and 42 children had hearing loss and used bilateral auditory prostheses (31 bilaterally implanted and 11 unilaterally implanted with contralateral hearing aid use). Reaction time and accuracy were measured. Accurate judgment of emotion in music was achieved across ages and musical experience. Musical training accentuated the reliance on mode cues which developed with age in the normal hearing group. Degrading pitch cues through cochlear implant-mediated hearing induced greater reliance on tempo cues, but mode cues grew in salience when at least partial acoustic information was available through some residual hearing in the contralateral ear. Finally, when pitch cues were experimentally distorted to represent cochlear implant hearing, individuals with normal hearing (including those with musical training) switched to an abnormal dependence on tempo cues. The data indicate that, in a western culture, access to acoustic hearing in early life promotes a preference for mode rather than tempo cues which is enhanced by musical training. The challenge to these preferred strategies during cochlear implant hearing (simulated and real), regardless of musical training, suggests that access to pitch cues for children with hearing loss must be improved by preservation of residual hearing and

  4. Experience Changes How Emotion in Music Is Judged: Evidence from Children Listening with Bilateral Cochlear Implants, Bimodal Devices, and Normal Hearing

    Science.gov (United States)

    Papsin, Blake C.; Paludetti, Gaetano; Gordon, Karen A.

    2015-01-01

    Children using unilateral cochlear implants abnormally rely on tempo rather than mode cues to distinguish whether a musical piece is happy or sad. This led us to question how this judgment is affected by the type of experience in early auditory development. We hypothesized that judgments of the emotional content of music would vary by the type and duration of access to sound in early life due to deafness, altered perception of musical cues through new ways of using auditory prostheses bilaterally, and formal music training during childhood. Seventy-five participants completed the Montreal Emotion Identification Test. Thirty-three had normal hearing (aged 6.6 to 40.0 years) and 42 children had hearing loss and used bilateral auditory prostheses (31 bilaterally implanted and 11 unilaterally implanted with contralateral hearing aid use). Reaction time and accuracy were measured. Accurate judgment of emotion in music was achieved across ages and musical experience. Musical training accentuated the reliance on mode cues which developed with age in the normal hearing group. Degrading pitch cues through cochlear implant-mediated hearing induced greater reliance on tempo cues, but mode cues grew in salience when at least partial acoustic information was available through some residual hearing in the contralateral ear. Finally, when pitch cues were experimentally distorted to represent cochlear implant hearing, individuals with normal hearing (including those with musical training) switched to an abnormal dependence on tempo cues. The data indicate that, in a western culture, access to acoustic hearing in early life promotes a preference for mode rather than tempo cues which is enhanced by musical training. The challenge to these preferred strategies during cochlear implant hearing (simulated and real), regardless of musical training, suggests that access to pitch cues for children with hearing loss must be improved by preservation of residual hearing and improvements in

  5. An Investigation of Spatial Hearing in Children with Normal Hearing and with Cochlear Implants and the Impact of Executive Function

    Science.gov (United States)

    Misurelli, Sara M.

    The ability to analyze an "auditory scene"---that is, to selectively attend to a target source while simultaneously segregating and ignoring distracting information---is one of the most important and complex skills utilized by normal hearing (NH) adults. The NH adult auditory system and brain work rather well to segregate auditory sources in adverse environments. However, for some children and individuals with hearing loss, selectively attending to one source in noisy environments can be extremely challenging. In a normal auditory system, information arriving at each ear is integrated, and thus these binaural cues aid in speech understanding in noise. A growing number of individuals who are deaf now receive cochlear implants (CIs), which supply hearing through electrical stimulation to the auditory nerve. In particular, bilateral cochlear implants (BiCIs) are now becoming more prevalent, especially in children. However, because CI sound processing lacks both fine structure cues and coordination between stimulation at the two ears, binaural cues may either be absent or inconsistent. For children with NH and with BiCIs, this difficulty in segregating sources is of particular concern because their learning and development commonly occurs within the context of complex auditory environments. This dissertation intends to explore and understand the ability of children with NH and with BiCIs to function in everyday noisy environments. The goals of this work are to (1) Investigate source segregation abilities in children with NH and with BiCIs; (2) Examine the effect of target-interferer similarity and the benefits of source segregation for children with NH and with BiCIs; (3) Investigate measures of executive function that may predict performance in complex and realistic auditory tasks of source segregation for listeners with NH; and (4) Examine source segregation abilities in NH listeners, from school-age to adults.

  6. Effects of Hearing Loss on Dual-Task Performance in an Audiovisual Virtual Reality Simulation of Listening While Walking.

    Science.gov (United States)

    Lau, Sin Tung; Pichora-Fuller, M Kathleen; Li, Karen Z H; Singh, Gurjit; Campos, Jennifer L

    2016-07-01

    Most activities of daily living require the dynamic integration of sights, sounds, and movements as people navigate complex environments. Nevertheless, little is known about the effects of hearing loss (HL) or hearing aid (HA) use on listening during multitasking challenges. The objective of the current study was to investigate the effect of age-related hearing loss (ARHL) on word recognition accuracy in a dual-task experiment. Virtual reality (VR) technologies in a specialized laboratory (Challenging Environment Assessment Laboratory) were used to produce a controlled and safe simulated environment for listening while walking. In a simulation of a downtown street intersection, participants completed two single-task conditions, listening-only (standing stationary) and walking-only (walking on a treadmill to cross the simulated intersection with no speech presented), and a dual-task condition (listening while walking). For the listening task, they were required to recognize words spoken by a target talker when there was a competing talker. For some blocks of trials, the target talker was always located at 0° azimuth (100% probability condition); for other blocks, the target talker was more likely (60% of trials) to be located at the center (0° azimuth) and less likely (40% of trials) to be located at the left (270° azimuth). The participants were eight older adults with bilateral HL (mean age = 73.3 yr, standard deviation [SD] = 8.4; three males) who wore their own HAs during testing and eight controls with normal hearing (NH) thresholds (mean age = 69.9 yr, SD = 5.4; two males). No participant had clinically significant visual, cognitive, or mobility impairments. Word recognition accuracy and kinematic parameters (head and trunk angles, step width and length, stride time, cadence) were analyzed using mixed factorial analyses of variance with group as a between-subjects factor. Task condition (single versus dual) and probability (100% versus 60%) were within

  7. Characterizing auditory processing and perception in individual listeners with sensorineural hearing loss

    DEFF Research Database (Denmark)

    Jepsen, Morten Løve; Dau, Torsten

    2011-01-01

    –438 (2008)] was used as a framework. The parameters of the cochlear processing stage of the model were adjusted to account for behaviorally estimated individual basilar-membrane input-output functions and the audiogram, from which the amounts of inner hair-cell and outer hair-cell losses were estimated......This study considered consequences of sensorineural hearing loss in ten listeners. The characterization of individual hearing loss was based on psychoacoustic data addressing audiometric pure-tone sensitivity, cochlear compression, frequency selectivity, temporal resolution, and intensity...

  8. Auditory brainstem response latency in forward masking, a marker of sensory deficits in listeners with normal hearing thresholds

    DEFF Research Database (Denmark)

    Mehraei, Golbarg; Paredes Gallardo, Andreu; Shinn-Cunningham, Barbara G.

    2017-01-01

    -spontaneous rate fibers results in a faster recovery of wave-V latency as the slow contribution of these fibers is reduced. Results showed that in young audiometrically normal listeners, a larger change in wave-V latency with increasing masker-to-probe interval was related to a greater effect of a preceding masker......-V latency changes with increasing masker-to-probe intervals. In the same listeners, behavioral forward masking detection thresholds were measured. We hypothesized that 1) auditory nerve fiber deafferentation increases forward masking thresholds and increases wave-V latency and 2) a preferential loss of low...

  9. Low empathy in deaf and hard of hearing (pre)adolescents compared to normal hearing controls.

    Directory of Open Access Journals (Sweden)

    Anouk P Netten

    Full Text Available The purpose of this study was to examine the level of empathy in deaf and hard of hearing (pre)adolescents compared to normal hearing controls and to define the influence of language and various hearing loss characteristics on the development of empathy. The study group (mean age 11.9 years) consisted of 122 deaf and hard of hearing children (52 children with cochlear implants and 70 children with conventional hearing aids) and 162 normal hearing children. The two groups were compared using self-reports, a parent-report and observation tasks to rate the children's level of empathy, their attendance to others' emotions, emotion recognition, and supportive behavior. Deaf and hard of hearing children reported lower levels of cognitive empathy and prosocial motivation than normal hearing children, regardless of their type of hearing device. The level of emotion recognition was equal in both groups. During observations, deaf and hard of hearing children showed more attention to the emotion evoking events but less supportive behavior compared to their normal hearing peers. Deaf and hard of hearing children attending mainstream education or using oral language show higher levels of cognitive empathy and prosocial motivation than deaf and hard of hearing children who use sign (supported) language or attend special education. However, they are still outperformed by normal hearing children. Deaf and hard of hearing children, especially those in special education, show lower levels of empathy than normal hearing children, which can have consequences for initiating and maintaining relationships.

  10. Temporal and speech processing skills in normal hearing individuals exposed to occupational noise.

    Science.gov (United States)

    Kumar, U Ajith; Ameenudin, Syed; Sangamanatha, A V

    2012-01-01

    Prolonged exposure to high levels of occupational noise can cause damage to hair cells in the cochlea and result in permanent noise-induced cochlear hearing loss. Consequences of cochlear hearing loss on speech perception and psychophysical abilities have been well documented. The primary goal of this research was to explore temporal processing and speech perception skills in individuals who are exposed to occupational noise of more than 80 dBA and have not yet incurred clinically significant threshold shifts. The contribution of temporal processing skills to speech perception in adverse listening situations was also evaluated. A total of 118 participants took part in this research. Participants comprised three groups of train drivers in the age ranges of 30-40 (n = 13), 41-50 (n = 9), and 51-60 (n = 6) years and their non-noise-exposed counterparts (n = 30 in each age group). Participants of all the groups, including the train drivers, had hearing sensitivity within 25 dB HL at the octave frequencies between 250 Hz and 8 kHz. Temporal processing was evaluated using gap detection, modulation detection, and duration pattern tests. Speech recognition was tested in the presence of multi-talker babble at -5 dB SNR. Differences between experimental and control groups were analyzed using ANOVA and independent-sample t-tests. Results showed a trend of reduced temporal processing skills in individuals with noise exposure. These deficits were observed despite normal peripheral hearing sensitivity. Speech recognition scores in the presence of noise were also significantly poorer in the noise-exposed group. Furthermore, poor temporal processing skills partially accounted for the speech recognition difficulties exhibited by the noise-exposed individuals. These results suggest that noise can cause significant distortions in the processing of suprathreshold temporal cues, which may add to difficulties in hearing in adverse listening conditions.
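    The -5 dB SNR babble condition above means the babble carried roughly three times the power of the speech. As a generic illustration of how such a condition can be constructed (not the authors' actual procedure), the sketch below scales a noise signal so that the speech-to-noise power ratio hits a target SNR; `mix_at_snr` is a hypothetical helper name.

```python
import numpy as np

def mix_at_snr(speech, noise, snr_db):
    """Scale `noise` so that 10*log10(P_speech / P_noise) == snr_db,
    then return speech + scaled noise. Illustrative helper only."""
    speech = np.asarray(speech, dtype=float)
    noise = np.asarray(noise, dtype=float)[: len(speech)]
    p_speech = np.mean(speech ** 2)   # average speech power
    p_noise = np.mean(noise ** 2)     # average noise power before scaling
    # Gain g such that p_speech / (g**2 * p_noise) == 10**(snr_db / 10)
    g = np.sqrt(p_speech / (p_noise * 10 ** (snr_db / 10)))
    return speech + g * noise

# Example: a 440-Hz tone standing in for speech, white noise for babble,
# mixed at -5 dB SNR (values are arbitrary illustrations)
rng = np.random.default_rng(0)
speech = np.sin(2 * np.pi * 440 * np.arange(16000) / 16000)
noise = rng.standard_normal(16000)
mix = mix_at_snr(speech, noise, -5.0)
```

    In a real speech-in-babble test the "speech" and "noise" arrays would be recorded sentence and babble waveforms; the scaling logic is the same.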

  11. Temporal and speech processing skills in normal hearing individuals exposed to occupational noise

    Directory of Open Access Journals (Sweden)

    U Ajith Kumar

    2012-01-01

    Full Text Available Prolonged exposure to high levels of occupational noise can cause damage to hair cells in the cochlea and result in permanent noise-induced cochlear hearing loss. Consequences of cochlear hearing loss on speech perception and psychophysical abilities have been well documented. The primary goal of this research was to explore temporal processing and speech perception skills in individuals who are exposed to occupational noise of more than 80 dBA and have not yet incurred clinically significant threshold shifts. The contribution of temporal processing skills to speech perception in adverse listening situations was also evaluated. A total of 118 participants took part in this research. Participants comprised three groups of train drivers in the age ranges of 30-40 (n = 13), 41-50 (n = 9), and 51-60 (n = 6) years and their non-noise-exposed counterparts (n = 30 in each age group). Participants of all the groups, including the train drivers, had hearing sensitivity within 25 dB HL at the octave frequencies between 250 Hz and 8 kHz. Temporal processing was evaluated using gap detection, modulation detection, and duration pattern tests. Speech recognition was tested in the presence of multi-talker babble at -5 dB SNR. Differences between experimental and control groups were analyzed using ANOVA and independent-sample t-tests. Results showed a trend of reduced temporal processing skills in individuals with noise exposure. These deficits were observed despite normal peripheral hearing sensitivity. Speech recognition scores in the presence of noise were also significantly poorer in the noise-exposed group. Furthermore, poor temporal processing skills partially accounted for the speech recognition difficulties exhibited by the noise-exposed individuals. These results suggest that noise can cause significant distortions in the processing of suprathreshold temporal cues, which may add to difficulties in hearing in adverse listening conditions.

  12. Binaural fusion and listening effort in children who use bilateral cochlear implants: a psychoacoustic and pupillometric study.

    Science.gov (United States)

    Steel, Morrison M; Papsin, Blake C; Gordon, Karen A

    2015-01-01

    Bilateral cochlear implants aim to provide hearing to both ears for children who are deaf and promote binaural/spatial hearing. Benefits are limited by mismatched devices and unilaterally-driven development which could compromise the normal integration of left and right ear input. We thus asked whether children hear a fused image (ie. 1 vs 2 sounds) from their bilateral implants and if this "binaural fusion" reduces listening effort. Binaural fusion was assessed by asking 25 deaf children with cochlear implants and 24 peers with normal hearing whether they heard one or two sounds when listening to bilaterally presented acoustic click-trains/electric pulses (250 Hz trains of 36 ms presented at 1 Hz). Reaction times and pupillary changes were recorded simultaneously to measure listening effort. Bilaterally implanted children heard one image of bilateral input less frequently than normal hearing peers, particularly when intensity levels on each side were balanced. Binaural fusion declined as brainstem asymmetries increased and age at implantation decreased. Children implanted later had access to acoustic input prior to implantation due to progressive deterioration of hearing. Increases in both pupil diameter and reaction time occurred as perception of binaural fusion decreased. Results indicate that, without binaural level cues, children have difficulty fusing input from their bilateral implants to perceive one sound which costs them increased listening effort. Brainstem asymmetries exacerbate this issue. By contrast, later implantation, reflecting longer access to bilateral acoustic hearing, may have supported development of auditory pathways underlying binaural fusion. Improved integration of bilateral cochlear implant signals for children is required to improve their binaural hearing.

  13. Characterising physician listening behaviour during hospitalist handoffs using the HEAR checklist.

    Science.gov (United States)

    Greenstein, Elizabeth A; Arora, Vineet M; Staisiunas, Paul G; Banerjee, Stacy S; Farnan, Jeanne M

    2013-03-01

    The increasing fragmentation of healthcare has resulted in more patient handoffs. Many professional groups, including the Accreditation Council for Graduate Medical Education and the Society of Hospital Medicine, have made recommendations for safe and effective handoffs. Despite the two-way nature of handoff communication, the focus of these efforts has largely been on the person giving information. To observe and characterise the listening behaviours of handoff receivers during hospitalist handoffs. Prospective observational study of shift change and service change handoffs on a non-teaching hospitalist service at a single academic tertiary care institution. The 'HEAR Checklist', a novel tool created based on a review of effective listening behaviours, was used by third-party observers to characterise active and passive listening behaviours and interruptions during handoffs. In 48 handoffs (25 shift change, 23 service change), active listening behaviours (eg, read-back (17%), note-taking (23%) and reading own copy of the written signout (27%)) occurred less frequently than passive listening behaviours (eg, affirmatory statements (56%), nodding (50%) and eye contact (58%)). Read-back occurred only eight times (17%). In 11 handoffs (23%) receivers took notes. Almost all (98%) handoffs were interrupted at least once, most often by side conversations, pagers going off, or clinicians arriving. Handoffs with more patients, such as service change, were associated with more interruptions (r=0.46). While passive listening behaviours are common, active listening behaviours that promote memory retention are rare. Handoffs are often interrupted, most commonly by side conversations. Future handoff improvement efforts should focus on augmenting listening and minimising interruptions.

  14. Low empathy in deaf and hard of hearing (pre)adolescents compared to normal hearing controls.

    Science.gov (United States)

    Netten, Anouk P; Rieffe, Carolien; Theunissen, Stephanie C P M; Soede, Wim; Dirks, Evelien; Briaire, Jeroen J; Frijns, Johan H M

    2015-01-01

    The purpose of this study was to examine the level of empathy in deaf and hard of hearing (pre)adolescents compared to normal hearing controls and to define the influence of language and various hearing loss characteristics on the development of empathy. The study group (mean age 11.9 years) consisted of 122 deaf and hard of hearing children (52 children with cochlear implants and 70 children with conventional hearing aids) and 162 normal hearing children. The two groups were compared using self-reports, a parent-report and observation tasks to rate the children's level of empathy, their attendance to others' emotions, emotion recognition, and supportive behavior. Deaf and hard of hearing children reported lower levels of cognitive empathy and prosocial motivation than normal hearing children, regardless of their type of hearing device. The level of emotion recognition was equal in both groups. During observations, deaf and hard of hearing children showed more attention to the emotion evoking events but less supportive behavior compared to their normal hearing peers. Deaf and hard of hearing children attending mainstream education or using oral language show higher levels of cognitive empathy and prosocial motivation than deaf and hard of hearing children who use sign (supported) language or attend special education. However, they are still outperformed by normal hearing children. Deaf and hard of hearing children, especially those in special education, show lower levels of empathy than normal hearing children, which can have consequences for initiating and maintaining relationships.

  15. Social inclusion for children with hearing loss in listening and spoken Language early intervention: an exploratory study.

    Science.gov (United States)

    Constantinescu-Sharpe, Gabriella; Phillips, Rebecca L; Davis, Aleisha; Dornan, Dimity; Hogan, Anthony

    2017-03-14

    Social inclusion is a common focus of listening and spoken language (LSL) early intervention for children with hearing loss. This exploratory study compared the social inclusion of young children with hearing loss educated using a listening and spoken language approach with population data. A framework for understanding the scope of social inclusion is presented in the Background. This framework guided the use of a shortened, modified version of the Longitudinal Study of Australian Children (LSAC) to measure two of the five facets of social inclusion ('education' and 'interacting with society and fulfilling social roles'). The survey was completed by parents of children with hearing loss aged 4-5 years who were educated using a LSL approach (n = 78; a 37% response rate). These responses were compared to those obtained for typical hearing children in the LSAC dataset (n = 3265). Analyses revealed that most children with hearing loss had comparable outcomes to those with typical hearing on the 'education' and 'interacting with society and fulfilling social roles' facets of social inclusion. These exploratory findings are positive and warrant further investigation across all five facets of the framework to identify which factors influence social inclusion.

  16. Low Empathy in Deaf and Hard of Hearing (Pre)Adolescents Compared to Normal Hearing Controls

    Science.gov (United States)

    Netten, Anouk P.; Rieffe, Carolien; Theunissen, Stephanie C. P. M.; Soede, Wim; Dirks, Evelien; Briaire, Jeroen J.; Frijns, Johan H. M.

    2015-01-01

    Objective The purpose of this study was to examine the level of empathy in deaf and hard of hearing (pre)adolescents compared to normal hearing controls and to define the influence of language and various hearing loss characteristics on the development of empathy. Methods The study group (mean age 11.9 years) consisted of 122 deaf and hard of hearing children (52 children with cochlear implants and 70 children with conventional hearing aids) and 162 normal hearing children. The two groups were compared using self-reports, a parent-report and observation tasks to rate the children’s level of empathy, their attendance to others’ emotions, emotion recognition, and supportive behavior. Results Deaf and hard of hearing children reported lower levels of cognitive empathy and prosocial motivation than normal hearing children, regardless of their type of hearing device. The level of emotion recognition was equal in both groups. During observations, deaf and hard of hearing children showed more attention to the emotion evoking events but less supportive behavior compared to their normal hearing peers. Deaf and hard of hearing children attending mainstream education or using oral language show higher levels of cognitive empathy and prosocial motivation than deaf and hard of hearing children who use sign (supported) language or attend special education. However, they are still outperformed by normal hearing children. Conclusions Deaf and hard of hearing children, especially those in special education, show lower levels of empathy than normal hearing children, which can have consequences for initiating and maintaining relationships. PMID:25906365

  17. Dynamic-range reduction by peak clipping or compression and its effects on phoneme perception in hearing-impaired listeners

    NARCIS (Netherlands)

    Dreschler, W. A.

    1988-01-01

    In this study, differences between dynamic-range reduction by peak clipping and single-channel compression for phoneme perception through conventional hearing aids have been investigated. The results from 16 hearing-impaired listeners show that compression limiting yields significantly better phoneme perception than peak clipping...

  18. Binaural Fusion and Listening Effort in Children Who Use Bilateral Cochlear Implants: A Psychoacoustic and Pupillometric Study

    Science.gov (United States)

    Steel, Morrison M.; Papsin, Blake C.; Gordon, Karen A.

    2015-01-01

    Bilateral cochlear implants aim to provide hearing to both ears for children who are deaf and promote binaural/spatial hearing. Benefits are limited by mismatched devices and unilaterally-driven development which could compromise the normal integration of left and right ear input. We thus asked whether children hear a fused image (ie. 1 vs 2 sounds) from their bilateral implants and if this “binaural fusion” reduces listening effort. Binaural fusion was assessed by asking 25 deaf children with cochlear implants and 24 peers with normal hearing whether they heard one or two sounds when listening to bilaterally presented acoustic click-trains/electric pulses (250 Hz trains of 36 ms presented at 1 Hz). Reaction times and pupillary changes were recorded simultaneously to measure listening effort. Bilaterally implanted children heard one image of bilateral input less frequently than normal hearing peers, particularly when intensity levels on each side were balanced. Binaural fusion declined as brainstem asymmetries increased and age at implantation decreased. Children implanted later had access to acoustic input prior to implantation due to progressive deterioration of hearing. Increases in both pupil diameter and reaction time occurred as perception of binaural fusion decreased. Results indicate that, without binaural level cues, children have difficulty fusing input from their bilateral implants to perceive one sound which costs them increased listening effort. Brainstem asymmetries exacerbate this issue. By contrast, later implantation, reflecting longer access to bilateral acoustic hearing, may have supported development of auditory pathways underlying binaural fusion. Improved integration of bilateral cochlear implant signals for children is required to improve their binaural hearing. PMID:25668423

  19. Binaural fusion and listening effort in children who use bilateral cochlear implants: a psychoacoustic and pupillometric study.

    Directory of Open Access Journals (Sweden)

    Morrison M Steel

    Full Text Available Bilateral cochlear implants aim to provide hearing to both ears for children who are deaf and promote binaural/spatial hearing. Benefits are limited by mismatched devices and unilaterally-driven development which could compromise the normal integration of left and right ear input. We thus asked whether children hear a fused image (ie. 1 vs 2 sounds) from their bilateral implants and if this "binaural fusion" reduces listening effort. Binaural fusion was assessed by asking 25 deaf children with cochlear implants and 24 peers with normal hearing whether they heard one or two sounds when listening to bilaterally presented acoustic click-trains/electric pulses (250 Hz trains of 36 ms presented at 1 Hz). Reaction times and pupillary changes were recorded simultaneously to measure listening effort. Bilaterally implanted children heard one image of bilateral input less frequently than normal hearing peers, particularly when intensity levels on each side were balanced. Binaural fusion declined as brainstem asymmetries increased and age at implantation decreased. Children implanted later had access to acoustic input prior to implantation due to progressive deterioration of hearing. Increases in both pupil diameter and reaction time occurred as perception of binaural fusion decreased. Results indicate that, without binaural level cues, children have difficulty fusing input from their bilateral implants to perceive one sound which costs them increased listening effort. Brainstem asymmetries exacerbate this issue. By contrast, later implantation, reflecting longer access to bilateral acoustic hearing, may have supported development of auditory pathways underlying binaural fusion. Improved integration of bilateral cochlear implant signals for children is required to improve their binaural hearing.

  20. Auditory brainstem response latency in forward masking, a marker of sensory deficits in listeners with normal hearing thresholds

    DEFF Research Database (Denmark)

    Mehraei, Golbarg; Paredes Gallardo, Andreu; Shinn-Cunningham, Barbara G.

    2017-01-01

    In rodent models, acoustic exposure too modest to elevate hearing thresholds can nonetheless cause auditory nerve fiber deafferentation, interfering with the coding of supra-threshold sound. Low-spontaneous rate nerve fibers, important for encoding acoustic information at supra-threshold levels and in noise, are more susceptible to degeneration than high-spontaneous rate fibers. The change in auditory brainstem response (ABR) wave-V latency with noise level has been shown to be associated with auditory nerve deafferentation. Here, we measured ABR in a forward masking paradigm and evaluated wave-V latency changes with increasing masker-to-probe intervals. In the same listeners, behavioral forward masking detection thresholds were measured. We hypothesized that 1) auditory nerve fiber deafferentation increases forward masking thresholds and increases wave-V latency and 2) a preferential loss of low-spontaneous rate fibers results in a faster recovery of wave-V latency as the slow contribution of these fibers is reduced...

  1. Recent concepts and challenges in hearing research

    DEFF Research Database (Denmark)

    Dau, Torsten

    In everyday life, the speech we listen to is often mixed with many other sound sources as well as reverberation. In such a situation, normal-hearing listeners are able to effortlessly segregate a single voice out of the background, which is commonly known as the 'cocktail party effect'. Conversely, hearing-impaired people have great difficulty understanding speech when more than one person is talking, even when reduced audibility has been fully compensated for by a hearing aid. As with the hearing impaired, the performance of automatic speech recognition systems deteriorates dramatically with additional sound sources. The reasons for these difficulties are not well understood. Only by obtaining a clearer understanding of the auditory system’s coding strategies will it be possible to design intelligent compensation algorithms for hearing devices. This presentation highlights recent concepts...

  2. Speech perception in noise in unilateral hearing loss

    OpenAIRE

    Mondelli, Maria Fernanda Capoani Garcia; dos Santos, Marina de Marchi; José, Maria Renata

    2016-01-01

    ABSTRACT INTRODUCTION: Unilateral hearing loss is characterized by a decrease of hearing in one ear only. In the presence of ambient noise, individuals with unilateral hearing loss are faced with greater difficulties understanding speech than normal listeners. OBJECTIVE: To evaluate the speech perception of individuals with unilateral hearing loss, with and without competing noise, before and after the hearing aid fitting process. METHODS: The study included 30 adults...

  3. Loudness of brief tones in listeners with normal hearing and sensorineural hearing loss

    DEFF Research Database (Denmark)

    Buus, Søren; Florentine, Mary; Poulsen, Torben

    1997-01-01

    To investigate how hearing loss affects the loudness of brief tones, loudness matches between 5- and 200-ms tones were obtained as a function of level. Loudness functions derived from these data indicated that the gain required to restore loudness usually is the same for short and long sounds....

  4. Effects of hearing loss and cognitive load on speech recognition with competing talkers

    Directory of Open Access Journals (Sweden)

    Hartmut Meister

    2016-03-01

    Full Text Available Everyday communication frequently comprises situations with more than one talker speaking at a time. These situations are challenging since they pose high attentional and memory demands, placing cognitive load on the listener. Hearing impairment additionally exacerbates communication problems under these circumstances. We examined the effects of hearing loss and attention tasks on speech recognition with competing talkers in older adults with and without hearing impairment. We hypothesized that hearing loss would affect word identification, talker separation and word recall and that the difficulties experienced by the hearing impaired listeners would be especially pronounced in a task with high attentional and memory demands. Two listener groups closely matched regarding their age and neuropsychological profile but differing in hearing acuity were examined regarding their speech recognition with competing talkers in two different tasks. One task required repeating back words from one target talker (1TT) while ignoring the competing talker, whereas the other required repeating back words from both talkers (2TT). The competing talkers differed with respect to their voice characteristics. Moreover, sentences either with low or high context were used in order to consider linguistic properties. Compared to their normal hearing peers, listeners with hearing loss revealed limited speech recognition in both tasks. Their difficulties were especially pronounced in the more demanding 2TT task. In order to shed light on the underlying mechanisms, different error sources, namely having misunderstood, confused, or omitted words, were investigated. Misunderstanding and omitting words were more frequently observed in the hearing impaired than in the normal hearing listeners. In line with common speech perception models, it is suggested that these effects are related to impaired object formation and taxed working memory capacity (WMC). In a post hoc analysis the...

  5. Memory performance on the Auditory Inference Span Test is independent of background noise type for young adults with normal hearing at high speech intelligibility.

    Science.gov (United States)

    Rönnberg, Niklas; Rudner, Mary; Lunner, Thomas; Stenfelt, Stefan

    2014-01-01

    Listening in noise is often perceived to be effortful. This is partly because cognitive resources are engaged in separating the target signal from background noise, leaving fewer resources for storage and processing of the content of the message in working memory. The Auditory Inference Span Test (AIST) is designed to assess listening effort by measuring the ability to maintain and process heard information. The aim of this study was to use AIST to investigate the effect of background noise types and signal-to-noise ratio (SNR) on listening effort, as a function of working memory capacity (WMC) and updating ability (UA). The AIST was administered in three types of background noise: steady-state speech-shaped noise, amplitude modulated speech-shaped noise, and unintelligible speech. Three SNRs targeting 90% speech intelligibility or better were used in each of the three noise types, giving nine different conditions. The reading span test assessed WMC, while UA was assessed with the letter memory test. Twenty young adults with normal hearing participated in the study. Results showed that AIST performance was not influenced by noise type at the same intelligibility level, but became worse with worse SNR when background noise was speech-like. Performance on AIST also decreased with increasing memory load level. Correlations between AIST performance and the cognitive measurements suggested that WMC is of more importance for listening when SNRs are worse, while UA is of more importance for listening in easier SNRs. The results indicated that in young adults with normal hearing, the effort involved in listening in noise at high intelligibility levels is independent of the noise type. However, when noise is speech-like and intelligibility decreases, listening effort increases, probably due to extra demands on cognitive resources added by the informational masking created by the speech fragments and vocal sounds in the background noise.

  6. Memory performance on the Auditory Inference Span Test is independent of background noise type for young adults with normal hearing at high speech intelligibility

    Directory of Open Access Journals (Sweden)

    Niklas Rönnberg

    2014-12-01

    Full Text Available Listening in noise is often perceived to be effortful. This is partly because cognitive resources are engaged in separating the target signal from background noise, leaving fewer resources for storage and processing of the content of the message in working memory. The Auditory Inference Span Test (AIST) is designed to assess listening effort by measuring the ability to maintain and process heard information. The aim of this study was to use AIST to investigate the effect of background noise types and signal-to-noise ratio (SNR) on listening effort, as a function of working memory capacity (WMC) and updating ability (UA). The AIST was administered in three types of background noise: steady-state speech-shaped noise, amplitude modulated speech-shaped noise, and unintelligible speech. Three SNRs targeting 90% speech intelligibility or better were used in each of the three noise types, giving nine different conditions. The reading span test assessed WMC, while UA was assessed with the letter memory test. Twenty young adults with normal hearing participated in the study. Results showed that AIST performance was not influenced by noise type at the same intelligibility level, but became worse with worse SNR when background noise was speech-like. Performance on AIST also decreased with increasing memory load level. Correlations between AIST performance and the cognitive measurements suggested that WMC is of more importance for listening when SNRs are worse, while UA is of more importance for listening in easier SNRs. The results indicated that in young adults with normal hearing, the effort involved in listening in noise at high intelligibility levels is independent of the noise type. However, when noise is speech-like and intelligibility decreases, listening effort increases, probably due to extra demands on cognitive resources added by the informational masking created by the speech fragments and vocal sounds in the background noise.

  7. L2 Learners' Engagement with High Stakes Listening Tests: Does Technology Have a Beneficial Role to Play?

    Science.gov (United States)

    East, Martin; King, Chris

    2012-01-01

    In the listening component of the IELTS examination candidates hear the input once, delivered at "normal" speed. This format for listening can be problematic for test takers who often perceive normal speed input to be too fast for effective comprehension. The study reported here investigated whether using computer software to slow down…

  8. Comparing auditory filter bandwidths, spectral ripple modulation detection, spectral ripple discrimination, and speech recognition: Normal and impaired hearing.

    Science.gov (United States)

    Davies-Venn, Evelyn; Nelson, Peggy; Souza, Pamela

    2015-07-01

    Some listeners with hearing loss show poor speech recognition scores in spite of using amplification that optimizes audibility. Beyond audibility, studies have suggested that suprathreshold abilities such as spectral and temporal processing may explain differences in amplified speech recognition scores. A variety of different methods has been used to measure spectral processing. However, the relationship between spectral processing and speech recognition is still inconclusive. This study evaluated the relationship between spectral processing and speech recognition in listeners with normal hearing and with hearing loss. Narrowband spectral resolution was assessed using auditory filter bandwidths estimated from simultaneous notched-noise masking. Broadband spectral processing was measured using the spectral ripple discrimination (SRD) task and the spectral modulation depth detection (SMD) task. Three different measures were used to assess unamplified and amplified speech recognition in quiet and noise. Stepwise multiple linear regression revealed that SMD at 2.0 cycles per octave (cpo) significantly predicted speech scores for amplified and unamplified speech in quiet and noise. Commonality analyses revealed that SMD at 2.0 cpo combined with SRD and equivalent rectangular bandwidth measures to explain most of the variance captured by the regression model. Results suggest that SMD and SRD may be promising clinical tools for diagnostic evaluation and predicting amplification outcomes.

  9. Interactions between amplitude modulation and frequency modulation processing: Effects of age and hearing loss.

    Science.gov (United States)

    Paraouty, Nihaad; Ewert, Stephan D; Wallaert, Nicolas; Lorenzi, Christian

    2016-07-01

    Frequency modulation (FM) and amplitude modulation (AM) detection thresholds were measured for a 500-Hz carrier frequency and a 5-Hz modulation rate. For AM detection, FM at the same rate as the AM was superimposed with varying FM depth. For FM detection, AM at the same rate was superimposed with varying AM depth. The target stimuli always contained both amplitude and frequency modulations, while the standard stimuli only contained the interfering modulation. Young and older normal-hearing listeners, as well as older listeners with mild-to-moderate sensorineural hearing loss were tested. For all groups, AM and FM detection thresholds were degraded in the presence of the interfering modulation. AM detection with and without interfering FM was hardly affected by either age or hearing loss. While aging had an overall detrimental effect on FM detection with and without interfering AM, there was a trend that hearing loss further impaired FM detection in the presence of AM. Several models using optimal combination of temporal-envelope cues at the outputs of off-frequency filters were tested. The interfering effects could only be predicted for hearing-impaired listeners. This indirectly supports the idea that, in addition to envelope cues resulting from FM-to-AM conversion, normal-hearing listeners use temporal fine-structure cues for FM detection.
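
The target stimuli, which carry both AM and FM on a tonal carrier, can be sketched as below. The carrier (500 Hz) and modulation rate (5 Hz) match the abstract; the modulation depths, sampling rate, and duration are illustrative assumptions, not the study's values.

```python
import numpy as np

def am_fm_tone(fc=500.0, fmod=5.0, m=0.3, beta=2.0, fs=16000, dur=1.0):
    """Sinusoidal carrier with superimposed sinusoidal AM (depth m) and
    FM (index beta = frequency deviation / modulation rate)."""
    t = np.arange(int(fs * dur)) / fs
    envelope = 1.0 + m * np.sin(2.0 * np.pi * fmod * t)                   # AM
    phase = 2.0 * np.pi * fc * t + beta * np.sin(2.0 * np.pi * fmod * t)  # FM
    return envelope * np.sin(phase)

# Target interval: both modulations present; standard interval: only the
# interfering modulation, e.g. FM alone (m=0) when measuring AM detection.
target = am_fm_tone(m=0.3, beta=2.0)
standard = am_fm_tone(m=0.0, beta=2.0)
```
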

  10. Auditory Discrimination of Lexical Stress Patterns in Hearing-Impaired Infants with Cochlear Implants Compared with Normal Hearing: Influence of Acoustic Cues and Listening Experience to the Ambient Language.

    Science.gov (United States)

    Segal, Osnat; Houston, Derek; Kishon-Rabin, Liat

    2016-01-01

    To assess discrimination of lexical stress patterns in infants with cochlear implants (CI) compared with infants with normal hearing (NH). While criteria for cochlear implantation have expanded to infants as young as 6 months, little is known regarding infants' processing of suprasegmental-prosodic cues, which are known to be important for the first stages of language acquisition. Lexical stress is an example of such a cue, which, in hearing infants, has been shown to assist in segmenting words from fluent speech and in distinguishing between words that differ only in their stress pattern. To date, however, there are no data on the ability of infants with CIs to perceive lexical stress. Such information will provide insight into the speech characteristics that are available to these infants in their first steps of language acquisition. This is of particular interest given the known limitations of the CI device in transmitting speech information that is mediated by changes in fundamental frequency. Two groups of infants participated in this study. The first group included 20 profoundly hearing-impaired infants with CI, 12 to 33 months old, implanted under the age of 2.5 years (median age at implantation = 14.5 months), with 1 to 6 months of CI use (mean = 2.7 months) and no known additional problems. The second group included 48 NH infants, 11 to 14 months old, with normal development and no known risk factors for developmental delays. Infants were tested on their ability to discriminate between nonsense words that differed only in their stress pattern (/dóti/ versus /dotí/ and /dotí/ versus /dóti/) using the visual habituation procedure. The measure of discrimination was the change in looking time between the last habituation trial (e.g., /dóti/) and the novel trial (e.g., /dotí/). (1) Infants with CI showed discrimination between lexical stress patterns with only limited auditory experience with their implant device, (2) discrimination of stress

  11. Advantages of binaural amplification to acceptable noise level of directional hearing aid users.

    Science.gov (United States)

    Kim, Ja-Hee; Lee, Jae Hee; Lee, Ho-Ki

    2014-06-01

    The goal of the present study was to examine whether Acceptable Noise Levels (ANLs) would be lower (greater acceptance of noise) in the binaural than in the monaural listening condition, and whether the meaningfulness of background speech noise would affect ANLs for directional microphone hearing aid users. In addition, relationships between individual binaural benefits in ANL and the individuals' demographic characteristics were investigated. Fourteen hearing aid users (mean age, 64 years) participated in experimental testing. For the ANL calculation, listeners' most comfortable listening levels and background noise levels were measured. Using Korean ANL material, ANLs of all participants were evaluated under monaural and binaural amplification in a counterbalanced order. The ANLs were also compared across five types of competing speech noise, consisting of 1- through 8-talker background speech maskers. Seven young normal-hearing listeners (mean age, 27 years) completed the same measurements as pilot testing. The results demonstrated that directional hearing aid users accepted more noise (lower ANLs) with binaural than with monaural amplification, regardless of the type of competing speech. When the background speech noise became more meaningful, hearing-impaired listeners accepted less noise (higher ANLs), revealing that ANL depends on the intelligibility of the competing speech. Individual binaural advantages in ANL were significantly greater for listeners with longer hearing aid experience but were not related to age or hearing thresholds. Binaural directional microphone processing allowed hearing aid users to accept a greater amount of background noise, which may in turn improve hearing aid success. Informational masking substantially influenced background noise acceptance. Given the significant association between ANLs and duration of hearing aid use, ANL measurement can be useful for
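
By its standard definition, the ANL is the listener's most comfortable listening level (MCL) minus the highest background noise level (BNL) they will accept. A minimal sketch with hypothetical dB values (the numbers are illustrative, not data from the study):

```python
def anl(mcl_db: float, bnl_db: float) -> float:
    """Acceptable Noise Level = most comfortable listening level (dB)
    minus highest accepted background noise level (dB).
    A lower ANL means greater acceptance of noise."""
    return mcl_db - bnl_db

# Hypothetical listener: the binaural benefit is the drop in ANL
# relative to monaural amplification.
anl_monaural = anl(mcl_db=65.0, bnl_db=55.0)    # 10 dB
anl_binaural = anl(mcl_db=65.0, bnl_db=58.0)    # 7 dB
binaural_benefit = anl_monaural - anl_binaural  # 3 dB
```
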

  12. Auditory perceptual learning in adults with and without age-related hearing loss

    Directory of Open Access Journals (Sweden)

    Hanin Karawani

    2016-02-01

    Introduction: Speech recognition in adverse listening conditions becomes more difficult as we age, particularly for individuals with age-related hearing loss (ARHL). Whether these difficulties can be eased with training remains debated, because it is not clear whether the outcomes are sufficiently general to be of use outside of the training context. The aim of the current study was to compare training-induced learning and generalization between normal-hearing older adults and those with ARHL. Methods: 56 listeners (60-72 y/o; 35 participants with ARHL and 21 normal-hearing adults) participated in the study. The study used a crossover design with three groups (immediate-training, delayed-training, and no-training). Trained participants received 13 sessions of home-based auditory training over the course of 4 weeks. Three adverse listening conditions were targeted: (1) speech in noise, (2) time-compressed speech, and (3) competing speakers, and the outcomes of training were compared between the normal-hearing and ARHL groups. Pre- and post-test sessions were completed by all participants. Outcome measures included tests on all of the trained conditions as well as on a series of untrained conditions designed to assess the transfer of learning to other speech and non-speech conditions. Results: Significant improvements on all trained conditions were observed in both the ARHL and normal-hearing groups over the course of training. Normal-hearing participants learned more than participants with ARHL in the speech-in-noise condition, but showed similar patterns of learning in the other conditions. Greater pre- to post-test changes were observed in trained than in untrained listeners on all trained conditions. In addition, the ability of trained listeners from the ARHL group to discriminate minimally different pseudowords in noise also improved with training. Conclusions: ARHL did not preclude auditory perceptual learning but there was little generalization to

  13. Relating binaural pitch perception to the individual listener's auditory profile.

    Science.gov (United States)

    Santurette, Sébastien; Dau, Torsten

    2012-04-01

    The ability of eight normal-hearing listeners and fourteen listeners with sensorineural hearing loss to detect and identify pitch contours was measured for binaural-pitch stimuli and salience-matched monaurally detectable pitches. In an effort to determine whether impaired binaural pitch perception was linked to a specific deficit, the auditory profiles of the individual listeners were characterized using measures of loudness perception, cognitive ability, binaural processing, temporal fine structure processing, and frequency selectivity, in addition to common audiometric measures. Two of the listeners were found not to perceive binaural pitch at all, despite a clear detection of monaural pitch. While both binaural and monaural pitches were detectable by all other listeners, identification scores were significantly lower for binaural than for monaural pitch. A total absence of binaural pitch sensation coexisted with a loss of a binaural signal-detection advantage in noise, without implying reduced cognitive function. Auditory filter bandwidths did not correlate with the difference in pitch identification scores between binaural and monaural pitches. However, subjects with impaired binaural pitch perception showed deficits in temporal fine structure processing. Whether the observed deficits stemmed from peripheral or central mechanisms could not be resolved here, but the present findings may be useful for hearing loss characterization.
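
One classic binaural-pitch stimulus of the kind studied here is the Huggins pitch: identical noise in the two ears except for an interaural phase transition in a narrow band, which evokes a pitch near the transition frequency even though neither ear alone carries a spectral cue. A minimal construction sketch, with illustrative parameters (not necessarily those used in the study):

```python
import numpy as np

def huggins_pitch(f0=600.0, rel_bw=0.16, fs=16000, n=16384, seed=0):
    """Dichotic noise pair for a Huggins binaural pitch near f0:
    the right-ear signal equals the left-ear broadband noise except
    for a 180-degree phase shift in a band of width rel_bw * f0."""
    rng = np.random.default_rng(seed)
    left = rng.standard_normal(n)
    spec = np.fft.rfft(left)
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    band = np.abs(freqs - f0) < 0.5 * rel_bw * f0
    spec[band] *= -1.0  # pi interaural phase shift in the band
    right = np.fft.irfft(spec, n)
    return left, right

left, right = huggins_pitch()
```

Monaurally, each channel is just noise; only the interaural comparison reveals the pitch, which is why its absence in some hearing-impaired listeners points to a binaural processing deficit rather than reduced audibility.
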

  14. The effect of extending high-frequency bandwidth on the acceptable noise level (ANL) of hearing-impaired listeners.

    Science.gov (United States)

    Johnson, Earl; Ricketts, Todd; Hornsby, Benjamin

    2009-01-01

    This study examined the effects of extending high-frequency bandwidth, for both a speech signal and a background noise, on the acceptable signal-to-noise ratio (SNR) of listeners with mild sensorineural hearing loss through utilization of the Acceptable Noise Level (ANL) procedure. In addition to extending high-frequency bandwidth, the effects of reverberation time and background noise type and shape were also examined. The study results showed a significant increase in the mean ANL (i.e. participants requested a better SNR for an acceptable listening situation) when high-frequency bandwidth was extended from 3 to 9 kHz and from 6 to 9 kHz. No change in the ANL of study participants was observed as a result of isolated modification to reverberation time or background noise stimulus. An interaction effect, however, of reverberation time and background noise stimulus was demonstrated. These findings may have implications for future design of hearing aid memory programs for listening to speech in the presence of broadband background noise.

  15. Auditory and language outcomes in children with unilateral hearing loss.

    Science.gov (United States)

    Fitzpatrick, Elizabeth M; Gaboury, Isabelle; Durieux-Smith, Andrée; Coyle, Doug; Whittingham, JoAnne; Nassrallah, Flora

    2018-03-13

    Children with unilateral hearing loss (UHL) are being diagnosed at younger ages because of newborn hearing screening. Historically, they have been considered at risk for difficulties in listening and language development. Little information is available on contemporary cohorts of children identified in the early months of life. We examined auditory and language acquisition outcomes in a contemporary cohort of early-identified children with UHL and compared their outcomes at preschool age with peers with mild bilateral loss and with normal hearing. As part of the Mild and Unilateral Hearing Loss in Children Study, we collected auditory and spoken language outcomes on children with unilateral hearing loss, mild bilateral hearing loss, and normal hearing over a four-year period. This report provides a cross-sectional analysis of results at age 48 months. A total of 120 children (38 unilateral, 31 mild bilateral, 51 normal hearing) were enrolled in the study from 2010 to 2015. Children entered the study at varying ages between 12 and 36 months and were followed until age 36-48 months. The median age of identification of hearing loss was 3.4 months (IQR: 2.0, 5.5) for the unilateral group and 3.6 months (IQR: 2.7, 5.9) for the mild bilateral group. Families completed an intake form at enrolment to provide baseline child- and family-related characteristics. Data on amplification fitting and use were collected via parent questionnaires at each annual assessment interval. This study involved a range of auditory development and language measures. For this report, we focus on the end-of-follow-up results from two auditory development questionnaires and three standardized speech-language assessments. Assessments included in this report were completed at a median age of 47.8 months (IQR: 38.8, 48.5). Using ANOVA, we examined auditory and language outcomes in children with UHL and compared their scores to children with mild bilateral hearing loss and those with normal hearing. On most

  16. Can We Teach Effective Listening? An Exploratory Study

    Science.gov (United States)

    Caspersz, Donella; Stasinska, Ania

    2015-01-01

    Listening is not the same as hearing. While hearing is a physiological process, listening is a conscious process that requires us to be mentally attentive (Low & Sonntag, 2013). The obvious place for scholarship about listening is in communication studies. While interested in listening, the focus of this study is on effective listening.…

  17. Early Radiosurgery Improves Hearing Preservation in Vestibular Schwannoma Patients With Normal Hearing at the Time of Diagnosis

    Energy Technology Data Exchange (ETDEWEB)

    Akpinar, Berkcan [University of Pittsburgh School of Medicine, Pittsburgh, Pennsylvania (United States); Mousavi, Seyed H., E-mail: mousavish@upmc.edu [Department of Neurological Surgery, University of Pittsburgh Medical Center, Pittsburgh, Pennsylvania (United States); McDowell, Michael M.; Niranjan, Ajay; Faraji, Amir H. [Department of Neurological Surgery, University of Pittsburgh Medical Center, Pittsburgh, Pennsylvania (United States); Flickinger, John C. [Department of Radiation Oncology, University of Pittsburgh Medical Center, Pittsburgh, Pennsylvania (United States); Lunsford, L. Dade [Department of Neurological Surgery, University of Pittsburgh Medical Center, Pittsburgh, Pennsylvania (United States)

    2016-06-01

    Purpose: Vestibular schwannomas (VS) are increasingly diagnosed in patients with normal hearing because of advances in magnetic resonance imaging. We sought to evaluate whether stereotactic radiosurgery (SRS) performed earlier after diagnosis improved long-term hearing preservation in this population. Methods and Materials: We queried our quality assessment registry and found the records of 1134 acoustic neuroma patients who underwent SRS during a 15-year period (1997-2011). We identified 88 patients who had VS but normal hearing with no subjective hearing loss at the time of diagnosis. All patients were Gardner-Robertson (GR) class I at the time of SRS. Fifty-seven patients underwent early (≤2 years from diagnosis) SRS and 31 patients underwent late (>2 years after diagnosis) SRS. At a median follow-up time of 75 months, we evaluated patient outcomes. Results: Tumor control rates (decreased or stable in size) were similar in the early (95%) and late (90%) treatment groups (P=.73). Patients in the early treatment group retained serviceable (GR class I/II) hearing and normal (GR class I) hearing longer than did patients in the late treatment group (serviceable hearing, P=.006; normal hearing, P<.0001, respectively). At 5 years after SRS, an estimated 88% of the early treatment group retained serviceable hearing and 77% retained normal hearing, compared with 55% with serviceable hearing and 33% with normal hearing in the late treatment group. Conclusions: SRS within 2 years after diagnosis of VS in normal hearing patients resulted in improved retention of all hearing measures compared with later SRS.


  19. Predicting the benefit of binaural cue preservation in bilateral directional processing schemes for listeners with impaired hearing

    DEFF Research Database (Denmark)

    Brand, Thomas; Hauth, Christopher; Wagener, Kirsten C.

    2018-01-01

    Linked pairs of hearing aids offer various possibilities for directional processing providing adjustable trade-off between improving signal-to-noise ratio and preserving binaural listening. The benefit depends on the processing scheme, the acoustic scenario, and the listener’s ability to exploit...... fine structure. BSIM revealed a benefit due to binaural processing in well-performing listeners when processing provided low-frequency interaural timing cues....

  20. Speech Perception in Noise in Normally Hearing Children: Does Binaural Frequency Modulated Fitting Provide More Benefit than Monaural Frequency Modulated Fitting?

    Science.gov (United States)

    Mukari, Siti Zamratol-Mai Sarah; Umat, Cila; Razak, Ummu Athiyah Abdul

    2011-07-01

    The aim of the present study was to compare the benefit of monaural versus binaural ear-level frequency modulated (FM) fitting on speech perception in noise in children with normal hearing. Reception threshold for sentences (RTS) was measured in no-FM, monaural FM, and binaural FM conditions in 22 normally developing children with bilateral normal hearing, aged 8 to 9 years old. Data were gathered using the Pediatric Malay Hearing in Noise Test (P-MyHINT) with speech presented from the front and multi-talker babble presented from 90°, 180°, and 270° azimuths in a sound-treated booth. The results revealed that the use of either monaural or binaural ear-level FM receivers provided significantly better mean RTSs than the no-FM condition. Binaural FM, however, did not produce a significantly greater benefit in mean RTS than monaural fitting. The benefit of binaural over monaural FM varied across individuals; while binaural fitting provided better RTSs in about 50% of study subjects, there were those in whom binaural fitting resulted in either deterioration or no additional improvement compared to monaural FM fitting. The present study suggests that the use of monaural ear-level FM receivers in children with normal hearing might provide similar benefit to binaural use. Individual variation in binaural FM benefit over monaural FM suggests that the decision to employ monaural or binaural fitting should be individualized. It should be noted, however, that the current study recruited typically developing children with normal hearing. Future studies involving children with normal hearing who are at high risk of difficulty listening in noise are indicated to see whether similar findings are obtained.

  1. Consequences of Early Conductive Hearing Loss on Long-Term Binaural Processing.

    Science.gov (United States)

    Graydon, Kelley; Rance, Gary; Dowell, Richard; Van Dun, Bram

    The aim of the study was to investigate the long-term effects of early conductive hearing loss on binaural processing in school-age children. One hundred and eighteen children participated in the study: 82 children with a documented history of conductive hearing loss associated with otitis media and 36 controls with documented histories showing no evidence of otitis media or conductive hearing loss. All children were demonstrated to have normal hearing acuity and middle ear function at the time of assessment. The Listening in Spatialized Noise Sentence (LiSN-S) task and the masking level difference (MLD) task were used as two different measures of binaural interaction ability. Children with a history of conductive hearing loss performed significantly poorer than controls on all LiSN-S conditions relying on binaural cues (DV90). Fifteen children with a conductive hearing loss history (18%) showed results consistent with a spatial processing disorder. No significant difference was observed between the conductive hearing loss group and the controls on the MLD task. Furthermore, no correlations were found between LiSN-S and MLD. Results show a relationship between early conductive hearing loss and listening deficits that persist once hearing has returned to normal. Results also suggest that the two binaural interaction tasks (LiSN-S and MLD) may be measuring binaural processing at different levels. Findings highlight the need for a screening measure of functional listening ability in children with a history of early otitis media.

  2. Active Listening Improve Your Ability to Listen and Lead

    CERN Document Server

    (CCL), Center for Creative Leadership

    2011-01-01

    Active listening is a person's willingness and ability to hear and understand. At its core, active listening is a state of mind that involves paying full and careful attention to the other person, avoiding premature judgment, reflecting understanding, clarifying information, summarizing, and sharing. By learning and committing to the skills and behaviors of active listening, leaders can become more effective listeners and, over time, improve their ability to lead.

  3. Students with Hearing Loss and Their Teachers' View on Factors Associated with the Students' Listening Perception of Classroom Communication

    Science.gov (United States)

    Rekkedal, Ann Mette

    2015-01-01

    This study investigates factors associated with the listening perception of classroom communication by students with hearing loss, based on the students' and their teachers' views. It also examines how students with different degrees of hearing loss may perceive their classmates. To explore the relationships between the factors Structural Equation…

  4. Peripheral auditory processing and speech reception in impaired hearing

    DEFF Research Database (Denmark)

    Strelcyk, Olaf

    One of the most common complaints of people with impaired hearing concerns their difficulty with understanding speech. Particularly in the presence of background noise, hearing-impaired people often encounter great difficulties with speech communication. In most cases, the problem persists even...... if reduced audibility has been compensated for by hearing aids. It has been hypothesized that part of the difficulty arises from changes in the perception of sounds that are well above hearing threshold, such as reduced frequency selectivity and deficits in the processing of temporal fine structure (TFS......) at the output of the inner-ear (cochlear) filters. The purpose of this work was to investigate these aspects in detail. One chapter studies relations between frequency selectivity, TFS processing, and speech reception in listeners with normal and impaired hearing, using behavioral listening experiments. While...

  5. Recognition of "real-world" musical excerpts by cochlear implant recipients and normal-hearing adults.

    Science.gov (United States)

    Gfeller, Kate; Olszewski, Carol; Rychener, Marly; Sena, Kimberly; Knutson, John F; Witt, Shelley; Macpherson, Beth

    2005-06-01

    The purposes of this study were (a) to compare recognition of "real-world" music excerpts by postlingually deafened adults using cochlear implants and normal-hearing adults; (b) to compare the performance of cochlear implant recipients using different devices and processing strategies; and (c) to examine the variability among implant recipients in recognition of musical selections in relation to performance on speech perception tests, performance on cognitive tests, and demographic variables. Seventy-nine cochlear implant users and 30 normal-hearing adults were tested on open-set recognition of systematically selected excerpts from musical recordings heard in real life. The recognition accuracy of the two groups was compared for three musical genres: classical, country, and pop. Recognition accuracy was correlated with speech recognition scores, cognitive measures, and demographic measures, including musical background. Cochlear implant recipients were significantly less accurate than normal-hearing adults in recognition of previously familiar (known before hearing loss) musical excerpts in each genre. Implant recipients were most accurate in the recognition of country items and least accurate in the recognition of classical items. There were no significant differences among implant recipients due to implant type (Nucleus, Clarion, or Ineraid) or programming strategy (SPEAK, CIS, or ACE). For cochlear implant recipients, correlations between melody recognition and other measures were moderate to weak in strength; those with statistically significant correlations included age at time of testing (negatively correlated), performance on selected speech perception tests, and the amount of focused music listening following implantation. Current-day cochlear implants are not effective in transmitting several key structural features (i.e., pitch, harmony, timbral blends) of music essential to open-set recognition of well-known musical selections. Consequently, implant

  6. Cognitive load during speech perception in noise: the influence of age, hearing loss, and cognition on the pupil response.

    Science.gov (United States)

    Zekveld, Adriana A; Kramer, Sophia E; Festen, Joost M

    2011-01-01

    The aim of the present study was to evaluate the influence of age, hearing loss, and cognitive ability on the cognitive processing load during listening to speech presented in noise. Cognitive load was assessed by means of pupillometry (i.e., examination of pupil dilation), supplemented with subjective ratings. Two groups of subjects participated: 38 middle-aged participants (mean age = 55 yrs) with normal hearing and 36 middle-aged participants (mean age = 61 yrs) with hearing loss. Using three Speech Reception Threshold (SRT) in stationary noise tests, we estimated the speech-to-noise ratios (SNRs) required for the correct repetition of 50%, 71%, or 84% of the sentences (SRT50%, SRT71%, and SRT84%, respectively). We examined the pupil response during listening: the peak amplitude, the peak latency, the mean dilation, and the pupil response duration. For each condition, participants rated the experienced listening effort and estimated their performance level. Participants also performed the Text Reception Threshold (TRT) test, a test of processing speed, and a word vocabulary test. Data were compared with previously published data from young participants with normal hearing. Hearing loss was related to relatively poor SRTs, and higher speech intelligibility was associated with lower effort and higher performance ratings. For listeners with normal hearing, increasing age was associated with poorer TRTs and slower processing speed but with larger word vocabulary. A multivariate repeated-measures analysis of variance indicated main effects of group and SNR and an interaction effect between these factors on the pupil response. The peak latency was relatively short and the mean dilation was relatively small at low intelligibility levels for the middle-aged groups, whereas the reverse was observed for high intelligibility levels. The decrease in the pupil response as a function of increasing SNR was relatively small for the listeners with hearing loss. Spearman
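
The pupil measures reported here (peak amplitude, peak latency, mean dilation) are computed from the task-evoked trace relative to a pre-stimulus baseline. A minimal sketch with an invented trace; the function and field names are illustrative, not from the study:

```python
import numpy as np

def pupil_metrics(trace, fs, baseline_s=1.0):
    """Baseline-corrected summary measures of a pupil trace sampled at
    fs Hz; the first baseline_s seconds define the baseline."""
    n_base = int(baseline_s * fs)
    baseline = trace[:n_base].mean()
    dilation = trace[n_base:] - baseline  # change relative to baseline
    peak_idx = int(np.argmax(dilation))
    return {
        "peak_amplitude": float(dilation[peak_idx]),
        "peak_latency_s": peak_idx / fs,
        "mean_dilation": float(dilation.mean()),
    }

# Hypothetical trace: 1 s flat baseline, then a 2 s linear rise.
fs = 10
trace = np.concatenate([np.full(10, 5.0), 5.0 + np.linspace(0.0, 0.4, 21)])
metrics = pupil_metrics(trace, fs)
```
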

  8. Preferred listening levels of mobile phone programs when considering subway interior noise.

    Science.gov (United States)

    Yu, Jyaehyoung; Lee, Donguk; Han, Woojae

    2016-01-01

    Today, people listen to music loudly using personal listening devices. Although a majority of studies have reported that the high volumes played on these devices pose a latent risk of hearing problems, there is a lack of studies on "double noise exposures" such as environmental noise plus recreational noise. The present study measured the preferred listening levels of a mobile phone program with subway interior noise for 74 normal-hearing participants in five age groups (ranging from 20s to 60s). The speakers presented the subway interior noise at 73.45 dB, while each subject listened to three application programs [Digital Multimedia Broadcasting (DMB), music, game] for 30 min using a tablet personal computer with an earphone. The participants' earphone volume levels were analyzed using a sound level meter and a 2cc coupler. Overall, the results showed that those in their 20s listened to the three programs significantly louder, with DMB set at significantly higher volume levels than the other programs. Higher volume levels were needed for middle frequencies compared to the lower and higher frequencies. We concluded that the potential risk of noise-induced hearing loss for mobile phone users should be communicated when users listen regularly, even though the volume levels were not high enough to make users feel uncomfortable. When considering individual listening habits on mobile phones, further study to predict total accumulated environmental noise is still needed.
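
Predicting "total accumulated" exposure from simultaneous sources (e.g., subway interior noise plus earphone playback) amounts to an energy sum of the individual levels. The standard dB power sum can be sketched as follows; the example values are illustrative, reusing the subway level from the abstract:

```python
import math

def combined_level_db(levels_db):
    """Energy (power) sum of simultaneous sound levels in dB:
    L_total = 10 * log10(sum over i of 10 ** (L_i / 10))."""
    return 10.0 * math.log10(sum(10.0 ** (level / 10.0) for level in levels_db))

# Two equal-level sources combine to 3 dB above either source alone.
total = combined_level_db([73.45, 73.45])
```
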

  9. Motivation to Address Self-Reported Hearing Problems in Adults with Normal Hearing Thresholds

    Science.gov (United States)

    Alicea, Carly C. M.; Doherty, Karen A.

    2017-01-01

    Purpose: The purpose of this study was to compare the motivation to change in relation to hearing problems in adults with normal hearing thresholds but who report hearing problems and that of adults with a mild-to-moderate sensorineural hearing loss. Factors related to their motivation were also assessed. Method: The motivation to change in…

  10. Combined Electric and Acoustic Stimulation With Hearing Preservation: Effect of Cochlear Implant Low-Frequency Cutoff on Speech Understanding and Perceived Listening Difficulty.

    Science.gov (United States)

    Gifford, René H; Davis, Timothy J; Sunderhaus, Linsey W; Menapace, Christine; Buck, Barbara; Crosson, Jillian; O'Neill, Lori; Beiter, Anne; Segel, Phil

    The primary objective of this study was to assess the effect of electric and acoustic overlap for speech understanding in typical listening conditions using semidiffuse noise. This study used a within-subjects, repeated measures design including 11 experienced adult implant recipients (13 ears) with functional residual hearing in the implanted and nonimplanted ear. The aided acoustic bandwidth was fixed and the low-frequency cutoff for the cochlear implant (CI) was varied systematically. Assessments were completed in the R-SPACE sound-simulation system, which includes a semidiffuse restaurant noise originating from eight loudspeakers placed circumferentially about the subject's head. AzBio sentences were presented at 67 dBA with a signal-to-noise ratio varying between +10 and 0 dB, determined individually to yield approximately 50 to 60% correct for the CI-alone condition with full CI bandwidth. Listening conditions for all subjects included CI alone, bimodal (CI + contralateral hearing aid), and bilateral-aided electric and acoustic stimulation (EAS; CI + bilateral hearing aid). Low-frequency cutoffs both below and above the original "clinical software recommendation" frequency were tested for all patients, in all conditions. Subjects estimated listening difficulty for all conditions using listener ratings based on a visual analog scale. Three primary findings were that (1) there was statistically significant benefit of preserved acoustic hearing in the implanted ear for most overlap conditions, (2) the default clinical software recommendation rarely yielded the highest level of speech recognition (1 of 13 ears), and (3) greater EAS overlap than that provided by the clinical recommendation yielded significant improvements in speech understanding. For standard-electrode CI recipients with preserved hearing, spectral overlap of acoustic and electric stimuli yielded significantly better speech understanding and less listening effort in a laboratory-based, restaurant
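Setting a signal-to-noise ratio such as the +10 to 0 dB range used here amounts to scaling the noise relative to the fixed 67 dBA speech level. A hedged sketch of that calculation; `scale_to_snr` and its arguments are hypothetical names, not the study's R-SPACE calibration routine:

```python
import math

def scale_to_snr(speech_rms, noise_rms, target_snr_db):
    """Gain for the noise signal so that the resulting SNR, defined as
    20*log10(speech_rms / (gain * noise_rms)), equals target_snr_db."""
    return speech_rms / (noise_rms * 10.0 ** (target_snr_db / 20.0))

# At 0 dB SNR the scaled noise RMS must equal the speech RMS:
gain = scale_to_snr(1.0, 0.5, 0.0)
print(gain)  # 2.0
```

Lowering the target SNR from +10 toward 0 dB simply increases this gain, making the restaurant noise progressively more intense relative to the sentences.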

  11. Perception of contrastive bi-syllabic lexical stress in unaccented and accented words by younger and older listeners

    Science.gov (United States)

    Gordon-Salant, Sandra; Yeni-Komshian, Grace H.; Pickett, Erin J.; Fitzgibbons, Peter J.

    2016-01-01

    This study examined the ability of older and younger listeners to perceive contrastive syllable stress in unaccented and Spanish-accented cognate bi-syllabic English words. Younger listeners with normal hearing, older listeners with normal hearing, and older listeners with hearing impairment judged recordings of words that contrasted in stress that conveyed a noun or verb form (e.g., CONduct/conDUCT), using two paradigms differing in the amount of semantic support. The stimuli were spoken by four speakers: one native English speaker and three Spanish-accented speakers (one moderately and two mildly accented). The results indicate that all listeners showed the lowest accuracy scores in responding to the most heavily accented speaker and the highest accuracy in judging the productions of the native English speaker. The two older groups showed lower accuracy in judging contrastive lexical stress than the younger group, especially for verbs produced by the most accented speaker. This general pattern of performance was observed in the two experimental paradigms, although performance was generally lower in the paradigm without semantic support. The findings suggest that age-related difficulty in adjusting to deviations in contrastive bi-syllabic lexical stress produced with a Spanish accent may be an important factor limiting perception of accented English by older people. PMID:27036250

  12. A listening test system for automotive audio

    DEFF Research Database (Denmark)

    Christensen, Flemming; Martin, Geoff; Minnaar, Pauli

    2005-01-01

    A selection procedure was devised in order to select listeners for experiments in which their main task will be to judge multi-channel reproduced sound. 91 participants filled in a web-based questionnaire. 78 of them took part in an assessment of their hearing thresholds, their spatial hearing......, and their verbal production abilities. The listeners displayed large individual differences in their performance. 40 subjects were selected based on the test results. The self-assessed listening habits and experience in the web questionnaire could not predict the results of the selection procedure. Further......, the hearing thresholds did not correlate with the spatial-hearing test. This leads to the conclusion that task-specific performance tests might be the preferable means of selecting a listening panel....

  13. Music Listening in Electric Hearing -designing and testing two novel EEG paradigms for measuring music perception in cochlear implant users

    DEFF Research Database (Denmark)

    Petersen, Bjørn; Friis Andersen, Anne Sofie; Højlund, Andreas

    With the considerable advances made in cochlear implant (CI) technology with regards to speech perception, it is natural that many CI users express hopes of being able to enjoy music. For the majority of CI users, however, the music experience is disappointing and their discrimination of musical...... features as well as self-reported levels of music enjoyment is significantly lower than normal-hearing (NH) listeners (1,2). Therefore, it is important that ongoing efforts are made to improve the quality of music through a CI. To aid in this process, the aim of this study is to validate two new musical...

  14. Comparison of reading comprehension and working memory in hearing-impaired and normal-hearing children

    Directory of Open Access Journals (Sweden)

    Mohammad Rezaei

    2013-03-01

    Full Text Available Background and Aim: Reading is the most important human need for learning. In normal-hearing people, working memory is a predictor of reading comprehension. In this study the relationship between working memory and reading comprehension skills was studied in hearing-impaired children, and then compared with the normal-hearing group. Methods: This was a descriptive-analytic study. The working memory and reading comprehension skills of 18 (8 male, 10 female) severe hearing-impaired children in year five of exceptional schools were compared by means of a reading test with 18 hearing children as a control group. The subjects in the control group were of the same gender and educational level as the sample group. Results: The children with hearing loss performed similarly to the normal-hearing children in tasks related to auditory-verbal memory of sounds (reverse), visual-verbal memory of letters, and visual-verbal memory of pictures. However, they showed lower levels of performance in reading comprehension (p<0.001). Moreover, no significant relationship was observed between working memory and reading comprehension skills. Conclusion: Findings indicated that children with hearing loss have a significant impairment in reading comprehension skill. Impairment in language knowledge and vocabulary may be the main cause of poor reading comprehension in these children. In hearing-impaired children working memory is not a strong predictor of reading comprehension.

  15. The influence of age, hearing, and working memory on the speech comprehension benefit derived from an automatic speech recognition system.

    Science.gov (United States)

    Zekveld, Adriana A; Kramer, Sophia E; Kessens, Judith M; Vlaming, Marcel S M G; Houtgast, Tammo

    2009-04-01

    The aim of the current study was to examine whether partly incorrect subtitles that are automatically generated by an Automatic Speech Recognition (ASR) system, improve speech comprehension by listeners with hearing impairment. In an earlier study (Zekveld et al. 2008), we showed that speech comprehension in noise by young listeners with normal hearing improves when presenting partly incorrect, automatically generated subtitles. The current study focused on the effects of age, hearing loss, visual working memory capacity, and linguistic skills on the benefit obtained from automatically generated subtitles during listening to speech in noise. In order to investigate the effects of age and hearing loss, three groups of participants were included: 22 young persons with normal hearing (YNH, mean age = 21 years), 22 middle-aged adults with normal hearing (MA-NH, mean age = 55 years) and 30 middle-aged adults with hearing impairment (MA-HI, mean age = 57 years). The benefit from automatic subtitling was measured by Speech Reception Threshold (SRT) tests (Plomp & Mimpen, 1979). Both unimodal auditory and bimodal audiovisual SRT tests were performed. In the audiovisual tests, the subtitles were presented simultaneously with the speech, whereas in the auditory test, only speech was presented. The difference between the auditory and audiovisual SRT was defined as the audiovisual benefit. Participants additionally rated the listening effort. We examined the influences of ASR accuracy level and text delay on the audiovisual benefit and the listening effort using a repeated measures General Linear Model analysis. In a correlation analysis, we evaluated the relationships between age, auditory SRT, visual working memory capacity and the audiovisual benefit and listening effort. The automatically generated subtitles improved speech comprehension in noise for all ASR accuracies and delays covered by the current study. Higher ASR accuracy levels resulted in more benefit obtained
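The SRT procedure of Plomp & Mimpen (1979) cited above adaptively varies the SNR to find the level giving 50% sentence intelligibility. A simplified one-up/one-down sketch of that idea; the start level, step size, trial count, and simulated-listener interface are illustrative assumptions, not the published parameters:

```python
def track_srt(respond_correct, start_snr=-4.0, step=2.0, n_trials=13):
    """One-up/one-down SNR track converging on 50% sentence intelligibility.

    `respond_correct(snr)` stands in for the listener: True if the sentence
    was repeated correctly at that SNR.
    """
    snr, history = start_snr, []
    for _ in range(n_trials):
        history.append(snr)
        # correct -> harder (lower SNR); incorrect -> easier (higher SNR)
        snr += -step if respond_correct(snr) else step
    tail = history[4:]          # discard the initial approach trials
    return sum(tail) / len(tail)

# Simulated listener who is correct whenever SNR >= -5 dB:
print(track_srt(lambda snr: snr >= -5.0))
```

The audiovisual benefit reported in the study is then just the difference between the SRT tracked with subtitles present and the SRT tracked with speech alone.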

  16. Evidence of across-channel processing for spectral-ripple discrimination in cochlear implant listeners.

    Science.gov (United States)

    Won, Jong Ho; Jones, Gary L; Drennan, Ward R; Jameyson, Elyse M; Rubinstein, Jay T

    2011-10-01

    Spectral-ripple discrimination has been used widely for psychoacoustical studies in normal-hearing, hearing-impaired, and cochlear implant listeners. The present study investigated the perceptual mechanism for spectral-ripple discrimination in cochlear implant listeners. The main goal of this study was to determine whether cochlear implant listeners use a local intensity cue or global spectral shape for spectral-ripple discrimination. The effect of electrode separation on spectral-ripple discrimination was also evaluated. Results showed that it is highly unlikely that cochlear implant listeners depend on a local intensity cue for spectral-ripple discrimination. A phenomenological model of spectral-ripple discrimination, as an "ideal observer," showed that a perceptual mechanism based on discrimination of a single intensity difference cannot account for performance of cochlear implant listeners. Spectral modulation depth and electrode separation were found to significantly affect spectral-ripple discrimination. The evidence supports the hypothesis that spectral-ripple discrimination involves integrating information from multiple channels. © 2011 Acoustical Society of America
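Spectral-ripple stimuli impose a sinusoidal (in dB) envelope on a log-frequency axis, and listeners discriminate a phase shift of that ripple rather than an overall level change. A minimal sketch of such an envelope; the ripple density and depth values are illustrative, not those of the study:

```python
import math

def spectral_ripple_envelope(freqs_hz, ripples_per_octave=1.0,
                             depth_db=13.0, phase=0.0):
    """Sinusoidal-in-dB spectral envelope on a log-frequency axis.

    A discrimination pair differs only in `phase` (standard vs.
    ripple-inverted), so no single-channel intensity cue distinguishes them.
    """
    return [(depth_db / 2.0)
            * math.sin(2.0 * math.pi * ripples_per_octave * math.log2(f) + phase)
            for f in freqs_hz]
```

Because the inverted stimulus has the same peaks and valleys merely relocated along the spectrum, telling the pair apart requires comparing levels across channels, which is the across-channel integration the study argues for.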

  17. Understanding native Russian listeners' errors on an English word recognition test: model-based analysis of phoneme confusion.

    Science.gov (United States)

    Shi, Lu-Feng; Morozova, Natalia

    2012-08-01

    Word recognition is a basic component in a comprehensive hearing evaluation, but data are lacking for listeners speaking two languages. This study obtained such data for Russian natives in the US and analysed the data using the perceptual assimilation model (PAM) and speech learning model (SLM). Listeners were randomly presented 200 NU-6 words in quiet. Listeners responded verbally and in writing. Performance was scored on words and phonemes (word-initial consonants, vowels, and word-final consonants). Seven normal-hearing, adult monolingual English natives (NM), 16 English-dominant (ED), and 15 Russian-dominant (RD) Russian natives participated. ED and RD listeners differed significantly in their language background. Consistent with the SLM, NM outperformed ED listeners and ED outperformed RD listeners, whether responses were scored on words or phonemes. NM and ED listeners shared similar phoneme error patterns, whereas RD listeners' errors had unique patterns that could be largely understood via the PAM. RD listeners had particular difficulty differentiating vowel contrasts /i-I/, /æ-ε/, and /ɑ-Λ/, word-initial consonant contrasts /p-h/ and /b-f/, and word-final contrasts /f-v/. Both first-language phonology and second-language learning history affect word and phoneme recognition. Current findings may help clinicians differentiate word recognition errors due to language background from hearing pathologies.

  18. Music and hearing aids.

    Science.gov (United States)

    Madsen, Sara M K; Moore, Brian C J

    2014-10-31

    The signal processing and fitting methods used for hearing aids have mainly been designed to optimize the intelligibility of speech. Little attention has been paid to the effectiveness of hearing aids for listening to music. Perhaps as a consequence, many hearing-aid users complain that they are not satisfied with their hearing aids when listening to music. This issue inspired the Internet-based survey presented here. The survey was designed to identify the nature and prevalence of problems associated with listening to live and reproduced music with hearing aids. Responses from 523 hearing-aid users to 21 multiple-choice questions are presented and analyzed, and the relationships between responses to questions regarding music and questions concerned with information about the respondents, their hearing aids, and their hearing loss are described. Large proportions of the respondents reported that they found their hearing aids to be helpful for listening to both live and reproduced music, although less so for the former. The survey also identified problems such as distortion, acoustic feedback, insufficient or excessive gain, unbalanced frequency response, and reduced tone quality. The results indicate that the enjoyment of listening to music with hearing aids could be improved by an increase of the input and output dynamic range, extension of the low-frequency response, and improvement of feedback cancellation and automatic gain control systems. © The Author(s) 2014.

  19. Music and Hearing Aids

    Directory of Open Access Journals (Sweden)

    Sara M. K. Madsen

    2014-10-01

    Full Text Available The signal processing and fitting methods used for hearing aids have mainly been designed to optimize the intelligibility of speech. Little attention has been paid to the effectiveness of hearing aids for listening to music. Perhaps as a consequence, many hearing-aid users complain that they are not satisfied with their hearing aids when listening to music. This issue inspired the Internet-based survey presented here. The survey was designed to identify the nature and prevalence of problems associated with listening to live and reproduced music with hearing aids. Responses from 523 hearing-aid users to 21 multiple-choice questions are presented and analyzed, and the relationships between responses to questions regarding music and questions concerned with information about the respondents, their hearing aids, and their hearing loss are described. Large proportions of the respondents reported that they found their hearing aids to be helpful for listening to both live and reproduced music, although less so for the former. The survey also identified problems such as distortion, acoustic feedback, insufficient or excessive gain, unbalanced frequency response, and reduced tone quality. The results indicate that the enjoyment of listening to music with hearing aids could be improved by an increase of the input and output dynamic range, extension of the low-frequency response, and improvement of feedback cancellation and automatic gain control systems.

  20. Discotheques and the risk of hearing loss among youth: Risky listening behavior and its psychosocial correlates

    NARCIS (Netherlands)

    Vogel, I.; Brug, J.; Ploeg, C.P.B. van der; Raat, H.

    2010-01-01

    There is an increasing population at risk of hearing loss and tinnitus due to increasing high-volume music listening. To inform prevention strategies and interventions, this study aimed to identify important protection motivation theory-based constructs as well as the constructs 'consideration of

  1. The relationship between the intelligibility of time-compressed speech and speech in noise in young and elderly listeners

    Science.gov (United States)

    Versfeld, Niek J.; Dreschler, Wouter A.

    2002-01-01

    A conventional measure to determine the ability to understand speech in noisy backgrounds is the so-called speech reception threshold (SRT) for sentences. It yields the signal-to-noise ratio (in dB) for which half of the sentences are correctly perceived. The SRT defines to what degree speech must be audible to a listener in order to become just intelligible. There are indications that elderly listeners have greater difficulty in understanding speech in adverse listening conditions than young listeners. This may be partly due to the differences in hearing sensitivity (presbycusis), hence audibility, but other factors, such as temporal acuity, may also play a significant role. A potential measure for the temporal acuity may be the threshold to which speech can be accelerated, or compressed in time. A new test is introduced where the speech rate is varied adaptively. In analogy to the SRT, the time-compression threshold (or TCT) then is defined as the speech rate (expressed in syllables per second) for which half of the sentences are correctly perceived. In experiment I, the TCT test is introduced and normative data are provided. In experiment II, four groups of subjects (young and elderly normal-hearing and hearing-impaired subjects) participated, and the SRTs in stationary and fluctuating speech-shaped noise were determined, as well as the TCT. The results show that the SRT in fluctuating noise and the TCT are highly correlated. All tests indicate that, even after correction for hearing loss, elderly normal-hearing subjects perform worse than young normal-hearing subjects. The results indicate that the use of the TCT test or the SRT test in fluctuating noise is preferred over the SRT test in stationary noise.
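The TCT test described above adapts the speech rate rather than the SNR. Because rate changes are naturally proportional, a sketch might step by a constant factor and average geometrically; all names and parameter values here are illustrative assumptions, not the published procedure:

```python
import math

def track_tct(understood, start_rate=5.0, factor=1.25, n_trials=12):
    """Adaptive speech-rate track for the time-compression threshold (TCT).

    The rate (syllables/s) is multiplied by `factor` after a correct
    response and divided by it after an error; the threshold estimate is
    the geometric mean of the later trials.
    """
    rate, rates = start_rate, []
    for _ in range(n_trials):
        rates.append(rate)
        rate = rate * factor if understood(rate) else rate / factor
    tail = rates[4:]            # keep the oscillating region of the track
    return math.exp(sum(math.log(r) for r in tail) / len(tail))
```

As with the SRT, the track oscillates around the rate at which half of the sentences are correctly perceived, which is exactly the TCT definition given in the abstract.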

  2. Effects of single-channel phonemic compression schemes on the understanding of speech by hearing-impaired listeners

    NARCIS (Netherlands)

    Goedegebure, A.; Hulshof, M.; Maas, R. J.; Dreschler, W. A.; Verschuure, H.

    2001-01-01

    The effect of digital processing on speech intelligibility was studied in hearing-impaired listeners with moderate to severe high-frequency losses. The amount of smoothed phonemic compression in a high-frequency channel was varied using wide-band control. Two alternative systems were tested to

  3. Binaural hearing with electrical stimulation

    Science.gov (United States)

    Kan, Alan; Litovsky, Ruth Y.

    2014-01-01

    Bilateral cochlear implantation is becoming a standard of care in many clinics. While much benefit has been shown through bilateral implantation, patients who have bilateral cochlear implants (CIs) still do not perform as well as normal hearing listeners in sound localization and understanding speech in noisy environments. This difference in performance can arise from a number of different factors, including the areas of hardware and engineering, surgical precision and pathology of the auditory system in deaf persons. While surgical precision and individual pathology are factors that are beyond careful control, improvements can be made in the areas of clinical practice and the engineering of binaural speech processors. These improvements should be grounded in a good understanding of the sensitivities of bilateral CI patients to the acoustic binaural cues that are important to normal hearing listeners for sound localization and speech in noise understanding. To this end, we review the current state-of-the-art in the understanding of the sensitivities of bilateral CI patients to binaural cues in electric hearing, and highlight the important issues and challenges as they relate to clinical practice and the development of new binaural processing strategies. PMID:25193553

  4. Evidence of across-channel processing for spectral-ripple discrimination in cochlear implant listeners

    Science.gov (United States)

    Ho Won, Jong; Jones, Gary L.; Drennan, Ward R.; Jameyson, Elyse M.; Rubinstein, Jay T.

    2011-01-01

    Spectral-ripple discrimination has been used widely for psychoacoustical studies in normal-hearing, hearing-impaired, and cochlear implant listeners. The present study investigated the perceptual mechanism for spectral-ripple discrimination in cochlear implant listeners. The main goal of this study was to determine whether cochlear implant listeners use a local intensity cue or global spectral shape for spectral-ripple discrimination. The effect of electrode separation on spectral-ripple discrimination was also evaluated. Results showed that it is highly unlikely that cochlear implant listeners depend on a local intensity cue for spectral-ripple discrimination. A phenomenological model of spectral-ripple discrimination, as an “ideal observer,” showed that a perceptual mechanism based on discrimination of a single intensity difference cannot account for performance of cochlear implant listeners. Spectral modulation depth and electrode separation were found to significantly affect spectral-ripple discrimination. The evidence supports the hypothesis that spectral-ripple discrimination involves integrating information from multiple channels. PMID:21973363

  5. Visual Cues Contribute Differentially to Audiovisual Perception of Consonants and Vowels in Improving Recognition and Reducing Cognitive Demands in Listeners With Hearing Impairment Using Hearing Aids.

    Science.gov (United States)

    Moradi, Shahram; Lidestam, Björn; Danielsson, Henrik; Ng, Elaine Hoi Ning; Rönnberg, Jerker

    2017-09-18

    We sought to examine the contribution of visual cues in audiovisual identification of consonants and vowels, in terms of isolation points (the shortest time required for correct identification of a speech stimulus), accuracy, and cognitive demands, in listeners with hearing impairment using hearing aids. The study comprised 199 participants with hearing impairment (mean age = 61.1 years) with bilateral, symmetrical, mild-to-severe sensorineural hearing loss. Gated Swedish consonants and vowels were presented aurally and audiovisually to participants. Linear amplification was adjusted for each participant to assure audibility. The reading span test was used to measure participants' working memory capacity. Audiovisual presentation resulted in shortened isolation points and improved accuracy for consonants and vowels relative to auditory-only presentation. This benefit was more evident for consonants than vowels. In addition, correlations and subsequent analyses revealed that listeners with higher scores on the reading span test identified both consonants and vowels earlier in auditory-only presentation, but only vowels (not consonants) in audiovisual presentation. Consonants and vowels differed in terms of the benefits afforded from their associative visual cues, as indicated by the degree of audiovisual benefit and reduction in cognitive demands linked to the identification of consonants and vowels presented audiovisually.
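In a gating paradigm like the one above, the isolation point is the shortest gate from which identification is correct and remains correct at all longer gates. A small illustrative implementation under that reading; the aligned-lists data layout is an assumption, not the study's scoring software:

```python
def isolation_point(gate_ms, responses, target):
    """Shortest gate duration from which the response equals `target` and
    stays correct for every longer gate; None if identification never
    settles.  `gate_ms` and `responses` are aligned lists."""
    ip = None
    for dur, resp in zip(gate_ms, responses):
        if resp == target:
            if ip is None:
                ip = dur      # candidate isolation point
        else:
            ip = None         # relapse: restart the search
    return ip

# Listener settles on /d/ from the 200-ms gate onward:
print(isolation_point([100, 200, 300, 400], ['b', 'd', 'd', 'd'], 'd'))  # 200
```

Shorter isolation points under audiovisual presentation are then a direct measure of the benefit that seeing the talker provides.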

  6. Discotheques and the Risk of Hearing Loss among Youth: Risky Listening Behavior and Its Psychosocial Correlates

    Science.gov (United States)

    Vogel, Ineke; Brug, Johannes; Van Der Ploeg, Catharina P. B.; Raat, Hein

    2010-01-01

    There is an increasing population at risk of hearing loss and tinnitus due to increasing high-volume music listening. To inform prevention strategies and interventions, this study aimed to identify important protection motivation theory-based constructs as well as the constructs "consideration of future consequences" and "habit…

  7. Comparison of Different Levels of Reading Comprehension between Hearing-Impaired and Normal-Hearing Students

    Directory of Open Access Journals (Sweden)

    Azam Sharifi

    2011-12-01

    Full Text Available Background and Aim: Reading skill is one of the most important necessities of students' learning in everyday life. This skill refers to the ability to comprehend, interpret, and draw conclusions from texts and to grasp the meaning of the message being conveyed. Educational development in any student has a direct relation with the ability of comprehension. This study is designed to investigate the effects of hearing loss on reading comprehension in hearing-impaired students compared to normal-hearing ones. Methods: Seventeen hearing-impaired students in the 4th year of primary exceptional schools in Karaj, Robatkarim and Shahriyar, Iran, were enrolled in this cross-sectional study. Seventeen normal-hearing students were randomly selected from ordinary schools next to exceptional ones as a control group. They were compared for different levels of reading comprehension using the international standard booklet (PIRLS 2001). Results: There was a significant difference in performance between hearing-impaired and normal-hearing students in different levels of reading comprehension (p<0.05). Conclusion: Hearing loss has negative effects on different levels of reading comprehension, so in exceptional centers, reconsideration of educational planning in order to direct education from memorizing to comprehension and deeper layers of learning seems necessary.

  8. Narrative competence among hearing-impaired and normal-hearing children: analytical cross-sectional study

    Directory of Open Access Journals (Sweden)

    Alexandra Dezani Soares

    Full Text Available CONTEXT AND OBJECTIVE: Oral narrative is a means of language development assessment. However, standardized data for deaf patients are scarce. The aim here was to compare the use of narrative competence between hearing-impaired and normal-hearing children. DESIGN AND SETTING: Analytical cross-sectional study at the Department of Speech-Language and Hearing Sciences, Universidade Federal de São Paulo. METHODS: Twenty-one moderately to profoundly bilaterally hearing-impaired children (cases) and 21 normal-hearing children without language abnormalities (controls), matched according to sex, age, schooling level and school type, were studied. A board showing pictures in a temporally logical sequence was presented to each child, to elicit a narrative, and the child's performance relating to narrative structure and cohesion was measured. The frequencies of the variables, their associations (Mann-Whitney test), and their 95% confidence intervals were analyzed. RESULTS: The deaf subjects showed poorer performance regarding narrative structure, use of connectives, cohesion measurements and general punctuation (P < 0.05). There were no differences in the number of propositions elaborated or in referent specification between the two groups. The deaf children produced a higher proportion of orientation-related propositions (P = 0.001) and lower proportions of propositions relating to complicating actions (P = 0.015) and character reactions (P = 0.005). CONCLUSION: Hearing-impaired children have abnormalities in different aspects of language, involving form, content and use, in relation to their normal-hearing peers. Narrative competence was also associated with the children's ages and the school type.

  9. Seeing the Talker's Face Improves Free Recall of Speech for Young Adults With Normal Hearing but Not Older Adults With Hearing Loss.

    Science.gov (United States)

    Rudner, Mary; Mishra, Sushmit; Stenfelt, Stefan; Lunner, Thomas; Rönnberg, Jerker

    2016-06-01

    Seeing the talker's face improves speech understanding in noise, possibly releasing resources for cognitive processing. We investigated whether it improves free recall of spoken two-digit numbers. Twenty younger adults with normal hearing and 24 older adults with hearing loss listened to and subsequently recalled lists of 13 two-digit numbers, with alternating male and female talkers. Lists were presented in quiet as well as in stationary and speech-like noise at a signal-to-noise ratio giving approximately 90% intelligibility. Amplification compensated for loss of audibility. Seeing the talker's face improved free recall performance for the younger but not the older group. Poorer performance in background noise was contingent on individual differences in working memory capacity. The effect of seeing the talker's face did not differ in quiet and noise. We have argued that the absence of an effect of seeing the talker's face for older adults with hearing loss may be due to modulation of audiovisual integration mechanisms caused by an interaction between task demands and participant characteristics. In particular, we suggest that executive task demands and interindividual executive skills may play a key role in determining the benefit of seeing the talker's face during a speech-based cognitive task.

  10. The acoustic and perceptual cues affecting melody segregation for listeners with a cochlear implant.

    Directory of Open Access Journals (Sweden)

    Jeremy Marozeau

    2013-11-01

    Full Text Available Our ability to listen selectively to single sound sources in complex auditory environments is termed 'auditory stream segregation.' This ability is affected by peripheral disorders such as hearing loss, as well as plasticity in central processing such as occurs with musical training. Brain plasticity induced by musical training can enhance the ability to segregate sound, leading to improvements in a variety of auditory abilities. The melody segregation ability of 12 cochlear-implant recipients was tested using a new method to determine the perceptual distance needed to segregate a simple 4-note melody from a background of interleaved random-pitch distractor notes. In experiment 1, participants rated the difficulty of segregating the melody from distractor notes. Four physical properties of the distractor notes were changed. In experiment 2, listeners were asked to rate the dissimilarity between melody patterns whose notes differed on the four physical properties simultaneously. Multidimensional scaling analysis transformed the dissimilarity ratings into perceptual distances. Regression between physical and perceptual cues then derived the minimal perceptual distance needed to segregate the melody. The most efficient streaming cue for CI users was loudness. Compared with normal-hearing listeners without musical backgrounds, CI users needed a greater difference on the perceptual dimension correlated with the temporal envelope for stream segregation. No differences in streaming efficiency were found between the perceptual dimensions linked to the F0 and the spectral envelope. Combined with our previous results in normally-hearing musicians and non-musicians, the results show that differences in training as well as differences in peripheral auditory processing (hearing impairment and the use of a hearing device) influence the way that listeners use different acoustic cues for segregating interleaved musical streams.
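The final analysis step described above regresses perceptual distance onto physical cue differences to find the minimal distance needed for segregation. A minimal ordinary-least-squares sketch of such a mapping; the function name and example data are illustrative, not the study's analysis:

```python
def fit_line(x, y):
    """Ordinary least squares: slope and intercept of y regressed on x.

    Here x would be a physical cue difference between melody and distractor
    notes (e.g., level in dB) and y the corresponding perceptual distance
    from multidimensional scaling.
    """
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    slope = sxy / sxx
    return slope, my - slope * mx

# A perfect line y = 2x + 1 is recovered exactly:
print(fit_line([0.0, 1.0, 2.0], [1.0, 3.0, 5.0]))  # (2.0, 1.0)
```

Inverting the fitted line at the difficulty rating that marks successful segregation then yields the minimal physical difference a listener needs on that cue.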

  11. Listening to Red

    Directory of Open Access Journals (Sweden)

    Sinazo Mtshemla

    Full Text Available Following a distinction John Mowitt draws between hearing (and phonics) and listening (and sonics), this article argues that the dominant notion of listening to sound was determined by the disciplinary framework of South African history and by the deployment of a cinematic documentary apparatus, both of which have served to disable the act of listening. The conditions of this hearing, and a deafness to a reduced or bracketed listening (Chion via Schaeffer) that would enable us to think the post in post-apartheid differently, are thus at the centre of our concerns here. We stage a series of screenings of expected possible soundtracks for Simon Gush's film and installation Red, simultaneously tracking the ways that sound - and particularly music and dialogue - can be shown to hold a certain way of thinking both the political history of South Africa and the politics of South African history. We conclude by listening more closely to hiss and murmur in the soundtrack to Red and suggest that this has major implications for considering ways of thinking and knowing.

  12. Speech understanding in noise with integrated in-ear and muff-style hearing protection systems

    Directory of Open Access Journals (Sweden)

    Sharon M Abel

    2011-01-01

    Full Text Available Integrated hearing protection systems are designed to enhance free field and radio communications during military operations while protecting against the damaging effects of high-level noise exposure. A study was conducted to compare the effect of increasing the radio volume on the intelligibility of speech over the radios of two candidate systems, in-ear and muff-style, in 85-dBA speech babble noise presented free field. Twenty normal-hearing, English-fluent subjects, half male and half female, were tested in same gender pairs. Alternating as talker and listener, their task was to discriminate consonant-vowel-consonant syllables that contrasted either the initial or final consonant. Percent correct consonant discrimination increased with increases in the radio volume. At the highest volume, subjects achieved 79% with the in-ear device but only 69% with the muff-style device, averaged across the gender of listener/talker pairs and consonant position. Although there was no main effect of gender, female listener/talkers showed a 10% advantage for the final consonant and male listener/talkers showed a 1% advantage for the initial consonant. These results indicate that normal hearing users can achieve reasonably high radio communication scores with integrated in-ear hearing protection in moderately high-level noise that provides both energetic and informational masking. The adequacy of the range of available radio volumes for users with hearing loss has yet to be determined.

  13. Chinese Writing of Deaf or Hard-of-Hearing Students and Normal-Hearing Peers from Complex Network Approach.

    Science.gov (United States)

    Jin, Huiyuan; Liu, Haitao

    2016-01-01

    Deaf or hard-of-hearing individuals usually face a greater challenge to learn to write than their normal-hearing counterparts. Due to the limitations of traditional research methods focusing on microscopic linguistic features, a holistic characterization of the writing linguistic features of these language users is lacking. This study attempts to fill this gap by adopting the methodology of linguistic complex networks. Two syntactic dependency networks are built in order to compare the macroscopic linguistic features of deaf or hard-of-hearing students and those of their normal-hearing peers. One is transformed from a treebank of writing produced by Chinese deaf or hard-of-hearing students, and the other from a treebank of writing produced by their Chinese normal-hearing counterparts. Two major findings are obtained through comparison of the statistical features of the two networks. On the one hand, both linguistic networks display small-world and scale-free network structures, but the network of the normal-hearing students exhibits a more power-law-like degree distribution. Relevant network measures show significant differences between the two linguistic networks. On the other hand, deaf or hard-of-hearing students tend to have a lower language proficiency level in both syntactic and lexical aspects. The rigid use of function words and a lower vocabulary richness of the deaf or hard-of-hearing students may partially account for the observed differences.

  14. Aspects of Music with Cochlear Implants – Music Listening Habits and Appreciation in Danish Cochlear Implant Users

    DEFF Research Database (Denmark)

    Petersen, Bjørn; Hansen, Mads; Sørensen, Stine Derdau

    Cochlear implant users differ significantly from their normal hearing peers when it comes to perception of music. Several studies have shown that structural features – such as rhythm, timbre, and pitch – are transmitted less accurately through an implant. However, we cannot predict personal enjoyment of music solely as a function of accuracy of perception. But can music be pleasant with a cochlear implant at all? Our aim here was to gather information of both music enjoyment and listening habits before the onset of hearing loss and post-operation from a large, representative sample of Danish...... music less post-implantation than prior to their hearing loss. Nevertheless, a large majority of CI listeners either prefer music over not hearing music at all or find music as pleasant as they recall it before their hearing loss, or more so....

  15. Relations Between Self-Reported Daily-Life Fatigue, Hearing Status, and Pupil Dilation During a Speech Perception in Noise Task.

    Science.gov (United States)

    Wang, Yang; Naylor, Graham; Kramer, Sophia E; Zekveld, Adriana A; Wendt, Dorothea; Ohlenforst, Barbara; Lunner, Thomas

    People with hearing impairment are likely to experience higher levels of fatigue because of effortful listening in daily communication. This hearing-related fatigue might not only constrain their work performance but also result in withdrawal from major social roles. Therefore, it is important to understand the relationships between fatigue, listening effort, and hearing impairment by examining the evidence from both subjective and objective measurements. The aim of the present study was to investigate these relationships by assessing subjectively measured daily-life fatigue (self-report questionnaires) and objectively measured listening effort (pupillometry) in both normally hearing and hearing-impaired participants. Twenty-seven normally hearing and 19 age-matched participants with hearing impairment were included in this study. Two self-report fatigue questionnaires, the Need For Recovery and the Checklist Individual Strength, were given to the participants before the test session to evaluate subjectively measured daily fatigue. Participants were asked to perform a speech reception threshold test with a single-talker masker targeting a 50% correct response criterion. The pupil diameter was recorded during speech processing, and we used peak pupil dilation (PPD) as the main outcome measure of the pupillometry. No correlation was found between subjectively measured fatigue and hearing acuity, nor was a group difference found between the normally hearing and the hearing-impaired participants on the fatigue scores. A significant negative correlation was found between self-reported fatigue and PPD. A similar correlation was also found between the Speech Intelligibility Index required for 50% correct and PPD. Multiple regression analysis showed that factors representing "hearing acuity" and "self-reported fatigue" had equal and independent associations with the PPD during the speech-in-noise test. Less fatigue and better hearing acuity were associated with a larger pupil
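The peak pupil dilation (PPD) outcome used in this record is conventionally computed as the maximum pupil diameter during a trial relative to a pre-stimulus baseline. A minimal illustrative sketch follows; the synthetic trace, sampling rate, and one-second baseline window are assumptions for demonstration, not the study's processing pipeline:

```python
import numpy as np

def peak_pupil_dilation(trace: np.ndarray, fs: float, baseline_s: float = 1.0) -> float:
    """Peak pupil dilation: maximum diameter after the baseline window,
    minus the mean diameter within the baseline window (illustrative definition)."""
    n_base = int(baseline_s * fs)
    baseline = trace[:n_base].mean()
    return float(np.max(trace[n_base:]) - baseline)

# Synthetic pupil trace: ~4 mm resting diameter with a dilation peak mid-trial.
fs = 60  # Hz, assumed eye-tracker sampling rate
t = np.arange(0.0, 5.0, 1.0 / fs)
trace = 4.0 + 0.3 * np.exp(-((t - 2.5) ** 2) / 0.2)

ppd = peak_pupil_dilation(trace, fs)
print(round(ppd, 2))
```

Real pupillometry pipelines add blink interpolation and trial averaging before this step; the baseline-relative peak shown here is only the final outcome measure.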

  16. The MOC Reflex during Active Listening to Speech

    Science.gov (United States)

    Garinis, Angela C.; Glattke, Theodore; Cone, Barbara K.

    2011-01-01

    Purpose: The purpose of this study was to test the hypothesis that active listening to speech would increase medial olivocochlear (MOC) efferent activity for the right vs. the left ear. Method: Click-evoked otoacoustic emissions (CEOAEs) were evoked by 60-dB p.e. SPL clicks in 13 normally hearing adults in 4 test conditions for each ear: (a) in…

  17. Binaural hearing with electrical stimulation.

    Science.gov (United States)

    Kan, Alan; Litovsky, Ruth Y

    2015-04-01

    Bilateral cochlear implantation is becoming a standard of care in many clinics. While much benefit has been shown through bilateral implantation, patients who have bilateral cochlear implants (CIs) still do not perform as well as normal hearing listeners in sound localization and understanding speech in noisy environments. This difference in performance can arise from a number of different factors, including the areas of hardware and engineering, surgical precision, and pathology of the auditory system in deaf persons. While surgical precision and individual pathology are factors that are difficult to control, improvements can be made in the areas of clinical practice and the engineering of binaural speech processors. These improvements should be grounded in a good understanding of the sensitivities of bilateral CI patients to the acoustic binaural cues that are important to normal hearing listeners for sound localization and speech-in-noise understanding. To this end, we review the current state-of-the-art in the understanding of the sensitivities of bilateral CI patients to binaural cues in electric hearing, and highlight the important issues and challenges as they relate to clinical practice and the development of new binaural processing strategies. This article is part of a Special Issue. Copyright © 2014 Elsevier B.V. All rights reserved.

  18. Chinese Writing of Deaf or Hard-of-hearing Students and Normal-hearing Peers from Complex Network Approach

    Directory of Open Access Journals (Sweden)

    Huiyuan Jin

    2016-11-01

    Full Text Available Deaf or hard-of-hearing individuals usually face a greater challenge to learn to write than their normal-hearing counterparts, because sign language is the primary communicative skill for many deaf people. The current body of research only covers the detailed linguistic features of deaf or hard-of-hearing students. Due to the limitations of traditional research methods focusing on microscopic linguistic features, a holistic characterization of the writing linguistic features of these language users is lacking. This study attempts to fill this gap by adopting the methodology of linguistic complex networks. Two syntactic dependency networks are built in order to compare the macroscopic linguistic features of deaf or hard-of-hearing students and those of their normal-hearing peers. One is transformed from a treebank of writing produced by Chinese deaf or hard-of-hearing students, and the other from a treebank of writing produced by their Chinese normal-hearing counterparts. Two major findings are obtained through comparison of the statistical features of the two networks. On the one hand, both linguistic networks display small-world and scale-free network structures, but the network of the normal-hearing students exhibits a more power-law-like degree distribution. Relevant network measures show significant differences between the two linguistic networks. On the other hand, deaf or hard-of-hearing students tend to have a lower language proficiency level in both syntactic and lexical aspects. The rigid use of function words and a lower vocabulary richness of the deaf or hard-of-hearing students may partially account for the observed differences.

  19. Comparison of Gated Audiovisual Speech Identification in Elderly Hearing Aid Users and Elderly Normal-Hearing Individuals: Effects of Adding Visual Cues to Auditory Speech Stimuli.

    Science.gov (United States)

    Moradi, Shahram; Lidestam, Björn; Rönnberg, Jerker

    2016-06-17

    The present study compared elderly hearing aid (EHA) users (n = 20) with elderly normal-hearing (ENH) listeners (n = 20) in terms of isolation points (IPs, the shortest time required for correct identification of a speech stimulus) and accuracy of audiovisual gated speech stimuli (consonants, words, and final words in highly and less predictable sentences) presented in silence. In addition, we compared the IPs of audiovisual speech stimuli from the present study with auditory ones extracted from a previous study, to determine the impact of the addition of visual cues. Both participant groups achieved ceiling levels in terms of accuracy in the audiovisual identification of gated speech stimuli; however, the EHA group needed longer IPs for the audiovisual identification of consonants and words. The benefit of adding visual cues to auditory speech stimuli was more evident in the EHA group, as audiovisual presentation significantly shortened the IPs for consonants, words, and final words in less predictable sentences; in the ENH group, audiovisual presentation only shortened the IPs for consonants and words. In conclusion, although the audiovisual benefit was greater for EHA group, this group had inferior performance compared with the ENH group in terms of IPs when supportive semantic context was lacking. Consequently, EHA users needed the initial part of the audiovisual speech signal to be longer than did their counterparts with normal hearing to reach the same level of accuracy in the absence of a semantic context. © The Author(s) 2016.

  20. Prosody perception in simulated cochlear implant listening in modulated and stationary noise

    DEFF Research Database (Denmark)

    Morris, David Jackson

    2012-01-01

    Cochlear Implant (CI) listeners can do well when attending to speech in quiet, yet challenging listening situations are more problematic. Previous studies have shown that fluctuations in the noise do not yield better speech recognition scores for CI listeners as they can for normal hearing (NH) listeners...... derived from non-scripted Danish speech. The F0 temporal midpoint of the initial syllable was varied stepwise in semitones. Competing signals of modulated white noise and speech-shaped noise at 0 dB and 12 dB SNR were added to the tokens prior to 8-channel noise-excited vocoder processing. Stimuli were...

  1. Low empathy in deaf and hard of hearing (pre)adolescents compared to normal hearing controls

    NARCIS (Netherlands)

    Netten, A.P.; Rieffe, C.; Theunissen, S.C.P.M.; Soede, W.; Dirks, E.; Briaire, J.J.; Frijns, J.H.M.

    2015-01-01

    Objective The purpose of this study was to examine the level of empathy in deaf and hard of hearing (pre)adolescents compared to normal hearing controls and to define the influence of language and various hearing loss characteristics on the development of empathy. Methods The study group (mean age

  2. Selecting participants for listening tests of multi-channel reproduced sound

    DEFF Research Database (Denmark)

    Wickelmaier, Florian Maria; Choisel, Sylvain

    2005-01-01

    A selection procedure was devised in order to select listeners for experiments in which their main task will be to judge multichannel reproduced sound. Ninety-one participants filled in a web-based questionnaire. Seventy-eight of them took part in an assessment of their hearing thresholds......, their spatial hearing, and their verbal production abilities. The listeners displayed large individual differences in their performance. Forty subjects were selected based on the test results. The self-assessed listening habits and experience in the web-questionnaire could not predict the results...... of the selection procedure. Further, the hearing thresholds did not correlate with the spatial-hearing test. This leads to the conclusion that task-specific performance tests might be the preferable means of selecting a listening panel....

  3. Selecting participants for listening tests of multi-channel reproduced sound

    DEFF Research Database (Denmark)

    Wickelmaier, Florian; Choisel, Sylvain

    2005-01-01

    A selection procedure was devised in order to select listeners for experiments in which their main task will be to judge multi-channel reproduced sound. 91 participants filled in a web-based questionnaire. 78 of them took part in an assessment of their hearing thresholds, their spatial hearing......, and their verbal production abilities. The listeners displayed large individual differences in their performance. 40 subjects were selected based on the test results. The self-assessed listening habits and experience in the web questionnaire could not predict the results of the selection procedure. Further......, the hearing thresholds did not correlate with the spatial-hearing test. This leads to the conclusion that task-specific performance tests might be the preferable means of selecting a listening panel....

  4. Detection threshold for sound distortion resulting from noise reduction in normal-hearing and hearing-impaired listeners

    NARCIS (Netherlands)

    Brons, Inge; Dreschler, Wouter A.; Houben, Rolph

    2014-01-01

    Hearing-aid noise reduction should reduce background noise, but not disturb the target speech. This objective is difficult because noise reduction suffers from a trade-off between the amount of noise removed and signal distortion. It is unknown if this important trade-off differs between

  5. The cerebral functional location in normal subjects when they listened to a story in English as a second language

    International Nuclear Information System (INIS)

    Sun Da; Zhan Hongwei; Xu Wei; Liu Hongbiao; He Guangqiang

    2004-01-01

    Purpose: To detect the cerebral functional location when normal subjects listened to a story in English as a second language. Methods: 14 normal young students of the medical college of Zhejiang University, 22-24 years old, 8 male and 6 female, participated. First, they underwent 99mTc-ECD brain imaging at rest using a dual-head gamma camera with fan-beam collimators. After 2-4 days they were asked to listen to a story in English as a second language on tape for 20 minutes. The story recounted the life of the well-known physicist Einstein. The subjects were also asked to pay special attention to the names of the people in the story and to the time and place in which it was set. 99mTc-ECD was administered in the first 3 minutes of listening, and brain imaging was performed 30-60 minutes after the tracer was administered. Their comprehension was classified as poor, middle, or good according to the content they could restate. Results: Compared with the resting state, while listening to the story and trying to remember its content, the superior temporal regions were activated in all 14 subjects: bilaterally in 4 cases, on the right in 5 cases, and on the left in 5 cases. The middle temporal (right in 5 cases), inferior temporal (right in 2 cases and left in 3 cases), and anterior temporal (1 case) regions were activated too. The auditory association areas in the frontal lobes were activated to different degrees: the left posterior-inferior frontal region (Broca's area) in 8 cases, the right posterior-inferior frontal region in 3 cases, the superior frontal region in 6 cases (bilateral in 3 and right in 3), and the anterior-inferior frontal and/or medial frontal lobes in 9 cases (bilateral in 6 and right in 3). Other activated regions included the parietal lobes (right in 4 and left in 1), the occipital lobes (bilateral in 4, right in 2, and left in 4), and the anterior cingulate gyrus (1 case). Ordering comprehension from poor to middle to good, the activation rate of the occipital lobes decreased (100%, 75%, and 57

  6. Visual cues and listening effort: individual variability.

    Science.gov (United States)

    Picou, Erin M; Ricketts, Todd A; Hornsby, Benjamin W Y

    2011-10-01

    To investigate the effect of visual cues on listening effort as well as whether predictive variables such as working memory capacity (WMC) and lipreading ability affect the magnitude of listening effort. Twenty participants with normal hearing were tested using a paired-associates recall task in 2 conditions (quiet and noise) and 2 presentation modalities (audio only [AO] and auditory-visual [AV]). Signal-to-noise ratios were adjusted to provide matched speech recognition across audio-only and AV noise conditions. Also measured were subjective perceptions of listening effort and 2 predictive variables: (a) lipreading ability and (b) WMC. Objective and subjective results indicated that listening effort increased in the presence of noise, but on average the addition of visual cues did not significantly affect the magnitude of listening effort. Although there was substantial individual variability, on average participants who were better lipreaders or had larger WMCs demonstrated reduced listening effort in noise in AV conditions. Overall, the results support the hypothesis that integrating auditory and visual cues requires cognitive resources in some participants. The data indicate that low lipreading ability or low WMC is associated with relatively effortful integration of auditory and visual information in noise.

  7. Temporal integration of loudness in listeners with hearing losses of primarily cochlear origin

    DEFF Research Database (Denmark)

    Buus, Søren; Florentine, Mary; Poulsen, Torben

    1999-01-01

    To investigate how hearing loss of primarily cochlear origin affects the loudness of brief tones, loudness matches between 5- and 200-ms tones were obtained as a function of level for 15 listeners with cochlear impairments and for seven age-matched controls. Three frequencies, usually 0.5, 1, and 4...... of temporal integration—defined as the level difference between equally loud short and long tones—varied nonmonotonically with level and was largest at moderate levels. No consistent effect of frequency was apparent. The impaired listeners varied widely, but most showed a clear effect of level on the amount...... of temporal integration. Overall, their results appear consistent with expectations based on knowledge of the general properties of their loudness-growth functions and the equal-loudness-ratio hypothesis, which states that the loudness ratio between equal-SPL long and brief tones is the same at all SPLs...

  8. Searching for sources of variance in speech recognition: Young adults with normal hearing

    Science.gov (United States)

    Watson, Charles S.; Kidd, Gary R.

    2005-04-01

    In the present investigation, sensory-perceptual abilities of one thousand young adults with normal hearing are being evaluated with a range of auditory, visual, and cognitive measures. Four auditory measures were derived from factor-analytic analyses of previous studies with 18-20 speech and non-speech variables [G. R. Kidd et al., J. Acoust. Soc. Am. 108, 2641 (2000)]. Two measures of visual acuity are obtained to determine whether variation in sensory skills tends to exist primarily within or across sensory modalities. A working memory test, grade point average, and Scholastic Aptitude Test scores (Verbal and Quantitative) are also included. Preliminary multivariate analyses support previous studies of individual differences in auditory abilities [e.g., A. M. Surprenant and C. S. Watson, J. Acoust. Soc. Am. 110, 2085-2095 (2001)], which found that spectral and temporal resolving power obtained with pure tones and more complex unfamiliar stimuli have little or no correlation with measures of speech recognition under difficult listening conditions. The current findings show that visual acuity, working memory, and intellectual measures are also very poor predictors of speech recognition ability, supporting the independence of this processing skill. Remarkable performance by some exceptional listeners will be described. [Work supported by the Office of Naval Research, Award No. N000140310644.]

  9. Communication between hearing impaired and normal hearing students: a facilitative proposal of learning in higher education

    Directory of Open Access Journals (Sweden)

    Krysne Kelly de França Oliveira

    2014-09-01

    Full Text Available Introduction: There has been an increase in the number of hearing impaired people with access to higher education. Most of them are young people from a different culture who face difficulties in communication, interpersonal relationships, and learning within a culture of normal hearing people, because they use a different language, Brazilian Sign Language (LIBRAS). Objective: The present study aimed to identify the forms of communication used between hearing impaired and normal hearing students, verifying how these can interfere with the learning process of the former. Methods: A qualitative study conducted at a private university in the city of Fortaleza, Ceará state, Brazil, from February to April 2009. We carried out semi-structured interviews with three hearing impaired students, three teachers, three interpreters, and three normal hearing students. The content of the interviews was categorized and organized by the method of thematic analysis. Results: We verified that the forms of communication used ranged from mime and gestures to writing and drawing, but the one most accepted by the hearing impaired students was LIBRAS. As a method of communication, it supports the learning of hearing impaired students and, with the mediation of interpreters, enables them to work within their zones of proximal development, in keeping with the precepts of Vygotsky. Conclusion: Thus, we recognize the importance of LIBRAS as the predominant language, essential to the full academic achievement of hearing impaired students; however, the students' own effort and dedication, as well as the interest of institutions and teachers in deaf culture, are also important for preparing future professionals.

  10. Speech Rate Normalization and Phonemic Boundary Perception in Cochlear-Implant Users

    Science.gov (United States)

    Jaekel, Brittany N.; Newman, Rochelle S.; Goupell, Matthew J.

    2017-01-01

    Purpose: Normal-hearing (NH) listeners rate normalize, temporarily remapping phonemic category boundaries to account for a talker's speech rate. It is unknown if adults who use auditory prostheses called cochlear implants (CI) can rate normalize, as CIs transmit degraded speech signals to the auditory nerve. Ineffective adjustment to rate…

  11. Listeners Experience Linguistic Masking Release in Noise-Vocoded Speech-in-Speech Recognition

    Science.gov (United States)

    Viswanathan, Navin; Kokkinakis, Kostas; Williams, Brittany T.

    2018-01-01

    Purpose: The purpose of this study was to evaluate whether listeners with normal hearing perceiving noise-vocoded speech-in-speech demonstrate better intelligibility of target speech when the background speech was mismatched in language (linguistic release from masking [LRM]) and/or location (spatial release from masking [SRM]) relative to the…

  12. Output capabilities of personal music players and assessment of preferred listening levels of test subjects: outlining recommendations for preventing music-induced hearing loss.

    Science.gov (United States)

    Breinbauer, Hayo A; Anabalón, Jose L; Gutierrez, Daniela; Cárcamo, Rodrigo; Olivares, Carla; Caro, Jorge

    2012-11-01

    Our goal was to assess the impact of personal music players, earphones, and music styles on output capabilities and on subjects' preferred listening levels, and to outline recommendations for the prevention of music-induced hearing loss. Experimental study. Personal music players' output capabilities and volunteers' preferred output levels were assessed in different settings. Based on current noise-induced hearing loss exposure limits, recommendations were outlined. Across three different devices, three earphone types, and 10 music styles, free-field equivalent sound pressure output levels were assessed by placing a microphone probe inside the auditory canal. Forty-five hearing-healthy volunteers were asked to select preferred listening levels in different background noise scenarios. Sound pressure output reached 126 dB. No difference was found between device types, whereas earbud and supra-aural earphones showed significantly lower outputs than in-ear earphones. Music style groups were identified with as much as a 14.4 dB difference between them. In silence, 17.8% of volunteers spontaneously selected a listening level above 85 dB. With 90 dB background noise, 40% selected a level above 94 dB. Earphone attenuation capability was found to correlate significantly with preferred level reductions (r = 0.585, P < .001). In-ear and especially supra-aural earphones reduced preferred listening levels the most. Safe-use recommendations were outlined, with selecting the lowest comfortable volume setting remaining the main suggestion. Earphones with high background-noise attenuation may help reduce comfortable listening levels and should be preferred. A risk table was elaborated, presenting time limits before a risky exposure is reached. Copyright © 2012 The American Laryngological, Rhinological, and Otological Society, Inc.
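The time-limit risk table this record mentions follows from standard damage-risk criteria. As a hedged illustration (the authors' exact criterion is not given in the abstract), a NIOSH-style rule with an 85 dBA criterion and a 3-dB exchange rate gives a permissible daily exposure of T = 8 × 2^(−(L−85)/3) hours, i.e. every 3 dB above 85 dBA halves the allowed time:

```python
def allowed_hours(level_dba: float, criterion: float = 85.0, exchange: float = 3.0) -> float:
    """Permissible daily exposure in hours under an 85 dBA / 3-dB-exchange rule
    (NIOSH-style assumption; other standards use 90 dBA or a 5-dB exchange)."""
    return 8.0 * 2.0 ** (-(level_dba - criterion) / exchange)

# Levels taken from the abstract: the 85 dB silent-condition choice and the
# 94 dB choice under 90 dB background noise.
for level in (85.0, 94.0, 100.0):
    print(f"{level:.0f} dBA -> {allowed_hours(level):.2f} h/day")
```

Under this rule, the 94 dB levels that 40% of volunteers chose in background noise would exhaust the daily dose in about one hour of listening.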

  13. Predicting effects of hearing-instrument signal processing on consonant perception

    DEFF Research Database (Denmark)

    Zaar, Johannes; Schmitt, Nicola; Derleth, Ralph-Peter

    2017-01-01

    This study investigated the influence of hearing-aid (HA) and cochlear-implant (CI) processing on consonant perception in normal-hearing (NH) listeners. Measured data were compared to predictions obtained with a speech perception model [Zaar and Dau (2017). J. Acoust. Soc. Am. 141, 1051–1064] that combines an auditory processing front end with a correlation-based template-matching back end. In terms of HA processing, effects of strong nonlinear frequency compression and impulse-noise suppression were measured in 10 NH listeners using consonant-vowel stimuli. Regarding CI processing, the consonant perception data from DiNino et al. [(2016). J. Acoust. Soc. Am. 140, 4404-4418] were considered, which were obtained with noise-vocoded vowel-consonant-vowel stimuli in 12 NH listeners. The inputs to the model were the same stimuli as were used in the corresponding experiments. The model predictions obtained...

  14. Hear Me, Oh Hear Me! Are We Listening to Our Employees?

    Science.gov (United States)

    Loy, Darcy

    2011-01-01

    Listening is one of the most crucial skills that leaders need to possess but is often the most difficult to master. It takes hard work, concentration, and specific skill sets to become an effective listener. Facilities leaders need to perfect the art of listening to their employees. Employees possess pertinent knowledge about day-to-day operations…

  15. Classroom listening assessment: strategies for speech-language pathologists.

    Science.gov (United States)

    Johnson, Cheryl DeConde

    2012-11-01

    Emphasis on classroom listening has gained importance for all children and especially for those with hearing loss and special listening needs. The rationale can be supported from trends in educational placements, the Response to Intervention initiative, student performance and accountability, the role of audition in reading, and improvement in hearing technologies. Speech-language pathologists have an instrumental role advocating for the accommodations that are necessary for effective listening for these children in school. To identify individual listening needs and make relevant recommendations for accommodations, a classroom listening assessment is suggested. Components of the classroom listening assessment include observation, behavioral assessment, self-assessment, and classroom acoustics measurements. Together, with a strong rationale, the results can be used to implement a plan that results in effective classroom listening for these children.

  16. Reliability and Magnitude of Laterality Effects in Dichotic Listening with Exogenous Cueing

    Science.gov (United States)

    Voyer, Daniel

    2004-01-01

    The purpose of the present study was to replicate and extend to word recognition previous findings of reduced magnitude and reliability of laterality effects when exogenous cueing was used in a dichotic listening task with syllable pairs. Twenty right-handed undergraduate students with normal hearing (10 females, 10 males) completed a dichotic…

  17. Hearing Screening

    Science.gov (United States)

    Johnson-Curiskis, Nanette

    2012-01-01

    Hearing levels are threatened by modern life--headsets for music, rock concerts, traffic noises, etc. It is crucial we know our hearing levels so that we can draw attention to potential problems. This exercise requires that students receive a hearing screening for their benefit as well as for making the connection of hearing to listening.

  18. Listening Comprehension in Middle-Aged Adults.

    Science.gov (United States)

    Sommers, Mitchell S

    2015-06-01

    The purpose of this summary is to examine changes in listening comprehension across the adult lifespan and to identify factors associated with individual differences in listening comprehension. In this article, the author reports on both cross-sectional and longitudinal changes in listening comprehension. Despite significant declines in both sensory and cognitive abilities, listening comprehension remains relatively unchanged in middle-aged listeners (between the ages of 40 and 60 years) compared with young listeners. These results are discussed with respect to possible compensatory factors that maintain listening comprehension despite impaired hearing and reduced cognitive capacities.

  19. Single-sided deafness & directional hearing: contribution of spectral cues and high-frequency hearing loss in the hearing ear

    Directory of Open Access Journals (Sweden)

    Martijn Johannes Hermanus Agterberg

    2014-07-01

    Direction-specific interactions of sound waves with the head, torso, and pinna provide unique spectral-shape cues that are used for the localization of sounds in the vertical plane, whereas horizontal sound localization is based primarily on the processing of binaural acoustic differences in arrival time (interaural time differences, or ITDs) and sound level (interaural level differences, or ILDs). Because the binaural sound-localization cues are absent in listeners with total single-sided deafness (SSD), their ability to localize sound is heavily impaired. However, some studies have reported that SSD listeners are able, to some extent, to localize sound sources in azimuth, although the underlying mechanisms used for localization are unclear. To investigate whether SSD listeners rely on monaural pinna-induced spectral-shape cues of their hearing ear for directional hearing, we investigated localization performance for low-pass filtered (LP, 3 kHz) and broadband (BB, 0.5–20 kHz) noises in the two-dimensional frontal hemifield. We tested whether localization performance of SSD listeners further deteriorated when the pinna cavities of their hearing ear were filled with a mold that disrupted their spectral-shape cues. To remove the potential use of perceived sound level as an invalid azimuth cue, we randomly varied stimulus presentation levels over a broad range (45-65 dB SPL). Several listeners with SSD could localize BB sound sources in the horizontal plane, but inter-subject variability was considerable. Localization performance of these listeners was strongly reduced after their spectral pinna cues were diminished. We further show that inter-subject variability among SSD listeners can be explained to a large extent by the severity of high-frequency hearing loss in their hearing ear.
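The binaural timing cue described above can be approximated with a simple spherical-head model. The sketch below is an illustrative Python approximation only (not code from the study); it uses Woodworth's formula with an assumed head radius of 8.75 cm:

```python
import math

def woodworth_itd(azimuth_deg, head_radius_m=0.0875, c=343.0):
    """Approximate interaural time difference (in seconds) for a
    spherical head, via Woodworth's formula:
        ITD = (r / c) * (theta + sin(theta)),
    where theta is the source azimuth in radians (0 = straight ahead)
    and c is the speed of sound in air."""
    theta = math.radians(azimuth_deg)
    return (head_radius_m / c) * (theta + math.sin(theta))
```

For a source at 90 degrees azimuth this model yields an ITD on the order of 0.65 ms, the magnitude of the cue that is unavailable to a listener with total single-sided deafness.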

  20. Effects of Simulated Conductive Hearing Loss on Dichotic Listening Performance for Digits.

    Science.gov (United States)

    Niccum, Nancy; And Others

    1987-01-01

    Conductive hearing losses were simulated in 12 subjects aged 19-35 and performance was compared with normal hearing performance. Digit dichotic performance was affected when test intensities were within 8 dB of the "knees" (95 percent correct point) of monotic performance intensity functions, but not when test intensities were 12 dB…

  1. Preliminary investigation of the categorization of gaps and overlaps in turn-taking interactions: Effects of noise and hearing loss

    DEFF Research Database (Denmark)

    Sørensen, Anna Josefine; Weisser, Adam; MacDonald, Ewen

    2017-01-01

    Normal conversation requires interlocutors to monitor the ongoing acoustic signal to judge when it is appropriate to start talking. Categorical thresholds for gaps and overlaps in turn-taking interactions were measured for normal-hearing and hearing-impaired listeners in both quiet and multitalker babble (+6 dB SNR). The slopes of the categorization functions were significantly shallower for hearing-impaired listeners and in the presence of background noise. Moreover, the categorization threshold for overlaps increased in background noise.
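The shallower categorization functions reported here can be pictured with a standard logistic psychometric model. The following Python sketch is illustrative only; the parameter values are assumptions for demonstration, not the study's fitted values:

```python
import math

def categorization_prob(gap_ms, threshold_ms, slope):
    """Probability of judging a turn-taking gap as 'too long', modeled
    as a logistic psychometric function. `threshold_ms` is the 50%
    category boundary; `slope` controls steepness. A shallower slope,
    as reported for hearing-impaired listeners and for listening in
    noise, corresponds to less consistent categorization."""
    return 1.0 / (1.0 + math.exp(-slope * (gap_ms - threshold_ms)))
```

With an assumed boundary at 500 ms, a steep function (slope 0.02) assigns a 600 ms gap to the "too long" category far more reliably than a shallow one (slope 0.005), even though both cross 50% at the same boundary.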

  2. Performance, fatigue and stress in open-plan offices: The effects of noise and restoration on hearing impaired and normal hearing individuals

    Directory of Open Access Journals (Sweden)

    Helena Jahncke

    2012-01-01

    Hearing-impaired and normal-hearing individuals were compared in two within-participant office noise conditions (high noise: 60 dB LAeq; low noise: 30 dB LAeq). Performance, subjective fatigue, and physiological stress were tested while participants worked in a simulated open-plan office. We also tested two between-participants restoration conditions following the work period with high noise (nature movie or continued office noise). Participants with a hearing impairment (N = 20) were matched with normal-hearing participants (N = 18) and undertook one practice session and two counterbalanced experimental sessions. In each experimental session they worked for two hours on basic memory and attention tasks. We also measured physiological stress indicators (cortisol and catecholamines) and self-reports of mood and fatigue. The hearing-impaired participants were more affected by high noise than the normal-hearing participants, as shown by impaired performance on tasks that involve recall of semantic information. The hearing-impaired participants were also more fatigued by high noise exposure than participants with normal hearing, and they tended to have higher stress hormone levels during the high-noise condition than during the low-noise condition. Restoration with a movie increased performance and motivation for the normal-hearing participants, while rest with continued noise did not. For the hearing-impaired participants, continued noise during rest increased motivation and performance, while the movie did not. In summary, the impact of noise and restorative conditions varied with the hearing characteristics of the participants. The small sample size does, however, encourage caution when interpreting the results.

  3. The medial olivocochlear reflex in children during active listening.

    Science.gov (United States)

    Smith, Spencer B; Cone, Barbara

    2015-08-01

    To determine if active listening modulates the strength of the medial olivocochlear (MOC) reflex in children. Click-evoked otoacoustic emissions (CEOAEs) were recorded from the right ear in quiet and in four test conditions: one with contralateral broadband noise (BBN) only, and three with active listening tasks wherein attention was directed to speech embedded in contralateral BBN. Fifteen typically-developing children (ranging in age from 8 to14 years) with normal hearing. CEOAE levels were reduced in every condition with contralateral acoustic stimulus (CAS) when compared to preceding quiet conditions. There was an additional systematic decrease in CEOAE level with increased listening task difficulty, although this effect was very small. These CEOAE level differences were most apparent in the 8-18 ms region after click onset. Active listening may change the strength of the MOC reflex in children, although the effects reported here are very subtle. Further studies are needed to verify that task difficulty modulates the activity of the MOC reflex in children.

  4. Auditory-model based assessment of the effects of hearing loss and hearing-aid compression on spectral and temporal resolution

    DEFF Research Database (Denmark)

    Kowalewski, Borys; MacDonald, Ewen; Strelcyk, Olaf

    2016-01-01

    Most state-of-the-art hearing aids apply multi-channel dynamic-range compression (DRC). Such designs have the potential to emulate, at least to some degree, the processing that takes place in the healthy auditory system. One way to assess hearing-aid performance is to measure speech intelligibility. However, due to the complexity of speech and its robustness to spectral and temporal alterations, the effects of DRC on speech perception have been mixed and controversial. The goal of the present study was to obtain a clearer understanding of the interplay between hearing loss and DRC by means of... Outcomes were simulated using the auditory processing model of Jepsen et al. (2008) with the front end modified to include effects of hearing impairment and DRC. The results were compared to experimental data from normal-hearing and hearing-impaired listeners.

  5. Hearing Aids

    Science.gov (United States)

    ... primarily useful in improving the hearing and speech comprehension of people who have hearing loss that results ... and you can change the program for different listening environments—from a small, quiet room to a ...

  6. The effect of different cochlear implant microphones on acoustic hearing individuals’ binaural benefits for speech perception in noise

    Science.gov (United States)

    Aronoff, Justin M.; Freed, Daniel J.; Fisher, Laurel M.; Pal, Ivan; Soli, Sigfrid D.

    2011-01-01

    Objectives Cochlear implant microphones differ in placement, frequency response, and other characteristics such as whether they are directional. Although normal hearing individuals are often used as controls in studies examining cochlear implant users’ binaural benefits, the considerable differences across cochlear implant microphones make such comparisons potentially misleading. The goal of this study was to examine binaural benefits for speech perception in noise for normal hearing individuals using stimuli processed by head-related transfer functions (HRTFs) based on the different cochlear implant microphones. Design HRTFs were created for different cochlear implant microphones and used to test participants on the Hearing in Noise Test. Experiment 1 tested cochlear implant users and normal hearing individuals with HRTF-processed stimuli and with sound field testing to determine whether the HRTFs adequately simulated sound field testing. Experiment 2 determined the measurement error and performance-intensity function for the Hearing in Noise Test with normal hearing individuals listening to stimuli processed with the various HRTFs. Experiment 3 compared normal hearing listeners’ performance across HRTFs to determine how the HRTFs affected performance. Experiment 4 evaluated binaural benefits for normal hearing listeners using the various HRTFs, including ones that were modified to investigate the contributions of interaural time and level cues. Results The results indicated that the HRTFs adequately simulated sound field testing for the Hearing in Noise Test. They also demonstrated that the test-retest reliability and performance-intensity function were consistent across HRTFs, and that the measurement error for the test was 1.3 dB, with a change in signal-to-noise ratio of 1 dB reflecting a 10% change in intelligibility. There were significant differences in performance when using the various HRTFs, with particularly good thresholds for the HRTF based on the
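The reported psychometric relationship (a 1 dB change in signal-to-noise ratio corresponding to a 10% change in intelligibility) can be expressed as a one-line linear conversion. The helper below is a hypothetical illustration of that approximation, valid only near the speech reception threshold; it is not code from the study:

```python
def intelligibility_change(delta_snr_db, slope_pct_per_db=10.0):
    """Approximate change in percent intelligibility on the Hearing in
    Noise Test for a given change in SNR (dB), using the reported
    slope of ~10% per dB. Only meaningful for small excursions around
    the 50%-correct threshold, where the function is roughly linear."""
    return delta_snr_db * slope_pct_per_db
```

By this reading, the reported 1.3 dB measurement error corresponds to an intelligibility uncertainty of roughly 13 percentage points near threshold.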

  7. Oscillatory decoupling differentiates auditory encoding deficits in children with listening problems.

    Science.gov (United States)

    Gilley, Phillip M; Sharma, Mridula; Purdy, Suzanne C

    2016-02-01

    We sought to examine whether oscillatory EEG responses to a speech stimulus in both quiet and noise differed between children with listening problems and children with normal hearing. We employed a high-resolution spectral-temporal analysis of the cortical auditory evoked potential in response to a 150 ms speech sound /da/ in quiet and at +3 dB SNR in 21 typically developing children (mean age = 10.7 years, standard deviation = 1.7) and 44 children with reported listening problems (LP) in the absence of hearing loss (mean age = 10.3 years, standard deviation = 1.6). Children with LP were assessed for auditory processing disorder (APD), whereby 24 children had APD and 20 children did not. Peak latencies, magnitudes, and frequencies were compared between these groups. Children with LP had frequency shifts in the theta and alpha bands. Oscillatory decoupling may thus differentiate auditory encoding deficits associated with listening problems in this population of children.

  8. Reflectance Measures from Infant Ears With Normal Hearing and Transient Conductive Hearing Loss.

    Science.gov (United States)

    Voss, Susan E; Herrmann, Barbara S; Horton, Nicholas J; Amadei, Elizabeth A; Kujawa, Sharon G

    2016-01-01

    The objective is to develop methods to utilize newborn reflectance measures for the identification of middle-ear transient conditions (e.g., middle-ear fluid) during the newborn period and ultimately during the first few months of life. Transient middle-ear conditions are a suspected source of failure to pass a newborn hearing screening. The ability to identify a conductive loss during the screening procedure could enable the referred ear to be either (1) cleared of a middle-ear condition and recommended for more extensive hearing assessment as soon as possible, or (2) suspected of a transient middle-ear condition, and if desired, be rescreened before more extensive hearing assessment. Reflectance measurements are reported from full-term, healthy, newborn babies in which one ear referred and one ear passed an initial auditory brainstem response newborn hearing screening and a subsequent distortion product otoacoustic emission screening on the same day. These same subjects returned for a detailed follow-up evaluation at age 1 month (range 14 to 35 days). In total, measurements were made on 30 subjects who had a unilateral refer near birth (during their first 2 days of life) and bilateral normal hearing at follow-up (about 1 month old). Three specific comparisons were made: (1) association of ear's state with power reflectance near birth (referred versus passed ear), (2) changes in power reflectance of normal ears between newborn and 1 month old (maturation effects), and (3) association of ear's newborn state (referred versus passed) with ear's power reflectance at 1 month. In addition to these measurements, a set of preliminary data selection criteria were developed to ensure that analyzed data were not corrupted by acoustic leaks and other measurement problems. Within 2 days of birth, the power reflectance measured in newborn ears with transient middle-ear conditions (referred newborn hearing screening and passed hearing assessment at age 1 month) was significantly...

  9. "Can You Repeat That?" Teaching Active Listening in Management Education

    Science.gov (United States)

    Spataro, Sandra E.; Bloch, Janel

    2018-01-01

    Listening is a critical communication skill and therefore an essential element of management education. "Active" listening surpasses passive listening or simple hearing to establish a deeper connection between speaker and listener, as the listener gives the speaker full attention via inquiry, reflection, respect, and empathy. This…

  10. The Effects of Musical and Linguistic Components in Recognition of Real-World Musical Excerpts by Cochlear Implant Recipients and Normal-Hearing Adults

    Science.gov (United States)

    Gfeller, Kate; Jiang, Dingfeng; Oleson, Jacob; Driscoll, Virginia; Olszewski, Carol; Knutson, John F.; Turner, Christopher; Gantz, Bruce

    2011-01-01

    Background Cochlear implants (CI) are effective in transmitting salient features of speech, especially in quiet, but current CI technology is not well suited to transmission of key musical structures (e.g., melody, timbre). It is possible, however, that sung lyrics, which are commonly heard in real-world music, may provide acoustical cues that support better music perception. Objective The purpose of this study was to examine how accurately adults who use CIs (n=87) and those with normal hearing (NH) (n=17) are able to recognize real-world music excerpts based upon musical and linguistic (lyrics) cues. Results CI recipients were significantly less accurate than NH listeners on recognition of real-world music with or, in particular, without lyrics; however, CI recipients whose devices transmitted acoustic plus electric stimulation were more accurate than CI recipients reliant upon electric stimulation alone (particularly for items without linguistic cues). Recognition by CI recipients improved as a function of linguistic cues. Methods Participants were tested on melody recognition of complex melodies (pop, country, classical styles). Results were analyzed as a function of: hearing status and history, device type (electric only or acoustic plus electric stimulation), musical style, linguistic and musical cues, speech perception scores, cognitive processing, music background, age, and in relation to self-report on listening acuity and enjoyment. Age at time of testing was negatively correlated with recognition performance. Conclusions These results have practical implications regarding successful participation of CI users in music-based activities that include recognition and accurate perception of real-world songs (e.g., reminiscence, lyric analysis, listening for enjoyment). PMID:22803258

  11. The effects of musical and linguistic components in recognition of real-world musical excerpts by cochlear implant recipients and normal-hearing adults.

    Science.gov (United States)

    Gfeller, Kate; Jiang, Dingfeng; Oleson, Jacob J; Driscoll, Virginia; Olszewski, Carol; Knutson, John F; Turner, Christopher; Gantz, Bruce

    2012-01-01

    Cochlear implants (CI) are effective in transmitting salient features of speech, especially in quiet, but current CI technology is not well suited to transmission of key musical structures (e.g., melody, timbre). It is possible, however, that sung lyrics, which are commonly heard in real-world music, may provide acoustical cues that support better music perception. The purpose of this study was to examine how accurately adults who use CIs (n = 87) and those with normal hearing (NH) (n = 17) are able to recognize real-world music excerpts based upon musical and linguistic (lyrics) cues. CI recipients were significantly less accurate than NH listeners on recognition of real-world music with or, in particular, without lyrics; however, CI recipients whose devices transmitted acoustic plus electric stimulation were more accurate than CI recipients reliant upon electric stimulation alone (particularly for items without linguistic cues). Recognition by CI recipients improved as a function of linguistic cues. Participants were tested on melody recognition of complex melodies (pop, country, & classical styles). Results were analyzed as a function of: hearing status and history, device type (electric only or acoustic plus electric stimulation), musical style, linguistic and musical cues, speech perception scores, cognitive processing, music background, age, and in relation to self-report on listening acuity and enjoyment. Age at time of testing was negatively correlated with recognition performance. These results have practical implications regarding successful participation of CI users in music-based activities that include recognition and accurate perception of real-world songs (e.g., reminiscence, lyric analysis, & listening for enjoyment).

  12. Prosody Perception and Production in Children with Hearing Loss and Age- and Gender-Matched Controls.

    Science.gov (United States)

    Kalathottukaren, Rose Thomas; Purdy, Suzanne C; Ballard, Elaine

    2017-04-01

    Auditory development in children with hearing loss, including the perception of prosody, depends on having adequate input from cochlear implants and/or hearing aids. Lack of adequate auditory stimulation can lead to delayed speech and language development. Nevertheless, prosody perception and production in people with hearing loss have received less attention than other aspects of language. The perception of auditory information conveyed through prosody using variations in the pitch, amplitude, and duration of speech is not usually evaluated clinically. This study (1) compared prosody perception and production abilities in children with hearing loss and children with normal hearing; and (2) investigated the effect of age, hearing level, and musicality on prosody perception. Participants were 16 children with hearing loss and 16 typically developing controls matched for age and gender. Fifteen of the children with hearing loss were tested while using amplification (n = 9 hearing aids, n = 6 cochlear implants). Six receptive subtests of the Profiling Elements of Prosody in Speech-Communication (PEPS-C), the Child Paralanguage subtest of Diagnostic Analysis of Nonverbal Accuracy 2 (DANVA 2), and the Contour and Interval subtests of the Montreal Battery of Evaluation of Amusia (MBEA) were used. Audio recordings of the children's reading samples were rated using a perceptual prosody rating scale by nine experienced listeners who were blinded to the children's hearing status. Thirty-two children participated: 16 with hearing loss (mean age = 8.71 yr) and 16 age- and gender-matched typically developing children with normal hearing (mean age = 8.87 yr). Assessments were completed in one session lasting 1-2 hours in a quiet room. Test items were presented using a laptop computer through a loudspeaker at a comfortable listening level. For children with hearing loss using hearing instruments, all tests were completed with hearing devices set at their everyday listening setting. All PEPS

  13. Sentence Writing and Perception of Written Sentences in Hearing-Impaired and Normal-Hearing Primary School Students in Hamadan, Western Iran

    Directory of Open Access Journals (Sweden)

    Afsaneh Yaghobi

    2011-06-01

    Background and Aim: Language is acquired in early childhood and gradually developed through new words and new structures. Hearing is the most important sense for learning this skill, and hearing disorders are barriers to natural language learning. The purpose of this study was to investigate the relationship between writing sentences and perception of written sentences in hearing-impaired and normal-hearing students. Methods: A cross-sectional study was conducted among thirty hearing-impaired students with hearing loss of 70-90 dB and thirty normal-hearing students. They were selected from 3rd grade primary school students in Hamadan, a large city in Western Iran. Language skills and non-language information were assessed by questionnaire, the Action Picture Test, and the Sentence Perception Test. Results: Results showed that there was a significant relation between writing sentences and perception of written sentences in hearing-impaired students (p<0.001, r=0.8). This significant relation was seen in normal-hearing students as well (p<0.001, r=0.7). Conclusion: The difficulty of hearing-impaired students in verbal communication is related not only to articulation and voice disorders but also to their inability to explore and use language rules. They lack perception of written sentences, and they are not skilled at conveying their feelings and thoughts or at presenting themselves using language structures.

  14. Influence of risky and protective behaviors connected with listening to music on hearing loss and the noise induced threshold shift among students of the Medical University of Bialystok

    Directory of Open Access Journals (Sweden)

    Beata Modzelewska

    2017-03-01

    Background. Currently, significant changes have occurred in the character of sound exposure, along with the properties of the group affected by it. Thus, primary care physicians have to keep in mind that young adults now comprise a sizable group in which the prevalence of hearing loss is increasing. Objectives. The goal of the following study was to determine the auditory ability of students attending the Medical University of Bialystok and to analyze their risky and protective behaviors relating to music consumption. Material and methods. In total, 230 students (age: 18–26 years) completed a questionnaire about general personal information and their music-listening habits. Thereafter, pure-tone audiometry at standard frequencies (0.25 kHz–8 kHz) was performed. Results. Hearing loss was more frequent in subjects who listened to music at higher volumes (‘very loud’ – 22.2%, ‘loud’ – 3.9%, ‘not very loud’ – 2.1%, ‘quiet’ – 9.1%; p = 0.046). Hearing loss was more prevalent among those students who had lived in a city with more than 50,000 inhabitants before starting higher education than among the remaining subjects (7.95% vs. 0.97%, p = 0.025). Conclusions. The study demonstrated that surprisingly few medical students suffer from hearing loss or a noise-induced threshold shift. There was no correlation between risky behavior, such as a lengthy daily duration of listening to music or the type of headphone used, and hearing loss. Hearing screening tests combined with education are indicated in this group of young adults due to the cumulative character of hearing damage.

  15. Effects of age and hearing loss on recognition of unaccented and accented multisyllabic words

    Science.gov (United States)

    Gordon-Salant, Sandra; Yeni-Komshian, Grace H.; Fitzgibbons, Peter J.; Cohen, Julie I.

    2015-01-01

    The effects of age and hearing loss on recognition of unaccented and accented words of varying syllable length were investigated. It was hypothesized that with increments in length of syllables, there would be atypical alterations in syllable stress in accented compared to native English, and that these altered stress patterns would be sensitive to auditory temporal processing deficits with aging. Sets of one-, two-, three-, and four-syllable words with the same initial syllable were recorded by one native English and two Spanish-accented talkers. Lists of these words were presented in isolation and in sentence contexts to younger and older normal-hearing listeners and to older hearing-impaired listeners. Hearing loss effects were apparent for unaccented and accented monosyllabic words, whereas age effects were observed for recognition of accented multisyllabic words, consistent with the notion that altered syllable stress patterns with accent are sensitive for revealing effects of age. Older listeners also exhibited lower recognition scores for moderately accented words in sentence contexts than in isolation, suggesting that the added demands on working memory for words in sentence contexts impact recognition of accented speech. The general pattern of results suggests that hearing loss, age, and cognitive factors limit the ability to recognize Spanish-accented speech. PMID:25698021

  16. Subcortical amplitude modulation encoding deficits suggest evidence of cochlear synaptopathy in normal-hearing 18-19 year olds with higher lifetime noise exposure.

    Science.gov (United States)

    Paul, Brandon T; Waheed, Sajal; Bruce, Ian C; Roberts, Larry E

    2017-11-01

    Noise exposure and aging can damage cochlear synapses required for suprathreshold listening, even when cochlear structures needed for hearing at threshold remain unaffected. To control for effects of aging, behavioral amplitude modulation (AM) detection and subcortical envelope following responses (EFRs) to AM tones were studied in 25 age-restricted (18-19 years) participants with normal thresholds but different self-reported noise exposure histories. Participants with more noise exposure had smaller EFRs and tended to have poorer AM detection than less-exposed individuals. Simulations of the EFR using a well-established cochlear model were consistent with more synaptopathy in participants reporting greater noise exposure.

  17. Looking is not seeing and listening is not hearing: effect of an intervention to enhance auditory skills of graduate-entry nursing students.

    Science.gov (United States)

    Pellico, Linda Honan; Duffy, Thomas C; Fennie, Kristopher P; Swan, Katharine A

    2012-01-01

    Inspection/observation and listening/auscultation are essential skills for health care providers. Given that observational and auditory skills take time to perfect, there is concern about accelerated students' ability to attain proficiency in a timely manner. This article describes the impact of music auditory training (MAT) for nursing students in an accelerated master's entry program on their competence in detecting heart, lung, and bowel sounds. During the first semester, a two-hour MAT session with focused attention on pitch, timbre, rhythm, and masking was held for the intervention group; a control group received traditional instruction only. Students in the music intervention group demonstrated significant improvement in hearing bowel, heart, and lung sounds (p < .0001). The ability to label normal and abnormal heart sounds doubled; interpretation of normal and abnormal lung sounds improved by 50 percent; and bowel sounds interpretation improved threefold, demonstrating the effect of an adult-oriented, creative, yet practical method for teaching auscultation.

  18. Story retelling skills in Persian speaking hearing-impaired children.

    Science.gov (United States)

    Jarollahi, Farnoush; Mohamadi, Reyhane; Modarresi, Yahya; Agharasouli, Zahra; Rahimzadeh, Shadi; Ahmadi, Tayebeh; Keyhani, Mohammad-Reza

    2017-05-01

    Since the pragmatic skills of hearing-impaired Persian-speaking children have not yet been investigated, particularly through story retelling, this study aimed to evaluate some pragmatic abilities of normal-hearing and hearing-impaired children using a story retelling test. 15 normal-hearing and 15 profoundly hearing-impaired 7-year-old children were evaluated using the story retelling test, which had a content validity of 89%, construct validity of 85%, and reliability of 83%. Three macrostructure criteria, including topic maintenance, event sequencing, and explicitness, and four microstructure criteria, including referencing, conjunctive cohesion, syntax complexity, and utterance length, were assessed. The test was performed with live voice in a quiet room, where children were then asked to retell the story. The children's responses were recorded on tape, transcribed, scored, and analyzed. On the macrostructure criteria, utterances of hearing-impaired students were less consistent, gave listeners too little information for a full understanding of the subject, and expressed the story events in a rational order less frequently than those of the normal-hearing group. Unlike normal-hearing students, who obtained high scores, hearing-impaired students failed to gain any scores on the items of this section. These results suggest that hearing-impaired children were not able to use language as effectively as their hearing peers, and they utilized quite different pragmatic functions.

  19. Effects of hearing-aid dynamic range compression on spatial perception in a reverberant environment

    DEFF Research Database (Denmark)

    Hassager, Henrik Gert; Wiinberg, Alan; Dau, Torsten

    2017-01-01

    This study investigated the effects of fast-acting hearing-aid compression on normal-hearing and hearing-impaired listeners’ spatial perception in a reverberant environment. Three compression schemes—independent compression at each ear, linked compression between the two ears, and “spatially ideal......” compression operating solely on the dry source signal—were considered using virtualized speech and noise bursts. Listeners indicated the location and extent of their perceived sound images on the horizontal plane. Linear processing was considered as the reference condition. The results showed that both...... independent and linked compression resulted in more diffuse and broader sound images as well as internalization and image splits, whereby more image splits were reported for the noise bursts than for speech. Only the spatially ideal compression provided the listeners with a spatial percept similar...

  20. Analysis of Output Levels of an MP3 Player: Effects of Earphone Type, Music Genre, and Listening Duration.

    Science.gov (United States)

    Shim, Hyunyong; Lee, Seungwan; Koo, Miseung; Kim, Jinsook

    2018-02-26

    To help prevent noise-induced hearing loss caused by listening to music with personal listening devices among young adults, this study measured the output levels of an MP3 player and identified preferred listening levels (PLLs) depending on earphone type, music genre, and listening duration. Twenty-two normal-hearing young adults (mean age = 18.82 years, standard deviation = 0.57) participated. Each participant was asked to select his or her preferred listening level when listening to Korean ballad or dance music with an earbud or an over-the-ear earphone for 30 or 60 minutes. One side of the earphone was connected to the participant's better ear and the other side was connected to a sound level meter via a 2 cc or 6 cc coupler. For each earphone type, music genre, and listening duration, A-weighted equivalent continuous sound levels (LAeq) and maximum time-weighted A-frequency sound levels were measured in dBA. Neither the main nor the interaction effects of the three factors on the PLLs were significant. Overall output levels of earbuds were about 10-12 dBA greater than those of over-the-ear earphones. The PLLs were 1.73 dBA greater for earbuds than for over-the-ear earphones. The average PLL for ballad was higher than for dance music. The PLLs at LAeq for both music genres were greatest at 0.5 kHz, followed by 1, 0.25, 2, 4, 0.125, and 8 kHz, in that order. The PLLs did not differ significantly as functions of earphone type, music genre, or listening duration when listening to Korean ballad or dance music. However, over-the-ear earphones seemed more suitable for preventing noise-induced hearing loss when listening to music, showing lower PLLs, possibly because covering the ears isolates the listener from background noise.
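The LAeq figures in this record are energy-equivalent averages of the A-weighted sound level over the measurement period. As a minimal illustrative sketch (not the authors' measurement procedure), combining equal-duration short-term A-weighted levels into a single LAeq can be written as:

```python
import math

def laeq(levels_dba):
    """Energy-average a list of equal-duration A-weighted levels (dBA)
    into a single equivalent continuous level, LAeq."""
    mean_energy = sum(10 ** (level / 10) for level in levels_dba) / len(levels_dba)
    return 10 * math.log10(mean_energy)

# Louder segments dominate: averaging 80 and 90 dBA gives about 87.4 dBA,
# not the arithmetic mean of 85 dBA.
print(round(laeq([80.0, 90.0]), 1))
```

Because the average is taken on an energy (pressure-squared) scale, a brief loud passage raises LAeq far more than an equally long quiet passage lowers it.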

  1. Everyday listeners' impressions of speech produced by individuals with adductor spasmodic dysphonia.

    Science.gov (United States)

    Nagle, Kathleen F; Eadie, Tanya L; Yorkston, Kathryn M

    2015-01-01

    Individuals with adductor spasmodic dysphonia (ADSD) have reported that unfamiliar communication partners appear to judge them as sneaky, nervous or not intelligent, apparently based on the quality of their speech; however, there is minimal research into the actual everyday perspective of listening to ADSD speech. The purpose of this study was to investigate the impressions of listeners hearing ADSD speech for the first time using a mixed-methods design. Everyday listeners were interviewed following sessions in which they made ratings of ADSD speech. A semi-structured interview approach was used and data were analyzed using thematic content analysis. Three major themes emerged: (1) everyday listeners make judgments about speakers with ADSD; (2) ADSD speech does not sound normal to everyday listeners; and (3) rating overall severity is difficult for everyday listeners. Participants described ADSD speech similarly to existing literature; however, some listeners inaccurately extrapolated speaker attributes based solely on speech samples. Listeners may draw erroneous conclusions about individuals with ADSD and these biases may affect the communicative success of these individuals. Results have implications for counseling individuals with ADSD, as well as the need for education and awareness about ADSD. Copyright © 2015 Elsevier Inc. All rights reserved.

  2. How young and old adults listen to and remember speech in noise.

    Science.gov (United States)

    Pichora-Fuller, M K; Schneider, B A; Daneman, M

    1995-01-01

    Two experiments using the materials of the Revised Speech Perception in Noise (SPIN-R) Test [Bilger et al., J. Speech Hear. Res. 27, 32-48 (1984)] were conducted to investigate age-related differences in the identification and the recall of sentence-final words heard in a babble background. In experiment 1, the level of the babble was varied to determine psychometric functions (percent correct word identification as a function of S/N ratio) for presbycusics, old adults with near-normal hearing, and young normal-hearing adults, when the sentence-final words were either predictable (high context) or unpredictable (low context). Differences between the psychometric functions for high- and low-context conditions were used to show that both groups of old listeners derived more benefit from supportive context than did young listeners. In experiment 2, a working memory task [Daneman and Carpenter, J. Verb. Learn. Verb. Behav. 19, 450-466 (1980)] was added to the SPIN task for young and old adults. Specifically, after listening to and identifying the sentence-final words for a block of n sentences, the subjects were asked to recall the last n words that they had identified. Old subjects recalled fewer of the items they had perceived than did young subjects in all S/N conditions, even though there was no difference in the recall ability of the two age groups when sentences were read. Furthermore, the number of items recalled by both age groups was reduced in adverse S/N conditions. The results were interpreted as supporting a processing model in which reallocable processing resources are used to support auditory processing when listening becomes difficult, either because of noise or because of age-related deterioration in the auditory system. Because of this reallocation, these resources are unavailable to more central cognitive processes such as the storage and retrieval functions of working memory, so that "upstream" processing of auditory information is adversely
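The psychometric functions in experiment 1 relate percent-correct word identification to S/N ratio. The abstract does not give the fitting procedure; a generic two-parameter logistic form, with hypothetical midpoint and slope values, might look like:

```python
import math

def psychometric(snr_db, midpoint_db, slope):
    """Generic logistic psychometric function: proportion of words
    identified correctly as a function of S/N ratio (dB).
    midpoint_db is the S/N ratio yielding 50% correct."""
    return 1.0 / (1.0 + math.exp(-slope * (snr_db - midpoint_db)))

# Hypothetical parameters: supportive context shifts the midpoint to a
# lower (noisier) S/N ratio, i.e., the same accuracy is reached in more noise.
high_context = psychometric(0.0, midpoint_db=-4.0, slope=0.5)
low_context = psychometric(0.0, midpoint_db=2.0, slope=0.5)
print(high_context > low_context)  # context benefit at 0 dB S/N
```

The context benefit reported in the study corresponds to the horizontal gap between the high- and low-context functions.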

  3. Rapid word-learning in normal-hearing and hearing-impaired children: effects of age, receptive vocabulary, and high-frequency amplification.

    Science.gov (United States)

    Pittman, A L; Lewis, D E; Hoover, B M; Stelmachowicz, P G

    2005-12-01

    This study examined rapid word-learning in 5- to 14-year-old children with normal and impaired hearing. The effects of age and receptive vocabulary were examined as well as those of high-frequency amplification. Novel words were low-pass filtered at 4 kHz (typical of current amplification devices) and at 9 kHz. It was hypothesized that (1) the children with normal hearing would learn more words than the children with hearing loss, (2) word-learning would increase with age and receptive vocabulary for both groups, and (3) both groups would benefit from a broader frequency bandwidth. Sixty children with normal hearing and 37 children with moderate sensorineural hearing losses participated in this study. Each child viewed a 4-minute animated slideshow containing 8 nonsense words created using the 24 English consonant phonemes (3 consonants per word). Each word was repeated 3 times. Half of the 8 words were low-pass filtered at 4 kHz and half were filtered at 9 kHz. After viewing the story twice, each child was asked to identify the words from among pictures in the slide show. Before testing, a measure of current receptive vocabulary was obtained using the Peabody Picture Vocabulary Test (PPVT-III). The PPVT-III scores of the hearing-impaired children were consistently poorer than those of the normal-hearing children across the age range tested. A similar pattern of results was observed for word-learning in that the performance of the hearing-impaired children was significantly poorer than that of the normal-hearing children. Further analysis of the PPVT and word-learning scores suggested that although word-learning was reduced in the hearing-impaired children, their performance was consistent with their receptive vocabularies. Additionally, no correlation was found between overall performance and the age of identification, age of amplification, or years of amplification in the children with hearing loss. 
Results also revealed a small increase in performance for both

  4. Pre- and Postoperative Binaural Unmasking for Bimodal Cochlear Implant Listeners.

    Science.gov (United States)

    Sheffield, Benjamin M; Schuchman, Gerald; Bernstein, Joshua G W

    Cochlear implants (CIs) are increasingly recommended to individuals with residual bilateral acoustic hearing. Although new hearing-preserving electrode designs and surgical approaches show great promise, CI recipients are still at risk of losing acoustic hearing in the implanted ear, which could prevent them from taking advantage of binaural unmasking to aid speech recognition in noise. This study examined the tradeoff between the benefits of a CI for speech understanding in noise and the potential loss of binaural unmasking for CI recipients with some bilateral preoperative acoustic hearing. Binaural unmasking is difficult to evaluate in CI candidates because speech perception in noise is generally too poor to measure reliably in the range of signal-to-noise ratios (SNRs) where binaural intelligibility level differences (BILDs) are typically observed. Nine out of 10 listeners tested postoperatively had performance equal to or better than their best pre-CI performance. The listener who retained functional acoustic hearing in the implanted ear also demonstrated a preserved acoustic BILD postoperatively. Approximately half of the CI candidates in this study demonstrated preoperative binaural hearing benefits for audiovisual speech perception in noise. Most of these listeners lost their acoustic hearing in the implanted ear after surgery (using nonhearing-preservation techniques), and therefore lost access to this binaural benefit. In all but one case, any loss of binaural benefit was compensated for or exceeded by an improvement in speech perception with the CI. Evidence of a preoperative BILD suggests that certain CI candidates might further benefit from hearing-preservation surgery to retain acoustic binaural unmasking, as demonstrated for the listener who underwent hearing-preservation surgery. This test of binaural audiovisual speech perception in noise could serve as a diagnostic tool to identify CI candidates who are most likely to receive

  5. Hearing Loss in Children With Otitis Media With Effusion: Actual and Simulated Effects on Speech Perception.

    Science.gov (United States)

    Cai, Ting; McPherson, Bradley; Li, Caiwei; Yang, Feng

    2017-11-14

    Conductive hearing loss simulations have attempted to estimate the speech-understanding difficulties of children with otitis media with effusion (OME). However, the validity of this approach has not been evaluated. The aim of the present study was to investigate whether a simple, frequency-specific, attenuation-based simulation of OME-related hearing loss was able to reflect the actual effects of conductive hearing loss on speech perception. Forty-one school-age children with OME-related hearing loss were recruited. Each child with OME was matched with a same-sex and same-age counterpart with normal hearing to make a participant pair. Pure-tone threshold differences at octave frequencies from 125 to 8000 Hz for every participant pair were used as the simulation attenuation levels for the normal-hearing children. Another group of 41 school-age otologically normal children was recruited as a control group without actual or simulated hearing loss. The Mandarin Hearing in Noise Test was utilized, and sentence recall accuracy at four signal-to-noise ratios (SNRs) considered representative of classroom-listening conditions was derived, as well as reception thresholds for sentences (RTS) in quiet and in noise using adaptive protocols. The speech perception in quiet and in noise of children with simulated OME-related hearing loss was significantly poorer than that of otologically normal children. Analysis showed that the RTS in quiet of children with OME-related hearing loss and of children with simulated OME-related hearing loss were significantly correlated and comparable. A repeated-measures analysis suggested that sentence recall accuracy obtained at 5-dB SNR, 0-dB SNR, and -5-dB SNR was similar between children with actual and simulated OME-related hearing loss. However, the RTS in noise of children with OME was significantly better than that of children with simulated OME-related hearing loss. 
The present frequency-specific, attenuation-based simulation method reflected

  6. Hearing, listening, action: Enhancing nursing practice through aural awareness education.

    Science.gov (United States)

    Collins, Anita; Vanderheide, Rebecca; McKenna, Lisa

    2014-01-01

    Noise overload within the clinical environment has been found to interfere with the healing process for patients, as well as with nurses' ability to assess patients effectively. Awareness of, and responsibility for, noise production begins during initial nursing training, and consequently a program to enhance aural awareness skills was designed for graduate-entry nursing students at an Australian university. The program utilized an innovative combination of music education activities to develop the students' ability to distinguish individual sounds (hearing), appreciate patients' experience of sounds (listening), and improve their auscultation skills while reducing the negative effects of noise on patients (action). Using a mixed methods approach, students reported heightened auscultation skills and greater recognition of both patients' and clinicians' aural overload. Results of this pilot suggest that music education activities can assist nursing students to develop their aural awareness and to effect changes within the clinical environment that improve the patient's experience of noise.

  7. The music listening preferences and habits of youths in Singapore and its relation to leisure noise-induced hearing loss.

    Science.gov (United States)

    Lee, Gary Jek Chong; Lim, Ming Yann; Kuan, Angeline Yi Wei; Teo, Joshua Han Wei; Tan, Hui Guang; Low, Wong Kein

    2014-02-01

    Noise-induced hearing loss (NIHL) is a preventable condition, and much has been done to protect workers from it. However, thus far, little attention has been given to leisure NIHL. The purpose of this study was to determine the music listening preferences and habits among young people in Singapore that may put them at risk of developing leisure NIHL. In our study, the proportion of participants exposed to > 85 dBA for eight hours a day (time-weighted average) was calculated by taking into account the daily number of hours spent listening to music and by determining the average sound pressure level at which music was listened to. A total of 1,928 students were recruited from Temasek Polytechnic, Singapore, of whom 16.4% listened to portable music players with a time-weighted average of > 85 dBA for 8 hours. On average, we found that male students were more likely to listen to music at louder volumes than female students, and some student groups listened to louder music than the Chinese students. These students are at risk of leisure NIHL from music delivered via earphones. As additional risks due to exposure to leisure noise from other sources were not taken into account, the extent of the problem of leisure NIHL may be even greater. There is a compelling need for an effective leisure-noise prevention program among young people in Singapore.
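The > 85 dBA over eight hours criterion in this record is an equal-energy daily dose with a 3 dB exchange rate. As a hedged sketch (the study's exact computation is not shown in the abstract), normalizing an exposure to an 8-hour time-weighted average could look like:

```python
import math

def twa_8h(laeq_dba, hours):
    """8-hour equal-energy time-weighted average (3 dB exchange rate):
    the same sound energy spread over a nominal 8-hour day."""
    return laeq_dba + 10 * math.log10(hours / 8.0)

def exceeds_limit(laeq_dba, hours, limit_dba=85.0):
    """True if the exposure exceeds the 8-hour criterion level."""
    return twa_8h(laeq_dba, hours) > limit_dba

# Listening at 88 dBA for 4 hours carries roughly the same energy as
# 85 dBA for 8 hours, i.e., right at the criterion used in the study.
print(exceeds_limit(91.0, 8.0))   # over the criterion
print(exceeds_limit(88.0, 2.0))   # shorter exposure, smaller dose
```

Under the 3 dB exchange rate, halving the listening time permits a 3 dB louder level for the same daily dose.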

  8. Right-Ear Advantage for Speech-in-Noise Recognition in Patients with Nonlateralized Tinnitus and Normal Hearing Sensitivity.

    Science.gov (United States)

    Tai, Yihsin; Husain, Fatima T

    2018-04-01

    Despite having normal hearing sensitivity, patients with chronic tinnitus may experience more difficulty recognizing speech in adverse listening conditions as compared to controls. However, the association between the characteristics of tinnitus (severity and loudness) and speech recognition remains unclear. In this study, the Quick Speech-in-Noise test (QuickSIN) was conducted monaurally on 14 patients with bilateral tinnitus and 14 age- and hearing-matched adults to determine the relation between tinnitus characteristics and speech understanding. Further, Tinnitus Handicap Inventory (THI), tinnitus loudness magnitude estimation, and loudness matching were obtained to better characterize the perceptual and psychological aspects of tinnitus. The patients reported low THI scores, with most participants in the slight handicap category. Significant between-group differences in speech-in-noise performance were only found at the 5-dB signal-to-noise ratio (SNR) condition. The tinnitus group performed significantly worse in the left ear than in the right ear, even though bilateral tinnitus percept and symmetrical thresholds were reported in all patients. This between-ear difference is likely influenced by a right-ear advantage for speech sounds, as factors related to testing order and fatigue were ruled out. Additionally, significant correlations found between SNR loss in the left ear and tinnitus loudness matching suggest that perceptual factors related to tinnitus had an effect on speech-in-noise performance, pointing to a possible interaction between peripheral and cognitive factors in chronic tinnitus. Further studies, that take into account both hearing and cognitive abilities of patients, are needed to better parse out the effect of tinnitus in the absence of hearing impairment.

  9. Objective measures of listening effort: effects of background noise and noise reduction.

    Science.gov (United States)

    Sarampalis, Anastasios; Kalluri, Sridhar; Edwards, Brent; Hafter, Ervin

    2009-10-01

    This work is aimed at addressing a seeming contradiction related to the use of noise-reduction (NR) algorithms in hearing aids. The problem is that although some listeners claim a subjective improvement from NR, it has not been shown to improve speech intelligibility, often even making it worse. To address this, the hypothesis tested here is that the positive effects of NR might be to reduce cognitive effort directed toward speech reception, making it available for other tasks. Normal-hearing individuals participated in 2 dual-task experiments, in which 1 task was to report sentences or words in noise set to various signal-to-noise ratios. Secondary tasks involved either holding words in short-term memory or responding in a complex visual reaction-time task. At low values of signal-to-noise ratio, although NR had no positive effect on speech reception thresholds, it led to better performance on the word-memory task and quicker responses in visual reaction times. Results from both dual tasks support the hypothesis that NR reduces listening effort and frees up cognitive resources for other tasks. Future hearing aid research should incorporate objective measurements of cognitive benefits.

  10. Four cases of acoustic neuromas with normal hearing.

    Science.gov (United States)

    Valente, M; Peterein, J; Goebel, J; Neely, J G

    1995-05-01

    In 95 percent of the cases, patients with acoustic neuromas will have some magnitude of hearing loss in the affected ear. This paper reports on four patients who had acoustic neuromas and normal hearing. Results from the case history, audiometric evaluation, auditory brainstem response (ABR), electroneurography (ENOG), and vestibular evaluation are reported for each patient. For all patients, the presence of unilateral tinnitus was the most common complaint. Audiologically, elevated or absent acoustic reflex thresholds and abnormal ABR findings were the most powerful diagnostic tools.

  11. Can you hear me now? Teaching listening skills.

    Science.gov (United States)

    Nemec, Patricia B; Spagnolo, Amy Cottone; Soydan, Anne Sullivan

    2017-12-01

    This column provides an overview of methods for training service providers to improve their active listening and reflective responding skills. Basic skills in active listening and reflective responding allow service providers to gather information about, and explore, the needs, desires, concerns, and preferences of people using their services; these activities are of critical importance if services are to be truly person-centered and person-driven. Sources include the personal experience of the authors as well as published literature on the value of basic counseling skills and best practices in training on listening and other related soft skills. Training in listening is often needed but rarely sought by behavioral health service providers. Effective curricula exist, providing content and practice opportunities that can be incorporated into training, supervision, and team meetings. When providers do not listen well to the people who use their services, the entire premise of recovery-oriented, person-driven services is undermined. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  12. Hearing Threshold and Equal Loudness Level Contours of 1/3-octave Noise Bands in a Diffuse Sound Field

    DEFF Research Database (Denmark)

    Nielsen, Maja Kirstine E.; Poulsen, Torben

    1994-01-01

    Hearing threshold levels and equal loudness level contours of 1/3-octave noise bands at 40 phon and 60 phon were measured for 27 normal-hearing listeners in an approximately diffuse sound field. The threshold data in the frequency range 125 Hz to 1 kHz were 3-6 dB higher than the values given...

  13. The effects of reverberant self- and overlap-masking on speech recognition in cochlear implant listeners.

    Science.gov (United States)

    Desmond, Jill M; Collins, Leslie M; Throckmorton, Chandra S

    2014-06-01

    Many cochlear implant (CI) listeners experience decreased speech recognition in reverberant environments [Kokkinakis et al., J. Acoust. Soc. Am. 129(5), 3221-3232 (2011)], which may be caused by a combination of self- and overlap-masking [Bolt and MacDonald, J. Acoust. Soc. Am. 21(6), 577-580 (1949)]. Determining the extent to which these effects decrease speech recognition for CI listeners may inform reverberation mitigation algorithms. This study compared speech recognition with ideal self-masking mitigation, with ideal overlap-masking mitigation, and with no mitigation. Under these conditions, mitigating either self- or overlap-masking resulted in significant improvements in speech recognition both for normal-hearing subjects utilizing an acoustic model and for CI listeners using their own devices.

  14. Effort and Displeasure in People Who Are Hard of Hearing.

    Science.gov (United States)

    Matthen, Mohan

    2016-01-01

    Listening effort helps explain why people who are hard of hearing are prone to fatigue and social withdrawal. However, a one-factor model that cites only effort due to hardness of hearing is insufficient as there are many who lead happy lives despite their disability. This article explores other contributory factors, in particular motivational arousal and pleasure. The theory of rational motivational arousal predicts that some people forego listening comprehension because they believe it to be impossible and hence worth no effort at all. This is problematic. Why should the listening task be rated this way, given the availability of aids that reduce its difficulty? Two additional factors narrow the explanatory gap. First, we separate the listening task from the benefit derived as a consequence. The latter is temporally more distant, and is discounted as a result. The second factor is displeasure attributed to the listening task, which increases listening cost. Many who are hard of hearing enjoy social interaction. In such cases, the actual activity of listening is a benefit, not a cost. These people also reap the benefits of listening, but do not have to balance these against the displeasure of the task. It is suggested that if motivational harmony can be induced by training in somebody who is hard of hearing, then the obstacle to motivational arousal would be removed. This suggests a modified goal for health care professionals. Do not just teach those who are hard of hearing how to use hearing assistance devices. Teach them how to do so with pleasure and enjoyment.

  15. Lexical and age effects on word recognition in noise in normal-hearing children.

    Science.gov (United States)

    Ren, Cuncun; Liu, Sha; Liu, Haihong; Kong, Ying; Liu, Xin; Li, Shujing

    2015-12-01

    The purposes of the present study were (1) to examine the lexical and age effects on word recognition in noise in normal-hearing (NH) children, and (2) to compare word-recognition performance in noise to that in quiet listening conditions. Participants were 213 NH children aged between 3 and 6 years. Eighty-nine and 124 of the participants were tested in noise and quiet listening conditions, respectively. The Standard-Chinese Lexical Neighborhood Test, which contains lists of words in four lexical categories (disyllabic easy (DE), disyllabic hard (DH), monosyllabic easy (ME), and monosyllabic hard (MH)), was used to evaluate Mandarin Chinese word recognition in speech-spectrum-shaped noise (SSN) at a signal-to-noise ratio (SNR) of 0 dB. A two-way repeated-measures analysis of variance was conducted to examine the lexical effects, with syllable length and difficulty level as the main factors, on word recognition in the quiet and noise listening conditions. The effects of age on word-recognition performance were examined using a regression model. Word-recognition performance in noise was significantly poorer than that in quiet, and the individual variation in performance in noise was much greater than that in quiet. Word-recognition scores showed that the lexical effects were significant in the SSN: children scored higher with disyllabic words than with monosyllabic words, and "easy" words scored higher than "hard" words in the noise condition. The scores of the NH children in the SSN (SNR = 0 dB) for the DE, DH, ME, and MH words were 85.4, 65.9, 71.7, and 46.2% correct, respectively. Word-recognition performance also increased with age in each lexical category for the NH children tested in noise. Both age and the lexical characteristics of words had significant influences on Mandarin Chinese word recognition in noise. The lexical effects were more obvious in noise listening conditions than in quiet. The word

  16. Masking release with changing fundamental frequency: Electric acoustic stimulation resembles normal hearing subjects.

    Science.gov (United States)

    Auinger, Alice Barbara; Riss, Dominik; Liepins, Rudolfs; Rader, Tobias; Keck, Tilman; Keintzel, Thomas; Kaider, Alexandra; Baumgartner, Wolf-Dieter; Gstoettner, Wolfgang; Arnoldner, Christoph

    2017-07-01

    It has been shown that patients with electric acoustic stimulation (EAS) perform better in noisy environments than patients with a cochlear implant (CI). One reason for this could be the preserved access to acoustic low-frequency cues including the fundamental frequency (F0). Therefore, our primary aim was to investigate whether users of EAS experience a release from masking with increasing F0 difference between target talker and masking talker. The study comprised 29 patients and consisted of three groups of subjects: EAS users, CI users and normal-hearing listeners (NH). All CI and EAS users were implanted with a MED-EL cochlear implant and had at least 12 months of experience with the implant. Speech perception was assessed with the Oldenburg sentence test (OlSa) using one sentence from the test corpus as speech masker. The F0 in this masking sentence was shifted upwards by 4, 8, or 12 semitones. For each of these masker conditions the speech reception threshold (SRT) was assessed by adaptively varying the masker level while presenting the target sentences at a fixed level. A statistically significant improvement in speech perception was found for increasing difference in F0 between target sentence and masker sentence in EAS users (p = 0.038) and in NH listeners (p = 0.003). In CI users (classic CI or EAS users with electrical stimulation only) speech perception was independent from differences in F0 between target and masker. A release from masking with increasing difference in F0 between target and masking speech was only observed in listeners and configurations in which the low-frequency region was presented acoustically. Thus, the speech information contained in the low frequencies seems to be crucial for allowing listeners to separate multiple sources. By combining acoustic and electric information, EAS users even manage tasks as complicated as segregating the audio streams from multiple talkers. Preserving the natural code, like fine-structure cues in
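The masker F0 in this study was shifted upwards by 4, 8, or 12 semitones relative to the target. On the equal-tempered scale, each semitone multiplies frequency by 2^(1/12); a small sketch (the 120 Hz starting F0 is an assumed example, not a value from the study):

```python
def shift_semitones(f0_hz, semitones):
    """Shift a fundamental frequency by a number of equal-tempered
    semitones: each semitone scales frequency by 2 ** (1/12)."""
    return f0_hz * 2 ** (semitones / 12.0)

# A 12-semitone (one octave) shift exactly doubles F0.
print(shift_semitones(120.0, 12))            # 240.0
# The study's 4- and 8-semitone masker shifts, from an assumed 120 Hz F0:
print(round(shift_semitones(120.0, 4), 1))   # 151.2
print(round(shift_semitones(120.0, 8), 1))   # 190.5
```

Expressing the shift in semitones rather than hertz keeps the F0 separation perceptually comparable across talkers with different baseline pitches.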

  17. Comparison of objective methods for assessment of annoyance of low frequency noise with the results of a laboratory listening test

    DEFF Research Database (Denmark)

    Poulsen, Torben

    2003-01-01

    Subjective assessments made by test persons were compared to results from a number of objective measurement and calculation methods for the assessment of low frequency noise. Eighteen young persons with normal hearing listened to eight environmental low frequency noises and evaluated the annoyance...

  18. Lateralization of narrow-band noise by blind and sighted listeners.

    Science.gov (United States)

    Simon, Helen J; Divenyi, Pierre L; Lotze, Al

    2002-01-01

    The effects of varying interaural time delay (ITD) and interaural intensity difference (IID) were measured in normal-hearing sighted and congenitally blind subjects as a function of eleven frequencies, at sound pressure levels of 70 and 90 dB, and at a sensation level of 25 dB (sensation level refers to the pressure level of the sound above its threshold for the individual subject). Using an 'acoustic' pointing paradigm, the subject varied the IID of a 500 Hz narrow-band (100 Hz) noise (the 'pointer') to coincide with the apparent lateral position of a 'target' ITD stimulus. ITDs of 0, +/-200, and +/-400 µs were obtained through total waveform delays of narrow-band noise, including envelope and fine structure. For both groups, the results of this experiment confirm the traditional view of binaural hearing for like stimuli: non-zero ITDs produce little perceived lateral displacement away from 0 IID at frequencies above 1250 Hz. To the extent that a greater magnitude of lateralization for a given ITD, presentation level, and center frequency can be equated with superior localization abilities, blind listeners appear at least comparable to, and even somewhat better than, sighted subjects, especially when attending to signals in the periphery. The present findings suggest that blind listeners are fully able to utilize the cues for spatial hearing, and that vision is not a mandatory prerequisite for the calibration of human spatial hearing.

  19. Associations between speech understanding and auditory and visual tests of verbal working memory: effects of linguistic complexity, task, age, and hearing loss.

    Science.gov (United States)

    Smith, Sherri L; Pichora-Fuller, M Kathleen

    2015-01-01

    Listeners with hearing loss commonly report having difficulty understanding speech, particularly in noisy environments. Their difficulties could be due to auditory and cognitive processing problems. Performance on speech-in-noise tests has been correlated with reading working memory span (RWMS), a measure often chosen to avoid the effects of hearing loss. If the goal is to assess the cognitive consequences of listeners' auditory processing abilities, however, then listening working memory span (LWMS) could be a more informative measure. Some studies have examined the effects of different degrees and types of masking on working memory, but less is known about the demands placed on working memory depending on the linguistic complexity of the target speech or the task used to measure speech understanding in listeners with hearing loss. Compared to RWMS, LWMS measures using different speech targets and maskers may provide a more ecologically valid approach. To examine the contributions of RWMS and LWMS to speech understanding, we administered two working memory measures (a traditional RWMS measure and a new LWMS measure), and a battery of tests varying in the linguistic complexity of the speech materials, the presence of babble masking, and the task. Participants were a group of younger listeners with normal hearing and two groups of older listeners with hearing loss (n = 24 per group). There was a significant group difference and a wider range in performance on LWMS than on RWMS. There was a significant correlation between both working memory measures only for the oldest listeners with hearing loss. Notably, there were only few significant correlations among the working memory and speech understanding measures. These findings suggest that working memory measures reflect individual differences that are distinct from those tapped by these measures of speech understanding.

  20. Measuring listening-related effort and fatigue in school-aged children using pupillometry.

    Science.gov (United States)

    McGarrigle, Ronan; Dawes, Piers; Stewart, Andrew J; Kuchinsky, Stefanie E; Munro, Kevin J

    2017-09-01

Stress and fatigue from effortful listening may compromise well-being, learning, and academic achievement in school-aged children. The aim of this study was to investigate the effect of a signal-to-noise ratio (SNR) typical of those in school classrooms on listening effort (behavioral and pupillometric) and listening-related fatigue (self-report and pupillometric) in a group of school-aged children. A sample of 41 normal-hearing children aged 8-11 years performed a narrative speech-picture verification task in a condition with recommended levels of background noise ("ideal": +15 dB SNR) and a condition with typical classroom background noise levels ("typical": -2 dB SNR). Participants showed increased task-evoked pupil dilation in the typical listening condition compared with the ideal listening condition, consistent with an increase in listening effort. No differences were found between listening conditions in terms of performance accuracy and response time on the behavioral task. Similarly, no differences were found between listening conditions in self-report and pupillometric markers of listening-related fatigue. This is the first study to (a) examine listening-related fatigue in children using pupillometry and (b) demonstrate physiological evidence consistent with increased listening effort while listening to spoken narratives despite ceiling-level task performance accuracy. Understanding the physiological mechanisms that underpin listening-related effort and fatigue could inform intervention strategies and ultimately mitigate listening difficulties in children.
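The two listening conditions above differ only in signal-to-noise ratio. As an illustration of how a masker might be scaled to a target SNR such as +15 dB or -2 dB, here is a minimal sketch; the function name and the use of broadband noise as stand-ins for speech and classroom babble are assumptions, not details from the study.

```python
import numpy as np

def scale_noise_to_snr(speech, noise, target_snr_db):
    """Rescale `noise` so the speech-to-noise power ratio equals target_snr_db.
    Returns the scaled noise; mixing is simply speech + scaled noise."""
    p_speech = np.mean(speech ** 2)
    p_noise = np.mean(noise ** 2)
    # Required noise power follows from P_s / P_n = 10^(SNR/10)
    gain = np.sqrt(p_speech / (p_noise * 10 ** (target_snr_db / 10)))
    return noise * gain

rng = np.random.default_rng(1)
speech = rng.standard_normal(48000)   # stand-in for a recorded narrative
babble = rng.standard_normal(48000)   # stand-in for classroom noise
ideal = speech + scale_noise_to_snr(speech, babble, +15.0)   # "ideal" condition
typical = speech + scale_noise_to_snr(speech, babble, -2.0)  # "typical" condition
```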

  1. Modeling Speech Level as a Function of Background Noise Level and Talker-to-Listener Distance for Talkers Wearing Hearing Protection Devices

    Science.gov (United States)

    Bouserhal, Rachel E.; Bockstael, Annelies; MacDonald, Ewen; Falk, Tiago H.; Voix, Jérémie

    2017-01-01

    Purpose: Studying the variations in speech levels with changing background noise level and talker-to-listener distance for talkers wearing hearing protection devices (HPDs) can aid in understanding communication in background noise. Method: Speech was recorded using an intra-aural HPD from 12 different talkers at 5 different distances in 3…

  2. The Emotional Communication in Hearing Questionnaire (EMO-CHeQ): Development and Evaluation.

    Science.gov (United States)

    Singh, Gurjit; Liskovoi, Lisa; Launer, Stefan; Russo, Frank

    2018-06-11

The objectives of this research were to develop and evaluate a self-report questionnaire (the Emotional Communication in Hearing Questionnaire, or EMO-CHeQ) designed to assess experiences of hearing and handicap when listening to signals that contain vocal emotion information. Study 1 involved internet-based administration of a 42-item version of the EMO-CHeQ to 586 adult participants (243 with self-reported normal hearing [NH], 193 with self-reported hearing impairment but no reported use of hearing aids [HI], and 150 with self-reported hearing impairment and use of hearing aids [HA]). To better understand the factor structure of the EMO-CHeQ and eliminate redundant items, an exploratory factor analysis was conducted. Study 2 involved laboratory-based administration of a 16-item version of the EMO-CHeQ to 32 adult participants (12 with normal or near-normal hearing [NH/nNH], 10 HI, and 10 HA). In addition, participants completed an emotion-identification task under audio and audiovisual conditions. In study 1, the exploratory factor analysis yielded an interpretable solution with four factors emerging that explained a total of 66.3% of the variance in performance on the EMO-CHeQ. Item deletion resulted in construction of the 16-item EMO-CHeQ. In study 1, both the HI and HA groups reported greater vocal emotion communication handicap on the EMO-CHeQ than the NH group, but differences in handicap were not observed between the HI and HA groups. In study 2, the same pattern of reported handicap was observed in individuals with audiometrically verified hearing status as was found in study 1. On the emotion-identification task, no group differences in performance were observed in the audiovisual condition, but group differences were observed in the audio-alone condition. Although the HI and HA groups exhibited similar emotion-identification performance, both groups performed worse than the NH/nNH group, thus suggesting the presence of behavioral deficits that parallel self

  3. EEG activity as an objective measure of cognitive load during effortful listening: A study on pediatric subjects with bilateral, asymmetric sensorineural hearing loss.

    Science.gov (United States)

    Marsella, Pasquale; Scorpecci, Alessandro; Cartocci, Giulia; Giannantonio, Sara; Maglione, Anton Giulio; Venuti, Isotta; Brizi, Ambra; Babiloni, Fabio

    2017-08-01

Deaf subjects with hearing aids or cochlear implants generally find it challenging to understand speech in noisy environments, where a great deal of listening effort and cognitive load are invested. In prelingually deaf children, such difficulties may have detrimental consequences on the learning process and, later in life, on academic performance. Despite the importance of such a topic, currently, there is no validated test for the assessment of cognitive load during audiological tasks. Recently, alpha and theta EEG rhythm variations in the parietal and frontal areas, respectively, have been used as indicators of cognitive load in adult subjects. The aim of the present study was to investigate, by means of EEG, the cognitive load of pediatric subjects affected by asymmetric sensorineural hearing loss as they were engaged in a speech-in-noise identification task. Seven children (4F and 3M, age range = 8-16 years) affected by asymmetric sensorineural hearing loss (i.e. profound degree on one side, mild-to-severe degree on the other side) and using a hearing aid only in their better ear, were included in the study. All of them underwent EEG recording during a speech-in-noise identification task: the experimental conditions were quiet, binaural noise, noise to the better hearing ear and noise to the poorer hearing ear. The subjects' Speech Recognition Thresholds (SRT) were also measured in each test condition. The primary outcome measures were: frontal EEG Power Spectral Density (PSD) in the theta band and parietal EEG PSD in the alpha band, as assessed before stimulus (word) onset. No statistically significant differences were noted among frontal theta power levels in the four test conditions. However, parietal alpha power levels were significantly higher in the "binaural noise" and in the "noise to worse hearing ear" conditions than in the "quiet" and "noise to better hearing ear" conditions (p < 0.05), suggesting that parietal alpha power may index cognitive load during effortful listening. Significantly higher
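The outcome measures above are band-limited EEG power spectral densities. A minimal sketch of how such band power might be computed with Welch's method follows; the 8-12 Hz alpha and 4-8 Hz theta band edges, the 256 Hz sampling rate, and the synthetic signal are conventional assumptions, not values reported in the abstract.

```python
import numpy as np
from scipy.signal import welch

def band_power(eeg, fs, f_lo, f_hi):
    """Mean power spectral density of `eeg` within [f_lo, f_hi] Hz (Welch)."""
    freqs, psd = welch(eeg, fs=fs, nperseg=fs * 2)  # 2-second windows
    band = (freqs >= f_lo) & (freqs <= f_hi)
    return psd[band].mean()

fs = 256  # a typical EEG sampling rate (assumed)
t = np.arange(fs * 10) / fs
rng = np.random.default_rng(0)
# Synthetic parietal channel: background noise plus a 10 Hz alpha oscillation
parietal = rng.standard_normal(t.size) + 2.0 * np.sin(2 * np.pi * 10 * t)
alpha = band_power(parietal, fs, 8, 12)   # conventional alpha band
theta = band_power(parietal, fs, 4, 8)    # conventional theta band
```

For this synthetic channel the alpha-band power dominates, mirroring how the study compares per-band PSD levels across listening conditions.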

  4. Imaging of Conductive Hearing Loss With a Normal Tympanic Membrane.

    Science.gov (United States)

    Curtin, Hugh D

    2016-01-01

    This article presents an approach to imaging conductive hearing loss in patients with normal tympanic membranes and discusses entities that should be checked as the radiologist evaluates this potentially complicated issue. Conductive hearing loss in a patient with a normal tympanic membrane is a complicated condition that requires a careful imaging approach. Imaging should focus on otosclerosis, and possible mimics and potential surgical considerations should be evaluated. The radiologist should examine the ossicular chain and the round window and keep in mind that a defect in the superior semicircular canal can disturb the hydraulic integrity of the labyrinth.

  5. Social Connectedness and Perceived Listening Effort in Adult Cochlear Implant Users: A Grounded Theory to Establish Content Validity for a New Patient-Reported Outcome Measure.

    Science.gov (United States)

    Hughes, Sarah E; Hutchings, Hayley A; Rapport, Frances L; McMahon, Catherine M; Boisvert, Isabelle

    2018-02-08

    Individuals with hearing loss often report a need for increased effort when listening, particularly in challenging acoustic environments. Despite audiologists' recognition of the impact of listening effort on individuals' quality of life, there are currently no standardized clinical measures of listening effort, including patient-reported outcome measures (PROMs). To generate items and content for a new PROM, this qualitative study explored the perceptions, understanding, and experiences of listening effort in adults with severe-profound sensorineural hearing loss before and after cochlear implantation. Three focus groups (1 to 3) were conducted. Purposive sampling was used to recruit 17 participants from a cochlear implant (CI) center in the United Kingdom. The participants included adults (n = 15, mean age = 64.1 years, range 42 to 84 years) with acquired severe-profound sensorineural hearing loss who satisfied the UK's national candidacy criteria for cochlear implantation and their normal-hearing significant others (n = 2). Participants were CI candidates who used hearing aids (HAs) and were awaiting CI surgery or CI recipients who used a unilateral CI or a CI and contralateral HA (CI + HA). Data from a pilot focus group conducted with 2 CI recipients were included in the analysis. The data, verbatim transcripts of the focus group proceedings, were analyzed qualitatively using constructivist grounded theory (GT) methodology. A GT of listening effort in cochlear implantation was developed from participants' accounts. The participants provided rich, nuanced descriptions of the complex and multidimensional nature of their listening effort. Interpreting and integrating these descriptions through GT methodology, listening effort was described as the mental energy required to attend to and process the auditory signal, as well as the effort required to adapt to, and compensate for, a hearing loss. 
Analyses also suggested that listening effort for most participants was

  6. Factors associated with hearing loss in a normal-hearing guinea pig model of Hybrid cochlear implants.

    Science.gov (United States)

    Tanaka, Chiemi; Nguyen-Huynh, Anh; Loera, Katherine; Stark, Gemaine; Reiss, Lina

    2014-10-01

    The Hybrid cochlear implant (CI), also known as Electro-Acoustic Stimulation (EAS), is a new type of CI that preserves residual acoustic hearing and enables combined cochlear implant and hearing aid use in the same ear. However, 30-55% of patients experience acoustic hearing loss within days to months after activation, suggesting that both surgical trauma and electrical stimulation may cause hearing loss. The goals of this study were to: 1) determine the contributions of both implantation surgery and EAS to hearing loss in a normal-hearing guinea pig model; 2) determine which cochlear structural changes are associated with hearing loss after surgery and EAS. Two groups of animals were implanted (n = 6 per group), with one group receiving chronic acoustic and electric stimulation for 10 weeks, and the other group receiving no direct acoustic or electric stimulation during this time frame. A third group (n = 6) was not implanted, but received chronic acoustic stimulation. Auditory brainstem response thresholds were followed over time at 1, 2, 6, and 16 kHz. At the end of the study, the following cochlear measures were quantified: hair cells, spiral ganglion neuron density, fibrous tissue density, and stria vascularis blood vessel density; the presence or absence of ossification around the electrode entry was also noted. After surgery, implanted animals experienced a range of 0-55 dB of threshold shifts in the vicinity of the electrode at 6 and 16 kHz. The degree of hearing loss was significantly correlated with reduced stria vascularis vessel density and with the presence of ossification, but not with hair cell counts, spiral ganglion neuron density, or fibrosis area. After 10 weeks of stimulation, 67% of implanted, stimulated animals had more than 10 dB of additional threshold shift at 1 kHz, compared to 17% of implanted, non-stimulated animals and 0% of non-implanted animals. This 1-kHz hearing loss was not associated with changes in any of the cochlear measures

  7. The comparison of stress and marital satisfaction status of parents of hearing-impaired and normal children

    Directory of Open Access Journals (Sweden)

    Karim Gharashi

    2013-03-01

Background and Aim: Stress is the source of many problems in human life and constantly threatens people's well-being. Having a hearing-impaired child not only causes stress in parents but also affects their marital satisfaction. The purpose of this study was to compare stress and marital satisfaction between the parents of normal and hearing-impaired children. Methods: This was a causal-comparative study. Eighty parents of normal children and 80 parents of hearing-impaired children were chosen from rehabilitation centers and kindergartens in the city of Tabriz, Iran, by convenience and cluster sampling methods. All parents were asked to complete Friedrich's source-of-stress questionnaire and the Enrich marital satisfaction questionnaire. Results: Parents of hearing-impaired children endure more stress than parents of normal-hearing children (p<0.001). The marital satisfaction of hearing-impaired children's parents was also lower than that of the parents of normal-hearing children (p<0.001). Conclusion: Having a hearing-impaired child causes stress and threatens marital satisfaction. Much more attention and distinct planning are required for parents of handicapped children to reduce their stress.

  8. Auditory profiling and hearing-aid satisfaction in hearing-aid candidates

    DEFF Research Database (Denmark)

    Thorup, Nicoline; Santurette, Sébastien; Jørgensen, Søren

    2016-01-01

    by default. This study aimed at identifying clinically relevant tests that may serve as an informative addition to the audiogram and which may relate more directly to HA satisfaction than the audiogram does. METHODS: A total of 29 HI and 26 normal-hearing listeners performed tests of spectral and temporal...... their audiogram. Measures of temporal resolution or speech perception in both stationary and fluctuating noise could be relevant measures to consider in an extended auditory profile. FUNDING: The study was supported by Grosserer L.F. Foghts Fond. TRIAL REGISTRATION: The protocol was approved by the Science Ethics...

  9. Persian randomized dichotic digits test: Development and dichotic listening performance in young adults

    Directory of Open Access Journals (Sweden)

    Mohammad Ebrahim Mahdavi

    2015-02-01

Background and Aims: The dichotic listening subtest is an important component of the test battery for auditory processing assessment in both children and adults. The randomized dichotic digits test (RDDT) was created to compensate for the weak sensitivity of double digits when detecting abnormal ear asymmetry during dichotic listening. The aim of this study was the development and initial evaluation of a Persian randomized dichotic digits test. Method: Persian digits 1-10 (except the bisyllabic digit 4), uttered by a native Persian speaker, were recorded in a studio. After alignment of the intensity and temporal characteristics of the digit waveforms, lists 1 and 2 of the RDDT were produced. List 1 was administered at 55 dB HL to 50 right-handed normal-hearing individuals (with an equal sex ratio) aged 18-25 years with hearing thresholds of 15 dB HL or better at audiometric frequencies. Results: Mean (standard deviation) percent-correct scores for the right and left ears and the right-ear advantage were 94.3 (5.3), 84.8 (7.7), and 9.5 (7.0) percent, respectively. Sixty percent of the subjects showed normal results; unilateral and bilateral deficits were seen in 24 percent and 16 percent of the studied individuals, respectively. Conclusion: The Persian version of the RDDT appears comparable to the original test in its ability to detect ear asymmetry and unilateral and bilateral deficits in dichotic listening.
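The per-ear scores and right-ear advantage reported above reduce to simple arithmetic over percent-correct values. Below is a hypothetical scoring helper; the 80% cutoff and the deficit labels are purely illustrative, not the study's normative criteria.

```python
def dichotic_summary(right_pct, left_pct, cutoff=80.0):
    """Right-ear advantage (REA) and a deficit label for dichotic listening
    percent-correct scores. The 80% 'normal' cutoff is illustrative only."""
    rea = right_pct - left_pct
    low_r, low_l = right_pct < cutoff, left_pct < cutoff
    if low_r and low_l:
        label = "bilateral deficit"
    elif low_r or low_l:
        label = "unilateral deficit"
    else:
        label = "normal"
    return rea, label

# The study's group means: right 94.3%, left 84.8% -> REA of 9.5 points
rea, label = dichotic_summary(94.3, 84.8)
```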

  10. Age-related changes in auditory and cognitive abilities in elderly persons with hearing aids fitted at the initial stages of hearing loss

    Directory of Open Access Journals (Sweden)

    C. Obuchi

    2011-03-01

In this study, we investigated the relationship between the use of hearing aids at the initial stages of hearing loss and age-related changes in the auditory and cognitive abilities of elderly persons. Twelve healthy elderly persons participated in an annual auditory and cognitive longitudinal examination for three years. According to their hearing level, they were divided into 3 subgroups: the normal hearing group, the hearing loss without hearing aids group, and the hearing loss with hearing aids group. All subjects underwent 4 tests: pure-tone audiometry, a syllable intelligibility test, a dichotic listening test (DLT), and the Wechsler Adult Intelligence Scale-Revised (WAIS-R) Short Forms. Comparison between the 3 groups revealed that the hearing loss without hearing aids group showed the lowest scores on the performance tasks, in contrast to the hearing level and intelligibility results. The other groups showed no significant difference on the WAIS-R subtests. This result indicates that prescription of a hearing aid during the early stages of hearing loss is related to the retention of cognitive abilities in elderly people. However, there were no statistically significant correlations between the auditory and cognitive tasks.

  11. Recognition of Speech of Normal-hearing Individuals with Tinnitus and Hyperacusis

    Directory of Open Access Journals (Sweden)

    Hennig, Tais Regina

    2011-01-01

Introduction: Tinnitus and hyperacusis are increasingly frequent audiological symptoms that may occur in the absence of hearing loss, yet they are no less bothersome to the affected individuals. The medial olivocochlear system assists speech recognition in noise and may be connected to the presence of tinnitus and hyperacusis. Objective: To evaluate the speech recognition of normal-hearing individuals with and without complaints of tinnitus and hyperacusis, and to compare their results. Method: A descriptive, prospective, cross-sectional study in which 19 normal-hearing individuals with complaints of tinnitus and hyperacusis formed the Study Group (SG) and 23 normal-hearing individuals without audiological complaints formed the Control Group (CG). The individuals of both groups were given the List of Sentences in Portuguese test, prepared by Costa (1998), to determine the Sentence Recognition Threshold in Silence (LRSS) and the signal-to-noise (S/N) ratio. The SG also answered the Tinnitus Handicap Inventory for tinnitus analysis, and discomfort thresholds were measured to characterize hyperacusis. Results: The CG and SG presented average LRSS and S/N ratios of 7.34 dB HL and -6.77 dB, and of 7.20 dB HL and -4.89 dB, respectively. Conclusion: The normal-hearing individuals with and without audiological complaints of tinnitus and hyperacusis performed similarly in speech recognition in silence, but not when evaluated in the presence of competing noise: the SG had lower performance in this communication scenario, with a statistically significant difference.

  12. A Comparison of Linguistic Skills between Persian Cochlear Implant and Normal Hearing Children

    Directory of Open Access Journals (Sweden)

    Mohammad Rahimi

    2013-04-01

Objectives: A large number of congenitally deaf children are born annually. If not treated, deafness has destructive effects on language and speech development, educational achievement, and future occupation. This study aimed to determine the level of language skills in children with cochlear implants (CI) in comparison with normal-hearing (NH) age-mates. Methods: The Test of Language Development was administered to 30 prelingually deaf, severe-to-profound CI children between the ages of 5 and 8. The obtained scores were compared to a Persian database of scores from normally hearing children in the same age range. Results: Results indicated that, in spite of great advancements in different areas of language after implantation, CI children still lag behind their hearing age-mates in almost all aspects of language skills. Discussion: Based on the results, it is suggested that children with average or above-average cognitive skills who use a CI have the potential to produce and understand language comparably to their normally hearing peers.

  13. Salivary Cortisol Profiles of Children with Hearing Loss

    Science.gov (United States)

    Bess, Fred H.; Gustafson, Samantha J.; Corbett, Blythe A.; Lambert, E. Warren; Camarata, Stephen M.; Hornsby, Benjamin W. Y.

    2016-01-01

    Objectives: It has long been speculated that effortful listening places children with hearing loss at risk for fatigue. School-age children with hearing loss experiencing cumulative stress and listening fatigue on a daily basis might undergo dysregulation of hypothalamic-pituitary-adrenal (HPA) axis activity resulting in elevated or flattened…

  14. Assistive Technologies for Improving Communication of Hearing Impairment in the Higher Education in Panama

    Directory of Open Access Journals (Sweden)

    Lineth Alain

    2016-12-01

The ability to communicate, specifically the gift of hearing, is a necessity often taken for granted. A lack of the sense of hearing affects the intellectual and emotional development of the person who suffers from it, preventing the fluid exchange of knowledge, thoughts, and ideas that allows personal growth and development. This article stems from an interest in identifying assistive technologies that can improve communication between hearing-impaired and normal-hearing listeners in higher-education classrooms in the Republic of Panama. Information has been compiled from various primary and secondary sources highlighting the communication problems facing this group of disabled people: the situation of hearing impairment, relevant laws and organizations, and the reality of the higher-education system. Finally, the article discusses the Information and Communication Technologies (ICTs) that can serve as technological support for improving classroom communication in higher education between normal-hearing and deaf people.

  15. Listeners as Authors in Preaching

    DEFF Research Database (Denmark)

    Gaarden, Marianne; Lorensen, Marlene Ringgaard

    2013-01-01

    Based on new empirical studies this essay explores how churchgoers listen to sermons in regard to the theological notion that “faith comes from hearing.” Through Bakhtinian theories presented by Lorensen and empirical findings presented by Gaarden, the apparently masked agency in preaching......) create new meaning and understanding. It is not a room that the listener or the preacher can control or occupy, but a room in which both engage....

  16. The relationship of speech intelligibility with hearing sensitivity, cognition, and perceived hearing difficulties varies for different speech perception tests

    Science.gov (United States)

    Heinrich, Antje; Henshaw, Helen; Ferguson, Melanie A.

    2015-01-01

    Listeners vary in their ability to understand speech in noisy environments. Hearing sensitivity, as measured by pure-tone audiometry, can only partly explain these results, and cognition has emerged as another key concept. Although cognition relates to speech perception, the exact nature of the relationship remains to be fully understood. This study investigates how different aspects of cognition, particularly working memory and attention, relate to speech intelligibility for various tests. Perceptual accuracy of speech perception represents just one aspect of functioning in a listening environment. Activity and participation limits imposed by hearing loss, in addition to the demands of a listening environment, are also important and may be better captured by self-report questionnaires. Understanding how speech perception relates to self-reported aspects of listening forms the second focus of the study. Forty-four listeners aged between 50 and 74 years with mild sensorineural hearing loss were tested on speech perception tests differing in complexity from low (phoneme discrimination in quiet), to medium (digit triplet perception in speech-shaped noise) to high (sentence perception in modulated noise); cognitive tests of attention, memory, and non-verbal intelligence quotient; and self-report questionnaires of general health-related and hearing-specific quality of life. Hearing sensitivity and cognition related to intelligibility differently depending on the speech test: neither was important for phoneme discrimination, hearing sensitivity alone was important for digit triplet perception, and hearing and cognition together played a role in sentence perception. Self-reported aspects of auditory functioning were correlated with speech intelligibility to different degrees, with digit triplets in noise showing the richest pattern. 
The results suggest that intelligibility tests can vary in their auditory and cognitive demands and their sensitivity to the challenges that

  17. Auditory, Visual, and Auditory-Visual Perception of Emotions by Individuals with Cochlear Implants, Hearing Aids, and Normal Hearing

    Science.gov (United States)

    Most, Tova; Aviner, Chen

    2009-01-01

    This study evaluated the benefits of cochlear implant (CI) with regard to emotion perception of participants differing in their age of implantation, in comparison to hearing aid users and adolescents with normal hearing (NH). Emotion perception was examined by having the participants identify happiness, anger, surprise, sadness, fear, and disgust.…

  18. Associations between speech understanding and auditory and visual tests of verbal working memory: effects of linguistic complexity, task, age, and hearing loss

    Science.gov (United States)

    Smith, Sherri L.; Pichora-Fuller, M. Kathleen

    2015-01-01

    Listeners with hearing loss commonly report having difficulty understanding speech, particularly in noisy environments. Their difficulties could be due to auditory and cognitive processing problems. Performance on speech-in-noise tests has been correlated with reading working memory span (RWMS), a measure often chosen to avoid the effects of hearing loss. If the goal is to assess the cognitive consequences of listeners’ auditory processing abilities, however, then listening working memory span (LWMS) could be a more informative measure. Some studies have examined the effects of different degrees and types of masking on working memory, but less is known about the demands placed on working memory depending on the linguistic complexity of the target speech or the task used to measure speech understanding in listeners with hearing loss. Compared to RWMS, LWMS measures using different speech targets and maskers may provide a more ecologically valid approach. To examine the contributions of RWMS and LWMS to speech understanding, we administered two working memory measures (a traditional RWMS measure and a new LWMS measure), and a battery of tests varying in the linguistic complexity of the speech materials, the presence of babble masking, and the task. Participants were a group of younger listeners with normal hearing and two groups of older listeners with hearing loss (n = 24 per group). There was a significant group difference and a wider range in performance on LWMS than on RWMS. There was a significant correlation between both working memory measures only for the oldest listeners with hearing loss. Notably, there were only few significant correlations among the working memory and speech understanding measures. These findings suggest that working memory measures reflect individual differences that are distinct from those tapped by these measures of speech understanding. PMID:26441769

  19. Safety of the HyperSound® Audio System in subjects with normal hearing

    Directory of Open Access Journals (Sweden)

    Ritvik P. Mehta

    2015-11-01

The objective of the study was to assess the safety of the HyperSound® Audio System (HSS), a novel audio system using ultrasound technology, in normal hearing subjects under normal use conditions; we considered a pre-exposure and post-exposure test design. We investigated primary and secondary outcome measures: i) temporary threshold shift (TTS), defined as >10 dB shift in pure tone air conduction thresholds and/or a decrement in distortion product otoacoustic emissions (DPOAEs) >10 dB at two or more frequencies; ii) presence of new-onset otologic symptoms after exposure. Twenty adult subjects with normal hearing underwent a pre-exposure assessment (pure tone air conduction audiometry, tympanometry, DPOAEs and an otologic symptoms questionnaire), followed by exposure to a 2-h movie with sound delivered through the HSS emitter, followed by a post-exposure assessment. No TTS or new-onset otological symptoms were identified. HSS demonstrates excellent safety in normal hearing subjects under normal use conditions.

  20. Safety of the HyperSound® Audio System in Subjects with Normal Hearing.

    Science.gov (United States)

    Mehta, Ritvik P; Mattson, Sara L; Kappus, Brian A; Seitzman, Robin L

    2015-06-11

    The objective of the study was to assess the safety of the HyperSound® Audio System (HSS), a novel audio system using ultrasound technology, in normal hearing subjects under normal use conditions; we considered pre-exposure and post-exposure test design. We investigated primary and secondary outcome measures: i) temporary threshold shift (TTS), defined as >10 dB shift in pure tone air conduction thresholds and/or a decrement in distortion product otoacoustic emissions (DPOAEs) >10 dB at two or more frequencies; ii) presence of new-onset otologic symptoms after exposure. Twenty adult subjects with normal hearing underwent a pre-exposure assessment (pure tone air conduction audiometry, tympanometry, DPOAEs and otologic symptoms questionnaire) followed by exposure to a 2-h movie with sound delivered through the HSS emitter followed by a post-exposure assessment. No TTS or new-onset otological symptoms were identified. HSS demonstrates excellent safety in normal hearing subjects under normal use conditions.
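
    The TTS criterion used in both records above (a shift of more than 10 dB at two or more frequencies) can be sketched as a simple check. This is an illustrative sketch only; the function name and the threshold values in the example are hypothetical:

```python
# Sketch of the TTS criterion described above, assuming pure tone
# thresholds are given as dicts mapping frequency (Hz) -> dB HL.
# A temporary threshold shift (TTS) is flagged when post-exposure
# thresholds worsen by more than 10 dB at two or more frequencies.

def has_tts(pre, post, shift_db=10, min_freqs=2):
    """Return True if `post` worsens by more than `shift_db` relative
    to `pre` at `min_freqs` or more frequencies."""
    shifted = [f for f in pre if post.get(f, pre[f]) - pre[f] > shift_db]
    return len(shifted) >= min_freqs

# Hypothetical example: a 15 dB shift at 4000 and 6000 Hz meets the criterion.
pre = {1000: 5, 2000: 5, 4000: 10, 6000: 10}
post = {1000: 5, 2000: 10, 4000: 25, 6000: 25}
print(has_tts(pre, post))  # -> True
```

    The same check applies unchanged to DPOAE decrements, since the criterion in the records is stated identically for both measures.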

  1. Comparison of Reading Comprehension Skill of Students with Severe to Profound Hearing Impairment from Second up to Fifth Grade of Exceptional Schools with Normal Hearing Students

    Directory of Open Access Journals (Sweden)

    Maryam Jalalipour

    2016-03-01

    Full Text Available Background: Reading is known as one of the most important learning tools. Research results have consistently shown that even a mild hearing impairment can affect reading skills. Due to the reported differences in reading comprehension skills between hearing-impaired students and their normal-hearing peers, this research was conducted to compare the two groups, and to examine any changes in the reading ability of the hearing-impaired group during elementary school. Methods: This cross-sectional (descriptive-analytic) study compared the reading comprehension ability of 91 students with severe and profound hearing impairment (33 girls and 58 boys) from the 2nd up to the 5th grade of exceptional schools with that of 50 2nd-grade normal-hearing students in Ahvaz, Iran. The first section of the Diagnostic Reading Test (Shirazi-Nilipour, 2004) was used. The mean reading scores of the hearing-impaired students in each grade were compared with the control group using the Mann-Whitney test in SPSS 13. Results: There was a significant difference between the average scores of hearing-impaired students (boys and girls) in the 2nd to 5th grade and normal-hearing students in the 2nd grade (P<0.001). Reading comprehension scores of students with hearing impairment in higher grades improved slightly, but remained lower than those of the normal-hearing students in the 2nd grade. Conclusion: Reading comprehension of students with significant hearing impairment near the end of elementary school remains weaker than that of normal-hearing students in the second grade. It is therefore essential for professionals working in the education and rehabilitation of these students to find and resolve the underlying reasons for this condition.

  2. The Impact of Frequency Modulation (FM) System Use and Caregiver Training on Young Children with Hearing Impairment in a Noisy Listening Environment

    Science.gov (United States)

    Nguyen, Huong Thi Thien

    2011-01-01

    The two objectives of this single-subject study were to assess how FM system use impacts parent-child interaction in a noisy listening environment, and how parent/caregiver training affects the interaction between parent/caregiver and child. Two 5-year-old children with hearing loss and their parent/caregiver participated. Experiment 1 was…

  3. Characteristics of the tinnitus and hyperacusis in normal hearing individuals

    Directory of Open Access Journals (Sweden)

    Daila Urnau

    2011-10-01

    Full Text Available Introduction: Tinnitus has become a common otological complaint, and hyperacusis is another complaint frequently found in individuals with tinnitus. Objective: To analyze the characteristics of tinnitus and hyperacusis in normal hearing individuals with both complaints. Method: In this cross-sectional study, 25 normal hearing individuals who complained of hyperacusis and tinnitus were surveyed. They were questioned about the location and type of the tinnitus. The tinnitus was evaluated using the Brazilian Tinnitus Handicap Inventory and acuphenometry. A questionnaire on hyperacusis covered aspects such as sounds considered uncomfortable, sensations in the presence of such sounds, and difficulty understanding speech in noise. Results: Of the 25 individuals, 64% were women and 36% men. Regarding tinnitus, 84% reported bilateral location and 80% a high pitch. The most common degree found was slight (44%). The women presented a statistically higher tinnitus degree than the men. High-intensity sounds, and reactions of irritation, anxiety, and the need to move away from the sound, were the most mentioned. Of the individuals analyzed, 68% reported difficulty understanding speech in noise and 12% reported using hearing protection. The frequencies most often matched in acuphenometry were 6 and 8 kHz. Conclusion: Normal hearing individuals who complain of tinnitus and hyperacusis present mainly high-pitched tinnitus, located bilaterally and of slight degree. The sounds considered uncomfortable were high-intensity ones, and the most cited reaction to sound was irritation. Difficulty understanding speech in noise was reported by most of the individuals.

  4. Musical background not associated with self-perceived hearing performance or speech perception in postlingual cochlear-implant users

    NARCIS (Netherlands)

    Fuller, Christina; Free, Rolien; Maat, Bert; Baskent, Deniz

    In normal-hearing listeners, musical background has been observed to change the sound representation in the auditory system and produce enhanced performance in some speech perception tests. Based on these observations, it has been hypothesized that musical background can influence sound and speech

  5. Spatial Release From Masking in Children: Effects of Simulated Unilateral Hearing Loss.

    Science.gov (United States)

    Corbin, Nicole E; Buss, Emily; Leibold, Lori J

    The purpose of this study was twofold: (1) to determine the effect of an acute simulated unilateral hearing loss on children's spatial release from masking in two-talker speech and speech-shaped noise, and (2) to develop a procedure to be used in future studies that will assess spatial release from masking in children who have permanent unilateral hearing loss. There were three main predictions. First, spatial release from masking was expected to be larger in two-talker speech than in speech-shaped noise. Second, simulated unilateral hearing loss was expected to worsen performance in all listening conditions, but particularly in the spatially separated two-talker speech masker. Third, spatial release from masking was expected to be smaller for children than for adults in the two-talker masker. Participants were 12 children (8.7 to 10.9 years) and 11 adults (18.5 to 30.4 years) with normal bilateral hearing. Thresholds for 50%-correct recognition of Bamford-Kowal-Bench sentences were measured adaptively in continuous two-talker speech or speech-shaped noise. Target sentences were always presented from a loudspeaker at 0° azimuth. The masker stimulus was either co-located with the target or spatially separated to +90° or -90° azimuth. Spatial release from masking was quantified as the difference between thresholds obtained when the target and masker were co-located and thresholds obtained when the masker was presented from +90° or -90° azimuth. Testing was completed both with and without a moderate simulated unilateral hearing loss, created with a foam earplug and supra-aural earmuff. A repeated-measures design was used to compare performance between children and adults, and performance in the no-plug and simulated-unilateral-hearing-loss conditions. All listeners benefited from spatial separation of target and masker stimuli on the azimuth plane in the no-plug listening conditions; this benefit was larger in two-talker speech than in speech-shaped noise. In the

  6. The effect of music on auditory perception in cochlear-implant users and normal-hearing listeners

    NARCIS (Netherlands)

    Fuller, Christina Diechina

    2016-01-01

    Cochlear implants (CIs) are auditory prostheses for severely deaf people that do not benefit from conventional hearing aids. Speech perception is reasonably good with CIs; other signals such as music perception are challenging. First, the perception of music and music related perception in CI users

  7. Cognitive skills and the effect of noise on perceived effort in employees with aided hearing impairment and normal hearing

    Directory of Open Access Journals (Sweden)

    Håkan Hua

    2014-01-01

    Full Text Available The aim of the following study was to examine the relationship between working memory capacity (WMC), executive functions (EFs), and perceived effort (PE) after completing a work-related task in quiet and in noise in employees with aided hearing impairment (HI) and normal hearing. The study sample consisted of 20 hearing-impaired and 20 normally hearing participants. Measures of hearing ability, WMC, and EFs were tested prior to performing a work-related task in quiet and in simulated traffic noise. PE of the work-related task was also measured. Analysis of variance was used to analyze within- and between-group differences in cognitive skills, performance on the work-related task, and PE. The presence of noise yielded a significantly higher PE for both groups. However, no significant group differences were observed in WMC, EFs, PE, or performance on the work-related task. Interestingly, significant negative correlations were found only between PE in the noise condition and the ability to update information, for both groups. In summary, noise generates a significantly higher PE and brings explicit processing capacity into play, irrespective of hearing status. This suggests that increased PE also involves other factors, such as the type of task to be performed, performance in the cognitive skills required to solve the task at hand, and whether noise is present. We therefore suggest that hearing care give special consideration to the individual's prerequisites on these factors in the labor market.

  8. The relationship of speech intelligibility with hearing sensitivity, cognition, and perceived hearing difficulties varies for different speech perception tests

    Directory of Open Access Journals (Sweden)

    Antje eHeinrich

    2015-06-01

    Full Text Available Listeners vary in their ability to understand speech in noisy environments. Hearing sensitivity, as measured by pure-tone audiometry, can only partly explain these results, and cognition has emerged as another key concept. Although cognition relates to speech perception, the exact nature of the relationship remains to be fully understood. This study investigates how different aspects of cognition, particularly working memory and attention, relate to speech intelligibility for various tests. Perceptual accuracy of speech perception represents just one aspect of functioning in a listening environment. Activity and participation limits imposed by hearing loss, in addition to the demands of a listening environment, are also important and may be better captured by self-report questionnaires. Understanding how speech perception relates to self-reported aspects of listening forms the second focus of the study. Forty-four listeners aged between 50 and 74 years with mild SNHL were tested on speech perception tests differing in complexity from low (phoneme discrimination in quiet), to medium (digit triplet perception in speech-shaped noise), to high (sentence perception in modulated noise); cognitive tests of attention, memory, and nonverbal IQ; and self-report questionnaires of general health-related and hearing-specific quality of life. Hearing sensitivity and cognition related to intelligibility differently depending on the speech test: neither was important for phoneme discrimination, hearing sensitivity alone was important for digit triplet perception, and hearing and cognition together played a role in sentence perception. Self-reported aspects of auditory functioning were correlated with speech intelligibility to different degrees, with digit triplets in noise showing the richest pattern. The results suggest that intelligibility tests can vary in their auditory and cognitive demands and their sensitivity to the challenges that auditory environments pose on

  9. Processing of spatial sounds in the impaired auditory system

    DEFF Research Database (Denmark)

    Arweiler, Iris

    Understanding speech in complex acoustic environments presents a challenge for most hearing-impaired listeners. In conditions where normal-hearing listeners effortlessly utilize spatial cues to improve speech intelligibility, hearing-impaired listeners often struggle. In this thesis, the influence … with an intelligibility-weighted “efficiency factor”, which revealed that the spectral characteristics of the ERs caused the reduced benefit. Hearing-impaired listeners were able to utilize the ER energy as effectively as normal-hearing listeners, most likely because binaural processing was not required … implications for speech perception models and the development of compensation strategies in future generations of hearing instruments.

  10. Toward a Differential Diagnosis of Hidden Hearing Loss in Humans.

    Directory of Open Access Journals (Sweden)

    M Charles Liberman

    Full Text Available Recent work suggests that hair cells are not the most vulnerable elements in the inner ear; rather, it is the synapses between hair cells and cochlear nerve terminals that degenerate first in the aging or noise-exposed ear. This primary neural degeneration does not affect hearing thresholds, but likely contributes to problems understanding speech in difficult listening environments, and may be important in the generation of tinnitus and/or hyperacusis. To look for signs of cochlear synaptopathy in humans, we recruited college students and divided them into low-risk and high-risk groups based on self-report of noise exposure and use of hearing protection. Cochlear function was assessed by otoacoustic emissions and click-evoked electrocochleography; hearing was assessed by behavioral audiometry and word recognition with or without noise or time compression and reverberation. Both groups had normal thresholds at standard audiometric frequencies; however, the high-risk group showed significant threshold elevation at high frequencies (10-16 kHz), consistent with early stages of noise damage. Electrocochleography showed a significant difference in the ratio between the waveform peaks generated by hair cells (summating potential, SP) vs. cochlear neurons (action potential, AP), i.e., the SP/AP ratio, consistent with selective neural loss. The high-risk group also showed significantly poorer performance on word recognition in noise or with time compression and reverberation, and reported heightened reactions to sound consistent with hyperacusis. These results suggest that the SP/AP ratio may be useful in the diagnosis of "hidden hearing loss" and that, as suggested by animal models, the noise-induced loss of cochlear nerve synapses leads to deficits in hearing abilities in difficult listening situations, despite the presence of normal thresholds at standard audiometric frequencies.
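
    As a minimal illustration of the SP/AP metric discussed above, the ratio is simply the summating-potential peak amplitude divided by the action-potential peak amplitude. The amplitude values below are hypothetical, and no diagnostic cutoff is assumed, since the record does not state one:

```python
# Illustrative computation of the SP/AP ratio from electrocochleography
# peak amplitudes. Values (in microvolts) are hypothetical examples only.

def sp_ap_ratio(sp_uv, ap_uv):
    """SP/AP ratio from summating-potential and action-potential peaks."""
    if ap_uv == 0:
        raise ValueError("AP amplitude must be nonzero")
    return sp_uv / ap_uv

# Selective neural loss reduces AP more than SP, so the ratio grows.
ratio = sp_ap_ratio(sp_uv=0.15, ap_uv=0.30)
print(round(ratio, 2))  # -> 0.5
```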

  11. The cerebral functional location in normal subjects during listening to a story in Chinese, English or Japanese

    International Nuclear Information System (INIS)

    Sun Da; Zhan Hongwei; Xu Wei; Liu Hongbiao; Bao Chengkan

    2004-01-01

    Objectives: To compare cerebral functional localization in normal subjects while listening to a story in Chinese (native language), English (learned language), or Japanese (unfamiliar language). Methods: Groups of 9, 14, and 7 normal young students were asked to listen to a tape for 20 minutes carrying, respectively, an emotional story in Chinese, an account of the life of Einstein in English, and a dialogue in unfamiliar Japanese. They were also asked to pay special attention to the names of the characters, the time, and the place while listening to the Chinese or English story. 99mTc-ECD was administered during the first 3 minutes of listening. Brain imaging was performed 30-60 minutes after the tracer was administered, and the results were compared with each subject's brain imaging at rest. Results: While listening to the story in Chinese, learned English, or unfamiliar Japanese, the auditory association cortex in the bilateral superior temporal and parts of the middle temporal lobes was activated. The inferior frontal and/or medial frontal lobes were also activated, especially when subjects listened to a familiar language (Chinese or English) and were asked to remember the plot of the story. Compared with listening to English, activity in the right frontal lobe was higher than in the left while listening to Chinese. While listening to unfamiliar Japanese, the frontal lobes were also widely activated. Conclusions: Our results show that, besides the auditory association cortex in the superior and middle temporal lobes, language activates the left inferior frontal cortex (Broca's area), as well as the right and left frontal eye fields, middle temporal, and superior frontal lobes. These frontal regions have a crucial role in the decoding of familiar spoken language, and the attempt to decode unfamiliar spoken languages activates more auditory association areas. The left hemisphere is the dominant hemisphere for language. But in our study, right temporal and frontal lobes were activated more

  12. Self-esteem and social well-being of children with cochlear implant compared to normal-hearing children

    DEFF Research Database (Denmark)

    Percy-Smith, L.; Caye-Thomasen, P.; Gudman, M.

    2008-01-01

    Objective: The purpose of this study was to make a quantitative comparison of parameters of self-esteem and social well-being between children with cochlear implants and normal-hearing children. Material and methods: Data were obtained from 164 children with cochlear implant (CI) and 2169 normal-hearing children (NH). Parental questionnaires, used in a national survey assessing the self-esteem and well-being of normal-hearing children, were applied to the cochlear implanted group, in order to allow direct comparisons. Results: The children in the CI group rated significantly higher on questions about well… overall self-esteem or number of friends. The two groups of children scored similarly on being confident, independent, social, not worried and happy. Conclusion: Children with cochlear implant score equal to or better than their normal-hearing peers on matters of self-esteem and social well-being. (C…

  13. Auditory, Visual, and Auditory-Visual Perceptions of Emotions by Young Children with Hearing Loss versus Children with Normal Hearing

    Science.gov (United States)

    Most, Tova; Michaelis, Hilit

    2012-01-01

    Purpose: This study aimed to investigate the effect of hearing loss (HL) on emotion-perception ability among young children with and without HL. Method: A total of 26 children 4.0-6.6 years of age with prelingual sensory-neural HL ranging from moderate to profound and 14 children with normal hearing (NH) participated. They were asked to identify…

  14. Hearing Aids

    Science.gov (United States)

    ... listen to TV or your music player, play videogames, or use your phone. Talk to your audiologist ... your audiologist several times, but it's worth the benefit of being able to hear your friends and ...

  15. Changes in auditory perceptions and cortex resulting from hearing recovery after extended congenital unilateral hearing loss

    Directory of Open Access Journals (Sweden)

    Jill B Firszt

    2013-12-01

    Full Text Available Monaural hearing induces auditory system reorganization. Imbalanced input also degrades time-intensity cues for sound localization and signal segregation for listening in noise. While there have been studies of bilateral auditory deprivation and later hearing restoration (e.g. cochlear implants), less is known about unilateral auditory deprivation and subsequent hearing improvement. We investigated effects of long-term congenital unilateral hearing loss on localization, speech understanding, and cortical organization following hearing recovery. Hearing in the congenitally affected ear of a 41-year-old female improved significantly after stapedotomy and reconstruction. Pre-operative hearing threshold levels showed unilateral, mixed, moderately-severe to profound hearing loss. The contralateral ear had hearing threshold levels within normal limits. Testing was completed prior to, and three and nine months after surgery. Measurements were of sound localization with intensity-roved stimuli and speech recognition in various noise conditions. We also evoked magnetic resonance signals with monaural stimulation to the unaffected ear. Activation magnitudes were determined in core, belt, and parabelt auditory cortex regions via an interrupted single event design. Hearing improvement following 40 years of congenital unilateral hearing loss resulted in substantially improved sound localization and speech recognition in noise. Auditory cortex also reorganized. Contralateral auditory cortex responses were increased after hearing recovery and the extent of activated cortex was bilateral, including a greater portion of the posterior superior temporal plane. Thus, prolonged predominant monaural stimulation did not prevent auditory system changes consequent to restored binaural hearing. 
Results support future research of unilateral auditory deprivation effects and plasticity, with consideration for length of deprivation, age at hearing correction, degree and type

  16. Performance-intensity functions of Mandarin word recognition tests in noise: test dialect and listener language effects.

    Science.gov (United States)

    Liu, Danzheng; Shi, Lu-Feng

    2013-06-01

    This study established the performance-intensity function for Beijing and Taiwan Mandarin bisyllabic word recognition tests in noise in native speakers of Wu Chinese. Effects of the test dialect and listeners' first language on psychometric variables (i.e., slope and 50%-correct threshold) were analyzed. Thirty-two normal-hearing Wu-speaking adults who used Mandarin since early childhood were compared to 16 native Mandarin-speaking adults. Both Beijing and Taiwan bisyllabic word recognition tests were presented at 8 signal-to-noise ratios (SNRs) in 4-dB steps (-12 dB to +16 dB). At each SNR, a half list (25 words) was presented in speech-spectrum noise to listeners' right ear. The order of the test, SNR, and half list was randomized across listeners. Listeners responded orally and in writing. Overall, the Wu-speaking listeners performed comparably to the Mandarin-speaking listeners on both tests. Compared to the Taiwan test, the Beijing test yielded a significantly lower threshold for both the Mandarin- and Wu-speaking listeners, as well as a significantly steeper slope for the Wu-speaking listeners. Both Mandarin tests can be used to evaluate Wu-speaking listeners. Of the 2, the Taiwan Mandarin test results in more comparable functions across listener groups. Differences in the performance-intensity function between listener groups and between tests indicate a first language and dialectal effect, respectively.
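
    The two psychometric variables analyzed above (slope and 50%-correct threshold) can be estimated from measured percent-correct scores. This sketch uses simple linear interpolation between the two SNRs bracketing 50% correct; the data values are hypothetical, and the study itself fit full performance-intensity functions rather than interpolating:

```python
# Minimal sketch: estimate the 50%-correct threshold and the local slope
# of a performance-intensity function from word scores measured at a set
# of SNRs, by linear interpolation at the first crossing of 50%.

def threshold_and_slope(snrs, pct_correct, target=50.0):
    """snrs ascending (dB); pct_correct in %. Returns (threshold_dB,
    slope_pct_per_dB) at the first crossing of `target`."""
    points = list(zip(snrs, pct_correct))
    for (s0, p0), (s1, p1) in zip(points, points[1:]):
        if p0 <= target <= p1:
            slope = (p1 - p0) / (s1 - s0)
            return s0 + (target - p0) / slope, slope
    raise ValueError("function never crosses the target level")

# Hypothetical scores at the 8 SNRs used in the study (-12 to +16 dB).
snrs = [-12, -8, -4, 0, 4, 8, 12, 16]
pct = [2, 10, 30, 60, 85, 95, 98, 99]
thr, slope = threshold_and_slope(snrs, pct)
print(round(thr, 2), round(slope, 2))  # -> -1.33 7.5
```

    A lower threshold or a steeper slope, as reported for the Beijing test, indicates better or more sharply growing performance with increasing SNR.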

  17. Binaural Interference and the Effects of Age and Hearing Loss.

    Science.gov (United States)

    Mussoi, Bruna S S; Bentler, Ruth A

    2017-01-01

    The existence of binaural interference, defined here as poorer speech recognition with both ears than with the better ear alone, is well documented. Studies have suggested that its prevalence may be higher in the elderly population. However, no study to date has explored binaural interference in groups of younger and older adults in conditions that favor binaural processing (i.e., in spatially separated noise). Also, the effects of hearing loss have not been studied. To examine binaural interference through speech perception tests, in groups of younger adults with normal hearing, older adults with normal hearing for their age, and older adults with hearing loss. A cross-sectional study. Thirty-three participants with symmetric thresholds were recruited from the University of Iowa community. Participants were grouped as follows: younger with normal hearing (18-28 yr, n = 12), older with normal hearing for their age (73-87 yr, n = 9), and older with hearing loss (78-94 yr, n = 12). Prior noise exposure was ruled out. The Connected Speech Test (CST) and Hearing in Noise Test (HINT) were administered to all participants bilaterally, and to each ear separately. Test materials were presented in the sound field with speech at 0° azimuth and the noise at 180°. The Dichotic Digits Test (DDT) was administered to all participants through earphones. Hearing aids were not used during testing. Group results were compared with repeated-measures and one-way analyses of variance, as appropriate. Within-subject analyses using pre-established critical differences for each test were also performed. The HINT revealed no effect of condition (individual ear versus bilateral presentation) using group analysis, although within-subject analysis showed that 27% of the participants had binaural interference (18% had binaural advantage). On the CST, there was significant binaural advantage across all groups with group data analysis, as well as for 12% of the participants at each of the two

  18. Acceptance of background noise, working memory capacity, and auditory evoked potentials in subjects with normal hearing.

    Science.gov (United States)

    Brännström, K Jonas; Zunic, Edita; Borovac, Aida; Ibertsson, Tina

    2012-01-01

    The acceptable noise level (ANL) test is a method for quantifying the amount of background noise that subjects accept when listening to speech. Large variations in ANL have been seen between normal-hearing subjects and between studies of normal-hearing subjects, but few explanatory variables have been identified. To explore a possible relationship between a Swedish version of the ANL test, working memory capacity (WMC), and auditory evoked potentials (AEPs). ANL, WMC, and AEP were tested in a counterbalanced order across subjects. Twenty-one normal-hearing subjects participated in the study (14 females and 7 males; aged 20-39 yr with an average of 25.7 yr). Reported data consists of age, pure-tone average (PTA), most comfortable level (MCL), background noise level (BNL), ANL (i.e., MCL - BNL), AEP latencies, AEP amplitudes, and WMC. Spearman's rank correlation coefficient was calculated between the collected variables to investigate associations. A principal component analysis (PCA) with Varimax rotation was conducted on the collected variables to explore underlying factors and estimate interactions between the tested variables. Subjects were also pooled into two groups depending on their results on the WMC test, one group with a score lower than the average and one with a score higher than the average. Comparisons between these two groups were made using the Mann-Whitney U-test with Bonferroni correction for multiple comparisons. A negative association was found between ANL and WMC but not between AEP and ANL or WMC. Furthermore, ANL is derived from MCL and BNL, and a significant positive association was found between BNL and WMC. However, no significant associations were seen between AEP latencies and amplitudes and the demographic variables, MCL, and BNL. The PCA identified two underlying factors: One that contained MCL, BNL, ANL, and WMC and another that contained latency for wave Na and amplitudes for waves V and Na-Pa. 
Using the variables in the first factor
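
    The ANL arithmetic described above (ANL = MCL − BNL) is straightforward; the levels in the example below are hypothetical:

```python
# The acceptable noise level (ANL) as defined in the record above:
# the difference between the most comfortable listening level (MCL)
# for speech and the highest acceptable background noise level (BNL).
# A lower ANL means the listener accepts more background noise.

def acceptable_noise_level(mcl_db, bnl_db):
    """ANL in dB, computed as MCL - BNL."""
    return mcl_db - bnl_db

# Hypothetical example: speech most comfortable at 55 dB HL, highest
# acceptable background noise 48 dB HL.
print(acceptable_noise_level(55, 48))  # -> 7
```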

  19. Adolescents' risky MP3-player listening and its psychosocial correlates

    NARCIS (Netherlands)

    Vogel, I.; Brug, J.; Ploeg, C.P.B. van der; Raat, H.

    2011-01-01

    Analogue to occupational noise-induced hearing loss, MP3-induced hearing loss may be evolving into a significant social and public health problem. To inform prevention strategies and interventions, this study investigated correlates of adolescents' risky MP3-player listening behavior primarily

  20. Spectral Ripple Discrimination in Normal Hearing Infants

    Science.gov (United States)

    Horn, David L.; Won, Jong Ho; Rubinstein, Jay T.; Werner, Lynne A.

    2016-01-01

    Objectives Spectral resolution is a correlate of open-set speech understanding in post-lingually deaf adults as well as pre-lingually deaf children who use cochlear implants (CIs). In order to apply measures of spectral resolution to assess device efficacy in younger CI users, it is necessary to understand how spectral resolution develops in normal-hearing (NH) children. In this study, spectral ripple discrimination (SRD) was used to measure listeners’ sensitivity to a shift in phase of the spectral envelope of a broadband noise. Both resolution of peak to peak location (frequency resolution) and peak to trough intensity (across-channel intensity resolution) are required for SRD. Design SRD was measured as the highest ripple density (in ripples per octave) for which a listener could discriminate a 90-degree shift in phase of the sinusoidally modulated amplitude spectrum. A 2 × 3 between-subjects design was used to assess the effects of age (7-month-old infants versus adults) and ripple peak/trough “depth” (10, 13, and 20 dB) on SRD in normal hearing listeners (Experiment 1). In Experiment 2, SRD thresholds in the same age groups were compared using a task in which ripple starting phases were randomized across trials to obscure within-channel intensity cues. In Experiment 3, the randomized starting phase method was used to measure SRD as a function of age (3-month-old infants, 7-month-old infants, and young adults) and ripple depth (10 and 20 dB, in a repeated-measures design). Results In Experiment 1, there was a significant interaction between age and ripple depth. The infant SRDs were significantly poorer than the adult SRDs at 10 and 13 dB ripple depths but adult-like at 20 dB depth. This result is consistent with immature across-channel intensity resolution. In contrast, the trajectory of SRD as a function of depth was steeper for infants than adults, suggesting that frequency resolution was better in infants than adults. However, in Experiment 2 infant performance was
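
    The ripple stimulus described above can be sketched as a sinusoidal level modulation across log2 frequency: `density` is in ripples per octave, and the discrimination target carries a 90-degree phase shift of that modulation. The reference frequency, test frequency, and depth below are hypothetical illustration values:

```python
import math

# Sketch of the spectral-ripple level envelope: a sinusoidal modulation
# of level (in dB) across log2 frequency. A 90-degree phase shift of
# this envelope is what the SRD task asks listeners to discriminate.

def ripple_level_db(freq_hz, density, depth_db, phase_rad=0.0, f0=350.0):
    """Level offset (dB) of the rippled spectrum at freq_hz.
    density: ripples per octave; depth_db: peak-to-trough depth."""
    return (depth_db / 2) * math.sin(
        2 * math.pi * density * math.log2(freq_hz / f0) + phase_rad)

# Evaluate at 1400 Hz (two octaves above the hypothetical f0 = 350 Hz).
standard = ripple_level_db(1400, density=1.0, depth_db=20)
shifted = ripple_level_db(1400, density=1.0, depth_db=20,
                          phase_rad=math.pi / 2)  # 90-degree shift
# Adding 0.0 normalizes a possible -0.0 from floating-point rounding.
print(round(standard, 1) + 0.0, round(shifted, 1))  # -> 0.0 10.0
```

    At this frequency the standard envelope sits at its zero crossing while the phase-shifted envelope sits at its peak, which is the cue the task relies on.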

  1. The effects of limited bandwidth and noise on verbal processing time and word recall in normal-hearing children.

    Science.gov (United States)

    McCreery, Ryan W; Stelmachowicz, Patricia G

    2013-09-01

    Understanding speech in acoustically degraded environments can place significant cognitive demands on school-age children who are developing the cognitive and linguistic skills needed to support this process. Previous studies suggest that speech understanding, word learning, and academic performance can be negatively impacted by background noise, but the effect of limited audibility on cognitive processes in children has not been directly studied. The aim of the present study was to evaluate the impact of limited audibility on speech understanding and working memory tasks in school-age children with normal hearing. Seventeen children with normal hearing between 6 and 12 years of age participated in the present study. Repetition of nonword consonant-vowel-consonant stimuli was measured under conditions with combinations of two different signal to noise ratios (SNRs; 3 and 9 dB) and two low-pass filter settings (3.2 and 5.6 kHz). Verbal processing time was calculated based on the time from the onset of the stimulus to the onset of the child's response. Monosyllabic word repetition and recall were also measured in conditions with a full bandwidth and 5.6 kHz low-pass cutoff. Nonword repetition scores decreased as audibility decreased. Verbal processing time increased as audibility decreased, consistent with predictions based on increased listening effort. Although monosyllabic word repetition did not vary between the full bandwidth and 5.6 kHz low-pass filter condition, recall was significantly poorer in the condition with limited bandwidth (low pass at 5.6 kHz). Age and expressive language scores predicted performance on word recall tasks, but did not predict nonword repetition accuracy or verbal processing time. Decreased audibility was associated with reduced accuracy for nonword repetition and increased verbal processing time in children with normal hearing. Deficits in free recall were observed even under conditions where word repetition was not affected

  2. Loud music listening.

    Science.gov (United States)

    Petrescu, Nicolae

    2008-07-01

    Over the past four decades, there has been increasing interest in the effects of music listening on hearing. The purpose of this paper is to review published studies that detail the noise levels, the potential effects (e.g. noise-induced hearing loss), and the perceptions of those affected by music exposure in occupational and non-occupational settings. The review employed Medline, PubMed, PsychINFO, and the World Wide Web to find relevant studies in the scientific literature. Considered in this review are 43 studies concerning the currently most significant occupational sources of high-intensity music: rock and pop music playing and employment at music venues, as well as the most significant sources of non-occupational high-intensity music: concerts, discotheques (clubs), and personal music players. Although all of the activities listed above have the potential for hearing damage, the most serious threat to hearing comes from prolonged exposures to amplified live music (concerts). The review concludes that more research is needed to clarify the hearing loss risks of music exposure from personal music players and that current scientific literature clearly recognizes an unmet hearing health need for more education regarding the risks of loud music exposure and the benefits of wearing hearing protection, for more hearing protection use by those at risk, and for more regulations limiting music intensity levels at music entertainment venues.

  3. Neural responses to silent lipreading in normal hearing male and female subjects

    NARCIS (Netherlands)

    Ruytjens, Liesbet; Albers, Frans; van Dijk, Pim; Wit, Hero; Willemsen, Antoon

    In the past, researchers investigated silent lipreading in normal hearing subjects with functional neuroimaging tools and showed how the brain processes visual stimuli that are normally accompanied by an auditory counterpart. Previously, we showed activation differences between males and females in

  4. A Dynamic Speech Comprehension Test for Assessing Real-World Listening Ability.

    Science.gov (United States)

    Best, Virginia; Keidser, Gitte; Freeston, Katrina; Buchholz, Jörg M

    2016-07-01

    Many listeners with hearing loss report particular difficulties with multitalker communication situations, but these difficulties are not well predicted using current clinical and laboratory assessment tools. The overall aim of this work is to create new speech tests that capture key aspects of multitalker communication situations and ultimately provide better predictions of real-world communication abilities and the effect of hearing aids. A test of ongoing speech comprehension introduced previously was extended to include naturalistic conversations between multiple talkers as targets, and a reverberant background environment containing competing conversations. In this article, we describe the development of this test and present a validation study. Thirty listeners with normal hearing participated in this study. Speech comprehension was measured for one-, two-, and three-talker passages at three different signal-to-noise ratios (SNRs), and working memory ability was measured using the reading span test. Analyses were conducted to examine passage equivalence, learning effects, and test-retest reliability, and to characterize the effects of number of talkers and SNR. Although we observed differences in difficulty across passages, it was possible to group the passages into four equivalent sets. Using this grouping, we achieved good test-retest reliability and observed no significant learning effects. Comprehension performance was sensitive to the SNR but did not decrease as the number of talkers increased. Individual performance showed associations with age and reading span score. This new dynamic speech comprehension test appears to be valid and suitable for experimental purposes. Further work will explore its utility as a tool for predicting real-world communication ability and hearing aid benefit. American Academy of Audiology.

  5. Current amplification models of sensorineural and conductive hearing loss

    Directory of Open Access Journals (Sweden)

    Ostojić Sanja

    2012-01-01

    Full Text Available The main function of a hearing aid is to improve the auditory and language abilities of hearing-impaired users. The amplification model has to be adapted according to age, degree, and type of hearing loss. The goal of this paper is to analyze the current amplification models for sensorineural and conductive hearing loss which can provide high-quality perception of speech and sounds at any degree of hearing loss. The BAHA is a surgically implantable system for treatment of conductive hearing loss that works through direct bone conduction. BAHA is used to help people with chronic ear infections, congenital external auditory canal atresia, and single-sided deafness who cannot benefit from conventional hearing aids. The latest generation of hearing aid for sensorineural hearing loss is the cochlear implant. Bimodal amplification improves binaural hearing. Hearing aids alone do not make listening easier in all situations. The things that can interfere with listening are background noises, distance from a sound, and reverberation or echo. The device used most often to address these today is the Frequency Modulated (FM) system.

  6. Survey of college students on iPod use and hearing health.

    Science.gov (United States)

    Danhauer, Jeffrey L; Johnson, Carole E; Byrd, Anne; DeGood, Laura; Meuel, Caitlin; Pecile, Angela; Koch, Lindsey L

    2009-01-01

    The popularity of personal listening devices (PLDs) including iPods has increased dramatically over the past decade. PLDs allow users to listen to music uninterrupted for prolonged periods and at levels that may pose a risk for hearing loss in some listeners, particularly those using earbud earphones that fail to attenuate high ambient noise levels and necessitate increasing volume for acoustic enjoyment. Earlier studies have documented PLD use by teenagers and adults, but omitted college students, who represent a large segment of individuals who use these devices. This study surveyed college students' knowledge about, experiences with, attitudes toward, and practices and preferences for hearing health and use of iPods and/or other PLDs. The study was designed to help determine the need, content, and preferred format for educational outreach campaigns regarding safe iPod use for college students. An 83-item questionnaire was designed and used to survey college students' knowledge about, experiences with, attitudes toward, and practices/preferences for hearing health and PLD use. The questionnaire assessed Demographics and Knowledge of Hearing Health, iPod Users' Practices and Preferences, Attitudes toward iPod Use, and Reasons for iPod Use. Generally, most college students were knowledgeable about hearing health but could use information about signs of hearing loss and how to prevent it. Two-thirds of these students used iPods, but not at levels or for durations that should pose excessive risks for hearing loss when listening in quiet environments. However, most iPod users could be at risk for hearing loss given a combination of common practices. Most of these college students should not be at great risk of hearing loss from their iPods when used conscientiously. Some concern is warranted for a small segment of these students who seemed to be most at risk because they listened to their iPods at high volume levels for long durations using earbuds, and reported that

  7. How age affects memory task performance in clinically normal hearing persons.

    Science.gov (United States)

    Vercammen, Charlotte; Goossens, Tine; Wouters, Jan; van Wieringen, Astrid

    2017-05-01

    The main objective of this study is to investigate memory task performance in different age groups, irrespective of hearing status. Data are collected on a short-term memory task (WAIS-III Digit Span forward) and two working memory tasks (WAIS-III Digit Span backward and the Reading Span Test). The tasks are administered to young (20-30 years, n = 56), middle-aged (50-60 years, n = 47), and older participants (70-80 years, n = 16) with normal hearing thresholds. All participants passed a cognitive screening task (the Montreal Cognitive Assessment, MoCA). Young participants perform significantly better than middle-aged participants, while middle-aged and older participants perform similarly on the three memory tasks. Our data show that older clinically normal hearing persons perform equally well on the memory tasks as middle-aged persons. However, even under optimal conditions of preserved sensory processing, changes in memory performance occur. Based on our data, these changes set in before middle age.

  8. Hearing aid fine-tuning based on Dutch descriptions.

    Science.gov (United States)

    Thielemans, Thijs; Pans, Donné; Chenault, Michelene; Anteunis, Lucien

    2017-07-01

    The aim of this study was to derive an independent fitting assistant based on expert consensus. Two questions were asked: (1) What (Dutch) terms do hearing-impaired listeners currently use to describe their specific hearing aid fitting problems? (2) What is the expert consensus on how to resolve these complaints by adjusting hearing aid parameters? Hearing aid dispensers provided descriptors that hearing-impaired listeners use to describe their reactions to specific hearing aid fitting problems. Hearing aid fitting experts were asked "How would you adjust the hearing aid if its user reports that the aid sounds…?" with the blank filled with each of the 40 most frequently mentioned descriptors. 112 hearing aid dispensers and 15 hearing aid experts participated. The expert solution with the highest weight value was considered the best solution for that descriptor. Principal component analysis (PCA) was performed to identify a factor structure in fitting problems. Nine fitting problems could be identified, resulting in an expert-based, hearing aid manufacturer independent, fine-tuning fitting assistant for clinical use. The construction of an expert-based, hearing aid manufacturer independent, fine-tuning fitting assistant to be used as an additional tool in the iterative fitting process is feasible.

  9. Individual Hearing Loss

    Directory of Open Access Journals (Sweden)

    Sébastien Santurette

    2016-06-01

    Full Text Available It is well-established that hearing loss does not only lead to a reduction of hearing sensitivity. Large individual differences are typically observed among listeners with hearing impairment in a wide range of suprathreshold auditory measures. In many cases, audiometric thresholds cannot fully account for such individual differences, which makes it challenging to find adequate compensation strategies in hearing devices. How to characterize, model, and compensate for individual hearing loss were the main topics of the fifth International Symposium on Auditory and Audiological Research (ISAAR), held in Nyborg, Denmark, in August 2015. The following collection of papers results from some of the work that was presented and discussed at the symposium.

  10. Operative findings of conductive hearing loss with intact tympanic membrane and normal temporal bone computed tomography.

    Science.gov (United States)

    Kim, Se-Hyung; Cho, Yang-Sun; Kim, Hye Jeong; Kim, Hyung-Jin

    2014-06-01

    Despite recent technological advances in diagnostic methods including imaging technology, it is often difficult to establish a preoperative diagnosis of conductive hearing loss (CHL) in patients with an intact tympanic membrane (TM). Especially in patients with a normal temporal bone computed tomography (TBCT), preoperative diagnosis is more difficult. We investigated middle ear disorders encountered in patients with CHL involving an intact TM and normal TBCT. We also analyzed the surgical results with special reference to the pathology. We reviewed the medical records of 365 patients with intact TM who underwent exploratory tympanotomy for CHL. Fifty-nine patients (67 ears, eight bilateral surgeries) had normal preoperative TBCT findings as reported by neuro-radiologists. Demographic data, otologic history, TM findings, preoperative imaging findings, intraoperative findings, and pre- and postoperative audiologic data were obtained and analyzed. Exploration was performed most frequently in the second and fifth decades. The most common postoperative diagnosis was stapedial fixation with non-progressive hearing loss. The most commonly performed hearing-restoring procedure was stapedotomy with piston wire prosthesis insertion. Various types of hearing-restoring procedures during exploration resulted in effective hearing improvement, with better outcomes in the ossicular chain fixation group. In patients with CHL who have intact TM and normal TBCT, we should consider an exploratory tympanotomy for exact diagnosis and hearing improvement. Information on the common operative findings from this study may help in preoperative counseling.

  11. Categorization of common sounds by cochlear implanted and normal hearing adults.

    Science.gov (United States)

    Collett, E; Marx, M; Gaillard, P; Roby, B; Fraysse, B; Deguine, O; Barone, P

    2016-05-01

    Auditory categorization involves grouping of acoustic events along one or more shared perceptual dimensions which can relate to both semantic and physical attributes. This process involves both high-level cognitive processes (categorization) and low-level perceptual encoding of the acoustic signal, both of which are affected by the use of a cochlear implant (CI) device. The goal of this study was twofold: (I) to compare the categorization strategies of CI users and normal hearing listeners (NHL); (II) to investigate whether any characteristics of the raw acoustic signal could explain the results. Sixteen experienced CI users and 20 NHL were tested using a Free-Sorting Task of 16 common sounds divided into 3 predefined categories of environmental, musical and vocal sounds. Multiple Correspondence Analysis (MCA) and Hierarchical Clustering based on Principal Components (HCPC) show that CI users followed a similar categorization strategy to that of NHL and were able to discriminate between the three different types of sounds. However, results for CI users were more varied and showed less inter-participant agreement. Acoustic analysis also highlighted the average pitch salience and average autocorrelation peak as being important for the perception and categorization of the sounds. The results therefore show that on a broad level of categorization CI users may not have as many difficulties as previously thought in discriminating certain kinds of sound; however, the perception of individual sounds remains challenging. Copyright © 2016 Elsevier B.V. All rights reserved.
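
    The clustering step this abstract describes (MCA followed by HCPC, typically run in R with FactoMineR) can be sketched in Python. The feature matrix below is invented for illustration, and standardization plus Ward-linkage hierarchical clustering stands in for the full MCA/HCPC pipeline; it is not the authors' actual analysis.

    ```python
    import numpy as np
    from scipy.cluster.hierarchy import linkage, fcluster

    rng = np.random.default_rng(0)
    # Hypothetical matrix: 16 sounds x 4 acoustic descriptors
    # (e.g. pitch salience, autocorrelation peak) -- values invented for illustration.
    X = rng.normal(size=(16, 4))

    # Standardize each descriptor, then cluster with Ward linkage,
    # analogous to HCPC clustering on component scores.
    Xs = (X - X.mean(axis=0)) / X.std(axis=0)
    Z = linkage(Xs, method="ward")
    labels = fcluster(Z, t=3, criterion="maxclust")  # cut the tree into 3 clusters
    print(labels)  # one cluster id (1-3) per sound
    ```

    Cutting the dendrogram at three clusters mirrors the three predefined categories (environmental, musical, vocal); agreement between the cut and the predefined labels is what a free-sorting analysis would then examine.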

  12. Contribution of low- and high-frequency bands to binaural unmasking in hearing-impaired listeners

    DEFF Research Database (Denmark)

    Locsei, Gusztav; Dau, Torsten; Santurette, Sébastien

    2017-01-01

    This study investigated the contribution of interaural timing differences (ITDs) in different frequency regions to binaural unmasking (BU) of speech. Speech reception thresholds (SRTs) and binaural intelligibility level differences (BILDs) were measured in two-talker babble in 6 young normal-hear...

  13. Predicting social functioning in children with a cochlear implant and in normal-hearing children: the role of emotion regulation.

    Science.gov (United States)

    Wiefferink, Carin H; Rieffe, Carolien; Ketelaar, Lizet; Frijns, Johan H M

    2012-06-01

    The purpose of the present study was to compare children with a cochlear implant and normal hearing children on aspects of emotion regulation (emotion expression and coping strategies) and social functioning (social competence and externalizing behaviors) and the relation between emotion regulation and social functioning. Participants were 69 children with cochlear implants (CI children) and 67 normal hearing children (NH children) aged 1.5-5 years. Parents answered questionnaires about their children's language skills, social functioning, and emotion regulation. Children also completed simple tasks to measure their emotion regulation abilities. Cochlear implant children had fewer adequate emotion regulation strategies and were less socially competent than normal hearing children. The parents of cochlear implant children did not report fewer externalizing behaviors than those of normal hearing children. While social competence in normal hearing children was strongly related to emotion regulation, cochlear implant children regulated their emotions in ways that were unrelated to social competence. On the other hand, emotion regulation explained externalizing behaviors better in cochlear implant children than in normal hearing children. While better language skills were related to higher social competence in both groups, they were related to fewer externalizing behaviors only in cochlear implant children. Our results indicate that cochlear implant children have less adequate emotion-regulation strategies and less social competence than normal hearing children. Since they received their implants relatively recently, they might eventually catch up with their hearing peers. Longitudinal studies should further explore the development of emotion regulation and social functioning in cochlear implant children. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.

  14. Binaural Hearing Ability With Bilateral Bone Conduction Stimulation in Subjects With Normal Hearing: Implications for Bone Conduction Hearing Aids.

    Science.gov (United States)

    Zeitooni, Mehrnaz; Mäki-Torkko, Elina; Stenfelt, Stefan

    The purpose of this study is to evaluate binaural hearing ability in adults with normal hearing when bone conduction (BC) stimulation is bilaterally applied at the bone conduction hearing aid (BCHA) implant position as well as at the audiometric position on the mastoid. The results with BC stimulation are compared with bilateral air conduction (AC) stimulation through earphones. Binaural hearing ability is investigated with tests of spatial release from masking and binaural intelligibility level difference using sentence material, binaural masking level difference with tonal chirp stimulation, and precedence effect using a noise stimulus. In all tests, results with bilateral BC stimulation at the BCHA position illustrate an ability to extract binaural cues similar to BC stimulation at the mastoid position. The binaural benefit is overall greater with AC stimulation than BC stimulation at both positions. The binaural benefit for BC stimulation at the mastoid and BCHA position is approximately half in terms of decibels compared with AC stimulation in the speech-based tests (spatial release from masking and binaural intelligibility level difference). For binaural masking level difference, the binaural benefit for the two BC positions with chirp signal phase inversion is approximately twice the benefit with inverted phase of the noise. The precedence effect results with BC stimulation at the mastoid and BCHA position are similar for low-frequency noise stimulation but differ with high-frequency noise stimulation. The results confirm that binaural hearing processing with bilateral BC stimulation at the mastoid position is also present at the BCHA implant position. This indicates the ability for binaural hearing in patients with good cochlear function when using bilateral BCHAs.

  15. A study of the possibility of acquiring noise-induced hearing loss by the use of personal cassette players (Walkman).

    Science.gov (United States)

    Turunen-Rise, I; Flottorp, G; Tvete, O

    1991-01-01

    Playing various types of music on five selected personal cassette players (PCPs), A-weighted sound pressure levels (SPLs), together with octave band spectra, were measured on KEMAR (Knowles Electronics Manikin for Acoustic Research). Maximum and equivalent SPLs were measured for various types of music, PCPs, and different gain (volume) settings. The SPL values measured at the KEMAR ear were transformed to field values outside the ear canal by means of corrections based on KEMAR's ear canal resonance curve, in order to compare measured values with the Norwegian national noise risk criteria. Temporary threshold shift (TTS) was measured after listening to PCP music for one hour in order to obtain additional information about the possible risk of hearing damage. TTS values are presented for six subjects playing two different pop music cassettes on one type of PCP. Our analysis indicates that the risk for permanent noise-induced hearing loss from listening to PCPs is very small under normal listening conditions.
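
    The equivalent SPL (Leq) reported in such measurements is the level of a steady sound carrying the same energy as the fluctuating signal. A minimal sketch of that computation, with a synthetic 1 kHz tone standing in for measured ear-canal pressure (not the study's actual data):

    ```python
    import numpy as np

    P0 = 20e-6  # reference pressure: 20 micropascals

    def leq_spl(pressure):
        """Equivalent continuous sound pressure level (dB SPL re 20 uPa)."""
        return 10.0 * np.log10(np.mean(pressure ** 2) / P0 ** 2)

    # Example: a 1 kHz tone with an RMS pressure of 0.02 Pa corresponds to 60 dB SPL.
    t = np.linspace(0, 1, 48000, endpoint=False)
    p = 0.02 * np.sqrt(2) * np.sin(2 * np.pi * 1000 * t)
    print(round(leq_spl(p), 1))  # 60.0
    ```

    Risk criteria like the Norwegian ones cited above are stated in terms of A-weighted Leq over a working day; the A-weighting filter is omitted here for brevity.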

  16. Adolescents' Risky MP3-Player Listening and Its Psychosocial Correlates

    Science.gov (United States)

    Vogel, Ineke; Brug, Johannes; Van Der Ploeg, Catharina P. B.; Raat, Hein

    2011-01-01

    Analogue to occupational noise-induced hearing loss, MP3-induced hearing loss may be evolving into a significant social and public health problem. To inform prevention strategies and interventions, this study investigated correlates of adolescents' risky MP3-player listening behavior primarily informed by protection motivation theory. We invited…

  17. Comparison of Social Interaction between Cochlear-Implanted Children with Normal Intelligence Undergoing Auditory Verbal Therapy and Normal-Hearing Children: A Pilot Study.

    Science.gov (United States)

    Monshizadeh, Leila; Vameghi, Roshanak; Sajedi, Firoozeh; Yadegari, Fariba; Hashemi, Seyed Basir; Kirchem, Petra; Kasbi, Fatemeh

    2018-04-01

    A cochlear implant is a device that helps hearing-impaired children by transmitting sound signals to the brain and helping them improve their speech, language, and social interaction. Although various studies have investigated the different aspects of speech perception and language acquisition in cochlear-implanted children, little is known about their social skills, particularly Persian-speaking cochlear-implanted children. Considering the growing number of cochlear implants being performed in Iran and the increasing importance of developing near-normal social skills as one of the ultimate goals of cochlear implantation, this study was performed to compare social interaction between Iranian cochlear-implanted children who have undergone rehabilitation (auditory verbal therapy) after surgery and normal-hearing children. This descriptive-analytical study compared the social interaction level of 30 children with normal hearing and 30 with cochlear implants, selected by convenience sampling. The Raven test was administered to both groups to ensure a normal intelligence quotient. The social interaction status of both groups was evaluated using the Vineland Adaptive Behavior Scale, and statistical analysis was performed using Statistical Package for Social Sciences (SPSS) version 21. After controlling for age as a covariate, no significant difference was observed between the social interaction scores of the two groups (p > 0.05). In addition, social interaction had no correlation with sex in either group. Cochlear implantation followed by auditory verbal rehabilitation helps children with sensorineural hearing loss to have normal social interactions, regardless of their sex.

  18. Variability and Intelligibility of Clarified Speech to Different Listener Groups

    Science.gov (United States)

    Silber, Ronnie F.

    Two studies examined the modifications that adult speakers make in speech to disadvantaged listeners. Previous research focusing on speech to deaf individuals and to young children has shown that adults clarify speech when addressing these two populations. Acoustic measurements suggest that the signal undergoes similar changes for both populations. Perceptual tests corroborate these results for the deaf population, but are nonsystematic in developmental studies. The differences in the findings for these populations and the nonsystematic results in the developmental literature may be due to methodological factors. The present experiments addressed these methodological questions. Studies of speech to hearing-impaired listeners have used read nonsense sentences, for which speakers received explicit clarification instructions and feedback, while in the child literature, excerpts of real-time conversations were used. Therefore, linguistic samples were not precisely matched. In this study, experiments used various linguistic materials. Experiment 1 used a children's story; experiment 2, nonsense sentences. Four mothers read both types of material in four ways: (1) in "normal" adult speech, (2) in "babytalk," (3) under the clarification instructions used in the hearing-impaired studies (instructed clear speech), and (4) in (spontaneous) clear speech without instruction. No extra practice or feedback was given. Sentences were presented to 40 normal hearing college students with and without simultaneous masking noise. Results were separately tabulated for content and function words, and analyzed using standard statistical tests. The major finding in the study was individual variation in speaker intelligibility. "Real world" speakers vary in their baseline intelligibility. The four speakers also showed unique patterns of intelligibility as a function of each independent variable. Results were as follows. Nonsense sentences were less intelligible than story

  19. Evaluation of Noise in Hearing Instruments Caused by GSM and DECT Mobile Telephones

    DEFF Research Database (Denmark)

    Hansen, Mie Østergaard; Poulsen, Torben

    1996-01-01

    The annoyance of noise in hearing instruments caused by electromagnetic interference from Global System for Mobile Communications (GSM) and Digital European Cordless Telecommunication (DECT) mobile telephones has been subjectively evaluated by test subjects. The influence of the GSM and the DECT noises on speech recognition was also determined. The measurements involved seventeen hearing-impaired subjects. The annoyance was tested with GSM and DECT noise, each one mixed with continuous speech, a mall environment noise, or an office environment noise. Speech recognition was tested with the DANTALE word material mixed with GSM and DECT noise. The listening tests showed that if the noise level is acceptable, so also is speech recognition. The results agree well with an investigation carried out on normal-hearing subjects. If a hearing instrument user is able to use a telephone without annoyance...

  20. The Phonemic Awareness Skills of Cochlear Implant Children and Children with Normal Hearing in Primary School

    Directory of Open Access Journals (Sweden)

    Aliakbar Dashtelei

    2015-12-01

    Full Text Available Objectives: Phonemic awareness skills have a significant impact on children's speech and language. The purpose of this study was to investigate the phonemic awareness skills of children with cochlear implants and normal hearing peers in primary school. Methods: The phonemic awareness subscales of a phonological awareness test were administered to 30 children with cochlear implants in the first to sixth grades of primary school and 30 children with normal hearing who were matched in age with the cochlear implant group. All children were between 6 and 11 years old. Children with cochlear implants had at least 1 to 2 years of implant experience and were over 5 years old when they received implantation. Children with cochlear implants were selected from special education centers in Tehran and children with normal hearing were recruited from primary schools in Tehran. The phonemic awareness skills were assessed in both groups. Results: The results showed that the mean scores of phonemic awareness skills in cochlear implant children were significantly lower than in children with normal hearing (P<.0001). Discussion: Children with cochlear implants, despite the implanted prosthesis, had lower performance in phonemic awareness when compared with normal hearing children. Therefore, given the importance of phonemic awareness skills in learning literacy skills, and the deficits in these skills in children with cochlear implants, these skills should be assessed carefully in children with cochlear implants and rehabilitative interventions should be considered.

  1. Assessment of hearing aid algorithms using a master hearing aid: the influence of hearing aid experience on the relationship between speech recognition and cognitive capacity.

    Science.gov (United States)

    Rählmann, Sebastian; Meis, Markus; Schulte, Michael; Kießling, Jürgen; Walger, Martin; Meister, Hartmut

    2017-04-27

    Model-based hearing aid development considers the assessment of speech recognition using a master hearing aid (MHA). It is known that aided speech recognition in noise is related to cognitive factors such as working memory capacity (WMC). This relationship might be mediated by hearing aid experience (HAE). The aim of this study was to examine the relationship of WMC and speech recognition with an MHA for listeners with different HAE. Using the MHA, unaided and aided 80% speech recognition thresholds in noise were determined. Individual WMC was assessed using the Verbal Learning and Memory Test (VLMT) and the Reading Span Test (RST). Forty-nine hearing aid users with mild to moderate sensorineural hearing loss were divided into three groups differing in HAE. Whereas unaided speech recognition did not show a significant relationship with WMC, a significant correlation could be observed between WMC and aided speech recognition. However, this only applied to listeners with HAE of up to approximately three years, and a consistent weakening of the correlation could be observed with more experience. Speech recognition scores obtained in acute experiments with an MHA are less influenced by individual cognitive capacity when experienced HA users are taken into account.

  2. The performance of an automatic acoustic-based program classifier compared to hearing aid users' manual selection of listening programs.

    Science.gov (United States)

    Searchfield, Grant D; Linford, Tania; Kobayashi, Kei; Crowhen, David; Latzel, Matthias

    2018-03-01

    To compare preference for and performance of manually selected programmes to an automatic sound classifier, the Phonak AutoSense OS. A single blind repeated measures study. Participants were fit with Phonak Virto V90 ITE aids; preferences for different listening programmes were compared across four different sound scenarios (speech in: quiet, noise, loud noise, and a car). Following a 4-week trial, preferences were reassessed and the user's preferred programme was compared to the automatic classifier for sound quality and hearing in noise (HINT test) using a 12-loudspeaker array. Twenty-five participants with symmetrical moderate-severe sensorineural hearing loss took part. Participants' manual programme preferences for the scenarios varied considerably between and within sessions. A HINT Speech Reception Threshold (SRT) advantage was observed for the automatic classifier over participants' manual selection for speech in quiet, loud noise, and car noise. Sound quality ratings were similar for both manual and automatic selections. The use of a sound classifier is a viable alternative to manual programme selection.

  3. The effect of sensorineural hearing loss and tinnitus on speech recognition over air and bone conduction military communications headsets.

    Science.gov (United States)

    Manning, Candice; Mermagen, Timothy; Scharine, Angelique

    2017-06-01

    Military personnel are at risk for hearing loss due to noise exposure during deployment (USACHPPM, 2008). Despite mandated use of hearing protection, hearing loss and tinnitus are prevalent due to reluctance to use hearing protection. Bone conduction headsets can offer good speech intelligibility for normal hearing (NH) listeners while allowing the ears to remain open in quiet environments and the use of hearing protection when needed. Those who suffer from tinnitus, the experience of perceiving a sound not produced by an external source, often show degraded speech recognition; however, it is unclear whether this is a result of decreased hearing sensitivity or increased distractibility (Moon et al., 2015). It has been suggested that the vibratory stimulation of a bone conduction headset might ameliorate the effects of tinnitus on speech perception; however, there is currently no research to support or refute this claim (Hoare et al., 2014). Speech recognition of words presented over air conduction and bone conduction headsets was measured for three groups of listeners: NH listeners, listeners with sensorineural hearing impairment, and/or tinnitus sufferers. Three speech-to-noise ratios (SNR = 0, -6, -12 dB) were created by embedding speech items in pink noise. Better speech recognition performance was observed with the bone conduction headset regardless of hearing profile, and speech intelligibility was a function of SNR. Discussion will include study limitations and the implications of these findings for those serving in the military. Published by Elsevier B.V.
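
    Embedding speech in noise at a fixed SNR, as described in this abstract, amounts to scaling the noise so that the RMS ratio matches the target. A minimal sketch, with Gaussian noise standing in for pink noise and synthetic signals in place of the study's recordings:

    ```python
    import numpy as np

    def mix_at_snr(speech, noise, snr_db):
        """Scale `noise` so the speech-to-noise ratio equals `snr_db`, then mix."""
        speech_rms = np.sqrt(np.mean(speech ** 2))
        noise_rms = np.sqrt(np.mean(noise ** 2))
        # Gain that brings the noise RMS to speech_rms / 10^(snr_db / 20).
        gain = (speech_rms / noise_rms) / (10 ** (snr_db / 20))
        return speech + gain * noise

    rng = np.random.default_rng(1)
    speech = rng.normal(scale=0.1, size=16000)  # stand-in for a speech item
    noise = rng.normal(scale=0.3, size=16000)   # stand-in for pink noise
    mixes = {snr: mix_at_snr(speech, noise, snr) for snr in (0, -6, -12)}
    ```

    Negative SNRs mean the noise RMS exceeds the speech RMS, so each 6 dB step roughly doubles the noise amplitude relative to the speech.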

  4. Differences in Perception of Musical Stimuli among Acoustic, Electric, and Combined Modality Listeners.

    Science.gov (United States)

    Prentiss, Sandra M; Friedland, David R; Nash, John J; Runge, Christina L

    2015-05-01

    Cochlear implants have shown vast improvements in speech understanding for those with severe to profound hearing loss; however, music perception remains a challenge for electric hearing. It is unclear whether the difficulties arise from limitations of sound processing, the nature of a damaged auditory system, or a combination of both. To examine music perception performance with different acoustic and electric hearing configurations, chord discrimination and timbre perception were tested in subjects representing four daily-use listening configurations: unilateral cochlear implant (CI), contralateral bimodal (CIHA), bilateral hearing aid (HAHA), and normal-hearing (NH) listeners. A same-different task was used for discrimination of two chords played on piano. Timbre perception was assessed using a 10-instrument forced-choice identification task. Fourteen adults were included in each group, none of whom were professional musicians. The number of correct responses was divided by the total number of presentations to calculate scores in percent correct. Data analyses were performed with Kruskal-Wallis one-way analysis of variance and linear regression. Chord discrimination showed a narrow range of performance across groups, with mean scores ranging between 72.5% (CI) and 88.9% (NH). Significant differences were seen between the NH group and all hearing-impaired groups. Both the HAHA and CIHA groups performed significantly better than the CI group, and no significant differences were observed between the HAHA and CIHA groups. Timbre perception was significantly poorer for the hearing-impaired groups (mean scores ranged from 50.3-73.9%) compared to NH (95.2%). Significantly better performance was observed in the HAHA group as compared to both groups with electric hearing (CI and CIHA). There was no significant difference in performance between the CIHA and CI groups. Timbre perception was a significantly more difficult task than chord discrimination for both the CI and CIHA
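    The scoring and analysis pipeline described above (percent-correct scores followed by a Kruskal-Wallis one-way analysis of variance on the four groups) can be sketched as below. The group scores are invented for illustration and are not the study's data; `scipy.stats.kruskal` implements the test the authors name.

    ```python
    from scipy.stats import kruskal

    def percent_correct(n_correct, n_presentations):
        """Score as correct responses divided by total presentations, in percent."""
        return 100.0 * n_correct / n_presentations

    # Hypothetical per-subject timbre-identification scores (percent correct)
    # for the four listening configurations; values are illustrative only.
    nh   = [92.5, 95.0, 97.5, 95.0, 96.2]
    haha = [70.0, 75.0, 72.5, 68.8, 73.1]
    ciha = [55.0, 60.0, 52.5, 58.0, 57.1]
    ci   = [50.0, 48.8, 55.0, 51.2, 49.0]

    # Kruskal-Wallis one-way analysis of variance on ranks across the groups
    h_stat, p_value = kruskal(nh, haha, ciha, ci)
    print(f"H = {h_stat:.2f}, p = {p_value:.4f}")
    ```

    The Kruskal-Wallis test is a sensible choice here because percent-correct scores are bounded and often non-normal, so a rank-based test avoids the distributional assumptions of a parametric ANOVA.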

  5. Effects of Hearing Loss and Fast-Acting Compression on Amplitude Modulation Perception and Speech Intelligibility

    DEFF Research Database (Denmark)

    Wiinberg, Alan; Jepsen, Morten Løve; Epp, Bastian

    2018-01-01

    Objective: The purpose was to investigate the effects of hearing-loss and fast-acting compression on speech intelligibility and two measures of temporal modulation sensitivity. Design: Twelve adults with normal hearing (NH) and 16 adults with mild to moderately severe sensorineural hearing loss......, the MDD thresholds were higher for the group with hearing loss than for the group with NH. Fast-acting compression increased the modulation detection thresholds, while no effect of compression on the MDD thresholds was observed. The speech reception thresholds obtained in stationary noise were slightly...... of the modulation detection thresholds, compression does not seem to provide a benefit for speech intelligibility. Furthermore, fast-acting compression may not be able to restore MDD thresholds to the values observed for listeners with NH, suggesting that the two measures of amplitude modulation sensitivity...

  6. Discrimination task reveals differences in neural bases of tinnitus and hearing impairment.

    Directory of Open Access Journals (Sweden)

    Fatima T Husain

    We investigated auditory perception and cognitive processing in individuals with chronic tinnitus or hearing loss using functional magnetic resonance imaging (fMRI). Our participants belonged to one of three groups: bilateral hearing loss and tinnitus (TIN), bilateral hearing loss without tinnitus (HL), and normal hearing without tinnitus (NH). We employed pure tones and frequency-modulated sweeps as stimuli in two tasks: passive listening and active discrimination. All subjects had normal hearing through 2 kHz and all stimuli were low-pass filtered at 2 kHz so that all participants could hear them equally well. Performance was similar among all three groups for the discrimination task. In all participants, a distributed set of brain regions including the primary and non-primary auditory cortices showed greater response for both tasks compared to rest. Comparing the groups directly, we found decreased activation in the parietal and frontal lobes in the participants with tinnitus compared to the HL group and decreased response in the frontal lobes relative to the NH group. Additionally, the HL subjects exhibited increased response in the anterior cingulate relative to the NH group. Our results suggest that a differential engagement of a putative auditory attention and short-term memory network, comprising regions in the frontal, parietal and temporal cortices and the anterior cingulate, may represent a key difference in the neural bases of chronic tinnitus accompanied by hearing loss relative to hearing loss alone.

  7. The sound of study : Student experiences of listening in the university soundscape

    NARCIS (Netherlands)

    Dr. Ernst Thoutenhoofd; Jana Knot-Dickscheit; Jana Rogge; Margriet van der Meer; Gisela Schulze; Gerold Jacobs; Beppie van den Bogaerde

    2015-01-01

    The students from three universities (Groningen, Oldenburg and the University of Applied Sciences in Utrecht) were surveyed on the experience of hearing and listening in their studies. Included in the online survey were established questionnaires on hearing loss, tinnitus, hyperacusis, a subscale on

  8. The sound of study : Student experiences of listening in the university soundscape

    NARCIS (Netherlands)

    Thoutenhoofd, Ernst D.; Knot-Dickscheit, Jana; Rogge, Jana; van der Meer, Margriet; Schulze, Gisela C.; Jacobs, Gerold; van den Bogaerde, Beppie

    2016-01-01

    The students from three universities (Groningen, Oldenburg and the University of Applied Sciences in Utrecht) were surveyed on the experience of hearing and listening in their study. Included in the online survey were established questionnaires on hearing loss, tinnitus, hyperacusis, a subscale on

  9. The sound of study: Student experiences of listening in the university soundscape

    NARCIS (Netherlands)

    Thoutenhoofd, E.D.; Knot-Dickscheit, J.; Rogge, J.; van der Meer, M.; Schulze, G.; Jacobs, G.; van den Bogaerde, B.

    2016-01-01

    The students from three universities (Groningen, Oldenburg and the University of Applied Sciences in Utrecht) were surveyed on the experience of hearing and listening in their studies. Included in the online survey were established questionnaires on hearing loss, tinnitus, hyperacusis, a subscale on

  10. Effects of musical training and hearing loss on pitch discrimination

    DEFF Research Database (Denmark)

    Santurette, Sébastien; Bianchi, Federica; Dau, Torsten

    2018-01-01

    content of the sound and whether the harmonics are resolved by the auditory frequency analysis operated by cochlear processing. F0DLs are also heavily influenced by the amount of musical training received by the listener and by the spectrotemporal auditory processing deficits that often accompany...... sensorineural hearing loss. This paper reviews the latest evidence for how musical training and hearing loss affect pitch discrimination performance, based on behavioral F0DL experiments with complex tones containing either resolved or unresolved harmonics, carried out in listeners with different degrees...... of hearing loss and musicianship. A better understanding of the interaction between these two factors is crucial to determine whether auditory training based on musical tasks or targeted towards specific auditory cues may be useful to hearing-impaired patients undergoing hearing rehabilitation....

  11. Dichotic Listening Can Improve Perceived Clarity of Music in Cochlear Implant Users.

    Science.gov (United States)

    Vannson, Nicolas; Innes-Brown, Hamish; Marozeau, Jeremy

    2015-08-26

    Musical enjoyment for cochlear implant (CI) recipients is often reported to be unsatisfactory. Our goal was to determine whether the musical experience of postlingually deafened adult CI recipients could be enriched by presenting the bass and treble clef parts of short polyphonic piano pieces separately to each ear (dichotic). Dichotic presentation should artificially enhance the lateralization cues of each part and help the listeners to better segregate them and thus provide greater clarity. We also hypothesized that perception of the intended emotion of the pieces and their overall enjoyment would be enhanced in the dichotic mode compared with the monophonic (both parts in the same ear) and the diotic mode (both parts in both ears). Twenty-eight piano pieces specifically composed to induce sad or happy emotions were selected. The tempo of the pieces, which ranged from lento to presto, covaried with the intended emotion (from sad to happy). Thirty participants (11 normal-hearing listeners, 11 bimodal CI and hearing-aid users, and 8 bilaterally implanted CI users) participated in this study. Participants were asked to rate the perceived clarity, the intended emotion, and their preference of each piece in different listening modes. Results indicated that dichotic presentation produced small significant improvements in subjective ratings based on perceived clarity and preference. We also found that preference and clarity ratings were significantly higher for pieces with fast tempi compared with slow tempi. However, no significant differences between diotic and dichotic presentation were found for the participants' preference ratings, or their judgments of intended emotion. © The Author(s) 2015.

  12. Dichotic Listening Can Improve Perceived Clarity of Music in Cochlear Implant Users

    Directory of Open Access Journals (Sweden)

    Nicolas Vannson

    2015-08-01

    Musical enjoyment for cochlear implant (CI) recipients is often reported to be unsatisfactory. Our goal was to determine whether the musical experience of postlingually deafened adult CI recipients could be enriched by presenting the bass and treble clef parts of short polyphonic piano pieces separately to each ear (dichotic). Dichotic presentation should artificially enhance the lateralization cues of each part and help the listeners to better segregate them and thus provide greater clarity. We also hypothesized that perception of the intended emotion of the pieces and their overall enjoyment would be enhanced in the dichotic mode compared with the monophonic (both parts in the same ear) and the diotic mode (both parts in both ears). Twenty-eight piano pieces specifically composed to induce sad or happy emotions were selected. The tempo of the pieces, which ranged from lento to presto, covaried with the intended emotion (from sad to happy). Thirty participants (11 normal-hearing listeners, 11 bimodal CI and hearing-aid users, and 8 bilaterally implanted CI users) participated in this study. Participants were asked to rate the perceived clarity, the intended emotion, and their preference of each piece in different listening modes. Results indicated that dichotic presentation produced small significant improvements in subjective ratings based on perceived clarity. We also found that preference and clarity ratings were significantly higher for pieces with fast tempi compared with slow tempi. However, no significant differences between diotic and dichotic presentation were found for the participants’ preference ratings, or their judgments of intended emotion.

  13. Is AGC beneficial in hearing aids?

    Science.gov (United States)

    King, A B; Martin, M C

    1984-02-01

    Three different functions of Automatic Gain Control (AGC) circuits in hearing aids are distinguished and the evidence for their benefits is considered. The value of AGC's function as a relatively distortion-free means of limiting output has been well established. With regard to compression, the benefit of short-term or 'syllabic' compression has not been demonstrated convincingly. Most evaluations of this type of AGC have looked for increase in speech intelligibility, but theoretical predictions of its effect do not appear to take account of the acoustic cues to consonant contrasts actually used by hearing impaired people, and empirical studies have often used listening conditions which do not give a realistic test of benefit. Relatively little attention has been paid to long-term compression, or to the effect of AGC on comfort rather than intelligibility. Listening tests carried out at the RNID and reported here have shown that AGC can benefit hearing aid users by allowing them to listen to a wider range of sound levels without either strain or discomfort, and, if time constants are well chosen, without adverse effects on speech intelligibility in quiet or in noise.

  14. Relation of distortion product otoacoustic emission and tinnitus in normal hearing patients: A pilot study

    Directory of Open Access Journals (Sweden)

    Datt Modh

    2014-01-01

    Introduction: Tinnitus, the perception of sound in the absence of an external acoustic source, disrupts the daily life of 1 out of every 200 adults, yet its physiological basis remains largely a mystery. The generation of tinnitus is commonly linked with impaired functioning of the outer hair cells (OHC) inside the cochlea. Otoacoustic emissions are an objective test used to assess their activity. Objective: The objective of the investigation was to study the features of distortion product otoacoustic emissions (DPOAE) in a group of tinnitus patients with normal hearing, and to find out whether DPOAE findings differ between tinnitus patients with normal hearing and normal-hearing persons with no complaint of tinnitus. Materials and Methods: The participants formed two groups. The subject group consisted of 16 ears from 10 patients: 6 patients had tinnitus in both ears, while 4 had tinnitus in only one ear. All subjects were aged between 20 and 60 years, with complaints of tinnitus and audiometrically normal hearing. The control group comprised 16 audiometrically normal-hearing ears of persons who were age- and gender-matched with the subject group and had no complaint of tinnitus. Both groups underwent the DPOAE test, and the findings were compared using the unpaired t-test. Result and conclusion: The amplitudes of DPOAE were significantly lower in tinnitus patients than in persons without tinnitus in the 1281-1560, 5120-6250, and 7243-8837 Hz frequency ranges, which implies that a decrease in DPOAE amplitudes may be related to the presence of tinnitus. It can be concluded that there is an association between tinnitus and reduced OHC activity, indicating that the OHCs of the cochlea are involved in the generation of tinnitus.

  15. 10 Ways to Identify Hearing Loss | NIH MedlinePlus the Magazine

    Science.gov (United States)

    ... Does a hearing problem cause you difficulty when listening to TV or radio? Do you feel that any difficulty with your hearing limits or hampers your personal or social life? Does a hearing problem cause you difficulty ...

  16. Sentence Recognition Prediction for Hearing-impaired Listeners in Stationary and Fluctuation Noise With FADE

    Science.gov (United States)

    Schädler, Marc René; Warzybok, Anna; Meyer, Bernd T.; Brand, Thomas

    2016-01-01

    To characterize the individual patient’s hearing impairment as obtained with the matrix sentence recognition test, a simulation Framework for Auditory Discrimination Experiments (FADE) is extended here using the Attenuation and Distortion (A+D) approach by Plomp as a blueprint for setting the individual processing parameters. FADE has been shown to predict the outcome of both speech recognition tests and psychoacoustic experiments based on simulations using an automatic speech recognition system requiring only a few assumptions. It builds on the closed-set matrix sentence recognition test, which is advantageous for testing individual speech recognition in a way that is comparable across languages. Individual predictions of speech recognition thresholds in stationary and in fluctuating noise were derived using the audiogram and an estimate of the internal level uncertainty for modeling the individual Plomp curves fitted to the data with the Attenuation (A-) and Distortion (D-) parameters of the Plomp approach. The “typical” audiogram shapes from Bisgaard et al., with or without a “typical” level uncertainty, and the individual data were used for individual predictions. As a result, the individualization of the level uncertainty was found to be more important than the exact shape of the individual audiogram to accurately model the outcome of the German Matrix test in stationary or fluctuating noise for listeners with hearing impairment. The prediction accuracy of the individualized approach also outperforms the (modified) Speech Intelligibility Index approach, which is based on the individual threshold data only. PMID:27604782

  17. Sentence Recognition Prediction for Hearing-impaired Listeners in Stationary and Fluctuation Noise With FADE

    Directory of Open Access Journals (Sweden)

    Birger Kollmeier

    2016-06-01

    To characterize the individual patient’s hearing impairment as obtained with the matrix sentence recognition test, a simulation Framework for Auditory Discrimination Experiments (FADE) is extended here using the Attenuation and Distortion (A+D) approach by Plomp as a blueprint for setting the individual processing parameters. FADE has been shown to predict the outcome of both speech recognition tests and psychoacoustic experiments based on simulations using an automatic speech recognition system requiring only a few assumptions. It builds on the closed-set matrix sentence recognition test, which is advantageous for testing individual speech recognition in a way that is comparable across languages. Individual predictions of speech recognition thresholds in stationary and in fluctuating noise were derived using the audiogram and an estimate of the internal level uncertainty for modeling the individual Plomp curves fitted to the data with the Attenuation (A-) and Distortion (D-) parameters of the Plomp approach. The “typical” audiogram shapes from Bisgaard et al., with or without a “typical” level uncertainty, and the individual data were used for individual predictions. As a result, the individualization of the level uncertainty was found to be more important than the exact shape of the individual audiogram to accurately model the outcome of the German Matrix test in stationary or fluctuating noise for listeners with hearing impairment. The prediction accuracy of the individualized approach also outperforms the (modified) Speech Intelligibility Index approach, which is based on the individual threshold data only.

  18. Listening to music during electromyography does not influence the examinee's anxiety and pain levels.

    Science.gov (United States)

    Abraham, Alon; Drory, Vivian E

    2014-09-01

    Listening to music is a low-cost intervention that has demonstrated ability to reduce pain and anxiety levels in various medical procedures. Subjects undergoing electrophysiological examinations were randomized into a music-listening group and a control group. Visual analog scales were used to measure anxiety and pain levels during the procedure. Thirty subjects were randomized to each group. No statistically significant difference was found in anxiety or pain levels during the procedure between groups. However, most subjects in the music-listening group reported anxiety and pain reduction and would prefer to hear music in a future examination. Although listening to music during electrophysiological examinations did not reduce anxiety or pain significantly, most subjects felt a positive effect and would prefer to hear music; therefore, we suggest that music may be offered optionally in the electromyography laboratory setting. © 2014 Wiley Periodicals, Inc.

  19. The Listening and Spoken Language Data Repository: Design and Project Overview

    Science.gov (United States)

    Bradham, Tamala S.; Fonnesbeck, Christopher; Toll, Alice; Hecht, Barbara F.

    2018-01-01

    Purpose: The purpose of the Listening and Spoken Language Data Repository (LSL-DR) was to address a critical need for a systemwide outcome data-monitoring program for the development of listening and spoken language skills in highly specialized educational programs for children with hearing loss highlighted in Goal 3b of the 2007 Joint Committee…

  20. Music perception and appraisal: cochlear implant users and simulated cochlear implant listening.

    Science.gov (United States)

    Wright, Rose; Uchanski, Rosalie M

    2012-05-01

    The inability to hear music well may contribute to decreased quality of life for cochlear implant (CI) users. Researchers have reported recently on the generally poor ability of CI users to perceive music, and a few researchers have reported on the enjoyment of music by CI users. However, the relation between music perception skills and music enjoyment is much less explored. Only one study has attempted to predict CI users' enjoyment and perception of music from the users' demographic variables and other perceptual skills (Gfeller et al, 2008). Gfeller's results yielded different predictive relationships for music perception and music enjoyment, and the relationships were weak, at best. The first goal of this study is to clarify the nature and relationship between music perception skills and musical enjoyment for CI users, by employing a battery of music tests. The second goal is to determine whether normal hearing (NH) subjects, listening with a CI simulation, can be used as a model to represent actual CI users for either music enjoyment ratings or music perception tasks. A prospective, cross-sectional observational study. Original music stimuli (unprocessed) were presented to CI users, and music stimuli processed with CI-simulation software were presented to 20 NH listeners (CIsim). As a control, original music stimuli were also presented to five other NH listeners. All listeners appraised 24 musical excerpts, performed music perception tests, and filled out a musical background questionnaire. Music perception tests were the Appreciation of Music in Cochlear Implantees (AMICI), Montreal Battery for Evaluation of Amusia (MBEA), Melodic Contour Identification (MCI), and University of Washington Clinical Assessment of Music Perception (UW-CAMP). Twenty-five NH adults (22-56 yr old), recruited from the local and research communities, participated in the study. 
Ten adult CI users (46-80 yr old), recruited from the patient population of the local adult cochlear implant

  1. Short-term auditory effects of listening to an MP3 player.

    Science.gov (United States)

    Keppler, Hannah; Dhooge, Ingeborg; Maes, Leen; D'haenens, Wendy; Bockstael, Annelies; Philips, Birgit; Swinnen, Freya; Vinck, Bart

    2010-06-01

    To determine the output levels of a commercially available MPEG layer-3 (MP3) player and to evaluate changes in hearing after 1 hour of listening to the MP3 player. First, A-weighted sound pressure levels (measured in decibels [dBA]) for 1 hour of pop-rock music on an MP3 player were measured on a head and torso simulator. Second, after participants listened to 1 hour of pop-rock music using an MP3 player, changes in hearing were evaluated with pure-tone audiometry, transient-evoked otoacoustic emissions, and distortion product otoacoustic emissions. Twenty-one participants were exposed to pop-rock music in 6 different sessions using 2 types of headphones at multiple preset gain settings of the MP3 player. Output levels of an MP3 player and temporary threshold and emission shifts after 1 hour of listening. The output levels at the full gain setting were 97.36 dBA and 102.56 dBA for the supra-aural headphones and stock earbuds, respectively. In the noise exposure group, significant changes in hearing thresholds and transient-evoked otoacoustic emission amplitudes were found between preexposure and postexposure measurements. However, this pattern was not seen for distortion product otoacoustic emission amplitudes. Significant differences in the incidence of significant threshold or emission shifts were observed between almost every session of the noise exposure group compared with the control group. Temporary changes in hearing sensitivity measured by audiometry and otoacoustic emissions indicate the potential harmful effects of listening to an MP3 player. Further research is needed to evaluate the long-term risk of cumulative noise exposure on the auditory system of adolescents and adults.

  2. Noise exposure in gold miners: utilising audiogram configuration to determine hearing handicap

    CSIR Research Space (South Africa)

    Vermaas, RL

    2007-09-01

    of important stimuli, which assists the listener to exclude background sounds. NIHL results in sounds being heard in an abnormal way, and the hearing loss results in reduced hearing thresholds and reduced supra-threshold functioning and speech processing... to frequency responses of hearing aids and listening devices during rehabilitation. The results of this study highlight the need for an extension to the traditional use of PLH, namely as a measure of when compensation for NIHL will be paid. PLH should...

  3. Benefits of incorporating the adaptive dynamic range optimization amplification scheme into an assistive listening device for people with mild or moderate hearing loss.

    Science.gov (United States)

    Chang, Hung-Yue; Luo, Ching-Hsing; Lo, Tun-Shin; Chen, Hsiao-Chuan; Huang, Kuo-You; Liao, Wen-Huei; Su, Mao-Chang; Liu, Shu-Yu; Wang, Nan-Mai

    2017-08-28

    This study investigated whether a self-designed assistive listening device (ALD) that incorporates an adaptive dynamic range optimization (ADRO) amplification strategy can surpass a commercially available monaurally worn linear ALD, the SM100. Both subjective and objective measurements were implemented: Mandarin Hearing-In-Noise Test (MHINT) scores were the objective measurement, whereas participant satisfaction was the subjective measurement. The comparison was performed in a mixed design (subjects' hearing status mild or moderate, quiet versus noisy environment, and linear versus ADRO scheme). The participants were two groups of hearing-impaired subjects, nine with mild and eight with moderate hearing loss. The results for the ADRO system revealed a significant difference in the MHINT sentence reception threshold (SRT) in noisy environments between monaurally aided and unaided conditions, whereas the linear system did not. The benchmark results showed that the ADRO scheme is effectively beneficial to people with mild or moderate hearing loss in noisy environments. The satisfaction ratings regarding overall speech quality indicated that the participants were satisfied with the speech quality of both the ADRO and linear schemes in quiet environments, and were more satisfied with ADRO than with the linear scheme in noisy environments.

  4. Personal cassette players ('Walkman'). Do they cause noise-induced hearing loss?

    Science.gov (United States)

    Turunen-Rise, I; Flottorp, G; Tvete, O

    1991-01-01

    Playing selected types of music on five different personal cassette players (PCPs) and using different gain (volume) settings, A-weighted maximum and equivalent sound pressure levels (SPLs) were measured on KEMAR (Knowles Electronics Manikin for Acoustic Research). The octave band SPLs were measured on KEMAR ear and transformed to field values in order to compare measured values with the Norwegian noise risk criteria. Temporary threshold shifts (TTS) measured in 6 subjects after listening to two different pop music cassettes on one PCP in two separate sessions, are presented. Based upon these studies we conclude that the risk of acquiring permanent noise-induced hearing loss (NIHL) from use of PCP is very small for what we found to be normal listening conditions.

  5. The Effects of Audiovisual Stimulation on the Acceptance of Background Noise.

    Science.gov (United States)

    Plyler, Patrick N; Lang, Rowan; Monroe, Amy L; Gaudiano, Paul

    2015-05-01

    Previous examinations of noise acceptance have been conducted using an auditory stimulus only; the effect of supplementing the auditory stimulus with visual speech on acceptance of noise remains largely unexplored. The purpose of the present study was to determine the effect of audiovisual stimulation on the acceptance of noise in listeners with normal and impaired hearing. A repeated-measures design was utilized. A total of 92 adult participants were recruited for this experiment: 54 were listeners with normal hearing and 38 were listeners with sensorineural hearing impairment. Most comfortable levels and acceptable noise levels (ANL) were obtained using auditory and auditory-visual stimulation modes in the unaided listening condition for each participant, and in the aided listening condition for 35 of the participants with impaired hearing who owned hearing aids. Speech-reading ability was assessed using the Utley test for each participant. The addition of visual input did not impact the most comfortable level values for listeners in either group; however, visual input improved unaided ANL values for listeners with normal hearing and aided ANL values in listeners with impaired hearing. The ANL benefit received from visual speech input was related to the auditory ANL in each group; however, it was not related to speech-reading ability for either listener group in any experimental condition. Visual speech input can significantly impact measures of noise acceptance. The current ANL measure may not accurately reflect acceptance of noise in more realistic environments, where the signal of interest is both audible and visible to the listener. American Academy of Audiology.

  6. Patient-reported outcome measures (PROMs) for assessing perceived listening effort in hearing loss: protocol for a systematic review

    Science.gov (United States)

    Rapport, Frances L; Boisvert, Isabelle; McMahon, Catherine M; Hutchings, Hayley A

    2017-01-01

    Introduction In the UK, it is estimated that a disabling hearing loss (HL) affects 1 in 6 people. HL has functional, economic and social-emotional consequences for affected individuals. Intervention for HL focuses on improving access to the auditory signal using hearing aids or cochlear implants. However, even if sounds are audible and speech is understood, individuals with HL often report increased effort when listening. Listening effort (LE) may be measured using self-reported measures such as patient-reported outcome measures (PROMs). PROMs are validated questionnaires completed by patients to measure their perceptions of their own functional status and well-being. When selecting a PROM for use in research or clinical practice, it is necessary to appraise the evidence of a PROM’s acceptability to patients, validity, responsiveness and reliability. Methods and analysis A systematic review of studies evaluating the measurement properties of PROMs available to measure LE in HL will be undertaken. MEDLINE, EMBASE, CINAHL, PsychINFO and Web of Science will be searched electronically. Reference lists of included studies, key journals and the grey literature will be hand-searched to identify further studies for inclusion. Two reviewers will independently complete title, abstract and full-text screening to determine study eligibility. Data on the characteristics of each study and each PROM will be extracted. Methodological quality of the included studies will be appraised using the COnsensus-based Standards for the selection of health Measurement INstruments, the quality of included PROMs appraised and the credibility of the evidence assessed. A narrative synthesis will summarise extracted data. Ethics and dissemination Ethical permission is not required, as this study uses data from published research. Dissemination will be through publication in peer-reviewed journals, conference presentations and the lead author’s doctoral dissertation. Findings may inform the

  7. Modeling Speech Level as a Function of Background Noise Level and Talker-to-Listener Distance for Talkers Wearing Hearing Protection Devices

    DEFF Research Database (Denmark)

    Bouserhal, Rachel E.; Bockstael, Annelies; MacDonald, Ewen

    2017-01-01

    Purpose: Studying the variations in speech levels with changing background noise level and talker-to-listener distance for talkers wearing hearing protection devices (HPDs) can aid in understanding communication in background noise. Method: Speech was recorded using an intra-aural HPD from 12...... complements the existing model presented by Pelegrín-García, Smits, Brunskog, and Jeong (2011) and expands on it by taking into account the effects of occlusion and background noise level on changes in speech sound level. Conclusions: Three models of the relationship between vocal effort, background noise...

  8. Sound localization and speech identification in the frontal median plane with a hear-through headset

    DEFF Research Database (Denmark)

    Hoffmann, Pablo F.; Møller, Anders Kalsgaard; Christensen, Flemming

    2014-01-01

    signals can be superimposed via earphone reproduction. An important aspect of the hear-through headset is its transparency, i.e. how close to real life the electronically amplified sounds can be perceived. Here we report experiments conducted to evaluate the auditory transparency of a hear-through headset...... prototype by comparing human performance in natural, hear-through, and fully occluded conditions for two spatial tasks: frontal vertical-plane sound localization and speech-on-speech spatial release from masking. Results showed that localization performance was impaired by the hear-through headset relative...... to the natural condition though not as much as in the fully occluded condition. Localization was affected the least when the sound source was in front of the listeners. Different from the vertical localization performance, results from the speech task suggest that normal speech-on-speech spatial release from...

  9. Music Perception and Appraisal: Cochlear Implant Users and Simulated CI Listening

    Science.gov (United States)

    Wright, Rose; Uchanski, Rosalie M.

    2012-01-01

    Background The inability to hear music well may contribute to decreased quality of life for cochlear implant (CI) users. Researchers have reported recently on the generally poor ability of CI users to perceive music, and a few researchers have reported on the enjoyment of music by CI users. However, the relation between music perception skills and music enjoyment is much less explored. Only one study has attempted to predict CI users’ enjoyment and perception of music from the users’ demographic variables and other perceptual skills (Gfeller et al., 2008). Gfeller’s results yielded different predictive relationships for music perception and music enjoyment, and the relationships were weak, at best. Purpose The first goal of this study is to clarify the nature and relationship between music perception skills and musical enjoyment for CI users, by employing a battery of music tests. The second goal is to determine whether normal hearing (NH) subjects, listening with a CI-simulation, can be used as a model to represent actual CI users for either music enjoyment ratings or music perception tasks. Research Design A prospective, cross-sectional observational study. Original music stimuli (unprocessed) were presented to CI users, and music stimuli processed with CI-simulation software were presented to twenty NH listeners (CIsim). As a control, original music stimuli were also presented to five other NH listeners. All listeners appraised twenty-four musical excerpts, performed music perception tests, and filled out a musical background questionnaire. Music perception tests were the Appreciation of Music in Cochlear Implantees (AMICI), Montreal Battery for Evaluation of Amusia (MBEA), Melodic Contour Identification (MCI), and University of Washington Clinical Assessment of Music Perception (UW-CAMP). Study Sample Twenty-five NH adults (22 – 56 years old), recruited from the local and research communities, participated in the study. Ten adult CI users (46 – 80

  10. DESCRIPTION OF BRAINSTEM AUDITORY EVOKED RESPONSES (AIR AND BONE CONDUCTION IN CHILDREN WITH NORMAL HEARING

    Directory of Open Access Journals (Sweden)

    A. V. Pashkov

    2014-01-01

    Diagnosis of hearing level in small children with conductive hearing loss associated with congenital craniofacial abnormalities, particularly with agenesis of the external ear and external auditory meatus, is a pressing issue. Conventional methods of assessing hearing in the first years of life, i.e. registration of brainstem auditory evoked responses to acoustic stimuli in the event of air conduction, do not give an indication of the auditory analyzer's condition due to potential conductive hearing loss in these patients. This study was aimed at assessing the potential of diagnosing the auditory analyzer's function by registering brainstem auditory evoked responses (BAERs) to acoustic stimuli transmitted by means of a bone vibrator. The study involved 17 children aged 3–10 years with normal hearing. We compared parameters of registered brainstem auditory evoked responses (peak V) depending on the type of stimulus transmission (air/bone) in children with normal hearing. The data on thresholds of the BAERs registered to acoustic stimuli in the event of air and bone conduction obtained in this study are comparable; hearing thresholds in the event of acoustic stimulation by means of a bone vibrator correlate with the results of the BAERs registered to stimuli transmitted by means of air conduction earphones (r = 0.9). High correlation of thresholds of BAERs to stimuli transmitted by means of a bone vibrator with thresholds of BAERs registered when air conduction earphones were used helps to assess the auditory analyzer's condition in patients with any form of conductive hearing loss.

  11. Evidence for Website Claims about the Benefits of Teaching Sign Language to Infants and Toddlers with Normal Hearing

    Science.gov (United States)

    Nelson, Lauri H.; White, Karl R.; Grewe, Jennifer

    2012-01-01

    The development of proficient communication skills in infants and toddlers is an important component to child development. A popular trend gaining national media attention is teaching sign language to babies with normal hearing whose parents also have normal hearing. Thirty-three websites were identified that advocate sign language for hearing…

  12. The approximate number system and domain-general abilities as predictors of math ability in children with normal hearing and hearing loss.

    Science.gov (United States)

    Bull, Rebecca; Marschark, Marc; Nordmann, Emily; Sapere, Patricia; Skene, Wendy A

    2018-06-01

    Many children with hearing loss (CHL) show a delay in mathematical achievement compared to children with normal hearing (CNH). This study examined whether there are differences in acuity of the approximate number system (ANS) between CHL and CNH, and whether ANS acuity is related to math achievement. Working memory (WM), short-term memory (STM), and inhibition were considered as mediators of any relationship between ANS acuity and math achievement. Seventy-five CHL were compared with 75 age- and gender-matched CNH. ANS acuity, mathematical reasoning, WM, and STM of CHL were significantly poorer compared to CNH. Group differences in math ability were no longer significant when ANS acuity, WM, or STM was controlled. For CNH, WM and STM fully mediated the relationship of ANS acuity to math ability; for CHL, WM and STM only partially mediated this relationship. ANS acuity, WM, and STM are significant contributors to hearing status differences in math achievement, and to individual differences within the group of CHL. Statement of contribution What is already known on this subject? Children with hearing loss often perform poorly on measures of math achievement, although there have been few studies focusing on basic numerical cognition in these children. In typically developing children, the approximate number system predicts math skills concurrently and longitudinally, although there have been some contradictory findings. Recent studies suggest that domain-general skills, such as inhibition, may account for the relationship found between the approximate number system and math achievement. What does this study add? This is the first robust examination of the approximate number system in children with hearing loss, and the findings suggest poorer acuity of the approximate number system in these children compared to hearing children.
The study addresses recent issues regarding the contradictory findings of the relationship of the approximate number system to math ability.

  13. The Sound of Study: Student Experiences of Listening in the University Soundscape

    Science.gov (United States)

    Thoutenhoofd, Ernst D.; Knot-Dickscheit, Jana; Rogge, Jana; van der Meer, Margriet; Schulze, Gisela; Jacobs, Gerold; van den Bogaerde, Beppie

    2016-01-01

    The students from three universities (Groningen, Oldenburg and the University of Applied Sciences in Utrecht) were surveyed on the experience of hearing and listening in their studies. Included in the online survey were established questionnaires on hearing loss, tinnitus, hyperacusis, a subscale on psychosocial strain resulting from impaired…

  14. Assessment of sound quality perception in cochlear implant users during music listening.

    Science.gov (United States)

    Roy, Alexis T; Jiradejvong, Patpong; Carver, Courtney; Limb, Charles J

    2012-04-01

    Although cochlear implant (CI) users frequently report deterioration of sound quality when listening to music, few methods exist to quantify these subjective claims. 1) To design a novel research method for quantifying sound quality perception in CI users during music listening; 2) To validate this method by assessing one attribute of music perception, bass frequency perception, which is hypothesized to be relevant to overall musical sound quality perception. Limitations in bass frequency perception contribute to CI-mediated sound quality deteriorations. The proposed method will quantify this deterioration by measuring CI users' impaired ability to make sound quality discriminations among musical stimuli with variable amounts of bass frequency removal. A method commonly used in the audio industry (multiple stimulus with hidden reference and anchor [MUSHRA]) was adapted for CI users, referred to as CI-MUSHRA. CI users and normal hearing controls were presented with 7 sound quality versions of a musical segment: 5 high-pass filter cutoff versions (200-, 400-, 600-, 800-, 1000-Hz) with decreasing amounts of bass information, an unaltered version ("hidden reference"), and a highly altered version (1,000-1,200 Hz band-pass filter; "anchor"). Participants provided sound quality ratings between 0 (very poor) and 100 (excellent) for each version; ratings reflected differences in perceived sound quality among stimuli. CI users had greater difficulty making overall sound quality discriminations as a function of bass frequency loss than normal hearing controls, as demonstrated by a significantly weaker correlation between bass frequency content and sound quality ratings. In particular, CI users could not perceive sound quality differences among stimuli missing up to 400 Hz of bass frequency information. Bass frequency impairments contribute to sound quality deteriorations during music listening for CI users.
CI-MUSHRA provided a systematic and quantitative assessment of this
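The stimulus construction described in this record (five high-pass versions, a hidden reference, and a band-pass anchor) can be sketched in code. This is an illustrative reconstruction under stated assumptions, not the study's actual implementation: the 4th-order Butterworth filters and the function name are my own choices.

```python
import numpy as np
from scipy.signal import butter, sosfilt

def make_ci_mushra_versions(signal, fs, cutoffs=(200, 400, 600, 800, 1000)):
    """Build the 7 stimulus versions described above: an unaltered
    hidden reference, 5 high-pass filtered versions with increasing
    bass removal, and a 1000-1200 Hz band-pass anchor.
    The filter order (4th-order Butterworth) is an assumption."""
    versions = {"reference": np.asarray(signal, dtype=float)}
    for fc in cutoffs:
        sos = butter(4, fc, btype="highpass", fs=fs, output="sos")
        versions[f"hp_{fc}"] = sosfilt(sos, versions["reference"])
    sos = butter(4, [1000, 1200], btype="bandpass", fs=fs, output="sos")
    versions["anchor"] = sosfilt(sos, versions["reference"])
    return versions

# A 100 Hz "bass" tone is almost entirely removed by the 400 Hz
# high-pass version, while the hidden reference keeps it intact.
fs = 16000
t = np.arange(fs) / fs
bass_tone = np.sin(2 * np.pi * 100 * t)
versions = make_ci_mushra_versions(bass_tone, fs)
```

Raters would then score each version on the 0-100 scale; a flat rating profile across the high-pass versions is the signature of the impaired bass discrimination reported for CI users.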

  15. MP3 player listening habits of 17 to 23 year old university students.

    Science.gov (United States)

    McNeill, Kylie; Keith, Stephen E; Feder, Katya; Konkle, Anne T M; Michaud, David S

    2010-08-01

    This study evaluated the potential risk to hearing associated with the use of portable digital audio players. Twenty-eight university students (12 males, 16 females; aged 17-23) completed a 49-item questionnaire assessing user listening habits and subjective measures of hearing health. Sound level measurements of participants' self-identified typical and 'worst case' volume levels were taken in different classrooms with background sound levels between 43 and 52 dBA. The median frequency and duration of use was 2 h per day, 6.5 days a week. The median sound levels and interquartile ranges (IQR) at typical and 'worst case' volume settings were 71 dBA (IQR=12) and 79 dBA (IQR=9), respectively. When typical sound levels were considered with self-reported duration of daily use, none of the participants surpassed Leq(8) 85 dBA. On the questionnaire, 19 students reported experiencing at least one symptom of possible noise-induced hearing loss. Significant differences in MP3 user listening patterns were found between respondents who had experienced tinnitus and those who had not. The findings add to a growing body of literature that collectively supports a need for further research investigating MP3 player user listening habits in order to assess their potential risk to hearing health.
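The Leq(8) criterion in this record normalizes a listening level to an equivalent continuous 8-hour exposure using the standard equal-energy (3 dB per doubling) exchange rule, Leq(8) = L + 10·log10(T/8). A minimal sketch of that calculation (the function name is mine, not from the study):

```python
import math

def leq_8h(level_dba, hours_per_day):
    """8-hour equivalent continuous exposure level, assuming the
    standard equal-energy (3 dB per doubling) exchange rate."""
    return level_dba + 10 * math.log10(hours_per_day / 8)

# Median 'worst case' level (79 dBA) at the median 2 h/day of use:
# 79 + 10*log10(2/8) ~= 73 dBA, comfortably below the 85 dBA criterion.
worst_case = leq_8h(79, 2)
```

At the reported median levels and durations, this stays below Leq(8) 85 dBA, consistent with the study's finding that no participant exceeded the criterion at typical settings.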

  16. Exploring the link between cognitive abilities and speech recognition in the elderly under different listening conditions

    DEFF Research Database (Denmark)

    Nuesse, Theresa; Steenken, Rike; Neher, Tobias

    2018-01-01

    , and it has been suggested that differences in cognitive abilities may also be important. The objective of this study was to investigate associations between performance in cognitive tasks and speech recognition under different listening conditions in older adults with either age appropriate hearing...... or hearing-impairment. To that end, speech recognition threshold (SRT) measurements were performed under several masking conditions that varied along the perceptual dimensions of dip listening, spatial separation, and informational masking. In addition, a neuropsychological test battery was administered......, which included measures of verbal working- and short-term memory, executive functioning, selective and divided attention, and lexical and semantic abilities. Age-matched groups of older adults with either age-appropriate hearing (ENH, N = 20) or aided hearing impairment (EHI, N = 21) participated...

  17. Age-Related Change in Vestibular Ganglion Cell Populations in Individuals With Presbycusis and Normal Hearing.

    Science.gov (United States)

    Gluth, Michael B; Nelson, Erik G

    2017-04-01

    We sought to establish that the decline of vestibular ganglion cell counts uniquely correlates with spiral ganglion cell counts, cochlear hair cell counts, and hearing phenotype in individuals with presbycusis. The relationship between aging in the vestibular system and aging in the cochlea is a topic of ongoing investigation. Histopathologic age-related changes in the vestibular system may mirror what is seen in the cochlea, but correlations with hearing phenotype and the impact of presbycusis are not well understood. Vestibular ganglion cells, spiral ganglion cells, and cochlear hair cells were counted in specimens from individuals with presbycusis and normal hearing. These were taken from within a large collection of processed human temporal bones. Correlations between histopathology and hearing phenotype were investigated. Vestibular ganglion cell counts were positively correlated with spiral ganglion cell counts and cochlear hair cell counts and were negatively correlated with hearing phenotype. There was no statistical evidence on linear regression to suggest that the relationship between age and cell populations differed significantly according to whether presbycusis was present or not. Superior vestibular ganglion cells were more negatively correlated with age than inferior ganglion cells. No difference in vestibular ganglion cells was noted based on sex. Vestibular ganglion cell counts progressively deteriorate with age, and this loss correlates closely with changes in the cochlea, as well as hearing phenotype. However, these correlations do not appear to be unique to individuals with presbycusis as compared with those with normal hearing.

  18. Devices for hearing loss

    Science.gov (United States)

    ... the sounds you want to hear. Assistive listening devices bring certain sounds directly to your ears. This can ... a small room or on a stage. Other devices can bring the sound from your TV, radio, or music ...

  19. Characteristics of noise-canceling headphones to reduce the hearing hazard for MP3 users.

    Science.gov (United States)

    Liang, Maojin; Zhao, Fei; French, David; Zheng, Yiqing

    2012-06-01

    Three pairs of headphones [standard iPod ear buds and two noise-canceling headphones (NCHs)] were chosen to investigate frequency characteristics of noise reduction, together with their attenuation effects on preferred listening levels (PLLs) in the presence of various types of background noise. Twenty-six subjects with normal hearing chose their PLLs in quiet, street noise, and subway noise using the three headphones and with the noise-canceling system on/off. Both sets of NCHs reduced noise levels at mid and high frequencies. Further noise reductions occurred at low frequencies with the noise-canceling system switched on. In street noise, both NCHs had similar noise reduction effects. In subway noise, better noise reduction effects were found with the expensive NCH and with noise canceling on. A two-way repeated-measures analysis of variance showed that both listening conditions and headphone styles were significant influencing factors on the PLLs. Subjects tended to increase their PLLs as the background noise level increased. Compared with ear buds, PLLs obtained with NCHs on in the presence of background noise were reduced by up to 4 dB. Therefore, proper selection and use of NCHs appears beneficial in reducing the risk of hearing damage caused by high music listening levels in the presence of background noise.

  20. The effect of symmetrical and asymmetrical hearing impairment on music quality perception.

    Science.gov (United States)

    Cai, Yuexin; Zhao, Fei; Chen, Yuebo; Liang, Maojin; Chen, Ling; Yang, Haidi; Xiong, Hao; Zhang, Xueyuan; Zheng, Yiqing

    2016-09-01

    The purpose of this study was to investigate the effect of symmetrical, asymmetrical and unilateral hearing impairment on music quality perception. Six validated music pieces in the categories of classical music, folk music and pop music were used to assess music quality in terms of its 'pleasantness', 'naturalness', 'fullness', 'roughness' and 'sharpness'. 58 participants with sensorineural hearing loss [20 with unilateral hearing loss (UHL), 20 with bilateral symmetrical hearing loss (BSHL) and 18 with bilateral asymmetrical hearing loss (BAHL)] and 29 normal hearing (NH) subjects participated in the present study. Hearing impaired (HI) participants had greater difficulty in overall music quality perception than NH participants. Participants with BSHL rated music pleasantness and naturalness to be higher than participants with BAHL. Moreover, the hearing thresholds of the better ears from BSHL and BAHL participants as well as the hearing thresholds of the worse ears from BSHL participants were negatively correlated to the pleasantness and naturalness perception. HI participants rated the familiar music pieces higher than unfamiliar music pieces in the three music categories. Music quality perception in participants with hearing impairment appeared to be affected by symmetry of hearing loss, degree of hearing loss and music familiarity when they were assessed using the music quality rating test (MQRT). This indicates that binaural symmetrical hearing is important to achieve a high level of music quality perception in HI listeners. This emphasizes the importance of provision of bilateral hearing assistive devices for people with asymmetrical hearing impairment.

  1. Otoacoustic Emissions before and after Listening to Music on a Personal Player

    Science.gov (United States)

    Trzaskowski, Bartosz; Jędrzejczak, W. Wiktor; Piłka, Edyta; Cieślicka, Magdalena; Skarżyński, Henryk

    2014-01-01

    Background The problem of the potential impact of personal music players on the auditory system remains an open question. The purpose of the present study was to investigate, by means of otoacoustic emissions (OAEs), whether listening to music on a personal player affected auditory function. Material/Methods A group of 20 normally hearing adults was exposed to music played on a personal player. Transient evoked OAEs (TEOAEs) and distortion product OAEs (DPOAEs), as well as pure tone audiometry (PTA) thresholds, were tested at 3 stages: before, immediately after, and the next day following 30 min of exposure to music at 86.6 dBA. Results We found no statistically significant changes in OAE parameters or PTA thresholds due to listening to the music. Conclusions These results suggest that exposure to music at levels similar to those used in our study does not disturb cochlear function in a way that can be detected by means of PTA, TEOAE, or DPOAE tests. PMID:25116920

  2. Word Recognition for Temporally and Spectrally Distorted Materials

    DEFF Research Database (Denmark)

    Smith, Sherri L.; Pichora-Fuller, Margaret Kathleen; Wilson, Richard H.

    2012-01-01

    listeners with near-normal hearing and hearing loss performed best in the unaltered condition, followed by the jitter and smear conditions, with the poorest performance in the combined jitter-smear condition in both quiet and noise. Overall, listeners with near-normal hearing performed better than listeners...... to predict group differences, but not the effects of distortion. Individual differences in performance were similar across all distortion conditions with both age and hearing loss being implicated. The speech materials needed to be both spectrally and temporally distorted to mimic the effects of age...

  3. How well can centenarians hear?

    Directory of Open Access Journals (Sweden)

    Zhongping Mao

    With advancements in modern medicine and significant improvements in life conditions in the past four decades, the elderly population is rapidly expanding. There is a growing number of those aged 100 years and older. While many changes in the human body occur with physiological aging, as many as 35% to 50% of the population aged 65 to 75 years have presbycusis. Presbycusis is a progressive sensorineural hearing loss that occurs as people get older. There are many studies of the prevalence of age-related hearing loss in the United States, Europe, and Asia. However, no audiological assessment of the population aged 100 years and older has been done. Therefore, it is not clear how well centenarians can hear. We measured middle ear impedance, pure-tone behavioral thresholds, and distortion-product otoacoustic emission from 74 centenarians living in the city of Shaoxing, China, to evaluate their middle and inner ear functions. We show that most centenarian listeners had an "As" type tympanogram, suggesting reduced static compliance of the tympanic membrane. Hearing threshold tests using pure-tone audiometry show that all centenarian subjects had varying degrees of hearing loss. More than 90% suffered from moderate to severe (41 to 80 dB) hearing loss below 2,000 Hz, and profound (>81 dB) hearing loss at 4,000 and 8,000 Hz. Otoacoustic emission, which is generated by the active process of cochlear outer hair cells, was undetectable in the majority of listeners. Our study shows the extent and severity of hearing loss in the centenarian population and represents the first audiological assessment of their middle and inner ear functions.

  4. How Well Can Centenarians Hear?

    Science.gov (United States)

    Mao, Zhongping; Zhao, Lijun; Pu, Lichun; Wang, Mingxiao; Zhang, Qian; He, David Z. Z.

    2013-01-01

    With advancements in modern medicine and significant improvements in life conditions in the past four decades, the elderly population is rapidly expanding. There is a growing number of those aged 100 years and older. While many changes in the human body occur with physiological aging, as many as 35% to 50% of the population aged 65 to 75 years have presbycusis. Presbycusis is a progressive sensorineural hearing loss that occurs as people get older. There are many studies of the prevalence of age-related hearing loss in the United States, Europe, and Asia. However, no audiological assessment of the population aged 100 years and older has been done. Therefore, it is not clear how well centenarians can hear. We measured middle ear impedance, pure-tone behavioral thresholds, and distortion-product otoacoustic emission from 74 centenarians living in the city of Shaoxing, China, to evaluate their middle and inner ear functions. We show that most centenarian listeners had an “As” type tympanogram, suggesting reduced static compliance of the tympanic membrane. Hearing threshold tests using pure-tone audiometry show that all centenarian subjects had varying degrees of hearing loss. More than 90% suffered from moderate to severe (41 to 80 dB) hearing loss below 2,000 Hz, and profound (>81 dB) hearing loss at 4,000 and 8,000 Hz. Otoacoustic emission, which is generated by the active process of cochlear outer hair cells, was undetectable in the majority of listeners. Our study shows the extent and severity of hearing loss in the centenarian population and represents the first audiological assessment of their middle and inner ear functions. PMID:23755251

  5. Listening to the Shape of a Drum

    Indian Academy of Sciences (India)

    Kesavan, S. Listening to the Shape of a Drum – You Cannot Hear the Shape of a Drum! General Article, Resonance – Journal of Science Education, Volume 3, Issue 10, October 1998, pp. 49–58.

  6. Psychosocial Aspects of Hearing Loss in Children.

    Science.gov (United States)

    Sorkin, Donna L; Gates-Ulanet, Patricia; Mellon, Nancy K

    2015-12-01

    Pediatric hearing loss changed more in the past two decades than it had in the prior 100 years, with children now identified in the first weeks of life and fit early with amplification. Dramatic improvements in hearing technology allow children the opportunity to listen, speak and read on par with typically hearing peers. National laws mandate that public and private schools, workplaces, and anywhere people go must be accessible to individuals with disabilities. In 2015, most children with hearing loss attended mainstream schools with typically hearing peers. Psychosocial skills still present challenges for some children with hearing loss. Copyright © 2015 Elsevier Inc. All rights reserved.

  7. Relatively effortless listening promotes understanding and recall of medical instructions in older adults

    Directory of Open Access Journals (Sweden)

    Roberta Maria DiDonato

    2015-06-01

    Communication success under adverse conditions requires efficient and effective recruitment of both bottom-up (sensori-perceptual) and top-down (cognitive-linguistic) resources to decode the intended auditory-verbal message. Employing these limited capacity resources has been shown to vary across the lifespan, with evidence indicating that younger adults out-perform older adults for both comprehension and memory of the message. This study examined how sources of interference arising from the speaker (message spoken with conversational versus clear speech technique), the listener (hearing-listening and cognitive-linguistic factors), and the environment (in competing speech babble noise versus quiet) interact and influence learning and memory performance using more ecologically valid methods than has been done previously. The results suggest that when older adults listened to complex medical prescription instructions with ‘clear speech’ (presented at audible levels through insertion earphones), their learning efficiency, immediate and delayed memory performance improved relative to their performance when they listened with a normal conversational speech rate (presented at audible levels in sound field). This better learning and memory performance for clear speech listening was maintained even in the presence of speech babble noise. The finding that there was the largest learning-practice effect on 2nd trial performance in the conversational speech when the clear speech listening condition was first is suggestive of greater experience-dependent perceptual learning or adaptation to the speaker’s speech and voice pattern in clear speech. This suggests that experience-dependent perceptual learning plays a role in facilitating the language processing and comprehension of a message and subsequent memory encoding.

  8. Relatively effortless listening promotes understanding and recall of medical instructions in older adults

    Science.gov (United States)

    DiDonato, Roberta M.; Surprenant, Aimée M.

    2015-01-01

    Communication success under adverse conditions requires efficient and effective recruitment of both bottom-up (sensori-perceptual) and top-down (cognitive-linguistic) resources to decode the intended auditory-verbal message. Employing these limited capacity resources has been shown to vary across the lifespan, with evidence indicating that younger adults out-perform older adults for both comprehension and memory of the message. This study examined how sources of interference arising from the speaker (message spoken with conversational vs. clear speech technique), the listener (hearing-listening and cognitive-linguistic factors), and the environment (in competing speech babble noise vs. quiet) interact and influence learning and memory performance using more ecologically valid methods than has been done previously. The results suggest that when older adults listened to complex medical prescription instructions with “clear speech,” (presented at audible levels through insertion earphones) their learning efficiency, immediate, and delayed memory performance improved relative to their performance when they listened with a normal conversational speech rate (presented at audible levels in sound field). This better learning and memory performance for clear speech listening was maintained even in the presence of speech babble noise. The finding that there was the largest learning-practice effect on 2nd trial performance in the conversational speech when the clear speech listening condition was first is suggestive of greater experience-dependent perceptual learning or adaptation to the speaker's speech and voice pattern in clear speech. This suggests that experience-dependent perceptual learning plays a role in facilitating the language processing and comprehension of a message and subsequent memory encoding. PMID:26106353

  9. An acoustic analysis of laughter produced by congenitally deaf and normally hearing college students

    Science.gov (United States)

    Makagon, Maja M.; Funayama, E. Sumie; Owren, Michael J.

    2008-01-01

    Relatively few empirical data are available concerning the role of auditory experience in nonverbal human vocal behavior, such as laughter production. This study compared the acoustic properties of laughter in 19 congenitally, bilaterally, and profoundly deaf college students and in 23 normally hearing control participants. Analyses focused on degree of voicing, mouth position, air-flow direction, temporal features, relative amplitude, fundamental frequency, and formant frequencies. Results showed that laughter produced by the deaf participants was fundamentally similar to that produced by the normally hearing individuals, which in turn was consistent with previously reported findings. Finding comparable acoustic properties in the sounds produced by deaf and hearing vocalizers confirms the presumption that laughter is importantly grounded in human biology, and that auditory experience with this vocalization is not necessary for it to emerge in species-typical form. Some differences were found between the laughter of deaf and hearing groups; the most important being that the deaf participants produced lower-amplitude and longer-duration laughs. These discrepancies are likely due to a combination of the physiological and social factors that routinely affect profoundly deaf individuals, including low overall rates of vocal fold use and pressure from the hearing world to suppress spontaneous vocalizations. PMID:18646991

  10. The cerebral functional location in normal subjects when they listened to a story in unfamiliar Japanese

    International Nuclear Information System (INIS)

    Sun Da; Xu Wei; Zhang Hongwei; Liu Hongbiao; Liu Qichang

    2004-01-01

    Purpose: To detect the cerebral functional location when normal subjects listened to a story in unfamiliar Japanese. Methods: Seven normal young students of the medical college of Zhejiang University (22-24 years old; 4 male, 3 female) took part. They first underwent 99mTc-ECD brain imaging at rest using a dual-head gamma camera with fan-beam collimators. After 2-4 days they were asked to listen carefully to a story in unfamiliar Japanese on tape for 20 minutes. 99mTc-ECD was administered during the first 3 minutes while they listened to the story, and brain imaging was performed 30-60 minutes after the tracer was administered. Results: Compared with the resting state, while listening to the story in unfamiliar Japanese the right superior temporal region was activated in 5 cases, the left superior temporal in 2, the right inferior temporal in 2, and the left inferior temporal in 1. Among them, both temporal lobes were activated in 2 cases, only the right temporal in 4 cases, and only the left temporal in 1 case. Although they were not asked to remember the plot of the story, the frontal lobes were lightly activated in all subjects: both inferior and/or medial frontal lobes in 3 cases, the right inferior and/or medial frontal lobes in 2 cases, the left inferior frontal in 5 cases, the right inferior frontal in 1 case, and the right superior frontal in 3 cases. The occipital lobes were activated in 6 subjects: both occipital lobes in 5 cases and the left occipital in 1 case. Other activated regions included the parietal lobes (right in 2 cases, left in 1 case). Conclusion: While listening to the story in unfamiliar Japanese, the auditory association cortex in the superior temporal region (more on the right than the left) and part of the right midtemporal region were activated. The frontal lobes were also widely activated, mainly the left inferior frontal lobe (Broca's area), the frontal eye fields, and the superolateral prefrontal cortex.
It is consistent with the

  11. The Effects of Listener's Familiarity about a Talker on the Free Recall Task of Spoken Words

    Directory of Open Access Journals (Sweden)

    Chikako Oda

    2011-10-01

    Full Text Available Several recent studies have examined the interaction between a talker's acoustic characteristics and spoken word recognition in speech perception, and have shown that a listener's familiarity with a talker influences the ease of spoken-word processing. The present study examined the effect of listeners' familiarity with talkers on a free recall task of words spoken by two talkers. Subjects participated in three conditions: the listener has (1) explicit knowledge, (2) implicit knowledge, or (3) no knowledge of the talker. In condition (1), subjects were familiar with the talkers' voices and were informed in advance whose voices they would hear. In condition (2), subjects were familiar with the talkers' voices but were not informed whose voices they would hear. In condition (3), subjects were entirely unfamiliar with the talkers' voices and were not informed whose voices they would hear. We analyzed the percentage of correct answers and compared these results across the three conditions. We discuss whether a listener's knowledge of an individual talker's acoustic characteristics, stored in long-term memory, could reduce the cognitive resources required for verbal information processing.

  12. The effects of previewing questions, repetition of input, and topic preparation on listening comprehension of Iranian EFL learners

    Directory of Open Access Journals (Sweden)

    Afsar Rouhi

    2014-07-01

    Full Text Available In this study, an attempt was made to examine the effects of previewing questions, repetition of input, and topic preparation on the listening comprehension of Iranian learners of English. The study was conducted with 104 high school students in three experimental groups and one control group. The participants in the previewing-questions group read the comprehension questions before hearing the text and answering the questions. The topic-preparation group read topic-related texts in Persian and then previewed the questions; they then listened to the texts and answered the questions. The repetition-of-input group heard each text twice, previewing the questions before each hearing, and then answered the comprehension questions. The control group, however, had only one hearing before answering the questions. The results showed that the topic-preparation group performed better than the other participating groups. The repetition group, in turn, did better than the previewing group, although there was no statistically significant difference between the previewing and repetition groups. Based on the results obtained, it can be argued that providing and/or activating background knowledge and repeating a listening task may facilitate listening comprehension in EFL classroom settings. The findings and pedagogical implications of the study are discussed in detail.

  13. Rotatory and collic vestibular evoked myogenic potential testing in normal-hearing and hearing-impaired children.

    Science.gov (United States)

    Maes, Leen; De Kegel, Alexandra; Van Waelvelde, Hilde; Dhooge, Ingeborg

    2014-01-01

    Vertigo and imbalance are often underestimated in the pediatric population, due to limited communication abilities, atypical symptoms, and relatively quick adaptation and compensation in children. Moreover, examination and interpretation of vestibular tests are very challenging, because of difficulties with cooperation and maintenance of alertness, and because testing can sometimes provoke nausea. Therefore, it is of great importance for each vestibular laboratory to implement a child-friendly test protocol with age-appropriate normative data. Because of the often masked appearance of vestibular problems in young children, the vestibular organ should be routinely examined in high-risk pediatric groups, such as children with a hearing impairment. Purposes of the present study were (1) to determine age-appropriate normative data for two child-friendly vestibular laboratory techniques (rotatory and collic vestibular evoked myogenic potential [cVEMP] test) in a group of children without auditory or vestibular complaints, and (2) to examine vestibular function in a group of children presenting with bilateral hearing impairment. Forty-eight typically developing children (mean age 8 years 0 months; range: 4 years 1 month to 12 years 11 months) without any auditory or vestibular complaints as well as 39 children (mean age 7 years 8 months; range: 3 years 8 months to 12 years 10 months) with a bilateral sensorineural hearing loss were included in this study. All children underwent three sinusoidal rotations (0.01, 0.05, and 0.1 Hz at 50 degrees/s) and bilateral cVEMP testing. No significant age differences were found for the rotatory test, whereas a significant increase of N1 latency and a significant threshold decrease were noticeable for the cVEMP, resulting in age-appropriate normative data.
Hearing-impaired children demonstrated significantly lower gain values at the 0.01 Hz rotation and a larger percentage of absent cVEMP responses compared with normal-hearing children.

  14. Clear speech and lexical competition in younger and older adult listeners.

    Science.gov (United States)

    Van Engen, Kristin J

    2017-08-01

    This study investigated whether clear speech reduces the cognitive demands of lexical competition by crossing speaking style with lexical difficulty. Younger and older adults identified more words in clear versus conversational speech and more easy words than hard words. An initial analysis suggested that the effect of lexical difficulty was reduced in clear speech, but more detailed analyses within each age group showed this interaction was significant only for older adults. The results also showed that both groups improved over the course of the task and that clear speech was particularly helpful for individuals with poorer hearing: for younger adults, clear speech eliminated hearing-related differences that affected performance on conversational speech. For older adults, clear speech was generally more helpful to listeners with poorer hearing. These results suggest that clear speech affords perceptual benefits to all listeners and, for older adults, mitigates the cognitive challenge associated with identifying words with many phonological neighbors.

  15. A critical review of hearing-aid single-microphone noise-reduction studies in adults and children.

    Science.gov (United States)

    Chong, Foong Yen; Jenstad, Lorienne M

    2017-10-26

    Single-microphone noise reduction (SMNR) is implemented in hearing aids to suppress background noise. The purpose of this article was to provide a critical review of peer-reviewed studies in adults and children with sensorineural hearing loss who were fitted with hearing aids incorporating SMNR. Articles published between 2000 and 2016 were searched in PUBMED and EBSCO databases. Thirty-two articles were included in the final review. Most studies with adult participants showed that SMNR has no effect on speech intelligibility. Positive results were reported for acceptance of background noise, preference, and listening effort. Studies of school-aged children were consistent with the findings of adult studies. No studies of infants or young children under 5 years of age were found. Recent studies on noise-reduction systems not yet available in wearable hearing aids have documented benefits of noise reduction on memory for speech processing in older adults. This evidence supports the use of SMNR for adults and school-aged children when the aim is to improve listening comfort or reduce listening effort. Future research should test SMNR with infants and children who are younger than 5 years of age. Further development, testing, and clinical trials should be carried out on algorithms not yet available in wearable hearing aids. Testing higher-level cognitive processing of speech and the learning of novel sounds or words could reveal benefits of advanced signal processing features. These approaches should be expanded to other populations such as children and younger adults. Implications for rehabilitation The review provides a quick reference for students and clinicians regarding the efficacy and effectiveness of SMNR in wearable hearing aids. This information is useful during counseling sessions to build realistic expectations among hearing aid users. Most studies in the adult population suggest that SMNR may provide some benefits to adult listeners in terms of listening

  16. Conductive hearing loss and bone conduction devices: restored binaural hearing?

    Science.gov (United States)

    Agterberg, Martijn J H; Hol, Myrthe K S; Cremers, Cor W R J; Mylanus, Emmanuel A M; van Opstal, John; Snik, Ad F M

    2011-01-01

    An important aspect of binaural hearing is the proper detection of interaural sound level differences and interaural timing differences. Assessments of binaural hearing were made in patients with acquired unilateral conductive hearing loss (UCHL, n = 11) or congenital UCHL (n = 10) after unilateral application of a bone conduction device (BCD), and in patients with bilateral conductive or mixed hearing loss after bilateral BCD application. Benefit (bilateral versus unilateral listening) was assessed by measuring directional hearing, compensation of the acoustic head shadow, binaural summation and binaural squelch. Measurements were performed after an acclimatization time of at least 10 weeks. Unilateral BCD application was beneficial, but there was less benefit in the patients with congenital UCHL as compared to patients with acquired UCHL. In adults with bilateral hearing loss, bilateral BCD application was clearly beneficial as compared to unilateral BCD application. Binaural summation was present, but binaural squelch could not be proven. To explain the poor results in the patients with congenital UCHL, two factors seemed to be important. First, a critical period in the development of binaural hearing might affect the binaural hearing abilities. Second, crossover stimulation, referring to additional stimulation of the cochlea contralateral to the BCD side, might deteriorate binaural hearing in patients with UCHL. Copyright © 2011 S. Karger AG, Basel.

  17. Lexical tone recognition in noise in normal-hearing children and prelingually deafened children with cochlear implants.

    Science.gov (United States)

    Mao, Yitao; Xu, Li

    2017-01-01

    The purpose of the present study was to investigate Mandarin tone recognition in background noise in children with cochlear implants (CIs), and to examine the potential factors contributing to their performance. Tone recognition was tested using a two-alternative forced-choice paradigm in various signal-to-noise ratio (SNR) conditions (i.e. quiet, +12, +6, 0, and -6 dB). Linear correlation analysis was performed to examine possible relationships between the tone-recognition performance of the CI children and the demographic factors. Sixty-six prelingually deafened children with CIs and 52 normal-hearing (NH) children as controls participated in the study. Children with CIs showed an overall poorer tone-recognition performance and were more susceptible to noise than their NH peers. Tone confusions between Mandarin tone 2 and tone 3 were most prominent in both CI and NH children except for in the poorest SNR conditions. Age at implantation was significantly correlated with tone-recognition performance of the CI children in noise. There is a marked deficit in tone recognition in prelingually deafened children with CIs, particularly in noise listening conditions. While factors that contribute to the large individual differences are still elusive, early implantation could be beneficial to tone development in pediatric CI users.
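
The graded SNR conditions used in studies like this one are typically created by scaling a noise masker relative to the speech signal. A minimal sketch of that procedure, as a generic illustration rather than the study's actual stimulus-generation code (the function names here are our own):

```python
import math

def rms(x):
    """Root-mean-square level of a signal."""
    return math.sqrt(sum(s * s for s in x) / len(x))

def mix_at_snr(speech, noise, snr_db):
    """Scale the noise so that speech power exceeds noise power by
    snr_db decibels, then add the two signals sample by sample."""
    gain = rms(speech) / (rms(noise) * 10 ** (snr_db / 20))
    return [s + g for s, g in zip(speech, (gain * n for n in noise))]
```

At 0 dB SNR the scaled noise has the same RMS level as the speech; at +6 dB it is roughly half that level, and at -6 dB roughly double.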

  18. A comparison of sound quality judgments for monaural and binaural hearing aid processed stimuli.

    Science.gov (United States)

    Balfour, P B; Hawkins, D B

    1992-10-01

    Fifteen adults with bilaterally symmetrical mild and/or moderate sensorineural hearing loss completed a paired-comparison task designed to elicit sound quality preference judgments for monaural/binaural hearing aid processed signals. Three stimuli (speech-in-quiet, speech-in-noise, and music) were recorded separately in three listening environments (audiometric test booth, living room, and a music/lecture hall) through hearing aids placed on a Knowles Electronics Manikin for Acoustics Research. Judgments were made on eight separate sound quality dimensions (brightness, clarity, fullness, loudness, nearness, overall impression, smoothness, and spaciousness) for each of the three stimuli in three listening environments. Results revealed a distinct binaural preference for all eight sound quality dimensions independent of listening environment. Binaural preferences were strongest for overall impression, fullness, and spaciousness. Stimulus type effect was significant only for fullness and spaciousness, where binaural preferences were strongest for speech-in-quiet. After binaural preference data were obtained, subjects ranked each sound quality dimension with respect to its importance for binaural listening relative to monaural. Clarity was ranked highest in importance and brightness was ranked least important. The key to demonstration of improved binaural hearing aid sound quality may be the use of a paired-comparison format.

  19. Evaluation of Adaptive Noise Management Technologies for School-Age Children with Hearing Loss.

    Science.gov (United States)

    Wolfe, Jace; Duke, Mila; Schafer, Erin; Jones, Christine; Rakita, Lori

    2017-05-01

    Children with hearing loss experience significant difficulty understanding speech in noisy and reverberant situations. Adaptive noise management technologies, such as fully adaptive directional microphones and digital noise reduction, have the potential to improve communication in noise for children with hearing aids. However, there are no published studies evaluating the potential benefits children receive from the use of adaptive noise management technologies in simulated real-world environments as well as in daily situations. The objective of this study was to compare speech recognition, speech intelligibility ratings (SIRs), and sound preferences of children using hearing aids equipped with and without adaptive noise management technologies. A single-group, repeated measures design was used to evaluate performance differences obtained in four simulated environments. In each simulated environment, participants were tested in a basic listening program with minimal noise management features, a manual program designed for that scene, and the hearing instruments' adaptive operating system that steered hearing instrument parameterization based on the characteristics of the environment. Twelve children with mild to moderately severe sensorineural hearing loss. Speech recognition and SIRs were evaluated in three hearing aid programs with and without noise management technologies across two different test sessions and various listening environments. Also, the participants' perceptual hearing performance in daily real-world listening situations with two of the hearing aid programs was evaluated during a four- to six-week field trial that took place between the two laboratory sessions. On average, the use of adaptive noise management technology improved sentence recognition in noise for speech presented in front of the participant but resulted in a decrement in performance for signals arriving from behind when the participant was facing forward. 
However, the improvement

  20. The effects of familiarity and complexity on appraisal of complex songs by cochlear implant recipients and normal hearing adults.

    Science.gov (United States)

    Gfeller, Kate; Christ, Aaron; Knutson, John; Witt, Shelley; Mehr, Maureen

    2003-01-01

    The purposes of this study were (a) to develop a test of complex song appraisal that would be suitable for use with adults who use a cochlear implant (assistive hearing device) and (b) to compare the appraisal ratings (liking) of complex songs by adults who use cochlear implants (n = 66) with a comparison group of adults with normal hearing (n = 36). The article describes the development of a computerized test for appraisal, with emphasis on its theoretical basis and the process for item selection of naturalistic stimuli. The appraisal test was administered to the 2 groups to determine the effects of prior song familiarity and subjective complexity on complex song appraisal. Comparison of the 2 groups indicates that the implant users rate 2 of 3 musical genres (country western, pop) as significantly more complex than do normal hearing adults, and give significantly less positive ratings to classical music than do normal hearing adults. Appraisal responses of implant recipients were examined in relation to hearing history, age, performance on speech perception and cognitive tests, and musical background.

  1. Examining the Role of Concentration, Vocabulary and Self-Concept in Listening and Reading Comprehension

    Science.gov (United States)

    Wolfgramm, Christine; Suter, Nicole; Göksel, Eva

    2016-01-01

    Listening is regarded as a key requirement for successful communication and is fundamentally linked to other language skills. Unlike reading, it requires both hearing and processing information in real-time. We therefore propose that the ability to concentrate is a strong predictor of listening comprehension. Using structural equation modeling,…

  2. Modern prescription theory and application: realistic expectations for speech recognition with hearing aids.

    Science.gov (United States)

    Johnson, Earl E

    2013-01-01

    A major decision at the time of hearing aid fitting and dispensing is the amount of amplification to provide listeners (both adult and pediatric populations) for the appropriate compensation of sensorineural hearing impairment across a range of frequencies (e.g., 160-10000 Hz) and input levels (e.g., 50-75 dB sound pressure level). This article describes modern prescription theory for hearing aids within the context of a risk versus return trade-off and efficient frontier analyses. The expected return of amplification recommendations (i.e., generic prescriptions such as National Acoustic Laboratories-Non-Linear 2, NAL-NL2, and Desired Sensation Level Multiple Input/Output, DSL m[i/o]) for the Speech Intelligibility Index (SII) and high-frequency audibility were traded against a potential risk (i.e., loudness). The modeled performance of each prescription was compared one with another and with the efficient frontier of normal hearing sensitivity (i.e., a reference point for the most return with the least risk). For the pediatric population, NAL-NL2 was more efficient for SII, while DSL m[i/o] was more efficient for high-frequency audibility. For the adult population, NAL-NL2 was more efficient for SII, while the two prescriptions were similar with regard to high-frequency audibility. In terms of absolute return (i.e., not considering the risk of loudness), however, DSL m[i/o] prescribed more outright high-frequency audibility than NAL-NL2 for either aged population, particularly, as hearing loss increased. Given the principles and demonstrated accuracy of desensitization (reduced utility of audibility with increasing hearing loss) observed at the group level, additional high-frequency audibility beyond that of NAL-NL2 is not expected to make further contributions to speech intelligibility (recognition) for the average listener.
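
The "return" side of the trade-off described above, the Speech Intelligibility Index, is at its core a band-importance-weighted sum of audibility. A simplified sketch follows; the full ANSI S3.5 computation adds masking, level-distortion, and hearing-loss desensitization terms that are omitted here:

```python
def band_audibility(speech_level, threshold):
    """Audibility of one frequency band: grows linearly over a 30-dB
    range above the listener's threshold, clipped to [0, 1]."""
    return max(0.0, min(1.0, (speech_level - threshold) / 30.0))

def sii_like(speech_levels, thresholds, importance):
    """Importance-weighted sum of band audibilities; the importance
    weights are assumed to sum to 1 across bands."""
    return sum(w * band_audibility(lvl, thr)
               for lvl, thr, w in zip(speech_levels, thresholds, importance))
```

In a prescription trade-off, more gain raises the speech levels and hence the index (the return), but also raises loudness (the risk), which is why the frontier analysis in the article pits one against the other.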

  3. An Examination of Listening Acquisition: A Study of Japanese University Students

    Directory of Open Access Journals (Sweden)

    Bryan Hahn

    2018-02-01

    Full Text Available English language learners seek strong speaking, reading, writing, and listening skills. When it comes to listening, it is commonly assumed that if students have many opportunities to hear spoken English, that exposure will improve their ability to comprehend it. Unfortunately, this is often not the case, since many second language learners do not get the opportunity to develop their listening skills naturally. Despite this, classrooms dedicate little to no time in English for Academic Purposes coursework to listening strategies and techniques. One strategy which has been shown to be effective is "connected speech": students learn to hear the connections between words that native speakers produce naturally. In the Fall of 2016 (September 16 - December 15), 43 students took a class dedicated to training their listening skills to identify this technique. A pre-test and post-test control group design was used to analyze the effect of the listening interventions on listening fluency among English for Academic Purposes students. An independent t-test was used to compare the treatment group's mean scores on the listening section of the Test of English as a Foreign Language exams taken in December 2016 (n=35) with scores taken in April and September 2016 (n=37). The treatment group saw mean gains of +3.03, a finding that was significant. The research also compared Test of English as a Foreign Language results taken in April and September 2015 (n=38) to those taken in December 2015 (n=29). Those students had slightly higher mean gains of +3.65, also significant, perhaps indicating that other variables may have led to similar findings.
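
The independent t-test used to compare mean listening scores can be sketched from scratch with the standard library. This is illustrative only; the Welch (unequal-variance) variant is shown, and the abstract does not state which variant the study used:

```python
import math
from statistics import mean, variance

def welch_t(a, b):
    """Welch's two-sample t statistic and its approximate degrees of
    freedom (Welch-Satterthwaite) for two independent groups."""
    va, vb = variance(a) / len(a), variance(b) / len(b)
    t = (mean(a) - mean(b)) / math.sqrt(va + vb)
    df = (va + vb) ** 2 / (va ** 2 / (len(a) - 1) + vb ** 2 / (len(b) - 1))
    return t, df
```

The resulting t and df would then be compared against the t distribution to obtain the p-value reported as "significant" in studies like this one.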

  4. Risky music listening, permanent tinnitus and depression, anxiety, thoughts about suicide and adverse general health.

    Directory of Open Access Journals (Sweden)

    Ineke Vogel

    Full Text Available OBJECTIVE: To estimate the extent to which exposure to music through earphones or headphones with MP3 players or at discotheques and pop/rock concerts exceeded current occupational safety standards for noise exposure, to examine the extent to which temporary and permanent hearing-related symptoms were reported, and to examine whether the experience of permanent symptoms was associated with adverse perceived general and mental health, symptoms of depression, and thoughts about suicide. METHODS: A total of 943 students in Dutch inner-city senior-secondary vocational schools completed questionnaires about their sociodemographics, music listening behaviors and health. Multiple logistic regression analyses were used to examine associations. RESULTS: About 60% exceeded safety standards for occupational noise exposure, about one third as a result of listening to MP3 players. About 10% of the participants experienced permanent hearing-related symptoms. Temporary hearing symptoms that occurred after using an MP3 player or going to a discotheque or pop/rock concert were associated with exposure to high-volume music. However, compared to participants not experiencing permanent hearing-related symptoms, those experiencing permanent symptoms were less often exposed to high-volume music. Furthermore, they reported at least two times more often symptoms of depression, thoughts about suicide and adverse self-assessed general and mental health. CONCLUSIONS: Risky music-listening behaviors continue up to at least the age of 25 years. Permanent hearing-related symptoms are associated with people's health and wellbeing. Participants experiencing such symptoms appeared to have changed their behavior to be less risky. In order to induce behavior change before permanent and irreversible hearing-related symptoms occur, preventive measures concerning hearing health are needed.

  5. Risky music listening, permanent tinnitus and depression, anxiety, thoughts about suicide and adverse general health.

    Science.gov (United States)

    Vogel, Ineke; van de Looij-Jansen, Petra M; Mieloo, Cathelijne L; Burdorf, Alex; de Waart, Frouwkje

    2014-01-01

    To estimate the extent to which exposure to music through earphones or headphones with MP3 players or at discotheques and pop/rock concerts exceeded current occupational safety standards for noise exposure, to examine the extent to which temporary and permanent hearing-related symptoms were reported, and to examine whether the experience of permanent symptoms was associated with adverse perceived general and mental health, symptoms of depression, and thoughts about suicide. A total of 943 students in Dutch inner-city senior-secondary vocational schools completed questionnaires about their sociodemographics, music listening behaviors and health. Multiple logistic regression analyses were used to examine associations. About 60% exceeded safety standards for occupational noise exposure, about one third as a result of listening to MP3 players. About 10% of the participants experienced permanent hearing-related symptoms. Temporary hearing symptoms that occurred after using an MP3 player or going to a discotheque or pop/rock concert were associated with exposure to high-volume music. However, compared to participants not experiencing permanent hearing-related symptoms, those experiencing permanent symptoms were less often exposed to high-volume music. Furthermore, they reported at least two times more often symptoms of depression, thoughts about suicide and adverse self-assessed general and mental health. Risky music-listening behaviors continue up to at least the age of 25 years. Permanent hearing-related symptoms are associated with people's health and wellbeing. Participants experiencing such symptoms appeared to have changed their behavior to be less risky. In order to induce behavior change before permanent and irreversible hearing-related symptoms occur, preventive measures concerning hearing health are needed.
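
The comparison against occupational safety standards above amounts to a daily noise-dose calculation under an equal-energy (3-dB exchange rate) criterion. A sketch follows, using the EU lower action value of 80 dB(A) over 8 hours as the reference; the specific criterion the study applied may differ:

```python
def noise_dose(exposures, criterion_db=80.0, criterion_hours=8.0):
    """Fraction of the daily noise limit consumed by a list of
    (level_dBA, hours) exposures, with a 3-dB exchange rate:
    every 3 dB above the criterion halves the allowed duration.
    A dose above 1.0 exceeds the standard."""
    dose = 0.0
    for level, hours in exposures:
        allowed = criterion_hours / 2 ** ((level - criterion_db) / 3.0)
        dose += hours / allowed
    return dose
```

Under this criterion a single hour of music at 95 dB(A), a plausible discotheque level, already amounts to four times the daily limit, which is why a majority of the surveyed students exceeded the standard.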

  6. Brainstem auditory evoked response characteristics in normal-hearing subjects with chronic tinnitus and in non-tinnitus group

    Directory of Open Access Journals (Sweden)

    Shadman Nemati

    2014-06-01

    Full Text Available Background and Aim: While most people with tinnitus have some degree of hearing impairment, a small percentage of patients admitted to ear, nose and throat clinics or hearing evaluation centers complain of tinnitus despite having normal hearing thresholds. This study was performed to better understand the probable causes of tinnitus and to investigate possible changes in auditory brainstem function in normal-hearing patients with chronic tinnitus. Methods: In this comparative cross-sectional, descriptive and analytic study, 52 ears (26 with and 26 without tinnitus) were examined. Components of the auditory brainstem response (ABR), including wave latencies and wave amplitudes, were determined in the two groups and analyzed using appropriate statistical methods. Results: The mean differences between the absolute latencies of waves I, III and V were less than 0.1 ms between the two groups, which was not statistically significant. The interpeak latency values of waves I-III, III-V and I-V likewise showed no significant difference between the groups. Only the V/I amplitude ratio was significantly higher in the tinnitus group (p=0.04). Conclusion: The changes observed in wave amplitudes, especially the later waves, can be considered an indication of plastic changes in neuronal activity and of their possible role in the generation of tinnitus in normal-hearing patients.

  7. Small-Group Phonological Awareness Training for Pre-Kindergarten Children with Hearing Loss Who Wear Cochlear Implants and/or Hearing Aids

    Science.gov (United States)

    Werfel, Krystal L.; Douglas, Michael; Ackal, Leigh

    2016-01-01

    This case report details a year-long phonological awareness (PA) intervention for pre-kindergarten children with hearing loss (CHL) who use listening and spoken language. All children wore cochlear implants and/or hearing aids. Intervention occurred for 15 min/day, 4 days per week across the pre-kindergarten school year and was delivered by…

  8. Teaching listening to older second language learners: Classroom implications

    Directory of Open Access Journals (Sweden)

    Agata Słowik

    2017-09-01

    Full Text Available Listening is often listed as the most challenging language skill that students need to learn in the language classroom. Awareness of listening strategies and techniques, such as bottom-up and top-down processes, specific styles of listening, or various compensatory strategies, has therefore been shown to facilitate learning in older individuals. Indeed, older adult learners find decoding aural input more challenging than younger students do. Both students' and teachers' subjective theories and preferences regarding listening comprehension, as well as the learners' cognitive abilities, should therefore be taken into account when designing a teaching model for this age group. The aim of this paper is thus to draw conclusions regarding the processes, styles and strategies involved in teaching listening to older second language learners and to juxtapose them with the existing state of research on age-related hearing impairments, which will serve as the basis for future research.

  9. 45 CFR 1308.11 - Eligibility criteria: Hearing impairment including deafness.

    Science.gov (United States)

    2010-10-01

    ... OF HUMAN DEVELOPMENT SERVICES, DEPARTMENT OF HEALTH AND HUMAN SERVICES THE ADMINISTRATION FOR... impairment including deafness. (a) A child is classified as deaf if a hearing impairment exists which is so... hearing loss can include impaired listening skills, delayed language development, and articulation...

  10. The cerebral functional location in normal subjects when they listened to story in Chinese

    International Nuclear Information System (INIS)

    Sun Da; Zhan Hongwei; Xu Wei; Liu Hongbiao; Bao Chengkan

    2004-01-01

    Purpose: To detect the cerebral functional location when normal subjects listened to a story in Chinese. Methods: Nine normal young students of the medical college of Zhejiang University, 23-24 years old, 5 male and 4 female, were studied. First, they underwent 99mTc-ECD brain imaging at rest using a dual-head gamma camera with fan-beam collimators. After 2-4 days they were asked to listen to a story in Chinese on tape for 20 minutes. The tape related an emotional story about a young president of a radio station, his girlfriend, and one of his audience and fans, a young girl. The subjects were asked to pay special attention to the names of the characters in the story and to when and where the story took place. They were also asked to imagine the scenes of the story. 99mTc-ECD was administered in the first 3 minutes while they listened to the story. Brain imaging was performed 30-60 minutes after the tracer was administered. Results: Compared with the rest state, while listening to the story in Chinese and being asked to remember its scenes, the right superior temporal region was activated in 5 cases, the left superior temporal in 3 cases, and the right mid-temporal in 2 cases. Among them, both temporal lobes were activated in 1 case, only the right temporal in 6 cases, and the left temporal in 2 cases. Interestingly, the inferior frontal and/or medial frontal lobes were lightly activated in all 9 subjects: both frontal lobes in 5 subjects, only the right frontal in 3 cases, and the left frontal in 1 case. The occipital lobes were activated in 6 subjects: both occipital lobes in 5 cases and the left occipital in 1 case. Other activated regions included the pre-cingulate gyrus (1 case) and the left thalamus (1 case). Conclusion: While listening to the story in Chinese and being asked to remember its plot, the auditory association cortex in the superior temporal region (more on the right than the left) and some right mid-temporal areas were activated. The inferior frontal and/or medial frontal lobes were also activated.

  11. Development of the Word Auditory Recognition and Recall Measure: A Working Memory Test for Use in Rehabilitative Audiology.

    Science.gov (United States)

    Smith, Sherri L; Pichora-Fuller, M Kathleen; Alexander, Genevieve

    The purpose of this study was to develop the Word Auditory Recognition and Recall Measure (WARRM) and to conduct the inaugural evaluation of the performance of younger adults with normal hearing, older adults with normal to near-normal hearing, and older adults with pure-tone hearing loss on the WARRM. The WARRM is a new test designed for concurrently assessing word recognition and auditory working memory performance in adults who may have pure-tone hearing loss. The test consists of 100 monosyllabic words based on widely used speech-recognition test materials. The 100 words are presented in recall set sizes of 2, 3, 4, 5, and 6 items, with 5 trials in each set size. The WARRM yields a word-recognition score and a recall score. The WARRM was administered to all participants in three listener groups under two processing conditions in a mixed model (between-subjects, repeated measures) design. The between-subjects factor was group, with 48 younger listeners with normal audiometric thresholds (younger listeners with normal hearing [YNH]), 48 older listeners with normal thresholds through 3000 Hz (older listeners with normal hearing [ONH]), and 48 older listeners with sensorineural hearing loss (older listeners with hearing loss [OHL]). The within-subjects factor was WARRM processing condition (no additional task or with an alphabet judgment task). The associations between results on the WARRM test and results on a battery of other auditory and memory measures were examined. Word-recognition performance on the WARRM was not affected by processing condition or set size and was near ceiling for the YNH and ONH listeners (99 and 98%, respectively) with both groups performing significantly better than the OHL listeners (83%). The recall results were significantly better for the YNH, ONH, and OHL groups with no processing (93, 84, and 75%, respectively) than with the alphabet processing (86, 77, and 70%). 
In both processing conditions, recall was best for YNH, followed by

  12. Persian competing word test: Development and preliminary results in normal children

    Directory of Open Access Journals (Sweden)

    Mohammad Ebrahim Mahdavi

    2008-12-01

    Full Text Available Background and Aim: Assessment of central auditory processing skills requires various behavioral tests in the format of a test battery. There are few Persian speech tests for documenting central auditory processing disorders. The purpose of this study was to develop a dichotic test composed of monosyllabic words suitable for evaluation of central auditory processing in Persian-speaking children, and to report its preliminary results in a group of normal children. Materials and Methods: The Persian words-in-competing-manner test was developed using the most frequent monosyllabic words in children's storybooks, as reported in previous research. The test was performed at MCL on forty-five normal children (39 right-handed and 6 left-handed) aged 5-11 years. The children did not show any obvious problem in hearing, speech, language, or learning. Free-recall (n=28) and directed-listening (n=17) tasks were investigated. Results: The results show that in the directed-listening task, there is a significant advantage for the pre-cued ear relative to the opposite side. A right-ear advantage is evident in the free-recall condition. Average performance of the children in directed recall is significantly better than in free recall. The average raw score of the test increases with the children's age. Conclusion: The Persian words-in-competing-manner test, as a dichotic test, can show the major characteristics of dichotic listening and the effect of maturation of the central auditory system in normal children.
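
    The right-ear advantage reported above is conventionally quantified with a laterality index. The sketch below uses the standard (R − L)/(R + L) formula on per-ear scores; the function name and the numbers are illustrative, not taken from this study:

```python
def laterality_index(right_correct, left_correct):
    """Standard dichotic-listening laterality index, in percent.

    Positive values indicate a right-ear advantage, negative values a
    left-ear advantage, and 0 indicates no asymmetry.
    """
    total = right_correct + left_correct
    if total == 0:
        return 0.0
    return 100.0 * (right_correct - left_correct) / total

# Hypothetical free-recall scores for one child (items correct per ear)
print(laterality_index(18, 12))  # → 20.0
```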

  13. Learning to listen again: the role of compliance in auditory training for adults with hearing loss.

    Science.gov (United States)

    Chisolm, Theresa Hnath; Saunders, Gabrielle H; Frederick, Melissa T; McArdle, Rachel A; Smith, Sherri L; Wilson, Richard H

    2013-12-01

    To examine the role of compliance in the outcomes of computer-based auditory training with the Listening and Communication Enhancement (LACE) program in Veterans using hearing aids. The authors examined available LACE training data for 5 tasks (i.e., speech-in-babble, time compression, competing speaker, auditory memory, missing word) from 50 hearing-aid users who participated in a larger, randomized controlled trial designed to examine the efficacy of LACE training. The goals were to determine: (a) whether there were changes in performance over 20 training sessions on trained tasks (i.e., on-task outcomes); and (b) whether compliance, defined as completing all 20 sessions, vs. noncompliance, defined as completing fewer than 20 sessions, influenced performance on parallel untrained tasks (i.e., off-task outcomes). The majority of participants, 84%, completed 20 sessions, with maximum outcome occurring with at least 10 sessions of training for some tasks and up to 20 sessions of training for others. Comparison of baseline to posttest performance revealed statistically significant improvements for 4 of 7 off-task outcome measures for the compliant group, with at least small effect sizes (0.2 or greater). The high compliance in the present study may be attributable to the use of systematized verbal and written instructions with telephone follow-up. Compliance, as expected, appears important for optimizing the outcomes of auditory training. Methods to improve compliance in clinical populations need to be developed, and compliance data are important to report in future studies of auditory training.

  14. Standard-Chinese Lexical Neighborhood Test in normal-hearing young children.

    Science.gov (United States)

    Liu, Chang; Liu, Sha; Zhang, Ning; Yang, Yilin; Kong, Ying; Zhang, Luo

    2011-06-01

    The purposes of the present study were to establish the Standard-Chinese version of the Lexical Neighborhood Test (LNT) and to examine the lexical and age effects on spoken-word recognition in normal-hearing children. Six lists of monosyllabic and six lists of disyllabic words (20 words/list) were selected from a database of daily speech materials for normal-hearing (NH) children of ages 3-5 years. The lists were further divided into "easy" and "hard" halves according to word frequency and neighborhood density in the database, based on the Neighborhood Activation Model (NAM). Ninety-six NH children (ages 4.0 to 7.0 years) were divided into three age groups at 1-year intervals. Speech-perception tests were conducted using the Standard-Chinese monosyllabic and disyllabic LNT. Inter-list performance was found to be equivalent, and inter-rater reliability was high, with 92.5-95% consistency. Word-recognition scores showed that the lexical effects were all significant. Children scored higher with disyllabic words than with monosyllabic words, and "easy" words scored higher than "hard" words. Word-recognition performance also increased with age in each lexical category. A multiple linear regression analysis showed that neighborhood density, age, and word frequency made increasingly large contributions to Chinese word recognition. The results of the present study indicated that performance in Chinese word recognition was influenced by word frequency, age, and neighborhood density, with word frequency playing the major role. These results were consistent with those in other languages, supporting the application of the NAM to the Chinese language. The development of the Standard-Chinese version of the LNT and the establishment of a database for children aged 4-6 years can provide a reliable means of spoken-word recognition testing in children with hearing impairment. Copyright © 2011 Elsevier Ireland Ltd. 
All rights reserved.

  15. Phonological processes in the speech of school-age children with hearing loss: Comparisons with children with normal hearing.

    Science.gov (United States)

    Asad, Areej Nimer; Purdy, Suzanne C; Ballard, Elaine; Fairgray, Liz; Bowen, Caroline

    2018-04-27

    In this descriptive study, phonological processes were examined in the speech of children aged 5;0-7;6 (years; months) with mild to profound hearing loss using hearing aids (HAs) and cochlear implants (CIs), in comparison to their peers. A second aim was to compare phonological processes of HA and CI users. Children with hearing loss (CWHL, N = 25) were compared to children with normal hearing (CWNH, N = 30) with similar age, gender, linguistic, and socioeconomic backgrounds. Speech samples obtained from a list of 88 words, derived from three standardized speech tests, were analyzed using the CASALA (Computer Aided Speech and Language Analysis) program to evaluate participants' phonological systems, based on lax (a process appeared at least twice in the speech of at least two children) and strict (a process appeared at least five times in the speech of at least two children) counting criteria. Developmental phonological processes were eliminated in the speech of younger and older CWNH while eleven developmental phonological processes persisted in the speech of both age groups of CWHL. CWHL showed a similar trend of age of elimination to CWNH, but at a slower rate. Children with HAs and CIs produced similar phonological processes. Final consonant deletion, weak syllable deletion, backing, and glottal replacement were present in the speech of HA users, affecting their overall speech intelligibility. Developmental and non-developmental phonological processes persist in the speech of children with mild to profound hearing loss compared to their peers with typical hearing. The findings indicate that it is important for clinicians to consider phonological assessment in pre-school CWHL and the use of evidence-based speech therapy in order to reduce non-developmental and non-age-appropriate developmental processes, thereby enhancing their speech intelligibility. Copyright © 2018 Elsevier Inc. All rights reserved.

  16. The effect of hearing aid signal-processing schemes on acceptable noise levels: perception and prediction.

    Science.gov (United States)

    Wu, Yu-Hsiang; Stangl, Elizabeth

    2013-01-01

    The acceptable noise level (ANL) test determines the maximum noise level that an individual is willing to accept while listening to speech. The first objective of the present study was to systematically investigate the effect of wide dynamic range compression processing (WDRC), and its combined effect with digital noise reduction (DNR) and directional processing (DIR), on ANL. Because ANL represents the lowest signal-to-noise ratio (SNR) that a listener is willing to accept, the second objective was to examine whether the hearing aid output SNR could predict aided ANL across different combinations of hearing aid signal-processing schemes. Twenty-five adults with sensorineural hearing loss participated in the study. ANL was measured monaurally in two unaided and seven aided conditions, in which the status of the hearing aid processing schemes (enabled or disabled) and the location of noise (front or rear) were manipulated. The hearing aid output SNR was measured for each listener in each condition using a phase-inversion technique. The aided ANL was predicted by unaided ANL and hearing aid output SNR, under the assumption that the lowest acceptable SNR at the listener's eardrum is a constant across different ANL test conditions. Study results revealed that, on average, WDRC increased (worsened) ANL by 1.5 dB, while DNR and DIR decreased (improved) ANL by 1.1 and 2.8 dB, respectively. Because the effects of WDRC and DNR on ANL were opposite in direction but similar in magnitude, the ANL of linear/DNR-off was not significantly different from that of WDRC/DNR-on. The results further indicated that the pattern of ANL change across different aided conditions was consistent with the pattern of hearing aid output SNR change created by processing schemes. Compared with linear processing, WDRC creates a noisier sound image and makes listeners less willing to accept noise. However, this negative effect on noise acceptance can be offset by DNR, regardless of microphone mode
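
    The phase-inversion technique mentioned in this abstract can be sketched as follows. This is a simplified illustration, not the authors' implementation: it assumes near-linear hearing aid processing, records the speech-plus-noise mixture twice with the noise polarity inverted on the second pass, and separates the components by summing and differencing. All signals and names below are synthetic and illustrative:

```python
import numpy as np

def output_snr_db(rec_a, rec_b):
    """Estimate output SNR from two recordings of a device's output:
    rec_a = device(speech + noise), rec_b = device(speech - noise).

    Assumes near-linear processing, so the sum isolates speech and
    the difference isolates noise.
    """
    speech = (rec_a + rec_b) / 2.0   # noise cancels out
    noise = (rec_a - rec_b) / 2.0    # speech cancels out
    p_speech = np.mean(speech ** 2)
    p_noise = np.mean(noise ** 2)
    return 10.0 * np.log10(p_speech / p_noise)

# Synthetic demo with a known SNR of about +3 dB:
rng = np.random.default_rng(0)
fs = 16000
t = np.arange(fs) / fs
speech = np.sin(2 * np.pi * 440 * t)      # stand-in for a speech signal
noise = 0.5 * rng.standard_normal(fs)     # stand-in for babble noise
rec_a = speech + noise                    # pass 1: noise in phase
rec_b = speech - noise                    # pass 2: noise inverted
print(round(output_snr_db(rec_a, rec_b), 1))
```

    With real hearing aid recordings, the two passes would be captured through the device under identical settings; nonlinear processing (e.g., compression) makes the separation approximate rather than exact.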

  17. Assessing spoken word recognition in children who are deaf or hard of hearing: a translational approach.

    Science.gov (United States)

    Kirk, Karen Iler; Prusick, Lindsay; French, Brian; Gotch, Chad; Eisenberg, Laurie S; Young, Nancy

    2012-06-01

    Under natural conditions, listeners use both auditory and visual speech cues to extract meaning from speech signals containing many sources of variability. However, traditional clinical tests of spoken word recognition routinely employ isolated words or sentences produced by a single talker in an auditory-only presentation format. The more central cognitive processes used during multimodal integration, perceptual normalization, and lexical discrimination that may contribute to individual variation in spoken word recognition performance are not assessed in conventional tests of this kind. In this article, we review our past and current research activities aimed at developing a series of new assessment tools designed to evaluate spoken word recognition in children who are deaf or hard of hearing. These measures are theoretically motivated by a current model of spoken word recognition and also incorporate "real-world" stimulus variability in the form of multiple talkers and presentation formats. The goal of this research is to enhance our ability to estimate real-world listening skills and to predict benefit from sensory aid use in children with varying degrees of hearing loss. American Academy of Audiology.

  18. Subjective and psychophysiological indices of listening effort in a competing-talker task

    Science.gov (United States)

    Mackersie, Carol L.; Cones, Heather

    2010-01-01

    Background: The effects of noise and other competing backgrounds on speech recognition performance are well documented. There is less information, however, on listening effort and stress experienced by listeners during a speech recognition task that requires inhibition of competing sounds. Purpose: The purpose was (a) to determine if psychophysiological indices of listening effort were more sensitive than performance measures (percentage correct) obtained near ceiling level during a competing speech task; (b) to determine the relative sensitivity of four psychophysiological measures to changes in task demand; and (c) to determine the relationships between changes in psychophysiological measures and changes in subjective ratings of stress and workload. Research Design: A repeated-measures experimental design was used to examine changes in performance, psychophysiological measures, and subjective ratings in response to increasing task demand. Study Sample: Fifteen adults with normal hearing participated in the study. The mean age of the participants was 27 (range: 24–54). Data Collection and Analysis: Psychophysiological recordings of heart rate, skin conductance, skin temperature, and electromyographic activity (EMG) were obtained during listening tasks of varying demand. Materials from the Dichotic Digits Test were used to modulate task demand. The three levels of task demand were: single digits presented to one ear (low-demand reference condition), single digits presented simultaneously to both ears (medium demand), and a series of two digits presented simultaneously to both ears (high demand). Participants were asked to repeat all the digits they heard while psychophysiological activity was recorded simultaneously. Subjective ratings of task load were obtained after each condition using the NASA-TLX questionnaire. Repeated-measures analyses of variance were completed for each measure using task demand and session as factors. Results: Mean performance was higher than 96% for all listening tasks.

  19. Subjective and psychophysiological indexes of listening effort in a competing-talker task.

    Science.gov (United States)

    Mackersie, Carol L; Cones, Heather

    2011-02-01

    The effects of noise and other competing backgrounds on speech recognition performance are well documented. There is less information, however, on listening effort and stress experienced by listeners during a speech-recognition task that requires inhibition of competing sounds. The purpose was (a) to determine if psychophysiological indexes of listening effort were more sensitive than performance measures (percentage correct) obtained near ceiling level during a competing speech task, (b) to determine the relative sensitivity of four psychophysiological measures to changes in task demand, and (c) to determine the relationships between changes in psychophysiological measures and changes in subjective ratings of stress and workload. A repeated-measures experimental design was used to examine changes in performance, psychophysiological measures, and subjective ratings in response to increasing task demand. Fifteen adults with normal hearing participated in the study. The mean age of the participants was 27 (range: 24-54). Psychophysiological recordings of heart rate, skin conductance, skin temperature, and electromyographic (EMG) activity were obtained during listening tasks of varying demand. Materials from the Dichotic Digits Test were used to modulate task demand. The three levels of task demand were single digits presented to one ear (low-demand reference condition), single digits presented simultaneously to both ears (medium demand), and a series of two digits presented simultaneously to both ears (high demand). Participants were asked to repeat all the digits they heard, while psychophysiological activity was recorded simultaneously. Subjective ratings of task load were obtained after each condition using the National Aeronautics and Space Administration Task Load Index questionnaire. Repeated-measures analyses of variance were completed for each measure using task demand and session as factors. Mean performance was higher than 96% for all listening tasks. 
There

  20. A correlational method to concurrently measure envelope and temporal fine structure weights: effects of age, cochlear pathology, and spectral shaping.

    Science.gov (United States)

    Fogerty, Daniel; Humes, Larry E

    2012-09-01

    The speech signal may be divided into spectral frequency bands, each band containing temporal properties of the envelope and fine structure. This study measured the perceptual weights for the envelope and fine structure in each of three frequency bands for sentence materials in young normal-hearing listeners, older normal-hearing listeners, aided older hearing-impaired listeners, and spectrally matched young normal-hearing listeners. The availability of each acoustic property was independently varied through noisy signal extraction. Thus, the full speech stimulus was presented with noise used to mask six different auditory channels. Perceptual weights were determined by correlating a listener's performance with the signal-to-noise ratio of each acoustic property on a trial-by-trial basis. Results demonstrate that temporal fine structure perceptual weights remain stable across the four listener groups. However, a different weighting topography was observed across the listener groups for envelope cues. Results suggest that spectral shaping used to preserve the audibility of the speech stimulus may alter the allocation of perceptual resources. The relative perceptual weighting of envelope cues may also change with age. Concurrent testing of sentences repeated once on a previous day demonstrated that weighting strategies for all listener groups can change, suggesting an initial stabilization period or susceptibility to auditory training.
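
    The trial-by-trial correlational weighting method described here can be sketched with simulated data. Each channel's signal-to-noise ratio varies independently across trials, and the point-biserial correlation between a channel's SNR and the binary trial outcome serves as that channel's perceptual weight. The channel layout, weights, and noise values below are illustrative assumptions, not the authors' data:

```python
import numpy as np

rng = np.random.default_rng(1)
n_trials = 2000

# Independent per-trial SNRs (dB) for six channels:
# envelope and fine structure in each of three frequency bands.
snrs = rng.uniform(-10, 10, size=(n_trials, 6))

# Simulate a listener who relies mostly on channel 0 and somewhat on
# channel 4; the remaining channels carry little or no weight.
true_weights = np.array([0.8, 0.1, 0.1, 0.0, 0.4, 0.0])
decision_var = snrs @ true_weights + rng.normal(0, 5, n_trials)
correct = (decision_var > np.median(decision_var)).astype(float)

# Perceptual weight = correlation of each channel's SNR with the binary
# outcome (point-biserial), normalized so the weights sum to 1.
raw = np.array([np.corrcoef(snrs[:, k], correct)[0, 1] for k in range(6)])
weights = raw / raw.sum()
print(np.round(weights, 2))
```

    With enough trials, the recovered weights mirror the relative reliance the simulated listener places on each channel, which is the logic behind inferring perceptual weights from behavioral data alone.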

  1. Exploring the Link Between Cognitive Abilities and Speech Recognition in the Elderly Under Different Listening Conditions

    Directory of Open Access Journals (Sweden)

    Theresa Nuesse

    2018-05-01

    Full Text Available Elderly listeners are known to differ considerably in their ability to understand speech in noise. Several studies have addressed the underlying factors that contribute to these differences. These factors include audibility and age-related changes in supra-threshold auditory processing abilities, and it has been suggested that differences in cognitive abilities may also be important. The objective of this study was to investigate associations between performance in cognitive tasks and speech recognition under different listening conditions in older adults with either age-appropriate hearing or hearing impairment. To that end, speech recognition threshold (SRT) measurements were performed under several masking conditions that varied along the perceptual dimensions of dip listening, spatial separation, and informational masking. In addition, a neuropsychological test battery was administered, which included measures of verbal working and short-term memory, executive functioning, selective and divided attention, and lexical and semantic abilities. Age-matched groups of older adults with either age-appropriate hearing (ENH, n = 20) or aided hearing impairment (EHI, n = 21) participated. In repeated linear regression analyses, composite scores of cognitive test outcomes (evaluated using PCA) were included to predict SRTs. These associations were different for the two groups. When hearing thresholds were controlled for, the composite cognitive factors were significantly associated with the SRTs for the ENH listeners. Whereas better lexical and semantic abilities were associated with lower (better) SRTs in this group, there was a negative association between attentional abilities and speech recognition in the presence of spatially separated speech-like maskers. For the EHI group, the pure-tone thresholds (averaged across 0.5, 1, 2, and 4 kHz) were significantly associated with the SRTs, despite the fact that all signals were amplified and therefore in principle audible.

  2. Temporal and spectral contributions to musical instrument identification and discrimination among cochlear implant users.

    Science.gov (United States)

    Prentiss, Sandra M; Friedland, David R; Fullmer, Tanner; Crane, Alison; Stoddard, Timothy; Runge, Christina L

    2016-09-01

    To investigate the contributions of envelope and fine-structure to the perception of timbre by cochlear implant (CI) users as compared to normal hearing (NH) listeners. This was a prospective cohort comparison study. Normal hearing and cochlear implant patients were tested. Three experiments were performed in sound field using musical notes altered to affect the characteristic pitch of an instrument and the acoustic envelope. Experiment 1 assessed the ability to identify the instrument playing each note, while experiments 2 and 3 assessed the ability to discriminate the different stimuli. Normal hearing subjects performed better than CI subjects in all instrument identification tasks, reaching statistical significance for 4 of 5 stimulus conditions. Within the CI population, acoustic envelope modifications did not significantly affect instrument identification or discrimination. With envelope and pitch cues removed, fine structure discrimination performance was similar between normal hearing and CI users for the majority of conditions, but some specific instrument comparisons were significantly more challenging for CI users. Cochlear implant users perform significantly worse than normal hearing listeners on tasks of instrument identification. However, cochlear implant listeners can discriminate differences in envelope and some fine structure components of musical instrument sounds as well as normal hearing listeners. The results indicated that certain fine structure cues are important for cochlear implant users to make discrimination judgments, and therefore may affect interpretation toward associating with a specific instrument for identification.

  3. Listening Niches across a Century of Popular Music

    Science.gov (United States)

    Krumhansl, Carol Lynne

    2017-01-01

    This article investigates the contexts, or “listening niches”, in which people hear popular music. The study spanned a century of popular music, divided into 10 decades, with participants born between 1940 and 1999. It asks about whether they know and like the music in each decade, and their emotional reactions. It also asks whether the music is associated with personal memories and, if so, with whom they were listening, or whether they were listening alone. Finally, it asks what styles of music they were listening to, and the music media they were listening with, in different periods of their lives. The results show a regular progression through the life span of listening with different individuals (from parents to children) and with different media (from records to streaming services). A number of effects found in previous studies were replicated, but the study also showed differences across the birth cohorts. Overall, there was a song specific age effect with preferences for music of late adolescence and early adulthood; however, this effect was stronger for the older participants. In general, music of the 1940s, 1960s, and 1980s was preferred, particularly among younger participants. Music of these decades also produced the strongest emotional responses, and the most frequent and specific personal memories. When growing up, the participants tended to listen to the older music on the older media, but rapidly shifted to the new music technologies in their late teens and early 20s. Younger listeners are currently listening less to music alone than older listeners, suggesting an important role of socially sharing music, but they also report feeling sadder when listening to music. Finally, the oldest listeners had the broadest taste, liking music that they had been exposed to during their lifetimes in different listening niches. PMID:28424637

  4. Comparison of visual working memory in deaf and hearing-impaired students with normal counterparts: A research in people without sign language

    Directory of Open Access Journals (Sweden)

    Farideh Tangestani Zadeh

    2015-02-01

    Full Text Available Background and Aim: The hearing defects of deaf and hearing-impaired students affect not only their communication skills but also cognitive skills such as memory. Hence, the aim of this study was to compare visual working memory in deaf and hearing-impaired students with that in normal counterparts. Method: In the present study, a causal-comparative study using the André Rey test, 30 deaf and 30 hearing-impaired students were compared with 30 students in a normal group; the groups were matched on gender, intelligence, educational grade, and socioeconomic status. Findings: The findings show a significant difference between the three groups (p<0.05), while the two hearing-impaired groups did not differ from each other (p>0.05). Conclusion: The performance of deaf or hard-of-hearing students on the visual working memory task was weaker than that of their normal counterparts, while the deaf and hard-of-hearing groups performed similarly. With better identification and understanding of the factors that affect the development of this cognitive ability, we can offer new methods of teaching and reduce many of the disadvantages faced by this group of people in different fields of cognitive science.

  5. Stress in Mothers of Hearing Impaired Children Compared to Mothers of Normal and Other Disabled Children

    Directory of Open Access Journals (Sweden)

    Mahnaz Aliakbari Dehkordi

    2011-06-01

    Full Text Available Background and Aim: Stress is associated with life satisfaction and also with the development of some physical diseases. The birth of a child with a mental or physical disability (especially a deaf or blind child) imposes an enormous load of stress on the parents, especially the mother. This study compared stress levels of mothers of hearing-impaired children with those of mothers of normal children and children with other disabilities. Methods: In this study, cluster random sampling was performed in the city of Karaj. 120 mothers in four groups, having a child with mental retardation, low vision, hearing impairment, or normal hearing, were included. The Family Inventory of Life Events (FILE) of McCubbin et al. was used to determine the stress level in the four groups of mothers. Results: The results indicated a significant difference (p<0.05) between stress levels of mothers of hearing-impaired children and mothers of other disabled and normal children in the subscales of intra-family stress, finance and business strains, stress of job transitions, stress of illness and family care, and family members "in and out". There was no difference between the compared groups in the other subscales. Conclusion: Since deafness is a hidden disability, the child with hearing impairment has a set of social and educational problems causing great stress for parents, especially the mother. In order to decrease mothers' stress, it is suggested to provide more family consultation and adequate social support, and to run educational classes for parents to practice stress-coping strategies.

  6. Working memory and referential communication – multimodal aspects of interaction between children with sensorineural hearing impairment and normal hearing peers

    Directory of Open Access Journals (Sweden)

    Olof Sandgren

    2015-03-01

    Full Text Available Whereas the language development of children with sensorineural hearing impairment (SNHI) has repeatedly been shown to differ from that of peers with normal hearing (NH), few studies have used an experimental approach to investigate the consequences for everyday communicative interaction. This mini review gives an overview of a range of studies on children with SNHI and NH exploring intra- and inter-individual cognitive and linguistic systems during communication. Over the last decade, our research group has studied the conversational strategies of Swedish-speaking children and adolescents with SNHI and NH using referential communication, an experimental analogue to problem-solving in the classroom. We have established verbal and nonverbal control and validation mechanisms, related to working memory capacity (WMC) and phonological short-term memory (PSTM). We present main findings and future directions relevant for the field of cognitive hearing science and for the clinical and school-based management of children and adolescents with SNHI.

  7. Differences in the perceived music pleasantness between monolateral cochlear implanted and normal hearing children assessed by EEG.

    Science.gov (United States)

    Vecchiato, G; Maglione, A G; Scorpecci, A; Malerba, P; Graziani, I; Cherubino, P; Astolfi, L; Marsella, P; Colosimo, A; Babiloni, Fabio

    2013-01-01

    The perception of music by cochlear implanted (CI) patients is an important aspect of their quality of life. The pleasantness of music perception in such CI patients can be analyzed through a particular analysis of EEG rhythms. Studies on healthy subjects show that there exists a particular frontal asymmetry of the EEG alpha rhythm which correlates with the pleasantness of perceived stimuli (approach-withdrawal theory). In particular, here we describe differences between EEG activities estimated in the alpha frequency band for a group of children with monolateral CIs and a normal hearing group while they watched a musical cartoon. The results of the present analysis showed that the alpha EEG asymmetry patterns of the normal hearing group indicate a higher perceived pleasantness when compared to the cerebral activity of the monolateral CI patients. The present results thus support the statement that a monolateral CI group could perceive the music in a less pleasant way than normal hearing children.
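The frontal alpha-asymmetry measure underlying the approach-withdrawal analysis can be sketched as follows; the channel names (F3/F4), sampling rate, and band edges are generic assumptions, not details taken from the study:

```python
import numpy as np

def alpha_asymmetry(left_f3, right_f4, fs=256.0, band=(8.0, 12.0)):
    """Approach-withdrawal index: ln(right alpha power) - ln(left alpha power).
    Positive values are conventionally read as relatively greater left-frontal
    activation (alpha power varies inversely with cortical activation)."""
    def band_power(x):
        spec = np.abs(np.fft.rfft(x)) ** 2
        freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
        mask = (freqs >= band[0]) & (freqs <= band[1])
        return spec[mask].mean()
    return np.log(band_power(right_f4)) - np.log(band_power(left_f3))

# Toy signals: a stronger 10 Hz alpha component on the left channel.
rng = np.random.default_rng(0)
t = np.arange(0, 4, 1 / 256.0)
f3 = 2.0 * np.sin(2 * np.pi * 10 * t) + 0.5 * rng.standard_normal(t.size)
f4 = 1.0 * np.sin(2 * np.pi * 10 * t) + 0.5 * rng.standard_normal(t.size)
print(alpha_asymmetry(f3, f4))  # negative: more alpha on the left than the right
```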

  8. Comparative multivariate analyses of transient otoacoustic emissions and distorsion products in normal and impaired hearing.

    Science.gov (United States)

    Stamate, Mirela Cristina; Todor, Nicolae; Cosgarea, Marcel

    2015-01-01

    The clinical utility of otoacoustic emissions as a noninvasive objective test of cochlear function has long been studied. Both transient otoacoustic emissions and distortion products can be used to identify hearing loss, but to what extent they can be used as predictors of hearing loss is still debated. Most studies agree that multivariate analyses have better test performance than univariate analyses. The aim of the study was to determine the performance of transient otoacoustic emissions and distortion products in identifying normal and impaired hearing, using the pure-tone audiogram as the gold standard procedure and different multivariate statistical approaches. The study included 105 adult subjects with normal hearing and hearing loss who underwent the same test battery: pure-tone audiometry, tympanometry, and otoacoustic emission tests. We chose logistic regression as the multivariate statistical technique. Three logistic regression models were developed to characterize the relations between different risk factors (age, sex, tinnitus, demographic features, cochlear status defined by otoacoustic emissions) and hearing status defined by pure-tone audiometry. The multivariate analyses allow the calculation of a logistic score, a combination of the inputs weighted by coefficients calculated within the analyses. The accuracy of each model was assessed using receiver operating characteristic curve analysis. We used the logistic score to generate receiver operating characteristic curves and to estimate the areas under the curves in order to compare the different multivariate analyses. We compared the performance of each otoacoustic emission (transient, distortion product) using three different multivariate analyses for each ear, when multi-frequency gold standards were used. We demonstrated that all multivariate analyses provided high values of the area under the curve, proving the performance of the otoacoustic emissions. 
Each otoacoustic emission test presented high
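The logistic-score/ROC procedure described above can be sketched on synthetic data; the predictors, weights, and labels below are invented stand-ins for the study's measurements:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200
# Hypothetical predictors on invented scales: age and an OAE amplitude.
age = rng.uniform(20, 70, n)
oae = rng.normal(10, 5, n)
# Synthetic "impaired" label loosely driven by older age and lower OAE level.
p = 1 / (1 + np.exp(-(0.05 * (age - 45) - 0.4 * (oae - 10))))
impaired = rng.random(n) < p

# "Logistic score": the inputs weighted by (here, assumed) model coefficients.
score = 0.05 * age - 0.4 * oae

def auc(score, label):
    """Area under the ROC curve via the rank-sum (Mann-Whitney) identity."""
    m = len(score)
    ranks = np.empty(m)
    ranks[np.argsort(score)] = np.arange(1, m + 1)
    n_pos, n_neg = label.sum(), (~label).sum()
    return (ranks[label].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

print(f"AUC = {auc(score, impaired):.2f}")
```

In the study the coefficients come from fitted logistic regressions; here they are fixed by hand so the score construction stays visible.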

  9. Changes in Hearing Sensitivity Following Portable Stereo System Use.

    Science.gov (United States)

    Pugsley, S; Stuart, A; Kalinowski, J; Armson, J

    1993-11-01

    Changes in hearing sensitivity following portable stereo system (PSS; Sony Walkman Model WM-AF605 with Sony Semiaural Headphones Model MDR-A21L) use were investigated. Test-retest differences (TRDs) in audiometric thresholds at eight frequencies (250, 500, 1000, 2000, 3000, 4000, 6000, & 8000 Hz) were obtained from 15 young adults before and after one hour of PSS exposure at their preferred listening levels. Values for the 95% confidence levels representing critical differences in test-retest auditory thresholds for the eight test frequencies were generated from a control group of 15 young adults. Experimental subjects' TRDs, when compared to the critical TRDs, failed to display a decrease in hearing sensitivity after one hour of PSS use. It is suggested that PSS use at preferred listening levels, following an exposure time of one hour, may not contribute to a significant impairment in hearing sensitivity.

  10. Developmental Conductive Hearing Loss Reduces Modulation Masking Release.

    Science.gov (United States)

    Ihlefeld, Antje; Chen, Yi-Wen; Sanes, Dan H

    2016-01-01

    Hearing-impaired individuals experience difficulties in detecting or understanding speech, especially in background sounds within the same frequency range. However, normally hearing (NH) human listeners experience less difficulty detecting a target tone in background noise when the envelope of that noise is temporally gated (modulated) than when that envelope is flat across time (unmodulated). This perceptual benefit is called modulation masking release (MMR). When flanking masker energy is added well outside the frequency band of the target, and comodulated with the original modulated masker, detection thresholds improve further (MMR+). In contrast, if the flanking masker is antimodulated with the original masker, thresholds worsen (MMR-). These interactions across disparate frequency ranges are thought to require central nervous system (CNS) processing. Therefore, we explored the effect of developmental conductive hearing loss (CHL) in gerbils on MMR characteristics, as a test for putative CNS mechanisms. The detection thresholds of NH gerbils were lower in modulated noise, when compared with unmodulated noise. The addition of a comodulated flanker further improved performance, whereas an antimodulated flanker worsened performance. However, for CHL-reared gerbils, all three forms of masking release were reduced when compared with NH animals. These results suggest that developmental CHL impairs both within- and across-frequency processing and provide behavioral evidence that CNS mechanisms are affected by a peripheral hearing impairment.
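The four masker conditions can be illustrated with a small stimulus sketch; the sampling rate, 10 Hz gating rate, and the use of unfiltered noise in place of properly band-limited maskers are simplifications, not the study's parameters:

```python
import numpy as np

fs = 16000
t = np.arange(0, 1.0, 1 / fs)
rng = np.random.default_rng(2)

def square_envelope(rate_hz, invert=False):
    """100%-depth square-wave (temporally gated) envelope."""
    env = (np.sin(2 * np.pi * rate_hz * t) > 0).astype(float)
    return 1.0 - env if invert else env

on_band = rng.standard_normal(t.size)   # stands in for noise in the target band
flank = rng.standard_normal(t.size)     # stands in for a remote flanking band

unmodulated = on_band                                      # flat envelope
modulated = on_band * square_envelope(10)                  # MMR condition
comodulated = flank * square_envelope(10)                  # MMR+ flanker
antimodulated = flank * square_envelope(10, invert=True)   # MMR- flanker

# Comodulated and antimodulated flankers are never "on" at the same instant.
print(np.all(comodulated * antimodulated == 0))  # True
```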

  11. Developmental Conductive Hearing Loss Reduces Modulation Masking Release

    Directory of Open Access Journals (Sweden)

    Antje Ihlefeld

    2016-12-01

    Full Text Available Hearing-impaired individuals experience difficulties in detecting or understanding speech, especially in background sounds within the same frequency range. However, normally hearing (NH) human listeners experience less difficulty detecting a target tone in background noise when the envelope of that noise is temporally gated (modulated) than when that envelope is flat across time (unmodulated). This perceptual benefit is called modulation masking release (MMR). When flanking masker energy is added well outside the frequency band of the target, and comodulated with the original modulated masker, detection thresholds improve further (MMR+). In contrast, if the flanking masker is antimodulated with the original masker, thresholds worsen (MMR−). These interactions across disparate frequency ranges are thought to require central nervous system (CNS) processing. Therefore, we explored the effect of developmental conductive hearing loss (CHL) in gerbils on MMR characteristics, as a test for putative CNS mechanisms. The detection thresholds of NH gerbils were lower in modulated noise, when compared with unmodulated noise. The addition of a comodulated flanker further improved performance, whereas an antimodulated flanker worsened performance. However, for CHL-reared gerbils, all three forms of masking release were reduced when compared with NH animals. These results suggest that developmental CHL impairs both within- and across-frequency processing and provide behavioral evidence that CNS mechanisms are affected by a peripheral hearing impairment.

  12. Hearing Aid-Induced Plasticity in the Auditory System of Older Adults: Evidence from Speech Perception

    Science.gov (United States)

    Lavie, Limor; Banai, Karen; Karni, Avi; Attias, Joseph

    2015-01-01

    Purpose: We tested whether using hearing aids can improve unaided performance in speech perception tasks in older adults with hearing impairment. Method: Unaided performance was evaluated in dichotic listening and speech-in-noise tests in 47 older adults with hearing impairment; 36 participants in 3 study groups were tested before hearing aid…

  13. Acoustic properties of naturally produced clear speech at normal speaking rates

    Science.gov (United States)

    Krause, Jean C.; Braida, Louis D.

    2004-01-01

    Sentences spoken ``clearly'' are significantly more intelligible than those spoken ``conversationally'' for hearing-impaired listeners in a variety of backgrounds [Picheny et al., J. Speech Hear. Res. 28, 96-103 (1985); Uchanski et al., ibid. 39, 494-509 (1996); Payton et al., J. Acoust. Soc. Am. 95, 1581-1592 (1994)]. While producing clear speech, however, talkers often reduce their speaking rate significantly [Picheny et al., J. Speech Hear. Res. 29, 434-446 (1986); Uchanski et al., ibid. 39, 494-509 (1996)]. Yet speaking slowly is not solely responsible for the intelligibility benefit of clear speech (over conversational speech), since a recent study [Krause and Braida, J. Acoust. Soc. Am. 112, 2165-2172 (2002)] showed that talkers can produce clear speech at normal rates with training. This finding suggests that clear speech has inherent acoustic properties, independent of rate, that contribute to improved intelligibility. Identifying these acoustic properties could lead to improved signal processing schemes for hearing aids. To gain insight into these acoustical properties, conversational and clear speech produced at normal speaking rates were analyzed at three levels of detail (global, phonological, and phonetic). Although results suggest that talkers may have employed different strategies to achieve clear speech at normal rates, two global-level properties were identified that appear likely to be linked to the improvements in intelligibility provided by clear/normal speech: increased energy in the 1000-3000-Hz range of long-term spectra and increased modulation depth of low frequency modulations of the intensity envelope. Other phonological and phonetic differences associated with clear/normal speech include changes in (1) frequency of stop burst releases, (2) VOT of word-initial voiceless stop consonants, and (3) short-term vowel spectra.
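The two global acoustic measures identified (energy in the 1000-3000 Hz region of the long-term spectrum, and modulation depth of the low-frequency intensity envelope) can be approximated as follows; this is a generic sketch, not the authors' exact analysis pipeline:

```python
import numpy as np

def band_energy_fraction(x, fs, lo=1000.0, hi=3000.0):
    """Fraction of long-term spectral energy between lo and hi (Hz)."""
    spec = np.abs(np.fft.rfft(x)) ** 2
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    band = (freqs >= lo) & (freqs <= hi)
    return spec[band].sum() / spec.sum()

def envelope_modulation_depth(x, fs, frame_ms=10.0):
    """Crude modulation depth: normalized std of the short-time RMS envelope."""
    frame = int(fs * frame_ms / 1000)
    n = (len(x) // frame) * frame
    env = np.sqrt((x[:n] ** 2).reshape(-1, frame).mean(axis=1))  # frame RMS
    return env.std() / env.mean()

fs = 16000
t = np.arange(0, 1.0, 1 / fs)
rng = np.random.default_rng(3)
carrier = rng.standard_normal(t.size)
deep = carrier * (1 + 0.9 * np.sin(2 * np.pi * 4 * t))     # strongly modulated
shallow = carrier * (1 + 0.2 * np.sin(2 * np.pi * 4 * t))  # weakly modulated
print(envelope_modulation_depth(deep, fs) > envelope_modulation_depth(shallow, fs))  # True
```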

  14. Development and assessment of two fixed-array microphones for use with hearing aids

    NARCIS (Netherlands)

    Bilsen, F.A.; Soede, W.; Berkhout, A.J.

    1993-01-01

    Hearing-impaired listeners often have great difficulty understanding speech in situations with background noise (e.g., meetings, parties). Conventional hearing aids offer insufficient directivity to significantly reduce background noise relative to the desired speech signal. Based on array

  15. Spoken Word Recognition Errors in Speech Audiometry: A Measure of Hearing Performance?

    Directory of Open Access Journals (Sweden)

    Martine Coene

    2015-01-01

    Full Text Available This report provides a detailed analysis of incorrect responses from an open-set spoken word-repetition task which is part of a Dutch speech audiometric test battery. Single-consonant confusions were analyzed from 230 normal hearing participants in terms of the probability of choice of a particular response on the basis of acoustic-phonetic, lexical, and frequency variables. The results indicate that consonant confusions are better predicted by lexical knowledge than by acoustic properties of the stimulus word. A detailed analysis of the transmission of phonetic features indicates that “voicing” is best preserved whereas “manner of articulation” yields most perception errors. As consonant confusion matrices are often used to determine the degree and type of a patient’s hearing impairment, to predict a patient’s gain in hearing performance with hearing devices and to optimize the device settings in view of maximum output, the observed findings are highly relevant for the audiological practice. Based on our findings, speech audiometric outcomes provide a combined auditory-linguistic profile of the patient. The use of confusion matrices might therefore not be the method best suited to measure hearing performance. Ideally, they should be complemented by other listening task types that are known to have less linguistic bias, such as phonemic discrimination.
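The feature-transmission analysis described above (checking which phonetic features survive a confusion) can be sketched on a toy inventory; the consonants, features, and response pairs below are invented for illustration and are not the Dutch stimulus set:

```python
import numpy as np

# Toy consonant inventory with (voicing, manner) features -- invented for
# illustration; this is not the Dutch stimulus set used in the study.
features = {
    "p": ("voiceless", "stop"), "b": ("voiced", "stop"),
    "s": ("voiceless", "fricative"), "z": ("voiced", "fricative"),
}
phones = list(features)

# Hypothetical stimulus/response pairs from a word-repetition task in which
# confusions cross manner of articulation but preserve voicing.
pairs = [("p", "p"), ("p", "s"), ("b", "b"), ("s", "p"),
         ("z", "z"), ("z", "b"), ("b", "z"), ("s", "s")]

confusion = np.zeros((len(phones), len(phones)), dtype=int)
for stim, resp in pairs:
    confusion[phones.index(stim), phones.index(resp)] += 1

def feature_preserved(idx):
    """Proportion of responses preserving feature idx (0=voicing, 1=manner)."""
    keep = [features[s][idx] == features[r][idx] for s, r in pairs]
    return sum(keep) / len(pairs)

print("voicing preserved:", feature_preserved(0))  # 1.0
print("manner preserved:", feature_preserved(1))   # 0.5
```

In this toy data voicing survives every confusion while manner does not, mirroring the direction of the reported finding.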

  16. The development and standardization of Self-assessment for Hearing Screening of the Elderly.

    Science.gov (United States)

    Kim, Gibbeum; Na, Wondo; Kim, Gungu; Han, Woojae; Kim, Jinsook

    2016-01-01

    The present study aimed to develop and standardize a screening tool for elderly people who wish to check for themselves their level of hearing loss. The Self-assessment for Hearing Screening of the Elderly (SHSE) consisted of 20 questions based on the characteristics of presbycusis using a five-point scale: seven questions covered general issues related to sensorineural hearing loss, seven covered hearing difficulty under distracting listening conditions, two covered hearing difficulty with fast-rated speech, and four covered the working memory function during communication. To standardize SHSE, 83 elderly participants took part in the study: 25 with normal hearing, and 22, 23, and 13 with mild, moderate, and moderate-to-severe sensorineural hearing loss, respectively, according to their hearing sensitivity. All were retested 3 weeks later using the same questionnaire to confirm its reliability. In addition, validity was assessed using various hearing tests such as a sentence test with background noise, a time-compressed speech test, and a digit span test. SHSE and its subcategories showed good internal consistency. SHSE and its subcategories demonstrated high test-retest reliability. A high correlation was observed between the total scores and pure-tone thresholds, which indicated gradually increased SHSE scores of 42.24%, 55.27%, 66.61%, and 78.15% for normal hearing, mild, moderate, and moderate-to-severe groups, respectively. With regard to construct validity, SHSE showed a high negative correlation with speech perception scores in noise and a moderate negative correlation with scores of time-compressed speech perception. However, there was no statistical correlation between digit span results and either the SHSE total or its subcategories. A confirmatory factor analysis supported three factors in SHSE. We found that the developed SHSE had valuable internal consistency, test-retest reliability, and convergent and construct validity. 
These results suggest that
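The internal-consistency and test-retest figures reported for SHSE rest on standard computations (Cronbach's alpha and a retest correlation), which can be sketched on synthetic questionnaire data; the response-generating model below is invented:

```python
import numpy as np

def cronbach_alpha(items):
    """Internal consistency; items is an (n_respondents, n_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

rng = np.random.default_rng(4)
n, k = 83, 20   # 83 respondents, 20 five-point items, as in SHSE
trait = rng.normal(3, 1, size=(n, 1))  # latent per-respondent "hearing handicap"

def administer():
    """Simulate one administration: items = trait + noise, clipped to 1-5."""
    return np.clip(np.round(trait + 0.5 * rng.standard_normal((n, k))), 1, 5)

test, retest = administer(), administer()
alpha = cronbach_alpha(test)
r = np.corrcoef(test.sum(axis=1), retest.sum(axis=1))[0, 1]
print(f"Cronbach's alpha = {alpha:.2f}, test-retest r = {r:.2f}")
```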

  17. Distortion-Product Otoacoustic Emission Measured Below 300 Hz in Normal-Hearing Human Subjects

    DEFF Research Database (Denmark)

    Christensen, Anders Tornvig; Ordoñez Pizarro, Rodrigo Eduardo; Hammershøi, Dorte

    2017-01-01

    , a custom-built low-frequency acoustic probe was put to use in 21 normal-hearing human subjects (of 34 recruited). Distortion-product otoacoustic emission (DPOAE) was measured in the enclosed ear canal volume as the response to two simultaneously presented tones with frequencies f1 and f2. The stimulus...

  18. Selective Inner Hair Cell Dysfunction in Chinchillas Impairs Hearing-in-Noise in the Absence of Outer Hair Cell Loss.

    Science.gov (United States)

    Lobarinas, Edward; Salvi, Richard; Ding, Dalian

    2016-04-01

    Poorer hearing in the presence of background noise is a significant problem for the hearing impaired. Ototoxic drugs, ageing, and noise exposure can damage the sensory hair cells of the inner ear that are essential for normal hearing sensitivity. The relationship between outer hair cell (OHC) loss and progressively poorer hearing sensitivity in quiet or in competing background noise is supported by a number of human and animal studies. In contrast, the effect of moderate inner hair cell (IHC) loss or dysfunction shows almost no impact on behavioral measures of hearing sensitivity in quiet, when OHCs remain intact, but the relationship between selective IHC loss and hearing in noise remains relatively unknown. Here, a moderately high dose of carboplatin (75 mg/kg) that produced IHC loss in chinchillas ranging from 40 to 80 % had little effect on thresholds in quiet. However, when tested in the presence of competing broadband (BBN) or narrowband noise (NBN), thresholds increased significantly. IHC loss >60 % increased signal-to-noise ratios (SNRs) for tones (500-11,300 Hz) in competing BBN by 5-10 dB and broadened the masking function under NBN. These data suggest that IHC loss or dysfunction may play a significant role in listening in noise independent of OHC integrity and that these deficits may be present even when thresholds in quiet are within normal limits.

  19. Reversible induction of phantom auditory sensations through simulated unilateral hearing loss.

    Directory of Open Access Journals (Sweden)

    Roland Schaette

    Full Text Available Tinnitus, a phantom auditory sensation, is associated with hearing loss in most cases, but it is unclear if hearing loss causes tinnitus. Phantom auditory sensations can be induced in normal hearing listeners when they experience severe auditory deprivation such as confinement in an anechoic chamber, which can be regarded as somewhat analogous to a profound bilateral hearing loss. As this condition is relatively uncommon among tinnitus patients, induction of phantom sounds by a lesser degree of auditory deprivation could advance our understanding of the mechanisms of tinnitus. In this study, we therefore investigated the reporting of phantom sounds after continuous use of an earplug. 18 healthy volunteers with normal hearing wore a silicone earplug continuously in one ear for 7 days. The attenuation provided by the earplugs simulated a mild high-frequency hearing loss, mean attenuation increased from 30 dB at 3 and 4 kHz. 14 out of 18 participants reported phantom sounds during earplug use. 11 participants presented with stable phantom sounds on day 7 and underwent tinnitus spectrum characterization with the earplug still in place. The spectra showed that the phantom sounds were perceived predominantly as high-pitched, corresponding to the frequency range most affected by the earplug. In all cases, the auditory phantom disappeared when the earplug was removed, indicating a causal relation between auditory deprivation and phantom sounds. This relation matches the predictions of our computational model of tinnitus development, which proposes a possible mechanism by which a stabilization of neuronal activity through homeostatic plasticity in the central auditory system could lead to the development of a neuronal correlate of tinnitus when auditory nerve activity is reduced due to the earplug.
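The homeostatic-plasticity account in the final sentence can be caricatured in a few lines: if a central gain g is adjusted so that the mean response g·(input + spontaneous) stays at a set point, then attenuating the input raises g and thereby amplifies spontaneous activity, a candidate neuronal correlate of the phantom sound. All numbers are arbitrary:

```python
# Cartoon of the homeostatic-gain account of earplug-induced phantom sounds.
# All quantities are arbitrary units chosen for illustration.
TARGET = 1.0        # homeostatic set point for mean neuronal response
SPONTANEOUS = 0.1   # input-independent spontaneous activity

def steady_gain(mean_input):
    """Gain restoring mean response to the set point: g*(input+spont) = target."""
    return TARGET / (mean_input + SPONTANEOUS)

for label, drive in [("normal hearing", 0.9), ("earplugged", 0.3)]:
    g = steady_gain(drive)
    print(f"{label}: gain = {g:.2f}, amplified spontaneous = {g * SPONTANEOUS:.2f}")
```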

  20. Temporal and spectral contributions to musical instrument identification and discrimination among cochlear implant users

    Institute of Scientific and Technical Information of China (English)

    Sandra M. Prentiss; David R. Friedland; Tanner Fullmer; Alison Crane; Timothy Stoddard; Christina L. Runge

    2016-01-01

    Objective: To investigate the contributions of envelope and fine-structure to the perception of timbre by cochlear implant (CI) users as compared to normal hearing (NH) listeners. Methods: This was a prospective cohort comparison study. Normal hearing and cochlear implant patients were tested. Three experiments were performed in sound field using musical notes altered to affect the characteristic pitch of an instrument and the acoustic envelope. Experiment 1 assessed the ability to identify the instrument playing each note, while experiments 2 and 3 assessed the ability to discriminate the different stimuli. Results: Normal hearing subjects performed better than CI subjects in all instrument identification tasks, reaching statistical significance for 4 of 5 stimulus conditions. Within the CI population, acoustic envelope modifications did not significantly affect instrument identification or discrimination. With envelope and pitch cues removed, fine structure discrimination performance was similar between normal hearing and CI users for the majority of conditions, but some specific instrument comparisons were significantly more challenging for CI users. Conclusions: Cochlear implant users perform significantly worse than normal hearing listeners on tasks of instrument identification. However, cochlear implant listeners can discriminate differences in envelope and some fine structure components of musical instrument sounds as well as normal hearing listeners. The results indicated that certain fine structure cues are important for cochlear implant users to make discrimination judgments, and therefore may affect interpretation toward associating with a specific instrument for identification.

  1. The South African English Smartphone Digits-in-Noise Hearing Test: Effect of Age, Hearing Loss, and Speaking Competence.

    Science.gov (United States)

    Potgieter, Jenni-Marí; Swanepoel, De Wet; Myburgh, Hermanus Carel; Smits, Cas

    2017-11-20

    This study determined the effect of hearing loss and English-speaking competency on the South African English digits-in-noise hearing test to evaluate its suitability for use across native (N) and non-native (NN) speakers. A prospective cross-sectional cohort study of N and NN English adults with and without sensorineural hearing loss compared pure-tone air conduction thresholds to the speech reception threshold (SRT) recorded with the smartphone digits-in-noise hearing test. A rating scale was used for NN English listeners' self-reported competence in speaking English. This study consisted of 454 adult listeners (164 male, 290 female; range 16 to 90 years), of whom 337 listeners had a best ear four-frequency pure-tone average (4FPTA; 0.5, 1, 2, and 4 kHz) of ≤25 dB HL. A linear regression model identified three predictors of the digits-in-noise SRT, namely, 4FPTA, age, and self-reported English-speaking competence. The NN group with poor self-reported English-speaking competence (≤5/10) performed significantly (p English-speaking competence for the N and NN groups (≥6/10) and NN group alone (≤5/10). Logistic regression models, which include age in the analysis, showed a further improvement in sensitivity and specificity for both groups (area under the receiver operating characteristic curve, 0.962 and 0.903, respectively). Self-reported English-speaking competence had a significant influence on the SRT obtained with the smartphone digits-in-noise test. A logistic regression approach considering SRT, self-reported English-speaking competence, and age as predictors of best ear 4FPTA >25 dB HL showed that the test can be used as an accurate hearing screening tool for N and NN English speakers. The smartphone digits-in-noise test, therefore, allows testing in a multilingual population familiar with English digits using dynamic cutoff values that can be chosen according to self-reported English-speaking competence and age.
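The screening logic described (dynamic SRT cutoffs chosen by self-reported English-speaking competence, evaluated by sensitivity and specificity) can be sketched on synthetic data; all cutoffs, effect sizes, and the prevalence below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(5)
n = 400
competence = rng.integers(1, 11, n)         # self-rated English competence, 1-10
hearing_loss = rng.random(n) < 0.3          # toy 30% prevalence of 4FPTA > 25 dB HL
# Toy SRT in dB SNR: worse (higher) with hearing loss and with poor competence.
srt = -10 + 6 * hearing_loss + 0.4 * (10 - competence) + rng.normal(0, 1, n)

def screen(srt, competence, strict_cut=-6.0, lenient_cut=-4.0):
    """Refer if the SRT exceeds a cutoff chosen by competence group (>=6 vs <=5)."""
    cut = np.where(competence >= 6, strict_cut, lenient_cut)
    return srt > cut

referred = screen(srt, competence)
sensitivity = np.mean(referred[hearing_loss])
specificity = np.mean(~referred[~hearing_loss])
print(f"sensitivity = {sensitivity:.2f}, specificity = {specificity:.2f}")
```

The more lenient cutoff for low-competence listeners absorbs their language-related SRT penalty so that they are not over-referred; this mirrors the dynamic-cutoff idea, not the study's actual values.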

  2. Effect of conductive hearing loss on central auditory function.

    Science.gov (United States)

    Bayat, Arash; Farhadi, Mohammad; Emamdjomeh, Hesam; Saki, Nader; Mirmomeni, Golshan; Rahim, Fakher

    It has been demonstrated that long-term Conductive Hearing Loss (CHL) may influence the precise detection of the temporal features of acoustic signals, or Auditory Temporal Processing (ATP). It can be argued that ATP may be the underlying component of many central auditory processing capabilities such as speech comprehension or sound localization. Little is known about the consequences of CHL on temporal aspects of central auditory processing. This study was designed to assess auditory temporal processing ability in individuals with chronic CHL. During this analytical cross-sectional study, 52 patients with mild to moderate chronic CHL and 52 normal-hearing listeners (control), aged between 18 and 45 years, were recruited. In order to evaluate auditory temporal processing, the Gaps-in-Noise (GIN) test was used. The results obtained for each ear were analyzed based on the gap perception threshold and the percentage of correct responses. The average of GIN thresholds was significantly smaller for the control group than for the CHL group for both ears (right: p=0.004; left: phearing for both sides (phearing loss in either group (p>0.05). The results suggest reduced auditory temporal processing ability in adults with CHL compared to normal hearing subjects. Therefore, developing a clinical protocol to evaluate auditory temporal processing in this population is recommended. Copyright © 2017 Associação Brasileira de Otorrinolaringologia e Cirurgia Cérvico-Facial. Published by Elsevier Editora Ltda. All rights reserved.
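The GIN gap perception threshold is commonly scored as the shortest gap duration detected on at least 4 of its 6 presentations; a small sketch under that assumed convention (check the clinical manual before relying on it):

```python
# Approximate gap-detection threshold from Gaps-in-Noise style trial data.
# Assumed scoring convention: shortest gap detected on >= 4 of 6 presentations,
# with every longer gap also meeting that criterion.

def gin_threshold(results):
    """results: dict mapping gap duration (ms) -> list of 6 booleans (detected?)."""
    durations = sorted(results)
    threshold = None
    for d in reversed(durations):      # walk from the longest gap downward
        if sum(results[d]) >= 4:
            threshold = d
        else:
            break                      # require all longer gaps to pass
    return threshold

trials = {
    2: [False, False, True, False, False, False],
    3: [True, False, True, False, True, False],
    4: [True, True, True, False, True, True],
    5: [True, True, True, True, True, True],
}
print(gin_threshold(trials))  # 4
```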

  3. Semantic priming, not repetition priming, is to blame for false hearing.

    Science.gov (United States)

    Rogers, Chad S

    2017-08-01

    Contextual and sensory information are combined in speech perception. Conflict between the two can lead to false hearing, defined as a high-confidence misidentification of a spoken word. Rogers, Jacoby, and Sommers (Psychology and Aging, 27(1), 33-45, 2012) found that older adults are more susceptible to false hearing than are young adults, using a combination of semantic priming and repetition priming to create context. In this study, the type of context (repetition vs. semantic priming) responsible for false hearing was examined. Older and young adult participants read and listened to a list of paired associates (e.g., ROW-BOAT) and were told to remember the pairs for a later memory test. Following the memory test, participants identified words masked in noise that were preceded by a cue word in the clear. Targets were semantically associated to the cue (e.g., ROW-BOAT), unrelated to the cue (e.g., JAW-PASS), or phonologically related to a semantic associate of the cue (e.g., ROW-GOAT). How often each cue word and its paired associate were presented prior to the memory test was manipulated (0, 3, or 5 times) to test effects of repetition priming. Results showed repetitions had no effect on rates of context-based listening or false hearing. However, repetition did significantly increase sensory information as a basis for metacognitive judgments in young and older adults. This pattern suggests that semantic priming dominates as the basis for false hearing and highlights context and sensory information operating as qualitatively different bases for listening and metacognition.

  4. Temporal and spatio-temporal vibrotactile displays for voice fundamental frequency: an initial evaluation of a new vibrotactile speech perception aid with normal-hearing and hearing-impaired individuals.

    Science.gov (United States)

    Auer, E T; Bernstein, L E; Coulter, D C

    1998-10-01

    Four experiments were performed to evaluate a new wearable vibrotactile speech perception aid that extracts fundamental frequency (F0) and displays the extracted F0 as a single-channel temporal or an eight-channel spatio-temporal stimulus. Specifically, we investigated the perception of intonation (i.e., question versus statement) and emphatic stress (i.e., stress on the first, second, or third word) under Visual-Alone (VA), Visual-Tactile (VT), and Tactile-Alone (TA) conditions and compared performance using the temporal and spatio-temporal vibrotactile display. Subjects were adults with normal hearing in experiments I-III and adults with severe to profound hearing impairments in experiment IV. Both versions of the vibrotactile speech perception aid successfully conveyed intonation. Vibrotactile stress information was successfully conveyed, but vibrotactile stress information did not enhance performance in VT conditions beyond performance in VA conditions. In experiment III, which involved only intonation identification, a reliable advantage for the spatio-temporal display was obtained. Differences between subject groups were obtained for intonation identification, with more accurate VT performance by those with normal hearing. Possible effects of long-term hearing status are discussed.
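An F0 extractor of the kind such an aid relies on can be sketched with a simple autocorrelation estimate; the sampling rate, search range, and the eight-channel mapping below are illustrative assumptions, not the device's actual design:

```python
import numpy as np

def estimate_f0(x, fs, fmin=80.0, fmax=300.0):
    """Autocorrelation pitch estimate over a plausible voice-F0 search range."""
    x = x - x.mean()
    ac = np.correlate(x, x, mode="full")[len(x) - 1:]
    lo, hi = int(fs / fmax), int(fs / fmin)
    lag = lo + np.argmax(ac[lo:hi + 1])
    return fs / lag

fs = 8000
t = np.arange(0, 0.05, 1 / fs)
tone = np.sin(2 * np.pi * 120 * t)           # stand-in for a 120 Hz voiced frame
f0 = estimate_f0(tone, fs)

# Map F0 onto one of eight log-spaced channels spanning the search range,
# loosely mimicking a spatio-temporal display; this mapping is invented here.
edges = np.geomspace(80, 300, 9)
channel = int(np.searchsorted(edges, f0, side="right") - 1)
print(round(f0, 1), channel)
```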

  5. A comparative evaluation of dental caries status among hearing-impaired and normal children of Malda, West Bengal, evaluated with the Caries Assessment Spectrum and Treatment.

    Science.gov (United States)

    Kar, Sudipta; Kundu, Goutam; Maiti, Shyamal Kumar; Ghosh, Chiranjit; Bazmi, Badruddin Ahamed; Mukhopadhyay, Santanu

    2016-01-01

    Dental caries is one of the major modern-day diseases of dental hard tissue. It may affect both normal and hearing-impaired children. This study aimed to evaluate and compare the prevalence of dental caries in hearing-impaired and normal children of Malda, West Bengal, utilizing the Caries Assessment Spectrum and Treatment (CAST). In a cross-sectional, case-control study, the dental caries status of 6-12-year-old children was assessed. A statistically significant difference was found between the studied (hearing-impaired) and control (normal children) groups. In the present study, about 30.51% of hearing-impaired children were found to be caries-affected, compared to 15.81% of normal children, and the result was statistically significant at P < 0.05. Regarding individual caries assessment criteria, nearly all subgroups showed a statistically significant difference (P < 0.05) except the sealed tooth structure group, internal caries-related discoloration in dentin, and distinct cavitation into dentine group. The dental health of hearing-impaired children was found to be less satisfactory than that of normal children when studied in relation to dental caries status evaluated with CAST.

  6. A comparative evaluation of dental caries status among hearing-impaired and normal children of Malda, West Bengal, evaluated with the Caries Assessment Spectrum and Treatment

    Directory of Open Access Journals (Sweden)

    Sudipta Kar

    2016-01-01

    Full Text Available Context: Dental caries is one of the major modern-day diseases of dental hard tissue. It may affect both normal and hearing-impaired children. Aims: This study aimed to evaluate and compare the prevalence of dental caries in hearing-impaired and normal children of Malda, West Bengal, utilizing the Caries Assessment Spectrum and Treatment (CAST). Settings and Design: In a cross-sectional, case-control study, the dental caries status of 6-12-year-old children was assessed. Subjects and Methods: A statistically significant difference was found between the studied (hearing-impaired) and control (normal children) groups. In the present study, about 30.51% of hearing-impaired children were found to be caries-affected, compared to 15.81% of normal children, and the result was statistically significant. Regarding individual caries assessment criteria, nearly all subgroups showed a statistically significant difference except the sealed tooth structure group, internal caries-related discoloration in dentin, and distinct cavitation into dentine group, and the result is significant at P < 0.05. Statistical Analysis Used: Statistical analysis was carried out utilizing the Z-test. Results: A statistically significant difference was found between the studied (hearing-impaired) and control (normal children) groups. In the present study, about 30.51% of hearing-impaired children were found to be caries-affected, compared to 15.81% of normal children, and the result was statistically significant (P < 0.05). Regarding individual caries assessment criteria, nearly all subgroups showed a statistically significant difference except the sealed tooth structure group, internal caries-related discoloration in dentin, and distinct cavitation into dentine group. Conclusions: The dental health of hearing-impaired children was found to be less satisfactory than that of normal children when studied in relation to dental caries status evaluated with CAST.

  7. Speech Intelligibility and Hearing Protector Selection

    Science.gov (United States)

    2016-08-29

    not only affect the listener of speech communication in a noisy environment, HPDs can also affect the speaker. Tufts and Frank (2003) found that...of hearing protection on speech intelligibility in noise. Sound and Vibration. 20(10): 12-14. Berger, E. H. 1980. EARLog #4 – The

  8. Sound localization under perturbed binaural hearing.

    NARCIS (Netherlands)

    Wanrooij, M.M. van; Opstal, A.J. van

    2007-01-01

    This paper reports on the acute effects of a monaural plug on directional hearing in the horizontal (azimuth) and vertical (elevation) planes of human listeners. Sound localization behavior was tested with rapid head-orienting responses toward brief high-pass filtered (>3 kHz; HP) and broadband

  9. Dichotic and dichoptic digit perception in normal adults.

    Science.gov (United States)

    Lawfield, Angela; McFarland, Dennis J; Cacace, Anthony T

    2011-06-01

    Verbally based dichotic-listening experiments and reproduction-mediated response-selection strategies have been used for over four decades to study perceptual/cognitive aspects of auditory information processing and make inferences about hemispheric asymmetries and language lateralization in the brain. Test procedures using dichotic digits have also been used to assess for disorders of auditory processing. However, with this application, limitations exist and paradigms need to be developed to improve specificity of the diagnosis. Use of matched tasks in multiple sensory modalities is a logical approach to address this issue. Herein, we use dichotic listening and dichoptic viewing of visually presented digits for making this comparison. To evaluate methodological issues involved in using matched tasks of dichotic listening and dichoptic viewing in normal adults. A multivariate assessment of the effects of modality (auditory vs. visual), digit-span length (1-3 pairs), response selection (recognition vs. reproduction), and ear/visual hemifield of presentation (left vs. right) on dichotic and dichoptic digit perception. Thirty adults (12 males, 18 females) ranging in age from 18 to 30 yr with normal hearing sensitivity and normal or corrected-to-normal visual acuity. A computerized, custom-designed program was used for all data collection and analysis. A four-way repeated measures analysis of variance (ANOVA) evaluated the effects of modality, digit-span length, response selection, and ear/visual field of presentation. The ANOVA revealed that performances on dichotic listening and dichoptic viewing tasks were dependent on complex interactions between modality, digit-span length, response selection, and ear/visual hemifield of presentation. Correlation analysis suggested a common effect on overall accuracy of performance but isolated only an auditory factor for a laterality index. 
The variables used in this experiment affected performances in the auditory modality to a

  10. Estimating adolescent risk for hearing loss based on data from a large school-based survey.

    Science.gov (United States)

    Vogel, Ineke; Verschuure, Hans; van der Ploeg, Catharina P B; Brug, Johannes; Raat, Hein

    2010-06-01

    We estimated whether and to what extent a group of adolescents were at risk of developing permanent hearing loss as a result of voluntary exposure to high-volume music, and we assessed whether such exposure was associated with hearing-related symptoms. In 2007, 1512 adolescents (aged 12-19 years) in Dutch secondary schools completed questionnaires about their music-listening behavior and whether they experienced hearing-related symptoms after listening to high-volume music. We used their self-reported data in conjunction with published average sound levels of music players, discotheques, and pop concerts to estimate their noise exposure, and we compared that exposure to our own "loosened" (i.e., less strict) version of current European safety standards for occupational noise exposure. About half of the adolescents exceeded safety standards for occupational noise exposure. About one third of the respondents exceeded safety standards solely as a result of listening to MP3 players. Hearing symptoms that occurred after using an MP3 player or going to a discotheque were associated with exposure to high-volume music. Adolescents often exceeded current occupational safety standards for noise exposure, highlighting the need for specific safety standards for leisure-time noise exposure.
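The comparison of leisure-time listening to occupational limits above rests on the equal-energy principle: each doubling of exposure time trades against a 3 dB change in level. The sketch below shows that normalization with illustrative values only; the study's actual "loosened" criteria and published source levels are not reproduced here, and the function name is an assumption.

```python
import math

def eight_hour_equivalent(level_dba: float, hours_per_week: float) -> float:
    """Normalize a listening level to a 40-h working-week (8 h/day)
    equivalent exposure level using the equal-energy (3-dB exchange) rule."""
    weekly_ref = 40.0  # reference working week, hours
    return level_dba + 10.0 * math.log10(hours_per_week / weekly_ref)

# Example: an MP3 player at 89 dBA for 2 h/day, 7 days/week
print(round(eight_hour_equivalent(89.0, 14.0), 1))  # → 84.4
```

Under an 80 dBA action level of the kind used in European occupational regulations, the 84.4 dBA weekly-equivalent exposure in this example would already exceed the limit despite the short daily duration.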

  11. Comparing auditory filter bandwidths, spectral ripple modulation detection, spectral ripple discrimination, and speech recognition: Normal and impaired hearing

    Science.gov (United States)

    Davies-Venn, Evelyn; Nelson, Peggy; Souza, Pamela

    2015-01-01

    Some listeners with hearing loss show poor speech recognition scores in spite of using amplification that optimizes audibility. Beyond audibility, studies have suggested that suprathreshold abilities such as spectral and temporal processing may explain differences in amplified speech recognition scores. A variety of methods has been used to measure spectral processing. However, the relationship between spectral processing and speech recognition is still inconclusive. This study evaluated the relationship between spectral processing and speech recognition in listeners with normal hearing and with hearing loss. Narrowband spectral resolution was assessed using auditory filter bandwidths estimated from simultaneous notched-noise masking. Broadband spectral processing was measured using the spectral ripple discrimination (SRD) task and the spectral modulation detection (SMD) task. Three different measures were used to assess unamplified and amplified speech recognition in quiet and noise. Stepwise multiple linear regression revealed that SMD at 2.0 cycles per octave (cpo) significantly predicted speech scores for amplified and unamplified speech in quiet and noise. Commonality analyses revealed that SMD at 2.0 cpo combined with SRD and equivalent rectangular bandwidth measures to explain most of the variance captured by the regression model. Results suggest that SMD and SRD may be promising clinical tools for diagnostic evaluation and predicting amplification outcomes. PMID:26233047

  12. Use of nouns and verbs in the oral narrative of individuals with hearing impairment and normal hearing between 5 and 11 years of age

    Directory of Open Access Journals (Sweden)

    Erica Endo Amemiya

    Full Text Available CONTEXT AND OBJECTIVE: Nouns and verbs indicate objects and actions in oral communication. However, hearing impairment can compromise the acquisition of oral language to such an extent that appropriate use of these word classes can be challenging. The objective of this study was to compare the use of nouns and verbs in the oral narrative of hearing-impaired and hearing children. DESIGN AND SETTING: Analytical cross-sectional study at the Department of Speech-Language and Hearing Sciences, Universidade Federal de São Paulo. METHODS: Twenty-one children with moderate to profound bilateral sensorineural hearing impairment and twenty-one with normal hearing (controls) were matched according to sex, school year and school type. A board showing pictures was presented to each child, to elicit a narrative and measure their performance in producing nouns and verbs. RESULTS: Twenty-two (52.4%) of the subjects were males. The mean age was 8 years (standard deviation, SD = 1.5). Comparing averages between the groups of boys and girls, we did not find any significant difference in their use of nouns, but among verbs, there was a significant difference regarding use of the imperative (P = 0.041): more frequent among boys (mean = 2.91). There was no significant difference in the use of nouns and verbs between deaf children and hearers, in relation to school type. Regarding use of the indicative, there was a nearly significant trend (P = 0.058). CONCLUSION: Among oralized hearing-impaired children who underwent speech therapy, performance in noun and verb use was similar to that of their hearing counterparts.

  13. A Taxonomy of Fatigue Concepts and Their Relation to Hearing Loss

    Science.gov (United States)

    Hornsby, Benjamin W. Y.; Naylor, Graham; Bess, Fred H.

    2016-01-01

    Fatigue is common in individuals with a variety of chronic health conditions and can have significant negative effects on quality of life. Although limited in scope, recent work suggests persons with hearing loss may be at increased risk for fatigue, in part due to effortful listening that is exacerbated by their hearing impairment. However, the…

  14. Variation in Music Player Listening Level as a Function of Campus Location.

    Science.gov (United States)

    Park, Yunea; Guercio, Diana; Ledon, Victoria; Le Prell, Colleen G

    2017-04-01

    There has been significant discussion in the literature regarding music player use by adolescents and young adults, including whether device use is driving an increase in hearing loss in these populations. While many studies report relatively safe preferred listening levels, some studies with college student participants have reported listening habits that may put individuals at risk for noise-induced hearing loss (NIHL) if those listening habits continue over the long term. The goal of the current investigation was to extend listening level data collection sites from urban city settings studied by others to a more rural campus setting. This was a prospective study. Participants were 138 students on the University of Florida campus (94 males, 44 females), 18 years or older (mean = 21 years; range: 18-33 years). In this investigation, the current output level (listening level) was measured from personal listening devices used by students as they passed by a recruiting table located in one of three areas of the University of Florida campus. One location was in an open-air campus square; the other two locations were outside the campus recreation building ("gym") and outside the undergraduate library, with participants recruited as they exited the gym or library buildings. After providing written informed consent, participants completed a survey that included questions about demographics and typical listening habits (hours per day, days per week). The output level on their device was then measured using a "Jolene" mannequin. Average listening levels for participants at the three locations were as follows: gym: 85.9 ± 1.4 dBA; campus square: 83.3 ± 2.0 dBA; library: 76.9 ± 1.3 dBA. After adjusting to free-field equivalent level, average listening levels were gym: 79.7 ± 1.4 dBA; campus square: 76.9 ± 2.1 dBA; library: 70.4 ± 1.4 dBA. There were no statistically significant differences between male and female listeners, and there were no reliable differences as a
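When pooling sound levels across sites or listeners, decibel values cannot simply be averaged arithmetically if an energy-based overall level is wanted; levels are first converted to linear intensities, averaged, and converted back. A small sketch, using the three unadjusted site means reported above purely as input values (the function name is an assumption, not from the study):

```python
import math

def energy_mean_db(levels):
    """Energetic (power) average of sound levels in dB:
    convert to linear intensity, take the mean, convert back to dB."""
    linear = [10 ** (level / 10) for level in levels]
    return 10 * math.log10(sum(linear) / len(linear))

# Site means from the abstract: gym, campus square, library (dBA)
print(round(energy_mean_db([85.9, 83.3, 76.9]), 1))  # → 83.4
```

The energetic mean (about 83.4 dBA here) is pulled toward the loudest site, which is why it exceeds the arithmetic mean of the same three values (82.0 dBA).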

  15. Sensory-motor relationships in speech production in post-lingually deaf cochlear-implanted adults and normal-hearing seniors: Evidence from phonetic convergence and speech imitation.

    Science.gov (United States)

    Scarbel, Lucie; Beautemps, Denis; Schwartz, Jean-Luc; Sato, Marc

    2017-07-01

    Speech communication can be viewed as an interactive process involving a functional coupling between sensory and motor systems. One striking example comes from phonetic convergence, when speakers automatically tend to mimic their interlocutor's speech during communicative interaction. The goal of this study was to investigate sensory-motor linkage in speech production in post-lingually deaf cochlear-implanted participants and normal-hearing elderly adults through phonetic convergence and imitation. To this aim, two vowel production tasks, with or without instruction to imitate an acoustic vowel, were proposed to three groups: young adults with normal hearing, elderly adults with normal hearing, and post-lingually deaf cochlear-implanted patients. The deviation of each participant's f0 from their own mean f0 was measured to evaluate the ability to converge to each acoustic target. Results showed that cochlear-implanted participants have the ability to converge to an acoustic target, both intentionally and unintentionally, albeit to a lower degree than young and elderly participants with normal hearing. By providing evidence for phonetic convergence and speech imitation, these results suggest that, as in young adults, perceptuo-motor relationships are efficient in elderly adults with normal hearing and that cochlear-implanted adults recovered significant perceptuo-motor abilities following cochlear implantation.

  16. Speech perception at positive signal-to-noise ratios using adaptive adjustment of time compression.

    Science.gov (United States)

    Schlueter, Anne; Brand, Thomas; Lemke, Ulrike; Nitzschner, Stefan; Kollmeier, Birger; Holube, Inga

    2015-11-01

    Positive signal-to-noise ratios (SNRs) characterize listening situations most relevant for hearing-impaired listeners in daily life and should therefore be considered when evaluating hearing aid algorithms. For this, a speech-in-noise test was developed and evaluated, in which the background noise is presented at fixed positive SNRs and the speech rate (i.e., the time compression of the speech material) is adaptively adjusted. In total, 29 younger and 12 older normal-hearing, as well as 24 older hearing-impaired listeners took part in repeated measurements. Younger normal-hearing and older hearing-impaired listeners conducted one of two adaptive methods which differed in adaptive procedure and step size. Analysis of the measurements with regard to list length and estimation strategy for thresholds resulted in a practical method measuring the time compression for 50% recognition. This method uses time-compression adjustment and step sizes according to Versfeld and Dreschler [(2002). J. Acoust. Soc. Am. 111, 401-408], with sentence scoring, lists of 30 sentences, and a maximum likelihood method for threshold estimation. Evaluation of the procedure showed that older participants obtained higher test-retest reliability compared to younger participants. Depending on the group of listeners, one or two lists are required for training prior to data collection.
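The adaptive adjustment described above can be illustrated with a toy 1-up/1-down staircase, which converges on the 50% recognition point by making the task harder after each correct response and easier after each error. The start value, step size, and fixed trial count below are arbitrary stand-ins, not the Versfeld and Dreschler parameters or the study's maximum-likelihood procedure.

```python
def run_staircase(respond, start=1.0, step=0.1, n_trials=30):
    """Simple 1-up/1-down adaptive track targeting 50% recognition:
    raise the time-compression factor (faster speech) after a correct
    response, lower it after an error. `respond(rate)` returns True if
    the sentence was recognized at compression factor `rate`."""
    rate = start
    track = []
    for _ in range(n_trials):
        track.append(rate)
        if respond(rate):
            rate += step                    # correct -> harder
        else:
            rate = max(step, rate - step)   # wrong -> easier
    return track

# Deterministic toy listener that succeeds below compression factor 1.5
track = run_staircase(lambda r: r < 1.5)
print(round(track[-1], 1))
```

In practice the threshold is not read off the last trial; it is estimated from reversal averages or, as in the abstract, by fitting a maximum-likelihood model to all trial outcomes.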

  17. Auditory, visual, and auditory-visual perceptions of emotions by young children with hearing loss versus children with normal hearing.

    Science.gov (United States)

    Most, Tova; Michaelis, Hilit

    2012-08-01

    This study aimed to investigate the effect of hearing loss (HL) on emotion-perception ability among young children with and without HL. A total of 26 children 4.0-6.6 years of age with prelingual sensorineural HL ranging from moderate to profound and 14 children with normal hearing (NH) participated. They were asked to identify happiness, anger, sadness, and fear expressed by an actress when uttering the same neutral nonsense sentence. Their auditory, visual, and auditory-visual perceptions of the emotional content were assessed. The accuracy of emotion perception among children with HL was lower than that of the NH children in all 3 conditions: auditory, visual, and auditory-visual. Perception through the combined auditory-visual mode significantly surpassed the auditory or visual modes alone in both groups, indicating that children with HL utilized the auditory information for emotion perception. No significant differences in perception emerged according to degree of HL. In addition, children with profound HL and cochlear implants did not perform differently from children with less severe HL who used hearing aids. The relatively high accuracy of emotion perception by children with HL may be explained by their intensive rehabilitation, which emphasizes suprasegmental and paralinguistic aspects of verbal communication.

  18. Testing the effects of a message framing intervention on intentions towards hearing loss prevention in adolescents.

    Science.gov (United States)

    de Bruijn, Gert-Jan; Spaans, Pieter; Jansen, Bastiaan; van't Riet, Jonathan

    2016-04-01

    Adolescent hearing loss is a public health problem that has eluded effective intervention. A persuasive message strategy was tested for its effectiveness on adolescents' intention to listen to music at a reduced volume. The messages manipulated both type of message frame [positive consequences of listening to music at a reduced volume (gain-framed) versus negative consequences of not listening to music at a reduced volume (loss-framed)] and type of temporal context (short-term versus long-term consequences). Participants were recruited from four vocational and secondary education schools in the Netherlands and message exposure took place online during class hours. Two weeks prior to message exposure, adolescents provided data on intention and risk perception towards hearing loss and use of (digital) music players. After message exposure, 194 adolescents (mean age = 14.71 years, SD = 1.00, 37.8% males) provided immediate follow-up data on intention. Results revealed that intention to listen to music at a reduced volume increased in those exposed to a loss-framed message with short-term consequences. No changes were found in the other conditions. Messages that emphasize negative short-term consequences of not listening to music at a moderate volume have the ability to influence adolescents' intention towards hearing loss prevention.

  19. Impacts of Authentic Listening Tasks upon Listening Anxiety and Listening Comprehension

    Science.gov (United States)

    Melanlioglu, Deniz

    2013-01-01

    Although listening is the skill mostly used by students in the classrooms, the desired success cannot be attained in teaching listening since this skill is shaped by multiple variables. In this research we focused on listening anxiety, listening comprehension and impact of authentic tasks on both listening anxiety and listening comprehension.…

  20. How Social Psychological Factors May Modulate Auditory and Cognitive Functioning During Listening.

    Science.gov (United States)

    Pichora-Fuller, M Kathleen

    2016-01-01

    The framework for understanding effortful listening (FUEL) draws on psychological theories of cognition and motivation. In the present article, theories of social-cognitive psychology are related to the FUEL. Listening effort is defined in our consensus as the deliberate allocation of mental resources to overcome obstacles in goal pursuit when carrying out a task that involves listening. Listening effort depends not only on hearing difficulties and task demands but also on the listener's motivation to expend mental effort in challenging situations. Listeners' cost/benefit evaluations involve appraisals of listening demands, their own capacity, and the importance of listening goals. Social psychological factors can affect a listener's actual and self-perceived auditory and cognitive abilities, especially when those abilities may be insufficient to readily meet listening demands. Whether or not listeners experience stress depends not only on how demanding a situation is relative to their actual abilities but also on how they appraise their capacity to meet those demands. The self-perception or appraisal of one's abilities can be lowered by poor self-efficacy or negative stereotypes. Stress may affect performance in a given situation and chronic stress can have deleterious effects on many aspects of health, including auditory and cognitive functioning. Social support can offset demands and mitigate stress; however, the burden of providing support may stress the significant other. Some listeners cope by avoiding challenging situations and withdrawing from social participation. Extending the FUEL using social-cognitive psychological theories may provide valuable insights into how effortful listening could be reduced by adopting health-promoting approaches to rehabilitation.

  1. e-Health technologies for adult hearing screening

    Directory of Open Access Journals (Sweden)

    S. Stenfelt

    2011-03-01

    Full Text Available The development of hearing diagnosis methods and hearing screening methods are not isolated phenomena: they are intimately related to changes in the cultural background and to advances in fields of medicine and engineering. In recent years, there has been a rapid evolution in the development of fast, easy and reliable techniques for low-cost hearing screening initiatives. Since adults and elderly people typically experience a reduced hearing ability in challenging listening situations [e.g., in background noise, in reverberation, or with competing speech (Pichora‑Fuller & Souza, 2003)], these newly developed screening tests mainly rely on the recognition of speech stimuli in noise, so that the listening difficulties actually experienced can be effectively targeted (Killion & Niquette, 2000). New tests based on the recognition of speech in noise are being developed on portable, battery-operated devices (see, for example, Paglialonga et al., 2011), or distributed diffusely using information and communication technologies. The evolution of e-Health and telemedicine has shifted focus from patients coming to the hearing clinic for hearing health evaluation towards the possibility of evaluating the hearing status remotely at home. So far, two ways of distributing the hearing test have primarily been used: ordinary telephone networks (excluding mobile networks) and the internet. When using the telephone network for hearing screening, the predominant test is a speech-in-noise test often referred to as the digit triplet test, where the subject's hearing status is evaluated as the speech-to-noise threshold for spoken digits. This test is today available in some ten countries in Europe, North America and Australia. The use of the internet as a testing platform allows several different types of hearing assessment tests such as questionnaires, different types of speech-in-noise tests, temporal gap detection, sound localization (minimum audible angle), and spectral

  2. Laboratory evaluation of an optimised internet-based speech-in-noise test for occupational high-frequency hearing loss screening: Occupational Earcheck.

    Science.gov (United States)

    Sheikh Rashid, Marya; Leensen, Monique C J; de Laat, Jan A P M; Dreschler, Wouter A

    2017-11-01

    The "Occupational Earcheck" (OEC) is a Dutch online self-screening speech-in-noise test developed for the detection of occupational high-frequency hearing loss (HFHL). This study evaluates an optimised version of the test and determines the most appropriate masking noise. The original OEC was improved by homogenisation of the speech material and by shortening the test. A laboratory-based cross-sectional study was performed in which the optimised OEC was evaluated in five alternative masking noise conditions. The study was conducted on 18 normal-hearing (NH) adults and 15 middle-aged listeners with HFHL. The OEC in a low-pass (LP) filtered stationary background noise (test version LP 3: with a cut-off frequency of 1.6 kHz and a noise floor of -12 dB) was the most accurate version tested. The test showed reasonable sensitivity (93%) and specificity (94%), and good test reliability (intra-class correlation coefficient: 0.84; mean within-subject standard deviation: 1.5 dB SNR; slope of psychometric function: 13.1%/dB SNR). The improved OEC, with homogeneous word material in LP filtered noise, appears to be suitable for discriminating between younger NH listeners and older listeners with HFHL. The appropriateness of the OEC for screening purposes in an occupational setting will be studied further.
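The reported slope of 13.1%/dB SNR describes the steepness of the psychometric function at its 50% point. A minimal sketch of the commonly used logistic model; the SRT of -8 dB SNR below is a made-up placeholder, not a value from the study, and the parameterization (midpoint slope, so the logistic rate constant is four times the slope) is one common convention rather than the authors' exact fit.

```python
import math

def psychometric(snr_db, srt_db, slope_at_srt):
    """Logistic psychometric function: probability of recognition vs SNR.
    slope_at_srt is the slope in proportion correct per dB at the 50%
    point, so the logistic rate constant k equals 4 * slope_at_srt."""
    k = 4.0 * slope_at_srt
    return 1.0 / (1.0 + math.exp(-k * (snr_db - srt_db)))

# With a placeholder SRT of -8 dB SNR and the reported 13.1 %/dB slope:
print(round(psychometric(-8.0, -8.0, 0.131), 2))  # 0.5 at the SRT by construction
```

A steep slope like this means a 2-3 dB change in SNR moves a listener from near-chance to near-ceiling performance, which is what makes a speech-in-noise test of this kind efficient for screening.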

  3. Reducing Listening-Related Stress in School-Aged Children with Autism Spectrum Disorder.

    Science.gov (United States)

    Rance, Gary; Chisari, Donella; Saunders, Kerryn; Rault, Jean-Loup

    2017-07-01

    High levels of stress and anxiety are common in children with Autism Spectrum Disorder (ASD). Within this study of school-aged children (20 male, 6 female) we hypothesised that functional hearing deficits (also pervasive in ASD) could be ameliorated by auditory interventions and that, as a consequence, stress levels would be reduced. The use of Ear-Level Remote Microphone devices and Classroom Amplification systems resulted in significantly improved listening, communication and social interaction and a reduction in physiologic stress levels (salivary cortisol) in both one-on-one and group listening situations.

  4. Commentary: Listening Can Be Exhausting--Fatigue in Children and Adults with Hearing Loss

    Science.gov (United States)

    Bess, Fred H.; Hornsby, Benjamin W.Y.

    2014-01-01

    Anecdotal reports of fatigue after sustained speech-processing demands are common among adults with hearing loss; however, systematic research examining hearing loss-related fatigue is limited, particularly with regard to fatigue among children with hearing loss (CHL). Many audiologists, educators, and parents have long suspected that CHL…

  5. Evaluation of the effects of nonlinear frequency compression on speech recognition and sound quality for adults with mild to moderate hearing loss.

    Science.gov (United States)

    Picou, Erin M; Marcrum, Steven C; Ricketts, Todd A

    2015-03-01

    While potentially improving audibility for listeners with considerable high frequency hearing loss, the effects of implementing nonlinear frequency compression (NFC) for listeners with moderate high frequency hearing loss are unclear. The purpose of this study was to investigate the effects of activating NFC for listeners who are not traditionally considered candidates for this technology. Participants wore study hearing aids with NFC activated for a 3-4 week trial period. After the trial period, they were tested with NFC and with conventional processing on measures of consonant discrimination threshold in quiet, consonant recognition in quiet, sentence recognition in noise, and acceptableness of sound quality of speech and music. Seventeen adult listeners with symmetrical, mild to moderate sensorineural hearing loss participated. Better ear, high frequency pure-tone averages (4, 6, and 8 kHz) were 60 dB HL or better. Activating NFC resulted in lower (better) thresholds for discrimination of /s/, whose spectral center was 9 kHz. There were no other significant effects of NFC compared to conventional processing. These data suggest that the benefits, and detriments, of activating NFC may be limited for this population.

  6. Validating a Method to Assess Lipreading, Audiovisual Gain, and Integration During Speech Reception With Cochlear-Implanted and Normal-Hearing Subjects Using a Talking Head.

    Science.gov (United States)

    Schreitmüller, Stefan; Frenken, Miriam; Bentz, Lüder; Ortmann, Magdalene; Walger, Martin; Meister, Hartmut

    Watching a talker's mouth is beneficial for speech reception (SR) in many communication settings, especially in noise and when hearing is impaired. Measures for audiovisual (AV) SR can be valuable in the framework of diagnosing or treating hearing disorders. This study addresses the lack of standardized methods in many languages for assessing lipreading, AV gain, and integration. A new method is validated that supplements a German speech audiometric test with visualizations of the synthetic articulation of an avatar, which was used because it can lip-sync auditory speech in a highly standardized way. Three hypotheses were formed according to the literature on AV SR that used live or filmed talkers. It was tested whether the respective effects could be reproduced with synthetic articulation: (1) cochlear implant (CI) users have a higher visual-only SR than normal-hearing (NH) individuals, and younger individuals obtain higher lipreading scores than older persons. (2) Both CI and NH listeners gain from presenting AV over unimodal (auditory or visual) sentences in noise. (3) Both CI and NH listeners efficiently integrate complementary auditory and visual speech features. In a controlled, cross-sectional study with 14 experienced CI users (mean age 47.4) and 14 NH individuals (mean age 46.3, similar broad age distribution), lipreading, AV gain, and integration on a German matrix sentence test were assessed. Visual speech stimuli were synthesized by the articulation of the Talking Head system "MASSY" (Modular Audiovisual Speech Synthesizer), which displayed standardized articulation with respect to the visibility of German phones. In line with the hypotheses and previous literature, CI users had a higher mean visual-only SR than NH individuals (CI, 38%; NH, 12%; p < 0.001). Age was correlated with lipreading such that within each group, younger individuals obtained higher visual-only scores than older persons (rCI = -0.54; p = 0.046; rNH = -0.78; p < 0.001). Both CI and NH

  7. Effect of Exogenous Cues on Covert Spatial Orienting in Deaf and Normal Hearing Individuals.

    Science.gov (United States)

    Prasad, Seema Gorur; Patil, Gouri Shanker; Mishra, Ramesh Kumar

    2015-01-01

    Deaf individuals have been known to process visual stimuli better at the periphery compared to the normal hearing population. However, very few studies have examined attention orienting in the oculomotor domain in the deaf, particularly when targets appear at variable eccentricity. In this study, we examined whether the visual perceptual processing advantage reported in deaf people also modulates spatial attentional orienting with eye movement responses. We used a spatial cueing task with cued and uncued targets that appeared at two different eccentricities and explored attentional facilitation and inhibition. We elicited both a saccadic and a manual response. The deaf showed a higher cueing effect for the ocular responses than the normal-hearing participants. However, there was no group difference for the manual responses. There was also higher facilitation at the periphery for both saccadic and manual responses, irrespective of group. These results suggest that, owing to their superior visual processing ability, the deaf may orient attention faster to targets. We discuss the results in terms of previous studies on cueing and attentional orienting in the deaf.

  8. Gaps-in-Noise test: gap detection thresholds in 9-year-old normal-hearing children.

    Science.gov (United States)

    Marculino, Carolina Finetti; Rabelo, Camila Maia; Schochat, Eliane

    2011-12-01

    To establish the standard criteria for the Gaps-in-Noise (GIN) test in 9-year-old normal-hearing children; to obtain the mean gap detection thresholds; and to verify the influence of the variables gender and ear on the gap detection thresholds. Forty normal-hearing individuals, 20 male and 20 female, with ages ranging from 9 years to 9 years and 11 months, were evaluated. The procedures performed were: anamnesis, audiological evaluation, acoustic immittance measures (tympanometry and acoustic reflex), Dichotic Digits Test, and GIN test. The results obtained were statistically analyzed. The results revealed similar performance of right and left ears in the population studied. There was also no difference regarding the variable gender. In the subjects evaluated, the mean gap detection thresholds were 4.4 ms for the right ear, and 4.2 ms for the left ear. The values obtained for right and left ear, as well as their standard deviations, can be used as standard criteria for 9-year-old children, regardless of ear or gender.

  9. Detection and identification of monaural and binaural pitch contours in dyslexic listeners

    DEFF Research Database (Denmark)

    Santurette, Sébastien; Dau, Torsten; Poelmans, Hanne

    2010-01-01

    found that a majority of dyslexic subjects were unable to hear binaural pitch, the latter obtained a clear response of dyslexic listeners to Huggins’ pitch (HP) (Cramer and Huggins, 1958). The present study clarified whether impaired binaural pitch perception is found in dyslexia. Results from a pitch...

  10. Listening comprehension across the adult lifespan.

    Science.gov (United States)

    Sommers, Mitchell S; Hale, Sandra; Myerson, Joel; Rose, Nathan; Tye-Murray, Nancy; Spehar, Brent

    2011-01-01

    Although age-related declines in perceiving spoken language are well established, the primary focus of research has been on perception of phonemes, words, and sentences. In contrast, relatively few investigations have been directed at establishing the effects of age on the comprehension of extended spoken passages. Moreover, most previous work has used extreme-group designs in which the performance of a group of young adults is contrasted with that of a group of older adults and little if any information is available regarding changes in listening comprehension across the adult lifespan. Accordingly, the goals of the current investigation were to determine whether there are age differences in listening comprehension across the adult lifespan and, if so, whether similar trajectories are observed for age-related changes in auditory sensitivity and listening comprehension. This study used a cross-sectional lifespan design in which approximately 60 individuals in each of 7 decades, from age 20 to 89 yr (a total of 433 participants), were tested on three different measures of listening comprehension. In addition, we obtained measures of auditory sensitivity from all participants. Changes in auditory sensitivity across the adult lifespan exhibited the progressive high-frequency loss typical of age-related hearing impairment. Performance on the listening comprehension measures, however, demonstrated a very different pattern, with scores on all measures remaining relatively stable until age 65 to 70 yr, after which significant declines were observed. Follow-up analyses indicated that this same general pattern was observed across three different types of passages (lectures, interviews, and narratives) and three different question types (information, integration, and inference). 
Multiple regression analyses indicated that low-frequency pure-tone average was the single largest contributor to age-related variance in listening comprehension for individuals older than 65 yr, but

  11. Variations in voice level and fundamental frequency with changing background noise level and talker-to-listener distance while wearing hearing protectors: A pilot study.

    Science.gov (United States)

    Bouserhal, Rachel E; Macdonald, Ewen N; Falk, Tiago H; Voix, Jérémie

    2016-01-01

    Speech production in noise with varying talker-to-listener distance has been well studied for the open-ear condition. However, occluding the ear canal can affect the auditory feedback and cause deviations from the models presented for the open-ear condition. Communication is a main concern for people wearing hearing protection devices (HPDs). Although practical, radio communication is cumbersome, as it does not distinguish designated receivers. A smarter radio communication protocol must be developed to alleviate this problem, and thus it is necessary to model speech production in noise while wearing HPDs. Such a model opens the door to radio communication systems that distinguish receivers and offer more efficient communication between persons wearing HPDs. This paper presents the results of a pilot study investigating the effects of occluding the ear on changes in voice level and fundamental frequency in noise and with varying talker-to-listener distance. Twelve participants with a mean age of 28 participated in this study. Compared to existing data, results show a trend similar to the open-ear condition, with the exception of the occluded quiet condition. This implies that a model can be developed to better understand speech production with the occluded ear.

  12. Noise induced hearing loss: Screening with pure-tone audiometry and speech-in-noise testing

    NARCIS (Netherlands)

    Leensen, M.C.J.

    2013-01-01

    Noise-induced hearing loss (NIHL) is a highly prevalent public health problem, caused by exposure to loud noises both during leisure time, e.g. by listening to loud music, and during work. In the past years NIHL was the most commonly reported occupational disease in the Netherlands. Hearing damage

  13. Speech understanding in noise with an eyeglass hearing aid: asymmetric fitting and the head shadow benefit of anterior microphones.

    Science.gov (United States)

    Mens, Lucas H M

    2011-01-01

    To test speech understanding in noise using array microphones integrated in an eyeglass device and to test if microphones placed anteriorly at the temple provide better directivity than above the pinna. Sentences were presented from the front and uncorrelated noise from 45, 135, 225 and 315°. Fifteen hearing impaired participants with a significant speech discrimination loss were included, as well as 5 normal hearing listeners. The device (Varibel) improved speech understanding in noise compared to most conventional directional devices with a directional benefit of 5.3 dB in the asymmetric fit mode, which was not significantly different from the bilateral fully directional mode (6.3 dB). Anterior microphones outperformed microphones at a conventional position above the pinna by 2.6 dB. By integrating microphones in an eyeglass frame, a long array can be used resulting in a higher directionality index and improved speech understanding in noise. An asymmetric fit did not significantly reduce performance and can be considered to increase acceptance and environmental awareness. Directional microphones at the temple seemed to profit more from the head shadow than above the pinna, better suppressing noise from behind the listener.

  14. Cognitive processing load across a wide range of listening conditions: insights from pupillometry.

    Science.gov (United States)

    Zekveld, Adriana A; Kramer, Sophia E

    2014-03-01

    The pupil response to speech masked by interfering speech was assessed across an intelligibility range from 0% to 99% correct. In total, 37 participants aged between 18 and 36 years and with normal hearing were included. Pupil dilation was largest at intermediate intelligibility levels, smaller at high intelligibility, and slightly smaller at very difficult levels. Participants who reported that they often gave up listening at low intelligibility levels had smaller pupil dilations in these conditions. Participants who were good at reading masked text had relatively large pupil dilation when intelligibility was low. We conclude that the pupil response is sensitive to processing load, and possibly reflects cognitive overload in difficult conditions. It seems affected by methodological aspects and individual abilities, but does not reflect subjective ratings. Copyright © 2014 Society for Psychophysiological Research.

  15. Attention, memory, and auditory processing in 10- to 15-year-old children with listening difficulties.

    Science.gov (United States)

    Sharma, Mridula; Dhamani, Imran; Leung, Johahn; Carlile, Simon

    2014-12-01

    The aim of this study was to examine attention, memory, and auditory processing in children with reported listening difficulty in noise (LDN) despite having clinically normal hearing. Twenty-one children with LDN and 15 children with no listening concerns (controls) participated. The clinically normed auditory processing tests included the Frequency/Pitch Pattern Test (FPT; Musiek, 2002), the Dichotic Digits Test (Musiek, 1983), the Listening in Spatialized Noise-Sentences (LiSN-S) test (Dillon, Cameron, Glyde, Wilson, & Tomlin, 2012), gap detection in noise (Baker, Jayewardene, Sayle, & Saeed, 2008), and masking level difference (MLD; Wilson, Moncrieff, Townsend, & Pillion, 2003). Also included were research-based psychoacoustic tasks, such as auditory stream segregation, localization, sinusoidal amplitude modulation (SAM), and fine structure perception. All were also evaluated on attention and memory test batteries. The LDN group was significantly slower switching their auditory attention and had poorer inhibitory control. Additionally, the group mean results showed significantly poorer performance on FPT, MLD, 4-Hz SAM, and memory tests. Close inspection of the individual data revealed that only 5 participants (out of 21) in the LDN group showed significantly poor performance on FPT compared with clinical norms. Further testing revealed the frequency discrimination of these 5 children to be significantly impaired. Thus, the LDN group showed deficits in attention switching and inhibitory control, whereas only a subset of these participants demonstrated an additional frequency resolution deficit.

  16. Survey of college students' MP3 listening: Habits, safety issues, attitudes, and education.

    Science.gov (United States)

    Hoover, Alicia; Krishnamurti, Sridhar

    2010-06-01

    To survey listening habits and attitudes of typical college students who use MP3 players and to investigate possible safety issues related to MP3 player listening. College students who were frequent MP3 player users (N = 428) filled out a 30-item online survey. Specific areas probed by the present survey included frequency and duration of MP3 player use, MP3 player volume levels used, types of earphones used, typical environments in which MP3 player was worn, specific activities related to safety while listening to MP3 players, and attitudes toward MP3 player use. The majority of listeners wore MP3 players for less than 2 hr daily at safe volume levels. About one third of respondents reported being distracted while wearing an MP3 player, and more than one third of listeners experienced soreness in their ears after a listening session. About one third of respondents reported occasionally using their MP3 players at maximum volume levels. Listeners indicated willingness to (a) reduce volume levels, (b) decrease listening duration, and (c) buy specialized earphones to conserve their hearing. The study found concerns regarding the occasional use of MP3 players at full volume and reduced environmental awareness among some college student users.

  17. Open-type congenital cholesteatoma: differential diagnosis for conductive hearing loss with a normal tympanic membrane.

    Science.gov (United States)

    Kim, Se-Hyung; Cho, Yang-Sun; Chu, Ho-Suk; Jang, Jeon-Yeob; Chung, Won-Ho; Hong, Sung Hwa

    2012-06-01

    In patients with progressive conductive hearing loss and a normal tympanic membrane (TM), and with soft tissue density in the middle ear cavity (MEC) on temporal bone computed tomography (TBCT) scan, open-type congenital cholesteatoma (OCC) should be highly suspected and a proper surgical plan that includes mastoid exploration and second-stage operation is required. The clinical presentation of OCC is very similar to congenital ossicular anomaly (COA) presenting with a conductive hearing loss with intact TM. Therefore, it is challenging to make a correct preoperative diagnosis in patients with OCC. We evaluated the clinical characteristics of OCC compared with those of COA to find diagnostic clues useful in diagnosis of OCC. The medical records of 12 patients with surgically proven OCC and 14 patients with surgically proven COA were reviewed for demographic data, otologic history, preoperative TBCT findings, intraoperative findings, and pre- and postoperative audiologic data. There was no difference between OCC and COA based on demographic data, preoperative hearing, and ossicular status on TBCT. However, the presence of progressive hearing loss, soft tissue density in the MEC on TBCT scan, and the need for mastoid surgery and second-stage operation were significantly more frequent in OCC patients.

  18. Pragmatic abilities in hearing-impaired children: a case-control study

    Directory of Open Access Journals (Sweden)

    Luana Curti

    2010-01-01

    PURPOSE: To evaluate the pragmatic abilities of a group of hearing-impaired children and compare them with normal-hearing peers. METHODS: Case-control study comprising 32 children of both genders aged between two and six years, paired by age: 16 hearing-impaired children with moderately severe to profound hearing loss and no other organic impairments (cases), and 16 normal-hearing children with no speech-language complaints (controls). The evaluation and analysis of pragmatic abilities were carried out using the ABFW-Pragmatics Test, following the instructions of its protocol. RESULTS: The mean age of the children studied was four years (SD = 1.3); there was a significant difference between cases and controls in the number of communicative acts per minute (p = 0.001). The hearing-impaired children showed fewer communicative initiatives than the normal-hearing children, and gestural communication was used by 13 (81.3%) of them and by five (32.2%) of the normal-hearing children (p = 0.004). There was no difference between the groups regarding communicative intentions (p = 0.465). CONCLUSION: The hearing-impaired children were able to interact in contextualized situations using communicative functions similar to those of hearing children, differing from them in the communicative means most often used.

  19. Listening to the ear

    Science.gov (United States)

    Shera, Christopher A.

    Otoacoustic emissions demonstrate that the ear creates sound while listening to sound, offering a promising acoustic window on the mechanics of hearing in awake, listening human beings. That window is clouded, however, by an incomplete knowledge of wave reflection and transmission, both forth and back within the cochlea and through the middle ear. This thesis "does windows," addressing wave propagation and scattering on both sides of the middle ear. A summary of highlights follows. Measurements of the cochlear input impedance in cat are used to identify a new symmetry in cochlear mechanics-termed "tapering symmetry" after its geometric interpretation in simple models-that guarantees that the wavelength of the traveling wave changes slowly with position near the stapes. Waves therefore propagate without reflection through the basal turns of the cochlea. Analytic methods for solving the cochlear wave equations using a perturbative scattering series are given and used to demonstrate that, contrary to common belief, conventional cochlear models exhibit negligible internal reflection whether or not they accurately represent the tapering symmetries of the inner ear. Frameworks for the systematic "deconstruction" of eardrum and middle-ear transduction characteristics are developed and applied to the analysis of noninvasive measurements of middle-ear and cochlear mechanics. A simple phenomenological model of inner-ear compressibility that correctly predicts hearing thresholds in patients with missing or disarticulated middle-ear ossicles is developed and used to establish an upper bound on cochlear compressibility several orders of magnitude smaller than that provided by direct measurements. Accurate measurements of stimulus frequency evoked otoacoustic emissions are performed and used to determine the form and frequency variation of the cochlear traveling-wave ratio noninvasively. Those measurements are inverted to obtain the spatial distribution of mechanical

  20. Pediatric hearing aid use: parent-reported challenges.

    Science.gov (United States)

    Muñoz, Karen; Olson, Whitney A; Twohig, Michael P; Preston, Elizabeth; Blaiser, Kristina; White, Karl R

    2015-01-01

    The aim of this study was to investigate parent-reported challenges related to hearing aid management and parental psychosocial characteristics during the first 3 years of the child's life. Using a cross-sectional survey design, surveys were distributed to parents of children with hearing loss via state Early Intervention programs in Utah and Indiana. Packets contained one family demographic form and two sets of three questionnaires to obtain responses from mothers and fathers separately: the Parent Hearing Aid Management Inventory explored parent access to information, parent confidence in performing skills, expectations, communication with the audiologist, and hearing aid use challenges. The Acceptance and Action Questionnaire measured psychological flexibility, experiential avoidance, and internal thought processes that can affect problem-solving ability and decrease an individual's ability to take value-based actions. The Patient Health Questionnaire identified symptoms of depression. Thirty-seven families completed questionnaires (35 mothers and 20 fathers). Most respondents were parents of toddlers (M = 22 months) who had been wearing binaural hearing aids for an average of 15 months. Both mothers and fathers reported that even though the amount of information they received was overwhelming, most (84%) preferred to have all the information at the beginning rather than receive it over an extended time period. Parents reported an array of challenges related to hearing aid management, with the majority related to daily management, hearing aid use, and emotional adjustment. Sixty-six percent of parents reported that an audiologist taught them how to complete a listening check using a stethoscope; however, only one-third reported doing a daily hearing aid listening check. 
Both mothers and fathers reported a wide range of variability in their confidence in performing activities related to hearing aid management, and most reported minimal confidence in their ability to

  1. The Influence of Cochlear Mechanical Dysfunction, Temporal Processing Deficits, and Age on the Intelligibility of Audible Speech in Noise for Hearing-Impaired Listeners

    Directory of Open Access Journals (Sweden)

    Peter T. Johannesen

    2016-05-01

    The aim of this study was to assess the relative importance of cochlear mechanical dysfunction, temporal processing deficits, and age on the ability of hearing-impaired listeners to understand speech in noisy backgrounds. Sixty-eight listeners took part in the study. They were provided with linear, frequency-specific amplification to compensate for their audiometric losses, and intelligibility was assessed for speech-shaped noise (SSN) and a time-reversed two-talker masker (R2TM). Behavioral estimates of cochlear gain loss and residual compression were available from a previous study and were used as indicators of cochlear mechanical dysfunction. Temporal processing abilities were assessed using frequency modulation detection thresholds. Age, audiometric thresholds, and the difference between audiometric threshold and cochlear gain loss were also included in the analyses. Stepwise multiple linear regression models were used to assess the relative importance of the various factors for intelligibility. Results showed that (a) cochlear gain loss was unrelated to intelligibility, (b) residual cochlear compression was related to intelligibility in SSN but not in R2TM, (c) temporal processing was strongly related to intelligibility in R2TM and much less so in SSN, and (d) age per se impaired intelligibility. In summary, all factors affected intelligibility, but their relative importance varied across maskers.

  2. Horizontal sound localization in cochlear implant users with a contralateral hearing aid.

    Science.gov (United States)

    Veugen, Lidwien C E; Hendrikse, Maartje M E; van Wanrooij, Marc M; Agterberg, Martijn J H; Chalupper, Josef; Mens, Lucas H M; Snik, Ad F M; John van Opstal, A

    2016-06-01

    Interaural differences in sound arrival time (ITD) and in level (ILD) enable us to localize sounds in the horizontal plane, and can support source segregation and speech understanding in noisy environments. It is uncertain whether these cues are also available to hearing-impaired listeners who are bimodally fitted, i.e. with a cochlear implant (CI) and a contralateral hearing aid (HA). Here, we assessed sound localization behavior of fourteen bimodal listeners, all using the same Phonak HA and an Advanced Bionics CI processor, matched with respect to loudness growth. We aimed to determine the availability and contribution of binaural (ILDs, temporal fine structure and envelope ITDs) and monaural (loudness, spectral) cues to horizontal sound localization in bimodal listeners, by systematically varying the frequency band, level and envelope of the stimuli. The sound bandwidth had a strong effect on the localization bias of bimodal listeners, although localization performance was typically poor for all conditions. Responses could be systematically changed by adjusting the frequency range of the stimulus, or by simply switching the HA and CI on and off. Localization responses were largely biased to one side, typically the CI side for broadband and high-pass filtered sounds, and occasionally to the HA side for low-pass filtered sounds. HA-aided thresholds better than 45 dB HL in the frequency range of the stimulus appeared to be a prerequisite, but not a guarantee, for the ability to indicate sound source direction. We argue that bimodal sound localization is likely based on ILD cues, even at frequencies below 1500 Hz for which the natural ILDs are small. These cues are typically perturbed in bimodal listeners, leading to a biased localization percept of sounds. The high accuracy of some listeners could result from a combination of sufficient spectral overlap and loudness balance in bimodal hearing. Copyright © 2016 Elsevier B.V. All rights reserved.
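    For readers unfamiliar with the ILD cue discussed above, here is a minimal sketch of how a broadband interaural level difference could be computed from left- and right-ear signals. The function name and the toy 500-Hz stimulus are assumptions for illustration, not part of the study.

```python
import numpy as np

def broadband_ild_db(left, right, eps=1e-12):
    """Interaural level difference in dB (positive = right ear louder)."""
    rms_l = np.sqrt(np.mean(left ** 2)) + eps
    rms_r = np.sqrt(np.mean(right ** 2)) + eps
    return 20 * np.log10(rms_r / rms_l)

# Toy example: the right-ear signal is attenuated by 6 dB relative to the left,
# as if the head shadowed a source on the listener's left side.
fs = 16000
t = np.arange(fs) / fs
sig = np.sin(2 * np.pi * 500 * t)
left_ear = sig
right_ear = sig * 10 ** (-6 / 20)
print(round(broadband_ild_db(left_ear, right_ear), 1))  # → -6.0
```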

  3. Teaching Techniques: Four Ears Hear More than Two--A Competitive Team Approach to Listening Practice

    Science.gov (United States)

    Shikhantsov, Alexey

    2016-01-01

    This article explores a technique that can be used for almost any kind of classroom listening practice and with all kinds of classes. It seems to work well both in exam preparation and in regular textbook listening exercises.

  4. How Hearing Impairment Affects Sentence Comprehension: Using Eye Fixations to Investigate the Duration of Speech Processing

    DEFF Research Database (Denmark)

    Wendt, Dorothea; Kollmeier, Birger; Brand, Thomas

    2015-01-01

    ; this measure uses eye fixations recorded while the participant listens to a sentence. Eye fixations toward a target picture (which matches the aurally presented sentence) were measured in the presence of a competitor picture. Based on the recorded eye fixations, the single target detection amplitude, which...... reflects the tendency of the participant to fixate the target picture, was used as a metric to estimate the duration of sentence processing. The single target detection amplitude was calculated for sentence structures with different levels of linguistic complexity and for different listening conditions......: in quiet and in two different noise conditions. Participants with hearing impairment spent more time processing sentences, even at high levels of speech intelligibility. In addition, the relationship between the proposed online measure and listener-specific factors, such as hearing aid use and cognitive...

  5. The Effect of Learning Modality and Auditory Feedback on Word Memory: Cochlear-Implanted versus Normal-Hearing Adults.

    Science.gov (United States)

    Taitelbaum-Swead, Riki; Icht, Michal; Mama, Yaniv

    2017-03-01

    In recent years, the effect of cognitive abilities on the achievements of cochlear implant (CI) users has been evaluated. Some studies have suggested that gaps between CI users and normal-hearing (NH) peers in cognitive tasks are modality specific and occur only in auditory tasks. The present study focused on the effect of learning modality (auditory, visual) and auditory feedback on word memory in young adults who were prelingually deafened and received CIs before the age of 5 yr, and their NH peers. A production effect (PE) paradigm was used, in which participants learned familiar study words by vocal production (saying aloud) or by no-production (silent reading or listening). Words were presented (1) in the visual modality (written) and (2) in the auditory modality (heard). CI users performed the visual condition twice: once with the implant ON and once with it OFF. All conditions were followed by free recall tests. Twelve young adults, long-term CI users, implanted between ages 1.7 and 4.5 yr, and who scored ≥50% on a monosyllabic consonant-vowel-consonant open-set test with their implants were enrolled. A group of 14 age-matched NH young adults served as the comparison group. For each condition, we calculated the proportion of study words recalled. Mixed-measures analyses of variance were carried out with group (NH, CI) as a between-subjects variable and learning condition (aloud or silent reading) as a within-subject variable. Following this, paired-sample t tests were used to evaluate the PE size (differences between aloud and silent words) and overall recall ratios (aloud and silent words combined) in each of the learning conditions. With visual word presentation, young adults with CIs (regardless of implant status, CI-ON or CI-OFF) showed comparable memory performance (and a similar PE) to NH peers. However, with auditory presentation, young adults with CIs showed poorer memory for nonproduced words (hence a larger PE) relative to their NH peers. The

  6. Cognitive function predicts listening effort performance during complex tasks in normally aging adults

    Directory of Open Access Journals (Sweden)

    Jennine Harvey

    2017-01-01

    Purpose: This study examines whether cognitive function, as measured by subtests of the Woodcock–Johnson III (WCJ-III) assessment, predicts listening-effort performance during dual tasks in adults of varying ages. Materials and Methods: Participants were divided into two groups. Group 1 consisted of 14 listeners (number of females = 11) who were 41–61 years old [mean = 53.18; standard deviation (SD) = 5.97]. Group 2 consisted of 15 listeners (number of females = 9) who were 63–81 years old (mean = 72.07; SD = 5.11). Participants were administered the WCJ-III Memory for Words, Auditory Working Memory, Visual Matching, and Decision Speed subtests. All participants were tested in each of the following three dual-task experimental conditions, which varied in complexity: (1) auditory word recognition + visual processing, (2) auditory working memory (word) + visual processing, and (3) auditory working memory (sentence) + visual processing in noise. Results: A repeated-measures analysis of variance revealed that task complexity significantly affected the performance measures of auditory accuracy, visual accuracy, and processing speed. Linear regression revealed that the cognitive subtests of the WCJ-III test significantly predicted performance across dependent variable measures. Conclusion: Listening effort is significantly affected by task complexity, regardless of age. Performance on the WCJ-III test may predict listening effort in adults and may assist speech-language pathologists (SLPs) in understanding challenges faced by participants when subjected to noise.

  7. Precise Head Tracking in Hearing Applications

    Science.gov (United States)

    Helle, A. M.; Pilinski, J.; Luhmann, T.

    2015-05-01

    The paper gives an overview of two research projects, both dealing with optical head tracking in hearing applications. As part of the project "Development of a real-time low-cost tracking system for medical and audiological problems (ELCoT)," a cost-effective single-camera 3D tracking system has been developed which enables the detection of arm and head movements of human patients. Amongst others, the measuring system is designed for a new hearing test (based on the "Mainzer Kindertisch") that analyzes the directional hearing capabilities of children, in cooperation with the research project ERKI (Evaluation of acoustic sound source localization for children). As part of the research project framework "Hearing in everyday life (HALLO)," a stereo tracking system is being used to analyze the head movement of human patients during complex acoustic events. Together with the consideration of biosignals like skin conductance, the speech comprehension and listening effort of persons with reduced hearing ability, especially in situations with background noise, are evaluated. For both projects the system design, accuracy aspects, and results of practical tests are discussed.

  8. The development and standardization of Self-assessment for Hearing Screening of the Elderly

    Directory of Open Access Journals (Sweden)

    Kim G

    2016-06-01

    Gibbeum Kim,1 Wondo Na,1 Gungu Kim,1 Woojae Han,2 Jinsook Kim2 1Department of Speech Pathology and Audiology, Hallym University Graduate School, Chuncheon, Republic of Korea; 2Division of Speech Pathology and Audiology, Research Institute of Audiology and Speech Pathology, College of Natural Sciences, Hallym University, Chuncheon, Republic of Korea Purpose: The present study aimed to develop and standardize a screening tool with which elderly people can check their level of hearing loss for themselves. Methods: The Self-assessment for Hearing Screening of the Elderly (SHSE) consisted of 20 questions based on the characteristics of presbycusis, using a five-point scale: seven questions covered general issues related to sensorineural hearing loss, seven covered hearing difficulty under distracting listening conditions, two covered hearing difficulty with fast-rated speech, and four covered working memory function during communication. To standardize the SHSE, 83 elderly participants took part in the study: 25 with normal hearing, and 22, 23, and 13 with mild, moderate, and moderate-to-severe sensorineural hearing loss, respectively, according to their hearing sensitivity. All were retested 3 weeks later using the same questionnaire to confirm its reliability. In addition, validity was assessed using various hearing tests such as a sentence test with background noise, a time-compressed speech test, and a digit span test. Results: The SHSE and its subcategories showed good internal consistency and high test–retest reliability. A high correlation was observed between the total scores and pure-tone thresholds, with SHSE scores increasing gradually from 42.24% to 55.27%, 66.61%, and 78.15% for the normal-hearing, mild, moderate, and moderate-to-severe groups, respectively. With regard to construct validity, the SHSE showed a high negative correlation with speech perception scores in noise and a moderate negative

  9. Fast self paced listening times in syntactic comprehension in aphasia -- implications for deficits

    Directory of Open Access Journals (Sweden)

    Jennifer Michaud

    2015-04-01

    (the DV) against corrected self paced listening times for the corresponding words in the baseline sentences (the IV) in correct responses. We call the residuals of these regressions "relative corrected listening times." Relative corrected listening times are based on listening times at points at which task-related operations are similar, and therefore factor out these effects. They also factor out differences in general speed of processing and motor functioning, which determine the intercepts of the regressions. The relative corrected listening times for each participant therefore reflect the time taken by each pwa or control participant to perform the parsing and interpretation operations needed in the experimental sentences, compared to the time taken by the other controls or pwa. We performed these regressions separately in each of the four groups of sentences. We analyzed pwa and controls separately. We analyzed groups and not individuals, because performing these regressions on an individual basis would lead to a mean residual for an experimental sentence type compared to a baseline sentence type that approximates zero in each individual (this is a property of linear regression). We analyzed controls and pwa separately to produce separate estimates of basic speed of processing, decision-making, and other processes in controls and pwa. We calculated the normal range of relative corrected listening times for each sentence type group by applying the formula in Crawford and Howell (1998) to the results of the regressions in the controls. We then determined whether the relative corrected listening time for each group of sentences in each pwa was within the normal range of residuals. We focus on a finding that has not previously been reported, which is that, in some pwa, some relative corrected listening times (i.e., residuals of the regression of critical words in experimental sentences against corresponding words in baseline sentences) were lower than those seen in
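    The residual-based approach described here can be sketched as follows. The helper names and toy data are assumptions for illustration, but the single-case statistic is the standard Crawford and Howell (1998) formula, t = (x − m) / (s · √(1 + 1/n)) with n − 1 degrees of freedom, for comparing one case against a small control sample.

```python
import numpy as np

def relative_corrected_times(exp_times, base_times):
    """Residuals of regressing experimental listening times on baseline times.
    By construction these residuals have mean ~0 within the fitted group."""
    slope, intercept = np.polyfit(base_times, exp_times, 1)
    return exp_times - (slope * base_times + intercept)

def crawford_howell_t(case_score, control_scores):
    """Crawford & Howell (1998) t comparing one case to a control sample."""
    n = len(control_scores)
    m = np.mean(control_scores)
    s = np.std(control_scores, ddof=1)  # sample SD; t has n - 1 df
    return (case_score - m) / (s * np.sqrt(1 + 1 / n))

# Toy data: baseline vs. experimental listening times (ms) for 6 controls.
base = np.array([420.0, 450.0, 400.0, 480.0, 430.0, 460.0])
expm = np.array([510.0, 560.0, 470.0, 610.0, 525.0, 580.0])
controls = relative_corrected_times(expm, base)
print(crawford_howell_t(40.0, controls))  # t for a case residual of +40 ms
```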

  10. Music-Listening Behavior of Adolescents and Hearing Conservation: many risks, few precautions

    NARCIS (Netherlands)

    I. Vogel (Ineke)

    2009-01-01

    Noise-induced hearing loss (NIHL) is a significant social and public-health problem. Long-term exposure to high-volume levels will cause permanent hearing loss after 5-10 years. With the massive spread in the popularity of portable MP3 players, exposure to high sound levels has increased

  11. A portable digital speech-rate converter for hearing impairment.

    Science.gov (United States)

    Nejime, Y; Aritsuka, T; Imamura, T; Ifukube, T; Matsushima, J

    1996-06-01

    A real-time hand-sized portable device that slows speech without changing its pitch is proposed for hearing impairment. By using this device, people can listen to fast speech at a comfortable speed. A combination of solid-state memory recording and real-time digital signal processing with a single-chip processor enables this unique function. A simplified pitch-synchronous time-scale-modification algorithm is proposed to minimize the complexity of the DSP operation. Unlike the traditional algorithm, this dynamic-processing algorithm reduces distortion even when the expansion rate is only just above 1. Seven out of 10 elderly hearing-impaired listeners showed improvement in a sentence recognition test when using speech-rate conversion with the largest expansion rate, although no improvement was observed in a word recognition test. Some subjects who showed large improvement had limited auditory temporal resolution, but the correlation was not significant. The results suggest that, unlike conventional hearing aids, this device can be used to overcome the deterioration of auditory ability by improving the transfer of information from short-term (echoic) memory into a more stable memory trace in the human auditory system.
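The core idea of time-scale modification, re-spacing short analysis frames at the output so speech plays more slowly at the same pitch, can be sketched with a plain overlap-add routine. The study's actual algorithm is pitch-synchronous; the version below is only a simplified stand-in, and frame and hop sizes are arbitrary choices:

```python
import math

def ola_stretch(signal, rate, frame=256, hop_in=64):
    """Minimal overlap-add time-scale modification: window the input
    into overlapping frames spaced hop_in samples apart, then add them
    back spaced hop_in * rate apart. rate > 1 slows the speech. A real
    pitch-synchronous TSM would align frames to pitch periods to avoid
    phase artifacts; this sketch just re-spaces Hann-windowed frames."""
    hop_out = int(hop_in * rate)
    n_frames = max(1, (len(signal) - frame) // hop_in + 1)
    out = [0.0] * (hop_out * (n_frames - 1) + frame)
    norm = [0.0] * len(out)
    win = [0.5 - 0.5 * math.cos(2 * math.pi * i / (frame - 1)) for i in range(frame)]
    for f in range(n_frames):
        a, b = f * hop_in, f * hop_out
        for i in range(frame):
            out[b + i] += signal[a + i] * win[i]
            norm[b + i] += win[i]
    # Normalize by the summed window so overlapping regions keep unit gain.
    return [o / n if n > 1e-9 else 0.0 for o, n in zip(out, norm)]

sig = [math.sin(2 * math.pi * 220 * i / 8000) for i in range(1024)]  # 220 Hz tone
slowed = ola_stretch(sig, 2.0)  # roughly twice as long, same local waveform
```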

  12. Negotiating hearing disability and hearing disabled identities

    DEFF Research Database (Denmark)

    Lykke Hindhede, Anette

    2012-01-01

        Using disability theory as a framework and social science theories of identity to strengthen the arguments, this paper explores empirically how working-age adults confront the medical diagnosis of hearing impairment. For most participants hearing impairment threatens the stability of social...... interaction and the construction of hearing disabled identities is seen as shaped in the interaction with the hearing impaired person's surroundings. In order to overcome the potential stigmatisation the 'passing' as normal becomes predominant. For many the diagnosis provokes radical redefinitions of the self....... The discursively produced categorisation and subjectivity of senescence mean that rehabilitation technologies such as hearing aids identify a particular life-style (disabled) which determines their social significance. Thus wearing a hearing aid works against the contemporary attempt to create socially ideal...

  13. The use of listening devices to ameliorate auditory deficit in children with autism.

    Science.gov (United States)

    Rance, Gary; Saunders, Kerryn; Carew, Peter; Johansson, Marlin; Tan, Johanna

    2014-02-01

    To evaluate both monaural and binaural processing skills in a group of children with autism spectrum disorder (ASD) and to determine the degree to which personal frequency modulation (radio transmission) (FM) listening systems could ameliorate their listening difficulties. Auditory temporal processing (amplitude modulation detection), spatial listening (integration of binaural difference cues), and functional hearing (speech perception in background noise) were evaluated in 20 children with ASD. Ten of these subsequently underwent a 6-week device trial in which they wore the FM system for up to 7 hours per day. Auditory temporal processing and spatial listening ability were poorer in subjects with ASD than in matched controls (temporal: P = .014 [95% CI -6.4 to -0.8 dB], spatial: P = .003 [1.0 to 4.4 dB]), and performance on both of these basic processing measures was correlated with speech perception ability (temporal: r = -0.44, P = .022; spatial: r = -0.50, P = .015). The provision of FM listening systems resulted in improved discrimination of speech in noise (P listening devices can enhance speech perception in noise, aid social interaction, and improve educational outcomes in children with ASD. Copyright © 2014 Mosby, Inc. All rights reserved.

  14. The hearing benefit of cochlear implantation for individuals with unilateral hearing loss, but no tinnitus.

    Science.gov (United States)

    Skarzynski, Henryk; Lorens, Artur; Kruszynska, Marika; Obrycka, Anita; Pastuszak, Dorota; Skarzynski, Piotr Henryk

    2017-07-01

    Cochlear implants improve the hearing abilities of individuals with unilateral hearing loss and no tinnitus. The benefit is no different from that seen in patients with unilateral hearing loss and incapacitating tinnitus. To evaluate hearing outcomes after cochlear implantation in individuals with unilateral hearing loss and no tinnitus and compare them to those obtained in a similar group who had incapacitating tinnitus. Six cases who did not experience tinnitus before operation and 15 subjects with pre-operative tinnitus were evaluated with a structured interview, a monosyllabic word test under difficult listening situations, a sound localization test, and an APHAB (abbreviated profile of hearing aid benefit) questionnaire. All subjects used their cochlear implant more than 8 hours a day, 7 days a week. In 'no tinnitus' patients, mean benefit of cochlear implantation was 19% for quiet speech, 15% for speech in noise (with the same signal-to-noise ratio in the implanted and non-implanted ear), and 16% for a more favourable signal-to-noise ratio at the implanted ear. Sound localization error improved by an average of 19°. The global score of APHAB improved by 16%. The benefits across all evaluations did not differ significantly between the 'no tinnitus' and 'tinnitus' groups.

  15. How to quantify binaural hearing in patients with unilateral hearing using hearing implants.

    Science.gov (United States)

    Snik, Ad; Agterberg, Martijn; Bosman, Arjan

    2015-01-01

    Application of bilateral hearing devices in bilateral hearing loss and unilateral application in unilateral hearing loss (second ear with normal hearing) does not a priori lead to binaural hearing. An overview is presented on several measures of binaural benefits that have been used in patients with unilateral or bilateral deafness using one or two cochlear implants, respectively, and in patients with unilateral or bilateral conductive/mixed hearing loss using one or two percutaneous bone conduction implants (BCDs), respectively. Overall, according to this overview, the most significant and sensitive measure is the benefit in directional hearing. Measures using speech (viz. binaural summation, binaural squelch or use of the head shadow effect) showed minor benefits, except for patients with bilateral conductive/mixed hearing loss using two BCDs. Although less feasible in daily practise, the binaural masking level difference test seems to be a promising option in the assessment of binaural function. © 2015 S. Karger AG, Basel.

  16. Does hearing aid use affect audiovisual integration in mild hearing impairment?

    Science.gov (United States)

    Gieseler, Anja; Tahden, Maike A S; Thiel, Christiane M; Colonius, Hans

    2018-04-01

    There is converging evidence for altered audiovisual integration abilities in hearing-impaired individuals and those with profound hearing loss who are provided with cochlear implants, compared to normal-hearing adults. Still, little is known on the effects of hearing aid use on audiovisual integration in mild hearing loss, although this constitutes one of the most prevalent conditions in the elderly and, yet, often remains untreated in its early stages. This study investigated differences in the strength of audiovisual integration between elderly hearing aid users and those with the same degree of mild hearing loss who were not using hearing aids, the non-users, by measuring their susceptibility to the sound-induced flash illusion. We also explored the corresponding window of integration by varying the stimulus onset asynchronies. To examine general group differences that are not attributable to specific hearing aid settings but rather reflect overall changes associated with habitual hearing aid use, the group of hearing aid users was tested unaided while individually controlling for audibility. We found greater audiovisual integration together with a wider window of integration in hearing aid users compared to their age-matched untreated peers. Signal detection analyses indicate that a change in perceptual sensitivity as well as in bias may underlie the observed effects. Our results and comparisons with other studies in normal-hearing older adults suggest that both mild hearing impairment and hearing aid use seem to affect audiovisual integration, possibly in the sense that hearing aid use may reverse the effects of hearing loss on audiovisual integration. We suggest that these findings may be particularly important for auditory rehabilitation and call for a longitudinal study.
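The signal detection analyses mentioned in this record separate perceptual sensitivity (d′) from response bias (criterion c). A minimal sketch of that decomposition for a yes/no illusion task, with made-up trial counts (requires Python 3.8+ for `statistics.NormalDist`):

```python
from statistics import NormalDist

def dprime(hits, misses, fas, crs):
    """Sensitivity d' = z(hit rate) - z(false-alarm rate) and bias
    c = -(z(H) + z(F)) / 2 from raw trial counts. Rates are corrected
    with the log-linear rule to avoid z(0) or z(1) being infinite."""
    h = (hits + 0.5) / (hits + misses + 1)
    f = (fas + 0.5) / (fas + crs + 1)
    zh, zf = NormalDist().inv_cdf(h), NormalDist().inv_cdf(f)
    return zh - zf, -(zh + zf) / 2

# Hypothetical counts for one observer judging "two flashes?":
d, c = dprime(hits=40, misses=10, fas=15, crs=35)
```

A group difference in d′ would indicate changed perceptual sensitivity, while a shift in c (here slightly negative, a liberal "yes" tendency) would indicate changed bias, the two effects the study's analysis distinguishes.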

  17. Assessing the efficacy of hearing-aid amplification using a phoneme test

    DEFF Research Database (Denmark)

    Scheidiger, Christoph; Allen, Jont B; Dau, Torsten

    2017-01-01

    Consonant-vowel (CV) perception experiments provide valuable insights into how humans process speech. Here, two CV identification experiments were conducted in a group of hearing-impaired (HI) listeners, using 14 consonants followed by the vowel /ɑ/. The CVs were presented in quiet and with added......, in combination with a well-controlled phoneme speech test, may be used to assess the impact of hearing-aid signal processing on speech intelligibility....

  18. Individual differences in selective attention predict speech identification at a cocktail party.

    Science.gov (United States)

    Oberfeld, Daniel; Klöckner-Nowotny, Felicitas

    2016-08-31

    Listeners with normal hearing show considerable individual differences in speech understanding when competing speakers are present, as in a crowded restaurant. Here, we show that one source of this variance is individual differences in the ability to focus selective attention on a target stimulus in the presence of distractors. In 50 young normal-hearing listeners, the performance in tasks measuring auditory and visual selective attention was associated with sentence identification in the presence of spatially separated competing speakers. Together, the measures of selective attention explained a similar proportion of variance as the binaural sensitivity for the acoustic temporal fine structure. Working memory span, age, and audiometric thresholds showed no significant association with speech understanding. These results suggest that a reduced ability to focus attention on a target is one reason why some listeners with normal hearing sensitivity have difficulty communicating in situations with background noise.

  19. Comparison of general health status in mothers of hearing and hearing-impaired children

    Directory of Open Access Journals (Sweden)

    Movallali

    2013-05-01

    Full Text Available Background and Aim: The birth of a hearing-impaired child and raising him/her often brings special psychological feelings for parents, especially mothers, who spend more time with the child. This study aimed to compare the general health status of mothers of hearing-impaired and hearing children. Methods: This was a descriptive-analytic study. The General Health Questionnaire was used to identify general health status, and data were analyzed with the independent-t test. Results: The general health level of mothers of hearing-impaired children was lower than that of mothers of normal-hearing children (p=0.01). The average scores for anxiety (p=0.01), depression (p=0.01), physical symptoms (p=0.02), and social function (p=0.01) were higher in mothers of hearing-impaired children than in mothers of normal-hearing children. Conclusion: Having a child with hearing impairment affects mothers’ general health status. Our findings show that it is necessary to provide psychological and social support for mothers of hearing-impaired children.

  20. Hearing Benefit and Rated Satisfaction in Children with Unilateral Conductive Hearing Loss Using a Transcutaneous Magnetic-Coupled Bone-Conduction Hearing Aid.

    Science.gov (United States)

    Polonenko, Melissa J; Carinci, Lora; Gordon, Karen A; Papsin, Blake C; Cushing, Sharon L

    Bilateral hearing is important for learning, development, and function in complex everyday environments. Children with conductive and mixed hearing loss (HL) have been treated for years with percutaneous coupling through an abutment, which achieves powerful output, but the implant site is susceptible to skin reactions and trauma. To overcome these complications, transcutaneous magnetic coupling systems were recently introduced. The purpose of the study was to evaluate whether the new transcutaneous magnetic coupling is an effective coupling paradigm for bone-conduction hearing aids (BCHAs). We hypothesized that magnetic coupling will (1) have limited adverse events, (2) provide adequate functional gain, (3) improve spatial hearing and aid listening in everyday situations, and (4) provide satisfactory outcomes to children and their families given one normal hearing ear. Retrospective analysis of audiological outcomes in a tertiary academic pediatric hospital. Nine children aged 5-17 yr with permanent unilateral conductive HL (UCHL) or mixed HL were implanted with a transcutaneous magnet-retained BCHA. Average hearing thresholds of the better and implanted ears were 12.3 ± 11.5 dB HL and 69.1 ± 11.6 dB HL, respectively, with a 59.4 ± 4.8 dB (mean ± standard deviation) conductive component. Data were extracted from audiology charts of the children with permanent UCHL or mixed HL who qualified for a surgically retained BCHA and agreed to the magnetic coupling. Outcomes were collected from the 3- to 9-mo follow-up appointments, and included surgical complications, aided audiometric thresholds with varying magnet strength, speech performance in quiet and noise, and patient-rated benefit and satisfaction using questionnaires. Repeated measures analysis of variance was used to analyze audiometric outcomes, and nonparametric tests were used to evaluate rated benefit and satisfaction. All nine children tolerated the device and only one child had discomfort at the wound

  1. Objective Measures of Listening Effort: Effects of Background Noise and Noise Reduction

    Science.gov (United States)

    Sarampalis, Anastasios; Kalluri, Sridhar; Edwards, Brent; Hafter, Ervin

    2009-01-01

    Purpose: This work is aimed at addressing a seeming contradiction related to the use of noise-reduction (NR) algorithms in hearing aids. The problem is that although some listeners claim a subjective improvement from NR, it has not been shown to improve speech intelligibility, often even making it worse. Method: To address this, the hypothesis…

  2. Relative Weighting of Semantic and Syntactic Cues in Native and Non-Native Listeners' Recognition of English Sentences.

    Science.gov (United States)

    Shi, Lu-Feng; Koenig, Laura L

    2016-01-01

    Non-native listeners do not recognize English sentences as effectively as native listeners, especially in noise. It is not entirely clear to what extent such group differences arise from differences in relative weight of semantic versus syntactic cues. This study quantified the use and weighting of these contextual cues via Boothroyd and Nittrouer's j and k factors. The j represents the probability of recognizing sentences with or without context, whereas the k represents the degree to which context improves recognition performance. Four groups of 13 normal-hearing young adult listeners participated. One group consisted of native English monolingual (EMN) listeners, whereas the other three consisted of non-native listeners contrasting in their language dominance and first language: English-dominant Russian-English, Russian-dominant Russian-English, and Spanish-dominant Spanish-English bilinguals. All listeners were presented three sets of four-word sentences: high-predictability sentences included both semantic and syntactic cues, low-predictability sentences included syntactic cues only, and zero-predictability sentences included neither semantic nor syntactic cues. Sentences were presented at 65 dB SPL binaurally in the presence of speech-spectrum noise at +3 dB SNR. Listeners orally repeated each sentence and recognition was calculated for individual words as well as the sentence as a whole. Comparable j values across groups for high-predictability, low-predictability, and zero-predictability sentences suggested that all listeners, native and non-native, utilized contextual cues to recognize English sentences. Analysis of the k factor indicated that non-native listeners took advantage of syntax as effectively as EMN listeners. However, only English-dominant bilinguals utilized semantics to the same extent as EMN listeners; semantics did not provide a significant benefit for the two non-English-dominant groups. When combined, semantics and syntax benefitted EMN
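Boothroyd and Nittrouer's factors have closed forms: j relates whole-sentence recognition to word recognition (p_sentence = p_word**j), and k relates recognition with context to recognition without it (1 - p_context = (1 - p_no_context)**k). A small sketch with illustrative proportions, not the study's scores:

```python
import math

def j_factor(p_sentence, p_word):
    """j = log(p_s) / log(p_w): the effective number of statistically
    independent words a listener must recognize to get the whole
    sentence right (Boothroyd & Nittrouer, 1988)."""
    return math.log(p_sentence) / math.log(p_word)

def k_factor(p_context, p_no_context):
    """k solves 1 - p_c = (1 - p_nc)**k; k > 1 quantifies how much
    semantic/syntactic context boosts recognition."""
    return math.log(1 - p_context) / math.log(1 - p_no_context)

# Illustrative values: 80% word recognition, 41% whole-sentence recognition
# implies about four independent words per sentence; context lifting 70%
# to 90% correct corresponds to k near 2.
j = j_factor(p_sentence=0.41, p_word=0.80)
k = k_factor(p_context=0.90, p_no_context=0.70)
```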

  3. Listening to humans walking together activates the social brain circuitry.

    Science.gov (United States)

    Saarela, Miiamaaria V; Hari, Riitta

    2008-01-01

    Human footsteps carry a vast amount of social information, which is often unconsciously noted. Using functional magnetic resonance imaging, we analyzed brain networks activated by footstep sounds of one or two persons walking. Listening to two persons walking together activated brain areas previously associated with affective states and social interaction, such as the subcallosal gyrus bilaterally, the right temporal pole, and the right amygdala. These areas seem to be involved in the analysis of persons' identity and complex social stimuli on the basis of auditory cues. Single footsteps activated only the biological motion area in the posterior STS region. Thus, hearing two persons walking together involved a more widespread brain network than did hearing footsteps from a single person.

  4. Interaural level differences do not suffice for restoring spatial release from masking in simulated cochlear implant listening.

    Directory of Open Access Journals (Sweden)

    Antje Ihlefeld

    Full Text Available Spatial release from masking refers to a benefit for speech understanding. It occurs when a target talker and a masker talker are spatially separated. In those cases, speech intelligibility for target speech is typically higher than when both talkers are at the same location. In cochlear implant listeners, spatial release from masking is much reduced or absent compared with normal hearing listeners. Perhaps this reduced spatial release occurs because cochlear implant listeners cannot effectively attend to spatial cues. Three experiments examined factors that may interfere with deploying spatial attention to a target talker masked by another talker. To simulate cochlear implant listening, stimuli were vocoded with two unique features. First, we used 50-Hz low-pass filtered speech envelopes and noise carriers, strongly reducing the possibility of temporal pitch cues; second, co-modulation was imposed on target and masker utterances to enhance perceptual fusion between the two sources. Stimuli were presented over headphones. Experiments 1 and 2 presented high-fidelity spatial cues with unprocessed and vocoded speech. Experiment 3 maintained faithful long-term average interaural level differences but presented scrambled interaural time differences with vocoded speech. Results show a robust spatial release from masking in Experiments 1 and 2, and a greatly reduced spatial release in Experiment 3. Faithful long-term average interaural level differences were insufficient for producing spatial release from masking. This suggests that appropriate interaural time differences are necessary for restoring spatial release from masking, at least for a situation where there are few viable alternative segregation cues.

  5. Interaural level differences do not suffice for restoring spatial release from masking in simulated cochlear implant listening.

    Science.gov (United States)

    Ihlefeld, Antje; Litovsky, Ruth Y

    2012-01-01

    Spatial release from masking refers to a benefit for speech understanding. It occurs when a target talker and a masker talker are spatially separated. In those cases, speech intelligibility for target speech is typically higher than when both talkers are at the same location. In cochlear implant listeners, spatial release from masking is much reduced or absent compared with normal hearing listeners. Perhaps this reduced spatial release occurs because cochlear implant listeners cannot effectively attend to spatial cues. Three experiments examined factors that may interfere with deploying spatial attention to a target talker masked by another talker. To simulate cochlear implant listening, stimuli were vocoded with two unique features. First, we used 50-Hz low-pass filtered speech envelopes and noise carriers, strongly reducing the possibility of temporal pitch cues; second, co-modulation was imposed on target and masker utterances to enhance perceptual fusion between the two sources. Stimuli were presented over headphones. Experiments 1 and 2 presented high-fidelity spatial cues with unprocessed and vocoded speech. Experiment 3 maintained faithful long-term average interaural level differences but presented scrambled interaural time differences with vocoded speech. Results show a robust spatial release from masking in Experiments 1 and 2, and a greatly reduced spatial release in Experiment 3. Faithful long-term average interaural level differences were insufficient for producing spatial release from masking. This suggests that appropriate interaural time differences are necessary for restoring spatial release from masking, at least for a situation where there are few viable alternative segregation cues.
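The vocoding described here extracts a slowly varying speech envelope (low-pass filtered at 50 Hz to strip temporal pitch cues) and uses it to modulate a noise carrier. A single-band sketch of that envelope-vocoding step; real cochlear implant simulations do this in several band-pass channels, and the parameter values are illustrative:

```python
import math
import random

def noise_vocode(signal, fs=16000, cutoff=50.0):
    """Single-band envelope vocoder sketch: rectify the signal, smooth
    it with a one-pole low-pass at `cutoff` Hz (50 Hz here, as in the
    study's processing, which removes temporal pitch information), then
    modulate a white-noise carrier with the smoothed envelope."""
    alpha = math.exp(-2 * math.pi * cutoff / fs)
    env, e = [], 0.0
    for x in signal:
        e = alpha * e + (1 - alpha) * abs(x)  # one-pole envelope follower
        env.append(e)
    rng = random.Random(0)  # fixed seed so the sketch is reproducible
    return [e * rng.uniform(-1.0, 1.0) for e in env]

sig = [math.sin(2 * math.pi * 200 * i / 16000) for i in range(1600)]  # 200 Hz tone
y = noise_vocode(sig)  # noise shaped by the tone's slow envelope
```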

  6. More about…ENT

    African Journals Online (AJOL)

    more effectively in different communication situations. However, it is important to realise that hearing aids cannot 'cure' hearing loss and do not restore hearing to normal. Hearing aids will, through a process of amplification of sound, improve hearing and listening abilities. 'The function of a hearing aid is to amplify sounds to ...

  7. Speech perception benefits of FM and infrared devices to children with hearing aids in a typical classroom.

    Science.gov (United States)

    Anderson, Karen L; Goldstein, Howard

    2004-04-01

    Children typically learn in classroom environments that have background noise and reverberation that interfere with accurate speech perception. Amplification technology can enhance the speech perception of students who are hard of hearing. This study used a single-subject alternating treatments design to compare the speech recognition abilities of children who are hard of hearing when they were using hearing aids with each of three frequency modulated (FM) or infrared devices. Eight 9-12-year-olds with mild to severe hearing loss repeated Hearing in Noise Test (HINT) sentence lists under controlled conditions in a typical kindergarten classroom with a background noise level of +10 dB signal-to-noise (S/N) ratio and 1.1 s reverberation time. Participants listened to HINT lists using hearing aids alone and hearing aids in combination with three types of S/N-enhancing devices that are currently used in mainstream classrooms: (a) FM systems linked to personal hearing aids, (b) infrared sound field systems with speakers placed throughout the classroom, and (c) desktop personal sound field FM systems. The infrared ceiling sound field system did not provide benefit beyond that provided by hearing aids alone. Desktop and personal FM systems in combination with personal hearing aids provided substantial improvements in speech recognition. This information can assist in making S/N-enhancing device decisions for students using hearing aids. In a reverberant and noisy classroom setting, classroom sound field devices are not beneficial to speech perception for students with hearing aids, whereas either personal FM or desktop sound field systems provide listening benefits.

  8. Effect of gender on the hearing performance of adult cochlear implant patients.

    Science.gov (United States)

    Lenarz, Minoo; Sönmez, Hasibe; Joseph, Gert; Büchner, Andreas; Lenarz, Thomas

    2012-05-01

    To evaluate the role of gender on the hearing performance of postlingually deafened adult patients with cochlear implants. Individual retrospective cohort study. There were 638 postlingually deafened adults (280 men and 358 women) selected for a retrospective evaluation of their hearing performance with cochlear implants. Both genders underwent the same surgical and rehabilitative procedures and benefited from the latest technological advances available. There was no significant difference in the age, duration of deafness, and preoperative hearing performance between the genders. The test battery was composed of the Freiburger Monosyllabic Test, Speech Tracking, and the Hochmair-Schulz-Moser (HSM) sentence test in quiet and in 10-dB noise. The results of 5 years of follow-up are presented here. Genders showed a similar performance in Freiburger Monosyllabic Test and Speech Tracking Test. However, in the HSM test in noise, men performed slightly better than women in all of the follow-up sessions, which was statistically significant at 2 and 4 years after implantation. Although normal-hearing women use more predictive cognitive strategies in speech comprehension and are supposed to have a more efficient declarative memory system, this may not necessarily lead to a better adaptation to the altered auditory information delivered by a cochlear implant. Our study showed that in more complex listening situations such as speech tests in noise, men tend to perform slightly better than women. Gender may have an influence on the hearing performance of postlingually deafened adults with cochlear implants. Copyright © 2012 The American Laryngological, Rhinological, and Otological Society, Inc.

  9. Learning to listen: Listening Strategies and Listening Comprehension of Islamic Senior High School Students

    Directory of Open Access Journals (Sweden)

    DESMA YULISA

    2018-05-01

    Full Text Available The purpose of this research was to identify the correlation and the influence between listening strategies and listening comprehension. Eleventh-grade students were selected as participants of this study. The instruments used in this research were a listening strategies questionnaire adapted from Lee (1997) and modified by Ho (2006) (as cited in Golchi, 2012), and a listening comprehension test conducted to measure students’ listening comprehension. Pearson product moment, regression analysis, and R-square were used to find out the correlation and the influence between variables. The result revealed that there was a significant correlation between listening strategies and listening comprehension with r = .516. Besides, there was also a significant influence of listening strategies on listening comprehension of 26.6%. This study could have implications for English language teachers, course designers, learners, and textbook writers.
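The two statistics in this record are directly linked: in simple regression, R-square is the square of the Pearson correlation, so r = .516 implies .516² ≈ .266, i.e., the reported 26.6% of variance explained. A minimal sketch of the computation (the example data points are invented):

```python
import math

def pearson_r(xs, ys):
    """Pearson product-moment correlation:
    r = cov(x, y) / (sd(x) * sd(y))."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# For simple (one-predictor) regression, R-square is just r squared,
# which reproduces the paper's 26.6% from its r = .516:
r_squared = 0.516 ** 2  # ≈ 0.266
```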

  10. Comparison of psychological well-being and coping styles in mothers of deaf and normally-hearing children

    Directory of Open Access Journals (Sweden)

    Abdollah Ghasempour

    2012-12-01

    Full Text Available Background and Aim: Families who have a child with hearing deficiency deal with different challenges, and mothers have a greater responsibility towards these children because of their traditional role of caregiver, so they deal with more psychological problems. The aim of this study was to compare the psychological well-being and coping styles in mothers of deaf and normal children. Methods: In this cross-sectional, post-event study (causal-comparative method), 30 mothers of deaf students and 30 mothers of normal students from elementary schools of Ardabil, Iran, were selected using available sampling. The Ryff psychological well-being (1989) and Billings and Moos coping styles (1981) questionnaires were used in this study. The data were analyzed using the MANOVA test. Results: We found that in mothers of deaf children, psychological well-being and its components were significantly lower than in mothers of normal children (p<0.01 and p<0.05, respectively). There was a significant difference between the two groups in terms of cognitive coping style, too (p<0.01), with mothers of deaf children using the cognitive coping style less. Conclusions: It seems that a child's hearing loss affects mothers' psychological well-being and coping styles; this effect can appear as psychological problems and lower use of adaptive coping styles.

  11. Individual differences in selective attention predict speech identification at a cocktail party

    Science.gov (United States)

    Oberfeld, Daniel; Klöckner-Nowotny, Felicitas

    2016-01-01

    Listeners with normal hearing show considerable individual differences in speech understanding when competing speakers are present, as in a crowded restaurant. Here, we show that one source of this variance is individual differences in the ability to focus selective attention on a target stimulus in the presence of distractors. In 50 young normal-hearing listeners, the performance in tasks measuring auditory and visual selective attention was associated with sentence identification in the presence of spatially separated competing speakers. Together, the measures of selective attention explained a similar proportion of variance as the binaural sensitivity for the acoustic temporal fine structure. Working memory span, age, and audiometric thresholds showed no significant association with speech understanding. These results suggest that a reduced ability to focus attention on a target is one reason why some listeners with normal hearing sensitivity have difficulty communicating in situations with background noise. DOI: http://dx.doi.org/10.7554/eLife.16747.001 PMID:27580272

  12. Validation of the second version of the LittlEARS® Early Speech Production Questionnaire (LEESPQ) in German-speaking children with normal hearing.

    Science.gov (United States)

    Keilmann, Annerose; Friese, Barbara; Lässig, Anne; Hoffmann, Vanessa

    2018-04-01

    The introduction of neonatal hearing screening and the increasingly early age at which children can receive a cochlear implant have intensified the need for a validated questionnaire to assess the speech production of children aged 0‒18 months. Such a questionnaire has been created, the LittlEARS® Early Speech Production Questionnaire (LEESPQ). This study aimed to validate a second, revised edition of the LEESPQ. Questionnaires were returned for 362 children with normal hearing. Completed questionnaires were analysed to determine whether the LEESPQ is reliable, prognostically accurate, and internally consistent, and whether gender or multilingualism affects total scores. Total scores correlated positively with age. The LEESPQ is reliable, accurate, and consistent, and independent of gender or lingual status. A norm curve was created. This second version of the LEESPQ is a valid tool to assess the speech production development of children with normal hearing, aged 0‒18 months, regardless of their gender. As such, the LEESPQ may be a useful tool to monitor the development of paediatric hearing device users. The second version of the LEESPQ is a valid instrument for assessing early speech production of children aged 0‒18 months.

  13. Can You Hear Me Now? Jean-Jacques Rousseau on Listening Education

    Science.gov (United States)

    Laverty, Megan J.

    2011-01-01

    In this essay Megan J. Laverty argues that Jean-Jacques Rousseau's conception of humane communication and his proposal for teaching it have implications for our understanding of the role of listening in education. She develops this argument through a close reading of Rousseau's most substantial work on education, "Emile: Or, On Education". Laverty…

  14. Print Knowledge of Preschool Children with Hearing Loss

    Science.gov (United States)

    Werfel, Krystal L.; Lund, Emily; Schuele, C. Melanie

    2015-01-01

    Measures of print knowledge were compared across preschoolers with hearing loss and normal hearing. Alphabet knowledge did not differ between groups, but preschoolers with hearing loss performed lower on measures of print concepts and concepts of written words than preschoolers with normal hearing. Further study is needed in this area.

  15. Extended bandwidth nonlinear frequency compression in Mandarin-speaking hearing-aid users

    Directory of Open Access Journals (Sweden)

    Wen-Hsuan Tseng

    2018-02-01

    Conclusion: Patients with high-frequency hearing loss may benefit more from using EB-NLFC for word and consonant recognition; however, the improvement was small in a noisy listening environment. The subjective questionnaires did not show a significant benefit of EB-NLFC, either.

  16. Some Neurocognitive Correlates of Noise-Vocoded Speech Perception in Children With Normal Hearing: A Replication and Extension.

    Science.gov (United States)

    Roman, Adrienne S; Pisoni, David B; Kronenberger, William G; Faulkner, Kathleen F

    Noise-vocoded speech is a valuable research tool for testing experimental hypotheses about the effects of spectral degradation on speech recognition in adults with normal hearing (NH). However, very little research has utilized noise-vocoded speech with children with NH. Earlier studies with children with NH focused primarily on the amount of spectral information needed for speech recognition without assessing the contribution of neurocognitive processes to speech perception and spoken word recognition. In this study, we first replicated the seminal findings of an earlier study that investigated effects of lexical density and word frequency on noise-vocoded speech perception in a small group of children with NH. We then extended the research to investigate relations between noise-vocoded speech recognition abilities and five neurocognitive measures: auditory attention (AA) and response set, talker discrimination, and verbal and nonverbal short-term working memory. Thirty-one children with NH between 5 and 13 years of age were assessed on their ability to perceive lexically controlled words in isolation and in sentences that were noise-vocoded to four spectral channels. Children were also administered vocabulary assessments (Peabody Picture Vocabulary test-4th Edition and Expressive Vocabulary test-2nd Edition) and measures of AA (NEPSY AA and response set and a talker discrimination task) and short-term memory (visual digit and symbol spans). Consistent with the findings reported in the original study, we found that children perceived noise-vocoded lexically easy words better than lexically hard words. Words in sentences were also recognized better than the same words presented in isolation. No significant correlations were observed between noise-vocoded speech recognition scores and the Peabody Picture Vocabulary test-4th Edition using language quotients to control for age effects. However, children who scored higher on the Expressive Vocabulary test-2nd Edition

  17. Optimizing acoustical conditions for speech intelligibility in classrooms

    Science.gov (United States)

    Yang, Wonyoung

    High speech intelligibility is imperative in classrooms where verbal communication is critical. However, the optimal acoustical conditions to achieve a high degree of speech intelligibility have previously been investigated with inconsistent results, and practical room-acoustical solutions to optimize the acoustical conditions for speech intelligibility have not been developed. This experimental study validated auralization for speech-intelligibility testing, investigated the optimal reverberation for speech intelligibility for both normal and hearing-impaired listeners using more realistic room-acoustical models, and proposed an optimal sound-control design for speech intelligibility based on the findings. The auralization technique was used to perform subjective speech-intelligibility tests. The validation study, comparing auralization results with those of real classroom speech-intelligibility tests, found that if the room to be auralized is not very absorptive or noisy, speech-intelligibility tests using auralization are valid. The speech-intelligibility tests were done in two different auralized sound fields---approximately diffuse and non-diffuse---using the Modified Rhyme Test and both normal and hearing-impaired listeners. A hybrid room-acoustical prediction program was used throughout the work, and it and a 1/8 scale-model classroom were used to evaluate the effects of ceiling barriers and reflectors. For both subject groups, in approximately diffuse sound fields, when the speech source was closer to the listener than the noise source, the optimal reverberation time was zero. When the noise source was closer to the listener than the speech source, the optimal reverberation time was 0.4 s (with another peak at 0.0 s) with relative output power levels of the speech and noise sources SNS = 5 dB, and 0.8 s with SNS = 0 dB. In non-diffuse sound fields, when the noise source was between the speaker and the listener, the optimal reverberation time was 0.6 s with
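    The reverberation times discussed above are commonly estimated from room geometry and surface absorption via Sabine's formula, RT60 = 0.161 V / A. The following sketch is not from the study itself; the room dimensions and absorption coefficients are illustrative assumptions only:

```python
def sabine_rt60(volume_m3, surface_absorptions):
    """Estimate reverberation time RT60 (seconds) with Sabine's formula.

    volume_m3: room volume in cubic metres.
    surface_absorptions: iterable of (area_m2, absorption_coefficient) pairs;
    the products are summed into the total absorption A (in sabins).
    """
    total_absorption = sum(area * alpha for area, alpha in surface_absorptions)
    return 0.161 * volume_m3 / total_absorption

# Hypothetical small classroom, 7 m x 6 m x 3 m:
rt = sabine_rt60(
    volume_m3=7 * 6 * 3,
    surface_absorptions=[
        (7 * 6, 0.6),             # absorptive ceiling tile
        (7 * 6, 0.1),             # floor
        (2 * (7 + 6) * 3, 0.05),  # walls
    ],
)
```

    With these assumed coefficients the estimate lands near 0.6 s, i.e. within the 0.4‒0.8 s range the abstract reports as optimal in some conditions; in practice the coefficients are frequency dependent and the calculation is repeated per octave band.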

  18. Speech Production and Speech Discrimination by Hearing-Impaired Children.

    Science.gov (United States)

    Novelli-Olmstead, Tina; Ling, Daniel

    1984-01-01

    Seven hearing impaired children (five to seven years old) assigned to the Speakers group made highly significant gains in speech production and auditory discrimination of speech, while Listeners made only slight speech production gains and no gains in auditory discrimination. Combined speech and auditory training was more effective than auditory…

  19. Neural Correlates of Selective Attention With Hearing Aid Use Followed by ReadMyQuips Auditory Training Program.

    Science.gov (United States)

    Rao, Aparna; Rishiq, Dania; Yu, Luodi; Zhang, Yang; Abrams, Harvey

    The objectives of this study were to investigate the effects of hearing aid use and the effectiveness of ReadMyQuips (RMQ), an auditory training program, on speech perception performance and auditory selective attention using electrophysiological measures. RMQ is an audiovisual training program designed to improve speech perception in everyday noisy listening environments. Participants were adults with mild to moderate hearing loss who were first-time hearing aid users. After 4 weeks of hearing aid use, the experimental group completed RMQ training in 4 weeks, and the control group received listening practice on audiobooks during the same period. Cortical late event-related potentials (ERPs) and the Hearing in Noise Test (HINT) were administered at prefitting, pretraining, and post-training to assess effects of hearing aid use and RMQ training. An oddball paradigm allowed tracking of changes in P3a and P3b ERPs to distractors and targets, respectively. Behavioral measures were also obtained while ERPs were recorded from participants. After 4 weeks of hearing aid use but before auditory training, HINT results did not show a statistically significant change, but there was a significant P3a reduction. This reduction in P3a was correlated with improvement in d prime (d') in the selective attention task. Increased P3b amplitudes were also correlated with improvement in d' in the selective attention task. After training, this correlation between P3b and d' remained in the experimental group, but not in the control group. Similarly, HINT testing showed improved speech perception post training only in the experimental group. The criterion calculated in the auditory selective attention task showed a reduction only in the experimental group after training. ERP measures in the auditory selective attention task did not show any changes related to training. Hearing aid use was associated with a decrement in involuntary attention switch to distractors in the auditory selective
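    The d′ and criterion values mentioned above are standard signal-detection-theory measures derived from hit and false-alarm rates. A minimal sketch of the usual computation follows; the response counts are hypothetical, not the study's data:

```python
from statistics import NormalDist

def sdt_measures(hits, misses, false_alarms, correct_rejections):
    """Compute d' (sensitivity) and c (criterion) from response counts.

    A log-linear correction (add 0.5 to each count, 1 to each total) keeps
    rates of exactly 0 or 1 from producing infinite z-scores.
    """
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    z = NormalDist().inv_cdf           # inverse standard-normal CDF
    d_prime = z(hit_rate) - z(fa_rate)
    criterion = -0.5 * (z(hit_rate) + z(fa_rate))
    return d_prime, criterion

d, c = sdt_measures(hits=45, misses=5, false_alarms=10, correct_rejections=40)
```

    Higher d′ reflects better separation of targets from distractors; a lower (more negative) criterion reflects a more liberal response bias, which is the kind of post-training criterion reduction the abstract describes.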

  20. Dichotic listening performance predicts language comprehension.

    Science.gov (United States)

    Asbjørnsen, Arve E; Helland, Turid

    2006-05-01

    Dichotic listening performance is considered a reliable and valid procedure for the assessment of language lateralisation in the brain. However, the documentation of a relationship between language functions and dichotic listening performance is sparse, although it is accepted that dichotic listening measures language perception. In particular, language comprehension should show close correspondence to perception of language stimuli. In the present study, we tested samples of reading-impaired and normally achieving children between 10 and 13 years of age with tests of reading skills, language comprehension, and dichotic listening to consonant-vowel (CV) syllables. A high correlation between the language scores and the dichotic listening performance was expected. However, since the left-ear score is believed to be an error when assessing language laterality, covariation was expected for the right-ear scores only. In addition, directing attention to one ear's input was believed to reduce the influence of random factors, and thus yield a more precise estimate of left-hemisphere language capacity. Thus, a stronger correlation between language comprehension skills and dichotic listening performance when attending to the right ear was expected. The analyses yielded a positive correlation between the right-ear score in dichotic listening and language comprehension, an effect that was stronger when attending to the right ear. The present results confirm the assumption that dichotic listening with CV syllables measures an aspect of language perception and language skills that is related to general language comprehension.
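    Ear advantage in dichotic listening is commonly summarised with a laterality index, LI = 100 × (R − L) / (R + L), computed from correct reports per ear. The sketch below illustrates that convention with made-up scores; it is not the scoring procedure of the study above:

```python
def laterality_index(right_correct, left_correct):
    """Standard dichotic-listening laterality index, in percent.

    Positive values indicate a right-ear advantage (conventionally taken
    to reflect left-hemisphere language dominance); negative values
    indicate a left-ear advantage.
    """
    total = right_correct + left_correct
    if total == 0:
        raise ValueError("no correct reports in either ear")
    return 100.0 * (right_correct - left_correct) / total

# Hypothetical CV-syllable scores: 22 correct right-ear, 14 left-ear reports
li = laterality_index(right_correct=22, left_correct=14)
```

    Normalising by the total correct reports makes the index comparable across listeners with different overall accuracy, which matters when, as in the study above, groups differ in language skill.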