Hwang, Jung Sun; Kim, Kyung Hyun; Lee, Jae Hee
Despite amplified speech, listeners with hearing loss often report more difficulty understanding speech in background noise than normal-hearing listeners. Various factors, such as deteriorated hearing sensitivity, age, suprathreshold temporal resolution, and reduced working memory and attention capacity, can contribute to their sentence-in-noise problems. The present study aims to determine a primary explanatory factor for sentence-in-noise recognition difficulties in adults with or without hearing loss. Forty normal-hearing (NH) listeners (23-73 years) and thirty-four hearing-impaired (HI) listeners (24-80 years) participated in experimental testing. Both the NH and HI groups included younger, middle-aged, and older listeners. The sentence recognition score in noise was measured at a 0 dB signal-to-noise ratio. Temporal resolution was evaluated by gap detection performance using the Gaps-In-Noise test. Listeners' short-term auditory working memory span was measured by forward and backward digit spans. Overall, the HI listeners' sentence-in-noise recognition, temporal resolution abilities, and digit forward and backward spans were poorer than those of the NH listeners. Both NH and HI listeners showed substantial variability in performance. For NH listeners, only the digit backward span explained a small proportion of the variance in sentence-in-noise performance. For the HI listeners, performance on all measures was influenced by age, and their sentence-in-noise difficulties were associated with various factors such as high-frequency hearing sensitivity, suprathreshold temporal resolution abilities, and working memory span. For the HI listeners, the critical predictors of sentence-in-noise performance were composite measures of peripheral hearing sensitivity and suprathreshold temporal resolution abilities. The primary explanatory factors for sentence-in-noise recognition performance thus differ between NH and HI listeners. Factors…
Zaar, Johannes; Dau, Torsten
…, Kollmeier, and Kohlrausch [(1997). J. Acoust. Soc. Am. 102, 2892–2905]. The model was evaluated based on the extensive consonant perception data set provided by Zaar and Dau [(2015). J. Acoust. Soc. Am. 138, 1253–1267], which was obtained with normal-hearing listeners using 15 consonant-vowel combinations … confusion groups. The large predictive power of the proposed model suggests that adaptive processes in the auditory preprocessing, in combination with a cross-correlation-based template-matching back end, can account for some of the processes underlying consonant perception in normal-hearing listeners. … The proposed model may provide a valuable framework, e.g., for investigating the effects of hearing impairment and hearing-aid signal processing on phoneme recognition.
Santurette, Sébastien; Dau, Torsten
The effects of hearing impairment on the perception of binaural-pitch stimuli were investigated. Several experiments were performed with normal-hearing and hearing-impaired listeners, including detection and discrimination of binaural pitch, and melody recognition using different types of binaural pitches. For the normal-hearing listeners, all types of binaural pitches could be perceived immediately and were musical. The hearing-impaired listeners could be divided into three groups based on their results: (a) some perceived all types of binaural pitches, but with decreased salience or musicality compared to normal-hearing listeners; (b) some could only perceive the strongest pitch types; (c) some were unable to perceive any binaural pitch at all. The performance of the listeners was not correlated with audibility. Additional experiments investigated the correlation between performance in binaural…
Nielsen, Lars Bramsløw
11 hearing-impaired (HI) and 12 normal-hearing (NH) subjects performed sound quality ratings on 6 perceptual scales (Loudness, Clarity, Sharpness, Fullness, Spaciousness, and Overall judgement). The signals for the rating experiment consisted of running speech and music with or without …, but the normal-hearing group was slightly more reliable. There were significant differences between stimuli and between subjects, with stimuli affecting the ratings the most. Normal-hearing and hearing-impaired subjects showed similar trends, but normal-hearing listeners were generally more sensitive…
Nielsen, Lars Bramsløw
A new method for the objective estimation of sound quality for both normal-hearing and hearing-impaired listeners has been presented: OSSQAR (Objective Scaling of Sound Quality and Reproduction). OSSQAR is based on three main parts, which have been carried out and documented separately: 1) Subjective sound quality ratings of clean and distorted speech and music signals, by normal-hearing and hearing-impaired listeners, to provide reference data; 2) An auditory model of the ear, including the effects of hearing loss, based on existing psychoacoustic knowledge, coupled to 3) An artificial neural network, which was trained to predict the sound quality ratings. OSSQAR predicts the perceived sound quality on two independent perceptual rating scales: Clearness and Sharpness. These two scales were shown to be the most relevant for assessment of sound quality, and they were interpreted the same way…
Oxenham, Andrew J.; Dau, Torsten
… curvature. Results from 12 listeners with sensorineural hearing loss showed reduced masker phase effects, when compared with data from normal-hearing listeners, at both 250- and 1000-Hz signal frequencies. The effects of hearing impairment on phase-related masking differences were not well simulated … are affected by a common underlying mechanism, presumably related to cochlear outer hair cell function. The results also suggest that normal peripheral compression remains strong even at 250 Hz.
Arweiler, Iris; Dau, Torsten; Poulsen, Torben
Speech intelligibility depends on many factors, such as room acoustics, the acoustical properties and location of the signal and the interferers, and the ability of the (normal and impaired) auditory system to process monaural and binaural sounds. In the present study, the effect of reverberation on spatial release from masking was investigated in normal-hearing and hearing-impaired listeners using three types of interferers: speech-shaped noise, an interfering female talker, and speech-modulated noise. Speech reception thresholds (SRTs) were obtained in three simulated environments: a listening room, a classroom, and a church. The data from the study provide constraints for existing models of speech intelligibility prediction (based on the speech intelligibility index, SII, or the speech transmission index, STI), which have shortcomings when reverberation and/or fluctuating noise affect speech…
Vatti, Marianna; Santurette, Sébastien; Pontoppidan, Niels Henrik; Dau, Torsten
Purpose: Frequency fluctuations in human voices can usually be described as coherent frequency modulation (FM). As listeners with hearing impairment (HI listeners) are typically less sensitive to FM than listeners with normal hearing (NH listeners), this study investigated whether hearing loss affects the perception of a sung vowel based on FM…
There is a wide range of acoustic and visual variability across different talkers and different speaking contexts. Listeners with normal hearing accommodate that variability in ways that facilitate efficient perception, but it is not known whether listeners with cochlear implants can do the same. In this study, listeners with normal hearing (NH) and listeners with cochlear implants (CIs) were tested for accommodation to auditory and visual phonetic contexts created by gender-driven speech differences as well as vowel coarticulation and lip rounding in both consonants and vowels. Accommodation was measured as the shifting of perceptual boundaries between /s/ and /ʃ/ sounds in various contexts, as modeled by mixed-effects logistic regression. Owing to the spectral contrasts thought to underlie these context effects, CI listeners were predicted to perform poorly, but they showed considerable success. Listeners with cochlear implants not only showed sensitivity to auditory cues to gender; they were also able to use visual cues to gender (i.e., faces) as a supplement or proxy for information in the acoustic domain, in a pattern that was not observed for listeners with normal hearing. Spectrally degraded stimuli heard by listeners with normal hearing generally did not elicit strong context effects, underscoring the limitations of noise vocoders and/or the importance of experience with electric hearing. Visual cues for consonant lip rounding and vowel lip rounding were perceived in a manner consistent with coarticulation and were generally used more heavily by listeners with CIs. Results suggest that listeners with cochlear implants are able to accommodate various sources of acoustic variability, either by attending to appropriate acoustic cues or by inferring them via the visual signal.
Bianchi, Federica; Dau, Torsten; Santurette, Sébastien
Hearing-impaired (HI) listeners, as well as elderly listeners, typically have a reduced ability to discriminate the fundamental frequency (F0) of complex tones compared to young normal-hearing (NH) listeners. Several studies have shown that musical training, on the other hand, leads to improved F0-discrimination performance for NH listeners. It is unclear whether a comparable effect of musical training occurs for listeners whose sensory encoding of F0 is degraded. To address this question, F0 discrimination was investigated for three groups of listeners (14 young NH, 9 older NH and 10 HI listeners), each including musicians and non-musicians, using complex tones that differed in harmonic content. Musical training significantly improved F0 discrimination for all groups of listeners, especially for complex tones containing low-numbered harmonics. In a second experiment, the sensitivity to temporal fine…
Borg, Erik; Bergkvist, Christina; Gustafsson, Dan
What underlying mechanisms are involved in the ability to talk and listen simultaneously and what role does self-masking play under conditions of hearing impairment? The purpose of the present series of studies is to describe a technique for assessment of masked thresholds during vocalization, to describe normative data for males and females, and to focus on hearing impairment. The masking effect of vocalized [a:] on narrow-band noise pulses (250-8000 Hz) was studied using the maximum vocalization method. An amplitude-modulated series of sound pulses, which sounded like a steam engine, was masked until the criterion of halving the perceived pulse rate was reached. For masking of continuous reading, a just-follow-conversation criterion was applied. Intra-session test-retest reproducibility and inter-session variability were calculated. The results showed that female voices were more efficient in masking high frequency noise bursts than male voices and more efficient in masking both a male and a female test reading. The male had to vocalize 4 dBA louder than the female to produce the same masking effect on the test reading. It is concluded that the method is relatively simple to apply and has small intra-session and fair inter-session variability. Interesting gender differences were observed.
Aniansson, G.; Peterson, Y.
Speech intelligibility (PB words) in traffic-like noise was investigated in a laboratory situation simulating three common listening situations, indoors at 1 and 4 m and outdoors at 1 m. The maximum noise levels still permitting 75% intelligibility of PB words in these three listening situations were also defined. A total of 269 persons were examined. Forty-six had normal hearing, 90 a presbycusis-type hearing loss, 95 a noise-induced hearing loss and 38 a conductive hearing loss. In the indoor situation the majority of the groups with impaired hearing retained good speech intelligibility in 40 dB(A) masking noise. Lowering the noise level to less than 40 dB(A) resulted in a minor, usually insignificant, improvement in speech intelligibility. Listeners with normal hearing maintained good speech intelligibility in the outdoor listening situation at noise levels up to 60 dB(A), without lip-reading (i.e., using non-auditory information). For groups with impaired hearing due to age and/or noise, representing 8% of the population in Sweden, the noise level outdoors had to be lowered to less than 50 dB(A), in order to achieve good speech intelligibility at 1 m without lip-reading.
Locsei, Gusztav; Pedersen, Julie Hefting; Laugesen, Søren
This study investigated the relationship between speech perception performance in spatially complex, lateralized listening scenarios and temporal fine-structure (TFS) coding at low frequencies. Young normal-hearing (NH) and two groups of elderly hearing-impaired (HI) listeners with mild or moderate hearing loss above 1.5 kHz participated in the study. Speech reception thresholds (SRTs) were estimated in the presence of either speech-shaped noise, two-, four-, or eight-talker babble played reversed, or a nonreversed two-talker masker. Target audibility was ensured by applying individualized linear … threshold nor the interaural phase difference threshold tasks showed a correlation with the SRTs or with the amount of masking release due to binaural unmasking, respectively. The results suggest that, although HI listeners with normal hearing thresholds below 1.5 kHz experienced difficulties with speech…
In a complex acoustical environment with multiple sound sources, the auditory system uses streaming as a tool to organize the incoming sounds into one or more streams, depending on the stimulus parameters. Streaming is commonly studied with alternating sequences of signals, often tones with different frequencies. The present study investigates stream segregation in cochlear implant (CI) users, where hearing is restored by electrical stimulation of the auditory nerve. CI users listened to 30-s long sequences of alternating A and B harmonic complexes at four different fundamental frequency separations, ranging from 2 to 14 semitones. They had to indicate, as promptly as possible after sequence onset, whether they perceived one stream or two streams and, in addition, any changes of the percept throughout the rest of the sequence. The conventional view is that the initial percept is always that of a single stream, which may after some time change to a percept of two streams. This general build-up hypothesis has recently been challenged on the basis of a new analysis of data from normal-hearing listeners, which showed a build-up response only for an intermediate frequency separation. Using the same experimental paradigm and analysis, the present study found that the results of CI users agree with those of the normal-hearing listeners: (i) the probability of the first decision being a one-stream percept decreased and that of a two-stream percept increased as Δf increased, and (ii) a build-up was only found for 6 semitones. Only the time elapsed before the listeners made their first decision of the percept was prolonged compared to normal-hearing listeners. The similarity in the data of the CI users and the normal-hearing listeners indicates that the quality of stream formation is similar in these groups of listeners.
Reinhart, Paul N; Souza, Pamela E
Reverberation enhances music perception and is one of the most important acoustic factors in auditorium design. However, previous research on reverberant music perception has focused on young normal-hearing (YNH) listeners. Old hearing-impaired (OHI) listeners have degraded spatial auditory processing; therefore, they may perceive reverberant music differently. Two experiments were conducted examining the effects of varying reverberation on music perception for YNH and OHI listeners. Experiment 1 examined whether YNH listeners and OHI listeners prefer different amounts of reverberation for classical music listening. Symphonic excerpts were processed at a range of reverberation times using a point-source simulation. Listeners performed a paired-comparisons task in which they heard two excerpts with different reverberation times, and they indicated which they preferred. The YNH group preferred a reverberation time of 2.5 s; however, the OHI group did not demonstrate any significant preference. Experiment 2 examined whether OHI listeners are less sensitive to (i.e., less able to discriminate) differences in reverberation time than YNH listeners. YNH and OHI participants listened to pairs of music excerpts and indicated whether they perceived the same or different amount of reverberation. Results indicated that the ability of both groups to detect differences in reverberation time improved with increasing reverberation time difference. However, discrimination was poorer for the OHI group than for the YNH group. This suggests that OHI listeners are less sensitive to differences in reverberation when listening to music than YNH listeners, which might explain the absence of a significant reverberation time preference in the OHI group.
Brons, Inge; Dreschler, Wouter A; Houben, Rolph
Hearing-aid noise reduction should reduce background noise, but not disturb the target speech. This objective is difficult because noise reduction suffers from a trade-off between the amount of noise removed and signal distortion. It is unknown if this important trade-off differs between normal-hearing (NH) and hearing-impaired (HI) listeners. This study separated the negative effect of noise reduction (distortion) from the positive effect (reduction of noise) to allow the measurement of the detection threshold for noise-reduction (NR) distortion. Twelve NH subjects and 12 subjects with mild to moderate sensorineural hearing loss participated in this study. The detection thresholds for distortion were determined using an adaptive procedure with a three-interval, two-alternative forced-choice paradigm. Different levels of distortion were obtained by changing the maximum amount of noise reduction. Participants were also asked to indicate their preferred NR strength. The detection threshold for overall distortion was higher for HI subjects than for NH subjects, suggesting that stronger noise reduction can be applied for HI listeners without affecting the perceived sound quality. However, the preferred NR strength of HI listeners was closer to their individual detection threshold for distortion than in NH listeners. This implies that HI listeners tolerate fewer audible distortions than NH listeners.
Ohl, Björn; Laugesen, Søren; Buchholz, Jörg
The externalization of sound, i.e. the perception of auditory events as being located outside of the head, is a natural phenomenon for normal-hearing listeners when perceiving sound coming from a distant physical sound source. It is potentially useful for hearing in background noise, but the relevant cues might be distorted by a hearing impairment and also by the processing of the incoming sound through hearing aids. In this project, two intuitive tests in natural real-life surroundings were developed, which capture the limits of the perception of externalization. For this purpose…
The effectiveness of in-vehicle speech communication can be a good indicator of the perception of the overall vehicle quality and customer satisfaction. Currently available speech intelligibility metrics do not account in their procedures for essential parameters needed for a complete and accurate evaluation of in-vehicle speech intelligibility. These include the directivity and the distance of the talker with respect to the listener, binaural listening, the hearing profile of the listener, vocal effort, and multisensory hearing. In the first part of this research, the effectiveness of in-vehicle application of these metrics is investigated in a series of studies to reveal their shortcomings, including a wide range of scores resulting from each of the metrics for a given measurement configuration and vehicle operating condition. In addition, the nature of a possible correlation between the scores obtained from each metric is unknown. The metrics and the subjective perception of speech intelligibility using, for example, the same speech material have not been compared in the literature. As a result, in the second part of this research, an alternative method for speech intelligibility evaluation is proposed for use in the automotive industry by utilizing a virtual reality driving environment for ultimately setting targets, including the associated statistical variability, for future in-vehicle speech intelligibility evaluation. The Speech Intelligibility Index (SII) was evaluated at the sentence Speech Reception Threshold (sSRT) for various listening situations and hearing profiles using acoustic perception jury testing and a variety of talker and listener configurations and background noise. In addition, the effect of individual sources and transfer paths of sound in an operating vehicle on the vehicle interior sound, specifically their effect on speech intelligibility, was quantified in the framework of the newly developed speech intelligibility evaluation method. Lastly…
Zaar, Johannes; Jørgensen, Søren; Dau, Torsten
Speech perception is often studied in terms of natural meaningful speech, i.e., by measuring the intelligibility of a given set of single words or full sentences. However, when trying to understand how background noise, various sorts of transmission channels (e.g., mobile phones), or hearing … perception data: (i) an audibility-based approach, which corresponds to the Articulation Index (AI), and (ii) a modulation-masking-based approach, as reflected in the speech-based Envelope Power Spectrum Model (sEPSM). For both models, the internal representations of the same stimuli as used…
Lewis, Dawna; Schmid, Kendra; O'Leary, Samantha; Spalding, Jody; Heinrichs-Graham, Elizabeth; High, Robin
Purpose: This study examined the effects of stimulus type and hearing status on speech recognition and listening effort in children with normal hearing (NH) and children with mild bilateral hearing loss (MBHL) or unilateral hearing loss (UHL). Method: Children (5-12 years of age) with NH (Experiment 1) and children (8-12 years of age) with MBHL,…
Locsei, Gusztav; Santurette, Sébastien; Dau, Torsten
… SRMs are elicited by small ITDs. Speech reception thresholds (SRTs) and SRM due to ITDs were measured over headphones for 10 young NH and 10 older HI listeners, who had normal or close-to-normal hearing below 1.5 kHz. Diotic target sentences were presented in diotic or dichotic speech-shaped noise or two-talker babble maskers. In the dichotic conditions, maskers were lateralized by delaying the masker waveforms in the left headphone channel. Multiple magnitudes of masker ITDs were tested in both noise conditions. Although deficits were observed in speech perception abilities in speech-shaped noise and two-talker babble in terms of SRTs, HI listeners could utilize ITDs to a similar degree as NH listeners to facilitate the binaural unmasking of speech. A slight difference was observed between the group means when target and maskers were separated from each other by large ITDs, but not when separated…
Dirks, D D; Takayanagi, S; Moshfegh, A; Noffsinger, P D; Fausti, S A
Experiments were conducted to examine the effects of lexical information on word recognition among normal-hearing listeners and individuals with sensorineural hearing loss. The lexical factors of interest were incorporated in the Neighborhood Activation Model (NAM). Central to this model is the concept that words are recognized relationally in the context of other phonemically similar words. NAM suggests that words in the mental lexicon are organized into similarity neighborhoods, and the listener is required to select the target word from competing lexical items. Two structural characteristics of similarity neighborhoods that influence word recognition have been identified: "neighborhood density," or the number of phonemically similar words (neighbors) for a particular target item, and "neighborhood frequency," or the average frequency of occurrence of all the items within a neighborhood. A third lexical factor, "word frequency," or the frequency of occurrence of a target word in the language, is assumed to optimize the word recognition process by biasing the system toward choosing a high-frequency over a low-frequency word. Three experiments were performed. In the initial experiments, word recognition for consonant-vowel-consonant (CVC) monosyllables was assessed in young normal-hearing listeners by systematically partitioning the items into the eight possible lexical conditions that could be created by two levels of the three lexical factors: word frequency (high and low), neighborhood density (high and low), and average neighborhood frequency (high and low). Neighborhood structure and word frequency were estimated computationally using a large on-line lexicon based on Webster's Pocket Dictionary. From this program, 400 highly familiar monosyllables were selected and partitioned into eight orthogonal lexical groups (50 words/group). The 400 words were presented randomly to normal-hearing listeners in speech-shaped noise (Experiment 1) and "in quiet" (Experiment 2) as…
Purpose: This review provides clinicians with an overview of recent findings relevant to understanding why listeners with normal hearing thresholds (NHTs) sometimes suffer from communication difficulties in noisy settings. Method: The results from neuroscience and psychoacoustics are reviewed. Results: In noisy settings, listeners focus their…
Vatti, Marianna; Santurette, Sébastien; Pontoppidan, Niels henrik
Purpose: Frequency fluctuations in human voices can usually be described as coherent frequency modulation (FM). As listeners with hearing impairment (HI listeners) are typically less sensitive to FM than listeners with normal hearing (NH listeners), this study investigated whether hearing loss affects the perception of a sung vowel based on FM cues. Method: Vibrato maps were obtained in 14 NH and 12 HI listeners with different degrees of musical experience. The FM rate and FM excursion of a synthesized vowel, to which coherent FM was applied, were adjusted until a singing voice emerged. Results: In NH listeners, adding FM to the steady vowel components produced perception of a singing voice for FM rates between 4.1 and 7.5 Hz and FM excursions between 17 and 83 cents on average. In contrast, HI listeners showed substantially broader vibrato maps. Individual differences in map boundaries were…
Barrett, Jillian Gallant
The purpose of this study was to identify listeners' signal-to-background-ratio (SBR) preference levels for vocal music and to investigate whether or not SBR differences existed for different music genres. The "signal" was the singer's voice, and the "background" was the accompanying music. Three songs were each produced in two different genres (a total of 6 genres represented). Each song was performed by three male and three female singers. Analyses addressed influences of musical genre, singing style, and singer timbre on listeners' SBR choices. Fifty-three normal-hearing California State University, Northridge students ranging in age from 20 to 52 years participated as subjects. Subjects adjusted the overall music loudness to a comfortable listening level and manipulated a second gain control which affected only the singer's voice. Subjects listened to 72 stimuli and adjusted the singer's voice to the level they felt sounded appropriate in comparison to the background music. Singer and genre were the two primary contributors to significant differences in subjects' SBR preferences, although the results clearly indicate genre, style, and singer interact in different combinations under different conditions. SBR differences for each song, each singer, and each subject did not occur in a predictable manner, supporting the hypothesis that SBR preferences are neither fixed nor dependent merely upon music application or setting. Further investigations regarding the psychoacoustical bases responsible for differences in SBR preferences are warranted.
Barrett, Jillian G.
The primary purpose of speech is to convey a message. Many factors affect the listener's overall reception, several of which have little to do with the linguistic content itself, but rather with the delivery (e.g., prosody, intonation patterns, pragmatics, paralinguistic cues). Music, however, may convey a message either with or without linguistic content. In instances in which music has lyrics, one cannot assume verbal content will take precedence over sonic properties. Lyric emphasis over other aspects of music cannot be assumed. Singing introduces distortion of the vowel-consonant temporal ratio of speech, emphasizing vowels and de-emphasizing consonants. The phonemic production alterations of singing make it difficult for even those with normal hearing to understand the singer. This investigation was designed to identify singer-to-background-ratio (SBR) preferences for normal-hearing adult listeners (as opposed to SBR levels maximizing speech discrimination ability). Stimuli were derived from three different original songs, each produced in two different genres and sung by six different singers. Singer and genre were the two primary contributors to significant differences in SBR preferences, though results clearly indicate genre, style, and singer interact in different combinations for each song, each singer, and for each subject in an unpredictable manner.
Schwartz, Andrew H; Shinn-Cunningham, Barbara G
Many hearing aids introduce compressive gain to accommodate the reduced dynamic range that often accompanies hearing loss. However, natural sounds produce complicated temporal dynamics in hearing aid compression, as gain is driven by whichever source dominates at a given moment. Moreover, independent compression at the two ears can introduce fluctuations in interaural level differences (ILDs) important for spatial perception. While independent compression can interfere with spatial perception of sound, it does not always interfere with localization accuracy or speech identification. Here, normal-hearing listeners reported a target message played simultaneously with two spatially separated masker messages. We measured the amount of spatial separation required between the target and maskers for subjects to perform at threshold in this task. Fast, syllabic compression that was independent at the two ears increased the required spatial separation, but linking the compressors to provide identical gain to both ears (preserving ILDs) restored much of the deficit caused by fast, independent compression. Effects were less clear for slower compression. Percent-correct performance was lower with independent compression, but only for small spatial separations. These results may help explain differences in previous reports of the effect of compression on spatial perception of sound.
Gordon-Salant, Sandra; Cole, Stacey Samuels
This study aimed to determine if younger and older listeners with normal hearing who differ on working memory span perform differently on speech recognition tests in noise. Older adults typically exhibit poorer speech recognition scores in noise than younger adults, which is attributed primarily to poorer hearing sensitivity and more limited working memory capacity in older than younger adults. Previous studies typically tested older listeners with poorer hearing sensitivity and shorter working memory spans than younger listeners, making it difficult to discern the importance of working memory capacity on speech recognition. This investigation controlled for hearing sensitivity and compared speech recognition performance in noise by younger and older listeners who were subdivided into high and low working memory groups. Performance patterns were compared for different speech materials to assess whether or not the effect of working memory capacity varies with the demands of the specific speech test. The authors hypothesized that (1) normal-hearing listeners with low working memory span would exhibit poorer speech recognition performance in noise than those with high working memory span; (2) older listeners with normal hearing would show poorer speech recognition scores than younger listeners with normal hearing, when the two age groups were matched for working memory span; and (3) an interaction between age and working memory would be observed for speech materials that provide contextual cues. Twenty-eight older (61 to 75 years) and 25 younger (18 to 25 years) normal-hearing listeners were assigned to groups based on age and working memory status. Northwestern University Auditory Test No. 6 words and Institute of Electrical and Electronics Engineers sentences were presented in noise using an adaptive procedure to measure the signal-to-noise ratio corresponding to 50% correct performance. Cognitive ability was evaluated with two tests of working memory (Listening…
Jürgens, Tim; Ewert, Stephan D; Kollmeier, Birger; Brand, Thomas
Consonant recognition was assessed in normal-hearing (NH) and hearing-impaired (HI) listeners in quiet as a function of speech level using a nonsense logatome test. Average recognition scores were analyzed and compared to recognition scores of a speech recognition model. In contrast to commonly used spectral speech recognition models operating on long-term spectra, a "microscopic" model operating in the time domain was used. Variations of the model (accounting for hearing impairment) and different model parameters (reflecting cochlear compression) were tested. Using these model variations this study examined whether speech recognition performance in quiet is affected by changes in cochlear compression, namely, a linearization, which is often observed in HI listeners. Consonant recognition scores for HI listeners were poorer than for NH listeners. The model accurately predicted the speech reception thresholds of the NH and most HI listeners. A partial linearization of the cochlear compression in the auditory model, while keeping audibility constant, produced higher recognition scores and improved the prediction accuracy. However, including listener-specific information about the exact form of the cochlear compression did not improve the prediction further.
Buus, Søren; Florentine, Mary; Poulsen, Torben
To investigate how hearing loss affects the loudness of brief tones, loudness matches between 5- and 200-ms tones were obtained as a function of level. Loudness functions derived from these data indicated that the gain required to restore loudness usually is the same for short and long sounds....
Choi, Ji Eun; Won, Jong Ho; Kim, Cheol Hee; Cho, Yang-Sun; Hong, Sung Hwa; Moon, Il Joon
The objective of this study was to examine the relationship between spectrotemporal modulation (STM) sensitivity and the ability to perceive music. Ten normal-hearing (NH) listeners, ten hearing aid (HA) users with moderate hearing loss, and ten cochlear implant (CI) users participated in this study. Three different types of psychoacoustic tests, including spectral modulation detection (SMD), temporal modulation detection (TMD), and STM detection, were administered. Performance on these psychoacoustic tests was compared to music perception abilities. In addition, the psychoacoustic mechanisms involved in the improvement of music perception through HAs were evaluated. Music perception abilities in unaided and aided conditions were measured for HA users; the HA benefit for music perception was then correlated with aided psychoacoustic performance. The STM detection results showed that a combination of spectral and temporal modulation cues was more strongly correlated with music perception abilities than spectral or temporal modulation cues measured separately. No correlation was found between music perception performance and SMD threshold or TMD threshold in any group. Also, HA benefits for melody and timbre identification were significantly correlated with a combination of spectral and temporal envelope cues through the HA.
Yoshida, M; Sagara, T; Nagano, M; Korenaga, K; Makishima, K
The discrimination of monosyllabic words (67S word list) pronounced by a male and a female speaker was investigated in noise in 39 normal-hearing subjects. The subjects listened to the test words at a constant level of 62 dB together with white or weighted noise in four S/N conditions. By processing the data with a logit transformation, S/N-discrimination curves were estimated for each combination of speech material and noise. Regardless of the type of noise, the discrimination scores for the female voice started to decrease gradually at a S/N ratio of +10 dB and reached 10 to 20% at −10 dB. For the male voice in white noise, the discrimination curve was similar to those for the female voice. In contrast, the discrimination score for the male voice in weighted noise declined rapidly from a S/N ratio of +5 dB and fell below 10% at −5 dB. The discrimination curves appear to be shaped by the interrelations between the spectrum of the speech material and that of the noise.
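The logit-based curve estimation described above can be sketched as follows. This is a generic illustration of fitting a psychometric function by linear regression on logit-transformed scores, not the study's actual analysis code; the function names and sample values are assumptions.

```python
import math

def logit(p):
    """Log-odds of a proportion-correct score (requires 0 < p < 1)."""
    return math.log(p / (1.0 - p))

def fit_discrimination_curve(snrs, scores):
    """Least-squares line through logit-transformed scores versus S/N ratio.
    Returns (slope, intercept) of logit(p) = slope * snr + intercept."""
    n = len(snrs)
    mean_x = sum(snrs) / n
    mean_y = sum(logit(p) for p in scores) / n
    sxy = sum((x - mean_x) * (logit(p) - mean_y) for x, p in zip(snrs, scores))
    sxx = sum((x - mean_x) ** 2 for x in snrs)
    slope = sxy / sxx
    return slope, mean_y - slope * mean_x

def predicted_score(snr, slope, intercept):
    """Back-transform the fitted line to a proportion-correct score."""
    return 1.0 / (1.0 + math.exp(-(slope * snr + intercept)))
```

Because the fitted function is linear in logit units, the S/N ratio at 50% correct falls where the line crosses zero, i.e. at −intercept/slope.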
Jürgens, Tim; Brand, Thomas
This study compares the phoneme recognition performance in speech-shaped noise of a microscopic model for speech recognition with the performance of normal-hearing listeners. "Microscopic" is defined in terms of this model twofold. First, the speech recognition rate is predicted on a phoneme-by-phoneme basis. Second, microscopic modeling means that the signal waveforms to be recognized are processed by mimicking elementary parts of human auditory processing. The model is based on an approach by Holube and Kollmeier [J. Acoust. Soc. Am. 100, 1703-1716 (1996)] and consists of a psychoacoustically and physiologically motivated preprocessing and a simple dynamic-time-warp speech recognizer. The model is evaluated while presenting nonsense speech in a closed-set paradigm. Averaged phoneme recognition rates, specific phoneme recognition rates, and phoneme confusions are analyzed. The influence of different perceptual distance measures and of the model's a priori knowledge is investigated. The results show that human performance can be predicted by this model using an optimal detector, i.e., identical speech waveforms for both training of the recognizer and testing. The best model performance is yielded by distance measures which focus mainly on small perceptual distances and neglect outliers.
Gustafson, Samantha; McCreery, Ryan; Hoover, Brenda; Kopun, Judy G; Stelmachowicz, Pat
The goal of this study was to evaluate how digital noise reduction (DNR) impacts listening effort and judgment of sound clarity in children with normal hearing. It was hypothesized that when two DNR algorithms differing in signal-to-noise ratio (SNR) output are compared, the algorithm that provides the greatest improvement in overall output SNR will reduce listening effort and receive a better clarity rating from child listeners. A secondary goal was to evaluate the relation between the inversion method measurements and listening effort with DNR processing. Twenty-four children with normal hearing (ages 7 to 12 years) participated in a speech recognition task in which consonant-vowel-consonant nonwords were presented in broadband background noise. Test stimuli were recorded through two hearing aids with DNR off and DNR on at 0 dB and +5 dB input SNR. Stimuli were presented to listeners and verbal response time (VRT) and phoneme recognition scores were measured. The underlying assumption was that an increase in VRT reflects an increase in listening effort. Children rated the sound clarity for each condition. The two commercially available HAs were chosen based on: (1) an inversion technique, which was used to quantify the magnitude of change in SNR with the activation of DNR, and (2) a measure of magnitude-squared coherence, which was used to ensure that DNR in both devices preserved the spectrum. One device provided a greater improvement in overall output SNR than the other. Both DNR algorithms resulted in minimal spectral distortion as measured using coherence. For both devices, VRT decreased for the DNR-on condition, suggesting that listening effort decreased with DNR in both devices. Clarity ratings were also better in the DNR-on condition for both devices. The device showing the greatest improvement in output SNR with DNR engaged improved phoneme recognition scores. The magnitude of this improved phoneme recognition was not accurately predicted with
Fu, Qian-Jie; Chinchilla, Sherol; Galvin, John J
The present study investigated the relative importance of temporal and spectral cues in voice gender discrimination and vowel recognition by normal-hearing subjects listening to an acoustic simulation of cochlear implant speech processing and by cochlear implant users. In the simulation, the number of speech processing channels ranged from 4 to 32, thereby varying the spectral resolution; the cutoff frequencies of the channels' envelope filters ranged from 20 to 320 Hz, thereby manipulating the available temporal cues. For normal-hearing subjects, results showed that both voice gender discrimination and vowel recognition scores improved as the number of spectral channels was increased. When only 4 spectral channels were available, voice gender discrimination significantly improved as the envelope filter cutoff frequency was increased from 20 to 320 Hz. For all spectral conditions, increasing the amount of temporal information had no significant effect on vowel recognition. Both voice gender discrimination and vowel recognition scores were highly variable among implant users. The performance of cochlear implant listeners was similar to that of normal-hearing subjects listening to comparable speech processing (4-8 spectral channels). The results suggest that both spectral and temporal cues contribute to voice gender discrimination and that temporal cues are especially important for cochlear implant users to identify the voice gender when there is reduced spectral resolution.
This review provides clinicians with an overview of recent findings relevant to understanding why listeners with normal hearing thresholds (NHTs) sometimes suffer from communication difficulties in noisy settings. The results from neuroscience and psychoacoustics are reviewed. In noisy settings, listeners focus their attention by engaging cortical brain networks to suppress unimportant sounds; they then can analyze and understand an important sound, such as speech, amidst competing sounds. Differences in the efficacy of top-down control of attention can affect communication abilities. In addition, subclinical deficits in sensory fidelity can disrupt the ability to perceptually segregate sound sources, interfering with selective attention, even in listeners with NHTs. Studies of variability in control of attention and in sensory coding fidelity may help to isolate and identify some of the causes of communication disorders in individuals presenting at the clinic with "normal hearing." How well an individual with NHTs can understand speech amidst competing sounds depends not only on the sound being audible but also on the integrity of cortical control networks and the fidelity of the representation of suprathreshold sound. Understanding the root cause of difficulties experienced by listeners with NHTs ultimately can lead to new, targeted interventions that address specific deficits affecting communication in noise. http://cred.pubs.asha.org/article.aspx?articleid=2601617.
Füllgrabe, Christian; Rosen, Stuart
With the advent of cognitive hearing science, increased attention has been given to individual differences in cognitive functioning and their explanatory power in accounting for inter-listener variability in understanding speech in noise (SiN). The psychological construct that has received most interest is working memory (WM), representing the ability to simultaneously store and process information. Common lore and theoretical models assume that WM-based processes subtend speech processing in adverse perceptual conditions, such as those associated with hearing loss or background noise. Empirical evidence confirms the association between WM capacity (WMC) and SiN identification in older hearing-impaired listeners. To assess whether WMC also plays a role when listeners without hearing loss process speech in acoustically adverse conditions, we surveyed published and unpublished studies in which the Reading-Span test (a widely used measure of WMC) was administered in conjunction with a measure of SiN identification. The survey revealed little or no evidence for an association between WMC and SiN performance. We also analysed new data from 132 normal-hearing participants sampled from across the adult lifespan (18-91 years), for a relationship between Reading-Span scores and identification of matrix sentences in noise. Performance on both tasks declined with age, and correlated weakly even after controlling for the effects of age and audibility (r = 0.39, p ≤ 0.001, one-tailed). However, separate analyses for different age groups revealed that the correlation was only significant for middle-aged and older groups but not for the young (< 40 years) participants.
Ruggles, Dorea; Shinn-Cunningham, Barbara
Listeners can selectively attend to a desired target by directing attention to known target source features, such as location or pitch. Reverberation, however, reduces the reliability of the cues that allow a target source to be segregated and selected from a sound mixture. Given this, it is likely that reverberant energy interferes with selective auditory attention. Anecdotal reports suggest that the ability to focus spatial auditory attention degrades even with early aging, yet there is little evidence that middle-aged listeners have behavioral deficits on tasks requiring selective auditory attention. The current study was designed to look for individual differences in selective attention ability and to see if any such differences correlate with age. Normal-hearing adults, ranging in age from 18 to 55 years, were asked to report a stream of digits located directly ahead in a simulated rectangular room. Simultaneous, competing masker digit streams were simulated at locations 15° left and right of center. The level of reverberation was varied to alter task difficulty by interfering with localization cues (increasing localization blur). Overall, performance was best in the anechoic condition and worst in the high-reverberation condition. Listeners nearly always reported a digit from one of the three competing streams, showing that reverberation did not render the digits unintelligible. Importantly, inter-subject differences were extremely large. These differences, however, were not significantly correlated with age, memory span, or hearing status. These results show that listeners with audiometrically normal pure tone thresholds differ in their ability to selectively attend to a desired source, a task important in everyday communication. Further work is necessary to determine if these differences arise from differences in peripheral auditory function or in more central function.
Cho, Soojin; Yu, Jyaehyoung; Chun, Hyungi; Seo, Hyekyung; Han, Woojae
Deficits of the aging auditory system negatively affect older listeners in terms of speech communication, resulting in limitations to their social lives. To improve their perceptual skills, the goal of this study was to investigate the effects of time alteration, selective word stress, and varying sentence lengths on the speech perception of older listeners. Seventeen older people with normal hearing were tested for seven conditions of different time-altered sentences (i.e., ±60%, ±40%, ±20%, 0%), two conditions of selective word stress (i.e., no-stress and stress), and three different lengths of sentences (i.e., short, medium, and long) at the most comfortable level for individuals in quiet circumstances. As time compression increased, sentence perception scores decreased statistically. Compared to a natural (or no stress) condition, the selectively stressed words significantly improved the perceptual scores of these older listeners. Long sentences yielded the worst scores under all time-altered conditions. Interestingly, there was a noticeable positive effect for the selective word stress at the 20% time compression. This pattern of results suggests that a combination of time compression and selective word stress is more effective for understanding speech in older listeners than using the time-expanded condition only.
Cooper, William B.; Tobey, Emily; Loizou, Philipos C.
Objectives The purpose of this study was to explore the utility/possibility of using the Montreal Battery for Evaluation of Amusia (MBEA) test (Peretz, Champod, & Hyde, 2003) to assess the music perception abilities of cochlear implant (CI) users. Design The MBEA was used to measure six different aspects of music perception (Scale, Contour, Interval, Rhythm, Meter, and Melody Memory) by CI users and normal hearing (NH) listeners presented with stimuli processed via CI simulations. The spectral resolution (number of channels) was varied in the CI simulations to determine: (a) the number of channels (4, 6, 8, 12, 16) needed to achieve the highest levels of music perception and (b) the number of channels needed to produce levels of music perception performance comparable to that of CI users. Results CI users and NH listeners performed higher on temporal-based tests (Rhythm and Meter) than on pitch-based tests (Scale, Contour, and Interval) – a finding that is consistent with previous research studies. The CI users' scores on pitch-based tests were near chance. The CI users' (but not NH listeners') scores for the Memory test, a test that incorporates an integration of both temporal-based and pitch-based aspects of music, were significantly higher than the scores obtained for the pitch-based Scale test and significantly lower than the temporal-based Rhythm and Meter tests. The data from NH listeners indicated that 16 channels of stimulation did not provide the highest music perception scores and performance was as good as that obtained with 12 channels. This outcome is consistent with other studies showing that NH listeners listening to vocoded speech are not able to utilize effectively F0 cues present in the envelopes, even when the stimuli are processed with a large number (16) of channels. The CI user data appear to most closely match with the 4- and 6- channel NH listener conditions for the pitch-based tasks. Conclusions Consistent with previous studies, both CI
Hodgetts, William E; Rieger, Jana M; Szarko, Ryan A
The main objective of this study was to determine the influence of listening environment and earphone style on the preferred-listening levels (PLLs) measured in users' ear canals with a commercially available MP3 player. It was hypothesized that listeners would prefer higher levels with earbud headphones as opposed to over-the-ear headphones, and that the effects would depend on the environment in which the user was listening. A secondary objective was to use the measured PLLs to determine the permissible listening duration to reach 100% daily noise dose. There were two independent variables in this study. The first, headphone style, had three levels: earbud, over-the-ear, and over-the-ear with noise reduction (the same headphones with a noise reduction circuit). The second, environment, also had three levels: quiet, street noise, and multi-talker babble. The dependent variable was ear canal A-weighted sound pressure level. A 3 × 3 within-subjects repeated-measures ANOVA was used to analyze the data. Thirty-eight normal-hearing adults were recruited from the Faculty of Rehabilitation Medicine at the University of Alberta. Each subject listened to the same song and adjusted the level until it "sounded best" to them in each of the nine conditions. Significant main effects were found for both the headphone style and environment factors. On average, listeners had higher preferred listening levels with the earbud headphones than with the over-the-ear headphones. When the noise reduction circuit was used with the over-the-ear headphones, the average PLL was even lower. On average, listeners had higher PLLs in street noise than in multi-talker babble, and both of these were higher than the PLL for the quiet condition. The interaction between headphone style and environment was also significant. Details of individual contrasts are explored. Overall, PLLs were quite conservative, which would theoretically allow for extended permissible listening durations. Finally, we investigated
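The permissible-duration calculation implied by "100% daily noise dose" can be sketched with the widely used 3-dB exchange-rate rule (85 dBA permitted for 8 h, as in the NIOSH recommended exposure limit). This is an illustrative sketch; the study's exact damage-risk criterion may differ, and the function name is an assumption.

```python
def permissible_duration_hours(level_dba, criterion=85.0, exchange_rate=3.0):
    """Hours of exposure at level_dba that accumulate a 100% daily noise dose,
    assuming `criterion` dBA is permitted for 8 hours and each additional
    `exchange_rate` dB halves the permissible listening time."""
    return 8.0 / (2.0 ** ((level_dba - criterion) / exchange_rate))
```

Under these assumptions, a PLL of 85 dBA permits 8 h of listening, while 94 dBA permits only 1 h, which is why modest differences in measured PLL translate into large differences in permissible duration.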
Millman, Rebecca E; Mattys, Sven L
Background noise can interfere with our ability to understand speech. Working memory capacity (WMC) has been shown to contribute to the perception of speech in modulated noise maskers. WMC has been assessed with a variety of auditory and visual tests, often pertaining to different components of working memory. This study assessed the relationship between speech perception in modulated maskers and components of auditory verbal working memory (AVWM) over a range of signal-to-noise ratios. Speech perception in noise and AVWM were measured in 30 listeners (age range 31-67 years) with normal hearing. AVWM was estimated using forward digit recall, backward digit recall, and nonword repetition. After controlling for the effects of age and average pure-tone hearing threshold, speech perception in modulated maskers was related to individual differences in the phonological component of working memory (as assessed by nonword repetition) but only in the least favorable signal-to-noise ratio. The executive component of working memory (as assessed by backward digit) was not predictive of speech perception in any conditions. AVWM is predictive of the ability to benefit from temporal dips in modulated maskers: Listeners with greater phonological WMC are better able to correctly identify sentences in modulated noise backgrounds.
Hou, Limin; Xu, Li
Short-time processing was employed to manipulate the amplitude, bandwidth, and temporal fine structure (TFS) in sentences. Fifty-two native-English-speaking, normal-hearing listeners participated in four sentence-recognition experiments. Results showed that recovered envelope (E) played an important role in speech recognition when the bandwidth was > 1 equivalent rectangular bandwidth. Removing TFS drastically reduced sentence recognition. Preserving TFS greatly improved sentence recognition when amplitude information was available at a rate ≥ 10 Hz (i.e., time segment ≤ 100 ms). Therefore, the short-time TFS facilitates speech perception together with the recovered E and works with the coarse amplitude cues to provide useful information for speech recognition.
Lorenzi, C; Gatehouse, S; Lever, C
The present study assesses the ability of four listeners with high-frequency, bilateral symmetrical sensorineural hearing loss to localize and detect a broadband click train in the frontal-horizontal plane, in quiet and in the presence of a white noise. The speaker array and stimuli are identical to those described by Lorenzi et al. (in press). The results show that: (1) localization performance is only slightly poorer in hearing-impaired listeners than in normal-hearing listeners when noise is at 0° azimuth, (2) localization performance begins to decrease at higher signal-to-noise ratios for hearing-impaired listeners than for normal-hearing listeners when noise is at ±90° azimuth, and (3) the performance of hearing-impaired listeners is less consistent when noise is at ±90° azimuth than at 0° azimuth. The effects of a high-frequency hearing loss were also studied by measuring the ability of normal-hearing listeners to localize the low-pass filtered version of the clicks. The data reproduce the effects of noise on three out of the four hearing-impaired listeners when noise is at 0° azimuth. They reproduce the effects of noise on only two out of the four hearing-impaired listeners when noise is at ±90° azimuth. The additional effects of a low-frequency hearing loss were investigated by attenuating the low-pass filtered clicks and the noise by 20 dB. The results show that attenuation does not strongly affect localization accuracy for normal-hearing listeners. Measurements of the clicks' detectability indicate that the hearing-impaired listeners who show the poorest localization accuracy also show the poorest ability to detect the clicks. The inaudibility of high frequencies, "distortions," and reduced detectability of the signal are assumed to have caused the poorer-than-normal localization accuracy for hearing-impaired listeners.
Jepsen, Morten Løve; Dau, Torsten
… selectivity. Three groups of listeners were considered: (a) normal-hearing listeners; (b) listeners with a mild-to-moderate sensorineural hearing loss; and (c) listeners with a severe sensorineural hearing loss. A fixed set of model parameters was derived for each hearing-impaired listener. The simulations showed that, in most cases, the reduced or absent cochlear compression, associated with outer hair-cell loss, quantitatively accounts for broadened auditory filters, while a combination of reduced compression and reduced inner hair-cell function accounts for decreased sensitivity and slower recovery from …
McArdle, Rachel; Wilson, Richard H
To analyze the 50% correct recognition data from the Wilson et al (this issue) study, obtained from 24 listeners with normal hearing, and to examine whether acoustic, phonetic, or lexical variables can predict recognition performance for monosyllabic words presented in speech-spectrum noise. The specific variables were as follows: (a) acoustic variables (i.e., effective root-mean-square sound pressure level and duration), (b) phonetic variables (i.e., consonant features such as manner, place, and voicing for initial and final phonemes; vowel phonemes), and (c) lexical variables (i.e., word frequency, word familiarity, neighborhood density, and neighborhood frequency). This descriptive, correlational study examined the influence of acoustic, phonetic, and lexical variables on speech-recognition-in-noise performance. Regression analysis demonstrated that 45% of the variance in the 50% point was accounted for by acoustic and phonetic variables, whereas only 3% of the variance was accounted for by lexical variables. These findings suggest that monosyllabic word recognition in noise is more dependent on bottom-up than on top-down processing. The results suggest that when speech-in-noise testing is used in a pre- and post-hearing-aid-fitting format, the use of monosyllabic words may be sensitive to changes in audibility resulting from amplification.
Geravanchizadeh, Masoud; Fallah, Ali
A binaural and psychoacoustically motivated intelligibility model, based on a well-known monaural microscopic model, is proposed. This model simulates a phoneme recognition task in the presence of spatially distributed speech-shaped noise in anechoic scenarios. In the proposed model, binaural advantage effects are considered by generating a feature vector for a dynamic-time-warping speech recognizer. This vector consists of three subvectors incorporating two monaural subvectors to model better-ear hearing, and a binaural subvector to simulate the binaural unmasking effect. The binaural unit of the model is based on equalization-cancellation theory. This model operates blindly, which means separate recordings of speech and noise are not required for the predictions. Speech intelligibility tests were conducted with 12 normal-hearing listeners by collecting speech reception thresholds (SRTs) in the presence of single and multiple sources of speech-shaped noise. The comparison of the model predictions with the measured binaural SRTs, and with the predictions of a macroscopic binaural model called extended equalization-cancellation, shows that this approach predicts the intelligibility in anechoic scenarios with good precision. The square of the correlation coefficient (r²) and the mean-absolute error between the model predictions and the measurements are 0.98 and 0.62 dB, respectively.
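The evaluation metrics quoted at the end of the abstract (squared correlation and mean-absolute error between predicted and measured SRTs) can be computed as in the generic sketch below; this is not the authors' code, and the function names are assumptions.

```python
def r_squared(predicted, measured):
    """Square of the Pearson correlation between predictions and data."""
    n = len(predicted)
    mean_p = sum(predicted) / n
    mean_m = sum(measured) / n
    cov = sum((p - mean_p) * (m - mean_m) for p, m in zip(predicted, measured))
    var_p = sum((p - mean_p) ** 2 for p in predicted)
    var_m = sum((m - mean_m) ** 2 for m in measured)
    return cov * cov / (var_p * var_m)

def mean_absolute_error(predicted, measured):
    """Average absolute deviation between predicted and measured SRTs (dB)."""
    return sum(abs(p - m) for p, m in zip(predicted, measured)) / len(predicted)
```

Note that r² measures how well predictions track the pattern across conditions, while the mean-absolute error (in dB) captures the absolute offset; a model can score well on one and poorly on the other.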
Zaar, Johannes; Dau, Torsten
Responses obtained in consonant perception experiments typically show a large variability across stimuli of the same phonetic identity. The present study investigated the influence of different potential sources of this response variability. It was distinguished between source-induced variability, referring to perceptual differences caused by acoustical differences in the speech tokens and/or the masking noise tokens, and receiver-related variability, referring to perceptual differences caused by within- and across-listener uncertainty. Consonant-vowel combinations consisting of 15 consonants … The speech-induced variability across and within talkers and the across-listener variability were substantial and of similar magnitude. The noise-induced variability, obtained with time-shifted realizations of the same random process, was smaller but significantly larger than the amount … between responses.
Scheidiger, Christoph; Jørgensen, Søren; Dau, Torsten
… speech, e.g., phase jitter or spectral subtraction. Recent studies predict SI for normal-hearing (NH) listeners based on a signal-to-noise ratio measure in the envelope domain (SNRenv), in the framework of the speech-based envelope power spectrum model (sEPSM, [20, 21]). These models have shown good agreement with measured data under a broad range of conditions, including stationary and modulated interferers, reverberation, and spectral subtraction. Despite the advances in modeling intelligibility in NH listeners, a broadly applicable model that can predict SI in hearing-impaired (HI) listeners is not yet available. As a first step towards such a model, this study investigates to what extent effects of hearing impairment on SI can be modeled in the sEPSM framework. Preliminary results show that, by only modeling the loss of audibility, the model cannot account for the higher speech reception …
Brons, Inge; Dreschler, Wouter A.; Houben, Rolph
Hearing-aid noise reduction should reduce background noise, but not disturb the target speech. This objective is difficult because noise reduction suffers from a trade-off between the amount of noise removed and signal distortion. It is unknown if this important trade-off differs between
Mehraei, Golbarg; Paredes Gallardo, Andreu; Shinn-Cunningham, Barbara G.
… wave-V latency changes with increasing masker-to-probe intervals. In the same listeners, behavioral forward masking detection thresholds were measured. We hypothesized that 1) auditory nerve fiber deafferentation increases forward masking thresholds and increases wave-V latency, and 2) a preferential loss of low-spontaneous rate fibers results in a faster recovery of wave-V latency as the slow contribution of these fibers is reduced. Results showed that in young audiometrically normal listeners, a larger change in wave-V latency with increasing masker-to-probe interval was related to a greater effect of a preceding masker …
Rothpletz, Ann M.; Wightman, Frederic L.; Kistler, Doris J.
Purpose: This study assessed selective listening for speech in individuals with and without unilateral hearing loss (UHL) and the potential relationship between spatial release from informational masking and localization ability in listeners with UHL. Method: Twelve adults with UHL and 12 normal-hearing controls completed a series of monaural and…
Mehraei, Golbarg; Paredes Gallardo, Andreu; Shinn-Cunningham, Barbara G.
In rodent models, acoustic exposure too modest to elevate hearing thresholds can nonetheless cause auditory nerve fiber deafferentation, interfering with the coding of supra-threshold sound. Low-spontaneous rate nerve fibers, important for encoding acoustic information at supra-threshold levels and in noise, are more susceptible to degeneration than high-spontaneous rate fibers. The change in auditory brainstem response (ABR) wave-V latency with noise level has been shown to be associated with auditory nerve deafferentation. Here, we measured ABR in a forward masking paradigm and evaluated wave-V latency changes with increasing masker-to-probe intervals. In the same listeners, behavioral forward masking detection thresholds were measured. We hypothesized that 1) auditory nerve fiber deafferentation increases forward masking thresholds and increases wave-V latency and 2) a preferential loss of low...
Shinn-Cunningham, Barbara G; Best, Virginia
A common complaint among listeners with hearing loss (HL) is that they have difficulty communicating in common social settings. This article reviews how normal-hearing listeners cope in such settings, especially how they focus attention on a source of interest. Results of experiments with normal-hearing listeners suggest that the ability to selectively attend depends on the ability to analyze the acoustic scene and to form perceptual auditory objects properly. Unfortunately, sound features important for auditory object formation may not be robustly encoded in the auditory periphery of HL listeners. In turn, impaired auditory object formation may interfere with the ability to filter out competing sound sources. Peripheral degradations are also likely to reduce the salience of higher-order auditory cues such as location, pitch, and timbre, which enable normal-hearing listeners to select a desired sound source out of a sound mixture. Degraded peripheral processing is also likely to increase the time required to form auditory objects and focus selective attention so that listeners with HL lose the ability to switch attention rapidly (a skill that is particularly important when trying to participate in a lively conversation). Finally, peripheral deficits may interfere with strategies that normal-hearing listeners employ in complex acoustic settings, including the use of memory to fill in bits of the conversation that are missed. Thus, peripheral hearing deficits are likely to cause a number of interrelated problems that challenge the ability of HL listeners to communicate in social settings requiring selective attention.
Fuller, Christina Diechina
Cochlear implants (CIs) are auditory prostheses for severely deaf people who do not benefit from conventional hearing aids. Speech perception is reasonably good with CIs; the perception of other signals, such as music, is challenging. First, the perception of music and music-related perception in CI users
Hearing aid processing of loud speech and noise signals: Consequences for loudness perception and listening comfort. Sound processing in hearing aids is determined by the fitting rule, which describes how the hearing aid should amplify speech and sounds in the surroundings such that they become audible again for the hearing-impaired person. The general goal is to place all sounds within the hearing aid user's audible range, such that speech intelligibility and listening comfort become as good as possible. Amplification strategies in hearing aids are in many cases based on empirical... ...sounds, and it has been found that both normal-hearing and hearing-impaired listeners prefer loud sounds to be closer to the most comfortable loudness level than suggested by common non-linear fitting rules. During this project, two listening experiments were carried out. In the first experiment, hearing aid users...
Papsin, Blake C.; Paludetti, Gaetano; Gordon, Karen A.
Children using unilateral cochlear implants abnormally rely on tempo rather than mode cues to distinguish whether a musical piece is happy or sad. This led us to question how this judgment is affected by the type of experience in early auditory development. We hypothesized that judgments of the emotional content of music would vary by the type and duration of access to sound in early life due to deafness, altered perception of musical cues through new ways of using auditory prostheses bilaterally, and formal music training during childhood. Seventy-five participants completed the Montreal Emotion Identification Test. Thirty-three had normal hearing (aged 6.6 to 40.0 years) and 42 children had hearing loss and used bilateral auditory prostheses (31 bilaterally implanted and 11 unilaterally implanted with contralateral hearing aid use). Reaction time and accuracy were measured. Accurate judgment of emotion in music was achieved across ages and musical experience. Musical training accentuated the reliance on mode cues which developed with age in the normal hearing group. Degrading pitch cues through cochlear implant-mediated hearing induced greater reliance on tempo cues, but mode cues grew in salience when at least partial acoustic information was available through some residual hearing in the contralateral ear. Finally, when pitch cues were experimentally distorted to represent cochlear implant hearing, individuals with normal hearing (including those with musical training) switched to an abnormal dependence on tempo cues. The data indicate that, in a western culture, access to acoustic hearing in early life promotes a preference for mode rather than tempo cues which is enhanced by musical training. The challenge to these preferred strategies during cochlear implant hearing (simulated and real), regardless of musical training, suggests that access to pitch cues for children with hearing loss must be improved by preservation of residual hearing and improvements in
Oetting, Dirk; Hohmann, Volker; Appell, Jens-E; Kollmeier, Birger; Ewert, Stephan D
Sensorineural hearing loss typically results in a steepened loudness function and a reduced dynamic range from elevated thresholds to uncomfortably loud levels for narrowband and broadband signals. Restoring narrowband loudness perception for hearing-impaired (HI) listeners can lead to overly loud perception of broadband signals and it is unclear how binaural presentation affects loudness perception in this case. Here, loudness perception quantified by categorical loudness scaling for nine normal-hearing (NH) and ten HI listeners was compared for signals with different bandwidth and different spectral shape in monaural and in binaural conditions. For the HI listeners, frequency- and level-dependent amplification was used to match the narrowband monaural loudness functions of the NH listeners. The average loudness functions for NH and HI listeners showed good agreement for monaural broadband signals. However, HI listeners showed substantially greater loudness for binaural broadband signals than NH listeners: on average a 14.1 dB lower level was required to reach "very loud" (range 30.8 to -3.7 dB). Overall, with narrowband loudness compensation, a given binaural loudness for broadband signals above "medium loud" was reached at systematically lower levels for HI than for NH listeners. Such increased binaural loudness summation was not found for loudness categories below "medium loud" or for narrowband signals. Large individual variations in the increased loudness summation were observed and could not be explained by the audiogram or the narrowband loudness functions.
Ohlenforst, Barbara; Zekveld, Adriana A; Jansma, Elise P; Wang, Yang; Naylor, Graham; Lorens, Artur; Lunner, Thomas; Kramer, Sophia E
...Grading of Recommendations Assessment, Development, and Evaluation (GRADE) Working Group guidelines. We tested the statistical evidence across studies with nonparametric tests. The testing revealed only one consistent effect across studies, namely that listening effort was higher for hearing-impaired listeners compared with normal-hearing listeners (Q1), as measured by electroencephalographic measures. For all other studies, the evidence across studies failed to reveal consistent effects on listening effort. In summary, we could only identify scientific evidence from physiological measurement methods suggesting that hearing impairment increases listening effort during speech perception (Q1). There was no scientific finding across studies indicating that hearing aid amplification decreases listening effort (Q2). In general, there were large differences in the study population, the control groups and conditions, and the outcome measures applied between the studies included in this review. The results of this review indicate that published listening effort studies lack consistency, lack standardization across studies, and have insufficient statistical power. The findings underline the need for a common conceptual framework for listening effort to address the current shortcomings.
Dau, Torsten; Santurette, Sébastien; Strelcyk, Olaf
When two white noises differing only in phase in a particular frequency range are presented simultaneously each to one of our ears, a pitch sensation may be perceived inside the head. This phenomenon, called 'binaural pitch' or 'dichotic pitch', can be produced by frequency-dependent interaural phase-difference patterns. The evaluation of these interaural phase differences depends on the functionality of the binaural auditory system and the spectro-temporal information at its input. A melody recognition task was performed in the present study using pure-tone stimuli and six different types of noises that can generate a binaural pitch sensation. Normal-hearing listeners and hearing-impaired listeners with different kinds of hearing impairment participated in the experiment...
Bianchi, Federica; Santurette, Sébastien; Fereczkowski, Michal
Recent physiological studies in animals showed that noise-induced sensorineural hearing loss (SNHL) increased the amplitude of envelope coding in single auditory-nerve fibers. The present study investigated whether SNHL in human listeners was associated with enhanced temporal envelope coding, whether this enhancement affected pitch discrimination performance, and whether loss of compression following SNHL was a potential factor in envelope coding enhancement. Envelope processing was assessed in normal-hearing (NH) and hearing-impaired (HI) listeners in a behavioral amplitude... ...resolvability. For the unresolved conditions, all five HI listeners performed as well as or better than NH listeners with matching musical experience. Two HI listeners showed lower amplitude-modulation detection thresholds than NH listeners for low modulation rates, and one of these listeners also showed a loss...
Fereczkowski, Michal; Jepsen, Morten Løve; Dau, Torsten
It has been suggested that the level corresponding to the knee-point of the basilar membrane (BM) input/output (I/O) function can be used to estimate the amount of inner- and outer hair-cell loss (IHL, OHL) in listeners with a moderate cochlear hearing impairment (Plack et al., 2004). According to Jepsen and Dau (2011), IHL + OHL = HLT [dB], where HLT stands for total hearing loss. Hence, having estimates of the total hearing loss and OHC loss, one can estimate the IHL. In the present study, results from forward masking experiments based on temporal masking curves (TMC; Nelson et al., 2001) ... estimates of the knee-point level. Further, it is explored whether it is possible to estimate the compression ratio using only on-frequency TMCs. 10 normal-hearing and 10 hearing-impaired listeners (with mild-to-moderate sensorineural hearing loss) were tested at 1, 2 and 4 kHz. The results showed...
Lew, Joyce; Purcell, Alison A; Doble, Maree; Lim, Lynne H
Early use of hearing devices and family participation in auditory-verbal therapy has been associated with age-appropriate verbal communication outcomes for children with hearing loss. However, there continues to be great variability in outcomes across different oral intervention programmes and little consensus on how therapists should prioritise goals at each therapy session for positive clinical outcomes. This pilot intervention study aimed to determine whether therapy goals that concentrate on teaching preschool children with hearing loss how to distinguish between words in a structured listening programme are effective, and whether gains in speech perception skills impact on vocabulary and speech development without them having to be worked on directly in therapy. A multiple-baseline-across-subjects design was used in this within-subject controlled study. Three children aged between 2;6 and 3;1 (years;months) with moderate-severe to severe-profound hearing loss were recruited for a 6-week intervention programme. Each participant commenced at a different stage of the 10-stage listening programme depending on their individual listening skills at recruitment. Speech development and vocabulary assessments were conducted before and after the training programme, in addition to speech perception assessments and probes conducted throughout the intervention programme. All participants made gains in speech perception skills as well as vocabulary and speech development. Speech perception skills acquired were noted to be maintained a week after intervention. In addition, all participants were able to generalise speech perception skills learnt to words that had not been used in the intervention programme. This pilot study found that therapy directed at listening alone is promising and that it may have a positive impact on speech and vocabulary development without these goals having to be incorporated into a therapy programme. Although a larger study is necessary for more conclusive findings, the
Desjardins, Jamie L
Older listeners with hearing loss may exert more cognitive resources to maintain a level of listening performance similar to that of younger listeners with normal hearing. Unfortunately, this increase in cognitive load, which is often conceptualized as increased listening effort, may come at the cost of cognitive processing resources that might otherwise be available for other tasks. The purpose of this study was to evaluate the independent and combined effects of a hearing aid directional microphone and a noise reduction (NR) algorithm on reducing the listening effort older listeners with hearing loss expend on a speech-in-noise task. Participants were fitted with study-worn, commercially available behind-the-ear hearing aids. Listening effort on a sentence recognition in noise task was measured using an objective auditory-visual dual-task paradigm. The primary task required participants to repeat sentences presented in quiet and in a four-talker babble. The secondary task was a digital visual pursuit rotor-tracking test, for which participants were instructed to use a computer mouse to track a moving target around an ellipse that was displayed on a computer screen. Each of the two tasks was presented separately and concurrently at a fixed overall speech recognition performance level of 50% correct with and without the directional microphone and/or the NR algorithm activated in the hearing aids. In addition, participants reported how effortful it was to listen to the sentences in quiet and in background noise in the different hearing aid listening conditions. Fifteen older listeners with mild sloping to severe sensorineural hearing loss participated in this study. Listening effort in background noise was significantly reduced with the directional microphones activated in the hearing aids. However, there was no significant change in listening effort with the hearing aid NR algorithm compared to no noise processing. Correlation analysis between objective and self...
Dr. Ali Asghar Kakojoibari
Background and Aim: Listening, speaking, reading and writing are considered the lingual skills. These skills are in direct relation with each other. Listening is the first skill learnt by the individual through development. If damaged by hearing impairment, listening problems can cause serious defects in lingual skills. The goal of our research was to study the effect of hearing loss on reading literacy in hearing-impaired students in comparison with normal-hearing students. Methods: The study was performed using the examination booklets of the Progress in International Reading Literacy Study (PIRLS 2001). 119 hearing-impaired students at the 4th grade primary school, last year guidance school, and last year high school levels in schools providing exceptional student education were included. These individuals were compared to 46 normal-hearing students of 4th grade primary school in ordinary schools. Comparative statistical analysis was performed using the t-test. Results: Reading literacy and understanding of literal contents showed a significant difference between normal-hearing and hearing-impaired students (p<0.05), except for those at the high school level with moderate hearing loss. There was also a significant difference between normal-hearing and hearing-impaired students in understanding of information contents (p=0.03). Conclusion: Hearing loss has a negative effect on reading literacy. Consequently, curriculum change and evolution of educational programs in exceptional centers is needed, in order to promote reading literacy and to enhance residual hearing
Stephen E Widen
Introduction: The aim of this study was to investigate self-reported hearing and portable music listening habits, measured hearing function, and music exposure levels in Swedish adolescents. The study was divided into two parts. Materials and Methods: The first part included 280 adolescents aged 17 years and focused on self-reported data on subjective hearing problems and listening habits regarding portable music players. From this group, 50 adolescents volunteered to participate in Part II of the study, which focused on audiological measurements and measured listening volume. Results: The results indicated that longer lifetime exposure in years and increased listening frequency were associated with poorer hearing thresholds and more self-reported hearing problems. A tendency was found for an association between listening at louder volumes and poorer hearing thresholds. Women reported more subjective hearing problems compared with men but exhibited better hearing thresholds. In contrast, men reported more use of personal music devices, and they listened at higher volumes. Discussion: Additionally, the study shows that adolescents listening for ≥3 h at every occasion were more likely to have tinnitus. Those listening at ≥85 dB LAeq, FF and listening every day exhibited poorer mean hearing thresholds, reported more subjective hearing problems, and listened more frequently in school and while sleeping. Conclusion: Although the vast majority listened at moderate sound levels and for shorter periods of time, the study also indicates that there is a subgroup (10%) that listens between 90 and 100 dB for longer periods of time, even during sleep. This group might be at risk for developing future noise-induced hearing impairments.
Auditory Discrimination of Lexical Stress Patterns in Hearing-Impaired Infants with Cochlear Implants Compared with Normal Hearing: Influence of Acoustic Cues and Listening Experience to the Ambient Language.
Segal, Osnat; Houston, Derek; Kishon-Rabin, Liat
To assess discrimination of lexical stress patterns in infants with cochlear implants (CI) compared with infants with normal hearing (NH). While criteria for cochlear implantation have expanded to infants as young as 6 months, little is known regarding infants' processing of suprasegmental-prosodic cues, which are known to be important for the first stages of language acquisition. Lexical stress is an example of such a cue, which, in hearing infants, has been shown to assist in segmenting words from fluent speech and in distinguishing between words that differ only in the stress pattern. To date, however, there are no data on the ability of infants with CIs to perceive lexical stress. Such information will provide insight into the speech characteristics that are available to these infants in their first steps of language acquisition. This is of particular interest given the known limitations that the CI device has in transmitting speech information that is mediated by changes in fundamental frequency. Two groups of infants participated in this study. The first group included 20 profoundly hearing-impaired infants with CI, 12 to 33 months old, implanted under the age of 2.5 years (median age of implantation = 14.5 months), with 1 to 6 months of CI use (mean = 2.7 months) and no known additional problems. The second group of infants included 48 NH infants, 11 to 14 months old, with normal development and no known risk factors for developmental delays. Infants were tested on their ability to discriminate between nonsense words that differed in their stress pattern only (/dóti/ versus /dotí/ and /dotí/ versus /dóti/) using the visual habituation procedure. The measure for discrimination was the change in looking time between the last habituation trial (e.g., /dóti/) and the novel trial (e.g., /dotí/). (1) Infants with CI showed discrimination between lexical stress patterns with only limited auditory experience with their implant device, (2) discrimination of stress
Bianchi, Federica; Fereczkowski, Michal; Zaar, Johannes
...pitch-discrimination performance in listeners with SNHL. Pitch-discrimination thresholds were obtained for 14 normal-hearing (NH) and 10 hearing-impaired (HI) listeners for sine-phase (SP) and random-phase (RP) complex tones. When all harmonics were unresolved, the HI listeners performed, on average, worse than NH listeners in the RP condition but similarly to NH listeners in the SP condition. The increase in pitch-discrimination performance for the SP relative to the RP condition (F0DL ratio) was significantly larger in the HI as compared with the NH listeners. Cochlear compression and auditory-filter bandwidths were estimated in the same listeners. The estimated reduction of cochlear compression was significantly correlated with the increase in the F0DL ratio, while no correlation was found with filter bandwidth. The effects of degraded frequency selectivity and loss of compression were considered in a simplified...
Eline Borch Petersen
Degradations in external, acoustic stimulation have long been suspected to increase the load on working memory. One neural signature of working memory load is enhanced power of alpha oscillations (6–12 Hz). However, it is unknown to what extent a common internal, auditory degradation, that is, hearing impairment, affects the neural mechanisms of working memory when audibility has been ensured via amplification. Using an adapted auditory Sternberg paradigm, we varied the orthogonal factors memory load and background noise level, while the electroencephalogram (EEG) was recorded. In each trial, participants were presented with 2, 4, or 6 spoken digits embedded in one of three different levels of background noise. After a stimulus-free delay interval, participants indicated whether a probe digit had appeared in the sequence of digits. Participants were healthy older adults (62–86 years) with normal to moderately impaired hearing. Importantly, the background noise levels were individually adjusted and participants were wearing hearing aids to equalize audibility across participants. Irrespective of hearing loss, behavioral performance improved with lower memory load and also with lower levels of background noise. Interestingly, alpha power in the stimulus-free delay interval was dependent on the interplay between task demands (memory load and noise level) and hearing loss; while alpha power increased with hearing loss during low and intermediate levels of memory load and background noise, it dropped for participants with the relatively most severe hearing loss under the highest memory load and background noise level. These findings suggest that adaptive neural mechanisms for coping with adverse listening conditions break down for higher degrees of hearing loss, even when adequate hearing aid amplification is in place.
Different listening training methods exist, which are based on the assumption that people can be trained to process incoming sound more effectively. A distinction is often made between the terms hearing (= passive reception of sound) and listening (= active process of tuning in to those sounds we wish to receive). Listening training methods claim to benefit a wide variety of people, e.g. people having learning disabilities, developmental delay, or concentration problems. Sound therapists report improved hearing/listening curves following listening training programs. No independent research study has confirmed these results using standardized hearing test measures. Dr. Alfred Tomatis, a French ear, nose and throat doctor, developed the Tomatis listening training in the 1950s. The principles of the Tomatis method are described. A literature review has been conducted to investigate whether the Tomatis method...
Desjardins, Jamie L; Doherty, Karen A
The purpose of the present study was to evaluate the effect of a noise-reduction (NR) algorithm on the listening effort hearing-impaired participants expend on a speech in noise task. Twelve hearing-impaired listeners fitted with behind-the-ear hearing aids with a fast-acting modulation-based NR algorithm participated in this study. A dual-task paradigm was used to measure listening effort with and without the NR enabled in the hearing aid. The primary task was a sentence-in-noise task presented at fixed overall speech performance levels of 76% (moderate listening condition) and 50% (difficult listening condition) correct performance, and the secondary task was a visual-tracking test. Participants also completed measures of working memory (Reading Span test), and processing speed (Digit Symbol Substitution Test) ability. Participants' speech recognition in noise scores did not significantly change with the NR algorithm activated in the hearing aid in either listening condition. The NR algorithm significantly decreased listening effort, but only in the more difficult listening condition. Last, there was a tendency for participants with faster processing speeds to expend less listening effort with the NR algorithm when listening to speech in background noise in the difficult listening condition. The NR algorithm reduced the listening effort adults with hearing loss must expend to understand speech in noise.
Santurette, Sébastien; Dau, Torsten
Binaural pitch is a tonal sensation produced by introducing a frequency-dependent interaural phase shift in binaurally presented white noise. As no spectral cues are present in the physical stimulus, binaural pitch perception is assumed to rely on accurate temporal fine structure coding and intact binaural integration mechanisms. This study investigated to what extent basic auditory measures of binaural processing as well as cognitive abilities are correlated with the ability of hearing-impaired listeners to perceive binaural pitch. Subjects from three groups (1: normal-hearing; 2: cochlear hearing loss; 3: retro-cochlear impairment) were asked to identify the pitch contour of series of five notes of equal duration, ranging from 523 to 784 Hz, played either with Huggins' binaural pitch stimuli (BP) or perceptually similar, but monaurally detectable, pitches (MP). All subjects from groups 1 and 2...
Kathleen Hutchinson Marron
This study examined the relationship between hearing levels, otoacoustic emission levels, and listening habits related to the use of personal listening devices (PLDs) in adults with varying health-related fitness. Duration of PLD use was estimated and volume level was directly measured. Biomarkers of health-related fitness were co-factored into the analyses. 115 subjects, ages 18–84, participated in this study. Subjects were divided into two sub-groups: PLD users and non-PLD users. Both groups completed audiological and health-related fitness tests. Due to the mismatch in the mean age of the PLD user versus the non-PLD user groups, age-adjusted statistics were performed to determine factors that contributed to hearing levels. Age was the most significant predictor of hearing levels across listening and health-related fitness variables. PLD user status did not impact hearing measures, yet PLD users who listened less than 8 hours per week at intensities of less than 80 dBA were found to have better hearing. Other variables found to be associated with hearing levels included: years listening to PLDs, number of noise environments, and use of ear protection. Finally, a healthy waist-to-hip ratio was a significant predictor of better hearing, while body mass index approached, but did not reach, statistical significance.
Verschuure, J.; Dreschler, W. A.; de Haan, E. H.; van Cappellen, M.; Hammerschlag, R.; Maré, M. J.; Maas, A. J.; Hijmans, A. C.
Syllabic compression has not been shown unequivocally to improve speech intelligibility in hearing-impaired listeners. This paper attempts to explain the poor results by introducing the concept of minimum overshoots. The concept was tested with a digital signal processor on hearing-impaired
Smits, J.T.S.; Duifhuis, H.
3 listeners with sensorineural hearing loss ranging from moderate to moderate-severe starting at frequencies higher than 1 kHz participated in two masking experiments and a partial masking experiment. In the first masking experiment, fM = 1 kHz and LM = 50 dB SPL, higher than normal masked
Jepsen, Morten Løve
A better understanding of how the human auditory system represents and analyzes sounds, and how hearing impairment affects such processing, is of great interest for researchers in the fields of auditory neuroscience, audiology, and speech communication, as well as for applications in hearing-instrument and speech technology. In this thesis, the primary focus was on the development and evaluation of a computational model of human auditory signal processing and perception. The model was initially designed to simulate the normal-hearing auditory system with particular focus on the nonlinear processing... ...in a diagnostic rhyme test. The framework was constructed such that discrimination errors originating from the front-end and the back-end were separated. The front-end was fitted to individual listeners with cochlear hearing loss according to non-speech data, and speech data were obtained in the same listeners...
Kirchberger, Martin; Russo, Frank A
Dynamic range compression serves different purposes in the music and hearing-aid industries. In the music industry, it is used to make music louder and more attractive to normal-hearing listeners. In the hearing-aid industry, it is used to map the variable dynamic range of acoustic signals to the reduced dynamic range of hearing-impaired listeners. Hence, hearing-aided listeners will typically receive a dual dose of compression when listening to recorded music. The present study involved an acoustic analysis of dynamic range across a cross section of recorded music as well as a perceptual study comparing the efficacy of different compression schemes. The acoustic analysis revealed that the dynamic range of samples from popular genres, such as rock or rap, was generally smaller than the dynamic range of samples from classical genres, such as opera and orchestra. By comparison, the dynamic range of speech, based on recordings of monologues in quiet, was larger than the dynamic range of all music genres tested. The perceptual study compared the effect of the prescription rule NAL-NL2 with a semicompressive and a linear scheme. Music subjected to linear processing had the highest ratings for dynamics and quality, followed by the semicompressive and the NAL-NL2 setting. These findings advise against NAL-NL2 as a prescription rule for recorded music and recommend linear settings.
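As a rough illustration of the acoustic analysis above: dynamic range can be summarized as the spread of short-term levels, and hearing-aid-style compression reduces that spread. The sketch below uses an illustrative static compression curve; the threshold and ratio are assumptions for demonstration, not the NAL-NL2 prescription.

```python
def compress_db(level_db, threshold_db=-20.0, ratio=3.0):
    """Static compression curve: input levels above the threshold are
    scaled down by the compression ratio (illustrative values)."""
    if level_db <= threshold_db:
        return level_db  # linear below the compression threshold
    return threshold_db + (level_db - threshold_db) / ratio

def dynamic_range(levels_db):
    """Dynamic range summarized as the max-min spread of short-term levels (dB)."""
    return max(levels_db) - min(levels_db)

# Toy sequence of short-term levels (dB) before and after compression
levels = [-40.0, -25.0, -10.0, -5.0]
compressed = [compress_db(L) for L in levels]
print(dynamic_range(levels))                # 35.0
print(round(dynamic_range(compressed), 1))  # 25.0
```

The compressed sequence has a visibly smaller spread, which is the "dual dose" effect the study describes when already-compressed recordings pass through a hearing aid.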
Rogers, Deanna S.; Lentz, Jennifer J.
The ability to segregate sounds into different streams was investigated in normal-hearing and hearing-impaired listeners. Fusion and fission boundaries were measured using 6-tone complexes with tones equally spaced in log frequency. An ABA-ABA- sequence was used, in which A represents a multitone complex ranging from either 250-1000 Hz (low-frequency region) or 1000-4000 Hz (high-frequency region). B also represents a multitone complex with the same log spacing as A. Multitone complexes were 100 ms in duration with 20-ms ramps, and - represents a silent interval of 100 ms. To measure the fusion boundary, the first tone of the B stimulus was either 375 Hz (low) or 1500 Hz (high) and shifted downward in frequency with each successive ABA triplet until the listener pressed a button indicating that a "galloping" rhythm was heard. To measure the fission boundary, the first tone of the B stimulus was 252 or 1030 Hz and shifted upward with each triplet; listeners pressed a button when the "galloping" rhythm ended. Data suggest that hearing-impaired subjects have different fission and fusion boundaries than normal-hearing listeners. These data will be discussed in terms of both peripheral and central factors.
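The stimulus construction described above, tone complexes with components equally spaced in log frequency, can be sketched as follows; the function name and the rounding are illustrative assumptions:

```python
import math

def log_spaced_complex(f_low, f_high, n_tones=6):
    """Return n_tones component frequencies equally spaced in log
    frequency between f_low and f_high (endpoints included)."""
    step = (math.log2(f_high) - math.log2(f_low)) / (n_tones - 1)
    return [f_low * 2 ** (i * step) for i in range(n_tones)]

# Low-frequency A complex spanning 250-1000 Hz, as in the study
a_complex = [round(f, 1) for f in log_spaced_complex(250, 1000)]
print(a_complex)  # [250.0, 329.9, 435.3, 574.3, 757.9, 1000.0]
```

Equal log spacing means each adjacent pair of components has the same frequency ratio, which is what "same log spacing as A" implies for the B complex.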
Melo, Renato de Souza; Lemos, Andrea; Macky, Carla Fabiana da Silva Toscano; Raposo, Maria Cristina Falcão; Ferraz, Karla Mônica
Children with sensorineural hearing loss can present with instabilities in postural control, possibly as a consequence of hypoactivity of the vestibular system due to inner ear injury. To assess postural control stability in students with normal hearing (i.e., listeners) and with sensorineural hearing loss, and to compare data between groups, considering gender and age. This cross-sectional study evaluated the postural control of 96 students, 48 listeners and 48 with sensorineural hearing loss, aged between 7 and 18 years, of both genders, using the Balance Error Scoring Systems scale. This tool assesses postural control in two sensory conditions: stable surface and unstable surface. For statistical analysis of the data between groups, the Wilcoxon test for paired samples was used. Students with hearing loss showed more instability in postural control than those with normal hearing, with significant differences between groups in both conditions (stable surface, unstable surface). Students with sensorineural hearing loss demonstrated greater instability in postural control compared to normal-hearing students of the same gender and age.
Reuter Andersen, Karen
confirmed these results using standardized hearing test measures. Dr. Alfred Tomatis, a French ear, nose, and throat doctor, developed the Tomatis listening training in the 1950s. The principles of the Tomatis method are described. A literature review was conducted to investigate whether the Tomatis method...
Ferguson, Sarah Hargus; Morgan, Shae D.
Purpose: The purpose of this study is to examine talker differences for subjectively rated speech clarity in clear versus conversational speech, to determine whether ratings differ for young adults with normal hearing (YNH listeners) and older adults with hearing impairment (OHI listeners), and to explore effects of certain talker characteristics…
Gregan, Melanie J.; Nelson, Peggy B.; Oxenham, Andrew J.
Hearing-impaired (HI) listeners often show less masking release (MR) than normal-hearing listeners when temporal fluctuations are imposed on a steady-state masker, even when accounting for overall audibility differences. This difference may be related to a loss of cochlear compression in HI listeners. Behavioral estimates of compression, using temporal masking curves (TMCs), were compared with MR for band-limited (500–4000 Hz) speech and pure tones in HI listeners and age-matched, noise-masked normal-hearing (NMNH) listeners. Compression and pure-tone MR estimates were made at 500, 1500, and 4000 Hz. The amount of MR was defined as the difference in performance between steady-state and 10-Hz square-wave-gated speech-shaped noise. In addition, temporal resolution was estimated from the slope of the off-frequency TMC. No significant relationship was found between estimated cochlear compression and MR for either speech or pure tones. NMNH listeners had significantly steeper off-frequency temporal masking recovery slopes than did HI listeners, and a small but significant correlation was observed between poorer temporal resolution and reduced MR for speech. The results suggest either that the effects of hearing impairment on MR are not determined primarily by changes in peripheral compression, or that the TMC does not provide a sufficiently reliable measure of cochlear compression.
Holmes, Emma; Kitterick, Padraig T; Summerfield, A Quentin
Restoring normal hearing requires knowledge of how peripheral and central auditory processes are affected by hearing loss. Previous research has focussed primarily on peripheral changes following sensorineural hearing loss, whereas consequences for central auditory processing have received less attention. We examined the ability of hearing-impaired children to direct auditory attention to a voice of interest (based on the talker's spatial location or gender) in the presence of a common form of background noise: the voices of competing talkers (i.e. during multi-talker, or "Cocktail Party" listening). We measured brain activity using electro-encephalography (EEG) when children prepared to direct attention to the spatial location or gender of an upcoming target talker who spoke in a mixture of three talkers. Compared to normally-hearing children, hearing-impaired children showed significantly less evidence of preparatory brain activity when required to direct spatial attention. This finding is consistent with the idea that hearing-impaired children have a reduced ability to prepare spatial attention for an upcoming talker. Moreover, preparatory brain activity was not restored when hearing-impaired children listened with their acoustic hearing aids. An implication of these findings is that steps to improve auditory attention alongside acoustic hearing aids may be required to improve the ability of hearing-impaired children to understand speech in the presence of competing talkers.
Ruggles, Dorea; Bharadwaj, Hari; Shinn-Cunningham, Barbara G
Anecdotally, middle-aged listeners report difficulty conversing in social settings, even when they have normal audiometric thresholds [1-3]. Moreover, young adult listeners with "normal" hearing vary in their ability to selectively attend to speech amid similar streams of speech. Ignoring age, these individual differences correlate with physiological differences in temporal coding precision present in the auditory brainstem, suggesting that the fidelity of encoding of suprathreshold sound helps explain individual differences. Here, we revisit the conundrum of whether early aging influences an individual's ability to communicate in everyday settings. Although absolute selective attention ability is not predicted by age, reverberant energy interferes more with selective attention as age increases. Breaking the brainstem response down into components corresponding to coding of stimulus fine structure and envelope, we find that age alters which brainstem component predicts performance. Specifically, middle-aged listeners appear to rely heavily on temporal fine structure, which is more disrupted by reverberant energy than temporal envelope structure is. In contrast, the fidelity of envelope cues predicts performance in younger adults. These results hint that temporal envelope cues influence spatial hearing in reverberant settings more than is commonly appreciated and help explain why middle-aged listeners have particular difficulty communicating in daily life.
Brendel, Martina; Frohne-Buechner, Carolin; Lesinski-Schiedat, Anke; Lenarz, Thomas; Buechner, Andreas
Clinical experience has demonstrated that speech understanding by cochlear implant (CI) recipients has improved over recent years with the development of new technology. The Everyday Listening Questionnaire 2 (ELQ 2) was designed to collect information regarding the challenges faced by CI recipients in everyday listening. The aim of this study was to compare self-assessment of CI users using ELQ 2 with objective speech recognition measures and to compare results between users of older and newer coding strategies. During their regular clinical review appointments a group of representative adult CI recipients implanted with the Advanced Bionics implant system were asked to complete the questionnaire. The first 100 patients who agreed to participate in this survey were recruited independent of processor generation and speech coding strategy. Correlations between subjectively scored hearing performance in everyday listening situations and objectively measured speech perception abilities were examined relative to the speech coding strategies used. When subjects were grouped by strategy there were significant differences between users of older 'standard' strategies and users of the newer, currently available strategies (HiRes and HiRes 120), especially in the categories of telephone use and music perception. Significant correlations were found between certain subjective ratings and the objective speech perception data in noise. There is a good correlation between subjective and objective data. Users of more recent speech coding strategies tend to have fewer problems in difficult hearing situations.
Picou, Erin M; Ricketts, Todd A; Hornsby, Benjamin W Y
The purpose of this article was to evaluate factors that influence the listening effort experienced when processing speech for people with hearing loss. Specifically, the change in listening effort resulting from introducing hearing aids, visual cues, and background noise was evaluated. An additional exploratory aim was to investigate possible relationships between the magnitude of listening effort change and individual listeners' working memory capacity, verbal processing speed, or lipreading skill. Twenty-seven participants with bilateral sensorineural hearing loss were fitted with linear behind-the-ear hearing aids and tested using a dual-task paradigm designed to evaluate listening effort. The primary task was monosyllabic word recognition and the secondary task was a visual reaction-time task. The test conditions varied by hearing aids (unaided, aided), visual cues (auditory-only, auditory-visual), and background noise (present, absent). For all participants, the signal-to-noise ratio was set individually so that speech recognition performance in noise was approximately 60% in both the auditory-only and auditory-visual conditions. In addition to measures of listening effort, working memory capacity, verbal processing speed, and lipreading ability were measured using the Automated Operational Span Task, a Lexical Decision Task, and the Revised Shortened Utley Lipreading Test, respectively. In general, the effects measured using the objective measure of listening effort were small (~10 msec). Results indicated that background noise increased listening effort and hearing aids reduced listening effort, while visual cues did not influence listening effort. With regard to the individual variables, verbal processing speed was negatively correlated with hearing aid benefit for listening effort; faster processors were less likely to derive benefit. Working memory capacity, verbal processing speed, and lipreading ability were related to benefit from visual cues. No
Carroll, Rebecca; Uslar, Verena; Brand, Thomas; Ruigendijk, Esther
The authors aimed to determine whether hearing impairment affects sentence comprehension beyond phoneme or word recognition (i.e., on the sentence level), and to distinguish grammatically induced processing difficulties in structurally complex sentences from perceptual difficulties associated with listening to degraded speech. Effects of hearing impairment or speech in noise were expected to reflect hearer-specific speech recognition difficulties. Any additional processing time caused by the sustained perceptual challenges across the sentence may either be independent of or interact with top-down processing mechanisms associated with grammatical sentence structure. Forty-nine participants listened to canonical subject-initial or noncanonical object-initial sentences that were presented either in quiet or in noise. Twenty-four participants had mild-to-moderate hearing impairment and received hearing-loss-specific amplification. Twenty-five participants were age-matched peers with normal hearing status. Reaction times were measured on-line at syntactically critical processing points as well as two control points to capture differences in processing mechanisms. An off-line comprehension task served as an additional indicator of sentence (mis)interpretation, and enforced syntactic processing. The authors found general effects of hearing impairment and speech in noise that negatively affected perceptual processing, and an effect of word order, where complex grammar locally caused processing difficulties for the noncanonical sentence structure. Listeners with hearing impairment were hardly affected by noise at the beginning of the sentence, but were affected markedly toward the end of the sentence, indicating a sustained perceptual effect of speech recognition. Comprehension of sentences with noncanonical word order was negatively affected by degraded signals even after sentence presentation. Hearing impairment adds perceptual processing load during sentence processing
Speech recognition by normal-hearing listeners improves as a function of the number of spectral channels when tested with a noiseband vocoder simulating cochlear implant signal processing. Speech recognition by the best cochlear implant users, however, saturates around eight channels and does not
David L Woods
Hearing aids (HAs) only partially restore the ability of older hearing-impaired (OHI) listeners to understand speech in noise, due in large part to persistent deficits in consonant identification. Here, we investigated whether adaptive perceptual training would improve consonant identification in noise in sixteen aided OHI listeners who underwent 40 hours of computer-based training in their homes. Listeners identified 20 onset and 20 coda consonants in 9,600 consonant-vowel-consonant (CVC) syllables containing different vowels (/ɑ/, /i/, or /u/) and spoken by four different talkers. Consonants were presented at three consonant-specific signal-to-noise ratios (SNRs) spanning a 12 dB range. Noise levels were adjusted over training sessions based on d' measures. Listeners were tested before and after training to measure (1) changes in consonant-identification thresholds using syllables spoken by familiar and unfamiliar talkers, and (2) sentence reception thresholds (SeRTs) using two different sentence tests. Consonant-identification thresholds improved gradually during training. Laboratory tests of d' thresholds showed an average improvement of 9.1 dB, with 94% of listeners showing statistically significant training benefit. Training normalized consonant confusions and improved the thresholds of some consonants into the normal range. Benefits were equivalent for onset and coda consonants, for syllables containing different vowels, and for syllables presented at different SNRs. Greater training benefits were found for hard-to-identify consonants and for consonants spoken by familiar rather than unfamiliar talkers. SeRTs, tested with simple sentences, showed less elevation than consonant-identification thresholds prior to training and failed to show significant training benefit, although SeRT improvements did correlate with improvements in consonant thresholds. We argue that the lack of SeRT improvement reflects the dominant role of top-down semantic processing in
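The d' measures used to track identification performance above are conventionally computed from hit and false-alarm rates as z(H) − z(FA), where z is the inverse of the standard normal CDF. A minimal sketch; the example rates are hypothetical, not the study's data:

```python
from statistics import NormalDist

def d_prime(hit_rate, fa_rate):
    """Sensitivity index d' = z(hit rate) - z(false-alarm rate),
    where z is the inverse standard normal CDF."""
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(fa_rate)

# Hypothetical example: 84% hits and 16% false alarms
print(round(d_prime(0.84, 0.16), 2))  # 1.99
```

Chance performance (equal hit and false-alarm rates) yields d' = 0, so adapting noise level to hold d' constant keeps the task at a fixed sensitivity rather than a fixed percent correct.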
Kumar, K. V.; Rao, A. B.
This paper presents the results of an assessment of speech discrimination by hearing-impaired listeners (sensorineural, conductive, and mixed groups) under binaural free-field listening in the presence of background noise. Subjects with pure-tone thresholds greater than 20 dB at 0.5, 1.0, and 2.0 kHz were presented with a version of the W-22 list of phonetically balanced words under three conditions: (1) 'quiet', with the chamber noise below 28 dB and speech at 60 dB; (2) at a constant S/N ratio of +10 dB, with background white noise at 70 dB; and (3) the same as condition (2), but with the background noise at 80 dB. The mean speech discrimination scores decreased significantly with noise in all groups. However, the decrease in binaural speech discrimination scores with an increase in hearing impairment was less for material presented under the noise conditions than for the material presented in quiet.
Rudner, Mary; Mishra, Sushmit; Stenfelt, Stefan; Lunner, Thomas; Rönnberg, Jerker
Purpose: Seeing the talker's face improves speech understanding in noise, possibly releasing resources for cognitive processing. We investigated whether it improves free recall of spoken two-digit numbers. Method: Twenty younger adults with normal hearing and 24 older adults with hearing loss listened to and subsequently recalled lists of 13…
Loiselle, Louise H.; Dorman, Michael F.; Yost, William A.; Cook, Sarah J.; Gifford, Rene H.
Purpose: To assess the role of interaural time differences and interaural level differences in (a) sound-source localization, and (b) speech understanding in a cocktail party listening environment for listeners with bilateral cochlear implants (CIs) and for listeners with hearing-preservation CIs. Methods: Eleven bilateral listeners with MED-EL…
Alhanbali, Sara; Dawes, Piers; Lloyd, Simon; Munro, Kevin J
To investigate the correlations between hearing handicap, speech recognition, listening effort, and fatigue. Eighty-four adults with hearing loss (65 to 85 years) completed three self-report questionnaires: the Fatigue Assessment Scale, the Effort Assessment Scale, and the Hearing Handicap Inventory for the Elderly. Audiometric assessment included pure-tone audiometry and speech recognition in noise. There was a significant positive correlation between handicap and fatigue (r = 0.39) and between speech recognition and fatigue (r = 0.22). Hearing handicap and speech recognition both correlate with self-reported listening effort and fatigue, which is consistent with a model of listening effort and fatigue in which perceived difficulty is related to sustained effort and fatigue for unrewarding tasks over which the listener has low control. A clinical implication is that encouraging clients to recognize and focus on the pleasure and positive experiences of listening may result in greater satisfaction and benefit from hearing aid use.
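The r values reported above are Pearson correlation coefficients. A minimal sketch of the computation, on hypothetical handicap and fatigue scores (not the study's data):

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient r between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return sxy / (sx * sy)

# Hypothetical questionnaire scores for five listeners
handicap = [10, 20, 30, 40, 50]
fatigue = [12, 18, 35, 33, 52]
print(round(pearson_r(handicap, fatigue), 2))  # 0.96
```

r ranges from −1 to +1; the study's values of 0.39 and 0.22 indicate moderate and weak positive associations, respectively.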
Collins, Anita; Vanderheide, Rebecca; McKenna, Lisa
Noise overload within the clinical environment has been found to interfere with the healing process for patients, as well as with nurses' ability to assess patients effectively. Awareness of and responsibility for noise production begin during initial nursing training, and consequently a program to enhance aural awareness skills was designed for graduate-entry nursing students at an Australian university. The program used an innovative combination of music education activities to develop the students' ability to distinguish individual sounds (hearing), appreciate patients' experience of sounds (listening), improve their auscultation skills, and reduce the negative effects of noise on patients (action). Using a mixed-methods approach, students reported heightened auscultation skills and greater recognition of both patients' and clinicians' aural overload. Results of this pilot suggest that music education activities can assist nursing students to develop their aural awareness and to make changes within the clinical environment that improve the patient's experience of noise.
Abdul Wahab, Noor Alaudin; Zakaria, Mohd Normani; Abdul Rahman, Abdul Hamid; Sidek, Dinsuhaimi; Wahab, Suzaily
The present case-control study investigated binaural hearing performance of schizophrenia patients for sentences presented in quiet and in noise. Participants were twenty-one healthy controls and sixteen schizophrenia patients with normal peripheral auditory function. Binaural hearing was examined in four listening conditions using the Malay version of the Hearing in Noise Test. Syntactically and semantically correct sentences were presented via headphones to the randomly selected subjects. In each condition, the adaptively obtained reception thresholds for speech (RTS) were used to determine the RTS noise composite and spatial release from masking. Schizophrenia patients demonstrated a significantly higher mean RTS than healthy controls (p=0.018). The large effect sizes found in three listening conditions, i.e., quiet (d=1.07), noise right (d=0.88), and noise composite (d=0.90), indicate a statistically significant difference between the groups, whereas the noise front and noise left conditions showed medium (d=0.61) and small (d=0.50) effect sizes, respectively. No statistical difference between groups was noted with regard to spatial release from masking at the right (p=0.305) or left (p=0.970) ear. The present findings suggest abnormal unilateral auditory processing in the central auditory pathway in schizophrenia patients. Future studies exploring the role of binaural and spatial auditory processing are recommended.
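The effect sizes reported above are Cohen's d values. With a pooled standard deviation, d can be computed as below; the means, SDs, and group sizes in the example are hypothetical, not taken from the study:

```python
import math

def cohens_d(mean1, mean2, sd1, sd2, n1, n2):
    """Cohen's d using the pooled standard deviation of the two groups."""
    pooled_sd = math.sqrt(((n1 - 1) * sd1 ** 2 + (n2 - 1) * sd2 ** 2)
                          / (n1 + n2 - 2))
    return (mean1 - mean2) / pooled_sd

# Hypothetical RTS means/SDs (dB) for a patient group (n=16)
# versus a control group (n=21)
print(round(cohens_d(-2.0, -4.5, 2.4, 2.3, 16, 21), 2))  # 1.07
```

By the usual convention, d around 0.2 is small, 0.5 medium, and 0.8 or more large, which is how the abstract labels its values.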
Horn, David L.; Won, Jong Ho; Rubinstein, Jay T.; Werner, Lynne A.
Objectives: Spectral resolution is a correlate of open-set speech understanding in post-lingually deaf adults as well as pre-lingually deaf children who use cochlear implants (CIs). In order to apply measures of spectral resolution to assess device efficacy in younger CI users, it is necessary to understand how spectral resolution develops in normal-hearing (NH) children. In this study, spectral ripple discrimination (SRD) was used to measure listeners' sensitivity to a shift in phase of the spectral envelope of a broadband noise. Both resolution of peak-to-peak location (frequency resolution) and peak-to-trough intensity (across-channel intensity resolution) are required for SRD. Design: SRD was measured as the highest ripple density (in ripples per octave) for which a listener could discriminate a 90-degree shift in phase of the sinusoidally modulated amplitude spectrum. A 2 × 3 between-subjects design was used to assess the effects of age (7-month-old infants versus adults) and ripple peak/trough "depth" (10, 13, and 20 dB) on SRD in normal-hearing listeners (Experiment 1). In Experiment 2, SRD thresholds in the same age groups were compared using a task in which ripple starting phases were randomized across trials to obscure within-channel intensity cues. In Experiment 3, the randomized starting-phase method was used to measure SRD as a function of age (3-month-old infants, 7-month-old infants, and young adults) and ripple depth (10 and 20 dB, in a repeated-measures design). Results: In Experiment 1, there was a significant interaction between age and ripple depth. Infant SRDs were significantly poorer than adult SRDs at 10 and 13 dB ripple depths but adult-like at 20 dB depth. This result is consistent with immature across-channel intensity resolution. In contrast, the trajectory of SRD as a function of depth was steeper for infants than adults, suggesting that frequency resolution was better in infants than adults. However, in Experiment 2 infant performance was
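The spectral ripple stimulus described above, an amplitude spectrum sinusoidally modulated on a log-frequency axis, can be sketched as follows. The reference frequency f0 and the example values are assumptions for illustration:

```python
import math

def ripple_level_db(freq_hz, density_rpo, depth_db, phase_rad=0.0, f0=100.0):
    """Level (dB re: the mean) of a spectral ripple at freq_hz: the
    amplitude spectrum is sinusoidally modulated on a log-frequency
    axis with density_rpo ripples per octave; f0 is an assumed
    reference frequency."""
    octaves = math.log2(freq_hz / f0)
    return (depth_db / 2) * math.sin(
        2 * math.pi * density_rpo * octaves + phase_rad)

# 1 ripple/octave, 20 dB depth: one octave above f0 sits on a spectral
# peak when the starting phase is 90 degrees; at f0 with phase 0 the
# level is at the mean
print(round(ripple_level_db(200, 1, 20, math.pi / 2), 1))  # 10.0
print(round(ripple_level_db(100, 1, 20, 0.0), 1))          # 0.0
```

A 90-degree phase shift slides every peak a quarter ripple period along the log-frequency axis, which is the change SRD listeners must detect; randomizing the starting phase across trials (Experiments 2 and 3) removes the fixed within-channel level cue at any single frequency.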
Bernarding, Corinna; Strauss, Daniel J; Hannemann, Ronny; Seidler, Harald; Corona-Strauss, Farah I
In this study, we propose a novel estimate of listening effort based on electroencephalographic data. The method translates our past findings, gained from evoked electroencephalographic activity, to oscillatory EEG activity. To test the technique, electroencephalographic data were recorded from experienced hearing aid users with moderate hearing loss while they wore hearing aids. The investigated hearing aid settings were: a directional microphone combined with a noise reduction algorithm in a medium and a strong setting, the noise reduction setting turned off, and a setting using omnidirectional microphones without any noise reduction. The results suggest that the electroencephalographic estimate of listening effort is a useful tool for mapping the effort exerted by the participants. In addition, the results indicate that a directional processing mode can reduce listening effort in multitalker listening situations.
Wu, Dan; Chen, Jian-yong; Wang, Shuo; Zhang, Man-hua; Chen, Jing; Li, Yu-ling; Zhang, Hua
To evaluate the relationship between the Mandarin acceptable noise level (ANL) and personality traits in normal-hearing adults. Eighty-five Mandarin speakers, aged 21 to 27, participated in this study. ANL materials and the Eysenck Personality Questionnaire (EPQ) were used to measure the acceptable noise level and personality traits of normal-hearing subjects. SPSS 17.0 was used to analyze the results. The ANL was (7.8 ± 2.9) dB in normal-hearing participants. The P and N scores of the EPQ were significantly correlated with ANL (r = 0.284 and 0.318, P < 0.05). Listeners with higher ANLs were more likely to be eccentric, hostile, aggressive, and unstable; no ANL differences were found between listeners who differed on the introversion-extraversion or lie scales.
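ANL is conventionally defined as the most comfortable listening level for speech minus the highest acceptable background noise level. A minimal sketch; the example levels are hypothetical:

```python
def acceptable_noise_level(mcl_db, bnl_db):
    """ANL = most comfortable listening level (MCL) minus the highest
    acceptable background noise level (BNL), both in dB; a lower ANL
    means the listener accepts relatively more background noise."""
    return mcl_db - bnl_db

# Hypothetical levels: speech most comfortable at 63 dB HL,
# noise accepted up to 55 dB HL
print(acceptable_noise_level(63, 55))  # 8
```

An 8 dB result like this would fall near the group mean of (7.8 ± 2.9) dB reported above.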
Wu, Mengfan; El-Haj-Ali, Mouhamad; Sanchez Lopez, Raul
hearing aid settings that differed in terms of signal-to-noise ratio (SNR) improvement and temporal and spectral speech distortions were selected for testing based on a comprehensive technical evaluation of different parameterisations of the hearing aid simulator. Speech-in-noise perception was assessed... stimulus comparison paradigm. RESULTS We hypothesize that the perceptual outcomes from the six hearing aid settings will differ across listeners with different auditory profiles. More specifically, we expect listeners showing high sensitivity to temporal and spectral differences to perform best with and/or to favour hearing aid settings that preserve those cues. In contrast, we expect listeners showing low sensitivity to temporal and spectral differences to perform best with and/or to favour settings that maximize SNR improvement, independent of any additional speech distortions. Altogether, we anticipate...
Anouk P Netten
The purpose of this study was to examine the level of empathy in deaf and hard of hearing (pre)adolescents compared to normal hearing controls and to define the influence of language and various hearing loss characteristics on the development of empathy. The study group (mean age 11.9 years) consisted of 122 deaf and hard of hearing children (52 children with cochlear implants and 70 children with conventional hearing aids) and 162 normal hearing children. The two groups were compared using self-reports, a parent-report and observation tasks to rate the children's level of empathy, their attendance to others' emotions, emotion recognition, and supportive behavior. Deaf and hard of hearing children reported lower levels of cognitive empathy and prosocial motivation than normal hearing children, regardless of their type of hearing device. The level of emotion recognition was equal in both groups. During observations, deaf and hard of hearing children showed more attention to the emotion evoking events but less supportive behavior compared to their normal hearing peers. Deaf and hard of hearing children attending mainstream education or using oral language show higher levels of cognitive empathy and prosocial motivation than deaf and hard of hearing children who use sign (supported) language or attend special education. However, they are still outperformed by normal hearing children. Deaf and hard of hearing children, especially those in special education, show lower levels of empathy than normal hearing children, which can have consequences for initiating and maintaining relationships.
Netten, Anouk P; Rieffe, Carolien; Theunissen, Stephanie C P M; Soede, Wim; Dirks, Evelien; Briaire, Jeroen J; Frijns, Johan H M
The purpose of this study was to examine the level of empathy in deaf and hard of hearing (pre)adolescents compared to normal hearing controls and to define the influence of language and various hearing loss characteristics on the development of empathy. The study group (mean age 11.9 years) consisted of 122 deaf and hard of hearing children (52 children with cochlear implants and 70 children with conventional hearing aids) and 162 normal hearing children. The two groups were compared using self-reports, a parent-report and observation tasks to rate the children's level of empathy, their attendance to others' emotions, emotion recognition, and supportive behavior. Deaf and hard of hearing children reported lower levels of cognitive empathy and prosocial motivation than normal hearing children, regardless of their type of hearing device. The level of emotion recognition was equal in both groups. During observations, deaf and hard of hearing children showed more attention to the emotion evoking events but less supportive behavior compared to their normal hearing peers. Deaf and hard of hearing children attending mainstream education or using oral language show higher levels of cognitive empathy and prosocial motivation than deaf and hard of hearing children who use sign (supported) language or attend special education. However, they are still outperformed by normal hearing children. Deaf and hard of hearing children, especially those in special education, show lower levels of empathy than normal hearing children, which can have consequences for initiating and maintaining relationships.
Alicea, Carly C. M.; Doherty, Karen A.
Purpose: The purpose of this study was to compare the motivation to change in relation to hearing problems in adults with normal hearing thresholds but who report hearing problems and that of adults with a mild-to-moderate sensorineural hearing loss. Factors related to their motivation were also assessed. Method: The motivation to change in…
Croghan, Naomi B H; Arehart, Kathryn H; Kates, James M
Current knowledge of how to design and fit hearing aids to optimize music listening is limited. Many hearing-aid users listen to recorded music, which often undergoes compression limiting (CL) in the music industry. Therefore, hearing-aid users may experience twofold effects of compression when listening to recorded music: music-industry CL and hearing-aid wide dynamic-range compression (WDRC). The goal of this study was to examine the roles of input-signal properties, hearing-aid processing, and individual variability in the perception of recorded music, with a focus on the effects of dynamic-range compression. A group of 18 experienced hearing-aid users made paired-comparison preference judgments for classical and rock music samples using simulated hearing aids. Music samples were either unprocessed before hearing-aid input or had different levels of music-industry CL. Hearing-aid conditions included linear gain and individually fitted WDRC. Combinations of four WDRC parameters were included: fast release time (50 msec), slow release time (1,000 msec), three channels, and 18 channels. Listeners also completed several psychophysical tasks. Acoustic analyses showed that CL and WDRC reduced temporal envelope contrasts, changed amplitude distributions across the acoustic spectrum, and smoothed the peaks of the modulation spectrum. Listener judgments revealed that fast WDRC was least preferred for both genres of music. For classical music, linear processing and slow WDRC were equally preferred, and the main effect of number of channels was not significant. For rock music, linear processing was preferred over slow WDRC, and three channels were preferred to 18 channels. Heavy CL was least preferred for classical music, but the amount of CL did not change the patterns of WDRC preferences for either genre. Auditory filter bandwidth as estimated from psychophysical tuning curves was associated with variability in listeners' preferences for classical music. Fast …
Johnson, Jani A; Xu, Jingjing; Cox, Robyn M
Modern hearing aid (HA) devices include a collection of acoustic signal-processing features designed to improve listening outcomes in a variety of daily auditory environments. Manufacturers market these features at successive levels of technological sophistication. The features included in costlier premium hearing devices are designed to result in further improvements to daily listening outcomes compared with the features included in basic hearing devices. However, independent research has not substantiated such improvements. This research was designed to explore differences in speech-understanding and listening-effort outcomes for older adults using premium-feature and basic-feature HAs in their daily lives. For this participant-blinded, repeated, crossover trial 45 older adults (mean age 70.3 years) with mild-to-moderate sensorineural hearing loss wore each of four pairs of bilaterally fitted HAs for 1 month. HAs were premium- and basic-feature devices from two major brands. After each 1-month trial, participants' speech-understanding and listening-effort outcomes were evaluated in the laboratory and in daily life. Three types of speech-understanding and listening-effort data were collected: measures of laboratory performance, responses to standardized self-report questionnaires, and participant diary entries about daily communication. The only statistically significant superiority for the premium-feature HAs occurred for listening effort in the loud laboratory condition and was demonstrated for only one of the tested brands. The predominant complaint of older adults with mild-to-moderate hearing impairment is difficulty understanding speech in various settings. The combined results of all the outcome measures used in this research suggest that, when fitted using scientifically based practices, both premium- and basic-feature HAs are capable of providing considerable, but essentially equivalent, improvements to speech understanding and listening effort in daily life.
Horn, David L; Won, Jong Ho; Rubinstein, Jay T; Werner, Lynne A
Spectral resolution is a correlate of open-set speech understanding in postlingually deaf adults and prelingually deaf children who use cochlear implants (CIs). To apply measures of spectral resolution to assess device efficacy in younger CI users, it is necessary to understand how spectral resolution develops in normal-hearing children. In this study, spectral ripple discrimination (SRD) was used to measure listeners' sensitivity to a shift in phase of the spectral envelope of a broadband noise. Both resolution of peak to peak location (frequency resolution) and peak to trough intensity (across-channel intensity resolution) are required for SRD. SRD was measured as the highest ripple density (in ripples per octave) for which a listener could discriminate a 90° shift in phase of the sinusoidally-modulated amplitude spectrum. A 2 × 3 between-subjects design was used to assess the effects of age (7-month-old infants versus adults) and ripple peak/trough "depth" (10, 13, and 20 dB) on SRD in normal-hearing listeners (experiment 1). In experiment 2, SRD thresholds in the same age groups were compared using a task in which ripple starting phases were randomized across trials to obscure within-channel intensity cues. In experiment 3, the randomized starting phase method was used to measure SRD as a function of age (3-month-old infants, 7-month-old infants, and young adults) and ripple depth (10 and 20 dB in repeated measures design). In experiment 1, there was a significant interaction between age and ripple depth. The infant SRDs were significantly poorer than the adult SRDs at 10 and 13 dB ripple depths but adult-like at 20 dB depth. This result is consistent with immature across-channel intensity resolution. In contrast, the trajectory of SRD as a function of depth was steeper for infants than adults suggesting that frequency resolution was better in infants than adults. However, in experiment 2 infant performance was significantly poorer than adults at 20 dB …
Nemec, Patricia B; Spagnolo, Amy Cottone; Soydan, Anne Sullivan
This column provides an overview of methods for training to improve service provider active listening and reflective responding skills. Basic skills in active listening and reflective responding allow service providers to gather information about and explore the needs, desires, concerns, and preferences of people using their services, activities that are of critical importance if services are to be truly person-centered and person-driven. Sources include the personal experience of the authors as well as published literature on the value of basic counseling skills and best practices in training on listening and other related soft skills. Training in listening is often needed but rarely sought by behavioral health service providers. Effective curricula exist, providing content and practice opportunities that can be incorporated into training, supervision, and team meetings. When providers do not listen well to the people who use their services, the entire premise of recovery-oriented person-driven services is undermined.
Lentz, Jennifer J; Walker, Matthew A; Short, Ciara E; Skinner, Kimberly G
This study evaluated the American Speech-Language-Hearing Association's recommendation that audiometric testing for patients with tinnitus should use pulsed or warble tones. Using listeners with varied audiometric configurations and tinnitus statuses, we asked whether steady, pulsed, and warble tones yielded similar audiometric thresholds, and which tone type was preferred. Audiometric thresholds (octave frequencies from 0.25-16 kHz) were measured using steady, pulsed, and warble tones in 61 listeners, who were divided into 4 groups on the basis of hearing and tinnitus status. Participants rated the appeal and difficulty of each tone type on a 1-5 scale and selected a preferred type. For all groups, thresholds were lower for warble than for pulsed and steady tones, with the largest effects above 4 kHz. Appeal ratings did not differ across tone type, but the steady tone was rated as more difficult than the warble and pulsed tones. Participants generally preferred pulsed and warble tones. Pulsed tones provide advantages over steady and warble tones for patients regardless of hearing or tinnitus status. Although listeners preferred pulsed and warble tones to steady tones, pulsed tones are not susceptible to the effects of off-frequency listening, a consideration when testing listeners with sloping audiograms.
Vogel, I.; Brug, J.; Ploeg, C.P.B. van der; Raat, H.
There is an increasing population at risk of hearing loss and tinnitus due to increasing high-volume music listening. To inform prevention strategies and interventions, this study aimed to identify important protection motivation theory-based constructs as well as the constructs 'consideration of future consequences' and 'habit' …
Vogel, Ineke; Brug, Johannes; Van Der Ploeg, Catharina P. B.; Raat, Hein
There is an increasing population at risk of hearing loss and tinnitus due to increasing high-volume music listening. To inform prevention strategies and interventions, this study aimed to identify important protection motivation theory-based constructs as well as the constructs "consideration of future consequences" and "habit…
Petersen, Bjørn; Friis Andersen, Anne Sofie; Højlund, Andreas
With the considerable advances made in cochlear implant (CI) technology with regard to speech perception, it is natural that many CI users express hopes of being able to enjoy music. For the majority of CI users, however, the music experience is disappointing, and their discrimination of musical features as well as self-reported levels of music enjoyment are significantly lower than those of normal-hearing (NH) listeners (1,2). Therefore, it is important that ongoing efforts are made to improve the quality of music through a CI. To aid in this process, the aim of this study is to validate two new musical …
Pichora-Fuller, M Kathleen; Kramer, Sophia E; Eckert, Mark A; Edwards, Brent; Hornsby, Benjamin W Y; Humes, Larry E; Lemke, Ulrike; Lunner, Thomas; Matthen, Mohan; Mackersie, Carol L; Naylor, Graham; Phillips, Natalie A; Richter, Michael; Rudner, Mary; Sommers, Mitchell S; Tremblay, Kelly L; Wingfield, Arthur
The Fifth Eriksholm Workshop on "Hearing Impairment and Cognitive Energy" was convened to develop a consensus among interdisciplinary experts about what is known on the topic, gaps in knowledge, the use of terminology, priorities for future research, and implications for practice. The general term cognitive energy was chosen to facilitate the broadest possible discussion of the topic. It goes back to early accounts of the effects of attention on perception, in which the term psychic energy was used for the notion that limited mental resources can be flexibly allocated among perceptual and mental activities. The workshop focused on three main areas: (1) theories, models, concepts, definitions, and frameworks; (2) methods and measures; and (3) knowledge translation. We defined effort as the deliberate allocation of mental resources to overcome obstacles in goal pursuit when carrying out a task, with listening effort applying more specifically when tasks involve listening. We adapted Kahneman's seminal (1973) Capacity Model of Attention to listening and proposed a heuristically useful Framework for Understanding Effortful Listening (FUEL). Our FUEL incorporates the well-known relationship between cognitive demand and the supply of cognitive capacity that is the foundation of cognitive theories of attention. Our FUEL also incorporates a motivation dimension based on complementary theories of motivational intensity, adaptive gain control, and optimal performance, fatigue, and pleasure. Using a three-dimensional illustration, we highlight how listening effort depends not only on hearing difficulties and task demands but also on the listener's motivation to expend mental effort in the challenging situations of everyday life.
Ohlenforst, Barbara; Zekveld, Adriana A.; Lunner, Thomas
Previous research has reported effects of masker type and signal-to-noise ratio (SNR) on listening effort, as indicated by the peak pupil dilation (PPD) relative to baseline during speech recognition. At about 50% correct sentence recognition performance, increasing SNRs generally result in declining PPDs, indicating reduced effort. However, the decline in PPD over SNRs has been observed to be less pronounced for hearing-impaired (HI) compared to normal-hearing (NH) listeners. The presence of a competing talker during speech recognition generally resulted in larger PPDs as compared … -talker masker) on the PPD during speech perception. Twenty-five HI and 32 age-matched NH participants listened to sentences across a broad range of SNRs, masked with speech from a single talker (-25 dB to +15 dB SNR) or with stationary noise (-12 dB to +16 dB). Correct sentence recognition scores and pupil …
Mehraei, Golbarg; Gallun, Frederick J; Leek, Marjorie R; Bernstein, Joshua G W
Poor speech understanding in noise by hearing-impaired (HI) listeners is only partly explained by elevated audiometric thresholds. Suprathreshold-processing impairments such as reduced temporal or spectral resolution or temporal fine-structure (TFS) processing ability might also contribute. Although speech contains dynamic combinations of temporal and spectral modulation and TFS content, these capabilities are often treated separately. Modulation-depth detection thresholds for spectrotemporal modulation (STM) applied to octave-band noise were measured for normal-hearing and HI listeners as a function of temporal modulation rate (4-32 Hz), spectral ripple density [0.5-4 cycles/octave (c/o)] and carrier center frequency (500-4000 Hz). STM sensitivity was worse than normal for HI listeners only for a low-frequency carrier (1000 Hz) at low temporal modulation rates (4-12 Hz) and a spectral ripple density of 2 c/o, and for a high-frequency carrier (4000 Hz) at a high spectral ripple density (4 c/o). STM sensitivity for the 4-Hz, 4-c/o condition for a 4000-Hz carrier and for the 4-Hz, 2-c/o condition for a 1000-Hz carrier were correlated with speech-recognition performance in noise after partialling out the audiogram-based speech-intelligibility index. Poor speech-reception and STM-detection performance for HI listeners may be related to a combination of reduced frequency selectivity and a TFS-processing deficit limiting the ability to track spectral-peak movements.
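The spectrotemporal modulation (STM) stimuli described above can be illustrated with a short sketch. This is not the authors' stimulus code; the function name, parameter names, and the tone-cloud synthesis method are assumptions, and only the stimulus definition (a sinusoidal modulation of an octave-band noise spectrum, specified by a temporal rate in Hz, a spectral ripple density in cycles/octave, and a modulation depth in dB) follows the abstract.

```python
import numpy as np

def stm_octave_band_noise(fc=1000.0, rate=4.0, density=2.0, depth_db=20.0,
                          dur=1.0, fs=44100, n_tones=200, seed=0):
    """Octave-band noise (fc/sqrt(2) .. fc*sqrt(2)) carrying sinusoidal
    spectrotemporal modulation: `rate` in Hz, `density` in cycles/octave,
    `depth_db` as peak-to-trough envelope depth in dB."""
    rng = np.random.default_rng(seed)
    t = np.arange(int(dur * fs)) / fs
    lo, hi = fc / np.sqrt(2), fc * np.sqrt(2)
    freqs = lo * (hi / lo) ** rng.random(n_tones)   # log-uniform carrier cloud
    phases = rng.uniform(0, 2 * np.pi, n_tones)
    x = np.log2(freqs / lo)                         # spectral position in octaves
    # Modulation index m such that 20*log10((1+m)/(1-m)) == depth_db
    m = (10 ** (depth_db / 20) - 1) / (10 ** (depth_db / 20) + 1)
    sig = np.zeros_like(t)
    for f, ph, xi in zip(freqs, phases, x):
        env = 1 + m * np.sin(2 * np.pi * (rate * t + density * xi))
        sig += env * np.sin(2 * np.pi * f * t + ph)
    return sig / np.max(np.abs(sig))                # normalize to full scale
```

Shifting the phase term inside the envelope by 90° would yield the phase-shifted comparison stimuli used in ripple-discrimination paradigms such as the SRD task described earlier in this collection.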
Listening is one of the most crucial skills that leaders need to possess but is often the most difficult to master. It takes hard work, concentration, and specific skill sets to become an effective listener. Facilities leaders need to perfect the art of listening to their employees. Employees possess pertinent knowledge about day-to-day operations…
Netten, Anouk P.; Rieffe, Carolien; Theunissen, Stephanie C. P. M.; Soede, Wim; Dirks, Evelien; Briaire, Jeroen J.; Frijns, Johan H. M.
Objective The purpose of this study was to examine the level of empathy in deaf and hard of hearing (pre)adolescents compared to normal hearing controls and to define the influence of language and various hearing loss characteristics on the development of empathy. Methods The study group (mean age 11.9 years) consisted of 122 deaf and hard of hearing children (52 children with cochlear implants and 70 children with conventional hearing aids) and 162 normal hearing children. The two groups were compared using self-reports, a parent-report and observation tasks to rate the children’s level of empathy, their attendance to others’ emotions, emotion recognition, and supportive behavior. Results Deaf and hard of hearing children reported lower levels of cognitive empathy and prosocial motivation than normal hearing children, regardless of their type of hearing device. The level of emotion recognition was equal in both groups. During observations, deaf and hard of hearing children showed more attention to the emotion evoking events but less supportive behavior compared to their normal hearing peers. Deaf and hard of hearing children attending mainstream education or using oral language show higher levels of cognitive empathy and prosocial motivation than deaf and hard of hearing children who use sign (supported) language or attend special education. However, they are still outperformed by normal hearing children. Conclusions Deaf and hard of hearing children, especially those in special education, show lower levels of empathy than normal hearing children, which can have consequences for initiating and maintaining relationships. PMID:25906365
Agterberg, Martijn J H; Snik, Ad F M; Hol, Myrthe K S; Van Wanrooij, Marc M; Van Opstal, A John
Sound localization in the horizontal (azimuth) plane relies mainly on interaural time differences (ITDs) and interaural level differences (ILDs). Both are distorted in listeners with acquired unilateral conductive hearing loss (UCHL), reducing their ability to localize sound. Several studies demonstrated that UCHL listeners had some ability to localize sound in azimuth. To test whether listeners with acquired UCHL use strongly perturbed binaural difference cues, we measured localization while they listened with a sound-attenuating earmuff over their impaired ear. We also tested the potential use of monaural pinna-induced spectral-shape cues for localization in azimuth and elevation, by filling the cavities of the pinna of their better-hearing ear with a mould. These conditions were tested while a bone-conduction device (BCD), fitted to all UCHL listeners in order to provide hearing from the impaired side, was turned off. We varied stimulus presentation levels to investigate whether UCHL listeners were using sound level as an azimuth cue. Furthermore, we examined whether horizontal sound-localization abilities improved when listeners used their BCD. Ten control listeners without hearing loss demonstrated a significant decrease in their localization abilities when they listened with a monaural plug and muff. In 4/13 UCHL listeners we observed good horizontal localization of 65 dB SPL broadband noises with their BCD turned off. Localization was strongly impaired when the impaired ear was covered with the muff. The mould in the good ear of listeners with UCHL deteriorated the localization of broadband sounds presented at 45 dB SPL. This demonstrates that they used pinna cues to localize sounds presented at low levels. Our data demonstrate that UCHL listeners have learned to adapt their localization strategies under a wide variety of hearing conditions and that sound-localization abilities improved with their BCD turned on.
Netten, A.P.; Rieffe, C.; Theunissen, S.C.P.M.; Soede, W.; Dirks, E.; Briaire, J.J.; Frijns, J.H.M.
Objective The purpose of this study was to examine the level of empathy in deaf and hard of hearing (pre)adolescents compared to normal hearing controls and to define the influence of language and various hearing loss characteristics on the development of empathy. Methods The study group (mean age
Petersen, Eline B.; Wöstmann, Malte; Obleser, Jonas; Stenfelt, Stefan; Lunner, Thomas
Degradations in external, acoustic stimulation have long been suspected to increase the load on working memory (WM). One neural signature of WM load is enhanced power of alpha oscillations (6–12 Hz). However, it is unknown to what extent common internal, auditory degradation, that is, hearing impairment, affects the neural mechanisms of WM when audibility has been ensured via amplification. Using an adapted auditory Sternberg paradigm, we varied the orthogonal factors memory load and background …
Lau, Sin Tung; Pichora-Fuller, M Kathleen; Li, Karen Z H; Singh, Gurjit; Campos, Jennifer L
Most activities of daily living require the dynamic integration of sights, sounds, and movements as people navigate complex environments. Nevertheless, little is known about the effects of hearing loss (HL) or hearing aid (HA) use on listening during multitasking challenges. The objective of the current study was to investigate the effect of age-related hearing loss (ARHL) on word recognition accuracy in a dual-task experiment. Virtual reality (VR) technologies in a specialized laboratory (Challenging Environment Assessment Laboratory) were used to produce a controlled and safe simulated environment for listening while walking. In a simulation of a downtown street intersection, participants completed two single-task conditions, listening-only (standing stationary) and walking-only (walking on a treadmill to cross the simulated intersection with no speech presented), and a dual-task condition (listening while walking). For the listening task, they were required to recognize words spoken by a target talker when there was a competing talker. For some blocks of trials, the target talker was always located at 0° azimuth (100% probability condition); for other blocks, the target talker was more likely (60% of trials) to be located at the center (0° azimuth) and less likely (40% of trials) to be located at the left (270° azimuth). The participants were eight older adults with bilateral HL (mean age = 73.3 yr, standard deviation [SD] = 8.4; three males) who wore their own HAs during testing and eight controls with normal hearing (NH) thresholds (mean age = 69.9 yr, SD = 5.4; two males). No participant had clinically significant visual, cognitive, or mobility impairments. Word recognition accuracy and kinematic parameters (head and trunk angles, step width and length, stride time, cadence) were analyzed using mixed factorial analyses of variance with group as a between-subjects factor. Task condition (single versus dual) and probability (100% versus 60%) were within-subjects factors.
Jepsen, Morten Løve; Dau, Torsten
This study considered consequences of sensorineural hearing loss in ten listeners. The characterization of individual hearing loss was based on psychoacoustic data addressing audiometric pure-tone sensitivity, cochlear compression, frequency selectivity, temporal resolution, and intensity … [… –438 (2008)] was used as a framework. The parameters of the cochlear processing stage of the model were adjusted to account for behaviorally estimated individual basilar-membrane input-output functions and the audiogram, from which the amounts of inner hair-cell and outer hair-cell losses were estimated …
Strelcyk, Olaf; Dau, Torsten
Hearing-impaired people often experience great difficulty with speech communication when background noise is present, even if reduced audibility has been compensated for; other impairment factors must be involved. In order to minimize confounding effects, the subjects participating in this study consisted of groups with homogeneous, symmetric audiograms. The perceptual listening experiments assessed the intelligibility of full-spectrum as well as low-pass filtered speech in the presence of stationary and fluctuating interferers, the individual's frequency selectivity, and the integrity of temporal … modulation were obtained. In addition, these binaural and monaural thresholds were measured in a stationary background noise in order to assess the persistence of the fine-structure processing to interfering noise. Apart from elevated speech reception thresholds, the hearing-impaired listeners showed poorer …
Yook, Sunhyun; Nam, Kyoung Won; Kim, Heepyung; Hong, Sung Hwa; Jang, Dong Pyo; Kim, In Young
In order to provide more consistent sound intelligibility for the hearing-impaired person, regardless of environment, it is necessary to adjust the setting of the hearing-support (HS) device to accommodate various environmental circumstances. In this study, a fully automatic HS device management algorithm that can adapt to various environmental situations is proposed; it is composed of a listening-situation classifier, a noise-type classifier, an adaptive noise-reduction algorithm, and a management algorithm that can selectively turn on/off one or more of the three basic algorithms (beamforming, noise reduction, and feedback cancellation) and can also adjust internal gains and parameters of the wide-dynamic-range compression (WDRC) and noise-reduction (NR) algorithms in accordance with variations in environmental situations. Experimental results demonstrated that the implemented algorithms can classify both listening situation and ambient noise type with high accuracies (92.8-96.4% and 90.9-99.4%, respectively), and the gains and parameters of the WDRC and NR algorithms were successfully adjusted according to variations in environmental situation. The average values of signal-to-noise ratio (SNR), frequency-weighted segmental SNR, Perceptual Evaluation of Speech Quality, and mean opinion test scores of 10 normal-hearing volunteers for the adaptive multiband spectral subtraction (MBSS) algorithm were improved by 1.74 dB, 2.11 dB, 0.49, and 0.68, respectively, compared to the conventional fixed-parameter MBSS algorithm. These results indicate that the proposed environment-adaptive management algorithm can be applied to HS devices to improve sound intelligibility for hearing-impaired individuals in various acoustic environments.
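The selective on/off management idea in the abstract above can be sketched as a rule-based controller. This is only an illustration of the control structure, not the published algorithm: the function name, the two-feature situation estimate (a crude level and zero-crossing measure standing in for the paper's classifiers), and all thresholds and parameter values are assumptions.

```python
import numpy as np

def manage_hs_device(frame):
    """Choose processing modules and gains from one audio frame.

    `frame` is a 1-D array of samples. The level/zero-crossing features
    are placeholders for a real listening-situation classifier."""
    level_db = 10 * np.log10(np.mean(frame ** 2) + 1e-12) + 94   # rough SPL proxy
    zcr = np.mean(np.abs(np.diff(np.sign(frame)))) / 2           # zero-crossing rate
    speech_like = 0.02 < zcr < 0.25      # voiced speech has a low crossing rate
    noisy = level_db > 65                # assumed "noisy environment" threshold
    return {
        "beamforming": bool(speech_like and noisy),   # target talker in noise
        "noise_reduction": bool(noisy),
        "feedback_cancellation": bool(level_db > 80), # high-gain situations
        "nr_strength": 0.8 if noisy else 0.2,         # NR parameter setting
        "wdrc_ratio": 3.0 if noisy else 2.0,          # WDRC compression ratio
    }
```

In a real device this decision would run per processing block, with hysteresis so that module switching and gain changes do not produce audible artifacts between frames.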
Greenstein, Elizabeth A; Arora, Vineet M; Staisiunas, Paul G; Banerjee, Stacy S; Farnan, Jeanne M
The increasing fragmentation of healthcare has resulted in more patient handoffs. Many professional groups, including the Accreditation Council on Graduate Medical Education and the Society of Hospital Medicine, have made recommendations for safe and effective handoffs. Despite the two-way nature of handoff communication, the focus of these efforts has largely been on the person giving information. To observe and characterise the listening behaviours of handoff receivers during hospitalist handoffs, we conducted a prospective observational study of shift change and service change handoffs on a non-teaching hospitalist service at a single academic tertiary care institution. The 'HEAR Checklist', a novel tool created based on review of effective listening behaviours, was used by third party observers to characterise active and passive listening behaviours and interruptions during handoffs. In 48 handoffs (25 shift change, 23 service change), active listening behaviours (eg, read-back (17%), note-taking (23%) and reading own copy of the written signout (27%)) occurred less frequently than passive listening behaviours (eg, affirmatory statements (56%), nodding (50%) and eye contact (58%)). Read-back occurred only eight times (17%). In 11 handoffs (23%) receivers took notes. Almost all (98%) handoffs were interrupted at least once, most often by side conversations, pagers going off, or clinicians arriving. Handoffs with more patients, such as service change, were associated with more interruptions (r=0.46). While passive listening behaviours are common, active listening behaviours that promote memory retention are rare. Handoffs are often interrupted, most commonly by side conversations. Future handoff improvement efforts should focus on augmenting listening and minimising interruptions.
Weichbold, Viktor; Zorowka, Patrick
This study looked at whether a hearing education campaign would have behavioral effects on the music listening practices of high school students. A total of 1757 students participated in a hearing education campaign. Before the campaign and one year thereafter they completed a survey asking for: (1) average frequency of discotheque attendance, (2) average duration of stay in the discotheque, (3) use of earplugs in discotheques, (4) frequency of regeneration breaks while at a discotheque, and (5) mean time per week spent listening to music through headphones. On questions (2), (3) and (5) no relevant post-campaign changes were reported. On question (1) students' answers indicated that the frequency of discotheque attendance had even increased after the campaign. The only change in keeping with the purpose of the campaign was an increase in the number of regeneration breaks when at a discotheque. The effect of hearing education campaigns on music listening behavior is questioned. Additional efforts are suggested to encourage adolescents to adopt protective behaviors.
Lewis, Dawna E; Smith, Nicholas A; Spalding, Jody L; Valente, Daniel L
Visual information from talkers facilitates speech intelligibility for listeners when audibility is challenged by environmental noise and hearing loss. Less is known about how listeners actively process and attend to visual information from different talkers in complex multi-talker environments. This study tracked looking behavior in children with normal hearing (NH), mild bilateral hearing loss (MBHL), and unilateral hearing loss (UHL) in a complex multi-talker environment to examine the extent to which children look at talkers and whether looking patterns relate to performance on a speech-understanding task. It was hypothesized that performance would decrease as perceptual complexity increased and that children with hearing loss would perform more poorly than their peers with NH. Children with MBHL or UHL were expected to demonstrate greater attention to individual talkers during multi-talker exchanges, indicating that they were more likely to attempt to use visual information from talkers to assist in speech understanding in adverse acoustics. It was also of interest to examine whether MBHL, versus UHL, would differentially affect performance and looking behavior. Eighteen children with NH, eight children with MBHL, and 10 children with UHL participated (8-12 years). They followed audiovisual instructions for placing objects on a mat under three conditions: a single talker providing instructions via a video monitor, four possible talkers alternately providing instructions on separate monitors in front of the listener, and the same four talkers providing both target and nontarget information. Multi-talker background noise was presented at a 5 dB signal-to-noise ratio during testing. An eye tracker monitored looking behavior while children performed the experimental task. Behavioral task performance was higher for children with NH than for either group of children with hearing loss. There were no differences in performance between children with UHL and children with MBHL…
Rekkedal, Ann Mette
This study investigates factors associated with the listening perception of classroom communication by students with hearing loss, based on the students' and their teachers' views. It also examines how students with different degrees of hearing loss may perceive their classmates. To explore the relationships between the factors Structural Equation…
Dreschler, W. A.
In this study, differences between dynamic-range reduction by peak clipping and single-channel compression for phoneme perception through conventional hearing aids have been investigated. The results from 16 hearing-impaired listeners show that compression limiting yields significantly better…
Niccum, Nancy; And Others
Conductive hearing losses were simulated in 12 subjects aged 19-35 and performance was compared with normal hearing performance. Digit dichotic performance was affected when test intensities were within 8 dB of the "knees" (95 percent correct point) of monotic performance intensity functions, but not when test intensities were 12 dB…
Lidestam, Björn; Rönnberg, Jerker
The present study compared elderly hearing aid (EHA) users (n = 20) with elderly normal-hearing (ENH) listeners (n = 20) in terms of isolation points (IPs, the shortest time required for correct identification of a speech stimulus) and accuracy of audiovisual gated speech stimuli (consonants, words, and final words in highly and less predictable sentences) presented in silence. In addition, we compared the IPs of audiovisual speech stimuli from the present study with auditory ones extracted from a previous study, to determine the impact of the addition of visual cues. Both participant groups achieved ceiling levels in terms of accuracy in the audiovisual identification of gated speech stimuli; however, the EHA group needed longer IPs for the audiovisual identification of consonants and words. The benefit of adding visual cues to auditory speech stimuli was more evident in the EHA group, as audiovisual presentation significantly shortened the IPs for consonants, words, and final words in less predictable sentences; in the ENH group, audiovisual presentation only shortened the IPs for consonants and words. In conclusion, although the audiovisual benefit was greater for the EHA group, this group had inferior performance compared with the ENH group in terms of IPs when supportive semantic context was lacking. Consequently, EHA users needed the initial part of the audiovisual speech signal to be longer than did their counterparts with normal hearing to reach the same level of accuracy in the absence of a semantic context. PMID:27317667
Misurelli, Sara M.
The ability to analyze an "auditory scene"---that is, to selectively attend to a target source while simultaneously segregating and ignoring distracting information---is one of the most important and complex skills utilized by normal hearing (NH) adults. The NH adult auditory system and brain work rather well to segregate auditory sources in adverse environments. However, for some children and individuals with hearing loss, selectively attending to one source in noisy environments can be extremely challenging. In a normal auditory system, information arriving at each ear is integrated, and thus these binaural cues aid in speech understanding in noise. A growing number of individuals who are deaf now receive cochlear implants (CIs), which supply hearing through electrical stimulation to the auditory nerve. In particular, bilateral cochlear implants (BiCIs) are now becoming more prevalent, especially in children. However, because CI sound processing lacks both fine structure cues and coordination between stimulation at the two ears, binaural cues may either be absent or inconsistent. For children with NH and with BiCIs, this difficulty in segregating sources is of particular concern because their learning and development commonly occurs within the context of complex auditory environments. This dissertation intends to explore and understand the ability of children with NH and with BiCIs to function in everyday noisy environments. The goals of this work are to (1) Investigate source segregation abilities in children with NH and with BiCIs; (2) Examine the effect of target-interferer similarity and the benefits of source segregation for children with NH and with BiCIs; (3) Investigate measures of executive function that may predict performance in complex and realistic auditory tasks of source segregation for listeners with NH; and (4) Examine source segregation abilities in NH listeners, from school-age to adults.
Background and Aim: Reading is the most important human need for learning. In normal-hearing people, working memory is a predictor of reading comprehension. In this study the relationship between working memory and reading comprehension skills was studied in hearing-impaired children and then compared with a normal-hearing group. Methods: This was a descriptive-analytic study. The working memory and reading comprehension skills of 18 (8 male, 10 female) severely hearing-impaired children in year five of exceptional schools were compared, by means of a reading test, with 18 hearing children as a control group. The subjects in the control group were of the same gender and educational level as the sample group. Results: The children with hearing loss performed similarly to the normal-hearing children in tasks related to auditory-verbal memory of sounds (reverse), visual-verbal memory of letters, and visual-verbal memory of pictures. However, they showed lower levels of performance in reading comprehension (p<0.001). Moreover, no significant relationship was observed between working memory and reading comprehension skills. Conclusion: Findings indicated that children with hearing loss have a significant impairment in reading comprehension skill. Impairment in language knowledge and vocabulary may be the main cause of poor reading comprehension in these children. In hearing-impaired children, working memory is not a strong predictor of reading comprehension.
Schafer, Erin C; Bryant, Danielle; Sanders, Katie; Baldus, Nicole; Algier, Katherine; Lewis, Audrey; Traber, Jordan; Layden, Paige; Amin, Aneeqa
Several recent investigations support the use of frequency modulation (FM) systems in children with normal hearing and auditory processing or listening disorders such as those diagnosed with auditory processing disorders, autism spectrum disorders, attention-deficit hyperactivity disorder, Friedreich ataxia, and dyslexia. The American Academy of Audiology (AAA) published suggested procedures, but these guidelines do not cite research evidence to support the validity of the recommended procedures for fitting and verifying nonoccluding open-ear FM systems on children with normal hearing. Documenting the validity of these fitting procedures is critical to maximize the potential FM-system benefit in the above-mentioned populations of children with normal hearing and those with auditory-listening problems. The primary goal of this investigation was to determine the validity of the AAA real-ear approach to fitting FM systems on children with normal hearing. The secondary goal of this study was to examine speech-recognition performance in noise and loudness ratings without and with FM systems in children with normal hearing sensitivity. A two-group, cross-sectional design was used in the present study. Twenty-six typically functioning children, ages 5-12 yr, with normal hearing sensitivity participated in the study. Participants used a nonoccluding open-ear FM receiver during laboratory-based testing. Participants completed three laboratory tests: (1) real-ear measures, (2) speech recognition performance in noise, and (3) loudness ratings. Four real-ear measures were conducted to (1) verify that measured output met prescribed-gain targets across the 1000-4000 Hz frequency range for speech stimuli, (2) confirm that the FM-receiver volume did not exceed predicted uncomfortable loudness levels, and (3 and 4) measure changes to the real-ear unaided response when placing the FM receiver in the child's ear. After completion of the fitting, speech recognition in noise at a -5 dB signal-to-noise ratio…
Schädler, Marc R.; Warzybok, Anna; Kollmeier, Birger
The simulation framework for auditory discrimination experiments (FADE) was adopted and validated to predict the individual speech-in-noise recognition performance of listeners with normal and impaired hearing with and without a given hearing-aid algorithm. FADE uses a simple automatic speech recognizer (ASR) to estimate the lowest achievable speech reception thresholds (SRTs) from simulated speech recognition experiments in an objective way, independent from any empirical reference data. Empirical data from the literature were used to evaluate the model in terms of predicted SRTs and benefits in SRT with the German matrix sentence recognition test when using eight single- and multichannel binaural noise-reduction algorithms. To allow individual predictions of SRTs in binaural conditions, the model was extended with a simple better ear approach and individualized by taking audiograms into account. In a realistic binaural cafeteria condition, FADE explained about 90% of the variance of the empirical SRTs for a group of normal-hearing listeners and predicted the corresponding benefits with a root-mean-square prediction error of 0.6 dB. This highlights the potential of the approach for the objective assessment of benefits in SRT without prior knowledge about the empirical data. The predictions for the group of listeners with impaired hearing explained 75% of the empirical variance, while the individual predictions explained less than 25%. Possibly, additional individual factors should be considered for more accurate predictions with impaired hearing. A competing talker condition clearly showed one limitation of current ASR technology, as the empirical performance with SRTs lower than −20 dB could not be predicted. PMID:29692200
Kumar, U Ajith; Ameenudin, Syed; Sangamanatha, A V
Prolonged exposure to high levels of occupational noise can cause damage to hair cells in the cochlea and result in permanent noise-induced cochlear hearing loss. Consequences of cochlear hearing loss for speech perception and psychophysical abilities have been well documented. The primary goal of this research was to explore temporal processing and speech perception skills in individuals who are exposed to occupational noise of more than 80 dBA but have not yet incurred clinically significant threshold shifts. The contribution of temporal processing skills to speech perception in adverse listening situations was also evaluated. A total of 118 participants took part in this research. Participants comprised three groups of train drivers in the age ranges of 30-40 (n = 13), 41-50 (n = 9), and 51-60 (n = 6) years and their non-noise-exposed counterparts (n = 30 in each age group). Participants of all the groups, including the train drivers, had hearing sensitivity within 25 dB HL at the octave frequencies between 250 Hz and 8 kHz. Temporal processing was evaluated using gap detection, modulation detection, and duration pattern tests. Speech recognition was tested in the presence of multi-talker babble at -5 dB SNR. Differences between experimental and control groups were analyzed using ANOVA and independent-sample t-tests. Results showed a trend of reduced temporal processing skills in individuals with noise exposure. These deficits were observed despite normal peripheral hearing sensitivity. Speech recognition scores in the presence of noise were also significantly poorer in the noise-exposed group. Furthermore, poor temporal processing skills partially accounted for the speech recognition difficulties exhibited by the noise-exposed individuals. These results suggest that noise can cause significant distortions in the processing of suprathreshold temporal cues, which may add to difficulties in hearing in adverse listening conditions.
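Several of these abstracts report speech testing at a fixed signal-to-noise ratio (0 dB, -5 dB SNR). As a purely illustrative sketch, not taken from any of the studies summarized here (the function name and the stand-in signals are hypothetical), the gain needed to mix speech and babble at a target SNR can be derived from the RMS levels of the two signals:

```python
import numpy as np

def scale_noise_for_snr(speech, noise, target_snr_db):
    """Scale `noise` so that mixing it with `speech` yields the target SNR.

    SNR (dB) = 20 * log10(rms(speech) / rms(noise)).
    """
    rms = lambda x: np.sqrt(np.mean(np.square(x)))
    # Gain that places the scaled noise target_snr_db below the speech level.
    gain = rms(speech) / (rms(noise) * 10 ** (target_snr_db / 20))
    return noise * gain

rng = np.random.default_rng(0)
speech = rng.normal(size=16000)   # stand-in for a speech waveform
noise = rng.normal(size=16000)    # stand-in for multi-talker babble
scaled = scale_noise_for_snr(speech, noise, -5.0)
snr_db = 20 * np.log10(np.sqrt(np.mean(speech**2)) / np.sqrt(np.mean(scaled**2)))
# snr_db equals the -5.0 dB target (to within floating-point error)
```

In practice, laboratories often use frequency-weighted or speech-active-segment levels rather than the plain broadband RMS shown here, so this is only the simplest version of the calculation.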
Buus, Søren; Florentine, Mary; Poulsen, Torben
To investigate how hearing loss of primarily cochlear origin affects the loudness of brief tones, loudness matches between 5- and 200-ms tones were obtained as a function of level for 15 listeners with cochlear impairments and for seven age-matched controls. Three frequencies, usually 0.5, 1, and 4 kHz, were tested. … The amount of temporal integration—defined as the level difference between equally loud short and long tones—varied nonmonotonically with level and was largest at moderate levels. No consistent effect of frequency was apparent. The impaired listeners varied widely, but most showed a clear effect of level on the amount of temporal integration. Overall, their results appear consistent with expectations based on knowledge of the general properties of their loudness-growth functions and the equal-loudness-ratio hypothesis, which states that the loudness ratio between equal-SPL long and brief tones is the same at all SPLs…
Kam, Anna Chi Shan; Sung, John Ka Keung; Lee, Tan; Wong, Terence Ka Cheong; van Hasselt, Andrew
In this study, the authors evaluated the effect of personalized amplification on mobile phone speech recognition in people with and without hearing loss. This prospective study used double-blind, within-subjects, repeated-measures, controlled trials to evaluate the effectiveness of applying personalized amplification based on the hearing level captured on the mobile device. The personalized amplification settings were created using modified one-third gain targets. The participants in this study were 100 adults aged between 20 and 78 years (60 with age-adjusted normal hearing and 40 with hearing loss). The performance of the participants with personalized amplification and standard settings was compared using both subjective and speech-perception measures. Speech recognition was measured in quiet and in noise using Cantonese disyllabic words. Subjective ratings on the quality, clarity, and comfortableness of the mobile signals were measured with an 11-point visual analog scale. Subjective preferences for the settings were also obtained by a paired-comparison procedure. The personalized amplification application provided better speech recognition via the mobile phone both in quiet and in noise for people with hearing impairment (improved 8 to 10%) and people with normal hearing (improved 1 to 4%). The improvement in speech recognition was significantly greater for people with hearing impairment. When the average device output level was matched, more participants preferred to have the individualized gain than not to have it. The personalized amplification application has the potential to improve speech recognition for people with mild-to-moderate hearing loss, as well as people with normal hearing, in particular when listening in noisy environments.
Brand, Thomas; Hauth, Christopher; Wagener, Kirsten C.
Linked pairs of hearing aids offer various possibilities for directional processing, providing an adjustable trade-off between improving signal-to-noise ratio and preserving binaural listening. The benefit depends on the processing scheme, the acoustic scenario, and the listener's ability to exploit … fine structure. BSIM revealed a benefit due to binaural processing in well-performing listeners when processing provided low-frequency interaural timing cues…
Curtin, Hugh D
This article presents an approach to imaging conductive hearing loss in patients with normal tympanic membranes and discusses entities that should be checked as the radiologist evaluates this potentially complicated issue. Conductive hearing loss in a patient with a normal tympanic membrane is a complicated condition that requires a careful imaging approach. Imaging should focus on otosclerosis, and possible mimics and potential surgical considerations should be evaluated. The radiologist should examine the ossicular chain and the round window and keep in mind that a defect in the superior semicircular canal can disturb the hydraulic integrity of the labyrinth.
Krysne Kelly de França Oliveira
Introduction: There has been an increase in the number of hearing-impaired people with access to higher education. Most of them are young people from a different culture who present difficulties in communication, inter-relationship, and learning within a culture of normal-hearing people, because they use a different language, the Brazilian Sign Language - LIBRAS. Objective: The present study aimed to identify the forms of communication used between hearing-impaired and normal-hearing students, verifying how they can interfere with the learning process of the former. Methods: A qualitative study conducted at a private university in the city of Fortaleza, Ceará state, Brazil, from February to April 2009. We carried out semi-structured interviews with three hearing-impaired students, three teachers, three interpreters, and three normal-hearing students. The content of the speeches was categorized and organized by the method of thematic analysis. Results: We verified that the forms of communication used ranged from mime and gestures to writing and drawing, but the one most accepted by the hearing-impaired students was LIBRAS. As a method of communication, it supports the learning of hearing-impaired students, and with the mediation of interpreters, it gives them the conditions to settle in their zones of development, according to the precepts of Vygotsky. Conclusion: Thus, we recognize the importance of LIBRAS as the predominant language, essential to the full academic achievement of hearing-impaired students; however, their efforts and dedication, as well as the interest of institutions and teachers in deaf culture, are also important for preparing future professionals.
Valente, M; Peterein, J; Goebel, J; Neely, J G
In 95 percent of the cases, patients with acoustic neuromas will have some magnitude of hearing loss in the affected ear. This paper reports on four patients who had acoustic neuromas and normal hearing. Results from the case history, audiometric evaluation, auditory brainstem response (ABR), electroneurography (ENOG), and vestibular evaluation are reported for each patient. For all patients, the presence of unilateral tinnitus was the most common complaint. Audiologically, elevated or absent acoustic reflex thresholds and abnormal ABR findings were the most powerful diagnostic tools.
Goedegebure, A.; Hulshof, M.; Maas, R. J.; Dreschler, W. A.; Verschuure, H.
The effect of digital processing on speech intelligibility was studied in hearing-impaired listeners with moderate to severe high-frequency losses. The amount of smoothed phonemic compression in a high-frequency channel was varied using wide-band control. Two alternative systems were tested to…
Locsei, Gusztav; Dau, Torsten; Santurette, Sébastien
This study investigated the contribution of interaural timing differences (ITDs) in different frequency regions to binaural unmasking (BU) of speech. Speech reception thresholds (SRTs) and binaural intelligibility level differences (BILDs) were measured in two-talker babble in 6 young normal-hearing…
Gfeller, Kate; Olszewski, Carol; Rychener, Marly; Sena, Kimberly; Knutson, John F; Witt, Shelley; Macpherson, Beth
The purposes of this study were (a) to compare recognition of "real-world" music excerpts by postlingually deafened adults using cochlear implants and normal-hearing adults; (b) to compare the performance of cochlear implant recipients using different devices and processing strategies; and (c) to examine the variability among implant recipients in recognition of musical selections in relation to performance on speech perception tests, performance on cognitive tests, and demographic variables. Seventy-nine cochlear implant users and 30 normal-hearing adults were tested on open-set recognition of systematically selected excerpts from musical recordings heard in real life. The recognition accuracy of the two groups was compared for three musical genres: classical, country, and pop. Recognition accuracy was correlated with speech recognition scores, cognitive measures, and demographic measures, including musical background. Cochlear implant recipients were significantly less accurate than normal-hearing adults in recognition of previously familiar (known before hearing loss) musical excerpts in each genre. Implant recipients were most accurate in the recognition of country items and least accurate in the recognition of classical items. There were no significant differences among implant recipients due to implant type (Nucleus, Clarion, or Ineraid) or programming strategy (SPEAK, CIS, or ACE). For cochlear implant recipients, correlations between melody recognition and other measures were moderate to weak in strength; those with statistically significant correlations included age at time of testing (negatively correlated), performance on selected speech perception tests, and the amount of focused music listening following implantation. Current-day cochlear implants are not effective in transmitting several key structural features (i.e., pitch, harmony, timbral blends) of music essential to open-set recognition of well-known musical selections. Consequently, implant…
Alexandra Dezani Soares
CONTEXT AND OBJECTIVE: Oral narrative is a means of language development assessment. However, standardized data for deaf patients are scarce. The aim here was to compare narrative competence between hearing-impaired and normal-hearing children. DESIGN AND SETTING: Analytical cross-sectional study at the Department of Speech-Language and Hearing Sciences, Universidade Federal de São Paulo. METHODS: Twenty-one moderately to profoundly bilaterally hearing-impaired children (cases) and 21 normal-hearing children without language abnormalities (controls), matched according to sex, age, schooling level and school type, were studied. A board showing pictures in a temporally logical sequence was presented to each child, to elicit a narrative, and the child's performance relating to narrative structure and cohesion was measured. The frequencies of variables, their associations (Mann-Whitney test) and their 95% confidence intervals were analyzed. RESULTS: The deaf subjects showed poorer performance regarding narrative structure, use of connectives, cohesion measurements and general punctuation (P < 0.05). There were no differences in the number of propositions elaborated or in referent specification between the two groups. The deaf children produced a higher proportion of orientation-related propositions (P = 0.001) and lower proportions of propositions relating to complicating actions (P = 0.015) and character reactions (P = 0.005). CONCLUSION: Hearing-impaired children have abnormalities in different aspects of language, involving form, content and use, relative to their normal-hearing peers. Narrative competence was also associated with the children's ages and the school type.
Gilliver, Megan; Carter, Lyndal; Macoun, Denise; Rosen, Jenny; Williams, Warwick
Professional and community concerns about the potentially dangerous noise levels of common leisure activities have led to increased interest in providing hearing health information to participants. However, noise reduction programmes aimed at leisure activities (such as music listening) face a unique difficulty. The noise source that is earmarked for reduction by hearing health professionals is often the same one that is viewed as pleasurable by participants. Furthermore, these activities often exist within a social setting, with additional peer influences that may affect behavior. The current study aimed to gain a better understanding of social-based factors that may influence an individual's motivation to engage in positive hearing health behaviors. Four hundred and eighty-four participants completed questionnaires examining their perceptions of the hearing risk associated with music listening and asking for estimates of their own and their peers' music listening behaviors. Participants were generally aware of the potential risk posed by listening to personal stereo players (PSPs) and the volumes likely to be most dangerous. Approximately one in five participants reported using listening volumes at levels perceived to be dangerous, an incidence rate in keeping with other studies measuring actual PSP use. However, participants showed less awareness of peers' behavior, consistently overestimating the volumes at which they believed their friends listened. Misperceptions of social norms relating to listening behavior may decrease individuals' perceptions of susceptibility to hearing damage. The consequences for hearing health promotion are discussed, along with suggestions relating to the development of new programs.
Schädler, Marc René; Warzybok, Anna; Meyer, Bernd T.; Brand, Thomas
To characterize the individual patient’s hearing impairment as obtained with the matrix sentence recognition test, a simulation Framework for Auditory Discrimination Experiments (FADE) is extended here using the Attenuation and Distortion (A+D) approach by Plomp as a blueprint for setting the individual processing parameters. FADE has been shown to predict the outcome of both speech recognition tests and psychoacoustic experiments based on simulations using an automatic speech recognition system requiring only few assumptions. It builds on the closed-set matrix sentence recognition test which is advantageous for testing individual speech recognition in a way comparable across languages. Individual predictions of speech recognition thresholds in stationary and in fluctuating noise were derived using the audiogram and an estimate of the internal level uncertainty for modeling the individual Plomp curves fitted to the data with the Attenuation (A-) and Distortion (D-) parameters of the Plomp approach. The “typical” audiogram shapes from Bisgaard et al with or without a “typical” level uncertainty and the individual data were used for individual predictions. As a result, the individualization of the level uncertainty was found to be more important than the exact shape of the individual audiogram to accurately model the outcome of the German Matrix test in stationary or fluctuating noise for listeners with hearing impairment. The prediction accuracy of the individualized approach also outperforms the (modified) Speech Intelligibility Index approach which is based on the individual threshold data only. PMID:27604782
Chisolm, Theresa Hnath; Saunders, Gabrielle H; Frederick, Melissa T; McArdle, Rachel A; Smith, Sherri L; Wilson, Richard H
To examine the role of compliance in the outcomes of computer-based auditory training with the Listening and Communication Enhancement (LACE) program in Veterans using hearing aids. The authors examined available LACE training data for 5 tasks (i.e., speech-in-babble, time compression, competing speaker, auditory memory, missing word) from 50 hearing-aid users who participated in a larger, randomized controlled trial designed to examine the efficacy of LACE training. The goals were to determine: (a) whether there were changes in performance over 20 training sessions on trained tasks (i.e., on-task outcomes); and (b) whether compliance, defined as completing all 20 sessions, vs. noncompliance, defined as completing fewer than 20 sessions, influenced performance on parallel untrained tasks (i.e., off-task outcomes). The majority of participants (84%) completed 20 sessions, with maximum outcome occurring with at least 10 sessions of training for some tasks and up to 20 sessions of training for others. Comparison of baseline to posttest performance revealed statistically significant improvements for 4 of 7 off-task outcome measures for the compliant group, with at least small (0.2) effect sizes. The high compliance in the present study may be attributable to use of systematized verbal and written instructions with telephone follow-up. Compliance, as expected, appears important for optimizing the outcomes of auditory training. Methods to improve compliance in clinical populations need to be developed, and compliance data are important to report in future studies of auditory training.
Constantinescu-Sharpe, Gabriella; Phillips, Rebecca L; Davis, Aleisha; Dornan, Dimity; Hogan, Anthony
Social inclusion is a common focus of listening and spoken language (LSL) early intervention for children with hearing loss. This exploratory study compared the social inclusion of young children with hearing loss educated using a listening and spoken language approach with population data. A framework for understanding the scope of social inclusion is presented in the Background. This framework guided the use of a shortened, modified version of the Longitudinal Study of Australian Children (LSAC) to measure two of the five facets of social inclusion ('education' and 'interacting with society and fulfilling social roles'). The survey was completed by parents of children with hearing loss aged 4-5 years who were educated using a LSL approach (n = 78; 37% response rate). These responses were compared to those obtained for children with typical hearing in the LSAC dataset (n = 3265). Analyses revealed that most children with hearing loss had comparable outcomes to those with typical hearing on the 'education' and 'interacting with society and fulfilling social roles' facets of social inclusion. These exploratory findings are positive and warrant further investigation across all five facets of the framework to identify which factors influence social inclusion.
Akpinar, Berkcan [University of Pittsburgh School of Medicine, Pittsburgh, Pennsylvania (United States); Mousavi, Seyed H., E-mail: email@example.com [Department of Neurological Surgery, University of Pittsburgh Medical Center, Pittsburgh, Pennsylvania (United States); McDowell, Michael M.; Niranjan, Ajay; Faraji, Amir H. [Department of Neurological Surgery, University of Pittsburgh Medical Center, Pittsburgh, Pennsylvania (United States); Flickinger, John C. [Department of Radiation Oncology, University of Pittsburgh Medical Center, Pittsburgh, Pennsylvania (United States); Lunsford, L. Dade [Department of Neurological Surgery, University of Pittsburgh Medical Center, Pittsburgh, Pennsylvania (United States)
Purpose: Vestibular schwannomas (VS) are increasingly diagnosed in patients with normal hearing because of advances in magnetic resonance imaging. We sought to evaluate whether stereotactic radiosurgery (SRS) performed earlier after diagnosis improved long-term hearing preservation in this population. Methods and Materials: We queried our quality assessment registry and found the records of 1134 acoustic neuroma patients who underwent SRS during a 15-year period (1997-2011). We identified 88 patients who had VS but normal hearing with no subjective hearing loss at the time of diagnosis. All patients were Gardner-Robertson (GR) class I at the time of SRS. Fifty-seven patients underwent early (≤2 years from diagnosis) SRS and 31 patients underwent late (>2 years after diagnosis) SRS. At a median follow-up time of 75 months, we evaluated patient outcomes. Results: Tumor control rates (decreased or stable in size) were similar in the early (95%) and late (90%) treatment groups (P=.73). Patients in the early treatment group retained serviceable (GR class I/II) hearing and normal (GR class I) hearing longer than did patients in the late treatment group (serviceable hearing, P=.006; normal hearing, P<.0001, respectively). At 5 years after SRS, an estimated 88% of the early treatment group retained serviceable hearing and 77% retained normal hearing, compared with 55% with serviceable hearing and 33% with normal hearing in the late treatment group. Conclusions: SRS within 2 years after diagnosis of VS in normal hearing patients resulted in improved retention of all hearing measures compared with later SRS.
Introduction: Tinnitus has become a common otological complaint. Another complaint often found in individuals with tinnitus is hyperacusis. Objective: To analyze the characteristics of tinnitus and hyperacusis in normal-hearing individuals with associated complaints of tinnitus and hyperacusis. Method: In this cross-sectional study, 25 normal-hearing individuals who complained of hyperacusis and tinnitus were surveyed. They were questioned about the location and type of the tinnitus. The tinnitus was evaluated using the Brazilian Tinnitus Handicap Inventory and acuphenometry. A questionnaire on hyperacusis covered aspects such as sounds considered uncomfortable, sensations in the presence of such sounds, and difficulty understanding speech in noise. Results: Of the 25 individuals, 64% were women and 36% men. Regarding tinnitus, 84% reported a bilateral location and 80% a high pitch. The most common degree found was mild (44%). The women presented a tinnitus degree statistically higher than that of the men. High-intensity sounds and the reactions of irritation, anxiety, and the need to move away from the sound were the most mentioned. Of the individuals analyzed, 68% reported difficulty understanding speech in noise and 12% reported using hearing protection. The frequencies most often found at acuphenometry were 6 and 8 kHz. Conclusion: Normal-hearing individuals who complain of tinnitus and hyperacusis present mainly high-pitch tinnitus, located bilaterally and of mild degree. The sounds considered uncomfortable were high-intensity ones, and the most cited reaction to sound was irritation. Difficulty understanding speech in noise was reported by most of the individuals.
Voss, Susan E; Herrmann, Barbara S; Horton, Nicholas J; Amadei, Elizabeth A; Kujawa, Sharon G
The objective is to develop methods to utilize newborn reflectance measures for the identification of middle-ear transient conditions (e.g., middle-ear fluid) during the newborn period and ultimately during the first few months of life. Transient middle-ear conditions are a suspected source of failure to pass a newborn hearing screening. The ability to identify a conductive loss during the screening procedure could enable the referred ear to be either (1) cleared of a middle-ear condition and recommended for more extensive hearing assessment as soon as possible, or (2) suspected of a transient middle-ear condition, and if desired, be rescreened before more extensive hearing assessment. Reflectance measurements are reported from full-term, healthy, newborn babies in which one ear referred and one ear passed an initial auditory brainstem response newborn hearing screening and a subsequent distortion product otoacoustic emission screening on the same day. These same subjects returned for a detailed follow-up evaluation at age 1 month (range 14 to 35 days). In total, measurements were made on 30 subjects who had a unilateral refer near birth (during their first 2 days of life) and bilateral normal hearing at follow-up (about 1 month old). Three specific comparisons were made: (1) Association of ear's state with power reflectance near birth (referred versus passed ear), (2) Changes in power reflectance of normal ears between newborn and 1 month old (maturation effects), and (3) Association of ear's newborn state (referred versus passed) with ear's power reflectance at 1 month. In addition to these measurements, a set of preliminary data selection criteria were developed to ensure that analyzed data were not corrupted by acoustic leaks and other measurement problems. Within 2 days of birth, the power reflectance measured in newborn ears with transient middle-ear conditions (referred newborn hearing screening and passed hearing assessment at age 1 month) was significantly
Lee, Gary Jek Chong; Lim, Ming Yann; Kuan, Angeline Yi Wei; Teo, Joshua Han Wei; Tan, Hui Guang; Low, Wong Kein
Noise-induced hearing loss (NIHL) is a preventable condition, and much has been done to protect workers from it. However, thus far, little attention has been given to leisure NIHL. The purpose of this study is to determine the possible music listening preferences and habits among young people in Singapore that may put them at risk of developing leisure NIHL. In our study, the proportion of participants exposed to > 85 dBA for eight hours a day (time-weighted average) was calculated by taking into account the daily number of hours spent listening to music and by determining the average sound pressure level at which music was listened to. A total of 1,928 students were recruited from Temasek Polytechnic, Singapore. Of these, 16.4% of participants listened to portable music players with a time-weighted average of > 85 dBA for 8 hours. On average, we found that male students were more likely to listen to music at louder volumes than female students (p students in our study listened to louder music than the Chinese students (p leisure NIHL from music delivered via earphones. As additional risks due to exposure to leisure noise from other sources was not taken into account, the extent of the problem of leisure NIHL may be even greater. There is a compelling need for an effective leisure noise prevention program among young people in Singapore.
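The eight-hour time-weighted average used in the study above follows from the equal-energy principle with a 3 dB exchange rate: each halving of listening time permits a 3 dB higher level for the same daily noise dose. A minimal sketch of that arithmetic (the function names are illustrative, not taken from the study):

```python
import math

def leq_8h(level_dba, hours_per_day):
    """8-hour equivalent continuous level (equal-energy principle, 3 dB exchange rate)."""
    return level_dba + 10 * math.log10(hours_per_day / 8.0)

def exceeds_limit(level_dba, hours_per_day, limit=85.0):
    """True if the listening habit exceeds the 85 dBA / 8 h criterion."""
    return leq_8h(level_dba, hours_per_day) > limit

# Listening at 91 dBA for 2 h/day carries roughly the same energy as 85 dBA for 8 h.
print(round(leq_8h(91, 2), 1))
print(exceeds_limit(94, 2))
```

Applying this per participant to their reported listening level and daily duration yields the proportion of listeners above the 85 dBA time-weighted criterion.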
Sun Da; Zhan Hongwei; Xu Wei; Liu Hongbiao; He Guangqiang
Purpose: To detect the cerebral functional localization when normal subjects listened to a story in English as a second language. Methods: Fourteen normal young students of the medical college of Zhejiang University (22-24 years old; 8 male, 6 female) participated. First, they underwent 99mTc-ECD brain imaging at rest using a dual-head gamma camera with fan-beam collimators. After 2-4 days, they were asked to listen to a story in English as a second language on tape for 20 minutes. The story described the life of a well-known physicist, Einstein. They were also asked to pay special attention to the names of the people in the story and to the time and place in which it was set. 99mTc-ECD was administered in the first 3 minutes while they listened to the story. Brain imaging was performed 30-60 minutes after the tracer was administered. Their comprehension was rated as poor, medium, or good according to the content they could restate. Results: Compared with the resting state, while listening to the story and being asked to remember its content, the superior temporal regions were activated in all 14 subjects (bilateral in 4 cases, right in 5, and left in 5). The midtemporal (right in 5 cases), inferior temporal (right in 2 cases and left in 3), and pre-temporal (1 case) regions were also activated. The auditory association areas in the frontal lobes were activated to different degrees: left posterior-inferior frontal (Broca's area) in 8 cases, right posterior-inferior frontal in 3 cases, superior frontal in 6 cases (bilateral in 3 and right in 3), and anterior-inferior frontal and/or medial frontal lobes in 9 cases (bilateral in 6 and right in 3). Other activated regions included the parietal lobes (right in 4 and left in 1), the occipital lobes (bilateral in 4, right in 2, and left in 4), and the anterior cingulate gyrus (1 case). In order of comprehension (poor, medium, good), the activation rate of the occipital lobes decreased (100%, 75%, and 57
Pals, Carina; Sarampalis, Anastasios; Başkent, Deniz
Purpose: Fitting a cochlear implant (CI) for optimal speech perception does not necessarily optimize listening effort. This study aimed to show that listening effort may change between CI processing conditions for which speech intelligibility remains constant. Method: Nineteen normal-hearing
Watson, Charles S.; Kidd, Gary R.
In the present investigation, sensory-perceptual abilities of one thousand young adults with normal hearing are being evaluated with a range of auditory, visual, and cognitive measures. Four auditory measures were derived from factor-analytic analyses of previous studies with 18-20 speech and non-speech variables [G. R. Kidd et al., J. Acoust. Soc. Am. 108, 2641 (2000)]. Two measures of visual acuity are obtained to determine whether variation in sensory skills tends to exist primarily within or across sensory modalities. A working memory test, grade point average, and Scholastic Aptitude Test scores (Verbal and Quantitative) are also included. Preliminary multivariate analyses support previous studies of individual differences in auditory abilities [e.g., A. M. Surprenant and C. S. Watson, J. Acoust. Soc. Am. 110, 2085-2095 (2001)], which found that spectral and temporal resolving power obtained with pure tones and more complex unfamiliar stimuli have little or no correlation with measures of speech recognition under difficult listening conditions. The current findings show that visual acuity, working memory, and intellectual measures are also very poor predictors of speech recognition ability, supporting the independence of this processing skill. Remarkable performance by some exceptional listeners will be described. [Work supported by the Office of Naval Research, Award No. N000140310644.]
Zeitooni, Mehrnaz; Mäki-Torkko, Elina; Stenfelt, Stefan
The purpose of this study is to evaluate binaural hearing ability in adults with normal hearing when bone conduction (BC) stimulation is bilaterally applied at the bone conduction hearing aid (BCHA) implant position as well as at the audiometric position on the mastoid. The results with BC stimulation are compared with bilateral air conduction (AC) stimulation through earphones. Binaural hearing ability is investigated with tests of spatial release from masking and binaural intelligibility level difference using sentence material, binaural masking level difference with tonal chirp stimulation, and precedence effect using noise stimulus. In all tests, results with bilateral BC stimulation at the BCHA position illustrate an ability to extract binaural cues similar to BC stimulation at the mastoid position. The binaural benefit is overall greater with AC stimulation than BC stimulation at both positions. The binaural benefit for BC stimulation at the mastoid and BCHA position is approximately half in terms of decibels compared with AC stimulation in the speech based tests (spatial release from masking and binaural intelligibility level difference). For binaural masking level difference, the binaural benefit for the two BC positions with chirp signal phase inversion is approximately twice the benefit with inverted phase of the noise. The precedence effect results with BC stimulation at the mastoid and BCHA position are similar for low frequency noise stimulation but differ with high-frequency noise stimulation. The results confirm that binaural hearing processing with bilateral BC stimulation at the mastoid position is also present at the BCHA implant position. This indicates the ability for binaural hearing in patients with good cochlear function when using bilateral BCHAs.
Visual Cues Contribute Differentially to Audiovisual Perception of Consonants and Vowels in Improving Recognition and Reducing Cognitive Demands in Listeners With Hearing Impairment Using Hearing Aids.
Moradi, Shahram; Lidestam, Björn; Danielsson, Henrik; Ng, Elaine Hoi Ning; Rönnberg, Jerker
We sought to examine the contribution of visual cues in audiovisual identification of consonants and vowels, in terms of isolation points (the shortest time required for correct identification of a speech stimulus), accuracy, and cognitive demands, in listeners with hearing impairment using hearing aids. The study comprised 199 participants with hearing impairment (mean age = 61.1 years) with bilateral, symmetrical, mild-to-severe sensorineural hearing loss. Gated Swedish consonants and vowels were presented aurally and audiovisually to participants. Linear amplification was adjusted for each participant to assure audibility. The reading span test was used to measure participants' working memory capacity. Audiovisual presentation resulted in shortened isolation points and improved accuracy for consonants and vowels relative to auditory-only presentation. This benefit was more evident for consonants than vowels. In addition, correlations and subsequent analyses revealed that listeners with higher scores on the reading span test identified both consonants and vowels earlier in auditory-only presentation, but only vowels (not consonants) in audiovisual presentation. Consonants and vowels differed in terms of the benefits afforded from their associative visual cues, as indicated by the degree of audiovisual benefit and reduction in cognitive demands linked to the identification of consonants and vowels presented audiovisually.
Davies-Venn, Evelyn; Nelson, Peggy; Souza, Pamela
Some listeners with hearing loss show poor speech recognition scores in spite of using amplification that optimizes audibility. Beyond audibility, studies have suggested that suprathreshold abilities such as spectral and temporal processing may explain differences in amplified speech recognition scores. A variety of different methods have been used to measure spectral processing. However, the relationship between spectral processing and speech recognition is still inconclusive. This study evaluated the relationship between spectral processing and speech recognition in listeners with normal hearing and with hearing loss. Narrowband spectral resolution was assessed using auditory filter bandwidths estimated from simultaneous notched-noise masking. Broadband spectral processing was measured using the spectral ripple discrimination (SRD) task and the spectral modulation depth detection (SMD) task. Three different measures were used to assess unamplified and amplified speech recognition in quiet and noise. Stepwise multiple linear regression revealed that SMD at 2.0 cycles per octave (cpo) significantly predicted speech scores for amplified and unamplified speech in quiet and noise. Commonality analyses revealed that SMD at 2.0 cpo combined with SRD and equivalent rectangular bandwidth measures to explain most of the variance captured by the regression model. Results suggest that SMD and SRD may be promising clinical tools for diagnostic evaluation and predicting amplification outcomes.
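The unique and common variance components reported by a commonality analysis follow from simple R² arithmetic over regressions on predictor subsets. The sketch below is illustrative only (not the authors' analysis code; the helper names are invented) and shows the two-predictor case:

```python
def _solve(A, b):
    # Gaussian elimination with partial pivoting for the normal equations.
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

def r_squared(X, y):
    # Ordinary least squares with intercept; rows of X are observations.
    n = len(y)
    Z = [[1.0] + list(row) for row in X]
    k = len(Z[0])
    XtX = [[sum(Z[i][a] * Z[i][c] for i in range(n)) for c in range(k)] for a in range(k)]
    Xty = [sum(Z[i][a] * y[i] for i in range(n)) for a in range(k)]
    beta = _solve(XtX, Xty)
    yhat = [sum(b * z for b, z in zip(beta, Z[i])) for i in range(n)]
    ybar = sum(y) / n
    ss_res = sum((yi - yh) ** 2 for yi, yh in zip(y, yhat))
    ss_tot = sum((yi - ybar) ** 2 for yi in y)
    return 1.0 - ss_res / ss_tot

def commonality_two(x1, x2, y):
    # Partition R² into variance unique to each predictor and shared variance.
    r2_1 = r_squared([[a] for a in x1], y)
    r2_2 = r_squared([[a] for a in x2], y)
    r2_12 = r_squared(list(zip(x1, x2)), y)
    return {"unique_x1": r2_12 - r2_2,
            "unique_x2": r2_12 - r2_1,
            "common": r2_1 + r2_2 - r2_12}
```

By construction, the two unique components plus the common component always sum to the full-model R², which is why commonality analysis can attribute "most of the variance captured by the regression model" to specific predictor combinations.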
Pellico, Linda Honan; Duffy, Thomas C; Fennie, Kristopher P; Swan, Katharine A
Inspection/observation and listening/auscultation are essential skills for health care providers. Given that observational and auditory skills take time to perfect, there is concern about accelerated students' ability to attain proficiency in a timely manner. This article describes the impact of music auditory training (MAT) for nursing students in an accelerated master's entry program on their competence in detecting heart, lung, and bowel sounds. During the first semester, a two-hour MAT session with focused attention on pitch, timbre, rhythm, and masking was held for the intervention group; a control group received traditional instruction only. Students in the music intervention group demonstrated significant improvement in hearing bowel, heart, and lung sounds (p < .0001). The ability to label normal and abnormal heart sounds doubled; interpretation of normal and abnormal lung sounds improved by 50 percent; and bowel sounds interpretation improved threefold, demonstrating the effect of an adult-oriented, creative, yet practical method for teaching auscultation.
Jin, Huiyuan; Liu, Haitao
Deaf or hard-of-hearing individuals usually face a greater challenge to learn to write than their normal-hearing counterparts. Due to the limitations of traditional research methods focusing on microscopic linguistic features, a holistic characterization of the writing linguistic features of these language users is lacking. This study attempts to fill this gap by adopting the methodology of linguistic complex networks. Two syntactic dependency networks are built in order to compare the macroscopic linguistic features of deaf or hard-of-hearing students and those of their normal-hearing peers. One is transformed from a treebank of writing produced by Chinese deaf or hard-of-hearing students, and the other from a treebank of writing produced by their Chinese normal-hearing counterparts. Two major findings are obtained through comparison of the statistical features of the two networks. On the one hand, both linguistic networks display small-world and scale-free network structures, but the network of the normal-hearing students exhibits a more power-law-like degree distribution. Relevant network measures show significant differences between the two linguistic networks. On the other hand, deaf or hard-of-hearing students tend to have a lower language proficiency level in both syntactic and lexical aspects. The rigid use of function words and a lower vocabulary richness of the deaf or hard-of-hearing students may partially account for the observed differences.
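A syntactic dependency network of the kind compared above links word types that ever stand in a head-dependent relation in the treebank; the degree distribution of that network is what is examined for the scale-free (power-law) property. A toy sketch with invented example edges (the real networks are built from full treebanks of student writing):

```python
from collections import Counter

# Toy head-dependent pairs from a few parsed sentences (illustrative only).
edges = [
    ("like", "I"), ("like", "dogs"),
    ("like", "dogs"), ("like", "we"), ("like", "cats"),
    ("run", "dogs"), ("run", "fast"),
    ("sleep", "cats"), ("sleep", "often"),
]

# Collapse repeated pairs: the network connects word types, not tokens.
adj = {}
for head, dep in edges:
    adj.setdefault(head, set()).add(dep)
    adj.setdefault(dep, set()).add(head)

# Degree of each word, and the degree distribution P(k). In a scale-free
# network, the counts of words with degree k fall off roughly as a power law.
degrees = {word: len(nbrs) for word, nbrs in adj.items()}
dist = Counter(degrees.values())
print(sorted(degrees.items()))
```

Hub words such as high-frequency verbs and function words accumulate high degrees, which is what drives the heavy-tailed distribution; comparing how closely each group's distribution follows a power law is one of the network measures contrasted in the study.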
This article explores a technique that can be used for almost any kind of classroom listening practice and with all kinds of classes. It seems to work well both in exam preparation and in regular textbook listening exercises.
Moradi, Shahram; Lidestam, Björn; Rönnberg, Jerker
The present study compared elderly hearing aid (EHA) users (n = 20) with elderly normal-hearing (ENH) listeners (n = 20) in terms of isolation points (IPs, the shortest time required for correct identification of a speech stimulus) and accuracy of audiovisual gated speech stimuli (consonants, words, and final words in highly and less predictable sentences) presented in silence. In addition, we compared the IPs of audiovisual speech stimuli from the present study with auditory ones extracted from a previous study, to determine the impact of the addition of visual cues. Both participant groups achieved ceiling levels in terms of accuracy in the audiovisual identification of gated speech stimuli; however, the EHA group needed longer IPs for the audiovisual identification of consonants and words. The benefit of adding visual cues to auditory speech stimuli was more evident in the EHA group, as audiovisual presentation significantly shortened the IPs for consonants, words, and final words in less predictable sentences; in the ENH group, audiovisual presentation only shortened the IPs for consonants and words. In conclusion, although the audiovisual benefit was greater for EHA group, this group had inferior performance compared with the ENH group in terms of IPs when supportive semantic context was lacking. Consequently, EHA users needed the initial part of the audiovisual speech signal to be longer than did their counterparts with normal hearing to reach the same level of accuracy in the absence of a semantic context.
Schindwolf, Isabel; Vatti, Marianna; Santurette, Sébastien
Most natural sounds contain frequency fluctuations over time, such as changes in their fundamental frequency, non-periodic speech formant transitions, or periodic fluctuations like musical vibrato. These are sometimes characterized as frequency modulation (FM) with a given excursion (FMe) and rate (FMr). This study investigated the effects of age and sensorineural hearing loss (SNHL) on FMe and FMr difference limens (DLs) for reference values typical of frequency fluctuations observed in speech and music signals.
spectrograms of these phrases were generated by a LISt Processing (LISP) program on a Symbolics 3670 artificial intelligence computer (see Figure 10). The ... speech, and the amount of difference varies with the type of vocoder. [Figure: ADPCM intelligibility; axis labels not recoverable]
Most, Tova; Michaelis, Hilit
Purpose: This study aimed to investigate the effect of hearing loss (HL) on emotion-perception ability among young children with and without HL. Method: A total of 26 children 4.0-6.6 years of age with prelingual sensory-neural HL ranging from moderate to profound and 14 children with normal hearing (NH) participated. They were asked to identify…
Most, Tova; Aviner, Chen
This study evaluated the benefits of cochlear implant (CI) with regard to emotion perception of participants differing in their age of implantation, in comparison to hearing aid users and adolescents with normal hearing (NH). Emotion perception was examined by having the participants identify happiness, anger, surprise, sadness, fear, and disgust.…
Rönnberg, Niklas; Rudner, Mary; Lunner, Thomas; Stenfelt, Stefan
Listening in noise is often perceived to be effortful. This is partly because cognitive resources are engaged in separating the target signal from background noise, leaving fewer resources for storage and processing of the content of the message in working memory. The Auditory Inference Span Test (AIST) is designed to assess listening effort by measuring the ability to maintain and process heard information. The aim of this study was to use AIST to investigate the effect of background noise types and signal-to-noise ratio (SNR) on listening effort, as a function of working memory capacity (WMC) and updating ability (UA). The AIST was administered in three types of background noise: steady-state speech-shaped noise, amplitude modulated speech-shaped noise, and unintelligible speech. Three SNRs targeting 90% speech intelligibility or better were used in each of the three noise types, giving nine different conditions. The reading span test assessed WMC, while UA was assessed with the letter memory test. Twenty young adults with normal hearing participated in the study. Results showed that AIST performance was not influenced by noise type at the same intelligibility level, but became worse with worse SNR when background noise was speech-like. Performance on AIST also decreased with increasing memory load level. Correlations between AIST performance and the cognitive measurements suggested that WMC is of more importance for listening when SNRs are worse, while UA is of more importance for listening in easier SNRs. The results indicated that in young adults with normal hearing, the effort involved in listening in noise at high intelligibility levels is independent of the noise type. However, when noise is speech-like and intelligibility decreases, listening effort increases, probably due to extra demands on cognitive resources added by the informational masking created by the speech fragments and vocal sounds in the background noise.
Ren, Cuncun; Liu, Sha; Liu, Haihong; Kong, Ying; Liu, Xin; Li, Shujing
The purposes of the present study were (1) to examine the lexical and age effects on word recognition of normal-hearing (NH) children in noise, and (2) to compare the word-recognition performance in noise to that in quiet listening conditions. Participants were 213 NH children (aged between 3 and 6 years). Eighty-nine and 124 of the participants were tested in noise and quiet listening conditions, respectively. The Standard-Chinese Lexical Neighborhood Test, which contains lists of words in four lexical categories (i.e., disyllabic easy (DE), disyllabic hard (DH), monosyllabic easy (ME), and monosyllabic hard (MH)), was used to evaluate Mandarin Chinese word recognition in speech spectrum-shaped noise (SSN) with a signal-to-noise ratio (SNR) of 0 dB. A two-way repeated-measures analysis of variance was conducted to examine the lexical effects, with syllable length and difficulty level as the main factors, on word recognition in the quiet and noise listening conditions. The effects of age on word-recognition performance were examined using a regression model. The word-recognition performance in noise was significantly poorer than that in quiet, and the individual variations in performance in noise were much greater than those in quiet. Word recognition scores showed that the lexical effects were significant in the SSN. Children scored higher with disyllabic words than with monosyllabic words; "easy" words scored higher than "hard" words in the noise condition. The scores of the NH children in the SSN (SNR = 0 dB) for the DE, DH, ME, and MH words were 85.4, 65.9, 71.7, and 46.2% correct, respectively. The word-recognition performance also increased with age in each lexical category for the NH children tested in noise. Both age and lexical characteristics of words had significant influences on the performance of Mandarin-Chinese word recognition in noise. The lexical effects were more obvious under noise listening conditions than in quiet. The word
Auinger, Alice Barbara; Riss, Dominik; Liepins, Rudolfs; Rader, Tobias; Keck, Tilman; Keintzel, Thomas; Kaider, Alexandra; Baumgartner, Wolf-Dieter; Gstoettner, Wolfgang; Arnoldner, Christoph
It has been shown that patients with electric acoustic stimulation (EAS) perform better in noisy environments than patients with a cochlear implant (CI). One reason for this could be the preserved access to acoustic low-frequency cues including the fundamental frequency (F0). Therefore, our primary aim was to investigate whether users of EAS experience a release from masking with increasing F0 difference between target talker and masking talker. The study comprised 29 patients and consisted of three groups of subjects: EAS users, CI users and normal-hearing listeners (NH). All CI and EAS users were implanted with a MED-EL cochlear implant and had at least 12 months of experience with the implant. Speech perception was assessed with the Oldenburg sentence test (OlSa) using one sentence from the test corpus as speech masker. The F0 in this masking sentence was shifted upwards by 4, 8, or 12 semitones. For each of these masker conditions the speech reception threshold (SRT) was assessed by adaptively varying the masker level while presenting the target sentences at a fixed level. A statistically significant improvement in speech perception was found for increasing difference in F0 between target sentence and masker sentence in EAS users (p = 0.038) and in NH listeners (p = 0.003). In CI users (classic CI or EAS users with electrical stimulation only) speech perception was independent from differences in F0 between target and masker. A release from masking with increasing difference in F0 between target and masking speech was only observed in listeners and configurations in which the low-frequency region was presented acoustically. Thus, the speech information contained in the low frequencies seems to be crucial for allowing listeners to separate multiple sources. By combining acoustic and electric information, EAS users even manage tasks as complicated as segregating the audio streams from multiple talkers. Preserving the natural code, like fine-structure cues in
Johnson, Earl; Ricketts, Todd; Hornsby, Benjamin
This study examined the effects of extending high-frequency bandwidth, for both a speech signal and a background noise, on the acceptable signal-to-noise ratio (SNR) of listeners with mild sensorineural hearing loss through utilization of the Acceptable Noise Level (ANL) procedure. In addition to extending high-frequency bandwidth, the effects of reverberation time and background noise type and shape were also examined. The study results showed a significant increase in the mean ANL (i.e. participants requested a better SNR for an acceptable listening situation) when high-frequency bandwidth was extended from 3 to 9 kHz and from 6 to 9 kHz. No change in the ANL of study participants was observed as a result of isolated modification to reverberation time or background noise stimulus. An interaction effect, however, of reverberation time and background noise stimulus was demonstrated. These findings may have implications for future design of hearing aid memory programs for listening to speech in the presence of broadband background noise.
Background and Aim: Reading skill is one of the most important necessities of students' learning in everyday life. This skill refers to the ability to comprehend, interpret, and draw conclusions from texts and to receive the meaning of the message conveyed. Educational development in any student has a direct relation with the ability of comprehension. This study was designed to investigate the effects of hearing loss on reading comprehension in hearing-impaired students compared to normal-hearing ones. Methods: Seventeen hearing-impaired students in the 4th year of primary exceptional schools in Karaj, Robatkarim and Shahriyar, Iran, were enrolled in this cross-sectional study. Seventeen normal-hearing students were randomly selected from ordinary schools next to the exceptional ones as a control group. They were compared for different levels of reading comprehension using the international standard booklet (PIRLS 2001). Results: There was a significant difference in performance between hearing-impaired and normal-hearing students in different levels of reading comprehension (p<0.05). Conclusion: Hearing loss has negative effects on different levels of reading comprehension, so in exceptional centers, reconsideration of educational planning in order to direct education from memorizing toward comprehension and deeper layers of learning seems necessary.
Sun Da; Xu Wei; Zhang Hongwei; Liu Hongbiao; Liu Qichang
Purpose: To detect the cerebral functional location when normal subjects listened to a story in unfamiliar Japanese. Methods: 7 normal young students of the medical college of Zhejiang University, 22-24 years old, 4 male and 3 female. First, they underwent 99mTc-ECD brain imaging at rest using a dual-head gamma camera with fan-beam collimators. After 2-4 days they were asked to listen carefully to a story in unfamiliar Japanese on a tape for 20 minutes. 99mTc-ECD was administered in the first 3 minutes while they listened to the story. Brain imaging was performed 30-60 minutes after the tracer was administered. Results: Compared with the resting state, while listening to the story in unfamiliar Japanese the right superior temporal lobe was activated in 5 cases, the left superior temporal in 2 cases, the right inferior temporal in 2 cases, and the left inferior temporal in 1 case. Among them, both temporal lobes were activated in 2 cases, only the right temporal in 4 cases, and only the left temporal in 1 case. Although they were not asked to remember the plot of the story, the frontal lobes were lightly activated in all subjects: both inferior frontal and/or medial frontal lobes (3 cases), right inferior frontal and/or medial frontal lobes (2 cases), left inferior frontal (5 cases), right inferior frontal (1 case), and right superior frontal (3 cases). The occipital lobes were activated in 6 subjects: both occipital lobes in 5 cases and the left occipital lobe in 1 case. Other activated regions included the parietal lobes (right in 2 cases and left in 1 case) and the left occipital lobe (in 1 case). Conclusion: While listening to the story in unfamiliar Japanese, the auditory association cortex in the superior temporal and some right midtemporal regions (more on the right than the left) was activated. The frontal lobes were also widely activated, mainly the left inferior frontal lobe (Broca's area), the frontal eye fields, and the superolateral prefrontal cortex. It is consistent with the
Rudner, Mary; Mishra, Sushmit; Stenfelt, Stefan; Lunner, Thomas; Rönnberg, Jerker
Seeing the talker's face improves speech understanding in noise, possibly releasing resources for cognitive processing. We investigated whether it improves free recall of spoken two-digit numbers. Twenty younger adults with normal hearing and 24 older adults with hearing loss listened to and subsequently recalled lists of 13 two-digit numbers, with alternating male and female talkers. Lists were presented in quiet as well as in stationary and speech-like noise at a signal-to-noise ratio giving approximately 90% intelligibility. Amplification compensated for loss of audibility. Seeing the talker's face improved free recall performance for the younger but not the older group. Poorer performance in background noise was contingent on individual differences in working memory capacity. The effect of seeing the talker's face did not differ in quiet and noise. We have argued that the absence of an effect of seeing the talker's face for older adults with hearing loss may be due to modulation of audiovisual integration mechanisms caused by an interaction between task demands and participant characteristics. In particular, we suggest that executive task demands and interindividual executive skills may play a key role in determining the benefit of seeing the talker's face during a speech-based cognitive task.
Deaf or hard-of-hearing individuals usually face a greater challenge in learning to write than their normal-hearing counterparts, because sign language is the primary communicative skill for many deaf people. The current body of research only covers the detailed linguistic features of deaf or hard-of-hearing students. Due to the limitations of traditional research methods focusing on microscopic linguistic features, a holistic characterization of the writing of these language users is lacking. This study attempts to fill this gap by adopting the methodology of linguistic complex networks. Two syntactic dependency networks were constructed in order to compare the macroscopic linguistic features of deaf or hard-of-hearing students with those of their normal-hearing peers. One is transformed from a treebank of writing produced by Chinese deaf or hard-of-hearing students, and the other from a treebank of writing produced by their Chinese normal-hearing counterparts. Two major findings are obtained through comparison of the statistical features of the two networks. On the one hand, both linguistic networks display small-world and scale-free structures, but the normal-hearing students' network exhibits a more power-law-like degree distribution, and relevant network measures show significant differences between the two linguistic networks. On the other hand, deaf or hard-of-hearing students tend to have a lower language proficiency level in both syntactic and lexical aspects. The rigid use of function words and the lower vocabulary richness of the deaf or hard-of-hearing students may partially account for the observed differences.
Brännström, K Jonas; Zunic, Edita; Borovac, Aida; Ibertsson, Tina
The acceptable noise level (ANL) test is a method for quantifying the amount of background noise that subjects accept when listening to speech. Large variations in ANL have been seen between normal-hearing subjects and between studies of normal-hearing subjects, but few explanatory variables have been identified. To explore a possible relationship between a Swedish version of the ANL test, working memory capacity (WMC), and auditory evoked potentials (AEPs). ANL, WMC, and AEP were tested in a counterbalanced order across subjects. Twenty-one normal-hearing subjects participated in the study (14 females and 7 males; aged 20-39 yr with an average of 25.7 yr). Reported data consists of age, pure-tone average (PTA), most comfortable level (MCL), background noise level (BNL), ANL (i.e., MCL - BNL), AEP latencies, AEP amplitudes, and WMC. Spearman's rank correlation coefficient was calculated between the collected variables to investigate associations. A principal component analysis (PCA) with Varimax rotation was conducted on the collected variables to explore underlying factors and estimate interactions between the tested variables. Subjects were also pooled into two groups depending on their results on the WMC test, one group with a score lower than the average and one with a score higher than the average. Comparisons between these two groups were made using the Mann-Whitney U-test with Bonferroni correction for multiple comparisons. A negative association was found between ANL and WMC but not between AEP and ANL or WMC. Furthermore, ANL is derived from MCL and BNL, and a significant positive association was found between BNL and WMC. However, no significant associations were seen between AEP latencies and amplitudes and the demographic variables, MCL, and BNL. The PCA identified two underlying factors: One that contained MCL, BNL, ANL, and WMC and another that contained latency for wave Na and amplitudes for waves V and Na-Pa. Using the variables in the first factor
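The abstract above defines ANL as the difference between the most comfortable listening level and the highest acceptable background noise level (ANL = MCL - BNL). A minimal sketch of that arithmetic (function name and dB values are hypothetical, for illustration only):

```python
def acceptable_noise_level(mcl_db: float, bnl_db: float) -> float:
    """ANL = MCL - BNL, both in dB; a larger ANL means less noise is accepted."""
    return mcl_db - bnl_db

# A listener most comfortable at 63 dB who accepts background noise up to 55 dB
print(acceptable_noise_level(63.0, 55.0))  # 8.0
```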
Nielsen, Jens Bo; Dau, Torsten
Objective: A Danish version of the hearing in noise test (HINT) has been developed and evaluated in normal-hearing (NH) and hearing-impaired (HI) listeners. The speech material originated from Nielsen & Dau (2009), where a sentence-based intelligibility equalization method was presented. Design...
Bess, Fred H.; Hornsby, Benjamin W.Y.
Anecdotal reports of fatigue after sustained speech-processing demands are common among adults with hearing loss; however, systematic research examining hearing loss-related fatigue is limited, particularly with regard to fatigue among children with hearing loss (CHL). Many audiologists, educators, and parents have long suspected that CHL…
Tanaka, Chiemi; Nguyen-Huynh, Anh; Loera, Katherine; Stark, Gemaine; Reiss, Lina
The Hybrid cochlear implant (CI), also known as Electro-Acoustic Stimulation (EAS), is a new type of CI that preserves residual acoustic hearing and enables combined cochlear implant and hearing aid use in the same ear. However, 30-55% of patients experience acoustic hearing loss within days to months after activation, suggesting that both surgical trauma and electrical stimulation may cause hearing loss. The goals of this study were to: 1) determine the contributions of both implantation surgery and EAS to hearing loss in a normal-hearing guinea pig model; 2) determine which cochlear structural changes are associated with hearing loss after surgery and EAS. Two groups of animals were implanted (n = 6 per group), with one group receiving chronic acoustic and electric stimulation for 10 weeks, and the other group receiving no direct acoustic or electric stimulation during this time frame. A third group (n = 6) was not implanted, but received chronic acoustic stimulation. Auditory brainstem response thresholds were followed over time at 1, 2, 6, and 16 kHz. At the end of the study, the following cochlear measures were quantified: hair cells, spiral ganglion neuron density, fibrous tissue density, and stria vascularis blood vessel density; the presence or absence of ossification around the electrode entry was also noted. After surgery, implanted animals experienced a range of 0-55 dB of threshold shifts in the vicinity of the electrode at 6 and 16 kHz. The degree of hearing loss was significantly correlated with reduced stria vascularis vessel density and with the presence of ossification, but not with hair cell counts, spiral ganglion neuron density, or fibrosis area. After 10 weeks of stimulation, 67% of implanted, stimulated animals had more than 10 dB of additional threshold shift at 1 kHz, compared to 17% of implanted, non-stimulated animals and 0% of non-implanted animals. This 1-kHz hearing loss was not associated with changes in any of the cochlear measures
Kidd, Gerald, Jr.
Purpose: Listeners with hearing loss, as well as many listeners with clinically normal hearing, often experience great difficulty segregating talkers in a multiple-talker sound field and selectively attending to the desired "target" talker while ignoring the speech from unwanted "masker" talkers and other sources of sound. This…
Collett, E; Marx, M; Gaillard, P; Roby, B; Fraysse, B; Deguine, O; Barone, P
Auditory categorization involves grouping of acoustic events along one or more shared perceptual dimensions which can relate to both semantic and physical attributes. This process involves both high-level cognitive processes (categorization) and low-level perceptual encoding of the acoustic signal, both of which are affected by the use of a cochlear implant (CI) device. The goal of this study was twofold: (I) to compare the categorization strategies of CI users and normal-hearing listeners (NHL); (II) to investigate whether any characteristics of the raw acoustic signal could explain the results. 16 experienced CI users and 20 NHL were tested using a Free-Sorting Task of 16 common sounds divided into 3 predefined categories of environmental, musical and vocal sounds. Multiple Correspondence Analysis (MCA) and Hierarchical Clustering based on Principal Components (HCPC) show that CI users followed a similar categorization strategy to that of NHL and were able to discriminate between the three different types of sounds. However, results for CI users were more varied and showed less inter-participant agreement. Acoustic analysis also highlighted the average pitch salience and average autocorrelation peak as being important for the perception and categorization of the sounds. The results therefore show that on a broad level of categorization CI users may not have as many difficulties as previously thought in discriminating certain kinds of sound; however, the perception of individual sounds remains challenging. Copyright © 2016 Elsevier B.V. All rights reserved.
The aim of the following study was to examine the relationship between working memory capacity (WMC), executive functions (EFs), and perceived effort (PE) after completing a work-related task in quiet and in noise in employees with aided hearing impairment (HI) and normal hearing. The study sample consisted of 20 hearing-impaired and 20 normally hearing participants. Measures of hearing ability, WMC, and EFs were tested prior to performing a work-related task in quiet and in simulated traffic noise. PE of the work-related task was also measured. Analysis of variance was used to analyze within- and between-group differences in cognitive skills, performance on the work-related task, and PE. The presence of noise yielded a significantly higher PE for both groups. However, no significant group differences were observed in WMC, EFs, PE, or performance on the work-related task. Interestingly, significant negative correlations were found only between PE in the noise condition and the ability to update information, for both groups. In summary, noise generates a significantly higher PE and brings explicit processing capacity into play, irrespective of hearing. This suggests that increased PE involves other factors, such as the type of task to be performed, performance in the cognitive skills required to solve the task at hand, and whether noise is present. We therefore suggest that special consideration in hearing care should be given to the individual's prerequisites on these factors in the labor market.
Sun Da; Zhan Hongwei; Xu Wei; Liu Hongbiao; Bao Chengkan
Purpose: To detect the cerebral functional location when normal subjects listened to a story in Chinese. Methods: 9 normal young students of the medical college of Zhejiang University, 23-24 years old, 5 male and 4 female. First, they underwent 99mTc-ECD brain imaging at rest using a dual-head gamma camera with fan-beam collimators. After 2-4 days they were asked to listen to a story in Chinese on a tape for 20 minutes. The story related an emotional account of a young president of a radio station, his girlfriend, and a young girl who was his audience and fan. They were asked to pay special attention to the names of the characters in the story and to when and where the story took place, and to imagine its scenes. 99mTc-ECD was administered in the first 3 minutes while they listened to the story. Brain imaging was performed 30-60 minutes after the tracer was administered. Results: Compared with the resting state, while listening to the story in Chinese with instructions to remember its scenes, the right superior temporal lobe was activated in 5 cases, the left superior temporal in 3 cases, and the right midtemporal in 2 cases. Among them, both temporal lobes were activated in 1 case, only the right temporal in 6 cases, and only the left temporal in 2 cases. It is very interesting that the inferior frontal and/or medial frontal lobes were lightly activated in all 9 subjects: both frontal lobes in 5 subjects, only the right frontal in 3 cases, and the left frontal in 1 case. The occipital lobes were activated in 6 subjects: both occipital lobes in 5 cases and the left occipital lobe in 1 case. Other activated regions included the cingulate gyrus (in 1 case) and the left thalamus (in 1 case). Conclusion: While listening to the story in Chinese with instructions to remember the plot, the auditory association cortex in the superior temporal region (more on the right than the left) and some right midtemporal regions were activated. The inferior frontal and
I. Vogel (Ineke)
Noise-induced hearing loss (NIHL) is a significant social and public-health problem. Long-term exposure to high-volume levels will cause permanent hearing loss after 5-10 years. With the massive spread in the popularity of portable MP3 players, exposure to high sound levels has increased
Aspenlieder, Erin; Kloet, Marie Vander
What we hear at universities and in public conversations is that there is a crisis in graduate student education and employment. We are interested here in the (re)circulation of the discourses of crisis and responsibility. What do graduate students hear about their education, their career prospects, and their responsibilities? How does work in…
Mukari, Siti Zamratol-Mai Sarah; Umat, Cila; Razak, Ummu Athiyah Abdul
The aim of the present study was to compare the benefit of monaural versus binaural ear-level frequency modulated (FM) fitting on speech perception in noise in children with normal hearing. Reception threshold for sentences (RTS) was measured in no-FM, monaural FM, and binaural FM conditions in 22 normally developing children with bilateral normal hearing, aged 8 to 9 years old. Data were gathered using the Pediatric Malay Hearing in Noise Test (P-MyHINT) with speech presented from the front and multi-talker babble presented from 90°, 180°, and 270° azimuths in a sound-treated booth. The results revealed that the use of either monaural or binaural ear-level FM receivers provided significantly better mean RTSs than the no-FM condition; however, binaural FM did not produce a significantly greater benefit in mean RTS than monaural fitting. The benefit of binaural over monaural FM varied across individuals; while binaural fitting provided better RTSs in about 50% of study subjects, there were those in whom binaural fitting resulted in either deterioration or no additional improvement compared to monaural FM fitting. The present study suggests that the use of monaural ear-level FM receivers in children with normal hearing might provide similar benefit to binaural use. Individual subjects' variations in binaural FM benefit over monaural FM suggest that the decision to employ monaural or binaural fitting should be individualized. It should be noted, however, that the current study recruited typically developing normal-hearing children. Future studies involving normal-hearing children at high risk of difficulty listening in noise are indicated to see if similar findings are obtained.
Bouserhal, Rachel E.; Bockstael, Annelies; MacDonald, Ewen; Falk, Tiago H.; Voix, Jérémie
Purpose: Studying the variations in speech levels with changing background noise level and talker-to-listener distance for talkers wearing hearing protection devices (HPDs) can aid in understanding communication in background noise. Method: Speech was recorded using an intra-aural HPD from 12 different talkers at 5 different distances in 3…
Nguyen, Huong Thi Thien
The two objectives of this single-subject study were to assess how an FM system use impacts parent-child interaction in a noisy listening environment, and how a parent/caregiver training affect the interaction between parent/caregiver and child. Two 5-year-old children with hearing loss and their parent/caregiver participated. Experiment 1 was…
Sun Da; Zhan Hongwei; Xu Wei; Liu Hongbiao; Bao Chengkan
Objectives: To compare the cerebral functional location in normal subjects while listening to a story in Chinese (native language), English (learned language), or Japanese (unfamiliar language). Methods: 9, 14, and 7 normal young students were asked to listen to an emotional story in Chinese, an account of the life of Einstein in English, and a dialogue in unfamiliar Japanese on a tape for 20 minutes, respectively. They were also asked to pay special attention to the names of the characters and to the time and place of events while listening to the Chinese or English story. 99mTc-ECD was administered in the first 3 minutes while they listened to the story. Brain imaging was performed 30-60 minutes after the tracer was administered. The results were compared with their respective brain imaging at rest. Results: While listening to the story in Chinese, learned English, and unfamiliar Japanese, the auditory association cortex in both superior temporal lobes and some midtemporal regions was activated. The inferior frontal and/or medial frontal lobes were also activated, especially while listening to a familiar language (Chinese or English) with instructions to remember the plot of the story. Compared with listening to English, activity in the right frontal lobe was higher than in the left while listening to Chinese. While listening to unfamiliar Japanese, the frontal lobes were also widely activated. Conclusions: The results of our study show that, besides the auditory association cortex in the superior temporal and midtemporal regions, language also activates the left inferior frontal lobe (Broca's area); the right and left frontal eye fields, midtemporal, and superior frontal lobes were activated by language too. These frontal regions have a crucial role in the decoding of familiar spoken language, and the attempt to decode unfamiliar spoken languages activates more auditory association areas. The left hemisphere is the dominant hemisphere for language. But in our study, right temporal and frontal lobes were activated more
Daniel, David B.; Woody, William Douglas
This study examined the retention of students who listened to podcasts of a primary source to the retention of students who read the source as text. We also assessed students' preferences and study habits. Quiz scores revealed that the podcast group performed more poorly than did students who read the text. Although students initially preferred…
Laverty, Megan J.
In this essay Megan J. Laverty argues that Jean-Jacques Rousseau's conception of humane communication and his proposal for teaching it have implications for our understanding of the role of listening in education. She develops this argument through a close reading of Rousseau's most substantial work on education, "Emile: Or, On Education". Laverty…
Rapport, Frances L; Boisvert, Isabelle; McMahon, Catherine M; Hutchings, Hayley A
Introduction In the UK, it is estimated that a disabling hearing loss (HL) affects 1 in 6 people. HL has functional, economic and social-emotional consequences for affected individuals. Intervention for HL focuses on improving access to the auditory signal using hearing aids or cochlear implants. However, even if sounds are audible and speech is understood, individuals with HL often report increased effort when listening. Listening effort (LE) may be measured using self-reported measures such as patient-reported outcome measures (PROMs). PROMs are validated questionnaires completed by patients to measure their perceptions of their own functional status and well-being. When selecting a PROM for use in research or clinical practice, it is necessary to appraise the evidence of a PROM’s acceptability to patients, validity, responsiveness and reliability. Methods and analysis A systematic review of studies evaluating the measurement properties of PROMs available to measure LE in HL will be undertaken. MEDLINE, EMBASE, CINAHL, PsychINFO and Web of Science will be searched electronically. Reference lists of included studies, key journals and the grey literature will be hand-searched to identify further studies for inclusion. Two reviewers will independently complete title, abstract and full-text screening to determine study eligibility. Data on the characteristics of each study and each PROM will be extracted. Methodological quality of the included studies will be appraised using the COnsensus-based Standards for the selection of health Measurement INstruments, the quality of included PROMs appraised and the credibility of the evidence assessed. A narrative synthesis will summarise extracted data. Ethics and dissemination Ethical permission is not required, as this study uses data from published research. Dissemination will be through publication in peer-reviewed journals, conference presentations and the lead author’s doctoral dissertation. Findings may inform the
Tai, Yihsin; Husain, Fatima T
Despite having normal hearing sensitivity, patients with chronic tinnitus may experience more difficulty recognizing speech in adverse listening conditions as compared to controls. However, the association between the characteristics of tinnitus (severity and loudness) and speech recognition remains unclear. In this study, the Quick Speech-in-Noise test (QuickSIN) was conducted monaurally on 14 patients with bilateral tinnitus and 14 age- and hearing-matched adults to determine the relation between tinnitus characteristics and speech understanding. Further, Tinnitus Handicap Inventory (THI), tinnitus loudness magnitude estimation, and loudness matching were obtained to better characterize the perceptual and psychological aspects of tinnitus. The patients reported low THI scores, with most participants in the slight handicap category. Significant between-group differences in speech-in-noise performance were only found at the 5-dB signal-to-noise ratio (SNR) condition. The tinnitus group performed significantly worse in the left ear than in the right ear, even though bilateral tinnitus percept and symmetrical thresholds were reported in all patients. This between-ear difference is likely influenced by a right-ear advantage for speech sounds, as factors related to testing order and fatigue were ruled out. Additionally, significant correlations found between SNR loss in the left ear and tinnitus loudness matching suggest that perceptual factors related to tinnitus had an effect on speech-in-noise performance, pointing to a possible interaction between peripheral and cognitive factors in chronic tinnitus. Further studies, that take into account both hearing and cognitive abilities of patients, are needed to better parse out the effect of tinnitus in the absence of hearing impairment.
Mondelli, Maria Fernanda Capoani Garcia; dos Santos, Marina de Marchi; José, Maria Renata
ABSTRACT INTRODUCTION: Unilateral hearing loss is characterized by a decrease of hearing in one ear only. In the presence of ambient noise, individuals with unilateral hearing loss face greater difficulties understanding speech than normal listeners. OBJECTIVE: To evaluate the speech perception of individuals with unilateral hearing loss, with and without competing noise, before and after the hearing aid fitting process. METHODS: The study included 30 adu...
Paul, Brandon T; Waheed, Sajal; Bruce, Ian C; Roberts, Larry E
Noise exposure and aging can damage cochlear synapses required for suprathreshold listening, even when cochlear structures needed for hearing at threshold remain unaffected. To control for effects of aging, behavioral amplitude modulation (AM) detection and subcortical envelope-following responses (EFRs) to AM tones were studied in 25 age-restricted (18-19 years) participants with normal thresholds but different self-reported noise exposure histories. Participants with more noise exposure had smaller EFRs and tended to have poorer AM detection than less-exposed individuals. Simulations of the EFR using a well-established cochlear model were consistent with more synaptopathy in participants reporting greater noise exposure.
Searchfield, Grant D; Linford, Tania; Kobayashi, Kei; Crowhen, David; Latzel, Matthias
To compare preference for and performance of manually selected programmes with an automatic sound classifier, the Phonak AutoSense OS. A single-blind repeated-measures study. Participants were fitted with Phonak Virto V90 ITE aids; preferences for different listening programmes were compared across four sound scenarios (speech in: quiet, noise, loud noise and a car). Following a 4-week trial, preferences were reassessed and the user's preferred programme was compared to the automatic classifier for sound quality and hearing in noise (HINT test) using a 12-loudspeaker array. Twenty-five participants with symmetrical moderate-severe sensorineural hearing loss. Participants' preferences for manual programmes varied considerably between and within sessions. A HINT Speech Reception Threshold (SRT) advantage was observed for the automatic classifier over participants' manual selections for speech in quiet, loud noise and car noise. Sound quality ratings were similar for both manual and automatic selections. The use of a sound classifier is a viable alternative to manual programme selection.
Background. Currently, significant changes have occurred in the character of sound exposure, along with the properties of the group affected by it. Thus, primary care physicians have to keep in mind that a sizable group of young adults comprises groups in which the prevalence of hearing loss is increasing. Objectives. The goal of the following study was to determine the auditory ability of the students attending the Medical University in Bialystok and to analyze their risky and protective behaviors relating to music consumption. Material and methods. In total, 230 students (age: 18–26 years) completed a questionnaire about general personal information and their music-listening habits. Thereafter, pure-tone audiometry at standard frequencies (0.25 kHz–8 kHz) was performed. Results. Hearing loss was more frequent in subjects who listened to music at higher volumes (‘very loud’ – 22.2%, ‘loud’ – 3.9%, ‘not very loud’ – 2.1%, ‘quiet’ – 9.1%; p = 0.046). Hearing loss was more prevalent among those students who were living in a city with more than 50,000 inhabitants before starting higher education compared to the remaining subjects (7.95% vs. 0.97%, p = 0.025). Conclusions. The study demonstrated that surprisingly few medical students suffer from hearing loss or a noise-induced threshold shift. There is no correlation between risky behavior, such as a lengthy daily duration of listening to music or the type of headphone used, and hearing loss. Hearing screening tests connected with education are indicated in this group of young adults due to the cumulative character of hearing damage.
Whereas the language development of children with sensorineural hearing impairment (SNHI) has repeatedly been shown to differ from that of peers with normal hearing (NH), few studies have used an experimental approach to investigate the consequences for everyday communicative interaction. This mini review gives an overview of a range of studies on children with SNHI and NH exploring intra- and inter-individual cognitive and linguistic systems during communication. Over the last decade, our research group has studied the conversational strategies of Swedish-speaking children and adolescents with SNHI and NH using referential communication, an experimental analogue to problem-solving in the classroom. We have established verbal and nonverbal control and validation mechanisms, related to working memory capacity (WMC) and phonological short-term memory (PSTM). We present main findings and future directions relevant for the field of cognitive hearing science and for the clinical and school-based management of children and adolescents with SNHI.
Kim, Se-Hyung; Cho, Yang-Sun; Kim, Hye Jeong; Kim, Hyung-Jin
Despite recent technological advances in diagnostic methods, including imaging technology, it is often difficult to establish a preoperative diagnosis of conductive hearing loss (CHL) in patients with an intact tympanic membrane (TM). Especially in patients with a normal temporal bone computed tomography (TBCT), preoperative diagnosis is more difficult. We investigated middle ear disorders encountered in patients with CHL involving an intact TM and normal TBCT. We also analyzed the surgical results with special reference to the pathology. We reviewed the medical records of 365 patients with intact TM who underwent exploratory tympanotomy for CHL. Fifty-nine patients (67 ears, eight bilateral surgeries) had normal preoperative TBCT findings reported by neuro-radiologists. Demographic data, otologic history, TM findings, preoperative imaging findings, intraoperative findings, and pre- and postoperative audiologic data were obtained and analyzed. Exploration was performed most frequently in the second and fifth decades. The most common postoperative diagnosis was stapedial fixation with non-progressive hearing loss. The most commonly performed hearing-restoring procedure was stapedotomy with piston wire prosthesis insertion. Various types of hearing-restoring procedures during exploration resulted in effective hearing improvement, especially with better outcomes in the ossicular chain fixation group. In patients with CHL who have an intact TM and normal TBCT, an exploratory tympanotomy should be considered for exact diagnosis and hearing improvement. Information on the common operative findings from this study may help in preoperative counseling.
Gluth, Michael B; Nelson, Erik G
We sought to establish whether the decline of vestibular ganglion cell counts uniquely correlates with spiral ganglion cell counts, cochlear hair cell counts, and hearing phenotype in individuals with presbycusis. The relationship between aging in the vestibular system and aging in the cochlea is a topic of ongoing investigation. Histopathologic age-related changes in the vestibular system may mirror what is seen in the cochlea, but correlations with hearing phenotype and the impact of presbycusis are not well understood. Vestibular ganglion cells, spiral ganglion cells, and cochlear hair cells were counted in specimens from individuals with presbycusis and normal hearing. These were taken from within a large collection of processed human temporal bones. Correlations between histopathology and hearing phenotype were investigated. Vestibular ganglion cell counts were positively correlated with spiral ganglion cell counts and cochlear hair cell counts and were negatively correlated with hearing phenotype. There was no statistical evidence on linear regression to suggest that the relationship between age and cell populations differed significantly according to whether presbycusis was present or not. Superior vestibular ganglion cells were more negatively correlated with age than inferior ganglion cells. No difference in vestibular ganglion cells was noted based on sex. Vestibular ganglion cell counts progressively deteriorate with age, and this loss correlates closely with changes in the cochlea, as well as hearing phenotype. However, these correlations do not appear to be unique to individuals with presbycusis as compared with those with normal hearing.
Ruytjens, Liesbet; Albers, Frans; van Dijk, Pim; Wit, Hero; Willemsen, Antoon
In the past, researchers investigated silent lipreading in normal hearing subjects with functional neuroimaging tools and showed how the brain processes visual stimuli that are normally accompanied by an auditory counterpart. Previously, we showed activation differences between males and females in
Objectives: Phonemic awareness skills have a significant impact on children's speech and language. The purpose of this study was to investigate the phonemic awareness skills of children with cochlear implants and their normal-hearing peers in primary school. Methods: The phonemic awareness subscales of a phonological awareness test were administered to 30 children with cochlear implantation in the first to sixth grades of primary school and 30 children with normal hearing who were matched in age with the cochlear implant group. All children were between 6 and 11 years old. Children with cochlear implants had at least 1 to 2 years of implant experience and were over 5 years old when they received implantation. Children with cochlear implants were selected from special education centers in Tehran, and children with normal hearing were recruited from primary schools in Tehran. The phonemic awareness skills were assessed in both groups. Results: The results showed that the mean scores of phonemic awareness skills in cochlear implant children were significantly lower than in children with normal hearing (P<.0001). Discussion: Children with cochlear implants, despite the cochlear implantation prosthesis, had lower performance in phonemic awareness when compared with normal-hearing children. Therefore, due to the importance of phonemic awareness skills in learning literacy skills, and the deficits of these skills in children with cochlear implants, these skills should be assessed carefully in children with cochlear implants, and rehabilitative interventions should be considered.
Asad, Areej Nimer; Purdy, Suzanne C; Ballard, Elaine; Fairgray, Liz; Bowen, Caroline
In this descriptive study, phonological processes were examined in the speech of children aged 5;0-7;6 (years; months) with mild to profound hearing loss using hearing aids (HAs) and cochlear implants (CIs), in comparison to their peers. A second aim was to compare phonological processes of HA and CI users. Children with hearing loss (CWHL, N = 25) were compared to children with normal hearing (CWNH, N = 30) with similar age, gender, linguistic, and socioeconomic backgrounds. Speech samples obtained from a list of 88 words, derived from three standardized speech tests, were analyzed using the CASALA (Computer Aided Speech and Language Analysis) program to evaluate participants' phonological systems, based on lax (a process appeared at least twice in the speech of at least two children) and strict (a process appeared at least five times in the speech of at least two children) counting criteria. Developmental phonological processes were eliminated in the speech of younger and older CWNH while eleven developmental phonological processes persisted in the speech of both age groups of CWHL. CWHL showed a similar trend of age of elimination to CWNH, but at a slower rate. Children with HAs and CIs produced similar phonological processes. Final consonant deletion, weak syllable deletion, backing, and glottal replacement were present in the speech of HA users, affecting their overall speech intelligibility. Developmental and non-developmental phonological processes persist in the speech of children with mild to profound hearing loss compared to their peers with typical hearing. The findings indicate that it is important for clinicians to consider phonological assessment in pre-school CWHL and the use of evidence-based speech therapy in order to reduce non-developmental and non-age-appropriate developmental processes, thereby enhancing their speech intelligibility. Copyright © 2018 Elsevier Inc. All rights reserved.
... primarily useful in improving the hearing and speech comprehension of people who have hearing loss that results ... and you can change the program for different listening environments—from a small, quiet room to a ...
Background: Reading is known as one of the most important learning tools. Research results have consistently shown that even a mild hearing impairment can affect reading skills. Due to the reported differences in reading comprehension skills between hearing-impaired students and their normal-hearing peers, this research was conducted to compare the two groups. The other aim was to find any changes in the reading ability of the hearing-impaired group during elementary school. Methods: This is a cross-sectional (descriptive-analytic) study in which the reading comprehension ability of 91 students with severe and profound hearing impairment (33 girls and 58 boys) from the 2nd up to the 5th grade of exceptional schools was compared with 50 2nd-grade normal-hearing students in Ahvaz, Iran. The first section of the Diagnostic Reading Test (Shirazi–Nilipour, 2004) was used in this study. The mean reading scores of hearing-impaired students in each grade were compared with the control group using SPSS 13 with the Mann-Whitney test. Results: There was a significant difference between the average scores of hearing-impaired students (boys and girls) in the 2nd to 5th grades and normal-hearing students of the 2nd grade (P<0.001). Reading comprehension scores of students with hearing impairment in higher grades had improved slightly, but were still lower than those of the normal-hearing students in the 2nd grade. Conclusion: It appears that the reading comprehension skill of students with significant hearing impairment near the end of the elementary school years remains weaker than that of normal-hearing students in the second grade. Therefore, it is essential that all professionals who work in the education and rehabilitation of these students find and resolve the underlying reasons for this condition.
Breinbauer, Hayo A; Anabalón, Jose L; Gutierrez, Daniela; Cárcamo, Rodrigo; Olivares, Carla; Caro, Jorge
Our goal was to assess the impact of personal music players, earphones, and music styles on output and on subjects' preferred listening levels, and to outline recommendations for the prevention of music-induced hearing loss. Experimental study. Personal music players' output capabilities and volunteers' preferred output levels were assessed in different settings. Based on current noise-induced hearing loss exposure limits, recommendations were outlined. On three different devices and earphone types and 10 music styles, free-field equivalent sound pressure output levels were assessed by applying a microphone probe inside the auditory canal. Forty-five hearing-healthy volunteers were asked to select preferred listening levels in different background noise scenarios. Sound pressure output reached 126 dB. No difference was found between device types, whereas earbud and supra-aural earphones showed significantly lower outputs than in-ear earphones. Music style groups were identified with as much as 14.4 dB difference between them. In silence, 17.8% of volunteers spontaneously selected a listening level above 85 dB. With 90 dB background noise, 40% selected a level above 94 dB. Earphone attenuation capability was found to correlate significantly with preferred level reductions (r = 0.585, P < .001). In-ear and especially supra-aural earphones reduced preferred listening levels the most. Safe-use recommendations were outlined, with selecting the lowest comfortable volume setting remaining the main suggestion. Earphones with high background-noise attenuation may help in reducing comfortable listening levels and should be preferred. A risk table was elaborated, presenting time limits before reaching a risky exposure. Copyright © 2012 The American Laryngological, Rhinological, and Otological Society, Inc.
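The abstract above does not reproduce the study's risk table, and its exact exposure criterion is not stated. As a rough illustration only, a widely used damage-risk criterion (the NIOSH recommendation: 85 dBA for 8 h with a 3-dB exchange rate, which may differ from the limits the authors applied) gives permissible daily exposure as a simple function of level:

```python
# Illustrative sketch, NOT the study's own table: NIOSH-style criterion
# (85 dBA criterion level, 8 h reference duration, 3-dB exchange rate).
def permissible_hours(level_dba, criterion=85.0, exchange_rate=3.0):
    """Permissible daily exposure (hours) for a given A-weighted level."""
    return 8.0 / (2.0 ** ((level_dba - criterion) / exchange_rate))

for level in (85, 94, 100, 126):
    print(f"{level} dBA -> {permissible_hours(level):.4f} h")
```

Under this criterion the 126 dB maximum output reported above would exhaust the daily allowance in a few seconds, which is consistent with the authors' main suggestion of selecting the lowest comfortable volume.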
Gifford, René H; Davis, Timothy J; Sunderhaus, Linsey W; Menapace, Christine; Buck, Barbara; Crosson, Jillian; O'Neill, Lori; Beiter, Anne; Segel, Phil
The primary objective of this study was to assess the effect of electric and acoustic overlap for speech understanding in typical listening conditions using semidiffuse noise. This study used a within-subjects, repeated measures design including 11 experienced adult implant recipients (13 ears) with functional residual hearing in the implanted and nonimplanted ear. The aided acoustic bandwidth was fixed and the low-frequency cutoff for the cochlear implant (CI) was varied systematically. Assessments were completed in the R-SPACE sound-simulation system which includes a semidiffuse restaurant noise originating from eight loudspeakers placed circumferentially about the subject's head. AzBio sentences were presented at 67 dBA with signal to noise ratio varying between +10 and 0 dB determined individually to yield approximately 50 to 60% correct for the CI-alone condition with full CI bandwidth. Listening conditions for all subjects included CI alone, bimodal (CI + contralateral hearing aid), and bilateral-aided electric and acoustic stimulation (EAS; CI + bilateral hearing aid). Low-frequency cutoffs both below and above the original "clinical software recommendation" frequency were tested for all patients, in all conditions. Subjects estimated listening difficulty for all conditions using listener ratings based on a visual analog scale. Three primary findings were that (1) there was statistically significant benefit of preserved acoustic hearing in the implanted ear for most overlap conditions, (2) the default clinical software recommendation rarely yielded the highest level of speech recognition (1 of 13 ears), and (3) greater EAS overlap than that provided by the clinical recommendation yielded significant improvements in speech understanding. For standard-electrode CI recipients with preserved hearing, spectral overlap of acoustic and electric stimuli yielded significantly better speech understanding and less listening effort in a laboratory-based, restaurant
East, Martin; King, Chris
In the listening component of the IELTS examination candidates hear the input once, delivered at "normal" speed. This format for listening can be problematic for test takers who often perceive normal speed input to be too fast for effective comprehension. The study reported here investigated whether using computer software to slow down…
Background and Aim: Language is acquired in early childhood and gradually developed through new words and new structures. The sense of hearing is the most important faculty for learning this skill, and hearing disorders are barriers to natural language learning. The purpose of this study was to investigate the relationship between writing sentences and perception of written sentences in hearing-impaired and normal-hearing students. Methods: A cross-sectional study was conducted among thirty hearing-impaired students with hearing loss of 70-90 dB and thirty normal-hearing students. They were selected from 3rd-grade primary school students in Hamadan, a large city in western Iran. Language skills and non-language information were assessed by questionnaire, the Action Picture Test, and the Sentence Perception Test. Results: There was a significant relationship between writing sentences and perception of written sentences in hearing-impaired students (p<0.001, r=0.8). This significant relationship was seen in normal-hearing students as well (p<0.001, r=0.7). Conclusion: The disability of hearing-impaired students in verbal communication is related not only to articulation and voice disorders but also to their inability to explore and use language rules. They lack perception of written sentences and are not skilled in conveying their feelings and thoughts through language structures.
Hearing levels are threatened by modern life--headsets for music, rock concerts, traffic noises, etc. It is crucial we know our hearing levels so that we can draw attention to potential problems. This exercise requires that students receive a hearing screening for their benefit as well as for making the connection of hearing to listening.
A. V. Pashkov
Diagnosis of hearing level in small children with conductive hearing loss associated with congenital craniofacial abnormalities, particularly with agenesis of the external ear and external auditory meatus, is a pressing issue. Conventional methods of assessing hearing in the first years of life, i.e., registration of brainstem auditory evoked responses to acoustic stimuli in the event of air conduction, do not give an indication of the auditory analyzer's condition due to potential conductive hearing loss in these patients. This study was aimed at assessing the potential of diagnosing the auditory analyzer's function by registering brainstem auditory evoked responses (BAERs) to acoustic stimuli transmitted by means of a bone vibrator. The study involved 17 children aged 3–10 years with normal hearing. We compared parameters of registered brainstem auditory evoked responses (peak V) depending on the type of stimulus transmission (air/bone) in children with normal hearing. The data on thresholds of the BAERs registered to acoustic stimuli in the event of air and bone conduction obtained in this study are comparable; hearing thresholds in the event of acoustic stimulation by means of a bone vibrator correlate with the results of the BAERs registered to stimuli transmitted by means of air-conduction earphones (r = 0.9). The high correlation of thresholds of BAERs to stimuli transmitted by means of a bone vibrator with thresholds of BAERs registered when air-conduction earphones were used helps to assess the auditory analyzer's condition in patients with any form of conductive hearing loss.
Objectives: A large number of congenitally deaf children are born annually. If not treated, this has destructive effects on their language and speech development, educational achievements, and future occupation. This study tried to determine the level of language skills in children with cochlear implants (CI) in comparison with normal-hearing (NH) age-mates. Methods: The Test of Language Development was administered to 30 pre-lingual, severe-to-profound CI children between the ages of 5 and 8. The obtained scores were compared to a Persian database of scores of normally hearing children in the same age range. Results: Results indicated that, in spite of great advancements in different areas of language after hearing gain, CI children still lag behind their hearing age-mates in almost all aspects of language skills. Discussion: Based on the results, it is suggested that children with average or above-average cognitive skills who use CI have the potential to produce and understand language comparable to their normally hearing peers.
Maes, Leen; De Kegel, Alexandra; Van Waelvelde, Hilde; Dhooge, Ingeborg
Vertigo and imbalance are often underestimated in the pediatric population, due to limited communication abilities, atypical symptoms, and relatively quick adaptation and compensation in children. Moreover, examination and interpretation of vestibular tests are very challenging because of difficulties with cooperation and maintenance of alertness, and because of occasionally nauseating reactions. Therefore, it is of great importance for each vestibular laboratory to implement a child-friendly test protocol with age-appropriate normative data. Because of the often masked appearance of vestibular problems in young children, the vestibular organ should be routinely examined in high-risk pediatric groups, such as children with a hearing impairment. The purposes of the present study were (1) to determine age-appropriate normative data for two child-friendly vestibular laboratory techniques (rotatory and cervical vestibular evoked myogenic potential [cVEMP] tests) in a group of children without auditory or vestibular complaints, and (2) to examine vestibular function in a group of children presenting with bilateral hearing impairment. Forty-eight typically developing children (mean age 8 years 0 months; range: 4 years 1 month to 12 years 11 months) without any auditory or vestibular complaints as well as 39 children (mean age 7 years 8 months; range: 3 years 8 months to 12 years 10 months) with a bilateral sensorineural hearing loss were included in this study. All children underwent three sinusoidal rotations (0.01, 0.05, and 0.1 Hz at 50 degrees/s) and bilateral cVEMP testing. No significant age differences were found for the rotatory test, whereas a significant increase of N1 latency and a significant threshold decrease were noticeable for the cVEMP, resulting in age-appropriate normative data. Hearing-impaired children demonstrated significantly lower gain values at the 0.01 Hz rotation and a larger percentage of absent cVEMP responses compared with normal-hearing children.
Marsella, Pasquale; Scorpecci, Alessandro; Cartocci, Giulia; Giannantonio, Sara; Maglione, Anton Giulio; Venuti, Isotta; Brizi, Ambra; Babiloni, Fabio
Deaf subjects with hearing aids or cochlear implants generally find it challenging to understand speech in noisy environments, where a great deal of listening effort and cognitive load is invested. In prelingually deaf children, such difficulties may have detrimental consequences on the learning process and, later in life, on academic performance. Despite the importance of this topic, there is currently no validated test for the assessment of cognitive load during audiological tasks. Recently, alpha and theta EEG rhythm variations in the parietal and frontal areas, respectively, have been used as indicators of cognitive load in adult subjects. The aim of the present study was to investigate, by means of EEG, the cognitive load of pediatric subjects affected by asymmetric sensorineural hearing loss as they were engaged in a speech-in-noise identification task. Seven children (4F and 3M, age range = 8-16 years) affected by asymmetric sensorineural hearing loss (i.e. profound degree on one side, mild-to-severe degree on the other side) and using a hearing aid only in their better ear were included in the study. All of them underwent EEG recording during a speech-in-noise identification task; the experimental conditions were quiet, binaural noise, noise to the better hearing ear, and noise to the poorer hearing ear. The subjects' Speech Recognition Thresholds (SRT) were also measured in each test condition. The primary outcome measures were frontal EEG Power Spectral Density (PSD) in the theta band and parietal EEG PSD in the alpha band, as assessed before stimulus (word) onset. No statistically significant differences were noted among frontal theta power levels in the four test conditions. However, parietal alpha power levels were significantly higher in the "binaural noise" and in the "noise to worse hearing ear" conditions than in the "quiet" and "noise to better hearing ear" conditions, consistent with parietal alpha activity indexing cognitive load during effortful listening.
Purpose: This study examines whether cognitive function, as measured by subtests of the Woodcock–Johnson III (WCJ-III) assessment, predicts listening-effort performance during dual tasks across adults of varying ages. Materials and Methods: Participants were divided into two groups. Group 1 consisted of 14 listeners (11 females) who were 41–61 years old (mean = 53.18; standard deviation [SD] = 5.97). Group 2 consisted of 15 listeners (9 females) who were 63–81 years old (mean = 72.07; SD = 5.11). Participants were administered the WCJ-III Memory for Words, Auditory Working Memory, Visual Matching, and Decision Speed subtests. All participants were tested in each of the following three dual-task experimental conditions, which varied in complexity: (1) auditory word recognition + visual processing, (2) auditory working memory (word) + visual processing, and (3) auditory working memory (sentence) + visual processing in noise. Results: A repeated-measures analysis of variance revealed that task complexity significantly affected the performance measures of auditory accuracy, visual accuracy, and processing speed. Linear regression revealed that the cognitive subtests of the WCJ-III test significantly predicted performance across dependent-variable measures. Conclusion: Listening effort is significantly affected by task complexity, regardless of age. Performance on the WCJ-III test may predict listening effort in adults and may assist speech-language pathologists (SLPs) in understanding challenges faced by participants when subjected to noise.
McCreery, Ryan W; Stelmachowicz, Patricia G
Understanding speech in acoustically degraded environments can place significant cognitive demands on school-age children, who are still developing the cognitive and linguistic skills needed to support this process. Previous studies suggest that speech understanding, word learning, and academic performance can be negatively impacted by background noise, but the effect of limited audibility on cognitive processes in children has not been directly studied. The aim of the present study was to evaluate the impact of limited audibility on speech understanding and working memory tasks in school-age children with normal hearing. Seventeen children with normal hearing between 6 and 12 years of age participated in the present study. Repetition of nonword consonant-vowel-consonant stimuli was measured under conditions with combinations of two different signal to noise ratios (SNRs; 3 and 9 dB) and two low-pass filter settings (3.2 and 5.6 kHz). Verbal processing time was calculated based on the time from the onset of the stimulus to the onset of the child's response. Monosyllabic word repetition and recall were also measured in conditions with a full bandwidth and a 5.6 kHz low-pass cutoff. Nonword repetition scores decreased as audibility decreased. Verbal processing time increased as audibility decreased, consistent with predictions based on increased listening effort. Although monosyllabic word repetition did not vary between the full bandwidth and 5.6 kHz low-pass filter conditions, recall was significantly poorer in the condition with limited bandwidth (low pass at 5.6 kHz). Age and expressive language scores predicted performance on word recall tasks, but did not predict nonword repetition accuracy or verbal processing time. Decreased audibility was associated with reduced accuracy for nonword repetition and increased verbal processing time in children with normal hearing. Deficits in free recall were observed even under conditions where word repetition was not affected.
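Several of the entries above specify listening conditions by signal-to-noise ratio (e.g., the 3 and 9 dB SNRs here, or the 0 dB SNR in the sentence-in-noise study). As a generic reminder of the decibel convention, not a detail taken from any of these studies, SNR relates the RMS amplitudes of signal and noise:

```python
import math

# Generic illustration of the dB convention for SNR; the level values
# themselves are not drawn from any of the studies summarized above.
def snr_db(signal_rms, noise_rms):
    """SNR in dB from the RMS amplitudes of signal and noise."""
    return 20.0 * math.log10(signal_rms / noise_rms)

print(round(snr_db(2.0, 1.0), 2))  # doubling signal amplitude adds about 6 dB
print(round(snr_db(1.0, 1.0), 2))  # equal levels give the 0 dB SNR condition
```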
Hearing-impaired and normal-hearing individuals were compared in two within-participant office noise conditions (high noise: 60 LAeq; low noise: 30 LAeq). Performance, subjective fatigue, and physiological stress were tested while working in a simulated open-plan office. We also tested two between-participants restoration conditions following the work period with high noise (nature movie or continued office noise). Participants with a hearing impairment (N = 20) were matched with normal-hearing participants (N = 18) and undertook one practice session and two counterbalanced experimental sessions. In each experimental session they worked for two hours on basic memory and attention tasks. We also measured physiological stress indicators (cortisol and catecholamines) and self-reports of mood and fatigue. The hearing-impaired participants were more affected by high noise than the normal-hearing participants, as shown by impaired performance on tasks that involve recall of semantic information. The hearing-impaired participants were also more fatigued by high noise exposure than participants with normal hearing, and they tended to have higher stress hormone levels during the high-noise condition compared to the low-noise condition. Restoration with a movie increased performance and motivation for the normal-hearing participants, while rest with continued noise did not. For the hearing-impaired participants, continued noise during rest increased motivation and performance, while the movie did not. In summary, the impact of noise and restorative conditions varied with the hearing characteristics of the participants. The small sample size does, however, encourage caution when interpreting the results.
Christensen, Anders Tornvig; Ordoñez Pizarro, Rodrigo Eduardo; Hammershøi, Dorte
, a custom-built low-frequency acoustic probe was put to use in 21 normal-hearing human subjects (of 34 recruited). Distortion-product otoacoustic emission (DPOAE) was measured in the enclosed ear canal volume as the response to two simultaneously presented tones with frequencies f1 and f2. The stimulus...
Makagon, Maja M.; Funayama, E. Sumie; Owren, Michael J.
Relatively few empirical data are available concerning the role of auditory experience in nonverbal human vocal behavior, such as laughter production. This study compared the acoustic properties of laughter in 19 congenitally, bilaterally, and profoundly deaf college students and in 23 normally hearing control participants. Analyses focused on degree of voicing, mouth position, air-flow direction, temporal features, relative amplitude, fundamental frequency, and formant frequencies. Results showed that laughter produced by the deaf participants was fundamentally similar to that produced by the normally hearing individuals, which in turn was consistent with previously reported findings. Finding comparable acoustic properties in the sounds produced by deaf and hearing vocalizers confirms the presumption that laughter is importantly grounded in human biology, and that auditory experience with this vocalization is not necessary for it to emerge in species-typical form. Some differences were found between the laughter of deaf and hearing groups; the most important being that the deaf participants produced lower-amplitude and longer-duration laughs. These discrepancies are likely due to a combination of the physiological and social factors that routinely affect profoundly deaf individuals, including low overall rates of vocal fold use and pressure from the hearing world to suppress spontaneous vocalizations.
Most, Tova; Michaelis, Hilit
This study aimed to investigate the effect of hearing loss (HL) on emotion-perception ability among young children with and without HL. A total of 26 children aged 4.0-6.6 years with prelingual sensorineural HL ranging from moderate to profound and 14 children with normal hearing (NH) participated. They were asked to identify happiness, anger, sadness, and fear expressed by an actress when uttering the same neutral nonsense sentence. Their auditory, visual, and auditory-visual perceptions of the emotional content were assessed. The accuracy of emotion perception among children with HL was lower than that of the NH children in all 3 conditions: auditory, visual, and auditory-visual. Perception through the combined auditory-visual mode significantly surpassed the auditory or visual modes alone in both groups, indicating that children with HL utilized the auditory information for emotion perception. No significant differences in perception emerged according to degree of HL. In addition, children with profound HL and cochlear implants did not perform differently from children with less severe HL who used hearing aids. The relatively high accuracy of emotion perception by children with HL may be explained by their intensive rehabilitation, which emphasizes suprasegmental and paralinguistic aspects of verbal communication.
Nelson, Lauri H.; White, Karl R.; Grewe, Jennifer
The development of proficient communication skills in infants and toddlers is an important component to child development. A popular trend gaining national media attention is teaching sign language to babies with normal hearing whose parents also have normal hearing. Thirty-three websites were identified that advocate sign language for hearing…
Introduction: Tinnitus, the perception of sound in the absence of an external acoustic source, disrupts the daily life of 1 out of every 200 adults, yet its physiological basis remains largely a mystery. The generation of tinnitus is commonly linked with impaired functioning of the outer hair cells (OHC) inside the cochlea. Otoacoustic emissions are the objective test used to assess their activity. Objective: The objective of the investigation was to study the features of distortion product otoacoustic emissions (DPOAE) in a group of tinnitus patients with normal hearing, and to find out whether DPOAE findings differ between tinnitus patients with normal hearing and normal-hearing persons with no complaint of tinnitus. Materials and Methods: The participants comprised two groups. The subject group consisted of 16 ears of patients aged between 20 and 60 years with tinnitus and audiometrically normal hearing; 6 subjects had tinnitus in both ears, while 4 subjects had tinnitus in only one ear. The control group comprised 16 audiometrically normal-hearing ears of persons who were age- and gender-matched with the subject group and had no complaint of tinnitus. Both groups underwent the DPOAE test, and the findings of the two groups were compared using the unpaired t test. Result and conclusion: The amplitudes of DPOAE were significantly lower in tinnitus patients than in persons without complaint of tinnitus at frequencies of 1281-1560, 5120-6250, and 7243-8837 Hz, implying that the decrease in DPOAE amplitudes may be related to the presence of tinnitus. It can be concluded that there is an association between tinnitus and reduced OHC activity, indicating that the OHCs of the cochlea are involved in the generation of tinnitus.
Jacquin-Courtois, S; Rode, G; Pavani, F; O'Shea, J; Giard, M H; Boisson, D; Rossetti, Y
Unilateral neglect is a disabling syndrome frequently observed following right hemisphere brain damage. Symptoms range from visuo-motor impairments through to deficient visuo-spatial imagery, but impairment can also affect the auditory modality. A short period of adaptation to a rightward prismatic shift of the visual field is known to improve a wide range of hemispatial neglect symptoms, including visuo-manual tasks, mental imagery, postural imbalance, visuo-verbal measures and number bisection. The aim of the present study was to assess whether the beneficial effects of prism adaptation may generalize to auditory manifestations of neglect. Auditory extinction, whose clinical manifestations are independent of the sensory modalities engaged in visuo-manual adaptation, was examined in neglect patients before and after prism adaptation. Two separate groups of neglect patients (all of whom exhibited left auditory extinction) underwent prism adaptation: one group (n = 6) received a classical prism treatment ('Prism' group), the other group (n = 6) was submitted to the same procedure, but wore neutral glasses creating no optical shift (placebo 'Control' group). Auditory extinction was assessed by means of a dichotic listening task performed three times: prior to prism exposure (pre-test), upon prism removal (0 h post-test) and 2 h later (2 h post-test). The total number of correct responses, the lateralization index (detection asymmetry between the two ears) and the number of left-right fusion errors were analysed. Our results demonstrate that prism adaptation can improve left auditory extinction, thus revealing transfer of benefit to a sensory modality that is orthogonal to the visual, proprioceptive and motor modalities directly implicated in the visuo-motor adaptive process. The observed benefit was specific to the detection asymmetry between the two ears and did not affect the total number of responses. This indicates a specific effect of prism adaptation on
Gordon-Hickey, Susan; Moore, Robert E.; Estis, Julie M.
Purpose: To evaluate the effect of different speech conditions on background noise acceptance. A total of 23 stimulus pairings, differing in primary talker gender (female, male, conventional), number of background talkers (1, 4, 12), and gender composition of the background noise (female, male, mixed) were used to evaluate background noise…
Mahnaz Aliakbari Dehkordi
Background and Aim: Stress is associated with life satisfaction and also with the development of some physical diseases. The birth of a child with a mental or physical disability (especially a deaf or blind child) imposes an enormous load of stress on the parents, especially the mothers. This study compared stress levels of mothers of hearing-impaired children with those of mothers of normal children or children with other disabilities. Methods: Cluster random sampling was performed in the city of Karaj. A total of 120 mothers were included in four groups: those having a child with mental retardation, low vision, or hearing impairment, and those with normal children. The Family Inventory of Life Events (FILE) of McCubbin et al. was used to determine the stress level in the four groups of mothers. Results: The results of this research indicated a significant difference (p < 0.05) between the stress levels of mothers of hearing-impaired children and mothers of other disabled and normal children in the subscales of intra-family stress, finance and business strains, stress of job transitions, stress of illness and family care, and family members "in and out". There was no difference between the compared groups in the other subscales. Conclusion: Since deafness is a hidden disability, the child with hearing impairment has a set of social and educational problems causing great stress for the parents, especially the mother. In order to decrease mothers' stress, it is suggested to provide more family consultation and adequate social support, and to run educational classes for parents to practice stress-coping strategies.
Hennig, Tais Regina
Introduction: Tinnitus and hyperacusis are increasingly frequent audiological symptoms that may occur in the absence of hearing impairment, yet they are no less bothersome to the affected individuals. The medial olivocochlear system assists speech recognition in noise and may be connected to the presence of tinnitus and hyperacusis. Objective: To evaluate the speech recognition of normal-hearing individuals with and without complaints of tinnitus and hyperacusis, and to compare their results. Method: Descriptive, prospective, cross-sectional study in which 19 normal-hearing individuals with complaints of tinnitus and hyperacusis formed the study group (SG), and 23 normal-hearing individuals without audiological complaints formed the control group (CG). The individuals of both groups were given the Lists of Sentences in Portuguese test, developed by Costa (1998), to determine the Sentence Recognition Threshold in Silence (LRSS) and the signal-to-noise (S/N) ratio. The SG also answered the Tinnitus Handicap Inventory for tinnitus analysis, and loudness discomfort thresholds were measured to characterize hyperacusis. Results: The CG and SG presented average LRSS and S/N ratios of 7.34 dB HL and -6.77 dB, and of 7.20 dB HL and -4.89 dB, respectively. Conclusion: The normal-hearing individuals with and without audiological complaints of tinnitus and hyperacusis had similar performance in speech recognition in silence, but not when evaluated in the presence of competing noise, where the SG showed poorer performance, with a statistically significant difference.
Mao, Yitao; Xu, Li
The purpose of the present study was to investigate Mandarin tone recognition in background noise in children with cochlear implants (CIs), and to examine the potential factors contributing to their performance. Tone recognition was tested using a two-alternative forced-choice paradigm in various signal-to-noise ratio (SNR) conditions (i.e. quiet, +12, +6, 0, and -6 dB). Linear correlation analysis was performed to examine possible relationships between the tone-recognition performance of the CI children and the demographic factors. Sixty-six prelingually deafened children with CIs and 52 normal-hearing (NH) children as controls participated in the study. Children with CIs showed an overall poorer tone-recognition performance and were more susceptible to noise than their NH peers. Tone confusions between Mandarin tone 2 and tone 3 were most prominent in both CI and NH children except for in the poorest SNR conditions. Age at implantation was significantly correlated with tone-recognition performance of the CI children in noise. There is a marked deficit in tone recognition in prelingually deafened children with CIs, particularly in noise listening conditions. While factors that contribute to the large individual differences are still elusive, early implantation could be beneficial to tone development in pediatric CI users.
Mehta, Ritvik P; Mattson, Sara L; Kappus, Brian A; Seitzman, Robin L
The objective of the study was to assess the safety of the HyperSound® Audio System (HSS), a novel audio system using ultrasound technology, in normal hearing subjects under normal use conditions; we considered pre-exposure and post-exposure test design. We investigated primary and secondary outcome measures: i) temporary threshold shift (TTS), defined as >10 dB shift in pure tone air conduction thresholds and/or a decrement in distortion product otoacoustic emissions (DPOAEs) >10 dB at two or more frequencies; ii) presence of new-onset otologic symptoms after exposure. Twenty adult subjects with normal hearing underwent a pre-exposure assessment (pure tone air conduction audiometry, tympanometry, DPOAEs and otologic symptoms questionnaire) followed by exposure to a 2-h movie with sound delivered through the HSS emitter followed by a post-exposure assessment. No TTS or new-onset otological symptoms were identified. HSS demonstrates excellent safety in normal hearing subjects under normal use conditions.
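The safety criteria above are simple threshold comparisons and can be sketched directly; the function names and the pre/post example values below are hypothetical, chosen only to illustrate the >10 dB TTS rule and the two-or-more-frequencies DPOAE rule.

```python
def has_tts(pre_thresholds, post_thresholds):
    """Temporary threshold shift: any test frequency whose post-exposure
    pure tone threshold worsened by more than 10 dB (thresholds in dB HL)."""
    return any(post_thresholds[f] - pre_thresholds[f] > 10 for f in pre_thresholds)

def dpoae_decrement(pre_dpoae, post_dpoae):
    """DPOAE criterion: amplitude decrement >10 dB at two or more frequencies."""
    drops = sum(1 for f in pre_dpoae if pre_dpoae[f] - post_dpoae[f] > 10)
    return drops >= 2

# Hypothetical pre/post data for one subject, keyed by frequency in Hz.
pre_pta = {500: 10, 1000: 5, 2000: 10, 4000: 15}
post_pta = {500: 10, 1000: 10, 2000: 10, 4000: 15}
pre_oae = {1000: 12, 2000: 10, 4000: 8}
post_oae = {1000: 11, 2000: 9, 4000: 8}

# Flag the subject if either criterion is met.
print(has_tts(pre_pta, post_pta) or dpoae_decrement(pre_oae, post_oae))  # False
```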
Gfeller, Kate; Jiang, Dingfeng; Oleson, Jacob J; Driscoll, Virginia; Olszewski, Carol; Knutson, John F; Turner, Christopher; Gantz, Bruce
Cochlear implants (CI) are effective in transmitting salient features of speech, especially in quiet, but current CI technology is not well suited in transmission of key musical structures (e.g., melody, timbre). It is possible, however, that sung lyrics, which are commonly heard in real-world music may provide acoustical cues that support better music perception. The purpose of this study was to examine how accurately adults who use CIs (n = 87) and those with normal hearing (NH) (n = 17) are able to recognize real-world music excerpts based upon musical and linguistic (lyrics) cues. CI recipients were significantly less accurate than NH listeners on recognition of real-world music with or, in particular, without lyrics; however, CI recipients whose devices transmitted acoustic plus electric stimulation were more accurate than CI recipients reliant upon electric stimulation alone (particularly items without linguistic cues). Recognition by CI recipients improved as a function of linguistic cues. Participants were tested on melody recognition of complex melodies (pop, country, & classical styles). Results were analyzed as a function of: hearing status and history, device type (electric only or acoustic plus electric stimulation), musical style, linguistic and musical cues, speech perception scores, cognitive processing, music background, age, and in relation to self-report on listening acuity and enjoyment. Age at time of testing was negatively correlated with recognition performance. These results have practical implications regarding successful participation of CI users in music-based activities that include recognition and accurate perception of real-world songs (e.g., reminiscence, lyric analysis, & listening for enjoyment).
Best, Virginia; Marrone, Nicole; Mason, Christine R; Kidd, Gerald; Shinn-Cunningham, Barbara G
This study asked whether or not listeners with sensorineural hearing loss have an impaired ability to use top-down attention to enhance speech intelligibility in the presence of interfering talkers. Listeners were presented with a target string of spoken digits embedded in a mixture of five spatially separated speech streams. The benefit of providing simple visual cues indicating when and/or where the target would occur was measured in listeners with hearing loss, listeners with normal hearing, and a control group of listeners with normal hearing who were tested at a lower target-to-masker ratio to equate their baseline (no cue) performance with the hearing-loss group. All groups received robust benefits from the visual cues. The magnitude of the spatial-cue benefit, however, was significantly smaller in listeners with hearing loss. Results suggest that reduced utility of selective attention for resolving competition between simultaneous sounds contributes to the communication difficulties experienced by listeners with hearing loss in everyday listening situations.
Caspersz, Donella; Stasinska, Ania
Listening is not the same as hearing. While hearing is a physiological process, listening is a conscious process that requires us to be mentally attentive (Low & Sonntag, 2013). The obvious place for scholarship about listening is in communication studies. While listening in general is of interest, the focus of this study is on effective listening.…
Jepsen, Morten Løve; Dau, Torsten
To partly characterize the function of cochlear processing in humans, the basilar membrane (BM) input-output (I/O) function can be estimated. In recent studies, forward masking has been used to estimate BM compression. If an on-frequency masker is processed compressively, while an off-frequency masker is transformed more linearly, the ratio between the slopes of growth-of-masking (GOM) functions provides an estimate of BM compression at the signal frequency. In this study, this paradigm is extended to also estimate the knee-point of the I/O function between linear processing at low levels and compressive processing at medium levels. If a signal can be masked by a low-level on-frequency masker such that signal and masker fall in the linear region of the I/O function, then a steeper GOM function is expected. The knee-point can then be estimated in the input level region where the GOM changes significantly.
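As a rough illustration of the slope-ratio logic described above, the fragment below fits straight lines to hypothetical on- and off-frequency growth-of-masking data and takes the ratio of the fitted slopes as the compression estimate. All numbers are invented for illustration and are not from the study; the sign conventions and axes of real GOM analyses vary between paradigms.

```python
def slope(xs, ys):
    """Least-squares slope of y on x."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

# Hypothetical GOM data: signal threshold (dB) as a function of masker level (dB).
masker_levels = [40, 50, 60, 70]
on_freq_signal = [42.0, 44.5, 47.0, 49.5]   # shallow growth: compressive on-frequency processing
off_freq_signal = [40.0, 50.0, 60.0, 70.0]  # slope ~1: near-linear off-frequency processing

# Compression estimate = ratio of the fitted GOM slopes.
compression = slope(masker_levels, on_freq_signal) / slope(masker_levels, off_freq_signal)
print(round(compression, 2))  # 0.25
```

Extending this to the knee-point estimate would amount to fitting a two-segment (broken-stick) line and locating the level at which the fitted slope changes.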
Many people with difficulties following conversations in noisy settings have “clinically normal” audiograms, that is, tone thresholds better than 20 dB HL from 0.1 to 8 kHz. This review summarizes the possible causes of such difficulties, and examines established as well as promising new psychoacoustic and electrophysiologic approaches to differentiate between them. Deficits at the level of the auditory periphery are possible even if thresholds remain around 0 dB HL, and become probable when they reach 10 to 20 dB HL. Extending the audiogram beyond 8 kHz can identify early signs of noise-induced trauma to the vulnerable basal turn of the cochlea, and might point to “hidden” losses at lower frequencies that could compromise speech reception in noise. Listening difficulties can also be a consequence of impaired central auditory processing, resulting from lesions affecting the auditory brainstem or cortex, or from abnormal patterns of sound input during developmental sensitive periods and even in adulthood. Such auditory processing disorders should be distinguished from (cognitive) linguistic deficits, and from problems with attention or working memory that may not be specific to the auditory modality. Improved diagnosis of the causes of listening difficulties in noise should lead to better treatment outcomes, by optimizing auditory training procedures to the specific deficits of individual patients, for example.
Bouserhal, Rachel E.; Bockstael, Annelies; MacDonald, Ewen
Purpose: Studying the variations in speech levels with changing background noise level and talker-to-listener distance for talkers wearing hearing protection devices (HPDs) can aid in understanding communication in background noise. Method: Speech was recorded using an intra-aural HPD from 12 … complements the existing model presented by Pelegrín-García, Smits, Brunskog, and Jeong (2011) and expands on it by taking into account the effects of occlusion and background noise level on changes in speech sound level. Conclusions: Three models of the relationship between vocal effort, background noise…
Dai, Lengshi; Best, Virginia; Shinn-Cunningham, Barbara G.
Listeners with sensorineural hearing loss often have trouble understanding speech amid other voices. While poor spatial hearing is often implicated, direct evidence is weak; moreover, studies suggest that reduced audibility and degraded spectrotemporal coding may explain such problems. We hypothesized that poor spatial acuity leads to difficulty deploying selective attention, which normally filters out distracting sounds. In listeners with normal hearing, selective attention causes changes in the neural responses evoked by competing sounds, which can be used to quantify the effectiveness of attentional control. Here, we used behavior and electroencephalography to explore whether control of selective auditory attention is degraded in hearing-impaired (HI) listeners. Normal-hearing (NH) and HI listeners identified a simple melody presented simultaneously with two competing melodies, each simulated from different lateral angles. We quantified performance and attentional modulation of cortical responses evoked by these competing streams. Compared with NH listeners, HI listeners had poorer sensitivity to spatial cues, performed more poorly on the selective attention task, and showed less robust attentional modulation of cortical responses. Moreover, across NH and HI individuals, these measures were correlated. While both groups showed cortical suppression of distracting streams, this modulation was weaker in HI listeners, especially when attending to a target at midline, surrounded by competing streams. These findings suggest that hearing loss interferes with the ability to filter out sound sources based on location, contributing to communication difficulties in social situations. These findings also have implications for technologies aiming to use neural signals to guide hearing aid processing.
Stamate, Mirela Cristina; Todor, Nicolae; Cosgarea, Marcel
The clinical utility of otoacoustic emissions as a noninvasive objective test of cochlear function has long been studied. Both transient otoacoustic emissions and distortion products can be used to identify hearing loss, but to what extent they can be used as predictors of hearing loss is still debated. Most studies agree that multivariate analyses have better test performance than univariate analyses. The aim of the study was to determine the performance of transient otoacoustic emissions and distortion products in distinguishing normal and impaired hearing, using the pure-tone audiogram as a gold standard procedure and different multivariate statistical approaches. The study included 105 adult subjects with normal hearing and hearing loss who underwent the same test battery: pure-tone audiometry, tympanometry, and otoacoustic emission tests. We chose logistic regression as the multivariate statistical technique. Three logistic regression models were developed to characterize the relations between different risk factors (age, sex, tinnitus, demographic features, cochlear status defined by otoacoustic emissions) and hearing status defined by pure-tone audiometry. The multivariate analyses allow the calculation of a logistic score, which is a combination of the inputs weighted by coefficients calculated within the analyses. The accuracy of each model was assessed using receiver operating characteristic (ROC) curve analysis. We used the logistic score to generate ROC curves and to estimate the areas under the curves in order to compare the different multivariate analyses. We compared the performance of each otoacoustic emission test (transient, distortion product) using three different multivariate analyses for each ear, when multi-frequency gold standards were used. We demonstrated that all multivariate analyses provided high values of the area under the curve, proving the performance of the otoacoustic emissions. Each otoacoustic emission test presented high
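The ROC analysis described above reduces to a rank-based AUC computation on the logistic scores. A minimal sketch, with scores invented purely for illustration (real scores would come from the fitted logistic regression models):

```python
def auc(scores_pos, scores_neg):
    """Area under the ROC curve via the Mann-Whitney U statistic: the
    probability that a randomly chosen positive case (here, an impaired ear)
    receives a higher logistic score than a randomly chosen negative case.
    Ties count as half a win."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))

# Hypothetical logistic scores (weighted combinations of age, sex, tinnitus,
# and OAE inputs) for ears labeled impaired vs. normal by the audiogram.
impaired = [0.9, 0.8, 0.7, 0.4]
normal = [0.6, 0.3, 0.2, 0.1]
print(auc(impaired, normal))  # 0.9375
```

Comparing models, as the study does, then amounts to comparing these AUC values across the three logistic regressions and the two emission types.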
Pittman, A L; Lewis, D E; Hoover, B M; Stelmachowicz, P G
This study examined rapid word-learning in 5- to 14-year-old children with normal and impaired hearing. The effects of age and receptive vocabulary were examined as well as those of high-frequency amplification. Novel words were low-pass filtered at 4 kHz (typical of current amplification devices) and at 9 kHz. It was hypothesized that (1) the children with normal hearing would learn more words than the children with hearing loss, (2) word-learning would increase with age and receptive vocabulary for both groups, and (3) both groups would benefit from a broader frequency bandwidth. Sixty children with normal hearing and 37 children with moderate sensorineural hearing losses participated in this study. Each child viewed a 4-minute animated slideshow containing 8 nonsense words created using the 24 English consonant phonemes (3 consonants per word). Each word was repeated 3 times. Half of the 8 words were low-pass filtered at 4 kHz and half were filtered at 9 kHz. After viewing the story twice, each child was asked to identify the words from among pictures in the slide show. Before testing, a measure of current receptive vocabulary was obtained using the Peabody Picture Vocabulary Test (PPVT-III). The PPVT-III scores of the hearing-impaired children were consistently poorer than those of the normal-hearing children across the age range tested. A similar pattern of results was observed for word-learning in that the performance of the hearing-impaired children was significantly poorer than that of the normal-hearing children. Further analysis of the PPVT and word-learning scores suggested that although word-learning was reduced in the hearing-impaired children, their performance was consistent with their receptive vocabularies. Additionally, no correlation was found between overall performance and the age of identification, age of amplification, or years of amplification in the children with hearing loss. Results also revealed a small increase in performance for both
Chang, Hung-Yue; Luo, Ching-Hsing; Lo, Tun-Shin; Chen, Hsiao-Chuan; Huang, Kuo-You; Liao, Wen-Huei; Su, Mao-Chang; Liu, Shu-Yu; Wang, Nan-Mai
This study investigated whether a self-designed assistive listening device (ALD) that incorporates an adaptive dynamic range optimization (ADRO) amplification strategy can surpass a commercially available monaurally worn linear ALD, the SM100. Both subjective and objective measurements were implemented. Mandarin Hearing-In-Noise Test (MHINT) scores were the objective measurement, whereas participant satisfaction was the subjective measurement. The comparison was performed in a mixed design (i.e., subjects' hearing status being mild or moderate, quiet versus noisy, and linear versus ADRO scheme). The participants were two groups of hearing-impaired subjects: nine with mild and eight with moderate hearing loss. The ADRO system revealed a significant difference in the MHINT sentence reception threshold (SRT) in noisy environments between monaurally aided and unaided conditions, whereas the linear system did not. The benchmark results showed that the ADRO scheme is beneficial to people with mild or moderate hearing loss in noisy environments. The satisfaction rating regarding overall speech quality indicated that the participants were satisfied with the speech quality of both the ADRO and linear schemes in quiet environments, and that they were more satisfied with ADRO than with the linear scheme in noisy environments.
McGarrigle, Ronan; Dawes, Piers; Stewart, Andrew J; Kuchinsky, Stefanie E; Munro, Kevin J
Stress and fatigue from effortful listening may compromise well-being, learning, and academic achievement in school-aged children. The aim of this study was to investigate the effect of a signal-to-noise ratio (SNR) typical of those in school classrooms on listening effort (behavioral and pupillometric) and listening-related fatigue (self-report and pupillometric) in a group of school-aged children. A sample of 41 normal-hearing children aged 8-11 years performed a narrative speech-picture verification task in a condition with recommended levels of background noise ("ideal": +15 dB SNR) and a condition with typical classroom background noise levels ("typical": -2 dB SNR). Participants showed increased task-evoked pupil dilation in the typical listening condition compared with the ideal listening condition, consistent with an increase in listening effort. No differences were found between listening conditions in terms of performance accuracy and response time on the behavioral task. Similarly, no differences were found between listening conditions in self-report and pupillometric markers of listening-related fatigue. This is the first study to (a) examine listening-related fatigue in children using pupillometry and (b) demonstrate physiological evidence consistent with increased listening effort while listening to spoken narratives despite ceiling-level task performance accuracy. Understanding the physiological mechanisms that underpin listening-related effort and fatigue could inform intervention strategies and ultimately mitigate listening difficulties in children.
Background and Aim: Stress is a source of many problems in human life and constantly threatens people's well-being. Having a hearing-impaired child not only causes stress in parents but also affects their marital satisfaction. The purpose of this study was to compare stress and marital satisfaction between the parents of normal-hearing and hearing-impaired children. Methods: This was a causal-comparative study. Eighty parents of normal-hearing children and 80 parents of hearing-impaired children were chosen from rehabilitation centers and kindergartens in the city of Tabriz, Iran, by convenience and cluster sampling. All parents were asked to complete the Friedrich source-of-stress and Enrich marital satisfaction questionnaires. Results: Parents of hearing-impaired children endured more stress than parents of normal-hearing children (p<0.001). The marital satisfaction of hearing-impaired children's parents was also lower than that of the parents of normal-hearing children (p<0.001). Conclusion: Having a hearing-impaired child causes stress and threatens marital satisfaction. This calls for greater attention and distinct planning for parents of children with disabilities to reduce their stress.
Kim, Se-Hyung; Cho, Yang-Sun; Chu, Ho-Suk; Jang, Jeon-Yeob; Chung, Won-Ho; Hong, Sung Hwa
In patients with progressive conductive hearing loss and a normal tympanic membrane (TM), and with soft tissue density in the middle ear cavity (MEC) on temporal bone computed tomography (TBCT) scan, open-type congenital cholesteatoma (OCC) should be highly suspected and a proper surgical plan that includes mastoid exploration and second-stage operation is required. The clinical presentation of OCC is very similar to congenital ossicular anomaly (COA) presenting with a conductive hearing loss with intact TM. Therefore, it is challenging to make a correct preoperative diagnosis in patients with OCC. We evaluated the clinical characteristics of OCC compared with those of COA to find diagnostic clues useful in diagnosis of OCC. The medical records of 12 patients with surgically proven OCC and 14 patients with surgically proven COA were reviewed for demographic data, otologic history, preoperative TBCT findings, intraoperative findings, and pre- and postoperative audiologic data. There was no difference between OCC and COA based on demographic data, preoperative hearing, and ossicular status on TBCT. However, the presence of progressive hearing loss, soft tissue density in the MEC on TBCT scan, and the need for mastoid surgery and second-stage operation were significantly more frequent in OCC patients.
Vercammen, Charlotte; Goossens, Tine; Wouters, Jan; van Wieringen, Astrid
The main objective of this study is to investigate memory task performance in different age groups, irrespective of hearing status. Data are collected on a short-term memory task (WAIS-III Digit Span forward) and two working memory tasks (WAIS-III Digit Span backward and the Reading Span Test). The tasks are administered to young (20-30 years, n = 56), middle-aged (50-60 years, n = 47), and older participants (70-80 years, n = 16) with normal hearing thresholds. All participants have passed a cognitive screening task (Montreal Cognitive Assessment (MoCA)). Young participants perform significantly better than middle-aged participants, while middle-aged and older participants perform similarly on the three memory tasks. Our data show that older clinically normal hearing persons perform equally well on the memory tasks as middle-aged persons. However, even under optimal conditions of preserved sensory processing, changes in memory performance occur. Based on our data, these changes set in before middle age.
Garinis, Angela C.; Glattke, Theodore; Cone, Barbara K.
Purpose: The purpose of this study was to test the hypothesis that active listening to speech would increase medial olivocochlear (MOC) efferent activity for the right vs. the left ear. Method: Click-evoked otoacoustic emissions (CEOAEs) were evoked by 60-dB p.e. SPL clicks in 13 normally hearing adults in 4 test conditions for each ear: (a) in…
Percy-Smith, L.; Caye-Thomasen, P.; Gudman, M.
Objective: The purpose of this study was to make a quantitative comparison of self-esteem and social well-being between children with cochlear implants and normal-hearing children. Material and methods: Data were obtained from 164 children with cochlear implants (CI) and 2169 normal-hearing children (NH). Parental questionnaires, used in a national survey assessing the self-esteem and well-being of normal-hearing children, were applied to the cochlear-implanted group in order to allow direct comparisons. Results: The children in the CI group rated significantly higher on questions about well-being … overall self-esteem or number of friends. The two groups of children scored similarly on being confident, independent, social, not worried, and happy. Conclusion: Children with cochlear implants score equal to or better than their normal-hearing peers on matters of self-esteem and social well-being.
Taitelbaum-Swead, Riki; Icht, Michal; Mama, Yaniv
In recent years, the effect of cognitive abilities on the achievements of cochlear implant (CI) users has been evaluated. Some studies have suggested that gaps between CI users and normal-hearing (NH) peers in cognitive tasks are modality specific, and occur only in auditory tasks. The present study focused on the effect of learning modality (auditory, visual) and auditory feedback on word memory in young adults who were prelingually deafened and received CIs before the age of 5 yr, and their NH peers. A production effect (PE) paradigm was used, in which participants learned familiar study words by vocal production (saying aloud) or by no-production (silent reading or listening). Words were presented (1) in the visual modality (written) and (2) in the auditory modality (heard). CI users performed the visual condition twice: once with the implant ON and once with it OFF. All conditions were followed by free recall tests. Twelve young adults, long-term CI users, implanted between ages 1.7 and 4.5 yr, and who scored ≥50% on a monosyllabic consonant-vowel-consonant open-set test with their implants were enrolled. A group of 14 age-matched NH young adults served as the comparison group. For each condition, we calculated the proportion of study words recalled. Mixed-measures analyses of variance were carried out with group (NH, CI) as a between-subjects variable, and learning condition (aloud or silent reading) as a within-subject variable. Following this, paired sample t tests were used to evaluate the PE size (differences between aloud and silent words) and overall recall ratios (aloud and silent words combined) in each of the learning conditions. With visual word presentation, young adults with CIs (regardless of implant status CI-ON or CI-OFF) showed comparable memory performance (and a similar PE) to NH peers. However, with auditory presentation, young adults with CIs showed poorer memory for nonproduced words (hence a larger PE) relative to their NH peers. The
Wiefferink, Carin H; Rieffe, Carolien; Ketelaar, Lizet; Frijns, Johan H M
The purpose of the present study was to compare children with a cochlear implant and normal hearing children on aspects of emotion regulation (emotion expression and coping strategies) and social functioning (social competence and externalizing behaviors) and the relation between emotion regulation and social functioning. Participants were 69 children with cochlear implants (CI children) and 67 normal hearing children (NH children) aged 1.5-5 years. Parents answered questionnaires about their children's language skills, social functioning, and emotion regulation. Children also completed simple tasks to measure their emotion regulation abilities. Cochlear implant children had fewer adequate emotion-regulation strategies and were less socially competent than normal hearing children. The parents of cochlear implant children did not report fewer externalizing behaviors than those of normal hearing children. While social competence in normal hearing children was strongly related to emotion regulation, cochlear implant children regulated their emotions in ways that were unrelated to social competence. On the other hand, emotion regulation explained externalizing behaviors better in cochlear implant children than in normal hearing children. While better language skills were related to higher social competence in both groups, they were related to fewer externalizing behaviors only in cochlear implant children. Our results indicate that cochlear implant children have less adequate emotion-regulation strategies and less social competence than normal hearing children. Since they received their implants relatively recently, they might eventually catch up with their hearing peers. Longitudinal studies should further explore the development of emotion regulation and social functioning in cochlear implant children. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.
Chen, Pei-Hua; Liu, Ting-Wei
Telepractice provides an alternative form of auditory-verbal therapy (eAVT) intervention through videoconferencing; this can be of immense benefit for children with hearing loss, especially those living in rural or remote areas. The effectiveness of eAVT for the language development of Mandarin-speaking preschoolers with hearing loss was…
(CCL), Center for Creative Leadership
Active listening is a person's willingness and ability to hear and understand. At its core, active listening is a state of mind that involves paying full and careful attention to the other person, avoiding premature judgment, reflecting understanding, clarifying information, summarizing, and sharing. By learning and committing to the skills and behaviors of active listening, leaders can become more effective listeners and, over time, improve their ability to lead.
For normal-hearing listeners, speech intelligibility improves if speech and noise are spatially separated. While this spatial release from masking has already been quantified in normal-hearing listeners in many studies, it is less clear how spatial release from masking changes in cochlear implant listeners with and without access to low-frequency acoustic hearing. Spatial release from masking depends on differences in access to speech cues due to hearing status and hearing device. To investigate the influence of these factors on speech intelligibility, the present study measured speech reception thresholds in spatially separated speech and noise for 10 different listener types. A vocoder was used to simulate cochlear implant processing, and low-frequency filtering was used to simulate residual low-frequency hearing. These forms of processing were combined to simulate cochlear implant listening, listening based on low-frequency residual hearing, and combinations thereof. Simulated cochlear implant users with additional low-frequency acoustic hearing showed better speech intelligibility in noise than simulated cochlear implant users without acoustic hearing and had access to more spatial speech cues (e.g., higher binaural squelch). Cochlear implant listener types showed higher spatial release from masking with bilateral access to low-frequency acoustic hearing than without. A binaural speech intelligibility model with normal binaural processing showed overall good agreement with measured speech reception thresholds, spatial release from masking, and spatial speech cues. This indicates that differences in the speech cues available to listener types are sufficient to explain the changes in spatial release from masking across these simulated listener types.
In everyday life, the speech we listen to is often mixed with many other sound sources as well as reverberation. In such a situation, normal-hearing listeners are able to effortlessly segregate a single voice out of the background, which is commonly known as the 'cocktail party effect'. Conversely, hearing-impaired people have great difficulty understanding speech when more than one person is talking, even when reduced audibility has been fully compensated for by a hearing aid. As with the hearing impaired, the performance of automatic speech recognition systems deteriorates dramatically with additional sound sources. The reasons for these difficulties are not well understood. Only by obtaining a clearer understanding of the auditory system's coding strategies will it be possible to design intelligent compensation algorithms for hearing devices. This presentation highlights recent concepts…
Habicht, Julia; Behler, Oliver; Kollmeier, Birger
In contrast to the effects of hearing loss, the effects of hearing aid (HA) experience on speech-in-noise (SIN) processing are underexplored. Using an eye-tracking paradigm that allows determining how fast a participant can grasp the meaning of a sentence presented in noise together with two pictures … support the idea that HA experience positively influences the ability to process SIN quickly and that it reduces the recruitment of brain regions outside the core speech-comprehension network.
... listen to TV or your music player, play videogames, or use your phone. Talk to your audiologist ... your audiologist several times, but it's worth the benefit of being able to hear your friends and ...
Blake S. Wilson
Background: The cochlear implant has become the standard of care for severe or worse hearing loss and has indeed produced the first substantial restoration of a lost or absent human sense using a medical intervention. However, the devices are not perfect, and many efforts to narrow the remaining gaps between prosthetic and normal hearing are underway. Objective: To assess the present status of cochlear implants and to describe possibilities for improving them. Results: The present-day devices work well in quiet conditions for the great majority of users. However, not all users have high levels of speech reception in quiet, and nearly all users struggle with speech reception in typically noisy acoustic environments. In addition, perception of sounds more complex than speech, such as most music, is generally poor unless residual hearing at low frequencies can be stimulated acoustically in conjunction with the electrical stimuli provided by the implant. Possibilities for improving the present devices include increasing the spatial specificity of neural excitation by reducing masking effects or with new stimulus modes; prudent pruning of interfering or otherwise detrimental electrodes from the stimulation map; a further relaxation in the criteria for implant candidacy, based on recent evidence from persons with high levels of residual hearing, to allow many more people to benefit from cochlear implants; and "top down" or "brain centric" approaches to implant designs and applications. Conclusions: Progress in the development of the cochlear implant and related treatments has been remarkable, but room remains for improvements. The future looks bright, as there are multiple promising possibilities for improvements and many talented teams are pursuing them. Keywords: Auditory prosthesis, Cochlear implant, Cochlear prosthesis, Deafness, Neural prosthesis
Roman, Adrienne S; Pisoni, David B; Kronenberger, William G; Faulkner, Kathleen F
Noise-vocoded speech is a valuable research tool for testing experimental hypotheses about the effects of spectral degradation on speech recognition in adults with normal hearing (NH). However, very little research has utilized noise-vocoded speech with children with NH. Earlier studies with children with NH focused primarily on the amount of spectral information needed for speech recognition without assessing the contribution of neurocognitive processes to speech perception and spoken word recognition. In this study, we first replicated the seminal findings of an earlier study that investigated effects of lexical density and word frequency on noise-vocoded speech perception in a small group of children with NH. We then extended the research to investigate relations between noise-vocoded speech recognition abilities and five neurocognitive measures: auditory attention (AA) and response set, talker discrimination, and verbal and nonverbal short-term working memory. Thirty-one children with NH between 5 and 13 years of age were assessed on their ability to perceive lexically controlled words in isolation and in sentences that were noise-vocoded to four spectral channels. Children were also administered vocabulary assessments (Peabody Picture Vocabulary test-4th Edition and Expressive Vocabulary test-2nd Edition) and measures of AA (NEPSY AA and response set and a talker discrimination task) and short-term memory (visual digit and symbol spans). Consistent with the findings of the original study, we found that children perceived noise-vocoded lexically easy words better than lexically hard words. Words in sentences were also recognized better than the same words presented in isolation. No significant correlations were observed between noise-vocoded speech recognition scores and the Peabody Picture Vocabulary test-4th Edition using language quotients to control for age effects. However, children who scored higher on the Expressive Vocabulary test-2nd Edition
Kim, Ja-Hee; Lee, Jae Hee; Lee, Ho-Ki
The goal of the present study was to examine whether Acceptable Noise Levels (ANLs) would be lower (greater acceptance of noise) in a binaural than in a monaural listening condition, and whether the meaningfulness of background speech noise would affect ANLs for directional microphone hearing aid users. In addition, any relationships between individual binaural benefits on ANLs and the individuals' demographic information were investigated. Fourteen hearing aid users (mean age, 64 years) participated in experimental testing. For the ANL calculation, listeners' most comfortable listening levels and background noise levels were measured. Using Korean ANL material, ANLs of all participants were evaluated under monaural and binaural amplification in a counterbalanced order. The ANLs were also compared across five types of competing speech noises, consisting of 1- through 8-talker background speech maskers. Seven young normal-hearing listeners (mean age, 27 years) completed the same measurements as pilot testing. The results demonstrated that directional hearing aid users accepted more noise (lower ANLs) with binaural amplification than with monaural amplification, regardless of the type of competing speech. When the background speech noise became more meaningful, hearing-impaired listeners accepted less noise (higher ANLs), revealing that the ANL is dependent on the intelligibility of the competing speech. Individuals' binaural advantages in ANLs were significantly greater for listeners with longer experience of hearing aids, yet not related to their age or hearing thresholds. Binaural directional microphone processing allowed hearing aid users to accept a greater amount of background noise, which may in turn improve listeners' hearing aid success. Informational masking substantially influenced background noise acceptance. Given a significant association between ANLs and duration of hearing aid usage, ANL measurement can be useful for
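The ANL described above is conventionally computed as the difference between the most comfortable listening level (MCL) and the highest acceptable background noise level (BNL). A minimal sketch; the numeric levels are illustrative assumptions, not data from the study:

```python
def acceptable_noise_level(mcl_db: float, bnl_db: float) -> float:
    """ANL = MCL - BNL: most comfortable speech level minus the highest
    background noise level the listener will accept. A lower ANL means
    the listener tolerates more noise."""
    return mcl_db - bnl_db

# Illustrative example: a listener with an MCL of 60 dB HL who accepts
# noise up to 52 dB HL monaurally but up to 55 dB HL binaurally.
anl_monaural = acceptable_noise_level(60.0, 52.0)  # 8 dB
anl_binaural = acceptable_noise_level(60.0, 55.0)  # 5 dB (more noise accepted)
```

The binaural advantage reported in the study corresponds to this kind of drop in ANL under binaural amplification.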
Liu, Chang; Liu, Sha; Zhang, Ning; Yang, Yilin; Kong, Ying; Zhang, Luo
The purposes of the present study were to establish the Standard-Chinese version of Lexical Neighborhood Test (LNT) and to examine the lexical and age effects on spoken-word recognition in normal-hearing children. Six lists of monosyllabic and six lists of disyllabic words (20 words/list) were selected from the database of daily speech materials for normal-hearing (NH) children of ages 3-5 years. The lists were further divided into "easy" and "hard" halves according to the word frequency and neighborhood density in the database based on the theory of Neighborhood Activation Model (NAM). Ninety-six NH children (age ranged between 4.0 and 7.0 years) were divided into three different age groups of 1-year intervals. Speech-perception tests were conducted using the Standard-Chinese monosyllabic and disyllabic LNT. The inter-list performance was found to be equivalent and inter-rater reliability was high with 92.5-95% consistency. Results of word-recognition scores showed that the lexical effects were all significant. Children scored higher with disyllabic words than with monosyllabic words. "Easy" words scored higher than "hard" words. The word-recognition performance also increased with age in each lexical category. A multiple linear regression analysis showed that neighborhood density, age, and word frequency appeared to have increasingly more contributions to Chinese word recognition. The results of the present study indicated that performances of Chinese word recognition were influenced by word frequency, age, and neighborhood density, with word frequency playing a major role. These results were consistent with those in other languages, supporting the application of NAM in the Chinese language. The development of Standard-Chinese version of LNT and the establishment of a database of children of 4-6 years old can provide a reliable means for spoken-word recognition test in children with hearing impairment. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.
Prasad, Seema Gorur; Patil, Gouri Shanker; Mishra, Ramesh Kumar
Deaf individuals have been known to process visual stimuli better at the periphery compared to the normal hearing population. However, very few studies have examined attention orienting in the oculomotor domain in the deaf, particularly when targets appear at variable eccentricity. In this study, we examined whether the visual perceptual processing advantage reported in deaf people also modulates spatial attentional orienting with eye movement responses. We used a spatial cueing task with cued and uncued targets that appeared at two different eccentricities and explored attentional facilitation and inhibition. We elicited both a saccadic and a manual response. The deaf showed a higher cueing effect for the ocular responses than the normal hearing participants. However, there was no group difference for the manual responses. There was also higher facilitation at the periphery for both saccadic and manual responses, irrespective of group. These results suggest that, owing to their superior visual processing ability, the deaf may orient attention faster to targets. We discuss the results in terms of previous studies on cueing and attentional orienting in deaf individuals.
Erica Endo Amemiya
CONTEXT AND OBJECTIVE: Nouns and verbs indicate actions in oral communication. However, hearing impairment can compromise the acquisition of oral language to such an extent that appropriate use of these word classes can be challenging. The objective of this study was to compare the use of nouns and verbs in the oral narratives of hearing-impaired and hearing children. DESIGN AND SETTING: Analytical cross-sectional study at the Department of Speech-Language and Hearing Sciences, Universidade Federal de São Paulo. METHODS: Twenty-one children with moderate to profound bilateral neurosensory hearing impairment and twenty-one with normal hearing (controls) were matched according to sex, school year, and school type. A board showing pictures was presented to each child to elicit a narrative and measure their performance in producing nouns and verbs. RESULTS: Twenty-two (52.4%) of the subjects were males. The mean age was 8 years (standard deviation, SD = 1.5). Comparing averages between the groups of boys and girls, we did not find any significant difference in their use of nouns, but among verbs there was a significant difference in use of the imperative (P = 0.041): more frequent among boys (mean = 2.91). There was no significant difference in the use of nouns and verbs between deaf children and hearing children in relation to school type. Regarding use of the indicative, there was a nearly significant trend (P = 0.058). CONCLUSION: Among oralized hearing-impaired children who underwent speech therapy, performance in noun and verb use was similar to that of their hearing counterparts.
Bouserhal, Rachel E; Macdonald, Ewen N; Falk, Tiago H; Voix, Jérémie
Speech production in noise with varying talker-to-listener distance has been well studied for the open ear condition. However, occluding the ear canal can affect the auditory feedback and cause deviations from the models presented for the open-ear condition. Communication is a main concern for people wearing hearing protection devices (HPD). Although practical, radio communication is cumbersome, as it does not distinguish designated receivers. A smarter radio communication protocol must be developed to alleviate this problem. Thus, it is necessary to model speech production in noise while wearing HPDs. Such a model opens the door to radio communication systems that distinguish receivers and offer more efficient communication between persons wearing HPDs. This paper presents the results of a pilot study aimed to investigate the effects of occluding the ear on changes in voice level and fundamental frequency in noise and with varying talker-to-listener distance. Twelve participants with a mean age of 28 participated in this study. Compared to existing data, results show a trend similar to the open ear condition with the exception of the occluded quiet condition. This implies that a model can be developed to better understand speech production for the occluded ear.
Peter T. Johannesen
The aim of this study was to assess the relative importance of cochlear mechanical dysfunction, temporal processing deficits, and age on the ability of hearing-impaired listeners to understand speech in noisy backgrounds. Sixty-eight listeners took part in the study. They were provided with linear, frequency-specific amplification to compensate for their audiometric losses, and intelligibility was assessed for speech-shaped noise (SSN) and a time-reversed two-talker masker (R2TM). Behavioral estimates of cochlear gain loss and residual compression were available from a previous study and were used as indicators of cochlear mechanical dysfunction. Temporal processing abilities were assessed using frequency modulation detection thresholds. Age, audiometric thresholds, and the difference between audiometric threshold and cochlear gain loss were also included in the analyses. Stepwise multiple linear regression models were used to assess the relative importance of the various factors for intelligibility. Results showed that (a) cochlear gain loss was unrelated to intelligibility, (b) residual cochlear compression was related to intelligibility in SSN but not in a R2TM, (c) temporal processing was strongly related to intelligibility in a R2TM and much less so in SSN, and (d) age per se impaired intelligibility. In summary, all factors affected intelligibility, but their relative importance varied across maskers.
Monshizadeh, Leila; Vameghi, Roshanak; Sajedi, Firoozeh; Yadegari, Fariba; Hashemi, Seyed Basir; Kirchem, Petra; Kasbi, Fatemeh
A cochlear implant is a device that helps hearing-impaired children by transmitting sound signals to the brain and helping them improve their speech, language, and social interaction. Although various studies have investigated the different aspects of speech perception and language acquisition in cochlear-implanted children, little is known about their social skills, particularly Persian-speaking cochlear-implanted children. Considering the growing number of cochlear implants being performed in Iran and the increasing importance of developing near-normal social skills as one of the ultimate goals of cochlear implantation, this study was performed to compare social interaction between Iranian cochlear-implanted children who have undergone rehabilitation (auditory verbal therapy) after surgery and normal-hearing children. This descriptive-analytical study compared the social interaction level of 30 children with normal hearing and 30 with cochlear implants who were selected by convenience sampling. The Raven test was administered to both groups to ensure normal intelligence quotient. The social interaction status of both groups was evaluated using the Vineland Adaptive Behavior Scale, and statistical analysis was performed using the Statistical Package for the Social Sciences (SPSS) version 21. After controlling for age as a covariate, no significant difference was observed between the social interaction scores of the two groups (p > 0.05). In addition, social interaction had no correlation with sex in either group. Cochlear implantation followed by auditory verbal rehabilitation helps children with sensorineural hearing loss to have normal social interactions, regardless of their sex.
Yu, Jyaehyoung; Lee, Donguk; Han, Woojae
Today, people listen to loud music using personal listening devices. Although a majority of studies have reported that the high volume played on these listening devices produces a latent risk of hearing problems, there is a lack of studies on "double noise exposures" such as environmental noise plus recreational noise. The present study measured the preferred listening levels of mobile phone programs with subway interior noise for 74 normal-hearing participants in five age groups (ranging from 20s to 60s). Speakers presented the subway interior noise at 73.45 dB while each subject listened to three application programs [Digital Multimedia Broadcasting (DMB), music, game] for 30 min using a tablet personal computer with an earphone. The participants' earphone volume levels were analyzed using a sound level meter and a 2cc coupler. Overall, the results showed that those in their 20s listened to the three programs significantly louder, with DMB set at significantly higher volume levels than the other programs. Higher volume levels were needed for the middle frequencies compared to the lower and higher frequencies. We concluded that any potential risk of noise-induced hearing loss should be communicated to mobile phone users who listen regularly, although the volume level was not high enough that the users felt uncomfortable. When considering individual listening habits on mobile phones, further study to predict total accumulated environmental noise is still needed.
Picou, Erin M; Ricketts, Todd A; Hornsby, Benjamin W Y
To investigate the effect of visual cues on listening effort as well as whether predictive variables such as working memory capacity (WMC) and lipreading ability affect the magnitude of listening effort. Twenty participants with normal hearing were tested using a paired-associates recall task in 2 conditions (quiet and noise) and 2 presentation modalities (audio only [AO] and auditory-visual [AV]). Signal-to-noise ratios were adjusted to provide matched speech recognition across audio-only and AV noise conditions. Also measured were subjective perceptions of listening effort and 2 predictive variables: (a) lipreading ability and (b) WMC. Objective and subjective results indicated that listening effort increased in the presence of noise, but on average the addition of visual cues did not significantly affect the magnitude of listening effort. Although there was substantial individual variability, on average participants who were better lipreaders or had larger WMCs demonstrated reduced listening effort in noise in AV conditions. Overall, the results support the hypothesis that integrating auditory and visual cues requires cognitive resources in some participants. The data indicate that low lipreading ability or low WMC is associated with relatively effortful integration of auditory and visual information in noise.
Sentence Recognition Prediction for Hearing-impaired Listeners in Stationary and Fluctuation Noise With FADE: Empowering the Attenuation and Distortion Concept by Plomp With a Quantitative Processing Model.
Kollmeier, Birger; Schädler, Marc René; Warzybok, Anna; Meyer, Bernd T; Brand, Thomas
To characterize the individual patient's hearing impairment as obtained with the matrix sentence recognition test, a simulation Framework for Auditory Discrimination Experiments (FADE) is extended here using the Attenuation and Distortion (A+D) approach by Plomp as a blueprint for setting the individual processing parameters. FADE has been shown to predict the outcome of both speech recognition tests and psychoacoustic experiments based on simulations using an automatic speech recognition system requiring only a few assumptions. It builds on the closed-set matrix sentence recognition test, which is advantageous for testing individual speech recognition in a way comparable across languages. Individual predictions of speech recognition thresholds in stationary and in fluctuating noise were derived using the audiogram and an estimate of the internal level uncertainty for modeling the individual Plomp curves fitted to the data with the Attenuation (A-) and Distortion (D-) parameters of the Plomp approach. The "typical" audiogram shapes from Bisgaard et al., with or without a "typical" level uncertainty, and the individual data were used for individual predictions. As a result, the individualization of the level uncertainty was found to be more important than the exact shape of the individual audiogram for accurately modeling the outcome of the German Matrix test in stationary or fluctuating noise for listeners with hearing impairment. The prediction accuracy of the individualized approach also outperforms the (modified) Speech Intelligibility Index approach, which is based on the individual threshold data only. © The Author(s) 2016.
Schreitmüller, Stefan; Frenken, Miriam; Bentz, Lüder; Ortmann, Magdalene; Walger, Martin; Meister, Hartmut
Watching a talker's mouth is beneficial for speech reception (SR) in many communication settings, especially in noise and when hearing is impaired. Measures for audiovisual (AV) SR can be valuable in the framework of diagnosing or treating hearing disorders. This study addresses the lack of standardized methods in many languages for assessing lipreading, AV gain, and integration. A new method is validated that supplements a German speech audiometric test with visualizations of the synthetic articulation of an avatar, which makes it feasible to lip-sync auditory speech in a highly standardized way. Three hypotheses were formed according to the literature on AV SR that used live or filmed talkers. It was tested whether the respective effects could be reproduced with synthetic articulation: (1) cochlear implant (CI) users have a higher visual-only SR than normal-hearing (NH) individuals, and younger individuals obtain higher lipreading scores than older persons. (2) Both CI and NH listeners gain from presenting AV over unimodal (auditory or visual) sentences in noise. (3) Both CI and NH listeners efficiently integrate complementary auditory and visual speech features. In a controlled, cross-sectional study with 14 experienced CI users (mean age 47.4) and 14 NH individuals (mean age 46.3, similar broad age distribution), lipreading, AV gain, and integration were assessed with a German matrix sentence test. Visual speech stimuli were synthesized by the articulation of the Talking Head system "MASSY" (Modular Audiovisual Speech Synthesizer), which displayed standardized articulation with respect to the visibility of German phones. In line with the hypotheses and previous literature, CI users had a higher mean visual-only SR than NH individuals (CI, 38%; NH, 12%; p < 0.001). Age was correlated with lipreading such that within each group, younger individuals obtained higher visual-only scores than older persons (rCI = -0.54; p = 0.046; rNH = -0.78; p < 0.001). Both CI and NH
Spataro, Sandra E.; Bloch, Janel
Listening is a critical communication skill and therefore an essential element of management education. "Active" listening surpasses passive listening or simple hearing to establish a deeper connection between speaker and listener, as the listener gives the speaker full attention via inquiry, reflection, respect, and empathy. This…
A. Goedegebure (Andre)
Hearing-aid users often continue to have problems with poor speech understanding in difficult acoustical conditions. Another generally acknowledged problem is that certain sounds become too loud whereas other sounds are still not audible. Dynamic range compression is a signal processing
Introduction: Noonan syndrome (NS) is a heterogeneous genetic disease that affects many parts of the body. It was named after Dr. Jacqueline Anne Noonan, a paediatric cardiologist. Case Report: We report audiological tests and auditory brainstem response (ABR) findings in a 5-year-old Malay boy with NS. Despite showing the marked signs of NS, the child could only produce a few meaningful words. Audiological tests found him to have bilateral mild conductive hearing loss at low frequencies. In ABR testing, despite good waveform morphology, the results were atypical. The absolute latency of wave V was normal, but the interpeak latencies of waves I-V, I-II, and II-III were prolonged. Interestingly, the interpeak latency of waves III-V was abnormally short. Conclusion: The abnormal ABR results are possibly due to an abnormal anatomical condition of the brainstem and might contribute to the speech delay.
The purpose of this research was two-fold: firstly, to develop a music perception test for hearing aid users, and secondly, to evaluate the influence of non-linear frequency compression (NFC) on music perception with the use of the self-compiled test. This article focuses on the description of the development and validation of a music perception test. To date, the main direction in frequency-lowering hearing aid studies has been in relation to speech perception abilities. With improvements in hearing aid technology, interest grew in music perception as a dimension that could improve hearing aid users' quality of life. The Music Perception Test (MPT) was designed to evaluate different aspects of rhythm, timbre, pitch, and melody. The development of the MPT can be described as design-based. Phase 1 of the study included test development and recording, while Phase 2 entailed presentation of stimuli to normal-hearing listeners (n=15) and hearing aid users (n=4). Based on the findings of Phase 2, item analysis was performed to eliminate or change stimuli that resulted in high error rates. During Phase 3 the adapted version of the test was administered to a smaller group of normal-hearing listeners (n=4) and twenty hearing aid users. Results showed that normal-hearing adults as well as adults using hearing aids were able to complete all the sub-tests of the MPT, although hearing aid users scored lower on the various sub-tests than normal-hearing listeners. For the rhythm section of the MPT, normal-hearing listeners scored on average 93.8% versus 75.5% for hearing aid users, and 83% for the timbre section compared to 62.3% for hearing aid users. Normal-hearing listeners obtained an average score of 86.3% for the pitch section and 88.2% for the melody section, compared to the 70.8% and 61.9% respectively obtained by hearing aid users. This implies that the MPT can be used successfully for assessment of music perception in hearing aid users within the South African
Shi, Lu-Feng; Morozova, Natalia
Word recognition is a basic component in a comprehensive hearing evaluation, but data are lacking for listeners speaking two languages. This study obtained such data for Russian natives in the US and analysed the data using the perceptual assimilation model (PAM) and speech learning model (SLM). Listeners were randomly presented 200 NU-6 words in quiet. Listeners responded verbally and in writing. Performance was scored on words and phonemes (word-initial consonants, vowels, and word-final consonants). Seven normal-hearing, adult monolingual English natives (NM), 16 English-dominant (ED), and 15 Russian-dominant (RD) Russian natives participated. ED and RD listeners differed significantly in their language background. Consistent with the SLM, NM outperformed ED listeners and ED outperformed RD listeners, whether responses were scored on words or phonemes. NM and ED listeners shared similar phoneme error patterns, whereas RD listeners' errors had unique patterns that could be largely understood via the PAM. RD listeners had particular difficulty differentiating vowel contrasts /i-I/, /æ-ε/, and /ɑ-Λ/, word-initial consonant contrasts /p-h/ and /b-f/, and word-final contrasts /f-v/. Both first-language phonology and second-language learning history affect word and phoneme recognition. Current findings may help clinicians differentiate word recognition errors due to language background from hearing pathologies.
Purpose Listeners with hearing loss, as well as many listeners with clinically normal hearing, often experience great difficulty segregating talkers in a multiple-talker sound field and selectively attending to the desired “target” talker while ignoring the speech from unwanted “masker” talkers and other sources of sound. This listening situation forms the classic “cocktail party problem” described by Cherry (1953) that has received a great deal of study over the past few decades. In this article, a new approach to improving sound source segregation and enhancing auditory selective attention is described. The conceptual design, current implementation, and results obtained to date are reviewed and discussed in this article. Method This approach, embodied in a prototype “visually guided hearing aid” (VGHA) currently used for research, employs acoustic beamforming steered by eye gaze as a means for improving the ability of listeners to segregate and attend to one sound source in the presence of competing sound sources. Results The results from several studies demonstrate that listeners with normal hearing are able to use an attention-based “spatial filter” operating primarily on binaural cues to selectively attend to one source among competing spatially distributed sources. Furthermore, listeners with sensorineural hearing loss generally are less able to use this spatial filter as effectively as are listeners with normal hearing especially in conditions high in “informational masking.” The VGHA enhances auditory spatial attention for speech-on-speech masking and improves signal-to-noise ratio for conditions high in “energetic masking.” Visual steering of the beamformer supports the coordinated actions of vision and audition in selective attention and facilitates following sound source transitions in complex listening situations. Conclusions Both listeners with normal hearing and with sensorineural hearing loss may benefit from the acoustic
Le Prell, Colleen G; Dell, Shawna; Hensley, Brittany; Hall, James W; Campbell, Kathleen C M; Antonelli, Patrick J; Green, Glenn E; Miller, James M; Guire, Kenneth
One of the challenges for evaluating new otoprotective agents for potential benefit in human populations is the availability of an established clinical paradigm with real-world relevance. These studies were explicitly designed to develop a real-world digital music exposure that reliably induces temporary threshold shift (TTS) in normal-hearing human subjects. Thirty-three subjects participated in studies that measured effects of digital music player use on hearing. Subjects selected either rock or pop music, which was then presented at 93 to 95 (n = 10), 98 to 100 (n = 11), or 100 to 102 (n = 12) dBA in-ear exposure level for a period of 4 hr. Audiograms and distortion product otoacoustic emissions (DPOAEs) were measured before and after music exposure. Postmusic tests were initiated 15 min, 1 hr 15 min, 2 hr 15 min, and 3 hr 15 min after the exposure ended. Additional tests were conducted the following day and 1 week later. Changes in thresholds after the lowest-level exposure were difficult to distinguish from test-retest variability; however, TTS was reliably detected after higher levels of sound exposure. Changes in audiometric thresholds had a "notch" configuration, with the largest changes observed at 4 kHz (mean = 6.3 ± 3.9 dB; range = 0-14 dB). Recovery was largely complete within the first 4 hr postexposure, and all subjects showed complete recovery of both thresholds and DPOAE measures when tested 1 week postexposure. These data provide insight into the variability of TTS induced by music-player use in a healthy, normal-hearing, young adult population, with music playlist, level, and duration carefully controlled. These data confirm the likelihood of temporary changes in auditory function after digital music-player use. Such data are essential for the development of a human clinical trial protocol that provides a highly powered design for evaluating novel therapeutics in human clinical trials. Care must be taken to fully inform potential subjects in
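The "notch" configuration described above (largest threshold shift near 4 kHz) can be located mechanically once pre- and post-exposure audiograms are in hand. The threshold values below are hypothetical, chosen only to illustrate the computation; they are not data from the study.

```python
# Hypothetical pre/post-exposure thresholds (dB HL) at audiometric
# frequencies, used to locate the TTS "notch" frequency.
freqs_khz = [0.5, 1, 2, 3, 4, 6, 8]
pre  = [5, 5, 5, 10,  5, 10, 5]
post = [6, 7, 9, 16, 12, 14, 6]

# Temporary threshold shift is the post-minus-pre difference per frequency;
# the notch is the frequency with the largest shift.
tts = [b - a for a, b in zip(pre, post)]
notch_khz = freqs_khz[tts.index(max(tts))]
```

With these example values the maximum shift falls at 4 kHz, matching the notch configuration typical of noise-induced threshold shifts.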
Gfeller, Kate; Christ, Aaron; Knutson, John; Witt, Shelley; Mehr, Maureen
The purposes of this study were (a) to develop a test of complex song appraisal that would be suitable for use with adults who use a cochlear implant (assistive hearing device) and (b) to compare the appraisal ratings (liking) of complex songs by adults who use cochlear implants (n = 66) with a comparison group of adults with normal hearing (n = 36). The article describes the development of a computerized test for appraisal, with emphasis on its theoretical basis and the process for item selection of naturalistic stimuli. The appraisal test was administered to the 2 groups to determine the effects of prior song familiarity and subjective complexity on complex song appraisal. Comparison of the 2 groups indicates that the implant users rate 2 of 3 musical genres (country western, pop) as significantly more complex than do normal hearing adults, and give significantly less positive ratings to classical music than do normal hearing adults. Appraisal responses of implant recipients were examined in relation to hearing history, age, performance on speech perception and cognitive tests, and musical background.
Background and Aim: While most people with tinnitus have some degree of hearing impairment, a small percentage of patients admitted to ear, nose and throat clinics or hearing evaluation centers complain of tinnitus despite having normal hearing thresholds. This study was performed to better understand the probable causes of tinnitus and to investigate possible changes in auditory brainstem function in normal-hearing patients with chronic tinnitus. Methods: In this comparative cross-sectional, descriptive and analytic study, 52 ears (26 with and 26 without tinnitus) were examined. Components of the auditory brainstem response (ABR), including wave latencies and wave amplitudes, were determined in the two groups and analyzed using appropriate statistical methods. Results: The mean differences between the absolute latencies of waves I, III and V were less than 0.1 ms between the two groups, which was not statistically significant. Also, the interpeak latency values of waves I-III, III-V and I-V showed no significant difference between the groups. Only the V/I amplitude ratio in the tinnitus group was significantly higher (p=0.04). Conclusion: The changes observed in the amplitude of the waves, especially the latter ones, can be considered an indication of plastic changes in neuronal activity and of their possible role in the generation of tinnitus in normal-hearing patients.
Bull, Rebecca; Marschark, Marc; Nordmann, Emily; Sapere, Patricia; Skene, Wendy A
Many children with hearing loss (CHL) show a delay in mathematical achievement compared to children with normal hearing (CNH). This study examined whether there are differences in acuity of the approximate number system (ANS) between CHL and CNH, and whether ANS acuity is related to math achievement. Working memory (WM), short-term memory (STM), and inhibition were considered as mediators of any relationship between ANS acuity and math achievement. Seventy-five CHL were compared with 75 age- and gender-matched CNH. ANS acuity, mathematical reasoning, WM, and STM of CHL were significantly poorer compared to CNH. Group differences in math ability were no longer significant when ANS acuity, WM, or STM was controlled. For CNH, WM and STM fully mediated the relationship of ANS acuity to math ability; for CHL, WM and STM only partially mediated this relationship. ANS acuity, WM, and STM are significant contributors to hearing status differences in math achievement, and to individual differences within the group of CHL. Statement of contribution What is already known on this subject? Children with hearing loss often perform poorly on measures of math achievement, although there have been few studies focusing on basic numerical cognition in these children. In typically developing children, the approximate number system predicts math skills concurrently and longitudinally, although there have been some contradictory findings. Recent studies suggest that domain-general skills, such as inhibition, may account for the relationship found between the approximate number system and math achievement. What does this study add? This is the first robust examination of the approximate number system in children with hearing loss, and the findings suggest poorer acuity of the approximate number system in these children compared to hearing children. The study addresses recent issues regarding the contradictory findings of the relationship of the approximate number system to math ability
Santurette, Sébastien; Dau, Torsten
The ability of eight normal-hearing listeners and fourteen listeners with sensorineural hearing loss to detect and identify pitch contours was measured for binaural-pitch stimuli and salience-matched monaurally detectable pitches. In an effort to determine whether impaired binaural pitch perception was linked to a specific deficit, the auditory profiles of the individual listeners were characterized using measures of loudness perception, cognitive ability, binaural processing, temporal fine structure processing, and frequency selectivity, in addition to common audiometric measures. Two of the listeners were found not to perceive binaural pitch at all, despite a clear detection of monaural pitch. While both binaural and monaural pitches were detectable by all other listeners, identification scores were significantly lower for binaural than for monaural pitch. A total absence of binaural pitch sensation coexisted with a loss of a binaural signal-detection advantage in noise, without implying reduced cognitive function. Auditory filter bandwidths did not correlate with the difference in pitch identification scores between binaural and monaural pitches. However, subjects with impaired binaural pitch perception showed deficits in temporal fine structure processing. Whether the observed deficits stemmed from peripheral or central mechanisms could not be resolved here, but the present findings may be useful for hearing loss characterization.
Everyday communication frequently comprises situations with more than one talker speaking at a time. These situations are challenging since they pose high attentional and memory demands, placing cognitive load on the listener. Hearing impairment additionally exacerbates communication problems under these circumstances. We examined the effects of hearing loss and attention tasks on speech recognition with competing talkers in older adults with and without hearing impairment. We hypothesized that hearing loss would affect word identification, talker separation and word recall, and that the difficulties experienced by the hearing-impaired listeners would be especially pronounced in a task with high attentional and memory demands. Two listener groups, closely matched regarding their age and neuropsychological profile but differing in hearing acuity, were examined regarding their speech recognition with competing talkers in two different tasks. One task required repeating back words from one target talker (1TT) while ignoring the competing talker, whereas the other required repeating back words from both talkers (2TT). The competing talkers differed with respect to their voice characteristics. Moreover, sentences with either low or high context were used in order to consider linguistic properties. Compared to their normal-hearing peers, listeners with hearing loss showed limited speech recognition in both tasks. Their difficulties were especially pronounced in the more demanding 2TT task. In order to shed light on the underlying mechanisms, different error sources, namely having misunderstood, confused, or omitted words, were investigated. Misunderstanding and omitting words were more frequently observed in the hearing-impaired than in the normal-hearing listeners. In line with common speech perception models, it is suggested that these effects are related to impaired object formation and taxed working memory capacity (WMC). In a post hoc analysis the
Sørensen, Anna Josefine; Weisser, Adam; MacDonald, Ewen
Normal conversation requires interlocutors to monitor the ongoing acoustic signal to judge when it is appropriate to start talking. Categorical thresholds for gaps and overlaps in turn-taking interactions were measured for normal-hearing and hearing-impaired listeners in both quiet and multitalker babble (+6 dB SNR). The slopes of the categorization functions were significantly shallower for hearing-impaired listeners and in the presence of background noise. Moreover, the categorization threshold for overlaps increased in background noise.
Sommers, Mitchell S
The purpose of this summary is to examine changes in listening comprehension across the adult lifespan and to identify factors associated with individual differences in listening comprehension. In this article, the author reports on both cross-sectional and longitudinal changes in listening comprehension. Despite significant declines in both sensory and cognitive abilities, listening comprehension remains relatively unchanged in middle-aged listeners (between the ages of 40 and 60 years) compared with young listeners. These results are discussed with respect to possible compensatory factors that maintain listening comprehension despite impaired hearing and reduced cognitive capacities.
Wu, Yu-Hsiang; Aksan, Nazan; Rizzo, Matthew; Stangl, Elizabeth; Zhang, Xuyang; Bentler, Ruth
consistent with literature that evaluated younger (approximately 20 years old), normal hearing adults. Because of this, a follow-up study was conducted. In the follow-up study, the visual reaction-time dual-task experiment using the same speech materials and road noises was repeated on younger adults with normal hearing. Contrary to findings with older participants, the results indicated that the directional technology significantly improved performance in both speech recognition and visual reaction-time tasks. Adding a speech listening task to driving undermined driving performance. Hearing aid technologies significantly improved speech recognition while driving, but did not significantly reduce listening effort. Listening effort measured by dual-task experiments using a simulated real-world driving task and a conventional laboratory-style task was generally consistent. For a given listening environment, the benefit of hearing aid technologies on listening effort measured from younger adults with normal hearing may not be fully translated to older listeners with hearing impairment.
Wang, M D; Reed, C M; Bilger, R C
It has been found that listeners with sensorineural hearing loss who show similar patterns of consonant confusions also tend to have similar audiometric profiles. The present study determined whether normal listeners, presented with filtered speech, would produce consonant confusions similar to those previously reported for hearing-impaired listeners. Consonant confusion matrices were obtained from eight normal-hearing subjects for four sets of CV and VC nonsense syllables presented under six high-pass and six low-pass filtering conditions. Patterns of consonant confusion for each condition were described using phonological features in sequential information analysis. Severe low-pass filtering produced consonant confusions comparable to those of listeners with high-frequency hearing loss. Severe high-pass filtering gave a result comparable to that of patients with flat or rising audiograms. Mild filtering resulted in confusion patterns comparable to those of listeners with essentially normal hearing. An explanation in terms of the spectrum, the level of speech, and the configuration of the individual listener's audiogram is given.
One of the most common complaints of people with impaired hearing concerns their difficulty with understanding speech. Particularly in the presence of background noise, hearing-impaired people often encounter great difficulties with speech communication. In most cases, the problem persists even if reduced audibility has been compensated for by hearing aids. It has been hypothesized that part of the difficulty arises from changes in the perception of sounds that are well above hearing threshold, such as reduced frequency selectivity and deficits in the processing of temporal fine structure (TFS) at the output of the inner-ear (cochlear) filters. The purpose of this work was to investigate these aspects in detail. One chapter studies relations between frequency selectivity, TFS processing, and speech reception in listeners with normal and impaired hearing, using behavioral listening experiments. While...
Mendelsohn, David J.
Review of research on trends in teaching second-language listening focuses primarily on strategy instruction and a strategy-based approach but also refers to developments in terms of listening and "high-tech contexts," interactive listening, and academic listening. Classroom listening textbooks are discussed, with attention to the mismatch between…
Sahli, Sanem; Belgin, Erol
The purpose of this study is to compare the self-esteem of adolescents with cochlear implants (before and after cochlear implantation) and adolescents with normal hearing. For this purpose, the Rosenberg self-esteem scale was administered to a study group consisting of 30 adolescents with cochlear implants between the ages of 12 and 19, and to a control group consisting of 60 adolescents with similar characteristics. The scale was used to evaluate the self-esteem of adolescents with cochlear implants and with normal hearing. The scores of the two groups were then compared statistically. There was no statistically significant difference between the self-esteem values of the cochlear implant group and the control group. However, there was a statistically significant difference between the self-esteem values before cochlear implantation and those of the control group. In this study, we examined changes in self-esteem according to different variables. It was found that in both groups, self-esteem was higher for adolescents who had had preschool education, had brothers or sisters, had a high level of family income, whose mother was working, and whose parents had higher levels of education. On the other hand, birth order and the father's profession did not seem to have any effect on the child's self-esteem. Based on these findings, cochlear implantation appears to have a positive effect on quality of life, and it is suggested that adolescents and their families receive assistance from experts regarding the characteristics and principles of approaching the child in this period. Adolescents should be directed towards social activities and courses, their strengths should be supported, and further studies should be carried out with different case groups on
Nielsen, Maja Kirstine E.; Poulsen, Torben
Hearing threshold levels and equal-loudness-level contours of 1/3-octave noise bands at 40 phon and 60 phon were measured for 27 normal-hearing listeners in an approximately diffuse sound field. The threshold data in the frequency range 125 Hz to 1 kHz were 3-6 dB higher than the values given...
Morris, David Jackson
Cochlear implant (CI) listeners can do well when attending to speech in quiet, yet challenging listening situations are more problematic. Previous studies have shown that fluctuations in the noise do not yield better speech recognition scores for CI listeners as they can for normal-hearing (NH) listeners. … derived from non-scripted Danish speech. The F0 temporal midpoint of the initial syllable was varied stepwise in semitones. Competing signals of modulated white noise and speech-shaped noise at 0 dB and 12 dB SNR were added to the tokens prior to 8-channel noise-excited vocoder processing. Stimuli were...
The purpose of the present study was to replicate and extend to word recognition previous findings of reduced magnitude and reliability of laterality effects when exogenous cueing was used in a dichotic listening task with syllable pairs. Twenty right-handed undergraduate students with normal hearing (10 females, 10 males) completed a dichotic…
Viswanathan, Navin; Kokkinakis, Kostas; Williams, Brittany T.
Purpose: The purpose of this study was to evaluate whether listeners with normal hearing perceiving noise-vocoded speech-in-speech demonstrate better intelligibility of target speech when the background speech was mismatched in language (linguistic release from masking [LRM]) and/or location (spatial release from masking [SRM]) relative to the…
Farzaneh Zamiri Abdollahi
Objectives: Otoacoustic emissions (OAEs) are sounds that originate in the cochlea and are measured in the external auditory canal; they provide a simple, efficient and non-invasive objective indicator of healthy cochlear function. The olivocochlear bundle (OCB), or auditory efferent system, is a neural feedback pathway that originates in the brainstem and terminates in the inner ear; it can be evaluated non-invasively by applying a contralateral acoustic stimulus and simultaneously measuring the reduction of OAE amplitude. In this study, gender differences in TEOAE amplitude and TEOAE suppression were investigated. Methods: This study was performed at the Akhavan rehabilitation centre of the University of Social Welfare and Rehabilitation Sciences, Tehran, Iran, in 2011. Sixty young adults (30 female and 30 male) between 21 and 27 years old (mean = 24 years, SD = 1.661) meeting normal-hearing criteria were selected. The right ear of all cases was tested to neutralize any side effect. Results: According to an independent t-test, TEOAE amplitude was significantly greater in females, with a mean value of 24.98 dB (P<0.001), and TEOAE suppression was significantly greater in males, with a mean value of 2.07 dB (P<0.001). Discussion: This study shows that there is a significant gender difference in adults' TEOAEs (cochlear mechanisms) and TEOAE suppression (auditory efferent system). The exact reason for these results is not clear. According to this study, different norms for males and females might be necessary.
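The group comparison reported above is an independent-samples t-test; a minimal sketch of that analysis, with invented amplitude values standing in for the study's data, might look like this:

```python
import math
from statistics import mean, variance

def t_independent(a, b):
    """Pooled-variance independent-samples t statistic (equal-variance form)."""
    na, nb = len(a), len(b)
    sp2 = ((na - 1) * variance(a) + (nb - 1) * variance(b)) / (na + nb - 2)
    return (mean(a) - mean(b)) / math.sqrt(sp2 * (1 / na + 1 / nb))

# Invented TEOAE amplitudes in dB: the study tested 30 listeners per
# group; six per group here just keep the sketch short.
female_amp = [26.1, 24.3, 25.7, 23.9, 25.2, 24.8]
male_amp = [21.0, 22.4, 20.8, 22.1, 21.5, 21.9]

t = t_independent(female_amp, male_amp)
print(f"t = {t:.2f}")  # a large positive t favours the female group
```

In practice the t statistic would be referred to a t distribution with n_a + n_b - 2 degrees of freedom to obtain the reported P value.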
Petersen, Bjørn; Hansen, Mads; Sørensen, Stine Derdau
Cochlear implant users differ significantly from their normal-hearing peers when it comes to perception of music. Several studies have shown that structural features such as rhythm, timbre, and pitch are transmitted less accurately through an implant. However, we cannot predict personal enjoyment of music solely as a function of accuracy of perception. But can music be pleasant with a cochlear implant at all? Our aim here was to gather information on both music enjoyment and listening habits before the onset of hearing loss and post-operation from a large, representative sample of Danish … music less post-implantation than prior to their hearing loss. Nevertheless, a large majority of CI listeners either prefer music over not hearing music at all, or find music as pleasant as they recall it before their hearing loss, or more so.
Lee, Ji Young; Lee, Jin Tae; Heo, Hye Jeong; Choi, Chul-Hee; Choi, Seong Hee; Lee, Kyungjae
Background and Objectives: People usually converse in real-life background noise. They experience more difficulty understanding speech in noise than in a quiet environment. The present study investigated how speech recognition in real-life background noise is affected by the type of noise, signal-to-noise ratio (SNR), and age. Subjects and Methods: Eighteen young adults and fifteen middle-aged adults with normal hearing participated in the present study. Three types of noise [subway noise, vacu...
Liu, Danzheng; Shi, Lu-Feng
This study established the performance-intensity function for the Beijing and Taiwan Mandarin bisyllabic word recognition tests in noise in native speakers of Wu Chinese. Effects of the test dialect and listeners' first language on psychometric variables (i.e., slope and 50%-correct threshold) were analyzed. Thirty-two normal-hearing Wu-speaking adults who had used Mandarin since early childhood were compared to 16 native Mandarin-speaking adults. Both the Beijing and Taiwan bisyllabic word recognition tests were presented at 8 signal-to-noise ratios (SNRs) in 4-dB steps (-12 dB to +16 dB). At each SNR, a half list (25 words) was presented in speech-spectrum noise to listeners' right ear. The order of test, SNR, and half list was randomized across listeners. Listeners responded orally and in writing. Overall, the Wu-speaking listeners performed comparably to the Mandarin-speaking listeners on both tests. Compared to the Taiwan test, the Beijing test yielded a significantly lower threshold for both the Mandarin- and Wu-speaking listeners, as well as a significantly steeper slope for the Wu-speaking listeners. Both Mandarin tests can be used to evaluate Wu-speaking listeners. Of the two, the Taiwan Mandarin test results in more comparable functions across listener groups. Differences in the performance-intensity function between listener groups and between tests indicate a first-language effect and a dialectal effect, respectively.
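The two psychometric variables named above, the 50%-correct threshold and the slope, parameterize a logistic performance-intensity function; a small sketch (the threshold and slope values below are illustrative, not the study's fits) is:

```python
import math

def logistic_pi(snr_db, threshold_db, slope_per_db):
    """Proportion correct at a given SNR for a logistic
    performance-intensity function. `slope_per_db` is the slope
    (proportion correct per dB) at the 50%-correct threshold."""
    return 1.0 / (1.0 + math.exp(-4.0 * slope_per_db * (snr_db - threshold_db)))

# Hypothetical fit: threshold at -2 dB SNR, slope of 5 percentage
# points per dB at threshold.
threshold, slope = -2.0, 0.05
for snr in range(-12, 17, 4):  # the study's -12 to +16 dB range in 4-dB steps
    print(f"{snr:+3d} dB SNR -> {100 * logistic_pi(snr, threshold, slope):5.1f}% correct")
```

The factor 4 in the exponent makes the derivative of the curve at threshold equal exactly `slope_per_db`, so the two fitted parameters can be read directly from the function.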
Jaekel, Brittany N.; Newman, Rochelle S.; Goupell, Matthew J.
Purpose: Normal-hearing (NH) listeners rate-normalize, temporarily remapping phonemic category boundaries to account for a talker's speech rate. It is unknown whether adults who use auditory prostheses called cochlear implants (CIs) can rate-normalize, as CIs transmit degraded speech signals to the auditory nerve. Ineffective adjustment to rate…
Freshour, Frank W.
Research indicates that people spend roughly 45 to 65 percent of their waking moments listening to other persons. To help administrators improve their listening effectiveness, a format to develop a profile of personal listening styles is provided. The strengths and weaknesses of six different listening styles are explored along with ways to…
Vecchiato, G; Maglione, A G; Scorpecci, A; Malerba, P; Graziani, I; Cherubino, P; Astolfi, L; Marsella, P; Colosimo, A; Babiloni, Fabio
The perception of music in cochlear-implanted (CI) patients is an important aspect of their quality of life. The pleasantness of music perception by such CI patients can be analyzed through analysis of EEG rhythms. Studies on healthy subjects show that there exists a particular frontal asymmetry of the EEG alpha rhythm that correlates with the pleasantness of perceived stimuli (approach-withdrawal theory). In particular, here we describe differences between EEG activities estimated in the alpha frequency band for a group of children with monolateral CIs and a normal-hearing group during the fruition of a musical cartoon. The results of the present analysis showed that the alpha EEG asymmetry patterns of the normal-hearing group indicate higher perceived pleasantness when compared to the cerebral activity of the monolateral CI patients. The present results thus support the statement that a monolateral CI group could perceive music in a less pleasant way when compared to normal-hearing children.
Jarollahi, Farnoush; Mohamadi, Reyhane; Modarresi, Yahya; Agharasouli, Zahra; Rahimzadeh, Shadi; Ahmadi, Tayebeh; Keyhani, Mohammad-Reza
Since the pragmatic skills of hearing-impaired Persian-speaking children have not yet been investigated, particularly through story retelling, this study aimed to evaluate some pragmatic abilities of normal-hearing and hearing-impaired children using a story retelling test. Fifteen normal-hearing and 15 profoundly hearing-impaired 7-year-old children were evaluated using the story retelling test, with a content validity of 89%, construct validity of 85%, and reliability of 83%. Three macrostructure criteria (topic maintenance, event sequencing, and explicitness) and four microstructure criteria (referencing, conjunctive cohesion, syntax complexity, and utterance length) were assessed. The test was performed with live voice in a quiet room, and the children were then asked to retell the story. The children's retellings were recorded on tape, transcribed, scored, and analyzed. On the macrostructure criteria, utterances of hearing-impaired students were less consistent, did not give listeners enough information for a full understanding of the subject, and expressed the story events in a rational order less frequently than those of the normal-hearing group (P < …). Unlike normal-hearing students, who obtained high scores, hearing-impaired students failed to gain any scores on the items of this section. These results suggest that hearing-impaired children were not able to use language as effectively as their hearing peers, and they utilized quite different pragmatic functions. Copyright © 2017 Elsevier B.V. All rights reserved.
Graydon, Kelley; Rance, Gary; Dowell, Richard; Van Dun, Bram
The aim of the study was to investigate the long-term effects of early conductive hearing loss on binaural processing in school-age children. One hundred and eighteen children participated in the study: 82 children with a documented history of conductive hearing loss associated with otitis media, and 36 controls whose documented histories showed no evidence of otitis media or conductive hearing loss. All children were demonstrated to have normal hearing acuity and middle ear function at the time of assessment. The Listening in Spatialized Noise - Sentences (LiSN-S) task and the masking level difference (MLD) task were used as two different measures of binaural interaction ability. Children with a history of conductive hearing loss performed significantly poorer than controls on all LiSN-S conditions relying on binaural cues (DV90, p = …). Fifteen children with a conductive hearing loss history (18%) showed results consistent with a spatial processing disorder. No significant difference was observed between the conductive hearing loss group and the controls on the MLD task. Furthermore, no correlations were found between LiSN-S and MLD. Results show a relationship between early conductive hearing loss and listening deficits that persist once hearing has returned to normal. Results also suggest that the two binaural interaction tasks (LiSN-S and MLD) may be measuring binaural processing at different levels. Findings highlight the need for a screening measure of functional listening ability in children with a history of early otitis media.
Millman, Rebecca E.; Mattys, Sven L.
Purpose: Background noise can interfere with our ability to understand speech. Working memory capacity (WMC) has been shown to contribute to the perception of speech in modulated noise maskers. WMC has been assessed with a variety of auditory and visual tests, often pertaining to different components of working memory. This study assessed the relationship between speech perception in modulated maskers and components of auditory verbal working memory (AVWM) over a range of signal-to-noise rati...
Following a distinction John Mowitt draws between hearing (and phonics) and listening (and sonics), this article argues that the dominant notion of listening to sound was determined by the disciplinary framework of South African history and by the deployment of a cinematic documentary apparatus, both of which have served to disable the act of listening. The conditions of this hearing, and a deafness to a reduced or bracketed listening (Chion via Schaeffer) that would enable us to think the post in post-apartheid differently, are thus at the centre of our concerns here. We stage a series of screenings of expected possible soundtracks for Simon Gush's film and installation Red, simultaneously tracking the ways that sound, and particularly music and dialogue, can be shown to hold a certain way of thinking both the political history of South Africa and the politics of South African history. We conclude by listening more closely to hiss and murmur in the soundtrack to Red and suggest that this has major implications for ways of thinking and knowing.
Madsen, Sara M K; Moore, Brian C J
The signal processing and fitting methods used for hearing aids have mainly been designed to optimize the intelligibility of speech. Little attention has been paid to the effectiveness of hearing aids for listening to music. Perhaps as a consequence, many hearing-aid users complain that they are not satisfied with their hearing aids when listening to music. This issue inspired the Internet-based survey presented here. The survey was designed to identify the nature and prevalence of problems associated with listening to live and reproduced music with hearing aids. Responses from 523 hearing-aid users to 21 multiple-choice questions are presented and analyzed, and the relationships between responses to questions regarding music and questions concerned with information about the respondents, their hearing aids, and their hearing loss are described. Large proportions of the respondents reported that they found their hearing aids to be helpful for listening to both live and reproduced music, although less so for the former. The survey also identified problems such as distortion, acoustic feedback, insufficient or excessive gain, unbalanced frequency response, and reduced tone quality. The results indicate that the enjoyment of listening to music with hearing aids could be improved by an increase of the input and output dynamic range, extension of the low-frequency response, and improvement of feedback cancellation and automatic gain control systems. © The Author(s) 2014.
Zekveld, Adriana A; Kramer, Sophia E; Kessens, Judith M; Vlaming, Marcel S M G; Houtgast, Tammo
The aim of the current study was to examine whether partly incorrect subtitles that are automatically generated by an Automatic Speech Recognition (ASR) system, improve speech comprehension by listeners with hearing impairment. In an earlier study (Zekveld et al. 2008), we showed that speech comprehension in noise by young listeners with normal hearing improves when presenting partly incorrect, automatically generated subtitles. The current study focused on the effects of age, hearing loss, visual working memory capacity, and linguistic skills on the benefit obtained from automatically generated subtitles during listening to speech in noise. In order to investigate the effects of age and hearing loss, three groups of participants were included: 22 young persons with normal hearing (YNH, mean age = 21 years), 22 middle-aged adults with normal hearing (MA-NH, mean age = 55 years) and 30 middle-aged adults with hearing impairment (MA-HI, mean age = 57 years). The benefit from automatic subtitling was measured by Speech Reception Threshold (SRT) tests (Plomp & Mimpen, 1979). Both unimodal auditory and bimodal audiovisual SRT tests were performed. In the audiovisual tests, the subtitles were presented simultaneously with the speech, whereas in the auditory test, only speech was presented. The difference between the auditory and audiovisual SRT was defined as the audiovisual benefit. Participants additionally rated the listening effort. We examined the influences of ASR accuracy level and text delay on the audiovisual benefit and the listening effort using a repeated measures General Linear Model analysis. In a correlation analysis, we evaluated the relationships between age, auditory SRT, visual working memory capacity and the audiovisual benefit and listening effort. The automatically generated subtitles improved speech comprehension in noise for all ASR accuracies and delays covered by the current study. Higher ASR accuracy levels resulted in more benefit obtained
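The audiovisual benefit defined in the abstract above is the auditory SRT minus the audiovisual SRT; since a lower SRT is better, a positive difference means the subtitles helped. A trivial sketch with hypothetical SRT values:

```python
def audiovisual_benefit(srt_auditory_db, srt_audiovisual_db):
    """Benefit from subtitles in dB: positive when the audiovisual SRT
    is lower (better) than the auditory-only SRT."""
    return srt_auditory_db - srt_audiovisual_db

# Hypothetical SRTs for one listener (dB SNR), not the study's data.
print(audiovisual_benefit(-2.5, -5.0))  # prints 2.5 (a 2.5 dB benefit)
```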
Over the past four decades, there has been increasing interest in the effects of music listening on hearing. The purpose of this paper is to review published studies that detail the noise levels, the potential effects (e.g. noise-induced hearing loss), and the perceptions of those affected by music exposure in occupational and non-occupational settings. The review employed Medline, PubMed, PsychINFO, and the World Wide Web to find relevant studies in the scientific literature. Considered in this review are 43 studies concerning the currently most significant occupational sources of high-intensity music: rock and pop music playing and employment at music venues, as well as the most significant sources of non-occupational high-intensity music: concerts, discotheques (clubs), and personal music players. Although all of the activities listed above have the potential for hearing damage, the most serious threat to hearing comes from prolonged exposures to amplified live music (concerts). The review concludes that more research is needed to clarify the hearing loss risks of music exposure from personal music players and that current scientific literature clearly recognizes an unmet hearing health need for more education regarding the risks of loud music exposure and the benefits of wearing hearing protection, for more hearing protection use by those at risk, and for more regulations limiting music intensity levels at music entertainment venues.
D'Souza, Mary; Zhu, Xiaoxia; Frisina, Robert D.
Presbycusis, or age-related hearing loss, is the number one communicative disorder and one of the top three chronic medical conditions of our aged population. High-throughput technologies can potentially be used to identify differentially expressed genes that may be better diagnostic and therapeutic targets for sensory and neural disorders. Here we analyzed gene expression for a set of GABA receptors in the cochlea of aging CBA mice using the Affymetrix GeneChip MOE430A. Functional phenotypic hearing measures were made, including auditory brainstem response (ABR) thresholds and distortion-product otoacoustic emission (DPOAE) amplitudes (four age groups). Four specific criteria were used to assess gene expression changes from RMA-normalized microarray data (40 replicates). Linear regression models were used to fit the neurophysiological hearing measurements to probe-set expression profiles. These data were first subjected to one-way ANOVA, and then linear regression was performed. In addition, the log signal ratio was converted to fold change, and selected gene expression changes were confirmed by relative real-time PCR. Major findings: expression of GABA-A receptor subunit α6 was upregulated with age and hearing loss, whereas subunit α1 was repressed. In addition, GABA-A receptor-associated protein like-1 and like-2 were strongly downregulated with age and hearing impairment. Lastly, gene expression measures were correlated with pathway/network relationships relevant to the inner ear using Pathway Architect, to identify key pathways consistent with the observed gene expression changes. PMID:18455804
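The linear-regression step described above, fitting a phenotypic hearing measure to a probe-set expression profile, reduces to ordinary least squares on one predictor; a miniature sketch with invented values (not the study's microarray data) could be:

```python
from statistics import mean

def ols_fit(x, y):
    """Ordinary least-squares slope and intercept for y ~ x."""
    mx, my = mean(x), mean(y)
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    slope = sxy / sxx
    return slope, my - slope * mx

# Invented data: ABR threshold (dB SPL) against normalized expression
# of one hypothetical GABA-A receptor probe set across aging animals.
expression = [1.0, 1.4, 1.9, 2.3, 2.8, 3.1]
abr_threshold = [35.0, 42.0, 47.0, 55.0, 61.0, 66.0]

slope, intercept = ols_fit(expression, abr_threshold)
print(f"threshold ~ {slope:.1f} * expression + {intercept:.1f}")
```

A positive fitted slope would correspond to the reported pattern of a subunit whose expression rises as hearing thresholds worsen with age.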
Subjective assessments made by test persons were compared to results from a number of objective measurement and calculation methods for the assessment of low frequency noise. Eighteen young persons with normal hearing listened to eight environmental low frequency noises and evaluated the annoyance...
Kar, Sudipta; Kundu, Goutam; Maiti, Shyamal Kumar; Ghosh, Chiranjit; Bazmi, Badruddin Ahamed; Mukhopadhyay, Santanu
Dental caries is one of the major modern-day diseases of dental hard tissue. It may affect both normal and hearing-impaired children. This study aimed to evaluate and compare the prevalence of dental caries in hearing-impaired and normal children of Malda, West Bengal, utilizing the Caries Assessment Spectrum and Treatment (CAST) instrument. In a cross-sectional, case-control study, the dental caries status of 6-12-year-old children was assessed. A statistically significant difference was found between the study (hearing-impaired) and control (normal children) groups: about 30.51% of hearing-impaired children were caries-affected, compared to 15.81% of normal children. Regarding individual caries assessment criteria, nearly all subgroups showed a statistically significant difference, except the sealed tooth structure, internal caries-related discoloration in dentin, and distinct cavitation into dentine subgroups. The dental health of hearing-impaired children was found to be less satisfactory than that of normal children when studied in relation to dental caries status evaluated with CAST.
Jill B Firszt
Monaural hearing induces auditory system reorganization. Imbalanced input also degrades time-intensity cues for sound localization and signal segregation for listening in noise. While there have been studies of bilateral auditory deprivation and later hearing restoration (e.g., cochlear implants), less is known about unilateral auditory deprivation and subsequent hearing improvement. We investigated the effects of long-term congenital unilateral hearing loss on localization, speech understanding, and cortical organization following hearing recovery. Hearing in the congenitally affected ear of a 41-year-old female improved significantly after stapedotomy and reconstruction. Pre-operative hearing threshold levels showed unilateral, mixed, moderately severe to profound hearing loss. The contralateral ear had hearing threshold levels within normal limits. Testing was completed prior to, and three and nine months after, surgery. Measurements were of sound localization with intensity-roved stimuli and speech recognition in various noise conditions. We also evoked magnetic resonance signals with monaural stimulation to the unaffected ear. Activation magnitudes were determined in core, belt, and parabelt auditory cortex regions via an interrupted single-event design. Hearing improvement following 40 years of congenital unilateral hearing loss resulted in substantially improved sound localization and speech recognition in noise. Auditory cortex also reorganized. Contralateral auditory cortex responses were increased after hearing recovery, and the extent of activated cortex was bilateral, including a greater portion of the posterior superior temporal plane. Thus, prolonged predominantly monaural stimulation did not prevent auditory system changes consequent to restored binaural hearing. Results support future research on the effects of unilateral auditory deprivation and plasticity, with consideration for length of deprivation, age at hearing correction, degree and type
Steel, Morrison M.; Papsin, Blake C.; Gordon, Karen A.
Bilateral cochlear implants aim to provide hearing to both ears for children who are deaf and promote binaural/spatial hearing. Benefits are limited by mismatched devices and unilaterally-driven development which could compromise the normal integration of left and right ear input. We thus asked whether children hear a fused image (i.e., 1 vs. 2 sounds) from their bilateral implants and if this "binaural fusion" reduces listening effort. Binaural fusion was assessed by asking 25 deaf children with cochlear implants and 24 peers with normal hearing whether they heard one or two sounds when listening to bilaterally presented acoustic click-trains/electric pulses (250 Hz trains of 36 ms presented at 1 Hz). Reaction times and pupillary changes were recorded simultaneously to measure listening effort. Bilaterally implanted children heard one image of bilateral input less frequently than normal-hearing peers, particularly when intensity levels on each side were balanced. Binaural fusion declined as brainstem asymmetries increased and age at implantation decreased. Children implanted later had access to acoustic input prior to implantation due to progressive deterioration of hearing. Increases in both pupil diameter and reaction time occurred as perception of binaural fusion decreased. Results indicate that, without binaural level cues, children have difficulty fusing input from their bilateral implants to perceive one sound, which costs them increased listening effort. Brainstem asymmetries exacerbate this issue. By contrast, later implantation, reflecting longer access to bilateral acoustic hearing, may have supported development of auditory pathways underlying binaural fusion. Improved integration of bilateral cochlear implant signals for children is required to improve their binaural hearing. PMID:25668423
Cai, Yuexin; Zhao, Fei; Chen, Yuebo; Liang, Maojin; Chen, Ling; Yang, Haidi; Xiong, Hao; Zhang, Xueyuan; Zheng, Yiqing
The purpose of this study was to investigate the effect of symmetrical, asymmetrical and unilateral hearing impairment on music quality perception. Six validated music pieces in the categories of classical music, folk music and pop music were used to assess music quality in terms of its 'pleasantness', 'naturalness', 'fullness', 'roughness' and 'sharpness'. 58 participants with sensorineural hearing loss [20 with unilateral hearing loss (UHL), 20 with bilateral symmetrical hearing loss (BSHL) and 18 with bilateral asymmetrical hearing loss (BAHL)] and 29 normal hearing (NH) subjects participated in the present study. Hearing impaired (HI) participants had greater difficulty in overall music quality perception than NH participants. Participants with BSHL rated music pleasantness and naturalness to be higher than participants with BAHL. Moreover, the hearing thresholds of the better ears from BSHL and BAHL participants as well as the hearing thresholds of the worse ears from BSHL participants were negatively correlated to the pleasantness and naturalness perception. HI participants rated the familiar music pieces higher than unfamiliar music pieces in the three music categories. Music quality perception in participants with hearing impairment appeared to be affected by symmetry of hearing loss, degree of hearing loss and music familiarity when they were assessed using the music quality rating test (MQRT). This indicates that binaural symmetrical hearing is important to achieve a high level of music quality perception in HI listeners. This emphasizes the importance of provision of bilateral hearing assistive devices for people with asymmetrical hearing impairment.
Full Text Available Background and Aim: Families who have a child with hearing deficiency deal with different challenges, and mothers have a greater responsibility towards these children because of their traditional role of caregiver, so they deal with more psychological problems. The aim of this study was to compare psychological well-being and coping styles in mothers of deaf and normal children. Methods: In this cross-sectional, post-event study (causal-comparative method), 30 mothers of deaf students and 30 mothers of normal students from elementary schools of Ardabil, Iran, were selected using available sampling. The Ryff psychological well-being (1989) and Billings and Moos coping styles (1981) questionnaires were used in this study. The data were analyzed using the MANOVA test. Results: We found that in mothers of deaf children, psychological well-being and its components were significantly lower than in mothers of normal children (p<0.01 and p<0.05, respectively). There was also a significant difference between the two groups in terms of cognitive coping style (p<0.01), with mothers of deaf children using the cognitive coping style less. Conclusions: It seems that a child's hearing loss affects the mother's psychological well-being and coping styles; this effect can manifest as psychological problems and lower use of adaptive coping styles.
Gaarden, Marianne; Lorensen, Marlene Ringgaard
Based on new empirical studies, this essay explores how churchgoers listen to sermons with regard to the theological notion that "faith comes from hearing." Through Bakhtinian theories presented by Lorensen and empirical findings presented by Gaarden, the apparently masked agency in preaching… create new meaning and understanding. It is not a room that the listener or the preacher can control or occupy, but a room in which both engage…
de Kok, I.A.
The thesis explores individual differences in listening behavior and how these differences can be used in the development and evaluation of listener response prediction models for embodied conversational agents. The thesis starts with introducing methods to collect multiple perspectives on listening
Ekaterina Nemtchinova's book "Teaching Listening" explores different approaches to teaching listening in second language classrooms. Presenting up-to-date research and theoretical issues associated with second language listening, Nemtchinova explains how these new findings inform everyday teaching and offers practical suggestions…
Paraouty, Nihaad; Ewert, Stephan D; Wallaert, Nicolas; Lorenzi, Christian
Frequency modulation (FM) and amplitude modulation (AM) detection thresholds were measured for a 500-Hz carrier frequency and a 5-Hz modulation rate. For AM detection, FM at the same rate as the AM was superimposed with varying FM depth. For FM detection, AM at the same rate was superimposed with varying AM depth. The target stimuli always contained both amplitude and frequency modulations, while the standard stimuli only contained the interfering modulation. Young and older normal-hearing listeners, as well as older listeners with mild-to-moderate sensorineural hearing loss were tested. For all groups, AM and FM detection thresholds were degraded in the presence of the interfering modulation. AM detection with and without interfering FM was hardly affected by either age or hearing loss. While aging had an overall detrimental effect on FM detection with and without interfering AM, there was a trend that hearing loss further impaired FM detection in the presence of AM. Several models using optimal combination of temporal-envelope cues at the outputs of off-frequency filters were tested. The interfering effects could only be predicted for hearing-impaired listeners. This indirectly supports the idea that, in addition to envelope cues resulting from FM-to-AM conversion, normal-hearing listeners use temporal fine-structure cues for FM detection.
Roman, Adrienne S.; Pisoni, David B.; Kronenberger, William G.; Faulkner, Kathleen F.
Objectives Noise-vocoded speech is a valuable research tool for testing experimental hypotheses about the effects of spectral-degradation on speech recognition in adults with normal hearing (NH). However, very little research has utilized noise-vocoded speech with children with NH. Earlier studies with children with NH focused primarily on the amount of spectral information needed for speech recognition without assessing the contribution of neurocognitive processes to speech perception and spoken word recognition. In this study, we first replicated the seminal findings reported by Eisenberg et al. (2002) who investigated effects of lexical density and word frequency on noise-vocoded speech perception in a small group of children with NH. We then extended the research to investigate relations between noise-vocoded speech recognition abilities and five neurocognitive measures: auditory attention and response set, talker discrimination and verbal and nonverbal short-term working memory. Design Thirty-one children with NH between 5 and 13 years of age were assessed on their ability to perceive lexically controlled words in isolation and in sentences that were noise-vocoded to four spectral channels. Children were also administered vocabulary assessments (PPVT-4 and EVT-2) and measures of auditory attention (NEPSY Auditory Attention (AA) and Response Set (RS) and a talker discrimination task (TD)) and short-term memory (visual digit and symbol spans). Results Consistent with the findings reported in the original Eisenberg et al. (2002) study, we found that children perceived noise-vocoded lexically easy words better than lexically hard words. Words in sentences were also recognized better than the same words presented in isolation. No significant correlations were observed between noise-vocoded speech recognition scores and the PPVT-4 using language quotients to control for age effects. However, children who scored higher on the EVT-2 recognized lexically easy words
Full Text Available Context: Dental caries is one of the major modern-day diseases of dental hard tissue. It may affect both normal and hearing-impaired children. Aims: This study aimed to evaluate and compare the prevalence of dental caries in hearing-impaired and normal children of Malda, West Bengal, utilizing the Caries Assessment Spectrum and Treatment (CAST) instrument. Settings and Design: In a cross-sectional, case-control study, the dental caries status of 6-12-year-old children was assessed. Statistical Analysis Used: Statistical analysis was carried out utilizing the Z-test. Results: A statistically significant difference was found between the studied (hearing-impaired) and control (normal) groups. In the present study, about 30.51% of hearing-impaired children were found to be affected by caries, compared to 15.81% of normal children, a statistically significant difference (P < 0.05). Regarding individual caries assessment criteria, nearly all subgroups showed statistically significant differences, except for the sealed tooth structure, internal caries-related discoloration in dentin, and distinct cavitation into dentine subgroups. Conclusions: The dental health of hearing-impaired children was found to be less satisfactory than that of normal children when assessed for dental caries status with CAST.
... the sounds you want to hear. Assistive listening devices bring certain sounds directly to your ears. This can ... a small room or on a stage. Other devices can bring the sound from your TV, radio, or music ...
Christensen, Flemming; Martin, Geoff; Minnaar, Pauli
A selection procedure was devised in order to select listeners for experiments in which their main task will be to judge multi-channel reproduced sound. 91 participants filled in a web-based questionnaire. 78 of them took part in an assessment of their hearing thresholds, their spatial hearing…, and their verbal production abilities. The listeners displayed large individual differences in their performance. 40 subjects were selected based on the test results. The self-assessed listening habits and experience in the web questionnaire could not predict the results of the selection procedure. Further…, the hearing thresholds did not correlate with the spatial-hearing test. This leads to the conclusion that task-specific performance tests might be the preferable means of selecting a listening panel…
Nagle, Kathleen F; Eadie, Tanya L; Yorkston, Kathryn M
Individuals with adductor spasmodic dysphonia (ADSD) have reported that unfamiliar communication partners appear to judge them as sneaky, nervous or not intelligent, apparently based on the quality of their speech; however, there is minimal research into the actual everyday perspective of listening to ADSD speech. The purpose of this study was to investigate the impressions of listeners hearing ADSD speech for the first time using a mixed-methods design. Everyday listeners were interviewed following sessions in which they made ratings of ADSD speech. A semi-structured interview approach was used and data were analyzed using thematic content analysis. Three major themes emerged: (1) everyday listeners make judgments about speakers with ADSD; (2) ADSD speech does not sound normal to everyday listeners; and (3) rating overall severity is difficult for everyday listeners. Participants described ADSD speech similarly to existing literature; however, some listeners inaccurately extrapolated speaker attributes based solely on speech samples. Listeners may draw erroneous conclusions about individuals with ADSD and these biases may affect the communicative success of these individuals. Results have implications for counseling individuals with ADSD, as well as the need for education and awareness about ADSD. Copyright © 2015 Elsevier Inc. All rights reserved.
Marculino, Carolina Finetti; Rabelo, Camila Maia; Schochat, Eliane
To establish the standard criteria for the Gaps-in-Noise (GIN) test in 9-year-old normal-hearing children; to obtain the mean gap detection thresholds; and to verify the influence of the variables gender and ear on the gap detection thresholds. Forty normal-hearing individuals, 20 male and 20 female, with ages ranging from 9 years to 9 years and 11 months, were evaluated. The procedures performed were: anamnesis, audiological evaluation, acoustic immittance measures (tympanometry and acoustic reflex), Dichotic Digits Test, and GIN test. The results obtained were statistically analyzed. The results revealed similar performance of right and left ears in the population studied. There was also no difference regarding the variable gender. In the subjects evaluated, the mean gap detection thresholds were 4.4 ms for the right ear, and 4.2 ms for the left ear. The values obtained for right and left ear, as well as their standard deviations, can be used as standard criteria for 9-year-old children, regardless of ear or gender.
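Normative criteria like those reported for the GIN test are conventionally derived from the sample mean and standard deviation, with thresholds beyond an upper cutoff (e.g., mean + 2 SD) flagged as abnormal. A minimal sketch of that computation in Python; the sample values and the 2-SD cutoff rule below are illustrative assumptions, not the study's data:

```python
import statistics

def normative_cutoff(thresholds_ms, n_sd=2):
    """Return (mean, sd, upper cutoff) for a set of gap detection
    thresholds in ms; thresholds above mean + n_sd * sd would be
    flagged as outside the normative range."""
    mean = statistics.mean(thresholds_ms)
    sd = statistics.stdev(thresholds_ms)  # sample standard deviation
    return mean, sd, mean + n_sd * sd

# Hypothetical right-ear gap detection thresholds (ms)
sample = [4.0, 4.2, 4.4, 4.6, 4.8]
mean, sd, cutoff = normative_cutoff(sample)
print(round(mean, 1), round(cutoff, 2))  # → 4.4 5.03
```

With real normative data, the same mean and SD pair is what a clinic would publish as its standard criterion for the age group.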
Kan, Alan; Litovsky, Ruth Y
Bilateral cochlear implantation is becoming a standard of care in many clinics. While much benefit has been shown through bilateral implantation, patients who have bilateral cochlear implants (CIs) still do not perform as well as normal hearing listeners in sound localization and understanding speech in noisy environments. This difference in performance can arise from a number of different factors, including the areas of hardware and engineering, surgical precision and pathology of the auditory system in deaf persons. While surgical precision and individual pathology are factors that are beyond careful control, improvements can be made in the areas of clinical practice and the engineering of binaural speech processors. These improvements should be grounded in a good understanding of the sensitivities of bilateral CI patients to the acoustic binaural cues that are important to normal hearing listeners for sound localization and speech in noise understanding. To this end, we review the current state-of-the-art in the understanding of the sensitivities of bilateral CI patients to binaural cues in electric hearing, and highlight the important issues and challenges as they relate to clinical practice and the development of new binaural processing strategies. This article is part of a Special Issue. Copyright © 2014 Elsevier B.V. All rights reserved.
Scarbel, Lucie; Beautemps, Denis; Schwartz, Jean-Luc; Sato, Marc
Speech communication can be viewed as an interactive process involving a functional coupling between sensory and motor systems. One striking example comes from phonetic convergence, when speakers automatically tend to mimic their interlocutor's speech during communicative interaction. The goal of this study was to investigate sensory-motor linkage in speech production in postlingually deaf cochlear-implanted participants and normal-hearing elderly adults through phonetic convergence and imitation. To this aim, two vowel production tasks, with or without instruction to imitate an acoustic vowel, were proposed to three groups: young adults with normal hearing, elderly adults with normal hearing, and postlingually deaf cochlear-implanted patients. The deviation of each participant's f0 from their own mean f0 was measured to evaluate the ability to converge to each acoustic target. Results showed that cochlear-implanted participants have the ability to converge to an acoustic target, both intentionally and unintentionally, albeit to a lower degree than young and elderly participants with normal hearing. By providing evidence for phonetic convergence and speech imitation, these results suggest that, as in young adults, perceptuo-motor relationships are efficient in elderly adults with normal hearing and that cochlear-implanted adults recovered significant perceptuo-motor abilities following cochlear implantation. Copyright © 2017 Elsevier Ltd. All rights reserved.
Farideh Tangestani Zadeh
Full Text Available Background and Aim: The hearing defects in deaf and hearing-impaired students affect not only their communication skills but also cognitive skills such as memory. Hence, the aim of this study was to compare visual working memory in deaf and hearing-impaired students with that in normal counterparts. Method: In the present study, a causal-comparative study using the André Rey test, 30 deaf and 30 hearing-impaired students were compared with 30 students in a normal group, matched based on gender, intelligence, educational grade, and socioeconomic status. Findings: Findings show that there is a significant difference between the three groups' subjects (p<0.05). Conclusion: The performance of deaf and hard-of-hearing students on the visual working memory task was weaker in comparison with their normal counterparts, while the deaf and hard-of-hearing groups performed similarly. With better identification and understanding of the factors that affect the development of this cognitive ability, we can offer new methods of teaching and reduce many of the disadvantages faced by this group of people in the different fields of cognitive science.
Shim, Hyunyong; Lee, Seungwan; Koo, Miseung; Kim, Jinsook
To prevent noise-induced hearing loss caused by listening to music with personal listening devices among young adults, this study aimed to measure output levels of an MP3 player and to identify preferred listening levels (PLLs) depending on earphone type, music genre, and listening duration. Twenty-two normal-hearing young adults (mean=18.82, standard deviation=0.57) participated. Each participant was asked to select his or her most preferred listening level when listening to Korean ballad or dance music with an earbud or an over-the-ear earphone for 30 or 60 minutes. One side of the earphone was connected to the participant's better ear and the other side was connected to a sound level meter via a 2 or 6 cc coupler. Depending on earphone type, music genre, and listening duration, A-weighted equivalent continuous sound levels (LAeq) and maximum time-weighted A-frequency sound levels in dBA were measured. Neither main nor interaction effects of the PLLs among the three factors were significant. Overall output levels of earbuds were about 10-12 dBA greater than those of over-the-ear earphones. The PLLs were 1.73 dBA greater for earbuds than for over-the-ear earphones. The average PLL for ballad was higher than for dance music. The PLLs at LAeq for both music genres were greatest at 0.5 kHz, followed by 1, 0.25, 2, 4, 0.125, and 8 kHz, in that order. The PLLs did not differ significantly when listening to Korean ballad or dance music as functions of earphone type, music genre, and listening duration. However, over-the-ear earphones seemed more suitable for preventing noise-induced hearing loss when listening to music, showing lower PLLs, possibly due to isolation from background noise by covering the ears.
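The LAeq values reported in studies like this one are equivalent continuous levels: the single steady level that carries the same A-weighted energy as the fluctuating music. Combining a series of short-term A-weighted levels into one LAeq is a standard energy average, sketched generically below (this is the textbook formula, not the authors' measurement code):

```python
import math

def laeq(levels_dba):
    """Energy-average a list of short-term A-weighted levels (dBA)
    into one equivalent continuous level, LAeq.
    LAeq = 10 * log10( mean of 10^(Li/10) )."""
    energies = [10 ** (level / 10) for level in levels_dba]
    return 10 * math.log10(sum(energies) / len(energies))

# Two equal 80-dBA segments average to 80 dBA...
print(round(laeq([80.0, 80.0]), 1))  # → 80.0
# ...but the energy average is dominated by the louder segment,
# so 70 and 80 dBA average to well above the arithmetic mean of 75.
print(round(laeq([70.0, 80.0]), 1))  # → 77.4
```

This is why occasional loud passages raise a listener's measured exposure far more than the arithmetic mean of the levels would suggest.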
Full Text Available In this study, we investigated the relation between the use of hearing aids at the initial stages of hearing loss and age-related changes in the auditory and cognitive abilities of elderly persons. Twelve healthy elderly persons participated in an annual auditory and cognitive longitudinal examination for three years. According to their hearing level, they were divided into three subgroups: the normal hearing group, the hearing loss without hearing aids group, and the hearing loss with hearing aids group. All the subjects underwent four tests: pure-tone audiometry, a syllable intelligibility test, a dichotic listening test (DLT), and the Wechsler Adult Intelligence Scale-Revised (WAIS-R) Short Forms. Comparison between the three groups revealed that the hearing loss without hearing aids group showed the lowest scores for the performance tasks, in contrast to the hearing level and intelligibility results. The other groups showed no significant difference in the WAIS-R subtests. This result indicates that prescription of a hearing aid during the early stages of hearing loss is related to the retention of cognitive abilities in such elderly people. However, there were no statistically significant correlations between the auditory and cognitive tasks.
Gordon-Salant, Sandra; Yeni-Komshian, Grace H.; Pickett, Erin J.; Fitzgibbons, Peter J.
This study examined the ability of older and younger listeners to perceive contrastive syllable stress in unaccented and Spanish-accented cognate bi-syllabic English words. Younger listeners with normal hearing, older listeners with normal hearing, and older listeners with hearing impairment judged recordings of words that contrasted in stress that conveyed a noun or verb form (e.g., CONduct/conDUCT), using two paradigms differing in the amount of semantic support. The stimuli were spoken by four speakers: one native English speaker and three Spanish-accented speakers (one moderately and two mildly accented). The results indicate that all listeners showed the lowest accuracy scores in responding to the most heavily accented speaker and the highest accuracy in judging the productions of the native English speaker. The two older groups showed lower accuracy in judging contrastive lexical stress than the younger group, especially for verbs produced by the most accented speaker. This general pattern of performance was observed in the two experimental paradigms, although performance was generally lower in the paradigm without semantic support. The findings suggest that age-related difficulty in adjusting to deviations in contrastive bi-syllabic lexical stress produced with a Spanish accent may be an important factor limiting perception of accented English by older people. PMID:27036250
Ketelaar, Lizet; Wiefferink, Carin H; Frijns, Johan H M; Broekhof, Evelien; Rieffe, Carolien
Moral emotions such as shame, guilt and pride are the result of an evaluation of the own behavior as (morally) right or wrong. The capacity to experience moral emotions is thought to be an important driving force behind socially appropriate behavior. The relationship between moral emotions and social behavior in young children has not been studied extensively in normally hearing (NH) children, let alone in those with a hearing impairment. This study compared young children with hearing impairments who have a cochlear implant (CI) to NH peers regarding the extent to which they display moral emotions, and how this relates to their social functioning and language skills. Responses of 184 NH children and 60 children with CI (14-61 months old) to shame-/guilt- and pride-inducing events were observed. Parents reported on their children's social competence and externalizing behavior, and experimenters observed children's cooperative behavior. To examine the role of communication in the development of moral emotions and social behavior, children's language skills were assessed. Results show that children with CI displayed moral emotions to a lesser degree than NH children. An association between moral emotions and social functioning was found in the NH group, but not in the CI group. General language skills were unrelated to moral emotions in the CI group, yet emotion vocabulary was related to social functioning in both groups of children. We conclude that facilitating emotion language skills has the potential to promote children's social functioning, and could contribute to a decrease in behavioral problems in children with CI specifically. Future studies should examine in greater detail which factors are associated with the development of moral emotions, particularly in children with CI. Some possible directions for future research are discussed.
Smith, Donald E. P.; And Others
Repeated impedance measures were given over five weeks to 11 autistic, 20 learning-disabled, and 20 normal children. A repeated measures analysis of variance led to the conclusion that fluctuating, negative middle ear pressure greater than normal characterizes both autistic and learning-disabled children with the more abnormal pressures typical in…
Hughes, Sarah E; Hutchings, Hayley A; Rapport, Frances L; McMahon, Catherine M; Boisvert, Isabelle
Individuals with hearing loss often report a need for increased effort when listening, particularly in challenging acoustic environments. Despite audiologists' recognition of the impact of listening effort on individuals' quality of life, there are currently no standardized clinical measures of listening effort, including patient-reported outcome measures (PROMs). To generate items and content for a new PROM, this qualitative study explored the perceptions, understanding, and experiences of listening effort in adults with severe-profound sensorineural hearing loss before and after cochlear implantation. Three focus groups (1 to 3) were conducted. Purposive sampling was used to recruit 17 participants from a cochlear implant (CI) center in the United Kingdom. The participants included adults (n = 15, mean age = 64.1 years, range 42 to 84 years) with acquired severe-profound sensorineural hearing loss who satisfied the UK's national candidacy criteria for cochlear implantation and their normal-hearing significant others (n = 2). Participants were CI candidates who used hearing aids (HAs) and were awaiting CI surgery or CI recipients who used a unilateral CI or a CI and contralateral HA (CI + HA). Data from a pilot focus group conducted with 2 CI recipients were included in the analysis. The data, verbatim transcripts of the focus group proceedings, were analyzed qualitatively using constructivist grounded theory (GT) methodology. A GT of listening effort in cochlear implantation was developed from participants' accounts. The participants provided rich, nuanced descriptions of the complex and multidimensional nature of their listening effort. Interpreting and integrating these descriptions through GT methodology, listening effort was described as the mental energy required to attend to and process the auditory signal, as well as the effort required to adapt to, and compensate for, a hearing loss. Analyses also suggested that listening effort for most participants was
Full Text Available ABSTRACT: Introduction & Objective: A common auditory complaint of multiple sclerosis patients is misunderstanding speech in the presence of background noise. Evidence from animal and human studies has suggested that the medial olivocochlear bundle may play an important role in hearing in noise. The medial olivocochlear bundle function can be evaluated by the suppression effect of transient otoacoustic emission in response to contralateral acoustic stimulation. The present study was conducted to investigate the suppression effect of transient otoacoustic emission in multiple sclerosis patients. Materials & Methods: This analytical case-control study was conducted on 34 multiple sclerosis patients (24 female, 10 male; aged 20-50 years) and 34 controls matched for age and gender in the Faculty of Rehabilitation, Tehran University of Medical Sciences, in 2006. All cases were selected in a simple random manner. The suppression effect of transient otoacoustic emission was evaluated by comparing the transient otoacoustic emission levels with and without contralateral acoustic stimulation. Data were analyzed using SPSS software and the independent t-test. Results: There was no significant difference in the transient otoacoustic emission levels of the two groups, but a significantly reduced suppression effect of transient otoacoustic emission was found in multiple sclerosis patients compared with the controls. Conclusion: Outer hair cell activity in multiple sclerosis patients was normal, but these patients presented low activity of the medial olivocochlear bundle system, which could affect their ability to hear in the presence of background noise.
Sharon M Abel
Full Text Available Integrated hearing protection systems are designed to enhance free field and radio communications during military operations while protecting against the damaging effects of high-level noise exposure. A study was conducted to compare the effect of increasing the radio volume on the intelligibility of speech over the radios of two candidate systems, in-ear and muff-style, in 85-dBA speech babble noise presented free field. Twenty normal-hearing, English-fluent subjects, half male and half female, were tested in same gender pairs. Alternating as talker and listener, their task was to discriminate consonant-vowel-consonant syllables that contrasted either the initial or final consonant. Percent correct consonant discrimination increased with increases in the radio volume. At the highest volume, subjects achieved 79% with the in-ear device but only 69% with the muff-style device, averaged across the gender of listener/talker pairs and consonant position. Although there was no main effect of gender, female listener/talkers showed a 10% advantage for the final consonant and male listener/talkers showed a 1% advantage for the initial consonant. These results indicate that normal hearing users can achieve reasonably high radio communication scores with integrated in-ear hearing protection in moderately high-level noise that provides both energetic and informational masking. The adequacy of the range of available radio volumes for users with hearing loss has yet to be determined.
Kowalewski, Borys; MacDonald, Ewen; Strelcyk, Olaf
Most state-of-the-art hearing aids apply multi-channel dynamic-range compression (DRC). Such designs have the potential to emulate, at least to some degree, the processing that takes place in the healthy auditory system. One way to assess hearing-aid performance is to measure speech intelligibility. However, due to the complexity of speech and its robustness to spectral and temporal alterations, the effects of DRC on speech perception have been mixed and controversial. The goal of the present study was to obtain a clearer understanding of the interplay between hearing loss and DRC by means… Outcomes were simulated using the auditory processing model of Jepsen et al. (2008) with the front end modified to include effects of hearing impairment and DRC. The results were compared to experimental data from normal-hearing and hearing-impaired listeners…
Bouserhal, Rachel E.; MacDonald, Ewen; Falk, Tiago H.
in voice level and fundamental frequency in noise and with varying talker-to-listener distance. Study sample: Twelve participants with a mean age of 28 participated in this study. Results: Compared to existing data, results show a trend similar to the open ear condition with the exception of the occluded...
Versfeld, Niek J.; Dreschler, Wouter A.
A conventional measure to determine the ability to understand speech in noisy backgrounds is the so-called speech reception threshold (SRT) for sentences. It yields the signal-to-noise ratio (in dB) for which half of the sentences are correctly perceived. The SRT defines to what degree speech must be audible to a listener in order to become just intelligible. There are indications that elderly listeners have greater difficulty in understanding speech in adverse listening conditions than young listeners. This may be partly due to the differences in hearing sensitivity (presbycusis), hence audibility, but other factors, such as temporal acuity, may also play a significant role. A potential measure for the temporal acuity may be the threshold to which speech can be accelerated, or compressed in time. A new test is introduced where the speech rate is varied adaptively. In analogy to the SRT, the time-compression threshold (or TCT) then is defined as the speech rate (expressed in syllables per second) for which half of the sentences are correctly perceived. In experiment I, the TCT test is introduced and normative data are provided. In experiment II, four groups of subjects (young and elderly normal-hearing and hearing-impaired subjects) participated, and the SRT's in stationary and fluctuating speech-shaped noise were determined, as well as the TCT. The results show that the SRT in fluctuating noise and the TCT are highly correlated. All tests indicate that, even after correction for the hearing loss, elderly normal-hearing subjects perform worse than young normal-hearing subjects. The results indicate that the use of the TCT test or the SRT test in fluctuating noise is preferred over the SRT test in stationary noise.
Our ability to listen selectively to single sound sources in complex auditory environments is termed 'auditory stream segregation.' This ability is affected by peripheral disorders such as hearing loss, as well as by plasticity in central processing such as occurs with musical training. Brain plasticity induced by musical training can enhance the ability to segregate sound, leading to improvements in a variety of auditory abilities. The melody segregation ability of 12 cochlear-implant recipients was tested using a new method to determine the perceptual distance needed to segregate a simple 4-note melody from a background of interleaved random-pitch distractor notes. In experiment 1, participants rated the difficulty of segregating the melody from the distractor notes. Four physical properties of the distractor notes were changed. In experiment 2, listeners were asked to rate the dissimilarity between melody patterns whose notes differed on the four physical properties simultaneously. Multidimensional scaling analysis transformed the dissimilarity ratings into perceptual distances. Regression between physical and perceptual cues then derived the minimal perceptual distance needed to segregate the melody. The most efficient streaming cue for CI users was loudness. Compared with normal-hearing listeners without musical backgrounds, CI users needed a greater difference on the perceptual dimension correlated with the temporal envelope for stream segregation. No differences in streaming efficiency were found between the perceptual dimensions linked to the F0 and the spectral envelope. Combined with our previous results in normally-hearing musicians and non-musicians, the results show that differences in training, as well as differences in peripheral auditory processing (hearing impairment and the use of a hearing device), influence the way that listeners use different acoustic cues for segregating interleaved musical streams.
Prentiss, Sandra M; Friedland, David R; Nash, John J; Runge, Christina L
Cochlear implants have shown vast improvements in speech understanding for those with severe to profound hearing loss; however, music perception remains a challenge for electric hearing. It is unclear whether the difficulties arise from limitations of sound processing, the nature of a damaged auditory system, or a combination of both. To examine music perception performance with different acoustic and electric hearing configurations. Chord discrimination and timbre perception were tested in subjects representing four daily-use listening configurations: unilateral cochlear implant (CI), contralateral bimodal (CIHA), bilateral hearing aid (HAHA) and normal-hearing (NH) listeners. A same-different task was used for discrimination of two chords played on piano. Timbre perception was assessed using a 10-instrument forced-choice identification task. Fourteen adults were included in each group, none of whom were professional musicians. The number of correct responses was divided by the total number of presentations to calculate scores in percent correct. Data analyses were performed with Kruskal-Wallis one-way analysis of variance and linear regression. Chord discrimination showed a narrow range of performance across groups, with mean scores ranging between 72.5% (CI) and 88.9% (NH). Significant differences were seen between the NH and all hearing-impaired groups. Both the HAHA and CIHA groups performed significantly better than the CI group, and no significant differences were observed between the HAHA and CIHA groups. Timbre perception was significantly poorer for the hearing-impaired groups (mean scores ranged from 50.3% to 73.9%) compared to NH (95.2%). Significantly better performance was observed in the HAHA group as compared to both groups with electric hearing (CI and CIHA). There was no significant difference in performance between the CIHA and CI groups. Timbre perception was a significantly more difficult task than chord discrimination for both the CI and CIHA
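The scoring rule the abstract describes (correct responses divided by total presentations, expressed in percent) can be sketched directly; the counts below are illustrative values chosen to reproduce the CI group's reported 72.5% mean, not data from the study:

```python
def percent_correct(n_correct, n_presented):
    """Score in percent correct: correct responses / total presentations."""
    if n_presented <= 0:
        raise ValueError("need at least one presentation")
    return 100.0 * n_correct / n_presented

# Illustrative counts for one listener on the chord-discrimination task
score = percent_correct(29, 40)  # 72.5
```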
Aalfs, CM; Oosterwijk, JC; VanSchooneveld, MJ; Begeman, CJ; Wabeke, KB; Hennekam, RCM
Two unrelated, adult females with normal intelligence are described. They show a similar clinical picture with a long and narrow face, congenital cataract, microphthalmia, microcornea, a high nasal bridge, a short nose, a broad nasal tip, a long philtrum, bilateral hearing loss, persistent primary
Huttunen, Kerttu; Ryder, Nuala
This study explored the use of mental state and emotion terms and other evaluative expressions in the story generation of 65 children (aged 2-8 years) with normal hearing (NH) and 11 children (aged 3-7 years) using a cochlear implant (CI). Children generated stories on the basis of sets of sequential pictures. The stories of the children with CI…
Fitzpatrick, Elizabeth M; Gaboury, Isabelle; Durieux-Smith, Andrée; Coyle, Doug; Whittingham, JoAnne; Nassrallah, Flora
Children with unilateral hearing loss (UHL) are being diagnosed at younger ages because of newborn hearing screening. Historically, they have been considered at risk for difficulties in listening and language development. Little information is available on contemporary cohorts of children identified in the early months of life. We examined auditory and language acquisition outcomes in a contemporary cohort of early-identified children with UHL and compared their outcomes at preschool age with peers with mild bilateral loss and with normal hearing. As part of the Mild and Unilateral Hearing Loss in Children Study, we collected auditory and spoken language outcomes on children with unilateral hearing loss, mild bilateral hearing loss, and normal hearing over a four-year period. This report provides a cross-sectional analysis of results at age 48 months. A total of 120 children (38 unilateral, 31 bilateral mild, 51 normal hearing) were enrolled in the study from 2010 to 2015. Children entered the study at ages between 12 and 36 months and were followed until age 36-48 months. The median age of identification of hearing loss was 3.4 months (IQR: 2.0, 5.5) for the unilateral group and 3.6 months (IQR: 2.7, 5.9) for the mild bilateral group. Families completed an intake form at enrolment to provide baseline child and family-related characteristics. Data on amplification fitting and use were collected via parent questionnaires at each annual assessment interval. This study involved a range of auditory development and language measures. For this report, we focus on the end-of-follow-up results from two auditory development questionnaires and three standardized speech-language assessments. Assessments included in this report were completed at a median age of 47.8 months (IQR: 38.8, 48.5). Using ANOVA, we examined auditory and language outcomes in children with UHL and compared their scores to children with mild bilateral hearing loss and those with normal hearing. On most
Examines the problem of acoustics in school classrooms; the problems it creates for student learning, particularly for students with hearing problems; and the impediments to achieving acceptable acoustical levels for school classrooms. Acoustic guidelines are explored and some remedies for fixing sound problems are highlighted.
Won, Jong Ho; Jones, Gary L.; Drennan, Ward R.; Jameyson, Elyse M.; Rubinstein, Jay T.
Spectral-ripple discrimination has been used widely for psychoacoustical studies in normal-hearing, hearing-impaired, and cochlear implant listeners. The present study investigated the perceptual mechanism for spectral-ripple discrimination in cochlear implant listeners. The main goal of this study was to determine whether cochlear implant listeners use a local intensity cue or global spectral shape for spectral-ripple discrimination. The effect of electrode separation on spectral-ripple discrimination was also evaluated. Results showed that it is highly unlikely that cochlear implant listeners depend on a local intensity cue for spectral-ripple discrimination. A phenomenological model of spectral-ripple discrimination, as an “ideal observer,” showed that a perceptual mechanism based on discrimination of a single intensity difference cannot account for performance of cochlear implant listeners. Spectral modulation depth and electrode separation were found to significantly affect spectral-ripple discrimination. The evidence supports the hypothesis that spectral-ripple discrimination involves integrating information from multiple channels. PMID:21973363
Kalathottukaren, Rose Thomas; Purdy, Suzanne C; Ballard, Elaine
Auditory development in children with hearing loss, including the perception of prosody, depends on having adequate input from cochlear implants and/or hearing aids. Lack of adequate auditory stimulation can lead to delayed speech and language development. Nevertheless, prosody perception and production in people with hearing loss have received less attention than other aspects of language. The perception of auditory information conveyed through prosody using variations in the pitch, amplitude, and duration of speech is not usually evaluated clinically. This study (1) compared prosody perception and production abilities in children with hearing loss and children with normal hearing; and (2) investigated the effect of age, hearing level, and musicality on prosody perception. Participants were 16 children with hearing loss and 16 typically developing controls matched for age and gender. Fifteen of the children with hearing loss were tested while using amplification (n = 9 hearing aids, n = 6 cochlear implants). Six receptive subtests of the Profiling Elements of Prosody in Speech-Communication (PEPS-C), the Child Paralanguage subtest of Diagnostic Analysis of Nonverbal Accuracy 2 (DANVA 2), and the Contour and Interval subtests of the Montreal Battery of Evaluation of Amusia (MBEA) were used. Audio recordings of the children's reading samples were rated using a perceptual prosody rating scale by nine experienced listeners who were blinded to the children's hearing status. Thirty-two children participated: 16 with hearing loss (mean age = 8.71 yr) and 16 age- and gender-matched typically developing children with normal hearing (mean age = 8.87 yr). Assessments were completed in one session lasting 1-2 hours in a quiet room. Test items were presented using a laptop computer through a loudspeaker at a comfortable listening level. For children with hearing loss using hearing instruments, all tests were completed with hearing devices set at their everyday listening setting. All PEPS
Carla Gentile Matas
PURPOSE: To evaluate the stability of the parameters of auditory evoked potentials in normal adults. METHODS: Forty-nine normal subjects aged 18 to 40 years (25 female, 24 male) underwent audiological and electrophysiological evaluation (auditory brainstem response - ABR, middle latency response - MLR, and cognitive potential - P300). Subjects were reassessed three months after the initial evaluation. RESULTS: Significant differences were observed between genders for the latencies of waves III and V and the interpeaks I-III and I-V of the ABR, and for the N2-P3 amplitude of the P300. No significant differences were found between the initial and final assessments for the parameters of the ABR, MLR (Na and Pa latencies and Na-Pa amplitude), and P300 (P300 latency). CONCLUSION: Except for the N2-P3 amplitude, the parameters of the ABR, MLR, and P300 were stable in normal adults over a three-month period.
Miller, Christi W.; Stewart, Erin K.; Wu, Yu-Hsiang; Bishop, Christopher; Bentler, Ruth A.; Tremblay, Kelly
Purpose: This study evaluated the relationship between working memory (WM) and speech recognition in noise with different noise types as well as in the presence of visual cues. Method: Seventy-six adults with bilateral, mild to moderately severe sensorineural hearing loss (mean age: 69 years) participated. Using a cross-sectional design, 2…
Fuller, Christina; Free, Rolien; Maat, Bert; Baskent, Deniz
In normal-hearing listeners, musical background has been observed to change the sound representation in the auditory system and produce enhanced performance in some speech perception tests. Based on these observations, it has been hypothesized that musical background can influence sound and speech
Temporal and spatio-temporal vibrotactile displays for voice fundamental frequency: an initial evaluation of a new vibrotactile speech perception aid with normal-hearing and hearing-impaired individuals.
Auer, E T; Bernstein, L E; Coulter, D C
Four experiments were performed to evaluate a new wearable vibrotactile speech perception aid that extracts fundamental frequency (F0) and displays the extracted F0 as a single-channel temporal or an eight-channel spatio-temporal stimulus. Specifically, we investigated the perception of intonation (i.e., question versus statement) and emphatic stress (i.e., stress on the first, second, or third word) under Visual-Alone (VA), Visual-Tactile (VT), and Tactile-Alone (TA) conditions and compared performance using the temporal and spatio-temporal vibrotactile display. Subjects were adults with normal hearing in experiments I-III and adults with severe to profound hearing impairments in experiment IV. Both versions of the vibrotactile speech perception aid successfully conveyed intonation. Vibrotactile stress information was successfully conveyed, but vibrotactile stress information did not enhance performance in VT conditions beyond performance in VA conditions. In experiment III, which involved only intonation identification, a reliable advantage for the spatio-temporal display was obtained. Differences between subject groups were obtained for intonation identification, with more accurate VT performance by those with normal hearing. Possible effects of long-term hearing status are discussed.
Gilley, Phillip M; Sharma, Mridula; Purdy, Suzanne C
We sought to examine whether oscillatory EEG responses to a speech stimulus in both quiet and noise were different in children with listening problems than in children with normal hearing. We employed a high-resolution spectral-temporal analysis of the cortical auditory evoked potential in response to a 150 ms speech sound /da/ in quiet and at 3 dB SNR in 21 typically developing children (mean age = 10.7 years, standard deviation = 1.7) and 44 children with reported listening problems (LP) in the absence of hearing loss (mean age = 10.3 years, standard deviation = 1.6). Children with LP were assessed for auditory processing disorder (APD), of whom 24 children had APD and 20 children did not. Peak latencies, magnitudes, and frequencies were compared between these groups. Children with LP had frequency shifts in the theta and alpha bands relative to typically developing children, consistent with differences in cortical oscillatory activity associated with listening problems in this population of children. Published by Elsevier Ireland Ltd.
Cai, Ting; McPherson, Bradley; Li, Caiwei; Yang, Feng
Conductive hearing loss simulations have attempted to estimate the speech-understanding difficulties of children with otitis media with effusion (OME). However, the validity of this approach has not been evaluated. The research aim of the present study was to investigate whether a simple, frequency-specific, attenuation-based simulation of OME-related hearing loss was able to reflect the actual effects of conductive hearing loss on speech perception. Forty-one school-age children with OME-related hearing loss were recruited. Each child with OME was matched with a same-sex and same-age counterpart with normal hearing to make a participant pair. Pure-tone threshold differences at octave frequencies from 125 to 8000 Hz for every participant pair were used as the simulation attenuation levels for the normal-hearing children. Another group of 41 school-age otologically normal children were recruited as a control group without actual or simulated hearing loss. The Mandarin Hearing in Noise Test was utilized, and sentence recall accuracy at four signal-to-noise ratios (SNRs) considered representative of classroom-listening conditions was derived, as well as reception thresholds for sentences (RTS) in quiet and in noise using adaptive protocols. The speech perception in quiet and in noise of children with simulated OME-related hearing loss was significantly poorer than that of otologically normal children. Analysis showed that RTS in quiet of children with OME-related hearing loss and of children with simulated OME-related hearing loss was significantly correlated and comparable. A repeated-measures analysis suggested that sentence recall accuracy obtained at 5-dB SNR, 0-dB SNR, and -5-dB SNR was similar between children with actual and simulated OME-related hearing loss. However, RTS in noise in children with OME was significantly better than that for children with simulated OME-related hearing loss. The present frequency-specific, attenuation-based simulation method reflected
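The simulation approach described above, in which the pure-tone threshold differences of each matched pair serve as frequency-specific attenuation levels for the normal-hearing child, can be sketched as follows. Function names, the zero floor, and the audiogram values are illustrative assumptions, not details from the study:

```python
# Octave frequencies used in the study, in Hz
FREQS = [125, 250, 500, 1000, 2000, 4000, 8000]

def simulation_attenuation(ome_thresholds, nh_thresholds):
    """Per-frequency attenuation in dB for the normal-hearing child:
    the OME child's pure-tone threshold minus the matched NH child's,
    floored at 0 dB so a better OME threshold never adds gain."""
    return {f: max(0.0, ome_thresholds[f] - nh_thresholds[f])
            for f in FREQS}

# Illustrative audiograms (dB HL) for one matched pair
ome = {125: 30, 250: 35, 500: 35, 1000: 30, 2000: 25, 4000: 20, 8000: 25}
nh = {f: 5 for f in FREQS}
attenuation = simulation_attenuation(ome, nh)  # e.g. 30 dB at 250 Hz
```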
Smith, Spencer B; Cone, Barbara
To determine if active listening modulates the strength of the medial olivocochlear (MOC) reflex in children. Click-evoked otoacoustic emissions (CEOAEs) were recorded from the right ear in quiet and in four test conditions: one with contralateral broadband noise (BBN) only, and three with active listening tasks wherein attention was directed to speech embedded in contralateral BBN. Fifteen typically-developing children (ranging in age from 8 to 14 years) with normal hearing. CEOAE levels were reduced in every condition with a contralateral acoustic stimulus (CAS) when compared to preceding quiet conditions. There was an additional systematic decrease in CEOAE level with increased listening task difficulty, although this effect was very small. These CEOAE level differences were most apparent in the 8-18 ms region after click onset. Active listening may change the strength of the MOC reflex in children, although the effects reported here are very subtle. Further studies are needed to verify that task difficulty modulates the activity of the MOC reflex in children.
Johnson, Cheryl DeConde
Emphasis on classroom listening has gained importance for all children and especially for those with hearing loss and special listening needs. The rationale can be supported from trends in educational placements, the Response to Intervention initiative, student performance and accountability, the role of audition in reading, and improvement in hearing technologies. Speech-language pathologists have an instrumental role advocating for the accommodations that are necessary for effective listening for these children in school. To identify individual listening needs and make relevant recommendations for accommodations, a classroom listening assessment is suggested. Components of the classroom listening assessment include observation, behavioral assessment, self-assessment, and classroom acoustics measurements. Together, with a strong rationale, the results can be used to implement a plan that results in effective classroom listening for these children.
Desmond, Jill M; Collins, Leslie M; Throckmorton, Chandra S
Many cochlear implant (CI) listeners experience decreased speech recognition in reverberant environments [Kokkinakis et al., J. Acoust. Soc. Am. 129(5), 3221-3232 (2011)], which may be caused by a combination of self- and overlap-masking [Bolt and MacDonald, J. Acoust. Soc. Am. 21(6), 577-580 (1949)]. Determining the extent to which these effects decrease speech recognition for CI listeners may influence reverberation mitigation algorithms. This study compared speech recognition with ideal self-masking mitigation, with ideal overlap-masking mitigation, and with no mitigation. Under these conditions, mitigating either self- or overlap-masking resulted in significant improvements in speech recognition for both normal hearing subjects utilizing an acoustic model and for CI listeners using their own devices.
Keilmann, Annerose; Friese, Barbara; Lässig, Anne; Hoffmann, Vanessa
The introduction of neonatal hearing screening and the increasingly early age at which children can receive a cochlear implant have intensified the need for a validated questionnaire to assess the speech production of children aged 0‒18 months. Such a questionnaire has been created, the LittlEARS® Early Speech Production Questionnaire (LEESPQ). This study aimed to validate a second, revised edition of the LEESPQ. Questionnaires were returned for 362 children with normal hearing. Completed questionnaires were analysed to determine whether the LEESPQ is reliable, prognostically accurate, and internally consistent, and whether gender or multilingualism affects total scores. Total scores correlated positively with age. The LEESPQ is reliable, accurate, and consistent, and independent of gender or lingual status. A norm curve was created. This second version of the LEESPQ is thus a valid instrument for assessing the speech production development of children with normal hearing aged 0‒18 months, regardless of gender, and may be a useful tool to monitor the development of paediatric hearing device users.
Zaar, Johannes; Schmitt, Nicola; Derleth, Ralph-Peter
This study investigated the influence of hearing-aid (HA) and cochlear-implant (CI) processing on consonant perception in normal-hearing (NH) listeners. Measured data were compared to predictions obtained with a speech perception model [Zaar and Dau (2017). J. Acoust. Soc. Am. 141, 1051–1064] that combines an auditory processing front end with a correlation-based template-matching back end. In terms of HA processing, effects of strong nonlinear frequency compression and impulse-noise suppression were measured in 10 NH listeners using consonant-vowel stimuli. Regarding CI processing, the consonant perception data from DiNino et al. [(2016). J. Acoust. Soc. Am. 140, 4404-4418] were considered, which were obtained with noise-vocoded vowel-consonant-vowel stimuli in 12 NH listeners. The inputs to the model were the same stimuli as were used in the corresponding experiments. The model predictions obtained...
Manning, Candice; Mermagen, Timothy; Scharine, Angelique
Military personnel are at risk for hearing loss due to noise exposure during deployment (USACHPPM, 2008). Despite mandated use of hearing protection, hearing loss and tinnitus are prevalent due to reluctance to use hearing protection. Bone conduction headsets can offer good speech intelligibility for normal-hearing (NH) listeners while allowing the ears to remain open in quiet environments and the use of hearing protection when needed. Those who suffer from tinnitus, the experience of perceiving a sound not produced by an external source, often show degraded speech recognition; however, it is unclear whether this is a result of decreased hearing sensitivity or increased distractibility (Moon et al., 2015). It has been suggested that the vibratory stimulation of a bone conduction headset might ameliorate the effects of tinnitus on speech perception; however, there is currently no research to support or refute this claim (Hoare et al., 2014). Speech recognition of words presented over air conduction and bone conduction headsets was measured for three groups of listeners: NH, sensorineural hearing-impaired, and/or tinnitus sufferers. Three speech-to-noise ratios (SNR = 0, -6, -12 dB) were created by embedding speech items in pink noise. Better speech recognition performance was observed with the bone conduction headset regardless of hearing profile, and speech intelligibility was a function of SNR. Discussion will include study limitations and the implications of these findings for those serving in the military. Published by Elsevier B.V.
Becoming a native listener is the necessary precursor to becoming a native speaker. Babies in the first year of life undertake a remarkable amount of work; by the time they begin to speak, they have perceptually mastered the phonological repertoire and phoneme co-occurrence probabilities of the
... Does a hearing problem cause you difficulty when listening to TV or radio? Do you feel that any difficulty with your hearing limits or hampers your personal or social life? Does a hearing problem cause you difficulty ...
Lundbeck, Micha; Grimm, Giso; Hohmann, Volker
In contrast to static sounds, spatially dynamic sounds have received little attention in psychoacoustic research so far. This holds true especially for acoustically complex (reverberant, multisource) conditions and impaired hearing. The current study therefore investigated the influence of reverb...
Smith, Sherri L; Pichora-Fuller, M Kathleen
Listeners with hearing loss commonly report having difficulty understanding speech, particularly in noisy environments. Their difficulties could be due to auditory and cognitive processing problems. Performance on speech-in-noise tests has been correlated with reading working memory span (RWMS), a measure often chosen to avoid the effects of hearing loss. If the goal is to assess the cognitive consequences of listeners' auditory processing abilities, however, then listening working memory span (LWMS) could be a more informative measure. Some studies have examined the effects of different degrees and types of masking on working memory, but less is known about the demands placed on working memory depending on the linguistic complexity of the target speech or the task used to measure speech understanding in listeners with hearing loss. Compared to RWMS, LWMS measures using different speech targets and maskers may provide a more ecologically valid approach. To examine the contributions of RWMS and LWMS to speech understanding, we administered two working memory measures (a traditional RWMS measure and a new LWMS measure) and a battery of tests varying in the linguistic complexity of the speech materials, the presence of babble masking, and the task. Participants were a group of younger listeners with normal hearing and two groups of older listeners with hearing loss (n = 24 per group). There was a significant group difference and a wider range in performance on the LWMS than on the RWMS measure. There was a significant correlation between the two working memory measures only for the oldest listeners with hearing loss. Notably, there were only a few significant correlations among the working memory and speech understanding measures. These findings suggest that working memory measures reflect individual differences that are distinct from those tapped by these measures of speech understanding.
Lambert, Justin; Ghadry-Tavi, Rouzbeh; Knuff, Kate; Jutras, Marc; Siever, Jodi; Mick, Paul; Roque, Carolyn; Jones, Gareth; Little, Jonathan; Miller, Harry; Van Bergen, Colin; Kurtz, Donna; Murphy, Mary Ann; Jones, Charlotte Ann
Hearing loss (HL) is a disability associated with poorer health-related quality of life, including an increased risk for loneliness, isolation, functional fitness declines, falls, hospitalization and premature mortality. The purpose of this pilot trial is to determine the feasibility and acceptability of a novel intervention to reduce loneliness and improve functional fitness, social connectedness, hearing and health-related quality of life in older adults with HL. This 10-week, single-blind, pilot randomized controlled trial (RCT) will include a convenience sample of ambulatory adults aged 65 years or older with self-reported HL. Following baseline assessments, participants will be randomized to either intervention (exercise, health education, socialization and group auditory rehabilitation (GAR)) or control (GAR only) groups. The intervention group will attend a local YMCA twice a week and the control group once a week. Intervention sessions will include 45 min of strengthening, balance and resistance exercises, 30 min of group walking at a self-selected pace and 60 min of interactive health education or GAR. The control group will attend 60-min GAR sessions. GAR sessions will include education about hearing, hearing technologies, enhancing communication skills, and psychosocial support. Pre-post trial data collection and measures will include: functional fitness (gait speed, 30-s Sit to Stand Test), hearing and health-related quality of life, loneliness, depression, social participation and social support. At trial end, feasibility (recruitment, randomization, retention, acceptability) and GAR will be evaluated. Despite evidence suggesting that HL is associated with declines in functional fitness, there are no studies aimed at addressing functional fitness declines associated with the disability of HL. This pilot trial will provide knowledge about the physical, mental and social impacts on health related to HL as a disability. This will inform the feasibility of a
... signals normally to the brain. In addition, hearing losses are classified according to the degree of severity: • Mild, • Moderate, • Severe, • Profound. Hearing losses are also classified according to the sound frequency ...
Hassager, Henrik Gert; Wiinberg, Alan; Dau, Torsten
This study investigated the effects of fast-acting hearing-aid compression on normal-hearing and hearing-impaired listeners' spatial perception in a reverberant environment. Three compression schemes—independent compression at each ear, linked compression between the two ears, and "spatially ideal" compression operating solely on the dry source signal—were considered using virtualized speech and noise bursts. Listeners indicated the location and extent of their perceived sound images on the horizontal plane. Linear processing was considered as the reference condition. The results showed that both independent and linked compression resulted in more diffuse and broader sound images as well as internalization and image splits, whereby more image splits were reported for the noise bursts than for speech. Only the spatially ideal compression provided the listeners with a spatial percept similar...
Aronoff, Justin M.; Freed, Daniel J.; Fisher, Laurel M.; Pal, Ivan; Soli, Sigfrid D.
Objectives: Cochlear implant microphones differ in placement, frequency response, and other characteristics such as whether they are directional. Although normal hearing individuals are often used as controls in studies examining cochlear implant users’ binaural benefits, the considerable differences across cochlear implant microphones make such comparisons potentially misleading. The goal of this study was to examine binaural benefits for speech perception in noise for normal hearing individuals using stimuli processed by head-related transfer functions (HRTFs) based on the different cochlear implant microphones. Design: HRTFs were created for different cochlear implant microphones and used to test participants on the Hearing in Noise Test. Experiment 1 tested cochlear implant users and normal hearing individuals with HRTF-processed stimuli and with sound field testing to determine whether the HRTFs adequately simulated sound field testing. Experiment 2 determined the measurement error and performance-intensity function for the Hearing in Noise Test with normal hearing individuals listening to stimuli processed with the various HRTFs. Experiment 3 compared normal hearing listeners’ performance across HRTFs to determine how the HRTFs affected performance. Experiment 4 evaluated binaural benefits for normal hearing listeners using the various HRTFs, including ones that were modified to investigate the contributions of interaural time and level cues. Results: The results indicated that the HRTFs adequately simulated sound field testing for the Hearing in Noise Test. They also demonstrated that the test-retest reliability and performance-intensity function were consistent across HRTFs, and that the measurement error for the test was 1.3 dB, with a change in signal-to-noise ratio of 1 dB reflecting a 10% change in intelligibility. There were significant differences in performance when using the various HRTFs, with particularly good thresholds for the HRTF based on the
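The reliability figures reported in the abstract above imply a simple rule of thumb; here is a minimal sketch (illustrative only, not code or data from the study) converting a HINT threshold difference into an approximate intelligibility change:

```python
# Illustrative sketch: the abstract reports a measurement error of
# 1.3 dB and a slope of ~10% intelligibility per 1 dB change in
# signal-to-noise ratio near threshold.
SLOPE_PCT_PER_DB = 10.0       # reported psychometric slope
MEASUREMENT_ERROR_DB = 1.3    # reported test measurement error

def intelligibility_change(delta_snr_db: float) -> float:
    """Approximate percentage-point change in intelligibility for a given
    change in SNR, assuming a locally linear psychometric function."""
    return SLOPE_PCT_PER_DB * delta_snr_db

# A threshold difference equal to the measurement error corresponds to
# roughly 13 percentage points of intelligibility.
print(round(intelligibility_change(MEASUREMENT_ERROR_DB), 1))  # 13.0
```

By this reading, HRTF differences smaller than about 1.3 dB fall within the test's measurement error and should not be over-interpreted.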
Als, Nicoline Bjerggaard; Jensen, Charlotte Thodberg; Jensen, Rasmus
Sound exposure is one of the primary causes of preventable hearing loss. Traditionally, sound exposure has been associated with industrial settings and, as such, treated as an occupational safety issue, leading to international standards regulating sound exposure to improve working conditions. High...
Gordon-Salant, Sandra; Yeni-Komshian, Grace H.; Fitzgibbons, Peter J.; Cohen, Julie I.
The effects of age and hearing loss on recognition of unaccented and accented words of varying syllable length were investigated. It was hypothesized that with increments in length of syllables, there would be atypical alterations in syllable stress in accented speech compared to native English, and that these altered stress patterns would be sensitive to auditory temporal processing deficits with aging. Sets of one-, two-, three-, and four-syllable words with the same initial syllable were recorded by one native English and two Spanish-accented talkers. Lists of these words were presented in isolation and in sentence contexts to younger and older normal-hearing listeners and to older hearing-impaired listeners. Hearing loss effects were apparent for unaccented and accented monosyllabic words, whereas age effects were observed for recognition of accented multisyllabic words, consistent with the notion that altered syllable stress patterns with accent are effective at revealing effects of age. Older listeners also exhibited lower recognition scores for moderately accented words in sentence contexts than in isolation, suggesting that the added demands on working memory for words in sentence contexts impact recognition of accented speech. The general pattern of results suggests that hearing loss, age, and cognitive factors limit the ability to recognize Spanish-accented speech. PMID:25698021
Shi, Lu-Feng; Koenig, Laura L
Non-native listeners do not recognize English sentences as effectively as native listeners, especially in noise. It is not entirely clear to what extent such group differences arise from differences in relative weight of semantic versus syntactic cues. This study quantified the use and weighting of these contextual cues via Boothroyd and Nittrouer's j and k factors. The j represents the probability of recognizing sentences with or without context, whereas the k represents the degree to which context improves recognition performance. Four groups of 13 normal-hearing young adult listeners participated. One group consisted of native English monolingual (EMN) listeners, whereas the other three consisted of non-native listeners contrasting in their language dominance and first language: English-dominant Russian-English, Russian-dominant Russian-English, and Spanish-dominant Spanish-English bilinguals. All listeners were presented three sets of four-word sentences: high-predictability sentences included both semantic and syntactic cues, low-predictability sentences included syntactic cues only, and zero-predictability sentences included neither semantic nor syntactic cues. Sentences were presented at 65 dB SPL binaurally in the presence of speech-spectrum noise at +3 dB SNR. Listeners orally repeated each sentence and recognition was calculated for individual words as well as the sentence as a whole. Comparable j values across groups for high-predictability, low-predictability, and zero-predictability sentences suggested that all listeners, native and non-native, utilized contextual cues to recognize English sentences. Analysis of the k factor indicated that non-native listeners took advantage of syntax as effectively as EMN listeners. However, only English-dominant bilinguals utilized semantics to the same extent as EMN listeners; semantics did not provide a significant benefit for the two non-English-dominant groups. When combined, semantics and syntax benefitted EMN
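The j and k factors described above have simple closed forms in Boothroyd and Nittrouer's formulation (as commonly stated; the probabilities below are made-up illustrative values, not the study's data):

```python
import math

# Hedged sketch of the j and k factors summarized in the abstract above;
# the input probabilities are hypothetical, for illustration only.
def j_factor(p_whole: float, p_part: float) -> float:
    """j relates recognition of wholes to recognition of their parts:
    p_whole = p_part ** j, hence j = log(p_whole) / log(p_part)."""
    return math.log(p_whole) / math.log(p_part)

def k_factor(p_context: float, p_no_context: float) -> float:
    """k expresses how much context improves recognition:
    (1 - p_context) = (1 - p_no_context) ** k."""
    return math.log(1.0 - p_context) / math.log(1.0 - p_no_context)

# Hypothetical scores: words in zero-predictability sentences recognized
# 60% of the time, whole sentences 22% of the time, and words in
# high-predictability sentences 80% of the time.
print(round(j_factor(0.22, 0.60), 2))   # ≈ 2.96
print(round(k_factor(0.80, 0.60), 2))   # ≈ 1.76
```

A k greater than 1 indicates a contextual benefit, so comparable j values alongside group differences in k, as reported above, isolate the use of context from basic word recognition.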
Pichora-Fuller, M K; Schneider, B A; Daneman, M
Two experiments using the materials of the Revised Speech Perception in Noise (SPIN-R) Test [Bilger et al., J. Speech Hear. Res. 27, 32-48 (1984)] were conducted to investigate age-related differences in the identification and the recall of sentence-final words heard in a babble background. In experiment 1, the level of the babble was varied to determine psychometric functions (percent correct word identification as a function of S/N ratio) for presbycusics, old adults with near-normal hearing, and young normal-hearing adults, when the sentence-final words were either predictable (high context) or unpredictable (low context). Differences between the psychometric functions for high- and low-context conditions were used to show that both groups of old listeners derived more benefit from supportive context than did young listeners. In experiment 2, a working memory task [Daneman and Carpenter, J. Verb. Learn. Verb. Behav. 19, 450-466 (1980)] was added to the SPIN task for young and old adults. Specifically, after listening to and identifying the sentence-final words for a block of n sentences, the subjects were asked to recall the last n words that they had identified. Old subjects recalled fewer of the items they had perceived than did young subjects in all S/N conditions, even though there was no difference in the recall ability of the two age groups when sentences were read. Furthermore, the number of items recalled by both age groups was reduced in adverse S/N conditions. The results were interpreted as supporting a processing model in which reallocable processing resources are used to support auditory processing when listening becomes difficult either because of noise, or because of age-related deterioration in the auditory system. Because of this reallocation, these resources are unavailable to more central cognitive processes such as the storage and retrieval functions of working memory, so that "upstream" processing of auditory information is adversely affected.
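The psychometric functions fitted in experiment 1 can be pictured with a logistic form; this is a minimal sketch where the midpoint and slope are assumptions for illustration, not the study's fitted parameters:

```python
import math

# Assumed logistic psychometric function: percent-correct word
# identification as a function of S/N ratio. The midpoint and slope
# values are illustrative, not the SPIN-R fits from the study.
def percent_correct(snr_db: float, midpoint_db: float = 0.0,
                    slope_pct_per_db: float = 8.0) -> float:
    """Logistic function reaching 50% at `midpoint_db`, with the given
    steepness (percentage points per dB) at the midpoint."""
    k = 4.0 * slope_pct_per_db / 100.0   # rate constant giving that midpoint slope
    return 100.0 / (1.0 + math.exp(-k * (snr_db - midpoint_db)))

# Context benefit appears as a leftward shift of the whole function:
# high-context words reach 50% correct at a poorer S/N ratio.
print(round(percent_correct(0.0), 1))                    # 50.0 at the midpoint
print(round(percent_correct(0.0, midpoint_db=-4.0), 1))  # higher, due to the shift
```

Comparing the horizontal offset between the high- and low-context curves is how the context benefit in experiment 1 can be quantified in dB.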
Simon, Helen J; Divenyi, Pierre L; Lotze, Al
The effects of varying interaural time delay (ITD) and interaural intensity difference (IID) were measured in normal-hearing sighted and congenitally blind subjects as a function of eleven frequencies and at sound pressure levels of 70 and 90 dB, and at a sensation level of 25 dB (sensation level refers to the pressure level of the sound above its threshold for the individual subject). Using an 'acoustic' pointing paradigm, the subject varied the IID of a 500 Hz narrow-band (100 Hz) noise (the 'pointer') to coincide with the apparent lateral position of a 'target' ITD stimulus. ITDs of 0, ±200, and ±400 μs were obtained through total waveform delays of narrow-band noise, including envelope and fine structure. For both groups, the results of this experiment confirm the traditional view of binaural hearing for like stimuli: non-zero ITDs produce little perceived lateral displacement away from 0 IID at frequencies above 1250 Hz. To the extent that greater magnitude of lateralization for a given ITD, presentation level, and center frequency can be equated with superior localization abilities, blind listeners appear at least comparable to, and even somewhat better than, sighted subjects, especially when attending to signals in the periphery. The present findings suggest that blind listeners are fully able to utilize the cues for spatial hearing, and that vision is not a mandatory prerequisite for the calibration of human spatial hearing.
Kesavan, S. "Listening to the Shape of a Drum - You Cannot Hear the Shape of a Drum!" General Article, Resonance – Journal of Science Education, Volume 3, Issue 10, October 1998, pp. 49-58.
Turunen-Rise, I; Flottorp, G; Tvete, O
While various types of music were played on five selected personal cassette players (PCPs), A-weighted sound pressure levels (SPLs), together with octave-band spectra, were measured on KEMAR (Knowles Electronics Manikin for Acoustic Research). Maximum and equivalent SPLs were measured for various types of music, PCPs and different gain (volume) settings. The measured SPL values at the KEMAR ear were transformed to field values outside the ear canal by means of corrections based on KEMAR's ear canal resonance curve, in order to compare measured values with the Norwegian national noise risk criteria. Temporary threshold shift (TTS) was measured after listening to PCP music for one hour in order to obtain additional information about possible risk of hearing damage. TTS values are presented for six subjects when playing two different pop music cassettes on one type of PCP. Our analysis indicates that the risk of permanent noise-induced hearing loss from listening to PCPs is very small under normal listening conditions.
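The "equivalent SPLs" above are energy averages over time; the following is a minimal sketch of the standard Leq computation (an assumption about the method, shown for illustration with made-up values):

```python
import math

# Standard equivalent-continuous-level (Leq) computation: SPL readings
# in dB are averaged on an energy basis, not arithmetically. The sample
# values below are fabricated for illustration.
def leq(spl_samples_db):
    """Pool a sequence of equally weighted SPL readings (dB) into one Leq (dB)."""
    mean_energy = sum(10.0 ** (l / 10.0) for l in spl_samples_db) / len(spl_samples_db)
    return 10.0 * math.log10(mean_energy)

# Loud moments dominate: one 100 dB burst among 90 dB readings pulls the
# result well above the 92.5 dB arithmetic mean.
print(round(leq([90.0, 90.0, 90.0, 100.0]), 1))  # 95.1
```

This energy weighting is why brief high-volume passages contribute disproportionately to noise-dose criteria such as the Norwegian limits referenced above.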
Thorup, Nicoline; Santurette, Sébastien; Jørgensen, Søren
by default. This study aimed at identifying clinically relevant tests that may serve as an informative addition to the audiogram and which may relate more directly to HA satisfaction than the audiogram does. METHODS: A total of 29 HI and 26 normal-hearing listeners performed tests of spectral and temporal … their audiogram. Measures of temporal resolution or speech perception in both stationary and fluctuating noise could be relevant measures to consider in an extended auditory profile. FUNDING: The study was supported by Grosserer L.F. Foghts Fond. TRIAL REGISTRATION: The protocol was approved by the Science Ethics…
Smith, Sherri L.; Pichora-Fuller, M. Kathleen
Listeners with hearing loss commonly report having difficulty understanding speech, particularly in noisy environments. Their difficulties could be due to auditory and cognitive processing problems. Performance on speech-in-noise tests has been correlated with reading working memory span (RWMS), a measure often chosen to avoid the effects of hearing loss. If the goal is to assess the cognitive consequences of listeners’ auditory processing abilities, however, then listening working memory span (LWMS) could be a more informative measure. Some studies have examined the effects of different degrees and types of masking on working memory, but less is known about the demands placed on working memory depending on the linguistic complexity of the target speech or the task used to measure speech understanding in listeners with hearing loss. Compared to RWMS, LWMS measures using different speech targets and maskers may provide a more ecologically valid approach. To examine the contributions of RWMS and LWMS to speech understanding, we administered two working memory measures (a traditional RWMS measure and a new LWMS measure), and a battery of tests varying in the linguistic complexity of the speech materials, the presence of babble masking, and the task. Participants were a group of younger listeners with normal hearing and two groups of older listeners with hearing loss (n = 24 per group). There was a significant group difference and a wider range in performance on LWMS than on RWMS. There was a significant correlation between both working memory measures only for the oldest listeners with hearing loss. Notably, there were only a few significant correlations among the working memory and speech understanding measures. These findings suggest that working memory measures reflect individual differences that are distinct from those tapped by these measures of speech understanding. PMID:26441769
Vogel, Ineke; Brug, Johannes; Van Der Ploeg, Catharina P. B.; Raat, Hein
Analogue to occupational noise-induced hearing loss, MP3-induced hearing loss may be evolving into a significant social and public health problem. To inform prevention strategies and interventions, this study investigated correlates of adolescents' risky MP3-player listening behavior primarily informed by protection motivation theory. We invited…
Introduction: Speech recognition in adverse listening conditions becomes more difficult as we age, particularly for individuals with age-related hearing loss (ARHL). Whether these difficulties can be eased with training remains debated, because it is not clear whether the outcomes are sufficiently general to be of use outside of the training context. The aim of the current study was to compare training-induced learning and generalization between normal-hearing older adults and those with ARHL. Methods: 56 listeners (60-72 y/o; 35 participants with ARHL and 21 normal-hearing adults) participated in the study. The study design was a crossover design with three groups (immediate-training, delayed-training and no-training). Trained participants received 13 sessions of home-based auditory training over the course of 4 weeks. Three adverse listening conditions were targeted: (1) speech in noise, (2) time-compressed speech and (3) competing speakers, and the outcomes of training were compared between the normal-hearing and ARHL groups. Pre- and post-test sessions were completed by all participants. Outcome measures included tests on all of the trained conditions as well as on a series of untrained conditions designed to assess the transfer of learning to other speech and non-speech conditions. Results: Significant improvements on all trained conditions were observed in both the ARHL and normal-hearing groups over the course of training. Normal-hearing participants learned more than participants with ARHL in the speech-in-noise condition, but showed similar patterns of learning in the other conditions. Greater pre- to post-test changes were observed in trained than in untrained listeners on all trained conditions. In addition, the ability of trained listeners from the ARHL group to discriminate minimally different pseudowords in noise also improved with training. Conclusions: ARHL did not preclude auditory perceptual learning, but there was little generalization to
Shera, Christopher A.
Otoacoustic emissions demonstrate that the ear creates sound while listening to sound, offering a promising acoustic window on the mechanics of hearing in awake, listening human beings. That window is clouded, however, by an incomplete knowledge of wave reflection and transmission, both forth and back within the cochlea and through the middle ear. This thesis "does windows," addressing wave propagation and scattering on both sides of the middle ear. A summary of highlights follows. Measurements of the cochlear input impedance in cat are used to identify a new symmetry in cochlear mechanics-termed "tapering symmetry" after its geometric interpretation in simple models-that guarantees that the wavelength of the traveling wave changes slowly with position near the stapes. Waves therefore propagate without reflection through the basal turns of the cochlea. Analytic methods for solving the cochlear wave equations using a perturbative scattering series are given and used to demonstrate that, contrary to common belief, conventional cochlear models exhibit negligible internal reflection whether or not they accurately represent the tapering symmetries of the inner ear. Frameworks for the systematic "deconstruction" of eardrum and middle-ear transduction characteristics are developed and applied to the analysis of noninvasive measurements of middle-ear and cochlear mechanics. A simple phenomenological model of inner-ear compressibility that correctly predicts hearing thresholds in patients with missing or disarticulated middle-ear ossicles is developed and used to establish an upper bound on cochlear compressibility several orders of magnitude smaller than that provided by direct measurements. Accurate measurements of stimulus frequency evoked otoacoustic emissions are performed and used to determine the form and frequency variation of the cochlear traveling-wave ratio noninvasively. Those measurements are inverted to obtain the spatial distribution of mechanical
Fatima T Husain
We investigated auditory perception and cognitive processing in individuals with chronic tinnitus or hearing loss using functional magnetic resonance imaging (fMRI). Our participants belonged to one of three groups: bilateral hearing loss and tinnitus (TIN), bilateral hearing loss without tinnitus (HL), and normal hearing without tinnitus (NH). We employed pure tones and frequency-modulated sweeps as stimuli in two tasks: passive listening and active discrimination. All subjects had normal hearing through 2 kHz and all stimuli were low-pass filtered at 2 kHz so that all participants could hear them equally well. Performance was similar among all three groups for the discrimination task. In all participants, a distributed set of brain regions including the primary and non-primary auditory cortices showed greater response for both tasks compared to rest. Comparing the groups directly, we found decreased activation in the parietal and frontal lobes in the participants with tinnitus compared to the HL group and decreased response in the frontal lobes relative to the NH group. Additionally, the HL subjects exhibited increased response in the anterior cingulate relative to the NH group. Our results suggest that a differential engagement of a putative auditory attention and short-term memory network, comprising regions in the frontal, parietal and temporal cortices and the anterior cingulate, may represent a key difference in the neural bases of chronic tinnitus accompanied by hearing loss relative to hearing loss alone.
It is well-established that hearing loss does not only lead to a reduction of hearing sensitivity. Large individual differences are typically observed among listeners with hearing impairment in a wide range of suprathreshold auditory measures. In many cases, audiometric thresholds cannot fully account for such individual differences, which makes it challenging to find adequate compensation strategies in hearing devices. How to characterize, model, and compensate for individual hearing loss were the main topics of the fifth International Symposium on Auditory and Audiological Research (ISAAR), held in Nyborg, Denmark, in August 2015. The following collection of papers results from some of the work that was presented and discussed at the symposium.
Bess, Fred H.; Gustafson, Samantha J.; Corbett, Blythe A.; Lambert, E. Warren; Camarata, Stephen M.; Hornsby, Benjamin W. Y.
Objectives: It has long been speculated that effortful listening places children with hearing loss at risk for fatigue. School-age children with hearing loss experiencing cumulative stress and listening fatigue on a daily basis might undergo dysregulation of hypothalamic-pituitary-adrenal (HPA) axis activity resulting in elevated or flattened…
Low-birth-weight infants are at risk of many problems; therefore, their outcomes must be evaluated at different ages, especially at school age. In this study we determined the prevalence of ophthalmic, hearing, speaking and school-readiness problems in children who were born with low birth weight and compared them with normal-birth-weight children. In a cross-sectional and retrospective study, all primary-school children referred to a special educational organization center for screening before entrance to school in Mashhad, Iran, were selected. In this study 2400 children were enrolled and checked for ophthalmic, hearing, speaking and school-readiness problems with valid instruments. Data were analyzed with SPSS 11.5. This study showed that 8.3% of our population had a birth weight of less than 2500 grams. Visual impairment in the LBW (low birth weight) and NBW (normal birth weight) groups was 8.29% vs. 5.74%, a statistically significant difference (P=0.015). Hearing problems in the LBW and NBW groups were 2.1% vs. 1.3%, which was not statistically significant. Speaking problems in the LBW and NBW groups were 2.6% vs. 2.2%, which was not statistically significant. School-readiness problems in the LBW and NBW groups were 12.4% vs. 5.8%, which was statistically significant (P<0.001). According to the results, neurological problems in our society are more common than in other societies, and attention to this problem is critical. We believe that in our country it is necessary to provide a program to routinely evaluate LBW children.
Zekveld, Adriana A; Kramer, Sophia E; Festen, Joost M
The aim of the present study was to evaluate the influence of age, hearing loss, and cognitive ability on the cognitive processing load during listening to speech presented in noise. Cognitive load was assessed by means of pupillometry (i.e., examination of pupil dilation), supplemented with subjective ratings. Two groups of subjects participated: 38 middle-aged participants (mean age = 55 yrs) with normal hearing and 36 middle-aged participants (mean age = 61 yrs) with hearing loss. Using three Speech Reception Threshold (SRT) in stationary noise tests, we estimated the speech-to-noise ratios (SNRs) required for the correct repetition of 50%, 71%, or 84% of the sentences (SRT50%, SRT71%, and SRT84%, respectively). We examined the pupil response during listening: the peak amplitude, the peak latency, the mean dilation, and the pupil response duration. For each condition, participants rated the experienced listening effort and estimated their performance level. Participants also performed the Text Reception Threshold (TRT) test, a test of processing speed, and a word vocabulary test. Data were compared with previously published data from young participants with normal hearing. Hearing loss was related to relatively poor SRTs, and higher speech intelligibility was associated with lower effort and higher performance ratings. For listeners with normal hearing, increasing age was associated with poorer TRTs and slower processing speed but with larger word vocabulary. A multivariate repeated-measures analysis of variance indicated main effects of group and SNR and an interaction effect between these factors on the pupil response. The peak latency was relatively short and the mean dilation was relatively small at low intelligibility levels for the middle-aged groups, whereas the reverse was observed for high intelligibility levels. The decrease in the pupil response as a function of increasing SNR was relatively small for the listeners with hearing loss. Spearman
Hansen, Mie Østergaard; Poulsen, Torben
The annoyance of noise in hearing instruments caused by electromagnetic interference from Global System for Mobile Communications (GSM) and Digital European Cordless Telecommunication (DECT) mobile telephones has been subjectively evaluated by test subjects. The influence on speech recognition from the GSM and the DECT noises was also determined. The measurements involved seventeen hearing-impaired subjects. The annoyance was tested with GSM and DECT noise, each one mixed with continuous speech, a mall environment noise, or an office environment noise. Speech recognition was tested with the DANTALE word material mixed with GSM and DECT noise. The listening tests showed that if the noise level is acceptable, so also is speech recognition. The results agree well with an investigation carried out on normal-hearing subjects. If a hearing instrument user is able to use a telephone without annoyance…
Wiinberg, Alan; Jepsen, Morten Løve; Epp, Bastian
Objective: The purpose was to investigate the effects of hearing loss and fast-acting compression on speech intelligibility and two measures of temporal modulation sensitivity. Design: Twelve adults with normal hearing (NH) and 16 adults with mild to moderately severe sensorineural hearing loss … the MDD thresholds were higher for the group with hearing loss than for the group with NH. Fast-acting compression increased the modulation detection thresholds, while no effect of compression on the MDD thresholds was observed. The speech reception thresholds obtained in stationary noise were slightly … of the modulation detection thresholds, compression does not seem to provide a benefit for speech intelligibility. Furthermore, fast-acting compression may not be able to restore MDD thresholds to the values observed for listeners with NH, suggesting that the two measures of amplitude modulation sensitivity…
Hoffmann, Pablo F.; Møller, Anders Kalsgaard; Christensen, Flemming
… signals can be superimposed via earphone reproduction. An important aspect of the hear-through headset is its transparency, i.e., how close to real life the electronically amplified sounds can be perceived. Here we report experiments conducted to evaluate the auditory transparency of a hear-through headset prototype by comparing human performance in natural, hear-through, and fully occluded conditions for two spatial tasks: frontal vertical-plane sound localization and speech-on-speech spatial release from masking. Results showed that localization performance was impaired by the hear-through headset relative to the natural condition, though not as much as in the fully occluded condition. Localization was affected the least when the sound source was in front of the listeners. In contrast to the vertical localization performance, results from the speech task suggest that normal speech-on-speech spatial release from…
Active music listening is a creative activity in that the listener constructs a uniquely personal musical experience. Most approaches to teaching music listening emphasize a conceptual approach in which students learn to identify various characteristics of musical sound. Unfortunately, this type of listening is rarely done outside of schools. This…
Sorkin, Donna L; Gates-Ulanet, Patricia; Mellon, Nancy K
Pediatric hearing loss changed more in the past two decades than it had in the prior 100 years with children now identified in the first weeks of life and fit early with amplification. Dramatic improvements in hearing technology allow children the opportunity to listen, speak and read on par with typically hearing peers. National laws mandate that public and private schools, workplaces, and anywhere people go must be accessible to individuals with disabilities. In 2015, most children with hearing loss attended mainstream schools with typically hearing peers. Psychosocial skills still present challenges for some children with hearing loss.
Skjönsberg, Asa; Herrlin, Petra; Duan, Maoli; Johnson, Ann-Christin; Ulfendahl, Mats
A new strain of waltzing guinea pigs arose spontaneously in a guinea pig breeding facility in Germany in 1996. In addition to obvious vestibular dysfunction, the waltzing animals are already deaf at birth. Histological analysis revealed that the waltzers lack an open scala media due to the collapse of Reissner's membrane onto the surface of the hearing organ. Subsequent breeding has shown that this strain has a recessive mode of inheritance. The homozygotes are deaf and display a waltzing behaviour throughout their lives, while the heterozygotes show no significant signs of inner ear injury despite carrying this specific hearing-impairment mutation. However, the heterozygous animals offer the opportunity to study how hereditary factors interact with auditory stress. In the present study, the susceptibility of the carriers to noise was investigated. Auditory brainstem responses were obtained prior to and after noise exposure (4 kHz, 110 dB, 6 h). The carriers were significantly less affected by the noise as compared to control animals. This difference was still significant at 4 weeks following noise exposure. It is suggested that the heterozygous animals have an endogenous resistance to auditory stress.
Wang, Yang; Naylor, Graham; Kramer, Sophia E; Zekveld, Adriana A; Wendt, Dorothea; Ohlenforst, Barbara; Lunner, Thomas
People with hearing impairment are likely to experience higher levels of fatigue because of effortful listening in daily communication. This hearing-related fatigue might not only constrain their work performance but also result in withdrawal from major social roles. Therefore, it is important to understand the relationships between fatigue, listening effort, and hearing impairment by examining the evidence from both subjective and objective measurements. The aim of the present study was to investigate these relationships by assessing subjectively measured daily-life fatigue (self-report questionnaires) and objectively measured listening effort (pupillometry) in both normally hearing and hearing-impaired participants. Twenty-seven normally hearing and 19 age-matched participants with hearing impairment were included in this study. Two self-report fatigue questionnaires, the Need For Recovery and the Checklist Individual Strength, were given to the participants before the test session to evaluate subjectively measured daily fatigue. Participants were asked to perform a speech reception threshold test with a single-talker masker targeting a 50% correct response criterion. The pupil diameter was recorded during speech processing, and we used peak pupil dilation (PPD) as the main outcome measure of the pupillometry. No correlation was found between subjectively measured fatigue and hearing acuity, nor was a group difference found between the normally hearing and the hearing-impaired participants on the fatigue scores. A significant negative correlation was found between self-reported fatigue and PPD. A similar correlation was also found between the Speech Intelligibility Index required for 50% correct and PPD. Multiple regression analysis showed that factors representing "hearing acuity" and "self-reported fatigue" had equal and independent associations with the PPD during the speech-in-noise test. Less fatigue and better hearing acuity were associated with a larger pupil
Mackersie, Carol L.; Cones, Heather
Background The effects of noise and other competing backgrounds on speech recognition performance are well documented. There is less information, however, on listening effort and stress experienced by listeners during a speech recognition task that requires inhibition of competing sounds. Purpose The purpose was a) to determine if psychophysiological indices of listening effort were more sensitive than performance measures (percentage correct) obtained near ceiling level during a competing speech task b) to determine the relative sensitivity of four psychophysiological measures to changes in task demand and c) to determine the relationships between changes in psychophysiological measures and changes in subjective ratings of stress and workload. Research Design A repeated-measures experimental design was used to examine changes in performance, psychophysiological measures, and subjective ratings in response to increasing task demand. Study Sample Fifteen adults with normal hearing participated in the study. The mean age of the participants was 27 (range: 24–54). Data Collection and Analysis Psychophysiological recordings of heart rate, skin conductance, skin temperature, and electromyographic activity (EMG) were obtained during listening tasks of varying demand. Materials from the Dichotic Digits Test were used to modulate task demand. The three levels of tasks demand were: single digits presented to one ear (low-demand reference condition), single digits presented simultaneously to both ears (medium demand), and a series of two digits presented simultaneously to both ears (high demand). Participants were asked to repeat all the digits they heard while psychophysiological activity was recorded simultaneously. Subjective ratings of task load were obtained after each condition using the NASA-TLX questionnaire. Repeated-measures analyses of variance were completed for each measure using task demand and session as factors. Results Mean performance was higher than 96
Kim, Gibbeum; Na, Wondo; Kim, Gungu; Han, Woojae; Kim, Jinsook
The present study aimed to develop and standardize a screening tool for elderly people who wish to check for themselves their level of hearing loss. The Self-assessment for Hearing Screening of the Elderly (SHSE) consisted of 20 questions based on the characteristics of presbycusis using a five-point scale: seven questions covered general issues related to sensorineural hearing loss, seven covered hearing difficulty under distracting listening conditions, two covered hearing difficulty with fast-rated speech, and four covered the working memory function during communication. To standardize SHSE, 83 elderly participants took part in the study: 25 with normal hearing, and 22, 23, and 13 with mild, moderate, and moderate-to-severe sensorineural hearing loss, respectively, according to their hearing sensitivity. All were retested 3 weeks later using the same questionnaire to confirm its reliability. In addition, validity was assessed using various hearing tests such as a sentence test with background noise, a time-compressed speech test, and a digit span test. SHSE and its subcategories showed good internal consistency. SHSE and its subcategories demonstrated high test-retest reliability. A high correlation was observed between the total scores and pure-tone thresholds, which indicated gradually increased SHSE scores of 42.24%, 55.27%, 66.61%, and 78.15% for normal hearing, mild, moderate, and moderate-to-severe groups, respectively. With regard to construct validity, SHSE showed a high negative correlation with speech perception scores in noise and a moderate negative correlation with scores of time-compressed speech perception. However, there was no statistical correlation between digit span results and either the SHSE total or its subcategories. A confirmatory factor analysis supported three factors in SHSE. We found that the developed SHSE had valuable internal consistency, test-retest reliability, and convergent and construct validity. These results suggest that
Mussoi, Bruna S S; Bentler, Ruth A
The existence of binaural interference, defined here as poorer speech recognition with both ears than with the better ear alone, is well documented. Studies have suggested that its prevalence may be higher in the elderly population. However, no study to date has explored binaural interference in groups of younger and older adults in conditions that favor binaural processing (i.e., in spatially separated noise). Also, the effects of hearing loss have not been studied. To examine binaural interference through speech perception tests, in groups of younger adults with normal hearing, older adults with normal hearing for their age, and older adults with hearing loss. A cross-sectional study. Thirty-three participants with symmetric thresholds were recruited from the University of Iowa community. Participants were grouped as follows: younger with normal hearing (18-28 yr, n = 12), older with normal hearing for their age (73-87 yr, n = 9), and older with hearing loss (78-94 yr, n = 12). Prior noise exposure was ruled out. The Connected Speech Test (CST) and Hearing in Noise Test (HINT) were administered to all participants bilaterally, and to each ear separately. Test materials were presented in the sound field with speech at 0° azimuth and the noise at 180°. The Dichotic Digits Test (DDT) was administered to all participants through earphones. Hearing aids were not used during testing. Group results were compared with repeated-measures and one-way analyses of variance, as appropriate. Within-subject analyses using pre-established critical differences for each test were also performed. The HINT revealed no effect of condition (individual ear versus bilateral presentation) using group analysis, although within-subject analysis showed that 27% of the participants had binaural interference (18% had binaural advantage). On the CST, there was a significant binaural advantage across all groups with group data analysis, as well as for 12% of the participants at each of the two
Sarampalis, Anastasios; Kalluri, Sridhar; Edwards, Brent; Hafter, Ervin
This work is aimed at addressing a seeming contradiction related to the use of noise-reduction (NR) algorithms in hearing aids. The problem is that although some listeners claim a subjective improvement from NR, it has not been shown to improve speech intelligibility, often even making it worse. To address this, the hypothesis tested here is that the positive effects of NR might be to reduce cognitive effort directed toward speech reception, making it available for other tasks. Normal-hearing individuals participated in 2 dual-task experiments, in which 1 task was to report sentences or words in noise set to various signal-to-noise ratios. Secondary tasks involved either holding words in short-term memory or responding in a complex visual reaction-time task. At low values of signal-to-noise ratio, although NR had no positive effect on speech reception thresholds, it led to better performance on the word-memory task and quicker responses in visual reaction times. Results from both dual tasks support the hypothesis that NR reduces listening effort and frees up cognitive resources for other tasks. Future hearing aid research should incorporate objective measurements of cognitive benefits.
DiDonato, Roberta M.; Surprenant, Aimée M.
Communication success under adverse conditions requires efficient and effective recruitment of both bottom-up (sensori-perceptual) and top-down (cognitive-linguistic) resources to decode the intended auditory-verbal message. Employing these limited capacity resources has been shown to vary across the lifespan, with evidence indicating that younger adults out-perform older adults for both comprehension and memory of the message. This study examined how sources of interference arising from the speaker (message spoken with conversational vs. clear speech technique), the listener (hearing-listening and cognitive-linguistic factors), and the environment (in competing speech babble noise vs. quiet) interact and influence learning and memory performance using more ecologically valid methods than has been done previously. The results suggest that when older adults listened to complex medical prescription instructions with “clear speech,” (presented at audible levels through insertion earphones) their learning efficiency, immediate, and delayed memory performance improved relative to their performance when they listened with a normal conversational speech rate (presented at audible levels in sound field). This better learning and memory performance for clear speech listening was maintained even in the presence of speech babble noise. The finding that there was the largest learning-practice effect on 2nd trial performance in the conversational speech when the clear speech listening condition was first is suggestive of greater experience-dependent perceptual learning or adaptation to the speaker's speech and voice pattern in clear speech. This suggests that experience-dependent perceptual learning plays a role in facilitating the language processing and comprehension of a message and subsequent memory encoding. PMID:26106353
M Charles Liberman
Recent work suggests that hair cells are not the most vulnerable elements in the inner ear; rather, it is the synapses between hair cells and cochlear nerve terminals that degenerate first in the aging or noise-exposed ear. This primary neural degeneration does not affect hearing thresholds, but likely contributes to problems understanding speech in difficult listening environments, and may be important in the generation of tinnitus and/or hyperacusis. To look for signs of cochlear synaptopathy in humans, we recruited college students and divided them into low-risk and high-risk groups based on self-report of noise exposure and use of hearing protection. Cochlear function was assessed by otoacoustic emissions and click-evoked electrocochleography; hearing was assessed by behavioral audiometry and word recognition with or without noise or time compression and reverberation. Both groups had normal thresholds at standard audiometric frequencies; however, the high-risk group showed significant threshold elevation at high frequencies (10-16 kHz), consistent with early stages of noise damage. Electrocochleography showed a significant difference in the ratio between the waveform peaks generated by hair cells (summating potential, SP) vs. cochlear neurons (action potential, AP), i.e., the SP/AP ratio, consistent with selective neural loss. The high-risk group also showed significantly poorer performance on word recognition in noise or with time compression and reverberation, and reported heightened reactions to sound consistent with hyperacusis. These results suggest that the SP/AP ratio may be useful in the diagnosis of "hidden hearing loss" and that, as suggested by animal models, the noise-induced loss of cochlear nerve synapses leads to deficits in hearing abilities in difficult listening situations, despite the presence of normal thresholds at standard audiometric frequencies.
Sheffield, Benjamin M; Schuchman, Gerald; Bernstein, Joshua G W
Cochlear implants (CIs) are increasingly recommended to individuals with residual bilateral acoustic hearing. Although new hearing-preserving electrode designs and surgical approaches show great promise, CI recipients are still at risk of losing acoustic hearing in the implanted ear, which could prevent the ability to take advantage of binaural unmasking to aid speech recognition in noise. This study examined the tradeoff between the benefits of a CI for speech understanding in noise and the potential loss of binaural unmasking for CI recipients with some bilateral preoperative acoustic hearing. Binaural unmasking is difficult to evaluate in CI candidates because speech perception in noise is generally too poor to measure reliably in the range of signal-to-noise ratios (SNRs) where binaural intelligibility level differences (BILDs) are typically observed (binaural benefit, 9 out of 10 listeners tested postoperatively had performance equal to or better than their best pre-CI performance. The listener who retained functional acoustic hearing in the implanted ear also demonstrated a preserved acoustic BILD postoperatively. Approximately half of the CI candidates in this study demonstrated preoperative binaural hearing benefits for audiovisual speech perception in noise. Most of these listeners lost their acoustic hearing in the implanted ear after surgery (using nonhearing-preservation techniques), and therefore lost access to this binaural benefit. In all but one case, any loss of binaural benefit was compensated for or exceeded by an improvement in speech perception with the CI. Evidence of a preoperative BILD suggests that certain CI candidates might further benefit from hearing-preservation surgery to retain acoustic binaural unmasking, as demonstrated for the listener who underwent hearing-preservation surgery. This test of binaural audiovisual speech perception in noise could serve as a diagnostic tool to identify CI candidates who are most likely to receive
Santaolalla Montoya, Francisco; Ibargüen, Agustín Martinez; Vences, Ana Rodriguez; del Rey, Ana Sanchez; Fernandez, Jose Maria Sanchez
Exposure to recreational noise may cause injuries to the inner ear, and transient evoked (TEOAEs) and distortion product otoacoustic emissions (DPOAEs) may identify these cochlear alterations. The goal of this study was to evaluate TEOAEs and DPOAEs as a method to diagnose early cochlear alterations in young adults exposed to MP3 player noise. We performed a prospective study of the cochlear function in normal-hearing MP3 player users by analyzing TEOAE and DPOAE incidence, amplitude, and spectral content. We gathered a sample of 40 ears from patients between 19 and 29 years old (mean age 24.09 years, SD 3.9 years). We compared the results with those of a control group of 232 ears not exposed to MP3 noise from patients aged 18 to 32 years (mean age 23.35 years, SD 2.7 years). Fifty percent of ears were from females and 50% were from males. Subjects who had used MP3 players for the most years and for more hours each week exhibited a reduction in TEOAE and DPOAE incidence and amplitudes and an increase in DPOAE thresholds. TEOAEs showed statistically significantly lower incidence and amplitudes for normal-hearing subjects using MP3 players at frequencies of 2000, 3000, and 4000 Hz. DPOAE incidence was lower at 700, 1000, 1500, and 2000 Hz; the amplitudes were lower at frequencies between 1500 and 6000 Hz; and the thresholds were higher for all frequency bands, statistically significant at frequencies from 1500 to 6000 Hz. These findings indicate that MP3 player noise exposure may be detectable by analyzing TEOAEs and DPOAEs before the impairment becomes clinically apparent.
Lykke Hindhede, Anette
Using disability theory as a framework and social science theories of identity to strengthen the arguments, this paper explores empirically how working-age adults confront the medical diagnosis of hearing impairment. For most participants hearing impairment threatens the stability of social interaction, and the construction of hearing-disabled identities is seen as shaped in the interaction with the hearing-impaired person's surroundings. In order to overcome the potential stigmatisation, 'passing' as normal becomes predominant. For many the diagnosis provokes radical redefinitions of the self. The discursively produced categorisation and subjectivity of senescence mean that rehabilitation technologies such as hearing aids identify a particular life-style (disabled) which determines their social significance. Thus wearing a hearing aid works against the contemporary attempt to create socially ideal…
Bilsen, F.A.; Soede, W.; Berkhout, A.J.
Hearing-impaired listeners often have great difficulty understanding speech in situations with background noise (e.g., meetings, parties). Conventional hearing aids offer insufficient directivity to significantly reduce background noise relative to the desired speech signal. Based on array
Bowman, Becki J.; Punyanunt-Carter, Narissra; Cheah, Tsui Yi; Watson, W. Joe; Rubin, Rebecca B.
Considerable research has been conducted testing Rauscher, Shaw, and Ky's (1993) Mozart Effect (ME). This study attempts to replicate, in part, research that tested the ME on listening comprehension abilities. Also included in this study is an examination of control group issues in current day research. We hypothesized that students who listen to…
Conclusion: Based on the published research and our study, we suggest setting the normal criterion levels for infants and young children in Taiwan of the tone burst auditory brainstem response to air-conducted tones as 30 dB nHL for 500 and 1000 Hz, and 25 dB nHL for 2000 and 4000 Hz.
Silber, Ronnie F.
Two studies examined the modifications that adult speakers make in speech to disadvantaged listeners. Previous research focusing on speech to deaf individuals and to young children has shown that adults clarify speech when addressing these two populations. Acoustic measurements suggest that the signal undergoes similar changes for both populations. Perceptual tests corroborate these results for the deaf population, but are nonsystematic in developmental studies. The differences in the findings for these populations and the nonsystematic results in the developmental literature may be due to methodological factors. The present experiments addressed these methodological questions. Studies of speech to hearing-impaired listeners have used read nonsense sentences, for which speakers received explicit clarification instructions and feedback, while in the child literature, excerpts of real-time conversations were used. Therefore, linguistic samples were not precisely matched. In this study, experiments used various linguistic materials. Experiment 1 used a children's story; Experiment 2, nonsense sentences. Four mothers read both types of material in four ways: (1) in "normal" adult speech, (2) in "babytalk," (3) under the clarification instructions used in the hearing-impaired studies (instructed clear speech), and (4) in (spontaneous) clear speech without instruction. No extra practice or feedback was given. Sentences were presented to 40 normal-hearing college students with and without simultaneous masking noise. Results were separately tabulated for content and function words, and analyzed using standard statistical tests. The major finding in the study was individual variation in speaker intelligibility. "Real world" speakers vary in their baseline intelligibility. The four speakers also showed unique patterns of intelligibility as a function of each independent variable. Results were as follows. Nonsense sentences were less intelligible than story
Krumhansl, Carol Lynne
This article investigates the contexts, or "listening niches", in which people hear popular music. The study spanned a century of popular music, divided into 10 decades, with participants born between 1940 and 1999. It asks whether they know and like the music in each decade, and what their emotional reactions are. It also asks whether the music is associated with personal memories and, if so, with whom they were listening, or whether they were listening alone. Finally, it asks what styles of music they were listening to, and the music media they were listening with, in different periods of their lives. The results show a regular progression through the life span of listening with different individuals (from parents to children) and with different media (from records to streaming services). A number of effects found in previous studies were replicated, but the study also showed differences across the birth cohorts. Overall, there was a song-specific age effect with preferences for music of late adolescence and early adulthood; however, this effect was stronger for the older participants. In general, music of the 1940s, 1960s, and 1980s was preferred, particularly among younger participants. Music of these decades also produced the strongest emotional responses, and the most frequent and specific personal memories. When growing up, the participants tended to listen to the older music on the older media, but rapidly shifted to the new music technologies in their late teens and early 20s. Younger listeners are currently listening less to music alone than older listeners, suggesting an important role of socially sharing music, but they also report feeling sadder when listening to music. Finally, the oldest listeners had the broadest taste, liking music that they had been exposed to during their lifetimes in different listening niches. PMID:28424637
Although listening is the skill most used by students in classrooms, the desired success cannot be attained in teaching listening, since this skill is shaped by multiple variables. In this research we focused on listening anxiety, listening comprehension, and the impact of authentic tasks on both listening anxiety and listening comprehension.…
Lavie, Limor; Banai, Karen; Karni, Avi; Attias, Joseph
Purpose: We tested whether using hearing aids can improve unaided performance in speech perception tasks in older adults with hearing impairment. Method: Unaided performance was evaluated in dichotic listening and speech-in-noise tests in 47 older adults with hearing impairment; 36 participants in 3 study groups were tested before hearing aid…
The ability to communicate, specifically the gift of hearing, is a necessity often taken for granted. A lack of the sense of hearing affects the intellectual and emotional development of the human being who suffers from it. It prevents the fluid exchange of knowledge, thoughts, and ideas that allow personal growth and development. This article emerges from an interest in providing assistive technologies that can be considered to improve communication between hearing-impaired and normal-hearing listeners in the higher education classroom in the Republic of Panama. Information has been compiled from various primary and secondary sources highlighting the communication problem facing this group of disabled people. It covers the situation of hearing impairment, laws, organizations, the reality of the higher education system, and, finally, the Information and Communication Technologies (TICs) that will work as technology support in order to improve communication in the higher education classroom between normal-hearing and deaf people.
Wickelmaier, Florian Maria; Choisel, Sylvain
A selection procedure was devised in order to select listeners for experiments in which their main task will be to judge multichannel reproduced sound. Ninety-one participants filled in a web-based questionnaire. Seventy-eight of them took part in an assessment of their hearing thresholds, their spatial hearing, and their verbal production abilities. The listeners displayed large individual differences in their performance. Forty subjects were selected based on the test results. The self-assessed listening habits and experience in the web-based questionnaire could not predict the results of the selection procedure. Further, the hearing thresholds did not correlate with the spatial-hearing test. This leads to the conclusion that task-specific performance tests might be the preferable means of selecting a listening panel.
Listening is often listed as the most challenging language skill that students need to learn in the language classroom. Therefore, awareness of listening strategies and techniques, such as bottom-up and top-down processes, specific styles of listening, or various compensatory strategies, facilitates the learning process of older individuals. Indeed, older adult learners find decoding the aural input more challenging than younger students do. Therefore, both students' and teachers' subjective theories and preferences regarding listening comprehension, as well as the learners' cognitive abilities, should be taken into account while designing a teaching model for this age group. The aim of this paper is, thus, to draw conclusions regarding the processes, styles, and strategies involved in teaching listening to older second-language learners and to juxtapose them with the already existing state of research regarding age-related hearing impairments, which will serve as the basis for future research.
Bernstein, Lynne E; Eberhardt, Silvio P; Auer, Edward T
Training with audiovisual (AV) speech has been shown to promote auditory perceptual learning of vocoded acoustic speech by adults with normal hearing. In Experiment 1, we investigated whether AV speech promotes auditory-only (AO) perceptual learning in prelingually deafened adults with late-acquired cochlear implants. Participants were assigned to learn associations between spoken disyllabic CVCVC (C = consonant, V = vowel) nonsense words and nonsense pictures (fribbles), under AV and then AO (AV-AO; or counterbalanced AO then AV, AO-AV, during Periods 1 then 2) training conditions. After training on each list of paired associates (PA), testing was carried out AO. Across all training, AO PA test scores improved (7.2 percentage points), as did identification of consonants in new untrained CVCVC stimuli (3.5 percentage points). However, there was evidence that AV training impeded immediate AO perceptual learning: during Period 1, training scores across AV and AO conditions were not different, but AO test scores were dramatically lower in the AV-trained participants. During Period-2 AO training, the AV-AO participants obtained significantly higher AO test scores, demonstrating their ability to learn the auditory speech. Across both orders of training, whenever training was AV, AO test scores were significantly lower than training scores. Experiment 2 repeated the procedures with vocoded speech and 43 normal-hearing adults. Following AV training, their AO test scores were as high as or higher than following AO training. Also, their CVCVC identification scores patterned differently than those of the cochlear implant users. In Experiment 1, initial consonants were most accurate, and in Experiment 2, medial consonants were most accurate. We suggest that our results are consistent with a multisensory reverse hierarchy theory, which predicts that, whenever possible, perceivers carry out perceptual tasks immediately based on the experience and biases they bring to the task. We
Rönnberg, Niklas; Rudner, Mary; Lunner, Thomas; Stenfelt, Stefan
Listening in noise is often perceived to be effortful. This is partly because cognitive resources are engaged in separating the target signal from background noise, leaving fewer resources for storage and processing of the content of the message in working memory. The Auditory Inference Span Test (AIST) is designed to assess listening effort by measuring the ability to maintain and process heard information. The aim of this study was to use AIST to investigate the effect of background noise…
Best, Virginia; Keidser, Gitte; Freeston, Katrina; Buchholz, Jörg M
Many listeners with hearing loss report particular difficulties with multitalker communication situations, but these difficulties are not well predicted using current clinical and laboratory assessment tools. The overall aim of this work is to create new speech tests that capture key aspects of multitalker communication situations and ultimately provide better predictions of real-world communication abilities and the effect of hearing aids. A test of ongoing speech comprehension introduced previously was extended to include naturalistic conversations between multiple talkers as targets, and a reverberant background environment containing competing conversations. In this article, we describe the development of this test and present a validation study. Thirty listeners with normal hearing participated in this study. Speech comprehension was measured for one-, two-, and three-talker passages at three different signal-to-noise ratios (SNRs), and working memory ability was measured using the reading span test. Analyses were conducted to examine passage equivalence, learning effects, and test-retest reliability, and to characterize the effects of number of talkers and SNR. Although we observed differences in difficulty across passages, it was possible to group the passages into four equivalent sets. Using this grouping, we achieved good test-retest reliability and observed no significant learning effects. Comprehension performance was sensitive to the SNR but did not decrease as the number of talkers increased. Individual performance showed associations with age and reading span score. This new dynamic speech comprehension test appears to be valid and suitable for experimental purposes. Further work will explore its utility as a tool for predicting real-world communication ability and hearing aid benefit. American Academy of Audiology.
Wolfgramm, Christine; Suter, Nicole; Göksel, Eva
Listening is regarded as a key requirement for successful communication and is fundamentally linked to other language skills. Unlike reading, it requires both hearing and processing information in real-time. We therefore propose that the ability to concentrate is a strong predictor of listening comprehension. Using structural equation modeling,…
Bradham, Tamala S.; Fonnesbeck, Christopher; Toll, Alice; Hecht, Barbara F.
Purpose: The purpose of the Listening and Spoken Language Data Repository (LSL-DR) was to address a critical need for a systemwide outcome data-monitoring program for the development of listening and spoken language skills in highly specialized educational programs for children with hearing loss highlighted in Goal 3b of the 2007 Joint Committee…
Mohammad Ebrahim Mahdavi
Background and Aims: The dichotic listening subtest is an important component of the test battery for auditory processing assessment in both children and adults. A randomized dichotic digits test (RDDT) was created to compensate for the sensitivity weakness of double digits when detecting abnormal ear asymmetry during dichotic listening. The aim of this study was the development and initial evaluation of the Persian randomized dichotic digits test. Method: Persian digits 1-10 (except for the bisyllabic digit 4), uttered by a native Persian speaker, were recorded in a studio. After alignment of the intensity and temporal characteristics of the digit waveforms, lists 1 and 2 of the RDDT were produced. List 1 of the test was administered at 55 dB HL to 50 right-handed normal-hearing individuals (with an equal sex ratio, aged 18-25 years, and with hearing thresholds of 15 dB HL or better at audiometric frequencies). Results: The mean (standard deviation) percent-correct scores for the right and left ears and the right-ear advantage were 94.3 (5.3), 84.8 (7.7), and 9.5 (7.0) percent, respectively. Sixty percent of the subjects showed normal results; unilateral and bilateral deficits were seen in 24 percent and 16 percent of the studied individuals, respectively. Conclusion: The Persian version of the RDDT appears comparable to the original test, as it is able to detect ear asymmetry and unilateral and bilateral deficits in dichotic listening.
Asbjørnsen, Arve E; Helland, Turid
Dichotic listening performance is considered a reliable and valid procedure for the assessment of language lateralisation in the brain. However, the documentation of a relationship between language functions and dichotic listening performance is sparse, although it is accepted that dichotic listening measures language perception. In particular, language comprehension should show close correspondence to perception of language stimuli. In the present study, we tested samples of reading-impaired and normally achieving children between 10 and 13 years of age with tests of reading skills, language comprehension, and dichotic listening to consonant-vowel (CV) syllables. A high correlation between the language scores and the dichotic listening performance was expected. However, since the left ear score is believed to be an error when assessing language laterality, covariation was expected for the right ear scores only. In addition, directing attention to one ear input was believed to reduce the influence of random factors, and thus show a more concise estimate of left hemisphere language capacity. Thus, a stronger correlation between language comprehension skills and the dichotic listening performance when attending to the right ear was expected. The analyses yielded a positive correlation between the right ear score in DL and language comprehension, an effect that was stronger when attending to the right ear. The present results confirm the assumption that dichotic listening with CV syllables measures an aspect of language perception and language skills that is related to general language comprehension.
Sommers, Mitchell S; Hale, Sandra; Myerson, Joel; Rose, Nathan; Tye-Murray, Nancy; Spehar, Brent
Although age-related declines in perceiving spoken language are well established, the primary focus of research has been on perception of phonemes, words, and sentences. In contrast, relatively few investigations have been directed at establishing the effects of age on the comprehension of extended spoken passages. Moreover, most previous work has used extreme-group designs in which the performance of a group of young adults is contrasted with that of a group of older adults and little if any information is available regarding changes in listening comprehension across the adult lifespan. Accordingly, the goals of the current investigation were to determine whether there are age differences in listening comprehension across the adult lifespan and, if so, whether similar trajectories are observed for age-related changes in auditory sensitivity and listening comprehension. This study used a cross-sectional lifespan design in which approximately 60 individuals in each of 7 decades, from age 20 to 89 yr (a total of 433 participants), were tested on three different measures of listening comprehension. In addition, we obtained measures of auditory sensitivity from all participants. Changes in auditory sensitivity across the adult lifespan exhibited the progressive high-frequency loss typical of age-related hearing impairment. Performance on the listening comprehension measures, however, demonstrated a very different pattern, with scores on all measures remaining relatively stable until age 65 to 70 yr, after which significant declines were observed. Follow-up analyses indicated that this same general pattern was observed across three different types of passages (lectures, interviews, and narratives) and three different question types (information, integration, and inference). Multiple regression analyses indicated that low-frequency pure-tone average was the single largest contributor to age-related variance in listening comprehension for individuals older than 65 yr, but
Agterberg, Martijn J H; Hol, Myrthe K S; Cremers, Cor W R J; Mylanus, Emmanuel A M; van Opstal, John; Snik, Ad F M
An important aspect of binaural hearing is the proper detection of interaural sound level differences and interaural timing differences. Assessments of binaural hearing were made in patients with acquired unilateral conductive hearing loss (UCHL, n = 11) or congenital UCHL (n = 10) after unilateral application of a bone conduction device (BCD), and in patients with bilateral conductive or mixed hearing loss after bilateral BCD application. Benefit (bilateral versus unilateral listening) was assessed by measuring directional hearing, compensation of the acoustic head shadow, binaural summation and binaural squelch. Measurements were performed after an acclimatization time of at least 10 weeks. Unilateral BCD application was beneficial, but there was less benefit in the patients with congenital UCHL as compared to patients with acquired UCHL. In adults with bilateral hearing loss, bilateral BCD application was clearly beneficial as compared to unilateral BCD application. Binaural summation was present, but binaural squelch could not be proven. To explain the poor results in the patients with congenital UCHL, two factors seemed to be important. First, a critical period in the development of binaural hearing might affect the binaural hearing abilities. Second, crossover stimulation, referring to additional stimulation of the cochlea contralateral to the BCD side, might deteriorate binaural hearing in patients with UCHL. Copyright © 2011 S. Karger AG, Basel.
Corbin, Nicole E; Buss, Emily; Leibold, Lori J
The purpose of this study was twofold: (1) to determine the effect of an acute simulated unilateral hearing loss on children's spatial release from masking in two-talker speech and speech-shaped noise, and (2) to develop a procedure to be used in future studies that will assess spatial release from masking in children who have permanent unilateral hearing loss. There were three main predictions. First, spatial release from masking was expected to be larger in two-talker speech than in speech-shaped noise. Second, simulated unilateral hearing loss was expected to worsen performance in all listening conditions, but particularly in the spatially separated two-talker speech masker. Third, spatial release from masking was expected to be smaller for children than for adults in the two-talker masker. Participants were 12 children (8.7 to 10.9 years) and 11 adults (18.5 to 30.4 years) with normal bilateral hearing. Thresholds for 50%-correct recognition of Bamford-Kowal-Bench sentences were measured adaptively in continuous two-talker speech or speech-shaped noise. Target sentences were always presented from a loudspeaker at 0° azimuth. The masker stimulus was either co-located with the target or spatially separated to +90° or -90° azimuth. Spatial release from masking was quantified as the difference between thresholds obtained when the target and masker were co-located and thresholds obtained when the masker was presented from +90° or -90° azimuth. Testing was completed both with and without a moderate simulated unilateral hearing loss, created with a foam earplug and supra-aural earmuff. A repeated-measures design was used to compare performance between children and adults, and performance in the no-plug and simulated-unilateral-hearing-loss conditions. All listeners benefited from spatial separation of target and masker stimuli on the azimuth plane in the no-plug listening conditions; this benefit was larger in two-talker speech than in speech-shaped noise. In the
Cella, C. E.
This manifesto paper will introduce machine listening intelligence, an integrated research framework for acoustic and musical signals modelling, based on signal processing, deep learning and computational musicology.
Ihlefeld, Antje; Litovsky, Ruth Y
Spatial release from masking refers to a benefit for speech understanding. It occurs when a target talker and a masker talker are spatially separated. In those cases, speech intelligibility for target speech is typically higher than when both talkers are at the same location. In cochlear implant listeners, spatial release from masking is much reduced or absent compared with normal hearing listeners. Perhaps this reduced spatial release occurs because cochlear implant listeners cannot effectively attend to spatial cues. Three experiments examined factors that may interfere with deploying spatial attention to a target talker masked by another talker. To simulate cochlear implant listening, stimuli were vocoded with two unique features. First, we used 50-Hz low-pass filtered speech envelopes and noise carriers, strongly reducing the possibility of temporal pitch cues; second, co-modulation was imposed on target and masker utterances to enhance perceptual fusion between the two sources. Stimuli were presented over headphones. Experiments 1 and 2 presented high-fidelity spatial cues with unprocessed and vocoded speech. Experiment 3 maintained faithful long-term average interaural level differences but presented scrambled interaural time differences with vocoded speech. Results show a robust spatial release from masking in Experiments 1 and 2, and a greatly reduced spatial release in Experiment 3. Faithful long-term average interaural level differences were insufficient for producing spatial release from masking. This suggests that appropriate interaural time differences are necessary for restoring spatial release from masking, at least for a situation where there are few viable alternative segregation cues.
Sharma, Mridula; Dhamani, Imran; Leung, Johahn; Carlile, Simon
The aim of this study was to examine attention, memory, and auditory processing in children with reported listening difficulty in noise (LDN) despite having clinically normal hearing. Twenty-one children with LDN and 15 children with no listening concerns (controls) participated. The clinically normed auditory processing tests included the Frequency/Pitch Pattern Test (FPT; Musiek, 2002), the Dichotic Digits Test (Musiek, 1983), the Listening in Spatialized Noise-Sentences (LiSN-S) test (Dillon, Cameron, Glyde, Wilson, & Tomlin, 2012), gap detection in noise (Baker, Jayewardene, Sayle, & Saeed, 2008), and masking level difference (MLD; Wilson, Moncrieff, Townsend, & Pillion, 2003). Also included were research-based psychoacoustic tasks, such as auditory stream segregation, localization, sinusoidal amplitude modulation (SAM), and fine structure perception. All were also evaluated on attention and memory test batteries. The LDN group was significantly slower switching their auditory attention and had poorer inhibitory control. Additionally, the group mean results showed significantly poorer performance on FPT, MLD, 4-Hz SAM, and memory tests. Close inspection of the individual data revealed that only 5 participants (out of 21) in the LDN group showed significantly poor performance on FPT compared with clinical norms. Further testing revealed the frequency discrimination of these 5 children to be significantly impaired. Thus, the LDN group showed deficits in attention switching and inhibitory control, whereas only a subset of these participants demonstrated an additional frequency resolution deficit.
Chang, Anna C-S.; Millett, Sonia
This study investigates the effects on developing L2 listening fluency through doing extended listening-focused activities after reading and listening to audio graded readers. Seventy-six EFL university students read and listened to a total of 15 graded readers in a 15-week extensive listening programme. They were divided into three groups (Group…
Ihlefeld, Antje; Chen, Yi-Wen; Sanes, Dan H
Hearing-impaired individuals experience difficulties in detecting or understanding speech, especially in background sounds within the same frequency range. However, normally hearing (NH) human listeners experience less difficulty detecting a target tone in background noise when the envelope of that noise is temporally gated (modulated) than when that envelope is flat across time (unmodulated). This perceptual benefit is called modulation masking release (MMR). When flanking masker energy is added well outside the frequency band of the target, and comodulated with the original modulated masker, detection thresholds improve further (MMR+). In contrast, if the flanking masker is antimodulated with the original masker, thresholds worsen (MMR-). These interactions across disparate frequency ranges are thought to require central nervous system (CNS) processing. Therefore, we explored the effect of developmental conductive hearing loss (CHL) in gerbils on MMR characteristics, as a test for putative CNS mechanisms. The detection thresholds of NH gerbils were lower in modulated noise, when compared with unmodulated noise. The addition of a comodulated flanker further improved performance, whereas an antimodulated flanker worsened performance. However, for CHL-reared gerbils, all three forms of masking release were reduced when compared with NH animals. These results suggest that developmental CHL impairs both within- and across-frequency processing and provide behavioral evidence that CNS mechanisms are affected by a peripheral hearing impairment.
Thoutenhoofd, Ernst D.; Knot-Dickscheit, Jana; Rogge, Jana; van der Meer, Margriet; Schulze, Gisela; Jacobs, Gerold; van den Bogaerde, Beppie
The students from three universities (Groningen, Oldenburg and the University of Applied Sciences in Utrecht) were surveyed on the experience of hearing and listening in their studies. Included in the online survey were established questionnaires on hearing loss, tinnitus, hyperacusis, a subscale on psychosocial strain resulting from impaired…
Rice, Suzanne; Burbules, Nicholas C.
Background Context: Despite its significance for learning, listening has received very little attention in the philosophy of education literature. This article draws on the philosophy and educational thought of Aristotle to illuminate characteristics of good listening. The current project is exploratory and preliminary, seeking mainly to suggest…
Nogueroles López, Marta
…who presented a similar level of Spanish, similar needs, and similar educational and cultural backgrounds, but did not receive such training. The listening strategies instruction consisted of integrating the development of listening strategies into a regular course of Spanish as a foreign language. Data referring...
Wright, Rose; Uchanski, Rosalie M
The inability to hear music well may contribute to decreased quality of life for cochlear implant (CI) users. Researchers have reported recently on the generally poor ability of CI users to perceive music, and a few researchers have reported on the enjoyment of music by CI users. However, the relation between music perception skills and music enjoyment is much less explored. Only one study has attempted to predict CI users' enjoyment and perception of music from the users' demographic variables and other perceptual skills (Gfeller et al, 2008). Gfeller's results yielded different predictive relationships for music perception and music enjoyment, and the relationships were weak, at best. The first goal of this study is to clarify the nature and relationship between music perception skills and musical enjoyment for CI users, by employing a battery of music tests. The second goal is to determine whether normal hearing (NH) subjects, listening with a CI simulation, can be used as a model to represent actual CI users for either music enjoyment ratings or music perception tasks. A prospective, cross-sectional observational study. Original music stimuli (unprocessed) were presented to CI users, and music stimuli processed with CI-simulation software were presented to 20 NH listeners (CIsim). As a control, original music stimuli were also presented to five other NH listeners. All listeners appraised 24 musical excerpts, performed music perception tests, and filled out a musical background questionnaire. Music perception tests were the Appreciation of Music in Cochlear Implantees (AMICI), Montreal Battery for Evaluation of Amusia (MBEA), Melodic Contour Identification (MCI), and University of Washington Clinical Assessment of Music Perception (UW-CAMP). Twenty-five NH adults (22-56 yr old), recruited from the local and research communities, participated in the study. Ten adult CI users (46-80 yr old), recruited from the patient population of the local adult cochlear implant
Lobarinas, Edward; Salvi, Richard; Ding, Dalian
Poorer hearing in the presence of background noise is a significant problem for the hearing impaired. Ototoxic drugs, ageing, and noise exposure can damage the sensory hair cells of the inner ear that are essential for normal hearing sensitivity. The relationship between outer hair cell (OHC) loss and progressively poorer hearing sensitivity in quiet or in competing background noise is supported by a number of human and animal studies. In contrast, the effect of moderate inner hair cell (IHC) loss or dysfunction shows almost no impact on behavioral measures of hearing sensitivity in quiet, when OHCs remain intact, but the relationship between selective IHC loss and hearing in noise remains relatively unknown. Here, a moderately high dose of carboplatin (75 mg/kg) that produced IHC loss in chinchillas ranging from 40 to 80 % had little effect on thresholds in quiet. However, when tested in the presence of competing broadband (BBN) or narrowband noise (NBN), thresholds increased significantly. IHC loss >60 % increased signal-to-noise ratios (SNRs) for tones (500-11,300 Hz) in competing BBN by 5-10 dB and broadened the masking function under NBN. These data suggest that IHC loss or dysfunction may play a significant role in listening in noise independent of OHC integrity and that these deficits may be present even when thresholds in quiet are within normal limits.
Vannson, Nicolas; Innes-Brown, Hamish; Marozeau, Jeremy
Musical enjoyment for cochlear implant (CI) recipients is often reported to be unsatisfactory. Our goal was to determine whether the musical experience of postlingually deafened adult CI recipients could be enriched by presenting the bass and treble clef parts of short polyphonic piano pieces separately to each ear (dichotic). Dichotic presentation should artificially enhance the lateralization cues of each part and help the listeners to better segregate them, thus providing greater clarity. We also hypothesized that perception of the intended emotion of the pieces and their overall enjoyment would be enhanced in the dichotic mode compared with the monophonic mode (both parts in the same ear) and the diotic mode (both parts in both ears). Twenty-eight piano pieces specifically composed to induce sad or happy emotions were selected. The tempo of the pieces, which ranged from lento to presto, covaried with the intended emotion (from sad to happy). Thirty participants (11 normal-hearing listeners, 11 bimodal CI and hearing-aid users, and 8 bilaterally implanted CI users) participated in this study. Participants were asked to rate the perceived clarity, the intended emotion, and their preference for each piece in different listening modes. Results indicated that dichotic presentation produced small but significant improvements in subjective ratings of perceived clarity and preference. We also found that preference and clarity ratings were significantly higher for pieces with fast tempi compared with slow tempi. However, no significant differences between diotic and dichotic presentation were found for the participants' preference ratings or their judgments of intended emotion. © The Author(s) 2015.
Johnson, Earl E
A major decision at the time of hearing aid fitting and dispensing is the amount of amplification to provide listeners (both adult and pediatric populations) for the appropriate compensation of sensorineural hearing impairment across a range of frequencies (e.g., 160-10000 Hz) and input levels (e.g., 50-75 dB sound pressure level). This article describes modern prescription theory for hearing aids within the context of a risk versus return trade-off and efficient frontier analyses. The expected return of amplification recommendations (i.e., generic prescriptions such as National Acoustic Laboratories-Non-Linear 2, NAL-NL2, and Desired Sensation Level Multiple Input/Output, DSL m[i/o]) for the Speech Intelligibility Index (SII) and high-frequency audibility were traded against a potential risk (i.e., loudness). The modeled performance of each prescription was compared one with another and with the efficient frontier of normal hearing sensitivity (i.e., a reference point for the most return with the least risk). For the pediatric population, NAL-NL2 was more efficient for SII, while DSL m[i/o] was more efficient for high-frequency audibility. For the adult population, NAL-NL2 was more efficient for SII, while the two prescriptions were similar with regard to high-frequency audibility. In terms of absolute return (i.e., not considering the risk of loudness), however, DSL m[i/o] prescribed more outright high-frequency audibility than NAL-NL2 for either aged population, particularly, as hearing loss increased. Given the principles and demonstrated accuracy of desensitization (reduced utility of audibility with increasing hearing loss) observed at the group level, additional high-frequency audibility beyond that of NAL-NL2 is not expected to make further contributions to speech intelligibility (recognition) for the average listener.
Snik, Ad; Agterberg, Martijn; Bosman, Arjan
Application of bilateral hearing devices in bilateral hearing loss and unilateral application in unilateral hearing loss (second ear with normal hearing) does not a priori lead to binaural hearing. An overview is presented of several measures of binaural benefit that have been used in patients with unilateral or bilateral deafness using one or two cochlear implants, respectively, and in patients with unilateral or bilateral conductive/mixed hearing loss using one or two percutaneous bone-conduction devices (BCDs), respectively. Overall, according to this overview, the most significant and sensitive measure is the benefit in directional hearing. Measures using speech (viz., binaural summation, binaural squelch, or use of the head shadow effect) showed minor benefits, except for patients with bilateral conductive/mixed hearing loss using two BCDs. Although less feasible in daily practice, the binaural masking level difference test seems to be a promising option in the assessment of binaural function. © 2015 S. Karger AG, Basel.
In this article, Anthony Schmidt presents results from his research on listening instruction in a second language. Schmidt reveals that throughout the history of English language teaching (ELT), most students have never been taught how to listen. It was not just listening, but the need to do this listening in conjunction with an approach that…
Bayat, Arash; Farhadi, Mohammad; Emamdjomeh, Hesam; Saki, Nader; Mirmomeni, Golshan; Rahim, Fakher
It has been demonstrated that long-term Conductive Hearing Loss (CHL) may influence the precise detection of the temporal features of acoustic signals, or Auditory Temporal Processing (ATP). It can be argued that ATP may be the underlying component of many central auditory processing capabilities, such as speech comprehension or sound localization. Little is known about the consequences of CHL on the temporal aspects of central auditory processing. This study was designed to assess auditory temporal processing ability in individuals with chronic CHL. In this analytical cross-sectional study, 52 patients with mild to moderate chronic CHL and 52 normal-hearing listeners (controls), aged between 18 and 45 years, were recruited. To evaluate auditory temporal processing, the Gaps-in-Noise (GIN) test was used. The results obtained for each ear were analyzed based on the gap perception threshold and the percentage of correct responses. The average GIN threshold was significantly smaller for the control group than for the CHL group in both ears (right: p = 0.004; left: p < 0.05), and the control group achieved a significantly higher percentage of correct responses for both sides. GIN performance was not significantly correlated with the degree of hearing loss in either group (p > 0.05). The results suggest reduced auditory temporal processing ability in adults with CHL compared with normal-hearing subjects. Therefore, developing a clinical protocol to evaluate auditory temporal processing in this population is recommended. Copyright © 2017 Associação Brasileira de Otorrinolaringologia e Cirurgia Cérvico-Facial. Published by Elsevier Editora Ltda. All rights reserved.
Mao, Zhongping; Zhao, Lijun; Pu, Lichun; Wang, Mingxiao; Zhang, Qian; He, David Z. Z.
With advancements in modern medicine and significant improvements in life conditions in the past four decades, the elderly population is rapidly expanding. There is a growing number of those aged 100 years and older. While many changes in the human body occur with physiological aging, as many as 35% to 50% of the population aged 65 to 75 years have presbycusis. Presbycusis is a progressive sensorineural hearing loss that occurs as people get older. There are many studies of the prevalence of age-related hearing loss in the United States, Europe, and Asia. However, no audiological assessment of the population aged 100 years and older has been done. Therefore, it is not clear how well centenarians can hear. We measured middle ear impedance, pure-tone behavioral thresholds, and distortion-product otoacoustic emission from 74 centenarians living in the city of Shaoxing, China, to evaluate their middle and inner ear functions. We show that most centenarian listeners had an “As” type tympanogram, suggesting reduced static compliance of the tympanic membrane. Hearing threshold tests using pure-tone audiometry show that all centenarian subjects had varying degrees of hearing loss. More than 90% suffered from moderate to severe (41 to 80 dB) hearing loss below 2,000 Hz, and profound (>81 dB) hearing loss at 4,000 and 8,000 Hz. Otoacoustic emission, which is generated by the active process of cochlear outer hair cells, was undetectable in the majority of listeners. Our study shows the extent and severity of hearing loss in the centenarian population and represents the first audiological assessment of their middle and inner ear functions. PMID:23755251
Gibbeum Kim,1 Wondo Na,1 Gungu Kim,1 Woojae Han,2 Jinsook Kim2 1Department of Speech Pathology and Audiology, Hallym University Graduate School, Chuncheon, Republic of Korea; 2Division of Speech Pathology and Audiology, Research Institute of Audiology and Speech Pathology, College of Natural Sciences, Hallym University, Chuncheon, Republic of Korea Purpose: The present study aimed to develop and standardize a screening tool with which elderly people can check their level of hearing loss for themselves. Methods: The Self-assessment for Hearing Screening of the Elderly (SHSE) consisted of 20 questions based on the characteristics of presbycusis, rated on a five-point scale: seven questions covered general issues related to sensorineural hearing loss, seven covered hearing difficulty under distracting listening conditions, two covered hearing difficulty with fast-rate speech, and four covered working memory function during communication. To standardize the SHSE, 83 elderly participants took part in the study: 25 with normal hearing and 22, 23, and 13 with mild, moderate, and moderate-to-severe sensorineural hearing loss, respectively, according to their hearing sensitivity. All were retested 3 weeks later using the same questionnaire to confirm its reliability. In addition, validity was assessed using various hearing tests, such as a sentence test with background noise, a time-compressed speech test, and a digit span test. Results: The SHSE and its subcategories showed good internal consistency and high test–retest reliability. A high correlation was observed between the total scores and pure-tone thresholds, with SHSE scores increasing gradually from 42.24% to 55.27%, 66.61%, and 78.15% for the normal-hearing, mild, moderate, and moderate-to-severe groups, respectively. With regard to construct validity, the SHSE showed a high negative correlation with speech perception scores in noise and a moderate negative…
van Leeuwen, Theo
A study of listening as active participation, focusing on the use of listening shots in films and on piano and drums accompaniment in jazz music.
Pettinato, Michèle; Clerck, Ilke De; Verhoeven, Jo; Gillis, Steven
This longitudinal study examined the effect of emerging vocabulary production on the ability to produce the phonetic cues to prosodic prominence in babbled and lexical disyllables of infants with cochlear implants (CI) and normally hearing (NH) infants. Current research on typical language acquisition emphasizes the importance of vocabulary development for phonological and phonetic acquisition. Children with CI experience significant difficulties with the perception and production of prosody, and the role of possible top-down effects is, therefore, particularly relevant for this population. Isolated disyllabic babble and first words were identified and segmented in longitudinal audio-video recordings and transcriptions for nine NH infants and nine infants with CI interacting with their parents. Monthly recordings were included from the onset of babbling until children had reached a cumulative vocabulary of 200 words. Three cues to prosodic prominence, fundamental frequency (f0), intensity, and duration, were measured in the vocalic portions of stand-alone disyllables. To represent the degree of prosodic differentiation between two syllables in an utterance, the raw values for intensity and duration were transformed to ratios, and for f0, a measure of the perceptual distance in semitones was derived. The degree of prosodic differentiation for disyllabic babble and words for each cue was compared between groups. In addition, group and individual tendencies on the types of stress patterns for babble and words were also examined. The CI group had overall smaller pitch and intensity distances than the NH group. For the NH group, words had greater pitch and intensity distances than babbled disyllables. Especially for pitch distance, this was accompanied by a shift toward a more clearly expressed stress pattern that reflected the influence of the ambient language. For the CI group, the same expansion in words did not take place for pitch. For intensity, the CI group gave
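The prosodic-differentiation measures described in the abstract above (intensity and duration expressed as between-syllable ratios, and pitch distance expressed in semitones) follow from standard formulas; a minimal sketch, with purely illustrative values that are not taken from the study:

```python
import math

def semitone_distance(f0_first: float, f0_second: float) -> float:
    """Perceptual pitch distance between two syllables in semitones:
    12 * log2(f_first / f_second). The sign shows which syllable is higher."""
    return 12.0 * math.log2(f0_first / f0_second)

def differentiation_ratio(first: float, second: float) -> float:
    """Ratio of the first to the second syllable, used here for the
    raw intensity and duration values."""
    return first / second

# Illustrative disyllable (hypothetical values): a higher-pitched,
# louder, longer first syllable.
pitch_st = semitone_distance(300.0, 250.0)          # f0 in Hz per syllable
intensity_ratio = differentiation_ratio(68.0, 62.0)  # dB per syllable
duration_ratio = differentiation_ratio(0.25, 0.18)   # seconds per syllable

print(round(pitch_st, 2))  # ~3.16 semitones
```

An octave (a doubling of f0) corresponds to exactly 12 semitones, which is why the log-ratio form is used rather than a raw Hz difference.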
Singh, Gurjit; Liskovoi, Lisa; Launer, Stefan; Russo, Frank
The objectives of this research were to develop and evaluate a self-report questionnaire (the Emotional Communication in Hearing Questionnaire, or EMO-CHeQ) designed to assess experiences of hearing and handicap when listening to signals that contain vocal emotion information. Study 1 involved internet-based administration of a 42-item version of the EMO-CHeQ to 586 adult participants (243 with self-reported normal hearing [NH], 193 with self-reported hearing impairment but no reported use of hearing aids [HI], and 150 with self-reported hearing impairment and use of hearing aids [HA]). To better understand the factor structure of the EMO-CHeQ and eliminate redundant items, an exploratory factor analysis was conducted. Study 2 involved laboratory-based administration of a 16-item version of the EMO-CHeQ to 32 adult participants (12 normal hearing/near normal hearing [NH/nNH], 10 HI, and 10 HA). In addition, participants completed an emotion-identification task under audio and audiovisual conditions. In study 1, the exploratory factor analysis yielded an interpretable solution with four factors that together explained 66.3% of the variance in performance on the EMO-CHeQ. Item deletion resulted in construction of the 16-item EMO-CHeQ. In study 1, both the HI and HA groups reported greater vocal emotion communication handicap on the EMO-CHeQ than the NH group, but differences in handicap were not observed between the HI and HA groups. In study 2, the same pattern of reported handicap was observed in individuals with audiometrically verified hearing. On the emotion-identification task, no group differences in performance were observed in the audiovisual condition, but group differences were observed in the audio-alone condition. Although the HI and HA groups exhibited similar emotion-identification performance, both groups performed worse than the NH/nNH group, suggesting the presence of behavioral deficits that parallel self…
Liang, Maojin; Zhao, Fei; French, David; Zheng, Yiqing
Three pairs of headphones [standard iPod ear buds and two noise-canceling headphones (NCHs)] were chosen to investigate the frequency characteristics of their noise reduction, together with their effects on preferred listening levels (PLLs) in the presence of various types of background noise. Twenty-six subjects with normal hearing chose their PLLs in quiet, in street noise, and in subway noise using the three headphones, with the noise-canceling system switched on and off. Both sets of NCHs reduced noise levels at mid and high frequencies. Further noise reduction occurred at low frequencies with the noise-canceling system switched on. In street noise, both NCHs had similar noise reduction effects. In subway noise, better noise reduction was found with the more expensive NCH and with noise canceling on. A two-way repeated-measures analysis of variance showed that both listening condition and headphone style significantly influenced the PLLs. Subjects tended to increase their PLLs as the background noise level increased. Compared with ear buds, PLLs obtained with NCHs switched on in the presence of background noise were reduced by up to 4 dB. Therefore, proper selection and use of NCHs appears beneficial in reducing the risk of hearing damage caused by high music listening levels in the presence of background noise.
Marta Regueira Dias Prestes
Hearing thresholds are not always predictive of performance in environments with reduced extrinsic redundancy. OBJECTIVE: To investigate the communication difficulties reported by adults with normal audiograms and to examine the underlying condition through behavioral and electrophysiological assessments. METHOD: This case-control study enrolled individuals with normal hearing thresholds, divided into two groups: a study group of 10 adults with hearing-related communication complaints and a control group of 10 adults without complaints. The frequency with which participants experienced communication difficulties was measured, and speech tests in quiet and in noise, audiometry, and auditory brainstem response testing were performed. RESULTS: The study group differed statistically from the control group only in the scores for communication difficulties. A positive correlation was found between pure-tone thresholds and the self-reported difficulty scores. CONCLUSION: The presence of auditory complaints in the absence of audiogram abnormalities was not associated with differences in speech-in-noise recognition performance or in the other assessments. Based on the correlation analysis, higher hearing thresholds were associated with higher scores on the report of hearing difficulties in communication situations, even with thresholds ranging only from 0 to 25 dB.
Martijn Johannes Hermanus Agterberg
Direction-specific interactions of sound waves with the head, torso, and pinna provide unique spectral-shape cues that are used for the localization of sounds in the vertical plane, whereas horizontal sound localization is based primarily on the processing of binaural acoustic differences in arrival time (interaural time differences, or ITDs) and sound level (interaural level differences, or ILDs). Because the binaural sound-localization cues are absent in listeners with total single-sided deafness (SSD), their ability to localize sound is heavily impaired. However, some studies have reported that SSD listeners are able, to some extent, to localize sound sources in azimuth, although the underlying mechanisms used for localization are unclear. To investigate whether SSD listeners rely on the monaural pinna-induced spectral-shape cues of their hearing ear for directional hearing, we investigated localization performance for low-pass filtered (LP; 3 kHz) and broadband (BB; 0.5-20 kHz) noises in the two-dimensional frontal hemifield. We tested whether the localization performance of SSD listeners deteriorated further when the pinna cavities of their hearing ear were filled with a mold that disrupted their spectral-shape cues. To remove the potential use of perceived sound level as an invalid azimuth cue, we randomly varied stimulus presentation levels over a broad range (45-65 dB SPL). Several listeners with SSD could localize LP and BB sound sources in the horizontal plane, but inter-subject variability was considerable. The localization performance of these listeners was strongly reduced after their spectral pinna cues were diminished. We further show that the inter-subject variability among SSD listeners can be explained to a large extent by the severity of high-frequency hearing loss in their hearing ear.
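The interaural time differences discussed above are commonly approximated with the spherical-head (Woodworth) model, ITD ≈ (r/c)(sin θ + θ), for a distant source at azimuth θ. A minimal sketch of this standard approximation (not the study's own analysis), assuming a typical 8.75 cm head radius:

```python
import math

def itd_woodworth(azimuth_deg: float,
                  head_radius_m: float = 0.0875,
                  speed_of_sound_m_s: float = 343.0) -> float:
    """Woodworth spherical-head approximation of the interaural time
    difference (in seconds) for a far-field source at the given azimuth
    (0 deg = straight ahead, 90 deg = directly to one side)."""
    theta = math.radians(azimuth_deg)
    return (head_radius_m / speed_of_sound_m_s) * (math.sin(theta) + theta)

# A source directly to the side (90 deg) gives the maximum ITD, ~0.66 ms,
# which is the cue that is unavailable to a listener with single-sided deafness.
print(round(itd_woodworth(90.0) * 1000.0, 2))
```

The model captures why azimuth cues vanish for SSD listeners: with only one functioning ear there is no second arrival time to compare against.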
Ramsgaard Thomsen, Mette; Karmon, Ayelet
This paper presents the thinking and making of the architectural research probe Listener. Developed as an interdisciplinary collaboration between textile design and architecture, Listener explores how information-based fabrication technologies are challenging the material practices of architecture. The paper investigates how textile design can be understood as a model for architectural production, providing new strategies for material specification and allowing material to be thought of as inherently variegated and performative. The paper traces the twofold information-based strategies present…
Turunen-Rise, I; Flottorp, G; Tvete, O
Playing selected types of music on five different personal cassette players (PCPs) at different gain (volume) settings, A-weighted maximum and equivalent sound pressure levels (SPLs) were measured on KEMAR (Knowles Electronics Manikin for Acoustic Research). The octave-band SPLs were measured in the KEMAR ear and transformed to field values in order to compare the measured values with the Norwegian noise-risk criteria. Temporary threshold shifts (TTS) measured in six subjects after listening to two different pop music cassettes on one PCP in two separate sessions are presented. Based upon these studies, we conclude that the risk of acquiring permanent noise-induced hearing loss (NIHL) from the use of PCPs is very small under what we found to be normal listening conditions.
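The equivalent sound pressure level (Leq) used in such risk assessments is the level of a steady sound carrying the same energy as the fluctuating music signal: Leq = 10·log10(mean(p²)/p_ref²), with p_ref = 20 µPa. A minimal sketch of this standard formula, assuming a sequence of (already A-weighted) pressure samples in pascals:

```python
import math

P_REF_PA = 20e-6  # reference pressure for SPL in air, 20 micropascals

def leq_db(pressure_samples_pa):
    """Equivalent continuous sound level in dB re 20 uPa:
    Leq = 10 * log10(mean(p^2) / p_ref^2)."""
    mean_square = sum(p * p for p in pressure_samples_pa) / len(pressure_samples_pa)
    return 10.0 * math.log10(mean_square / (P_REF_PA ** 2))

# A steady 1 Pa signal corresponds to ~94 dB SPL, the usual
# calibration level for acoustic measurements.
print(round(leq_db([1.0, 1.0, 1.0, 1.0]), 1))
```

Because Leq is an energy average, short loud passages dominate it: doubling the sound energy raises Leq by about 3 dB, which is why noise-risk criteria are usually stated in terms of Leq over a fixed exposure time.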
Werfel, Krystal L.; Lund, Emily; Schuele, C. Melanie
Measures of print knowledge were compared across preschoolers with hearing loss and normal hearing. Alphabet knowledge did not differ between groups, but preschoolers with hearing…