WorldWideScience

Sample records for humans auditory noise

  1. Selective attention reduces physiological noise in the external ear canals of humans. I: Auditory attention

    Science.gov (United States)

    Walsh, Kyle P.; Pasanen, Edward G.; McFadden, Dennis

    2014-01-01

    In this study, a nonlinear version of the stimulus-frequency OAE (SFOAE), called the nSFOAE, was used to measure cochlear responses from human subjects while they simultaneously performed behavioral tasks requiring, or not requiring, selective auditory attention. Appended to each stimulus presentation, and included in the calculation of each nSFOAE response, was a 30-ms silent period that was used to estimate the level of the inherent physiological noise in the ear canals of our subjects during each behavioral condition. Physiological-noise magnitudes were higher (noisier) for all subjects in the inattention task, and lower (quieter) in the selective auditory-attention tasks. These noise measures were initially made at the frequency of our nSFOAE probe tone (4.0 kHz), but the same attention effects were also observed across a wide range of frequencies. We attribute the observed differences in physiological-noise magnitudes between the inattention and attention conditions to different levels of efferent activation associated with the differing attentional demands of the behavioral tasks. One hypothesis is that when the attentional demand is relatively great, efferent activation is relatively high, and a decrease in the gain of the cochlear amplifier leads to lower-amplitude cochlear activity and thus a smaller measure of noise from the ear. PMID:24732069
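
    The 30-ms silent window described above lends itself to a simple noise estimate. The sketch below only illustrates that idea and is not the authors' analysis; the array name `recordings`, the sampling rate `fs`, and the placement of the silent window at the end of each presentation are assumptions.

```python
# Minimal sketch (not the authors' code): estimating the physiological-noise
# magnitude from the silent window appended to each stimulus presentation.
# Assumes ear-canal recordings are stored as a NumPy array `recordings` of
# shape (n_presentations, n_samples) sampled at `fs` Hz, with the 30-ms
# silent period at the end of each presentation.
import numpy as np

def silent_window_noise_db(recordings, fs, window_ms=30.0, ref=1.0):
    """Average RMS level (dB re `ref`) in the trailing silent window."""
    n_win = int(round(fs * window_ms / 1000.0))     # samples in the 30-ms window
    silent = recordings[:, -n_win:]                 # trailing silence of each presentation
    rms = np.sqrt(np.mean(silent ** 2, axis=1))     # per-presentation RMS
    return 20.0 * np.log10(np.mean(rms) / ref)      # mean level in dB
```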

  2. Contralateral white noise selectively changes left human auditory cortex activity in a lexical decision task.

    Science.gov (United States)

    Behne, Nicole; Wendt, Beate; Scheich, Henning; Brechmann, André

    2006-04-01

    In a previous study, we hypothesized that the approach of presenting information-bearing stimuli to one ear and noise to the other ear may be a general strategy to determine hemispheric specialization in auditory cortex (AC). In that study, we confirmed the dominant role of the right AC in directional categorization of frequency modulations by showing that fMRI activation of right but not left AC was sharply emphasized when masking noise was presented to the contralateral ear. Here, we tested this hypothesis using a lexical decision task assumed to be processed mainly in the left hemisphere. Subjects had to distinguish between pseudowords and natural words presented monaurally to the left or right ear, either with or without white noise in the other ear. According to our hypothesis, we expected a strong effect of contralateral noise on fMRI activity in left AC. For the control conditions without noise, we found that activation in both auditory cortices was stronger on contralateral than on ipsilateral word stimulation, consistent with a more influential contralateral than ipsilateral auditory pathway. Additional presentation of contralateral noise did not significantly change activation in right AC, whereas it led to a significant increase of activation in left AC compared with the condition without noise. This is consistent with a left-hemispheric specialization for lexical decisions. Thus, our results support the hypothesis that activation by ipsilateral information-bearing stimuli is upregulated mainly in the hemisphere specialized for a given task when noise is presented to the more influential contralateral ear.

  3. Human event-related brain potentials to auditory periodic noise stimuli.

    Science.gov (United States)

    Kaernbach, C; Schröger, E; Gunter, T C

    1998-02-06

    Periodic noise is perceived as different from ordinary non-repeating noise due to the involvement of echoic memory. Since this stimulus does not contain simple physical cues (such as onsets or spectral shape) that might obscure sensory-memory interpretations, it is a valuable tool to study sensory memory functions. We demonstrated for the first time that the processing of periodic noise can be tapped by event-related brain potentials (ERPs). Human subjects received repeating segments of noise embedded in non-repeating noise. They were instructed to detect the periodicity inherent in the stimulation. We observed a central negativity, time-locked to the periodic segment, that correlated with the subjects' behavioral performance in periodicity detection. It is argued that the ERP result indicates an enhancement of sensory-specific processing.

  4. The Adverse Effects of Heavy Metals with and without Noise Exposure on the Human Peripheral and Central Auditory System: A Literature Review

    Directory of Open Access Journals (Sweden)

    Marie-Josée Castellanos

    2016-12-01

    Exposure to some chemicals in the workplace can lead to occupational chemical-induced hearing loss. Attention has mainly focused on the adverse auditory effects of solvents. However, other chemicals such as heavy metals have also been identified as ototoxic agents. The aim of this work was to review the current scientific knowledge about the adverse auditory effects of heavy metal exposure, with and without co-exposure to noise, in humans. PubMed and Medline were searched to find suitable articles. A total of 49 articles met the inclusion criteria. Results from the review showed that no evidence is available about the ototoxic effects of manganese in humans. Contradictory results have been found for arsenic, lead and mercury, as well as for the possible interaction between heavy metals and noise. All studies included in this review found that exposure to cadmium and to mixtures of heavy metals induces auditory dysfunction. Most of the studies investigating the adverse auditory effects of heavy metals in humans have examined populations exposed to lead. Some of these studies suggest peripheral and central auditory dysfunction induced by lead exposure. It is concluded that further evidence from human studies about the adverse auditory effects of heavy metal exposure is still required. Nonetheless, audiologists and other hearing health care professionals should be aware of the possible auditory effects of heavy metals.

  5. Extra-auditory responses to long-term intermittent noise stimulation in humans.

    Science.gov (United States)

    Fruhstorfer, B; Hensel, H

    1980-12-01

    Respiration, heart rate, cutaneous blood flow, and electroencephalogram (EEG) reactions to long-term intermittent noise exposure were recorded from 13 volunteers (20-29 yr) with normal hearing and vegetative reactivity. For 10 or 21 days, respectively, they received 12 noise stimuli (16 s of 100 dB(A) white noise) daily within a 1-h period. Most subjects reported partial subjective adaptation to the noise. Heart rate adapted within a session but did not change considerably across successive days. Vascular responses did not change within a session but diminished mainly during the first 10 days. Noise responses in the EEG remained constant, but a decrease in vigilance occurred over the whole experimental series. Respiration responses were unpredictable and showed no trend within the sessions. It was concluded that certain physiological responses adapt to loud noise but that the time course of adaptation differs among responses. Therefore, a general statement about physiological noise adaptation is not possible.

  6. Auditory white noise reduces age-related fluctuations in balance.

    Science.gov (United States)

    Ross, J M; Will, O J; McGann, Z; Balasubramaniam, R

    2016-09-06

    Fall prevention technologies have the potential to improve the lives of older adults. Because of the multisensory nature of human balance control, sensory therapies, including some involving tactile and auditory noise, are being explored that might reduce the increased balance variability caused by typical age-related sensory declines. Auditory white noise has previously been shown to reduce postural sway variability in healthy young adults. In the present experiment, we examined this treatment in young adults and typically aging older adults. We measured postural sway of healthy young adults and adults over the age of 65 years during silence and during auditory white noise, with and without vision. Our results show reduced postural sway variability in young and older adults with auditory noise, even in the absence of vision. We show that vision and noise can reduce sway variability for both feedback-based and exploratory balance processes. In addition, auditory noise shifted nonlinear patterns of sway in older adults toward those more typical of young adults, and these changes did not interfere with the typical random-walk behavior of sway. Our results suggest that auditory noise might be valuable for therapeutic and rehabilitative purposes in older adults with typical age-related balance variability. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  7. Auditory intensity processing: Effect of MRI background noise.

    Science.gov (United States)

    Angenstein, Nicole; Stadler, Jörg; Brechmann, André

    2016-03-01

    Studies on active auditory intensity discrimination in humans have shown equivocal results regarding the lateralization of processing. Whereas experiments with a moderate background found evidence for right-lateralized processing of intensity, functional magnetic resonance imaging (fMRI) studies with background scanner noise suggest more left-lateralized processing. With the present fMRI study, we compared the task-dependent lateralization of intensity processing between a conventional continuous echo planar imaging (EPI) sequence with loud background scanner noise and a fast low-angle shot (FLASH) sequence with soft background scanner noise. To determine the lateralization of processing, we employed the contralateral noise procedure. Linearly frequency modulated (FM) tones were presented monaurally with and without contralateral noise. During both the EPI and the FLASH measurements, the left auditory cortex was more strongly involved than the right auditory cortex while participants categorized the intensity of FM tones. This was shown by a strong effect of the additional contralateral noise on activity in the left auditory cortex. Thus, even a massive reduction in background scanner noise still leads to a significantly left-lateralized effect. This suggests that the reversed lateralization in fMRI studies with loud background noise, in contrast to studies with a softer background, cannot be fully explained by the MRI background noise. Copyright © 2016 Elsevier B.V. All rights reserved.

  8. Rapid measurement of auditory filter shape in mice using the auditory brainstem response and notched noise.

    Science.gov (United States)

    Lina, Ioan A; Lauer, Amanda M

    2013-04-01

    The notched noise method is an effective procedure for measuring frequency resolution and auditory filter shapes in both human and animal models of hearing. Briefly, auditory filter shape and bandwidth estimates are derived from masked thresholds for tones presented in noise containing widening spectral notches. As the spectral notch widens, increasingly less of the noise falls within the auditory filter and the tone becomes more detectable, until the notch width exceeds the filter bandwidth. Behavioral procedures have been used for the derivation of notched noise auditory filter shapes in mice; however, the time and effort needed to train and test animals on these tasks constrain the widespread application of this testing method. As an alternative procedure, we combined relatively non-invasive auditory brainstem response (ABR) measurements and the notched noise method to estimate auditory filters in normal-hearing mice at center frequencies of 8, 11.2, and 16 kHz. A complete set of simultaneous masked thresholds for a particular tone frequency was obtained in about an hour. ABR-derived filter bandwidths broadened with increasing frequency, consistent with previous studies. The ABR notched noise procedure provides a fast alternative for estimating frequency selectivity in mice that is well suited to high-throughput or time-sensitive screening. Copyright © 2013 Elsevier B.V. All rights reserved.
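
    For readers unfamiliar with how filter shapes are derived from such masked thresholds, the sketch below fits a standard symmetric roex(p) filter to notched-noise thresholds. It is a generic illustration, not the analysis used in this study; the notch widths, threshold values, and the assumption of a symmetric filter are all illustrative.

```python
# Illustrative sketch, not this study's analysis: fitting a symmetric roex(p)
# auditory-filter shape W(g) = (1 + p*g) * exp(-p*g) to masked thresholds
# measured at several relative notch widths g = delta_f / f_c. The notch
# widths and threshold values below are made-up placeholders.
import numpy as np
from scipy.optimize import curve_fit

def roex_threshold(g, p, k_db):
    """Predicted masked threshold (dB) versus relative notch width g."""
    # Noise power admitted by one filter skirt beyond the notch edge:
    # integral of (1 + p*g') * exp(-p*g') for g' >= g  ->  (2 + p*g) * exp(-p*g) / p
    passed = (2.0 + p * g) * np.exp(-p * g) / p
    return k_db + 10.0 * np.log10(2.0 * passed)     # factor 2: noise bands on both sides

notch_g = np.array([0.0, 0.1, 0.2, 0.3, 0.4])                # illustrative notch widths
thresholds_db = np.array([62.0, 55.0, 48.0, 43.0, 40.0])     # illustrative masked thresholds

(p_fit, k_fit), _ = curve_fit(roex_threshold, notch_g, thresholds_db, p0=(20.0, 60.0))
erb = 4.0 / p_fit        # equivalent rectangular bandwidth, as a fraction of f_c
print(f"p = {p_fit:.1f}, ERB ~ {erb:.3f} * f_c")
```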

  9. The combined effects of forward masking by noise and high click rate on monaural and binaural human auditory nerve and brainstem potentials.

    Science.gov (United States)

    Pratt, Hillel; Polyakov, Andrey; Bleich, Naomi; Mittelman, Naomi

    2004-07-01

    To study effects of forward masking and rapid stimulation on human monaurally- and binaurally-evoked brainstem potentials and suggest their relation to synaptic fatigue and recovery and to neuronal action potential refractoriness. Auditory brainstem evoked potentials (ABEPs) were recorded from 12 normally- and symmetrically hearing adults, in response to each click (50 dB nHL, condensation and rarefaction) in a train of nine, with an inter-click interval of 11 ms, that followed a white noise burst of 100 ms duration (50 dB nHL). Sequences of white noise and click train were repeated at a rate of 2.89 s⁻¹. The interval between noise and first click in the train was 2, 11, 22, 44, 66 or 88 ms in different runs. ABEPs were averaged (8000 repetitions) using a dwell time of 25 µs/address/channel. The binaural interaction components (BICs) of ABEPs were derived and the single, centrally located equivalent dipoles of ABEP waves I and V and of the BIC major wave were estimated. The latencies of dipoles I and V of ABEP, their inter-dipole interval and the dipole magnitude of component V were significantly affected by the interval between noise and clicks and by the serial position of the click in the train. The latency and dipole magnitude of the major BIC component were significantly affected by the interval between noise and clicks. Interval from noise and the click's serial position in the train interacted to affect dipole V latency, dipole V magnitude, BIC latencies and the V-I inter-dipole latency difference. Most of the effects were fully apparent by the first few clicks in the train, and the trend (increase or decrease) was affected by the interval between noise and clicks. The changes in latency and magnitude of ABEP and BIC components with advancing position in the click train and the interactions of click position in the train with the intervals from noise indicate an interaction of fatigue and recovery, compatible with synaptic depletion and replenishing

  10. Biomedical Simulation Models of Human Auditory Processes

    Science.gov (United States)

    Bicak, Mehmet M. A.

    2012-01-01

    Detailed acoustic engineering models were developed to explore the noise propagation mechanisms associated with noise attenuation and with the transmission paths created when hearing protectors such as earplugs and headsets are used in high-noise environments. Biomedical finite element (FE) models are developed based on volume Computed Tomography scan data, which provide explicit external ear, ear canal, middle ear ossicular bone and cochlea geometry. Results from these studies have enabled a greater understanding of hearing-protector-to-flesh dynamics as well as prioritization of noise propagation mechanisms. Prioritization of noise mechanisms can form an essential framework for exploration of new design principles and methods in both earplug and earcup applications. These models are currently being used in the development of a novel hearing protection evaluation system that can provide experimentally correlated psychoacoustic noise attenuation. Moreover, these FE models can be used to simulate the effects of blast-related impulse noise on human auditory mechanisms and brain tissue.

  11. Noise-invariant Neurons in the Avian Auditory Cortex: Hearing the Song in Noise

    Science.gov (United States)

    Moore, R. Channing; Lee, Tyler; Theunissen, Frédéric E.

    2013-01-01

    Given the extraordinary ability of humans and animals to recognize communication signals over a background of noise, describing noise invariant neural responses is critical not only to pinpoint the brain regions that are mediating our robust perceptions but also to understand the neural computations that are performing these tasks and the underlying circuitry. Although invariant neural responses, such as rotation-invariant face cells, are well described in the visual system, high-level auditory neurons that can represent the same behaviorally relevant signal in a range of listening conditions have yet to be discovered. Here we found neurons in a secondary area of the avian auditory cortex that exhibit noise-invariant responses in the sense that they responded with similar spike patterns to song stimuli presented in silence and over a background of naturalistic noise. By characterizing the neurons' tuning in terms of their responses to modulations in the temporal and spectral envelope of the sound, we then show that noise invariance is partly achieved by selectively responding to long sounds with sharp spectral structure. Finally, to demonstrate that such computations could explain noise invariance, we designed a biologically inspired noise-filtering algorithm that can be used to separate song or speech from noise. This novel noise-filtering method performs as well as other state-of-the-art de-noising algorithms and could be used in clinical or consumer oriented applications. Our biologically inspired model also shows how high-level noise-invariant responses could be created from neural responses typically found in primary auditory cortex. PMID:23505354
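
    As a rough illustration of how tuning to slow temporal modulations (long sounds) could be turned into a de-noising front end, the sketch below low-passes the temporal-modulation spectrum of a log-spectrogram. It is not the published algorithm; the STFT parameters and the 30-Hz modulation cut-off are arbitrary choices.

```python
# Illustrative sketch only, not the published algorithm: de-noising a sound by
# keeping only slow temporal modulations of its log-spectrogram, loosely
# mirroring the preference for long sounds described above. STFT parameters
# and the 30-Hz modulation cut-off are arbitrary choices.
import numpy as np
from scipy.signal import stft, istft

def modulation_filter_denoise(x, fs, max_temporal_mod_hz=30.0, nperseg=512):
    f, t, Z = stft(x, fs=fs, nperseg=nperseg)
    mag, phase = np.abs(Z), np.angle(Z)
    logmag = np.log(mag + 1e-10)
    M = np.fft.fft2(logmag)                                   # spectro-temporal modulation domain
    wt = np.fft.fftfreq(logmag.shape[1], d=t[1] - t[0])       # temporal-modulation axis (Hz)
    keep = (np.abs(wt) <= max_temporal_mod_hz)[None, :]       # discard fast temporal modulations
    logmag_filtered = np.real(np.fft.ifft2(M * keep))
    Z_filtered = np.exp(logmag_filtered) * np.exp(1j * phase)
    _, y = istft(Z_filtered, fs=fs, nperseg=nperseg)
    return y
```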

  12. Noise-invariant neurons in the avian auditory cortex: hearing the song in noise.

    Science.gov (United States)

    Moore, R Channing; Lee, Tyler; Theunissen, Frédéric E

    2013-01-01

    Given the extraordinary ability of humans and animals to recognize communication signals over a background of noise, describing noise invariant neural responses is critical not only to pinpoint the brain regions that are mediating our robust perceptions but also to understand the neural computations that are performing these tasks and the underlying circuitry. Although invariant neural responses, such as rotation-invariant face cells, are well described in the visual system, high-level auditory neurons that can represent the same behaviorally relevant signal in a range of listening conditions have yet to be discovered. Here we found neurons in a secondary area of the avian auditory cortex that exhibit noise-invariant responses in the sense that they responded with similar spike patterns to song stimuli presented in silence and over a background of naturalistic noise. By characterizing the neurons' tuning in terms of their responses to modulations in the temporal and spectral envelope of the sound, we then show that noise invariance is partly achieved by selectively responding to long sounds with sharp spectral structure. Finally, to demonstrate that such computations could explain noise invariance, we designed a biologically inspired noise-filtering algorithm that can be used to separate song or speech from noise. This novel noise-filtering method performs as well as other state-of-the-art de-noising algorithms and could be used in clinical or consumer oriented applications. Our biologically inspired model also shows how high-level noise-invariant responses could be created from neural responses typically found in primary auditory cortex.

  13. Noise-invariant neurons in the avian auditory cortex: hearing the song in noise.

    Directory of Open Access Journals (Sweden)

    R Channing Moore

    Given the extraordinary ability of humans and animals to recognize communication signals over a background of noise, describing noise invariant neural responses is critical not only to pinpoint the brain regions that are mediating our robust perceptions but also to understand the neural computations that are performing these tasks and the underlying circuitry. Although invariant neural responses, such as rotation-invariant face cells, are well described in the visual system, high-level auditory neurons that can represent the same behaviorally relevant signal in a range of listening conditions have yet to be discovered. Here we found neurons in a secondary area of the avian auditory cortex that exhibit noise-invariant responses in the sense that they responded with similar spike patterns to song stimuli presented in silence and over a background of naturalistic noise. By characterizing the neurons' tuning in terms of their responses to modulations in the temporal and spectral envelope of the sound, we then show that noise invariance is partly achieved by selectively responding to long sounds with sharp spectral structure. Finally, to demonstrate that such computations could explain noise invariance, we designed a biologically inspired noise-filtering algorithm that can be used to separate song or speech from noise. This novel noise-filtering method performs as well as other state-of-the-art de-noising algorithms and could be used in clinical or consumer oriented applications. Our biologically inspired model also shows how high-level noise-invariant responses could be created from neural responses typically found in primary auditory cortex.

  14. Auditory interfaces: The human perceiver

    Science.gov (United States)

    Colburn, H. Steven

    1991-01-01

    A brief introduction to the basic auditory abilities of the human perceiver with particular attention toward issues that may be important for the design of auditory interfaces is presented. The importance of appropriate auditory inputs to observers with normal hearing is probably related to the role of hearing as an omnidirectional, early warning system and to its role as the primary vehicle for communication of strong personal feelings.

  15. Functional sex differences in human primary auditory cortex

    NARCIS (Netherlands)

    Ruytjens, Liesbet; Georgiadis, Janniko R.; Holstege, Gert; Wit, Hero P.; Albers, Frans W. J.; Willemsen, Antoon T. M.

    2007-01-01

    Background We used PET to study cortical activation during auditory stimulation and found sex differences in the human primary auditory cortex (PAC). Regional cerebral blood flow (rCBF) was measured in 10 male and 10 female volunteers while listening to sounds (music or white noise) and during a

  16. Noise perception in the workplace and auditory and extra-auditory symptoms referred by university professors.

    Science.gov (United States)

    Servilha, Emilse Aparecida Merlin; Delatti, Marina de Almeida

    2012-01-01

    To investigate the correlation between noise in the work environment and auditory and extra-auditory symptoms referred by university professors. Eighty-five professors answered a questionnaire about identification, functional status, and health. The relationship between occupational noise and auditory and extra-auditory symptoms was investigated. Statistical analysis considered a significance level of 5%. None of the professors indicated absence of noise. Responses were grouped into Always (A) (n=21) and Not Always (NA) (n=63). Significant sources of noise were the yard and other classes, which were classified as high intensity, as well as poor acoustics and echo. There was no association between referred noise and health complaints, such as digestive, hormonal, osteoarticular, dental, circulatory, respiratory and emotional complaints. There was also no association between referred noise and hearing complaints, and group A showed a higher occurrence of responses regarding noise nuisance, hearing difficulty, dizziness/vertigo, tinnitus, and earache. There was an association between referred noise and voice alterations, and group NA presented a higher percentage of cases with voice alterations than group A. The university environment was considered noisy; however, there was no association with auditory and extra-auditory symptoms. Hearing complaints were more evident among professors in group A. Professors' health is a multi-dimensional product and, therefore, noise cannot be considered the only aggravating factor.

  17. Functional sex differences in human primary auditory cortex

    International Nuclear Information System (INIS)

    Ruytjens, Liesbet; Georgiadis, Janniko R.; Holstege, Gert; Wit, Hero P.; Albers, Frans W.J.; Willemsen, Antoon T.M.

    2007-01-01

    We used PET to study cortical activation during auditory stimulation and found sex differences in the human primary auditory cortex (PAC). Regional cerebral blood flow (rCBF) was measured in 10 male and 10 female volunteers while listening to sounds (music or white noise) and during a baseline (no auditory stimulation). We found a sex difference in activation of the left and right PAC when comparing music to noise. The PAC was more activated by music than by noise in both men and women. But this difference between the two stimuli was significantly higher in men than in women. To investigate whether this difference could be attributed to either music or noise, we compared both stimuli with the baseline and revealed that noise gave a significantly higher activation in the female PAC than in the male PAC. Moreover, the male group showed a deactivation in the right prefrontal cortex when comparing noise to the baseline, which was not present in the female group. Interestingly, the auditory and prefrontal regions are anatomically and functionally linked and the prefrontal cortex is known to be engaged in auditory tasks that involve sustained or selective auditory attention. Thus we hypothesize that differences in attention result in a different deactivation of the right prefrontal cortex, which in turn modulates the activation of the PAC and thus explains the sex differences found in the activation of the PAC. Our results suggest that sex is an important factor in auditory brain studies. (orig.)

  18. Functional sex differences in human primary auditory cortex

    Energy Technology Data Exchange (ETDEWEB)

    Ruytjens, Liesbet [University Medical Center Groningen, Department of Otorhinolaryngology, Groningen (Netherlands); University Medical Center Utrecht, Department Otorhinolaryngology, P.O. Box 85500, Utrecht (Netherlands); Georgiadis, Janniko R. [University of Groningen, University Medical Center Groningen, Department of Anatomy and Embryology, Groningen (Netherlands); Holstege, Gert [University of Groningen, University Medical Center Groningen, Center for Uroneurology, Groningen (Netherlands); Wit, Hero P. [University Medical Center Groningen, Department of Otorhinolaryngology, Groningen (Netherlands); Albers, Frans W.J. [University Medical Center Utrecht, Department Otorhinolaryngology, P.O. Box 85500, Utrecht (Netherlands); Willemsen, Antoon T.M. [University Medical Center Groningen, Department of Nuclear Medicine and Molecular Imaging, Groningen (Netherlands)

    2007-12-15

    We used PET to study cortical activation during auditory stimulation and found sex differences in the human primary auditory cortex (PAC). Regional cerebral blood flow (rCBF) was measured in 10 male and 10 female volunteers while listening to sounds (music or white noise) and during a baseline (no auditory stimulation). We found a sex difference in activation of the left and right PAC when comparing music to noise. The PAC was more activated by music than by noise in both men and women. But this difference between the two stimuli was significantly higher in men than in women. To investigate whether this difference could be attributed to either music or noise, we compared both stimuli with the baseline and revealed that noise gave a significantly higher activation in the female PAC than in the male PAC. Moreover, the male group showed a deactivation in the right prefrontal cortex when comparing noise to the baseline, which was not present in the female group. Interestingly, the auditory and prefrontal regions are anatomically and functionally linked and the prefrontal cortex is known to be engaged in auditory tasks that involve sustained or selective auditory attention. Thus we hypothesize that differences in attention result in a different deactivation of the right prefrontal cortex, which in turn modulates the activation of the PAC and thus explains the sex differences found in the activation of the PAC. Our results suggest that sex is an important factor in auditory brain studies. (orig.)

  19. The effects of noise exposure and musical training on suprathreshold auditory processing and speech perception in noise.

    Science.gov (United States)

    Yeend, Ingrid; Beach, Elizabeth Francis; Sharma, Mridula; Dillon, Harvey

    2017-09-01

    Recent animal research has shown that exposure to single episodes of intense noise causes cochlear synaptopathy without affecting hearing thresholds. It has been suggested that the same may occur in humans. If so, it is hypothesized that this would result in impaired encoding of sound and lead to difficulties hearing at suprathreshold levels, particularly in challenging listening environments. The primary aim of this study was to investigate the effect of noise exposure on auditory processing, including the perception of speech in noise, in adult humans. A secondary aim was to explore whether musical training might improve some aspects of auditory processing and thus counteract or ameliorate any negative impacts of noise exposure. In a sample of 122 participants (63 female) aged 30-57 years with normal or near-normal hearing thresholds, we conducted audiometric tests, including tympanometry, audiometry, acoustic reflexes, otoacoustic emissions and medial olivocochlear responses. We also assessed temporal and spectral processing, by determining thresholds for detection of amplitude modulation and temporal fine structure. We assessed speech-in-noise perception, and conducted tests of attention, memory and sentence closure. We also calculated participants' accumulated lifetime noise exposure and administered questionnaires to assess self-reported listening difficulty and musical training. The results showed no clear link between participants' lifetime noise exposure and performance on any of the auditory processing or speech-in-noise tasks. Musical training was associated with better performance on the auditory processing tasks, but not on the speech-in-noise perception tasks. The results indicate that sentence closure skills, working memory, attention, extended high-frequency hearing thresholds and medial olivocochlear suppression strength are important factors related to the ability to process speech in noise.

  20. Thresholding of auditory cortical representation by background noise

    Science.gov (United States)

    Liang, Feixue; Bai, Lin; Tao, Huizhong W.; Zhang, Li I.; Xiao, Zhongju

    2014-01-01

    It is generally thought that background noise can mask auditory information. However, how the noise specifically transforms neuronal auditory processing in a level-dependent manner remains to be carefully determined. Here, with in vivo loose-patch cell-attached recordings in layer 4 of the rat primary auditory cortex (A1), we systematically examined how continuous wideband noise of different levels affected receptive field properties of individual neurons. We found that the background noise, when above a certain critical/effective level, resulted in an elevation of intensity threshold for tone-evoked responses. This increase of threshold was linearly dependent on the noise intensity above the critical level. As such, the tonal receptive field (TRF) of individual neurons was translated upward as an entirety toward high intensities along the intensity domain. This resulted in preserved preferred characteristic frequency (CF) and the overall shape of TRF, but reduced frequency responding range and an enhanced frequency selectivity for the same stimulus intensity. Such translational effects on intensity threshold were observed in both excitatory and fast-spiking inhibitory neurons, as well as in both monotonic and nonmonotonic (intensity-tuned) A1 neurons. Our results suggest that in a noise background, fundamental auditory representations are modulated through a background level-dependent linear shifting along intensity domain, which is equivalent to reducing stimulus intensity. PMID:25426029
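
    The reported level dependence can be summarized in one line: above the critical noise level, the tone threshold rises roughly linearly with noise level. The helper below is our paraphrase of that relationship, not a model taken from the paper; the unit slope is an assumption.

```python
# Our one-line paraphrase of the reported relationship, not a model from the
# paper: above a critical noise level, the tone-response threshold rises
# roughly linearly with noise level, equivalent to attenuating the stimulus.
# The unit slope is an assumption.
def effective_threshold(base_threshold_db, noise_db, critical_db, slope=1.0):
    """Tone-response threshold (dB SPL) under continuous background noise."""
    shift = max(0.0, slope * (noise_db - critical_db))   # no shift below the critical level
    return base_threshold_db + shift
```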

  1. Thresholding of auditory cortical representation by background noise.

    Science.gov (United States)

    Liang, Feixue; Bai, Lin; Tao, Huizhong W; Zhang, Li I; Xiao, Zhongju

    2014-01-01

    It is generally thought that background noise can mask auditory information. However, how the noise specifically transforms neuronal auditory processing in a level-dependent manner remains to be carefully determined. Here, with in vivo loose-patch cell-attached recordings in layer 4 of the rat primary auditory cortex (A1), we systematically examined how continuous wideband noise of different levels affected receptive field properties of individual neurons. We found that the background noise, when above a certain critical/effective level, resulted in an elevation of intensity threshold for tone-evoked responses. This increase of threshold was linearly dependent on the noise intensity above the critical level. As such, the tonal receptive field (TRF) of individual neurons was translated upward as an entirety toward high intensities along the intensity domain. This resulted in preserved preferred characteristic frequency (CF) and the overall shape of TRF, but reduced frequency responding range and an enhanced frequency selectivity for the same stimulus intensity. Such translational effects on intensity threshold were observed in both excitatory and fast-spiking inhibitory neurons, as well as in both monotonic and nonmonotonic (intensity-tuned) A1 neurons. Our results suggest that in a noise background, fundamental auditory representations are modulated through a background level-dependent linear shifting along intensity domain, which is equivalent to reducing stimulus intensity.

  2. Assessment and Mitigation of the Effects of Noise on Habitability in Deep Space Environments: Report on Non-Auditory Effects of Noise

    Science.gov (United States)

    Begault, Durand R.

    2018-01-01

    This document reviews non-auditory effects of noise relevant to habitable volume requirements in cislunar space. The non-auditory effects of noise in future long-term space habitats are likely to be impactful on team and individual performance, sleep, and cognitive well-being. This report has provided several recommendations for future standards and procedures for long-term space flight habitats, along with recommendations for NASA's Human Research Program in support of DST mission success.

  3. Noise sensitivity, rather than noise level, predicts the non-auditory effects of noise in community samples: a population-based survey

    Directory of Open Access Journals (Sweden)

    Jangho Park

    2017-04-01

    Background: Excessive noise affects human health and interferes with daily activities. Although environmental noise may not directly cause mental illness, it may accelerate and intensify the development of latent mental disorders. Noise sensitivity (NS) is considered a moderator of non-auditory noise effects. In the present study, we aimed to assess whether NS is associated with non-auditory effects. Methods: We recruited a community sample of 1836 residents residing in Ulsan and Seoul, South Korea. From July to November 2015, participants were interviewed regarding their demographic characteristics, socioeconomic status, medical history, and NS. The non-auditory effects of noise were assessed using the Center for Epidemiologic Studies Depression scale, the Insomnia Severity Index, the State-Trait Anxiety Inventory state subscale, and the Stress Response Inventory-Modified Form. Individual noise levels were obtained from noise maps. A three-model multivariate logistic regression analysis was performed to identify factors that might affect psychiatric illness. Results: Participants ranged in age from 19 to 91 years (mean: 47.0 ± 16.1 years), and 37.9% (n = 696) were male. Participants with high NS were more likely to have been diagnosed with diabetes and hyperlipidemia and to use psychiatric medication. The multivariable analysis indicated that, even after adjusting for noise-related variables, sociodemographic factors, medical illness, and duration of residence, subjects in the high-NS group were more than 2 times more likely to experience depression and insomnia and 1.9 times more likely to have anxiety, compared with those in the low-NS group. Noise exposure level was not identified as an explanatory variable. Conclusions: NS increases susceptibility and hence moderates the reactions of individuals to noise. NS, rather than noise itself, is associated with an elevated susceptibility to non-auditory effects.
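
    The adjusted odds ratios reported above come from multivariate logistic regression. The sketch below shows the general form of such an analysis on synthetic data; the column names, effect sizes, and covariate set are invented for illustration and are not the study's variables.

```python
# Illustrative sketch of an adjusted logistic regression of the kind described
# above, run on synthetic data. Column names, effect sizes, and covariates are
# invented for illustration; they are not the study's variables.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 500
survey_df = pd.DataFrame({
    "ns_high": rng.integers(0, 2, n),        # 1 = high noise sensitivity (hypothetical)
    "noise_db": rng.normal(60, 8, n),        # mapped residential noise level (hypothetical)
    "age": rng.integers(19, 92, n),
    "sex": rng.integers(0, 2, n),
})
logit_p = -2.0 + 0.7 * survey_df["ns_high"] + 0.01 * (survey_df["age"] - 47)
survey_df["depression"] = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit_p)))

model = smf.logit("depression ~ ns_high + noise_db + age + sex", data=survey_df).fit(disp=0)
print(np.exp(model.params))                  # adjusted odds ratios; ns_high ~ exp(0.7) ≈ 2 here
```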

  4. Human Factors Military Lexicon: Auditory Displays

    National Research Council Canada - National Science Library

    Letowski, Tomasz

    2001-01-01

    .... In addition to definitions specific to auditory displays, speech communication, and audio technology, the lexicon includes several terms unique to military operational environments and human factors...

  5. Temporal envelope processing in the human auditory cortex: response and interconnections of auditory cortical areas.

    Science.gov (United States)

    Gourévitch, Boris; Le Bouquin Jeannès, Régine; Faucon, Gérard; Liégeois-Chauvel, Catherine

    2008-03-01

    Temporal envelope processing in the human auditory cortex has an important role in language analysis. In this paper, depth recordings of local field potentials in response to amplitude-modulated white noises were used to design maps of activation in primary, secondary and associative auditory areas and to study the propagation of cortical activity between them. The comparison of activations between auditory areas was based on a signal-to-noise ratio associated with the response to amplitude modulation (AM). The functional connectivity between cortical areas was quantified by directed coherence (DCOH) applied to auditory evoked potentials. This study shows the following reproducible results in twenty subjects: (1) the primary auditory cortex (PAC), the secondary cortices (secondary auditory cortex (SAC) and planum temporale (PT)), the insular gyrus, Brodmann area (BA) 22 and the posterior part of the T1 gyrus (T1Post) respond to AM in both hemispheres. (2) A stronger response to AM was observed in SAC and T1Post of the left hemisphere independent of the modulation frequency (MF), and in the left BA22 for MFs of 8 and 16 Hz, compared to those in the right. (3) The activation and propagation features emphasized at least four different types of temporal processing. (4) A sequential activation of the PAC, SAC and BA22 areas was clearly visible at all MFs, while other auditory areas may be more involved in parallel processing of a stream originating from the primary auditory area, which thus acts as a distribution hub. These results suggest that different psychological information is carried by the temporal envelope of sounds relative to the rate of amplitude modulation.
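
    The comparison across areas rests on a signal-to-noise ratio for the response at the amplitude-modulation frequency. The function below is a generic way to compute such an SNR from an averaged evoked response; it is an assumption-laden stand-in, not the exact measure used in the paper.

```python
# Generic stand-in (an assumption, not the paper's exact measure): a
# signal-to-noise ratio for the response at the amplitude-modulation frequency,
# taken as spectral power at the modulation frequency divided by the mean
# power in neighbouring frequency bins of the averaged evoked response.
import numpy as np

def am_response_snr_db(evoked, fs, mod_freq, n_neighbors=10):
    spec = np.abs(np.fft.rfft(evoked)) ** 2
    freqs = np.fft.rfftfreq(len(evoked), 1.0 / fs)
    k = int(np.argmin(np.abs(freqs - mod_freq)))              # bin closest to the AM frequency
    neighbors = np.r_[spec[max(k - n_neighbors, 1):k], spec[k + 1:k + 1 + n_neighbors]]
    return 10.0 * np.log10(spec[k] / np.mean(neighbors))      # SNR in dB
```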

  6. The impact of auditory white noise on semantic priming.

    Science.gov (United States)

    Angwin, Anthony J; Wilson, Wayne J; Copland, David A; Barry, Robert J; Myatt, Grace; Arnott, Wendy L

    2018-04-10

    It has been proposed that white noise can improve cognitive performance for some individuals, particularly those with lower attention, and that this effect may be mediated by dopaminergic circuitry. Given existing evidence that semantic priming is modulated by dopamine, this study investigated whether white noise can facilitate semantic priming. Seventy-eight adults completed an auditory semantic priming task with and without white noise, at either a short or long inter-stimulus interval (ISI). Measures of both direct and indirect semantic priming were examined. Analysis of the results revealed significant direct and indirect priming effects at each ISI in noise and silence; however, noise significantly reduced the magnitude of indirect priming. Analyses of subgroups with higher versus lower attention revealed a reduction in indirect priming in noise relative to silence for participants with lower executive and orienting attention. These findings suggest that white noise focuses automatic spreading activation, which may be driven by modulation of dopaminergic circuitry. Copyright © 2018 Elsevier Inc. All rights reserved.

  7. Frequency-specific modulation of population-level frequency tuning in human auditory cortex

    Directory of Open Access Journals (Sweden)

    Roberts Larry E

    2009-01-01

    Background: Under natural circumstances, attention plays an important role in extracting relevant auditory signals from simultaneously present, irrelevant noises. Excitatory and inhibitory neural activity, enhanced by attentional processes, seems to sharpen frequency tuning, contributing to improved auditory performance, especially in noisy environments. In the present study, we investigated auditory magnetic fields in humans that were evoked by pure tones embedded in band-eliminated noises during two different stimulus sequencing conditions (constant vs. random) under focused auditory attention, by means of magnetoencephalography (MEG). Results: In total, we used identical auditory stimuli between conditions, but presented them in a different order, thereby manipulating the neural processing and the auditory performance of the listeners. Constant stimulus sequencing blocks were characterized by the simultaneous presentation of pure tones of identical frequency with band-eliminated noises, whereas random sequencing blocks were characterized by the simultaneous presentation of pure tones of random frequencies and band-eliminated noises. We demonstrated that auditory evoked neural responses were larger in the constant sequencing than in the random sequencing condition, particularly when the simultaneously presented noises contained narrow stop-bands. Conclusion: The present study confirmed that population-level frequency tuning in human auditory cortex can be sharpened in a frequency-specific manner. This frequency-specific sharpening may contribute to improved auditory performance during detection and processing of relevant sound inputs characterized by specific frequency distributions in noisy environments.

  8. Auditory white noise reduces postural fluctuations even in the absence of vision.

    Science.gov (United States)

    Ross, Jessica Marie; Balasubramaniam, Ramesh

    2015-08-01

    The contributions of somatosensory, vestibular, and visual feedback to balance control are well documented, but the influence of auditory information, especially acoustic noise, on balance is less clear. Because somatosensory noise has been shown to reduce postural sway, we hypothesized that noise from the auditory modality might have a similar effect. Given that the nervous system uses noise to optimize signal transfer, adding mechanical or auditory noise should lead to increased feedback about the sensory frames of reference used in balance control. In the present experiment, postural sway was analyzed in healthy young adults while they were presented with continuous white noise, in the presence and absence of visual information. Our results show reduced postural sway variability (as indexed by the body's center of pressure) in the presence of auditory noise, even when visual information was not present. Nonlinear time series analysis revealed that auditory noise has an additive effect, independent of vision, on postural stability. Further analysis revealed that auditory noise reduced postural sway variability in both low- and high-frequency regimes. Our results support the idea that auditory white noise reduces postural sway, suggesting that auditory noise might be used for therapeutic and rehabilitation purposes in older individuals and those with balance disorders.
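
    For concreteness, the sketch below separates a centre-of-pressure trace into slow and fast components and reports sway variability in each, in the spirit of the low- and high-frequency analysis mentioned above. The 0.3-Hz cut-off, the array name `cop`, and the sampling rate `fs` are assumptions, not values from the study.

```python
# Illustrative sketch under stated assumptions, not the study's analysis code:
# splitting a centre-of-pressure (COP) trace into slow and fast components and
# measuring sway variability (SD) in each. The array name `cop`, sampling rate
# `fs`, and the 0.3-Hz cut-off are assumptions, not values from the paper.
import numpy as np
from scipy.signal import butter, sosfiltfilt

def sway_variability_by_band(cop, fs, cutoff_hz=0.3):
    cop = cop - np.mean(cop)                                  # remove static offset
    sos = butter(4, cutoff_hz, btype="low", fs=fs, output="sos")
    slow = sosfiltfilt(sos, cop)                              # slow, exploratory sway component
    fast = cop - slow                                         # fast, feedback-related component
    return np.std(slow), np.std(fast)
```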

  9. Background Noise Degrades Central Auditory Processing in Toddlers.

    Science.gov (United States)

    Niemitalo-Haapola, Elina; Haapala, Sini; Jansson-Verkasalo, Eira; Kujala, Teija

    2015-01-01

    Noise, as an unwanted sound, has become one of modern society's environmental conundrums, and many children are exposed to higher noise levels than previously assumed. However, the effects of background noise on central auditory processing of toddlers, who are still acquiring language skills, have so far not been determined. The authors evaluated the effects of background noise on toddlers' speech-sound processing by recording event-related brain potentials. The hypothesis was that background noise modulates neural speech-sound encoding and degrades speech-sound discrimination. Obligatory P1 and N2 responses for standard syllables and the mismatch negativity (MMN) response for five different syllable deviants presented in a linguistic multifeature paradigm were recorded in silent and background noise conditions. The participants were 18 typically developing 22- to 26-month-old monolingual children with healthy ears. The results showed that the P1 amplitude was smaller and the N2 amplitude larger in the noisy conditions compared with the silent conditions. In the noisy condition, the MMN was absent for the intensity and vowel changes and diminished for the consonant, frequency, and vowel duration changes embedded in speech syllables. Furthermore, the frontal MMN component was attenuated in the noisy condition. However, noise had no effect on P1, N2, or MMN latencies. The results from this study suggest multiple effects of background noise on the central auditory processing of toddlers. It modulates the early stages of sound encoding and dampens neural discrimination vital for accurate speech perception. These results imply that speech processing of toddlers, who may spend long periods of daytime in noisy conditions, is vulnerable to background noise. In noisy conditions, toddlers' neural representations of some speech sounds might be weakened. Thus, special attention should be paid to acoustic conditions and background noise levels in children's daily environments

  10. Modification of computational auditory scene analysis (CASA) for noise-robust acoustic feature

    Science.gov (United States)

    Kwon, Minseok

    While there have been many attempts to mitigate interference from background noise, the performance of automatic speech recognition (ASR) can still be degraded with ease by various factors. However, normal-hearing listeners can accurately perceive the sounds of interest to them, which is believed to be a result of Auditory Scene Analysis (ASA). As a first attempt, a simulation of human auditory processing, called computational auditory scene analysis (CASA), was built on physiological and psychological investigations of ASA. The CASA system comprised the Zilany-Bruce auditory model, followed by tracking of the fundamental frequency for voiced segmentation and detection of onset/offset pairs at each characteristic frequency (CF) for unvoiced segmentation. The resulting time-frequency (T-F) representation of the acoustic stimulation was converted into an acoustic feature, gammachirp-tone frequency cepstral coefficients (GFCC). Eleven keywords recorded under various environmental conditions were used, and the robustness of GFCC was evaluated by spectral distance (SD) and dynamic time warping distance (DTW). In "clean" and "noisy" conditions, the application of CASA generally improved the noise robustness of the acoustic feature compared to a conventional method with or without noise suppression using an MMSE estimator. The initial study, however, not only showed a noise-type dependency at low SNR, but also called the evaluation methods into question. Some modifications were made to capture better spectral continuity from an acoustic feature matrix, to obtain faster processing speed, and to describe the human auditory system more precisely. The proposed framework includes: 1) multi-scale integration to capture more accurate continuity in feature extraction, 2) contrast enhancement (CE) of each CF by competition with neighboring frequency bands, and 3) auditory model modifications. The model modifications include the introduction of a higher Q factor and a middle ear filter more analogous to the human auditory system
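
    To make the GFCC idea concrete, here is a rough sketch of a gammatone-based cepstral feature: band envelopes from a gammatone filterbank, log energies per frame, then a DCT. It relies on SciPy's gammatone filter design (present in recent SciPy releases) and is not the thesis implementation; the centre-frequency range, frame length, and coefficient count are illustrative choices.

```python
# Rough sketch of a gammatone-based cepstral feature in the spirit of the GFCC
# described above; it is not the thesis implementation. It relies on SciPy's
# gammatone filter design (available in recent SciPy releases); the centre
# frequencies, frame length, and coefficient count are illustrative choices.
import numpy as np
from scipy.signal import gammatone, lfilter
from scipy.fft import dct

def gfcc_like(x, fs, n_bands=32, n_coeff=13, frame_len=400, hop=160):
    cfs = np.geomspace(100.0, 0.45 * fs, n_bands)             # centre frequencies (Hz)
    env = np.empty((n_bands, len(x)))
    for i, cf in enumerate(cfs):
        b, a = gammatone(cf, "iir", fs=fs)                    # 4th-order gammatone by default
        env[i] = np.abs(lfilter(b, a, x))                     # crude per-band envelope
    n_frames = 1 + (len(x) - frame_len) // hop
    feats = np.empty((n_frames, n_coeff))
    for t in range(n_frames):
        frame = env[:, t * hop: t * hop + frame_len]
        log_e = np.log(np.mean(frame, axis=1) + 1e-10)        # log band energies
        feats[t] = dct(log_e, norm="ortho")[:n_coeff]         # cepstral coefficients
    return feats
```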

  11. Signs of noise-induced neural degeneration in humans

    DEFF Research Database (Denmark)

    Holtegaard, Pernille; Olsen, Steen Østergaard

    2015-01-01

    Animal studies demonstrated that noise exposure causes a primary and selective loss of auditory-nerve fibres with low spontaneous firing rate. This neuronal impairment, if also present in humans, can be assumed to affect the processing of supra-threshold stimuli, especially in the presence of background noise, while leaving the processing of low-level stimuli unaffected. The purpose of this study was to investigate if signs of such primary neural damage from noise exposure could also be found in noise-exposed human individuals. It was investigated: (1) if noise-exposed listeners with hearing thresholds within the "normal" range perform poorer, in terms of their speech recognition threshold in noise (SRTN), and (2) if auditory brainstem responses (ABR) reveal lower amplitude of wave I in the noise-exposed listeners. A test group of noise/music-exposed individuals and a control group were...

  12. The effect of noise exposure during the developmental period on the function of the auditory system.

    Science.gov (United States)

    Bureš, Zbyněk; Popelář, Jiří; Syka, Josef

    2017-09-01

    Recently, there has been growing evidence that development and maturation of the auditory system depends substantially on the afferent activity supplying inputs to the developing centers. In cases when this activity is altered during early ontogeny as a consequence of, e.g., an unnatural acoustic environment or acoustic trauma, the structure and function of the auditory system may be severely affected. Pathological alterations may be found in populations of ribbon synapses of the inner hair cells, in the structure and function of neuronal circuits, or in auditory driven behavioral and psychophysical performance. Three characteristics of the developmental impairment are of key importance: first, they often persist to adulthood, permanently influencing the quality of life of the subject; second, their manifestations are different and sometimes even contradictory to the impairments induced by noise trauma in adulthood; third, they may be 'hidden' and difficult to diagnose by standard audiometric procedures used in clinical practice. This paper reviews the effects of early interventions to the auditory system, in particular, of sound exposure during ontogeny. We summarize the results of recent morphological, electrophysiological, and behavioral experiments, discuss the putative mechanisms and hypotheses, and draw possible consequences for human neonatal medicine and noise health. Copyright © 2016 Elsevier B.V. All rights reserved.

  13. Speech-in-Noise Tests and Supra-threshold Auditory Evoked Potentials as Metrics for Noise Damage and Clinical Trial Outcome Measures.

    Science.gov (United States)

    Le Prell, Colleen G; Brungart, Douglas S

    2016-09-01

    In humans, the accepted clinical standards for detecting hearing loss are the behavioral audiogram, based on the absolute detection threshold of pure-tones, and the threshold auditory brainstem response (ABR). The audiogram and the threshold ABR are reliable and sensitive measures of hearing thresholds in human listeners. However, recent results from noise-exposed animals demonstrate that noise exposure can cause substantial neurodegeneration in the peripheral auditory system without degrading pure-tone audiometric thresholds. It has been suggested that clinical measures of auditory performance conducted with stimuli presented above the detection threshold may be more sensitive than the behavioral audiogram in detecting early-stage noise-induced hearing loss in listeners with audiometric thresholds within normal limits. Supra-threshold speech-in-noise testing and supra-threshold ABR responses are reviewed here, given that they may be useful supplements to the behavioral audiogram for assessment of possible neurodegeneration in noise-exposed listeners. Supra-threshold tests may be useful for assessing the effects of noise on the human inner ear, and the effectiveness of interventions designed to prevent noise trauma. The current state of the science does not necessarily allow us to define a single set of best practice protocols. Nonetheless, we encourage investigators to incorporate these metrics into test batteries when feasible, with an effort to standardize procedures to the greatest extent possible as new reports emerge.

  14. Non-auditory effects of noise in industry. V. A field study in a shipyard

    NARCIS (Netherlands)

    van Dijk, F. J.; Verbeek, J. H.; de Fries, F. F.

    1987-01-01

    Workers of a shipbuilding and a machine shop department of a shipyard, with average noise levels of 98 dB(A) and 85.5 dB(A) respectively, were compared with respect to auditory and non-auditory effects. The distribution of years of noise exposure and of age was similar in both departments. No

  15. Auditory stream segregation using amplitude modulated bandpass noise

    Directory of Open Access Journals (Sweden)

    Yingjiu eNie

    2015-08-01

    The purpose of this study was to investigate the roles of spectral overlap and amplitude modulation (AM) rate in stream segregation for noise signals, as well as to test the build-up effect based on these two cues. Segregation ability was evaluated using an objective paradigm with listeners' attention focused on stream segregation. Stimulus sequences consisted of two interleaved sets of bandpass noise bursts (A and B bursts). The A and B bursts differed in spectrum, AM rate, or both. The amount of the difference between the two sets of noise bursts was varied. Long and short sequences were studied to investigate the build-up effect for segregation based on spectral and AM-rate differences. Results showed the following: 1. Stream segregation ability increased with greater spectral separation. 2. Larger AM-rate separations were associated with stronger segregation abilities. 3. Spectral separation was found to elicit the build-up effect for the range of spectral differences assessed in the current study. 4. AM-rate separation interacted with spectral separation, suggesting an additive effect of spectral separation and AM-rate separation on segregation build-up. The findings suggest that, when normal-hearing listeners direct their attention toward segregation, they are able to segregate auditory streams based on reduced spectral contrast cues that vary by the amount of spectral overlap. Further, regardless of the spectral separation, they were able to use AM-rate difference as a secondary, weaker cue. Based on spectral differences, listeners can segregate auditory streams better as the listening duration is prolonged, i.e., sparse spectral cues elicit build-up segregation; however, AM-rate differences only appear to elicit build-up in combination with spectral difference cues.

  16. A computational model of human auditory signal processing and perception

    DEFF Research Database (Denmark)

    Jepsen, Morten Løve; Ewert, Stephan D.; Dau, Torsten

    2008-01-01

    A model of computational auditory signal-processing and perception that accounts for various aspects of simultaneous and nonsimultaneous masking in human listeners is presented. The model is based on the modulation filterbank model described by Dau et al. [J. Acoust. Soc. Am. 102, 2892 (1997)] … discrimination with pure tones and broadband noise, tone-in-noise detection, spectral masking with narrow-band signals and maskers, forward masking with tone signals and tone or noise maskers, and amplitude-modulation detection with narrow- and wideband noise carriers. The model can account for most of the key properties of the data and is more powerful than the original model. The model might be useful as a front end in technical applications.
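
    As a toy illustration of the modulation-filterbank stage at the heart of such models, the sketch below extracts an envelope and passes it through a small bank of band-pass modulation filters. It omits the peripheral, adaptation, internal-noise, and decision stages of the actual model, and the filter parameters are assumptions.

```python
# Toy illustration of the modulation-filterbank stage at the heart of such
# models, not the published model: it omits the peripheral, adaptation,
# internal-noise, and decision stages, and the filter parameters are assumptions.
import numpy as np
from scipy.signal import butter, sosfiltfilt

def modulation_filterbank(x, fs, mod_cfs=(4.0, 8.0, 16.0, 32.0, 64.0)):
    env = np.maximum(x, 0.0)                                  # half-wave rectification
    sos = butter(1, 150.0, btype="low", fs=fs, output="sos")
    env = sosfiltfilt(sos, env)                               # 150-Hz envelope low-pass
    outputs = []
    for cf in mod_cfs:                                        # roughly octave-spaced, Q ~ 1 filters
        sos = butter(2, [cf / np.sqrt(2.0), cf * np.sqrt(2.0)],
                     btype="band", fs=fs, output="sos")
        outputs.append(sosfiltfilt(sos, env))
    return np.array(outputs)
```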

  17. Testing resonating vector strength: Auditory system, electric fish, and noise

    Science.gov (United States)

    Leo van Hemmen, J.; Longtin, André; Vollmayr, Andreas N.

    2011-12-01

    Quite often a response to some input with a specific frequency ν₀ can be described through a sequence of discrete events. Here, we study the synchrony vector, whose length stands for the vector strength, and in doing so focus on neuronal response in terms of spike times. The latter are supposed to be given by experiment. Instead of singling out the stimulus frequency ν₀, we study the synchrony vector as a function of the real frequency variable ν. Its length turns out to be a resonating vector strength in that it shows clear maxima in the neighborhood of ν₀ and multiples thereof, hence allowing an easy way of determining response frequencies. We study this "resonating" vector strength for two concrete but rather different cases, viz., a specific midbrain neuron in the auditory system of the cat and a primary detector neuron belonging to the electric sense of the wave-type electric fish Apteronotus leptorhynchus. We show that the resonating vector strength always shows a clear resonance correlated with the phase locking that it quantifies. We analyze the influence of noise and demonstrate how well the resonance associated with maximal vector strength indicates the dominant stimulus frequency. Furthermore, we exhibit how one can obtain a specific phase associated with, for instance, a delay in auditory analysis.
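
    As a rough illustration of the quantity described above (not the authors' code), the synchrony vector at a trial frequency ν is the mean of the unit phasors exp(2πiνt_j) over the spike times t_j, and its length is the vector strength; scanning ν then reveals resonances near the stimulus frequency and its multiples. A minimal Python sketch with synthetic, hypothetical spike times:

        import numpy as np

        def resonating_vector_strength(spike_times, freqs):
            """Vector strength |mean_j exp(2*pi*i*nu*t_j)| evaluated at each trial frequency nu."""
            t = np.asarray(spike_times)[None, :]          # shape (1, n_spikes)
            nu = np.asarray(freqs)[:, None]               # shape (n_freqs, 1)
            return np.abs(np.exp(2j * np.pi * nu * t).mean(axis=1))

        # Hypothetical data: spikes locked to a 300-Hz stimulus with small timing jitter.
        rng = np.random.default_rng(0)
        nu0 = 300.0
        spikes = np.arange(0.0, 1.0, 1.0 / nu0) + rng.normal(0.0, 0.3e-3, size=300)
        scan = np.linspace(100.0, 1000.0, 1801)           # the real frequency variable nu
        vs = resonating_vector_strength(spikes, scan)
        print("peak near", scan[np.argmax(vs)], "Hz; vector strength", round(vs.max(), 3))

    (The complex argument of the same mean phasor would give the response phase mentioned at the end of the abstract; only the magnitude is computed here.)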

  18. Noise-robust speech recognition through auditory feature detection and spike sequence decoding.

    Science.gov (United States)

    Schafer, Phillip B; Jin, Dezhe Z

    2014-03-01

    Speech recognition in noisy conditions is a major challenge for computer systems, but the human brain performs it routinely and accurately. Automatic speech recognition (ASR) systems that are inspired by neuroscience can potentially bridge the performance gap between humans and machines. We present a system for noise-robust isolated word recognition that works by decoding sequences of spikes from a population of simulated auditory feature-detecting neurons. Each neuron is trained to respond selectively to a brief spectrotemporal pattern, or feature, drawn from the simulated auditory nerve response to speech. The neural population conveys the time-dependent structure of a sound by its sequence of spikes. We compare two methods for decoding the spike sequences--one using a hidden Markov model-based recognizer, the other using a novel template-based recognition scheme. In the latter case, words are recognized by comparing their spike sequences to template sequences obtained from clean training data, using a similarity measure based on the length of the longest common sub-sequence. Using isolated spoken digits from the AURORA-2 database, we show that our combined system outperforms a state-of-the-art robust speech recognizer at low signal-to-noise ratios. Both the spike-based encoding scheme and the template-based decoding offer gains in noise robustness over traditional speech recognition methods. Our system highlights potential advantages of spike-based acoustic coding and provides a biologically motivated framework for robust ASR development.
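
    The template-based decoder described above scores a test utterance against each word template by the length of their longest common subsequence (LCS) of spike labels. The paper's exact implementation is not reproduced here; below is a minimal, hypothetical sketch of an LCS-based similarity and classifier using the standard dynamic-programming recurrence (the label sequences and the normalization by the longer sequence length are assumptions for illustration).

        def lcs_length(a, b):
            """Length of the longest common subsequence of two label sequences."""
            m, n = len(a), len(b)
            dp = [[0] * (n + 1) for _ in range(m + 1)]
            for i in range(1, m + 1):
                for j in range(1, n + 1):
                    if a[i - 1] == b[j - 1]:
                        dp[i][j] = dp[i - 1][j - 1] + 1
                    else:
                        dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
            return dp[m][n]

        def similarity(seq, template):
            """Normalized LCS similarity in [0, 1] (normalization choice is an assumption)."""
            if not seq or not template:
                return 0.0
            return lcs_length(seq, template) / max(len(seq), len(template))

        def classify(spike_sequence, templates):
            """Pick the word whose clean-speech template sequence is most similar."""
            return max(templates, key=lambda word: similarity(spike_sequence, templates[word]))

        # Hypothetical feature-neuron label sequences: two word templates and one noisy test token.
        templates = {"one": ["n3", "n7", "n1", "n9", "n2"], "two": ["n5", "n8", "n4", "n6"]}
        test = ["n3", "n5", "n1", "n9", "n2"]             # noisy rendition with spurious/missing spikes
        print(classify(test, templates))                  # -> "one"

    Because an LCS tolerates insertions and deletions, spurious or missing spikes caused by noise reduce the score only gradually, which is one intuition behind the robustness reported above.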

  19. Apoptotic mechanisms after repeated noise trauma in the mouse medial geniculate body and primary auditory cortex.

    Science.gov (United States)

    Fröhlich, Felix; Ernst, Arne; Strübing, Ira; Basta, Dietmar; Gröschel, Moritz

    2017-12-01

    A correlation between noise-induced apoptosis and cell loss has previously been shown after a single noise exposure in the cochlear nucleus, inferior colliculus, medial geniculate body (MGB) and primary auditory cortex (AI). However, repeated noise exposure is the most common situation in humans and a major risk factor for the induction of noise-induced hearing loss (NIHL). The present investigation measured cell death pathways using terminal deoxynucleotidyl transferase dUTP nick end labeling (TUNEL) in the dorsal, medial and ventral MGB (dMGB, mMGB and vMGB) and six layers of the AI (AI-1 to AI-6) in mice (NMRI strain) after a second noise exposure (double-exposure group). For comparison, single noise exposure groups were investigated 7 days (7-day-group-single) or 14 days (14-day-group-single) after noise exposure (3 h, 5-20 kHz, 115 dB SPL peak-to-peak). The double-exposure group received the same noise trauma for a second time 7 days after the initial exposure and was either TUNEL-stained immediately (7-day-group-double) or 1 week later (14-day-group-double), and data were compared to the corresponding single-trauma group as well as to an unexposed control group. TUNEL increased immediately after the second noise exposure in AI-3 and stayed upregulated in the 14-day-group-double. A significant increase in TUNEL was also seen in the 14-day-group-double in vMGB, mMGB and AI-1. The present results show for the first time the influence of repeated noise trauma on cell death mechanisms in thalamic and cortical structures and might contribute to the understanding of pathophysiological findings and psychoacoustic phenomena accompanying NIHL.

  20. Recent advances in research on non-auditory effects of community noise

    Directory of Open Access Journals (Sweden)

    Belojević Goran

    2016-01-01

    Non-auditory effects of noise on humans have been intensively studied in the last four decades. The International Commission on Biological Effects of Noise has been following scientific advances in this field by organizing international congresses from the first one in 1968 in Washington, DC, to the 11th congress in Nara, Japan, in 2014. There is already a large scientific body of evidence on the effects of noise on annoyance, communication, performance and behavior, mental health, sleep, and cardiovascular functions, including the relationship with hypertension and ischemic heart disease. In the last five years new issues in this field have been tackled. Large epidemiological studies on community noise have reported its relationship with breast cancer, stroke, type 2 diabetes, and obesity. It seems that noise-induced sleep disturbance may be one of the mediating factors in these effects. Given the large public health importance of the above-mentioned diseases, future studies should more thoroughly address the mechanisms underlying the reported association with community noise exposure. [Project of the Ministry of Science of the Republic of Serbia, No. 175078]

  1. Non-auditory effects of noise in industry. VI. A final field study in industry

    NARCIS (Netherlands)

    van Dijk, F. J.; Souman, A. M.; de Vries, F. F.

    1987-01-01

    Non-auditory effects of noise were studied among 539 male workers from seven industries. The LAeq, assessed by personal noise dosimetry, has been used to study acute effects. Various indices of total noise exposure, involving level and duration, were developed for long-term effect studies. In the

  2. Acute Noise Exposure Is Associated With Intrinsic Apoptosis in Murine Central Auditory Pathway

    Directory of Open Access Journals (Sweden)

    Moritz Gröschel

    2018-05-01

    Noise that is capable of inducing hearing loss (NIHL) has a strong impact on the inner ear structures and causes early and most obvious pathophysiological changes in the auditory periphery. Several studies indicated that intrinsic apoptotic cell death mechanisms are the key factors inducing cellular degeneration immediately after noise exposure and are maintained for days or even weeks. In addition, studies demonstrated several changes in the central auditory system following noise exposure, consistent with early apoptosis-related pathologies. To clarify the underlying mechanisms, the present study focused on the noise-induced gene and protein expression of the pro-apoptotic protease activating factor-1 (APAF1) and the anti-apoptotic B-cell lymphoma 2 related protein a1a (BCL2A1A) in the cochlear nucleus (CN), inferior colliculus (IC) and auditory cortex (AC) of the murine central auditory pathway. The expression of Bcl2a1a mRNA was upregulated immediately after trauma in all tissues investigated, whereas the protein levels were significantly reduced at least in the auditory brainstem. Conversely, acute noise decreased the expression of the Apaf1 gene along the auditory pathway. The changes in APAF1 protein level were not statistically significant. It is tempting to speculate that acoustic overstimulation leads to mitochondrial dysfunction and induction of apoptosis by regulation of proapoptotic and antiapoptotic proteins. The inverse expression pattern on the mRNA level of both genes might reflect a protective response to decrease cellular damage. Our results indicate the immediate presence of intrinsic apoptosis following noise trauma. This, in turn, may significantly contribute to the development of central structural deficits. Auditory pathway-specific inhibition of intrinsic apoptosis could be a therapeutic approach for the treatment of acute (noise-induced) hearing loss to prevent irreversible neuronal injury in auditory brain structures.

  3. Auditory and Non-Auditory Contributions for Unaided Speech Recognition in Noise as a Function of Hearing Aid Use

    OpenAIRE

    Gieseler, Anja; Tahden, Maike A. S.; Thiel, Christiane M.; Wagener, Kirsten C.; Meis, Markus; Colonius, Hans

    2017-01-01

    Differences in understanding speech in noise among hearing-impaired individuals cannot be explained entirely by hearing thresholds alone, suggesting the contribution of other factors beyond standard auditory ones as derived from the audiogram. This paper reports two analyses addressing individual differences in the explanation of unaided speech-in-noise performance among n = 438 elderly hearing-impaired listeners (mean = 71.1 ± 5.8 years). The main analysis was designed to identify clinically...

  4. In Vitro Studies and Preliminary Mathematical Model for Jet Fuel and Noise Induced Auditory Impairment

    Science.gov (United States)

    2015-06-01

    [Only report-documentation fragments were extracted for this record; no abstract is recoverable. Report AFRL-RH-WP-TR-2015-0084, "In Vitro Studies and Preliminary Mathematical Model for Jet Fuel and Noise Induced Auditory Impairment," reporting period April 2014 – September 2014; the fragments also include part of a cited reference on JP-8 and a Fischer-Tropsch synthetic jet fuel following subacute inhalation exposure in rats (Toxicol Sci 116(1): 239-248).]

  5. Tinnitus alters resting state functional connectivity (RSFC) in human auditory and non-auditory brain regions as measured by functional near-infrared spectroscopy (fNIRS).

    Science.gov (United States)

    San Juan, Juan; Hu, Xiao-Su; Issa, Mohamad; Bisconti, Silvia; Kovelman, Ioulia; Kileny, Paul; Basura, Gregory

    2017-01-01

    Tinnitus, or phantom sound perception, leads to increased spontaneous neural firing rates and enhanced synchrony in central auditory circuits in animal models. These putative physiologic correlates of tinnitus to date have not been well translated in the brain of the human tinnitus sufferer. Using functional near-infrared spectroscopy (fNIRS) we recently showed that tinnitus in humans leads to maintained hemodynamic activity in auditory and adjacent, non-auditory cortices. Here we used fNIRS technology to investigate changes in resting state functional connectivity between human auditory and non-auditory brain regions in normal-hearing, bilateral subjective tinnitus and controls before and after auditory stimulation. Hemodynamic activity was monitored over the region of interest (primary auditory cortex) and non-region of interest (adjacent non-auditory cortices) and functional brain connectivity was measured during a 60-second baseline/period of silence before and after a passive auditory challenge consisting of alternating pure tones (750 and 8000Hz), broadband noise and silence. Functional connectivity was measured between all channel-pairs. Prior to stimulation, connectivity of the region of interest to the temporal and fronto-temporal region was decreased in tinnitus participants compared to controls. Overall, connectivity in tinnitus was differentially altered as compared to controls following sound stimulation. Enhanced connectivity was seen in both auditory and non-auditory regions in the tinnitus brain, while controls showed a decrease in connectivity following sound stimulation. In tinnitus, the strength of connectivity was increased between auditory cortex and fronto-temporal, fronto-parietal, temporal, occipito-temporal and occipital cortices. Together these data suggest that central auditory and non-auditory brain regions are modified in tinnitus and that resting functional connectivity measured by fNIRS technology may contribute to conscious phantom

  6. Tinnitus alters resting state functional connectivity (RSFC) in human auditory and non-auditory brain regions as measured by functional near-infrared spectroscopy (fNIRS).

    Directory of Open Access Journals (Sweden)

    Juan San Juan

    Tinnitus, or phantom sound perception, leads to increased spontaneous neural firing rates and enhanced synchrony in central auditory circuits in animal models. These putative physiologic correlates of tinnitus to date have not been well translated in the brain of the human tinnitus sufferer. Using functional near-infrared spectroscopy (fNIRS) we recently showed that tinnitus in humans leads to maintained hemodynamic activity in auditory and adjacent, non-auditory cortices. Here we used fNIRS technology to investigate changes in resting state functional connectivity between human auditory and non-auditory brain regions in normal-hearing, bilateral subjective tinnitus and controls before and after auditory stimulation. Hemodynamic activity was monitored over the region of interest (primary auditory cortex) and non-region of interest (adjacent non-auditory cortices), and functional brain connectivity was measured during a 60-second baseline/period of silence before and after a passive auditory challenge consisting of alternating pure tones (750 and 8000Hz), broadband noise and silence. Functional connectivity was measured between all channel-pairs. Prior to stimulation, connectivity of the region of interest to the temporal and fronto-temporal region was decreased in tinnitus participants compared to controls. Overall, connectivity in tinnitus was differentially altered as compared to controls following sound stimulation. Enhanced connectivity was seen in both auditory and non-auditory regions in the tinnitus brain, while controls showed a decrease in connectivity following sound stimulation. In tinnitus, the strength of connectivity was increased between auditory cortex and fronto-temporal, fronto-parietal, temporal, occipito-temporal and occipital cortices. Together these data suggest that central auditory and non-auditory brain regions are modified in tinnitus and that resting functional connectivity measured by fNIRS technology may contribute to

  7. Auditory and Non-Auditory Contributions for Unaided Speech Recognition in Noise as a Function of Hearing Aid Use.

    Science.gov (United States)

    Gieseler, Anja; Tahden, Maike A S; Thiel, Christiane M; Wagener, Kirsten C; Meis, Markus; Colonius, Hans

    2017-01-01

    Differences in understanding speech in noise among hearing-impaired individuals cannot be explained entirely by hearing thresholds alone, suggesting the contribution of other factors beyond standard auditory ones as derived from the audiogram. This paper reports two analyses addressing individual differences in the explanation of unaided speech-in-noise performance among n = 438 elderly hearing-impaired listeners (mean = 71.1 ± 5.8 years). The main analysis was designed to identify clinically relevant auditory and non-auditory measures for speech-in-noise prediction using auditory (audiogram, categorical loudness scaling) and cognitive tests (verbal-intelligence test, screening test of dementia), as well as questionnaires assessing various self-reported measures (health status, socio-economic status, and subjective hearing problems). Using stepwise linear regression analysis, 62% of the variance in unaided speech-in-noise performance was explained, with the measures Pure-tone average (PTA), Age, and Verbal intelligence emerging as the three most important predictors. In the complementary analysis, those individuals with the same hearing loss profile were separated into hearing aid users (HAU) and non-users (NU), and were then compared regarding potential differences in the test measures and in explaining unaided speech-in-noise recognition. The groupwise comparisons revealed significant differences in auditory measures and self-reported subjective hearing problems, while no differences in the cognitive domain were found. Furthermore, groupwise regression analyses revealed that Verbal intelligence had a predictive value in both groups, whereas Age and PTA emerged as significant only in the group of hearing aid NU.

  8. Assessment of auditory impression of the coolness and warmness of automotive HVAC noise.

    Science.gov (United States)

    Nakagawa, Seiji; Hotehama, Takuya; Kamiya, Masaru

    2017-07-01

    Noise induced by a heating, ventilation and air conditioning (HVAC) system in a vehicle is an important factor that affects the comfort of the interior of a car cabin. Much effort has been devoted to reducing noise levels; however, there is also a need for a new sound design that addresses the noise problem from a different point of view. In this study, focusing on the auditory impression of automotive HVAC noise concerning coolness and warmness, psychoacoustical listening tests were performed using a paired comparison technique under various conditions of room temperature. Five stimuli were synthesized by stretching the spectral envelopes of recorded automotive HVAC noise to assess the effect of the spectral centroid, and were presented to normal-hearing subjects. Results show that the spectral centroid significantly affects the auditory impression concerning coolness and warmness; a higher spectral centroid induces a cooler auditory impression regardless of the room temperature.
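
    For reference (this is background, not material taken from the paper), the spectral centroid manipulated above is the amplitude-weighted mean frequency of a sound's magnitude spectrum. A brief Python sketch with a synthetic, spectrally shaped noise standing in for recorded HVAC noise; the envelope shape and scale factors below are arbitrary choices for illustration:

        import numpy as np

        def spectral_centroid(x, fs):
            """Amplitude-weighted mean frequency (Hz) of the magnitude spectrum."""
            spec = np.abs(np.fft.rfft(x * np.hanning(len(x))))
            freqs = np.fft.rfftfreq(len(x), 1.0 / fs)
            return np.sum(freqs * spec) / np.sum(spec)

        # Hypothetical stand-in for HVAC noise: white noise shaped by an exponentially decaying
        # spectral envelope; a more slowly decaying ("stretched") envelope yields a higher centroid.
        fs, n = 44100, 44100
        rng = np.random.default_rng(1)
        white_spec = np.fft.rfft(rng.standard_normal(n))
        freqs = np.fft.rfftfreq(n, 1.0 / fs)
        for scale in (1000.0, 2000.0):                    # envelope decay constants in Hz
            shaped = np.fft.irfft(white_spec * np.exp(-freqs / scale), n)
            print(scale, "->", round(spectral_centroid(shaped, fs)), "Hz")

    In the study's terms, the more slowly decaying envelope (higher centroid) would correspond to the stimuli judged cooler.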

  9. Neural Segregation of Concurrent Speech: Effects of Background Noise and Reverberation on Auditory Scene Analysis in the Ventral Cochlear Nucleus.

    Science.gov (United States)

    Sayles, Mark; Stasiak, Arkadiusz; Winter, Ian M

    2016-01-01

    Concurrent complex sounds (e.g., two voices speaking at once) are perceptually disentangled into separate "auditory objects". This neural processing often occurs in the presence of acoustic-signal distortions from noise and reverberation (e.g., in a busy restaurant). A difference in periodicity between sounds is a strong segregation cue under quiet, anechoic conditions. However, noise and reverberation exert differential effects on speech intelligibility under "cocktail-party" listening conditions. Previous neurophysiological studies have concentrated on understanding auditory scene analysis under ideal listening conditions. Here, we examine the effects of noise and reverberation on periodicity-based neural segregation of concurrent vowels /a/ and /i/, in the responses of single units in the guinea-pig ventral cochlear nucleus (VCN): the first processing station of the auditory brain stem. In line with human psychoacoustic data, we find reverberation significantly impairs segregation when vowels have an intonated pitch contour, but not when they are spoken on a monotone. In contrast, noise impairs segregation independent of intonation pattern. These results are informative for models of speech processing under ecologically valid listening conditions, where noise and reverberation abound.

  10. Auditory-neurophysiological responses to speech during early childhood: Effects of background noise.

    Science.gov (United States)

    White-Schwoch, Travis; Davies, Evan C; Thompson, Elaine C; Woodruff Carr, Kali; Nicol, Trent; Bradlow, Ann R; Kraus, Nina

    2015-10-01

    Early childhood is a critical period of auditory learning, during which children are constantly mapping sounds to meaning. But this auditory learning rarely occurs in ideal listening conditions-children are forced to listen against a relentless din. This background noise degrades the neural coding of these critical sounds, in turn interfering with auditory learning. Despite the importance of robust and reliable auditory processing during early childhood, little is known about the neurophysiology underlying speech processing in children so young. To better understand the physiological constraints these adverse listening scenarios impose on speech sound coding during early childhood, auditory-neurophysiological responses were elicited to a consonant-vowel syllable in quiet and background noise in a cohort of typically-developing preschoolers (ages 3-5 yr). Overall, responses were degraded in noise: they were smaller, less stable across trials, slower, and there was poorer coding of spectral content and the temporal envelope. These effects were exacerbated in response to the consonant transition relative to the vowel, suggesting that the neural coding of spectrotemporally-dynamic speech features is more tenuous in noise than the coding of static features-even in children this young. Neural coding of speech temporal fine structure, however, was more resilient to the addition of background noise than coding of temporal envelope information. Taken together, these results demonstrate that noise places a neurophysiological constraint on speech processing during early childhood by causing a breakdown in neural processing of speech acoustics. These results may explain why some listeners have inordinate difficulties understanding speech in noise. Speech-elicited auditory-neurophysiological responses offer objective insight into listening skills during early childhood by reflecting the integrity of neural coding in quiet and noise; this paper documents typical response

  11. Contralateral white noise attenuates 40-Hz auditory steady-state fields but not N100m in auditory evoked fields.

    Science.gov (United States)

    Kawase, Tetsuaki; Maki, Atsuko; Kanno, Akitake; Nakasato, Nobukazu; Sato, Mika; Kobayashi, Toshimitsu

    2012-01-16

    The different response characteristics of the different auditory cortical responses under conventional central masking conditions were examined by comparing the effects of contralateral white noise on the cortical component of 40-Hz auditory steady state fields (ASSFs) and the N100 m component in auditory evoked fields (AEFs) for tone bursts using a helmet-shaped magnetoencephalography system in 8 healthy volunteers (7 males, mean age 32.6 years). The ASSFs were elicited by monaural 1000 Hz amplitude modulation tones at 80 dB SPL, with the amplitude modulated at 39 Hz. The AEFs were elicited by monaural 1000 Hz tone bursts of 60 ms duration (rise and fall times of 10 ms, plateau time of 40 ms) at 80 dB SPL. The results indicated that continuous white noise at 70 dB SPL presented to the contralateral ear did not suppress the N100 m response in either hemisphere, but significantly reduced the amplitude of the 40-Hz ASSF in both hemispheres with asymmetry in that suppression of the 40-Hz ASSF was greater in the right hemisphere. Different effects of contralateral white noise on these two responses may reflect different functional auditory processes in the cortices. Copyright © 2011 Elsevier Inc. All rights reserved.

  12. Auditory Brainstem Response to Complex Sounds Predicts Self-Reported Speech-in-Noise Performance

    Science.gov (United States)

    Anderson, Samira; Parbery-Clark, Alexandra; White-Schwoch, Travis; Kraus, Nina

    2013-01-01

    Purpose: To compare the ability of the auditory brainstem response to complex sounds (cABR) to predict subjective ratings of speech understanding in noise on the Speech, Spatial, and Qualities of Hearing Scale (SSQ; Gatehouse & Noble, 2004) relative to the predictive ability of the Quick Speech-in-Noise test (QuickSIN; Killion, Niquette,…

  13. [Auditory training with wide-band white noise: effects on the recruitment (III)].

    Science.gov (United States)

    Domínguez Ugidos, L J; Rodríguez Morejón, C; Vallés Varela, H; Iparraguirre Bolinaga, V; Knaster del Olmo, J

    2001-05-01

    Auditory training with wide-band white noise is a method aimed at the qualitative recovery of hearing in people suffering from sensorineural hearing loss. It is based on the application of a modified wide-band white noise. In a prospective study, we assessed changes in the recruitment coefficient in a sample of 48 patients who completed a program of 15 auditory training sessions with wide-band white noise. The average improvement of the recruitment coefficient was 7.7498%, rising to 23.5249% for the binaural recruitment coefficient. From our results it can be deduced that auditory training with wide-band white noise reduces recruitment; that is, recruitment at high intensities decreased both binaurally and in each ear.

  14. Auditory perception of a human walker.

    Science.gov (United States)

    Cottrell, David; Campbell, Megan E J

    2014-01-01

    When one hears footsteps in the hall, one is able to instantly recognise it as a person: this is an everyday example of auditory biological motion perception. Despite the familiarity of this experience, research into this phenomenon is in its infancy compared with visual biological motion perception. Here, two experiments explored sensitivity to, and recognition of, auditory stimuli of biological and nonbiological origin. We hypothesised that the cadence of a walker gives rise to a temporal pattern of impact sounds that facilitates the recognition of human motion from auditory stimuli alone. First, a series of detection tasks compared sensitivity to three carefully matched impact sounds: footsteps, a ball bouncing, and drumbeats. Unexpectedly, participants were no more sensitive to footsteps than to impact sounds of nonbiological origin. In the second experiment participants made discriminations between pairs of the same stimuli, in a series of recognition tasks in which the temporal pattern of impact sounds was manipulated to be either that of a walker or the pattern more typical of the source event (a ball bouncing or a drumbeat). Under these conditions, there was evidence that both temporal and nontemporal cues were important in recognising these stimuli. It is proposed that the interval between footsteps, which reflects a walker's cadence, is a cue for the recognition of the sounds of a human walking.

  15. Auditory and Cognitive Factors Associated with Speech-in-Noise Complaints following Mild Traumatic Brain Injury.

    Science.gov (United States)

    Hoover, Eric C; Souza, Pamela E; Gallun, Frederick J

    2017-04-01

    Auditory complaints following mild traumatic brain injury (MTBI) are common, but few studies have addressed the role of auditory temporal processing in speech recognition complaints. In this study, deficits understanding speech in a background of speech noise following MTBI were evaluated with the goal of comparing the relative contributions of auditory and nonauditory factors. A matched-groups design was used in which a group of listeners with a history of MTBI were compared to a group matched in age and pure-tone thresholds, as well as a control group of young listeners with normal hearing (YNH). Of the 33 listeners who participated in the study, 13 were included in the MTBI group (mean age = 46.7 yr), 11 in the Matched group (mean age = 49 yr), and 9 in the YNH group (mean age = 20.8 yr). Speech-in-noise deficits were evaluated using subjective measures as well as monaural word (Words-in-Noise test) and sentence (Quick Speech-in-Noise test) tasks, and a binaural spatial release task. Performance on these measures was compared to psychophysical tasks that evaluate monaural and binaural temporal fine-structure tasks and spectral resolution. Cognitive measures of attention, processing speed, and working memory were evaluated as possible causes of differences between MTBI and Matched groups that might contribute to speech-in-noise perception deficits. A high proportion of listeners in the MTBI group reported difficulty understanding speech in noise (84%) compared to the Matched group (9.1%), and listeners who reported difficulty were more likely to have abnormal results on objective measures of speech in noise. No significant group differences were found between the MTBI and Matched listeners on any of the measures reported, but the number of abnormal tests differed across groups. Regression analysis revealed that a combination of auditory and auditory processing factors contributed to monaural speech-in-noise scores, but the benefit of spatial separation was

  16. High levels of sound pressure: acoustic reflex thresholds and auditory complaints of workers with noise exposure

    Directory of Open Access Journals (Sweden)

    Alexandre Scalli Mathias Duarte

    2015-08-01

    INTRODUCTION: The clinical evaluation of subjects with occupational noise exposure has been difficult due to the discrepancy between auditory complaints and auditory test results. This study aimed to evaluate the contralateral acoustic reflex thresholds of workers exposed to high levels of noise, and to compare these results to the subjects' auditory complaints. METHODS: This clinical retrospective study evaluated 364 workers between 1998 and 2005; their contralateral acoustic reflexes were compared to auditory complaints, age, and noise exposure time by chi-squared, Fisher's, and Spearman's tests. RESULTS: The workers' age ranged from 18 to 50 years (mean = 39.6), and noise exposure time from one to 38 years (mean = 17.3). We found that 15.1% (55) of the workers had bilateral hearing loss, 38.5% (140) had bilateral tinnitus, 52.8% (192) had abnormal sensitivity to loud sounds, and 47.2% (172) had speech recognition impairment. The variables hearing loss, speech recognition impairment, tinnitus, age group, and noise exposure time did not show a relationship with acoustic reflex thresholds; however, all complaints demonstrated a statistically significant relationship with Metz recruitment at 3000 and 4000 Hz bilaterally. CONCLUSION: There was no significant relationship between auditory complaints and acoustic reflexes.

  17. Computerized classification of auditory trauma: Results of an investigation on screening employees exposed to noise

    Science.gov (United States)

    Klockhoff, I.

    1977-01-01

    An automatic, computerized method was developed to classify results from a screening of employees exposed to noise, resulting in a fast and effective method of identifying and taking measures against auditory trauma. This technique also satisfies the urgent need for quick discovery of cases which deserve compensation in accordance with the Law on Industrial Accident Insurance. Unfortunately, use of this method increases the burden on the already overloaded investigatory resources of the auditory health care system.

  18. Concentrated pitch discrimination modulates auditory brainstem responses during contralateral noise exposure.

    Science.gov (United States)

    Ikeda, Kazunari; Sekiguchi, Takahiro; Hayashi, Akiko

    2010-03-31

    This study examined the notion that auditory discrimination is a requisite for attention-related modulation of the auditory brainstem response (ABR) during contralateral noise exposure. While the right ear was exposed continuously to white noise at an intensity of 60-80 dB sound pressure level, tone pips at 80 dB sound pressure level were delivered to the left ear through either single-stimulus or oddball procedures. Participants performed reading (ignoring task) and counting of target tones (attentive task) during stimulation. The oddball but not the single-stimulus procedure elicited task-related modulations in both early (ABR) and late (processing negativity) event-related potentials simultaneously. The elicitation of attention-related ABR modulation during contralateral noise exposure is thus considered to require auditory discrimination and evidently to be corticofugal in nature.

  19. Predicting the threshold of pulse-train electrical stimuli using a stochastic auditory nerve model: the effects of stimulus noise.

    Science.gov (United States)

    Xu, Yifang; Collins, Leslie M

    2004-04-01

    The incorporation of low levels of noise into an electrical stimulus has been shown to improve auditory thresholds in some human subjects (Zeng et al., 2000). In this paper, thresholds for noise-modulated pulse-train stimuli are predicted utilizing a stochastic neural-behavioral model of ensemble fiber responses to bi-phasic stimuli. The neural refractory effect is described using a Markov model for a noise-free pulse-train stimulus and a closed-form solution for the steady-state neural response is provided. For noise-modulated pulse-train stimuli, a recursive method using the conditional probability is utilized to track the neural responses to each successive pulse. A neural spike count rule has been presented for both threshold and intensity discrimination under the assumption that auditory perception occurs via integration over a relatively long time period (Bruce et al., 1999). An alternative approach originates from the hypothesis of the multilook model (Viemeister and Wakefield, 1991), which argues that auditory perception is based on several shorter time integrations and may suggest an NofM model for prediction of pulse-train threshold. This motivates analyzing the neural response to each individual pulse within a pulse train, which is considered to be the brief look. A logarithmic rule is hypothesized for pulse-train threshold. Predictions from the multilook model are shown to match trends in psychophysical data for noise-free stimuli that are not always matched by the long-time integration rule. Theoretical predictions indicate that threshold decreases as noise variance increases. Theoretical models of the neural response to pulse-train stimuli not only reduce calculational overhead but also facilitate utilization of signal detection theory and are easily extended to multichannel psychophysical tasks.
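
    To make the recursive bookkeeping described above concrete, the following is a highly simplified sketch, not the authors' model: an ensemble of stochastic fibers with Gaussian threshold fluctuation, an added per-pulse noise term, and an absolute refractory period spanning a fixed number of inter-pulse intervals. All parameter values, the Gaussian assumption, and the refractory approximation are illustrative assumptions.

        import math

        def p_fire_given_excitable(amplitude, theta=1.0, sigma_fiber=0.06, sigma_noise=0.0):
            """Per-pulse firing probability for an excitable fiber.

            Fiber threshold fluctuation and additive stimulus noise are combined as
            independent Gaussians (an assumption), giving P(fire) = Phi((A - theta) / s).
            """
            s = math.hypot(sigma_fiber, sigma_noise)
            return 0.5 * (1.0 + math.erf((amplitude - theta) / (s * math.sqrt(2.0))))

        def pulse_train_response(amplitudes, refractory_pulses=2, **kw):
            """Recursively track P(fire at pulse k) across a pulse train.

            A fiber is treated as excitable at pulse k only if it did not fire on any of
            the preceding `refractory_pulses` pulses (a simple Markov-style bookkeeping).
            """
            p_fire = []
            for k, amp in enumerate(amplitudes):
                recent = p_fire[max(0, k - refractory_pulses):k]
                p_excitable = max(0.0, 1.0 - sum(recent))    # exclude recently fired (refractory) fibers
                p_fire.append(p_excitable * p_fire_given_excitable(amp, **kw))
            return p_fire

        # Hypothetical comparison: the same sub-threshold pulse train without and with extra noise.
        train = [0.95] * 10                                  # constant pulse "amplitudes" near threshold
        quiet = pulse_train_response(train, sigma_noise=0.0)
        noisy = pulse_train_response(train, sigma_noise=0.05)
        print(round(sum(quiet), 3), round(sum(noisy), 3))    # expected spike counts per train

    Consistent with the trend reported above, the sub-threshold train with the larger effective noise variance yields the higher expected spike count in this toy setting.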

  20. You can't stop the music: reduced auditory alpha power and coupling between auditory and memory regions facilitate the illusory perception of music during noise.

    Science.gov (United States)

    Müller, Nadia; Keil, Julian; Obleser, Jonas; Schulz, Hannah; Grunwald, Thomas; Bernays, René-Ludwig; Huppertz, Hans-Jürgen; Weisz, Nathan

    2013-10-01

    Our brain has the capacity of providing an experience of hearing even in the absence of auditory stimulation. This can be seen as illusory conscious perception. While increasing evidence postulates that conscious perception requires specific brain states that systematically relate to specific patterns of oscillatory activity, the relationship between auditory illusions and oscillatory activity remains mostly unexplained. To investigate this we recorded brain activity with magnetoencephalography and collected intracranial data from epilepsy patients while participants listened to familiar as well as unknown music that was partly replaced by sections of pink noise. We hypothesized that participants have a stronger experience of hearing music throughout noise when the noise sections are embedded in familiar compared to unfamiliar music. This was supported by the behavioral results showing that participants rated the perception of music during noise as stronger when noise was presented in a familiar context. Time-frequency data show that the illusory perception of music is associated with a decrease in auditory alpha power pointing to increased auditory cortex excitability. Furthermore, the right auditory cortex is concurrently synchronized with the medial temporal lobe, putatively mediating memory aspects associated with the music illusion. We thus assume that neuronal activity in the highly excitable auditory cortex is shaped through extensive communication between the auditory cortex and the medial temporal lobe, thereby generating the illusion of hearing music during noise. Copyright © 2013 Elsevier Inc. All rights reserved.

  1. Auditory Effects of Exposure to Noise and Solvents: A Comparative Study

    Directory of Open Access Journals (Sweden)

    Lobato, Diolen Conceição Barros

    2014-01-01

    Introduction Industry workers are exposed to different environmental risk agents that, when combined, may potentiate risks to hearing. Objective To evaluate the effects of the combined exposure to noise and solvents on hearing in workers. Methods A transversal retrospective cohort study was performed through documentary analysis of an industry. The sample (n = 198) was divided into four groups: the noise group (NG), exposed only to noise; the noise and solvents group (NSG), exposed to noise and solvents; and the noise control group and the noise and solvents control group (CNS), with no exposure. Results The NG showed 16.66% of cases suggestive of bilateral noise-induced hearing loss and the NSG showed 5.26%. The NG and NSG had worse thresholds than their respective control groups. Females were less susceptible to noise than males; however, when simultaneously exposed to solvents, hearing was affected in a similar way, resulting in significant differences (p < 0.05). The 40- to 49-year-old age group was significantly worse (p < 0.05) in the auditory thresholds in the NSG compared with the CNS. Conclusion The results observed in this study indicate that simultaneous exposure to noise and solvents can damage the peripheral auditory system.

  2. The Effect of Noise on the Relationship between Auditory Working Memory and Comprehension in School-Age Children

    Science.gov (United States)

    Sullivan, Jessica R.; Osman, Homira; Schafer, Erin C.

    2015-01-01

    Purpose: The objectives of the current study were to examine the effect of noise (-5 dB SNR) on auditory comprehension and to examine its relationship with working memory. It was hypothesized that noise has a negative impact on information processing, auditory working memory, and comprehension. Method: Children with normal hearing between the ages…

  3. Noise Equally Degrades Central Auditory Processing in 2- and 4-Year-Old Children.

    Science.gov (United States)

    Niemitalo-Haapola, Elina; Haapala, Sini; Kujala, Teija; Raappana, Antti; Kujala, Tiia; Jansson-Verkasalo, Eira

    2017-08-16

    The aim of this study was to investigate developmental and noise-induced changes in central auditory processing indexed by event-related potentials in typically developing children. P1, N2, and N4 responses as well as mismatch negativities (MMNs) were recorded for standard syllables and consonants, frequency, intensity, vowel, and vowel duration changes in silent and noisy conditions in the same 14 children at the ages of 2 and 4 years. The P1 and N2 latencies decreased and the N2, N4, and MMN amplitudes increased with development of the children. The amplitude changes were strongest at frontal electrodes. At both ages, background noise decreased the P1 amplitude, increased the N2 amplitude, and shortened the N4 latency. The noise-induced amplitude changes of P1, N2, and N4 were strongest frontally. Furthermore, background noise degraded the MMN. At both ages, MMN was significantly elicited only by the consonant change, and at the age of 4 years, also by the vowel duration change during noise. Developmental changes indexing maturation of central auditory processing were found from every response studied. Noise degraded sound encoding and echoic memory and impaired auditory discrimination at both ages. The older children were as vulnerable to the impact of noise as the younger children. https://doi.org/10.23641/asha.5233939.

  4. Human pupillary dilation response to deviant auditory stimuli: Effects of stimulus properties and voluntary attention

    Directory of Open Access Journals (Sweden)

    Hsin-I Liao

    2016-02-01

    A unique sound that deviates from a repetitive background sound induces signature neural responses, such as mismatch negativity and novelty P3 response in electro-encephalography studies. Here we show that a deviant auditory stimulus induces a human pupillary dilation response (PDR) that is sensitive to the stimulus properties, irrespective of whether attention is directed to the sounds or not. In an auditory oddball sequence, we used white noise and 2000-Hz tones as oddballs against repeated 1000-Hz tones. Participants’ pupillary responses were recorded while they listened to the auditory oddball sequence. In Experiment 1, they were not involved in any task. Results show that pupils dilated to the noise oddballs for approximately 4 s, but no such PDR was found for the 2000-Hz tone oddballs. In Experiment 2, two types of visual oddballs were presented synchronously with the auditory oddballs. Participants discriminated the auditory or visual oddballs while trying to ignore stimuli from the other modality. The purpose of this manipulation was to direct attention to or away from the auditory sequence. In Experiment 3, the visual oddballs and the auditory oddballs were always presented asynchronously to prevent residuals of attention on to-be-ignored oddballs due to the concurrence with the attended oddballs. Results show that pupils dilated to both the noise and 2000-Hz tone oddballs in all conditions. Most importantly, PDRs to noise were larger than those to the 2000-Hz tone oddballs regardless of the attention condition in both experiments. The overall results suggest that the stimulus-dependent factor of the PDR appears to be independent of attention.

  5. Human Pupillary Dilation Response to Deviant Auditory Stimuli: Effects of Stimulus Properties and Voluntary Attention.

    Science.gov (United States)

    Liao, Hsin-I; Yoneya, Makoto; Kidani, Shunsuke; Kashino, Makio; Furukawa, Shigeto

    2016-01-01

    A unique sound that deviates from a repetitive background sound induces signature neural responses, such as mismatch negativity and novelty P3 response in electro-encephalography studies. Here we show that a deviant auditory stimulus induces a human pupillary dilation response (PDR) that is sensitive to the stimulus properties, irrespective of whether attention is directed to the sounds or not. In an auditory oddball sequence, we used white noise and 2000-Hz tones as oddballs against repeated 1000-Hz tones. Participants' pupillary responses were recorded while they listened to the auditory oddball sequence. In Experiment 1, they were not involved in any task. Results show that pupils dilated to the noise oddballs for approximately 4 s, but no such PDR was found for the 2000-Hz tone oddballs. In Experiment 2, two types of visual oddballs were presented synchronously with the auditory oddballs. Participants discriminated the auditory or visual oddballs while trying to ignore stimuli from the other modality. The purpose of this manipulation was to direct attention to or away from the auditory sequence. In Experiment 3, the visual oddballs and the auditory oddballs were always presented asynchronously to prevent residuals of attention on to-be-ignored oddballs due to the concurrence with the attended oddballs. Results show that pupils dilated to both the noise and 2000-Hz tone oddballs in all conditions. Most importantly, PDRs to noise were larger than those to the 2000-Hz tone oddballs regardless of the attention condition in both experiments. The overall results suggest that the stimulus-dependent factor of the PDR appears to be independent of attention.

  6. Auditory evoked functions in ground crew working in high noise environment of Mumbai airport.

    Science.gov (United States)

    Thakur, L; Anand, J P; Banerjee, P K

    2004-10-01

    The continuous exposure to the relatively high level of noise in the surroundings of an airport is likely to affect the central pathway of the auditory system as well as the cognitive functions of the people working in that environment. The Brainstem Auditory Evoked Responses (BAER), Mid Latency Response (MLR) and P300 response of the ground crew employees working in Mumbai airport were studied to evaluate the effects of continuous exposure to high level of noise of the surroundings of the airport on these responses. BAER, P300 and MLR were recorded by using a Nicolet Compact-4 (USA) instrument. Audiometry was also monitored with the help of a GSI-16 audiometer. There was a significant increase in the peak III latency of the BAER in the subjects exposed to noise compared to controls, with no change in their P300 values. The exposed group showed hearing loss at different frequencies. The exposure to the high level of noise caused a considerable decline in the auditory conduction up to the level of the brainstem, with no significant change in conduction in the midbrain, subcortical areas, auditory cortex and associated areas. There was also no significant change in cognitive function as measured by P300 response.

  7. Hearing illusory sounds in noise: sensory-perceptual transformations in primary auditory cortex.

    NARCIS (Netherlands)

    Riecke, L.; Opstal, A.J. van; Goebel, R.; Formisano, E.

    2007-01-01

    A sound that is interrupted by silence is perceived as discontinuous. However, when the silence is replaced by noise, the target sound may be heard as uninterrupted. Understanding the neural basis of this continuity illusion may elucidate the ability to track sounds of interest in noisy auditory

  8. The effect of noise exposure during the developmental period on the function of the auditory system

    Czech Academy of Sciences Publication Activity Database

    Bureš, Zbyněk; Popelář, Jiří; Syka, Josef

    2017-01-01

    Vol. 352, Sep (2017), pp. 1-11. ISSN 0378-5955. R&D Projects: GA ČR(CZ) GAP303/12/1347. Institutional support: RVO:68378041. Keywords: auditory system * development * noise exposure. Subject RIV: FH - Neurology. OECD field: Other medical science. Impact factor: 2.906, year: 2016.

  9. JP-8 jet fuel can promote auditory impairment resulting from subsequent noise exposure in rats.

    Science.gov (United States)

    Fechter, Laurence D; Gearhart, Caroline; Fulton, Sherry; Campbell, Jerry; Fisher, Jeffrey; Na, Kwangsam; Cocker, David; Nelson-Miller, Alisa; Moon, Patrick; Pouyatos, Benoit

    2007-08-01

    We report on the transient and persistent effects of JP-8 jet fuel exposure on auditory function in rats. JP-8 has become the standard jet fuel utilized in the United States and North Atlantic Treaty Organization countries for military use and it is closely related to Jet A fuel, which is used in U.S. domestic aviation. Rats received JP-8 fuel (1000 mg/m³) by nose-only inhalation for 4 h and half of them were immediately subjected to an octave band of noise ranging between 97 and 105 dB in different experiments. The noise by itself produces a small, but permanent auditory impairment. The current permissible exposure level for JP-8 is 350 mg/m³. Additionally, a positive control group received only noise exposure, and a fourth group consisted of untreated control subjects. Exposures occurred either on 1 day or repeatedly on 5 successive days. Impairments in auditory function were assessed using distortion product otoacoustic emissions and compound action potential testing. In other rats, tissues were harvested following JP-8 exposure for assessment of hydrocarbon levels or glutathione (GSH) levels. A single JP-8 exposure by itself at 1000 mg/m³ did not disrupt auditory function. However, exposure to JP-8 and noise produced an additive disruption in outer hair cell function. Repeated 5-day JP-8 exposure at 1000 mg/m³ for 4 h produced impairment of outer hair cell function that was most evident at the first postexposure assessment time. Partial though not complete recovery was observed over a 4-week postexposure period. The adverse effects of repeated JP-8 exposures on auditory function were inconsistent, but combined treatment with JP-8 + noise yielded greater impairment of auditory function and hair cell loss than did noise by itself. Qualitative comparison of outer hair cell loss suggests an increase in outer hair cell death among rats treated with JP-8 + noise for 5 days as compared to noise alone. In most instances, hydrocarbon constituents of the fuel

  10. Auditory distance perception in humans: a review of cues, development, neuronal bases, and effects of sensory loss.

    Science.gov (United States)

    Kolarik, Andrew J; Moore, Brian C J; Zahorik, Pavel; Cirstea, Silvia; Pardhan, Shahina

    2016-02-01

    Auditory distance perception plays a major role in spatial awareness, enabling location of objects and avoidance of obstacles in the environment. However, it remains under-researched relative to studies of the directional aspect of sound localization. This review focuses on the following four aspects of auditory distance perception: cue processing, development, consequences of visual and auditory loss, and neurological bases. The several auditory distance cues vary in their effective ranges in peripersonal and extrapersonal space. The primary cues are sound level, reverberation, and frequency. Nonperceptual factors, including the importance of the auditory event to the listener, also can affect perceived distance. Basic internal representations of auditory distance emerge at approximately 6 months of age in humans. Although visual information plays an important role in calibrating auditory space, sensorimotor contingencies can be used for calibration when vision is unavailable. Blind individuals often manifest supranormal abilities to judge relative distance but show a deficit in absolute distance judgments. Following hearing loss, the use of auditory level as a distance cue remains robust, while the reverberation cue becomes less effective. Previous studies have not found evidence that hearing-aid processing affects perceived auditory distance. Studies investigating the brain areas involved in processing different acoustic distance cues are described. Finally, suggestions are given for further research on auditory distance perception, including broader investigation of how background noise and multiple sound sources affect perceived auditory distance for those with sensory loss.

  11. Auditory sensitivity in opiate addicts with and without a history of noise exposure

    Directory of Open Access Journals (Sweden)

    Vishakha Rawool

    2011-01-01

    Several case reports suggest that some individuals are susceptible to hearing loss from opioids. A combination of noise and opium exposure is possible in occupational settings such as military service or in recreational settings. According to the Drug Enforcement Agency of the U.S. Department of Justice, prescriptions for opiate-based drugs have skyrocketed in the past decade. Since both opium and noise independently can cause hearing loss, it is important to know the prevalence of hearing loss among individuals who are exposed to opium or both opium and noise. The purpose of this research was to evaluate auditory sensitivity in individuals with a history of opium abuse and/or occupational or nonoccupational noise exposure. Twenty-three men who reported opiate abuse served as participants in the study. Four of the individuals reported no history of noise exposure, 12 reported hobby-related noise exposure, 7 reported occupational noise exposure including 2 who also reported hobby-related noise exposure. Fifty percent (2/4) of the individuals without any noise exposure had a hearing loss, confirming previous reports that some of the population is vulnerable to the ototoxic effects of opioids. The percentage of the population with hearing loss increased with hobby-related (58%) and occupational noise exposure (100%). Mixed MANOVA revealed a significant ear, frequency, and noise exposure interaction. Health professionals need to be aware of the possible ototoxic effects of opioids, since early detection of hearing loss from opium abuse may lead to cessation of abuse and further progression of hearing loss. The possibility that opium abuse may interact with noise exposure in determining auditory thresholds needs to be considered in noise-exposed individuals who are addicted to opiates. Possible mechanisms of cochlear damage from opium abuse, possible reasons for individual susceptibility, and recommendations for future studies are presented in the article.

  12. Effects of acoustic noise on the auditory nerve compound action potentials evoked by electric pulse trains.

    Science.gov (United States)

    Nourski, Kirill V; Abbas, Paul J; Miller, Charles A; Robinson, Barbara K; Jeng, Fuh-Cherng

    2005-04-01

    This study investigated the effects of acoustic noise on the auditory nerve compound action potentials in response to electric pulse trains. Subjects were adult guinea pigs, implanted with a minimally invasive electrode to preserve acoustic sensitivity. Electrically evoked compound action potentials (ECAP) were recorded from the auditory nerve trunk in response to electric pulse trains both during and after the presentation of acoustic white noise. Simultaneously presented acoustic noise produced a decrease in ECAP amplitude. The effect of the acoustic masker on the electric probe was greatest at the onset of the acoustic stimulus and it was followed by a partial recovery of the ECAP amplitude. Following cessation of the acoustic noise, ECAP amplitude recovered over a period of approximately 100-200 ms. The effects of the acoustic noise were more prominent at lower electric pulse rates (interpulse intervals of 3 ms and higher). At higher pulse rates, the ECAP adaptation to the electric pulse train alone was larger and the acoustic noise, when presented, produced little additional effect. The observed effects of noise on ECAP were the greatest at high electric stimulus levels and, for a particular electric stimulus level, at high acoustic noise levels.

  13. Musical Experience and the Aging Auditory System: Implications for Cognitive Abilities and Hearing Speech in Noise

    Science.gov (United States)

    Parbery-Clark, Alexandra; Strait, Dana L.; Anderson, Samira; Hittner, Emily; Kraus, Nina

    2011-01-01

    Much of our daily communication occurs in the presence of background noise, compromising our ability to hear. While understanding speech in noise is a challenge for everyone, it becomes increasingly difficult as we age. Although aging is generally accompanied by hearing loss, this perceptual decline cannot fully account for the difficulties experienced by older adults for hearing in noise. Decreased cognitive skills concurrent with reduced perceptual acuity are thought to contribute to the difficulty older adults experience understanding speech in noise. Given that musical experience positively impacts speech perception in noise in young adults (ages 18–30), we asked whether musical experience benefits an older cohort of musicians (ages 45–65), potentially offsetting the age-related decline in speech-in-noise perceptual abilities and associated cognitive function (i.e., working memory). Consistent with performance in young adults, older musicians demonstrated enhanced speech-in-noise perception relative to nonmusicians along with greater auditory, but not visual, working memory capacity. By demonstrating that speech-in-noise perception and related cognitive function are enhanced in older musicians, our results imply that musical training may reduce the impact of age-related auditory decline. PMID:21589653

  14. Musical experience and the aging auditory system: implications for cognitive abilities and hearing speech in noise.

    Directory of Open Access Journals (Sweden)

    Alexandra Parbery-Clark

    Much of our daily communication occurs in the presence of background noise, compromising our ability to hear. While understanding speech in noise is a challenge for everyone, it becomes increasingly difficult as we age. Although aging is generally accompanied by hearing loss, this perceptual decline cannot fully account for the difficulties experienced by older adults for hearing in noise. Decreased cognitive skills concurrent with reduced perceptual acuity are thought to contribute to the difficulty older adults experience understanding speech in noise. Given that musical experience positively impacts speech perception in noise in young adults (ages 18-30), we asked whether musical experience benefits an older cohort of musicians (ages 45-65), potentially offsetting the age-related decline in speech-in-noise perceptual abilities and associated cognitive function (i.e., working memory). Consistent with performance in young adults, older musicians demonstrated enhanced speech-in-noise perception relative to nonmusicians along with greater auditory, but not visual, working memory capacity. By demonstrating that speech-in-noise perception and related cognitive function are enhanced in older musicians, our results imply that musical training may reduce the impact of age-related auditory decline.

  15. Musical experience and the aging auditory system: implications for cognitive abilities and hearing speech in noise.

    Science.gov (United States)

    Parbery-Clark, Alexandra; Strait, Dana L; Anderson, Samira; Hittner, Emily; Kraus, Nina

    2011-05-11

    Much of our daily communication occurs in the presence of background noise, compromising our ability to hear. While understanding speech in noise is a challenge for everyone, it becomes increasingly difficult as we age. Although aging is generally accompanied by hearing loss, this perceptual decline cannot fully account for the difficulties experienced by older adults for hearing in noise. Decreased cognitive skills concurrent with reduced perceptual acuity are thought to contribute to the difficulty older adults experience understanding speech in noise. Given that musical experience positively impacts speech perception in noise in young adults (ages 18-30), we asked whether musical experience benefits an older cohort of musicians (ages 45-65), potentially offsetting the age-related decline in speech-in-noise perceptual abilities and associated cognitive function (i.e., working memory). Consistent with performance in young adults, older musicians demonstrated enhanced speech-in-noise perception relative to nonmusicians along with greater auditory, but not visual, working memory capacity. By demonstrating that speech-in-noise perception and related cognitive function are enhanced in older musicians, our results imply that musical training may reduce the impact of age-related auditory decline.

  16. The Effect of Noise on Human Performance: A Clinical Trial

    Directory of Open Access Journals (Sweden)

    P Nassiri

    2013-04-01

    Full Text Available Background: Noise is defined as unwanted or meaningless sound that, apart from adverse auditory health effects, may distract attention from cues that are important for task performance. Human performance is influenced by many job-related factors and workplace conditions, including noise level. Objective: To study the effect of noise on human performance. Methods: The participants included 40 healthy male university students. The experimental design consisted of 3 (sound pressure level) x 3 (noise schedule) x 2 (noise type) factors. To investigate occupational skill performance, specific test batteries were used: 1) steadiness test, 2) Minnesota manual dexterity test, 3) hand tool dexterity test, and 4) two-arm coordination test. The time taken to complete each test was measured as the speed response; to determine the error response, the time participants spent committing errors while performing a task was measured. Results: The speed response obtained from the 4 tests in combined conditions of noise schedule, harmonic index, and sound pressure level was highest for (intermittent, treble, 95 dB), (continuous, treble, 95 dB), (continuous, treble, 85 dB), and (intermittent, treble, 95 dB), respectively. Conclusion: Treble noise significantly reduced human performance; intermittent noise, especially at high sound pressure levels, also worsened the conditions under which a task was performed.

  17. Attention-driven auditory cortex short-term plasticity helps segregate relevant sounds from noise.

    Science.gov (United States)

    Ahveninen, Jyrki; Hämäläinen, Matti; Jääskeläinen, Iiro P; Ahlfors, Seppo P; Huang, Samantha; Lin, Fa-Hsuan; Raij, Tommi; Sams, Mikko; Vasios, Christos E; Belliveau, John W

    2011-03-08

    How can we concentrate on relevant sounds in noisy environments? A "gain model" suggests that auditory attention simply amplifies relevant and suppresses irrelevant afferent inputs. However, it is unclear whether this suffices when attended and ignored features overlap to stimulate the same neuronal receptive fields. A "tuning model" suggests that, in addition to gain, attention modulates feature selectivity of auditory neurons. We recorded magnetoencephalography, EEG, and functional MRI (fMRI) while subjects attended to tones delivered to one ear and ignored opposite-ear inputs. The attended ear was switched every 30 s to quantify how quickly the effects evolve. To produce overlapping inputs, the tones were presented alone vs. during white-noise masking notch-filtered ±1/6 octaves around the tone center frequencies. Amplitude modulation (39 vs. 41 Hz in opposite ears) was applied for "frequency tagging" of attention effects on maskers. Noise masking reduced early (50-150 ms; N1) auditory responses to unattended tones. In support of the tuning model, selective attention canceled out this attenuating effect but did not modulate the gain of 50-150 ms activity to nonmasked tones or steady-state responses to the maskers themselves. These tuning effects originated at nonprimary auditory cortices, purportedly occupied by neurons that, without attention, have wider frequency tuning than ±1/6 octaves. The attentional tuning evolved rapidly, during the first few seconds after attention switching, and correlated with behavioral discrimination performance. In conclusion, a simple gain model alone cannot explain auditory selective attention. In nonprimary auditory cortices, attention-driven short-term plasticity retunes neurons to segregate relevant sounds from noise.
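
    The "frequency tagging" mentioned above is a standard trick: each ear's signal is amplitude-modulated at a slightly different rate so that its steady-state response can be read out at that modulation frequency. As a rough illustration only (the 1-kHz carrier, levels, and duration below are assumptions, not parameters of the study), such stimuli could be generated as follows:

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

fs = 44100                      # sampling rate (Hz), assumed
dur = 1.0                       # stimulus duration (s), assumed
t = np.arange(int(fs * dur)) / fs

def am_tone(fc, fm, t, depth=1.0):
    """Sinusoidal carrier at fc, amplitude-modulated at fm (the 'frequency tag')."""
    return (1.0 + depth * np.sin(2 * np.pi * fm * t)) * np.sin(2 * np.pi * fc * t)

def notched_noise(fc, t, notch_octaves=1/6, order=4):
    """White noise with a band-stop notch of +/- notch_octaves around fc."""
    noise = np.random.randn(len(t))
    lo, hi = fc * 2 ** (-notch_octaves), fc * 2 ** notch_octaves
    sos = butter(order, [lo, hi], btype='bandstop', fs=fs, output='sos')
    return sosfiltfilt(sos, noise)

fc = 1000.0                                          # illustrative tone frequency (Hz)
left = am_tone(fc, 39.0, t) + notched_noise(fc, t)   # 39-Hz tag in one ear
right = am_tone(fc, 41.0, t) + notched_noise(fc, t)  # 41-Hz tag in the other ear
```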

  18. MODELING SPECTRAL AND TEMPORAL MASKING IN THE HUMAN AUDITORY SYSTEM

    DEFF Research Database (Denmark)

    Dau, Torsten; Jepsen, Morten Løve; Ewert, Stephan D.

    2007-01-01

    An auditory signal processing model is presented that simulates psychoacoustical data from a large variety of experimental conditions related to spectral and temporal masking. The model is based on the modulation filterbank model by Dau et al. [J. Acoust. Soc. Am. 102, 2892-2905 (1997)]. The model was tested in conditions of tone-in-noise masking, intensity discrimination, spectral masking with tones and narrowband noises, forward masking with (on- and off-frequency) noise and pure-tone maskers, and amplitude modulation detection using different noise carrier bandwidths. One of the key properties ...
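
    A modulation filterbank of the kind referred to above analyzes the temporal envelope of a signal with a bank of bandpass filters tuned to different modulation rates. The sketch below is only a toy illustration of that idea, not an implementation of the Dau et al. model; all parameters are assumptions:

```python
import numpy as np
from scipy.signal import hilbert, butter, sosfiltfilt, resample_poly

fs = 16000
t = np.arange(int(fs * 1.0)) / fs
# Example input: a 1-kHz tone amplitude-modulated at 8 Hz
x = (1 + 0.8 * np.sin(2 * np.pi * 8 * t)) * np.sin(2 * np.pi * 1000 * t)

# Crude envelope extraction, then downsampling to an envelope sampling rate
envelope = np.abs(hilbert(x))
fs_env = 250
envelope = resample_poly(envelope, 1, fs // fs_env)

# Toy modulation filterbank: octave-spaced bandpass filters applied to the envelope
mod_centers = [2, 4, 8, 16, 32, 64]          # modulation center frequencies (Hz)
mod_rms = {}
for fm in mod_centers:
    sos = butter(2, [fm / np.sqrt(2), fm * np.sqrt(2)], btype='bandpass',
                 fs=fs_env, output='sos')
    mod_rms[fm] = np.sqrt(np.mean(sosfiltfilt(sos, envelope) ** 2))

# The 8-Hz band should carry the most modulation energy for this input.
```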

  19. Microscopic prediction of speech recognition for listeners with normal hearing in noise using an auditory model.

    Science.gov (United States)

    Jürgens, Tim; Brand, Thomas

    2009-11-01

    This study compares the phoneme recognition performance in speech-shaped noise of a microscopic model for speech recognition with the performance of normal-hearing listeners. "Microscopic" is defined in terms of this model twofold. First, the speech recognition rate is predicted on a phoneme-by-phoneme basis. Second, microscopic modeling means that the signal waveforms to be recognized are processed by mimicking elementary parts of human auditory processing. The model is based on an approach by Holube and Kollmeier [J. Acoust. Soc. Am. 100, 1703-1716 (1996)] and consists of a psychoacoustically and physiologically motivated preprocessing stage and a simple dynamic-time-warp speech recognizer. The model is evaluated by presenting nonsense speech in a closed-set paradigm. Averaged phoneme recognition rates, specific phoneme recognition rates, and phoneme confusions are analyzed. The influence of different perceptual distance measures and of the model's a priori knowledge is investigated. The results show that human performance can be predicted by this model using an optimal detector, i.e., identical speech waveforms for both training of the recognizer and testing. The best model performance is yielded by distance measures that focus mainly on small perceptual distances and neglect outliers.
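
    The dynamic-time-warp (DTW) recognizer mentioned above aligns a test feature sequence with stored templates and picks the template with the smallest alignment cost. The study's auditory preprocessing is not reproduced here; the sketch below shows only a generic DTW distance with a Euclidean local cost, which is an assumption:

```python
import numpy as np

def dtw_distance(a, b):
    """Dynamic time warping distance between two feature sequences.

    a, b: arrays of shape (n_frames, n_features). Local cost is Euclidean.
    """
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j],      # insertion
                                 D[i, j - 1],      # deletion
                                 D[i - 1, j - 1])  # match
    return D[n, m]

# Template matching: the recognized phoneme is the template with the smallest distance.
# templates = {"a": feat_a, "i": feat_i, ...}   # hypothetical internal representations
# best = min(templates, key=lambda k: dtw_distance(test_features, templates[k]))
```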

  20. The human auditory brainstem response to running speech reveals a subcortical mechanism for selective attention.

    Science.gov (United States)

    Forte, Antonio Elia; Etard, Octave; Reichenbach, Tobias

    2017-10-10

    Humans excel at selectively listening to a target speaker in background noise such as competing voices. While the encoding of speech in the auditory cortex is modulated by selective attention, it remains debated whether such modulation occurs already in subcortical auditory structures. Investigating the contribution of the human brainstem to attention has, in particular, been hindered by the tiny amplitude of the brainstem response. Its measurement normally requires a large number of repetitions of the same short sound stimuli, which may lead to a loss of attention and to neural adaptation. Here we develop a mathematical method to measure the auditory brainstem response to running speech, an acoustic stimulus that does not repeat and that has a high ecological validity. We employ this method to assess the brainstem's activity when a subject listens to one of two competing speakers, and show that the brainstem response is consistently modulated by attention.
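
    The abstract does not spell out the mathematical method used. Purely as a hedged illustration of the general idea of relating ongoing EEG to a running-speech feature at brainstem-like latencies, one could bandpass both signals around the speech fundamental and cross-correlate them over a small range of lags; the frequency band, sampling rate, and delay range below are assumptions:

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

fs = 1000                         # common sampling rate after resampling (Hz), assumed
f0_band = (70, 300)               # assumed range of the speech fundamental (Hz)

def bandpass(x, lo, hi, fs, order=4):
    sos = butter(order, [lo, hi], btype='bandpass', fs=fs, output='sos')
    return sosfiltfilt(sos, x)

def brainstem_tracking(eeg, speech, fs, max_delay_ms=15):
    """Cross-correlate F0-band EEG with the F0-band speech waveform over
    plausible brainstem latencies; eeg and speech are assumed time-aligned
    and of equal length. Returns the peak correlation and its lag (ms)."""
    e = bandpass(eeg, *f0_band, fs)
    s = bandpass(speech, *f0_band, fs)
    lags = np.arange(0, int(max_delay_ms * fs / 1000) + 1)
    corrs = [np.corrcoef(e[lag:], s[:len(s) - lag])[0, 1] for lag in lags]
    best = int(np.argmax(np.abs(corrs)))
    return corrs[best], 1000 * lags[best] / fs
```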

  1. Inhibition in the Human Auditory Cortex.

    Directory of Open Access Journals (Sweden)

    Koji Inui

    Full Text Available Despite their indispensable roles in sensory processing, little is known about inhibitory interneurons in humans. Inhibitory postsynaptic potentials cannot be recorded non-invasively, at least in a pure form, in humans. We herein sought to clarify whether prepulse inhibition (PPI) in the auditory cortex reflected inhibition via interneurons using magnetoencephalography. An abrupt increase in sound pressure by 10 dB in a continuous sound was used to evoke the test response, and PPI was observed by inserting a weak (5 dB increase for 1 ms) prepulse. The time course of the inhibition evaluated by prepulses presented at 10-800 ms before the test stimulus showed at least two temporally distinct inhibitions peaking at approximately 20-60 and 600 ms that presumably reflected IPSPs by fast-spiking, parvalbumin-positive cells and somatostatin-positive Martinotti cells, respectively. In another experiment, we confirmed that the degree of the inhibition depended on the strength of the prepulse, but not on the amplitude of the prepulse-evoked cortical response, indicating that the prepulse-evoked excitatory response and prepulse-evoked inhibition reflected activation in two different pathways. Although many diseases such as schizophrenia may involve deficits in the inhibitory system, we do not have appropriate methods to evaluate them; therefore, the easy and non-invasive method described herein may be clinically useful.

  2. Noise exposure and oxidative balance in auditory and extra-auditory structures in adult and developing animals. Pharmacological approaches aimed to minimize its effects.

    Science.gov (United States)

    Molina, S J; Miceli, M; Guelman, L R

    2016-07-01

    Noise coming from urban traffic, household appliances or discotheques might be as hazardous to the health of exposed people as occupational noise, because it may likewise cause hearing loss, changes in hormonal, cardiovascular and immune systems, and behavioral alterations. Besides, noise can affect sleep, work performance and productivity, as well as communication skills. Moreover, exposure to noise can trigger an oxidative imbalance between reactive oxygen species (ROS) and the activity of antioxidant enzymes in different structures, which can contribute to tissue damage. In this review we systematized the information from reports concerning noise effects on cell oxidative balance in different tissues, focusing on auditory and non-auditory structures. We paid specific attention to in vivo studies, including results obtained in adult and developing subjects. Finally, we discussed the pharmacological strategies tested by different authors aimed at minimizing the damaging effects of noise on living beings. Copyright © 2015 Elsevier Ltd. All rights reserved.

  3. Auditory Recognition of Familiar and Unfamiliar Subjects with Wind Turbine Noise

    Directory of Open Access Journals (Sweden)

    Luigi Maffei

    2015-04-01

    Full Text Available Considering the wide growth of the wind turbine market over the last decade, as well as the increasing power size of these plants, more and more potential conflicts have arisen in society due to the noise they radiate. Our goal was to determine whether the annoyance caused by wind farms is related to aspects other than noise. To accomplish this, an auditory experiment on the recognition of wind turbine noise was conducted with people with long experience of wind turbine noise exposure and with people with no previous experience of this type of noise source. Our findings demonstrated that the trend of auditory recognition is the same for the two examined groups with respect to increasing distance and decreasing sound equivalent levels and loudness. Significant differences between the two groups were observed as the distance increased: people with wind turbine noise experience showed a higher tendency to report false alarms than people without experience.

  4. Auditory recognition of familiar and unfamiliar subjects with wind turbine noise.

    Science.gov (United States)

    Maffei, Luigi; Masullo, Massimiliano; Gabriele, Maria Di; Votsi, Nefta-Eleftheria P; Pantis, John D; Senese, Vincenzo Paolo

    2015-04-17

    Considering the wide growth of the wind turbine market over the last decade, as well as the increasing power size of these plants, more and more potential conflicts have arisen in society due to the noise they radiate. Our goal was to determine whether the annoyance caused by wind farms is related to aspects other than noise. To accomplish this, an auditory experiment on the recognition of wind turbine noise was conducted with people with long experience of wind turbine noise exposure and with people with no previous experience of this type of noise source. Our findings demonstrated that the trend of auditory recognition is the same for the two examined groups with respect to increasing distance and decreasing sound equivalent levels and loudness. Significant differences between the two groups were observed as the distance increased: people with wind turbine noise experience showed a higher tendency to report false alarms than people without experience.

  5. Auditory Processing in Noise: A Preschool Biomarker for Literacy.

    Science.gov (United States)

    White-Schwoch, Travis; Woodruff Carr, Kali; Thompson, Elaine C; Anderson, Samira; Nicol, Trent; Bradlow, Ann R; Zecker, Steven G; Kraus, Nina

    2015-07-01

    Learning to read is a fundamental developmental milestone, and achieving reading competency has lifelong consequences. Although literacy development proceeds smoothly for many children, a subset struggle with this learning process, creating a need to identify reliable biomarkers of a child's future literacy that could facilitate early diagnosis and access to crucial early interventions. Neural markers of reading skills have been identified in school-aged children and adults; many pertain to the precision of information processing in noise, but it is unknown whether these markers are present in pre-reading children. Here, in a series of experiments in 112 children (ages 3-14 y), we show brain-behavior relationships between the integrity of the neural coding of speech in noise and phonology. We harness these findings into a predictive model of preliteracy, revealing that a 30-min neurophysiological assessment predicts performance on multiple pre-reading tests and, one year later, predicts preschoolers' performance across multiple domains of emergent literacy. This same neural coding model predicts literacy and diagnosis of a learning disability in school-aged children. These findings offer new insight into the biological constraints on preliteracy during early childhood, suggesting that neural processing of consonants in noise is fundamental for language and reading development. Pragmatically, these findings open doors to early identification of children at risk for language learning problems; this early identification may in turn facilitate access to early interventions that could prevent a life spent struggling to read.

  6. Evaluation of wind noise in passenger car compartment in consideration of auditory masking and sound localization; Chokaku masking to hoko chikaku wo koryoshita kazekirion hyoka

    Energy Technology Data Exchange (ETDEWEB)

    Hoshino, H. [Toyota Central Research and Development Labs., Inc., Aichi (Japan); Kato, H. [Toyota Motor Corp., Aichi (Japan)

    1998-05-01

    Discussed is a method for evaluating wind noise in a passenger car compartment based on human auditory characteristics. In the study, noise in the compartment of a passenger car travelling at a constant speed is collected using a dummy head, and the collected noise is analyzed in view of the masking effect, the directional sensation produced by binaural hearing, etc. A masked spectrum of noise in the compartment of a 6-cylinder vehicle travelling at 120 km/h is analyzed, and it is found that some frequency bands, especially the band centered on 300 Hz, are masked by a loud noise component falling in a low-frequency band of 180 Hz or lower. By use of masked spectrum analysis, the level of noise that is actually audible to human ears can be calculated. The noise level thus determined by masked spectrum analysis and the noise direction determined by a binaural signal processing model are examined, and it is found that the noise direction is clearly determined when the noise falls in a band of 450 Hz or higher, where wind noise prevails. On the basis of the above-mentioned results and the directional sensation produced by binaural hearing, a "binaural wind noise evaluation method" is compiled. 20 refs., 9 figs., 1 tab.

  7. Attention-related modulation of auditory brainstem responses during contralateral noise exposure.

    Science.gov (United States)

    Ikeda, Kazunari; Sekiguchi, Takahiro; Hayashi, Akiko

    2008-10-29

    As determinants facilitating attention-related modulation of the auditory brainstem response (ABR), two experimental factors were examined: (i) auditory discrimination; and (ii) contralateral masking intensity. Tone pips at 80 dB sound pressure level were presented to the left ear via either single-tone exposures or oddball exposures, whereas white noise was delivered continuously to the right ear at variable intensities (none to 80 dB sound pressure level). Participants performed two tasks during stimulation, either reading a book (ignoring task) or detecting target tones (attentive task). Task-related modulation within the ABR range was found only during oddball exposures at contralateral masking intensities greater than or equal to 60 dB. Attention-related modulation of the ABR can thus be detected reliably during auditory discrimination under contralateral masking of sufficient intensity.

  8. Background noise can enhance cortical auditory evoked potentials under certain conditions.

    Science.gov (United States)

    Papesh, Melissa A; Billings, Curtis J; Baltzell, Lucas S

    2015-07-01

    To use cortical auditory evoked potentials (CAEPs) to understand neural encoding in background noise and the conditions under which noise enhances CAEP responses. CAEPs from 16 normal-hearing listeners were recorded using the speech syllable /ba/ presented in quiet and speech-shaped noise at signal-to-noise ratios of 10 and 30 dB. The syllable was presented binaurally and monaurally at two presentation rates. The amplitudes of N1 and N2 peaks were often significantly enhanced in the presence of low-level background noise relative to quiet conditions, while P1 and P2 amplitudes were consistently reduced in noise. P1 and P2 amplitudes were significantly larger during binaural compared to monaural presentations, while N1 and N2 peaks were similar between binaural and monaural conditions. Methodological choices impact CAEP peaks in very different ways. Negative peaks can be enhanced by background noise in certain conditions, while positive peaks are generally enhanced by binaural presentations. Methodological choices significantly impact CAEPs acquired in quiet and in noise. If CAEPs are to be used as a tool to explore signal encoding in noise, scientists must be cognizant of how differences in acquisition and processing protocols selectively shape CAEP responses. Published by Elsevier Ireland Ltd.
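
    Mixing a syllable with a masker at a fixed signal-to-noise ratio comes down to scaling the masker's RMS level relative to the signal. A minimal sketch, with white noise standing in for the speech-shaped noise and a random vector standing in for the /ba/ token (both assumptions):

```python
import numpy as np

def add_noise_at_snr(signal, noise, snr_db):
    """Scale `noise` so that the signal-to-noise ratio of the mix is `snr_db`."""
    rms_s = np.sqrt(np.mean(signal ** 2))
    rms_n = np.sqrt(np.mean(noise ** 2))
    target_rms_n = rms_s / (10 ** (snr_db / 20.0))
    return signal + noise * (target_rms_n / rms_n)

# Example: the two conditions in the abstract (10 and 30 dB SNR), with white noise
# standing in for the speech-shaped masker used in the study.
fs = 16000
syllable = np.random.randn(fs)          # placeholder for the /ba/ token
noise = np.random.randn(fs)
mix_10dB = add_noise_at_snr(syllable, noise, 10)
mix_30dB = add_noise_at_snr(syllable, noise, 30)
```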

  9. An anatomical and functional topography of human auditory cortical areas

    Directory of Open Access Journals (Sweden)

    Michelle eMoerel

    2014-07-01

    Full Text Available While advances in magnetic resonance imaging (MRI) throughout the last decades have enabled the detailed anatomical and functional inspection of the human brain non-invasively, to date there is no consensus regarding the precise subdivision and topography of the areas forming the human auditory cortex. Here, we propose a topography of the human auditory areas based on insights into the anatomical and functional properties of human auditory areas as revealed by studies of cyto- and myelo-architecture and fMRI investigations at ultra-high magnetic field (7 Tesla). Importantly, we illustrate that - whereas a group-based approach to analyzing functional (tonotopic) maps is appropriate to highlight the main tonotopic axis - the examination of tonotopic maps at the single-subject level is required to detail the topography of primary and non-primary areas that may be more variable across subjects. Furthermore, we show that considering multiple maps indicative of anatomical (i.e., myelination) as well as of functional properties (e.g., broadness of frequency tuning) is helpful in identifying auditory cortical areas in individual human brains. We propose and discuss a topography of areas that is consistent with old and recent anatomical post mortem characterizations of the human auditory cortex and that may serve as a working model for neuroscience studies of auditory functions.

  10. Structural changes in the adult rat auditory system induced by brief postnatal noise exposure

    Czech Academy of Sciences Publication Activity Database

    Ouda, Ladislav; Burianová, Jana; Balogová, Zuzana; Lu, H. P.; Syka, Josef

    2016-01-01

    Vol. 221, No. 1 (2016), pp. 617-629 ISSN 1863-2653 R&D Projects: GA ČR(CZ) GCP303/11/J005; GA ČR(CZ) GAP303/12/1347; GA ČR(CZ) GBP304/12/G069 Institutional support: RVO:68378041 Keywords: noise exposure * critical period * central auditory system Subject RIV: FH - Neurology Impact factor: 4.698, year: 2016

  11. Lack of protection against gentamicin ototoxicity by auditory conditioning with noise

    Directory of Open Access Journals (Sweden)

    Alex Strose

    2014-10-01

    Full Text Available INTRODUCTION: Auditory conditioning consists of pre-exposure to low levels of a potentially harmful agent to protect against a subsequent harmful presentation. OBJECTIVE: To confirm whether conditioning with an agent different from the one used to cause the trauma can also be effective. METHOD: Experimental study with 17 guinea pigs divided as follows: group Som: exposed to 85 dB broadband noise centered at 4 kHz, 30 minutes a day for 10 consecutive days; group Cont: intramuscular administration of gentamicin 160 mg/kg a day for 10 consecutive days; group Expt: conditioned with noise similarly to group Som and, after each noise presentation, received gentamicin similarly to group Cont. The animals were evaluated by distortion product otoacoustic emissions (DPOAEs), brainstem auditory evoked potentials (BAEPs), and scanning electron microscopy. RESULTS: The animals that were conditioned with noise did not show any protective effect compared to the ones that received only the ototoxic gentamicin administration. This lack of protection was observed both functionally and morphologically. CONCLUSION: Conditioning with 85 dB broadband noise, 30 min a day for 10 consecutive days, does not protect against ototoxic gentamicin administration of 160 mg/kg a day for 10 consecutive days in the guinea pig.

  12. Comparison of Auditory Brainstem Response in Noise Induced Tinnitus and Non-Tinnitus Control Subjects

    Directory of Open Access Journals (Sweden)

    Ghassem Mohammadkhani

    2009-12-01

    Full Text Available Background and Aim: Tinnitus is an unpleasant sound which can cause behavioral disorders. According to the evidence, the origin of tinnitus lies not only in the peripheral but also in the central auditory system, so evaluation of central auditory system function is necessary. In this study, auditory brainstem responses (ABR) were compared between noise-induced tinnitus and non-tinnitus control subjects. Materials and Methods: This cross-sectional, descriptive and analytic study was conducted on 60 cases in two groups consisting of 30 noise-induced tinnitus and 30 non-tinnitus control subjects. ABRs were recorded ipsilaterally and contralaterally, and their latencies and amplitudes were analyzed. Results: Mean interpeak latencies of III-V (p = 0.022) and I-V (p = 0.033) in the ipsilateral electrode array and mean absolute latencies of waves IV (p = 0.015) and V (p = 0.048) in the contralateral electrode array were significantly increased in the noise-induced tinnitus group relative to the control group. Conclusion: It can be concluded that there are changes in neural transmission time in the brainstem and some signs of involvement of the medial nuclei of the olivary complex, in addition to the lateral lemniscus.

  13. Tinnitus and other auditory problems - occupational noise exposure below risk limits may cause inner ear dysfunction.

    Science.gov (United States)

    Lindblad, Ann-Cathrine; Rosenhall, Ulf; Olofsson, Åke; Hagerman, Björn

    2014-01-01

    The aim of the investigation was to study whether dysfunctions associated with the cochlea or its regulatory system can be found and possibly explain hearing problems in subjects with normal or near-normal audiograms. The design was a prospective study of subjects recruited from the general population. The included subjects were persons with auditory problems who had normal, or near-normal, pure tone hearing thresholds, who could be included in one of three subgroups: teachers, Education; people working with music, Music; and people with moderate or negligible noise exposure, Other. A fourth group included people with poorer pure tone hearing thresholds and a history of severe occupational noise, Industry. Ntotal = 193. The following hearing tests were used: pure tone audiometry with Békésy technique; transient evoked otoacoustic emissions and distortion product otoacoustic emissions, without and with contralateral noise; psychoacoustical modulation transfer function; forward masking; speech recognition in noise; and tinnitus matching. A questionnaire about occupations, noise exposure, stress/anxiety, muscular problems, medication, and heredity was addressed to the participants. Forward masking results were significantly worse for Education and Industry than for the other groups, possibly associated with the inner hair cell area. Forward masking results were significantly correlated with louder matched tinnitus. For many subjects, speech recognition in noise in the left ear did not improve in a normal way when the listening level was increased. Subjects hypersensitive to loud sound had significantly better speech recognition in noise at the lower test level than subjects who were not hypersensitive. Self-reported stress/anxiety was similar for all groups. In conclusion, hearing dysfunctions were found in subjects with tinnitus and other auditory problems, combined with normal or near-normal pure tone thresholds. The teachers, mostly regarded as a group exposed to noise

  14. Tinnitus and other auditory problems - occupational noise exposure below risk limits may cause inner ear dysfunction.

    Directory of Open Access Journals (Sweden)

    Ann-Cathrine Lindblad

    Full Text Available The aim of the investigation was to study whether dysfunctions associated with the cochlea or its regulatory system can be found and possibly explain hearing problems in subjects with normal or near-normal audiograms. The design was a prospective study of subjects recruited from the general population. The included subjects were persons with auditory problems who had normal, or near-normal, pure tone hearing thresholds, who could be included in one of three subgroups: teachers, Education; people working with music, Music; and people with moderate or negligible noise exposure, Other. A fourth group included people with poorer pure tone hearing thresholds and a history of severe occupational noise, Industry. Ntotal = 193. The following hearing tests were used: pure tone audiometry with Békésy technique; transient evoked otoacoustic emissions and distortion product otoacoustic emissions, without and with contralateral noise; psychoacoustical modulation transfer function; forward masking; speech recognition in noise; and tinnitus matching. A questionnaire about occupations, noise exposure, stress/anxiety, muscular problems, medication, and heredity was addressed to the participants. Forward masking results were significantly worse for Education and Industry than for the other groups, possibly associated with the inner hair cell area. Forward masking results were significantly correlated with louder matched tinnitus. For many subjects, speech recognition in noise in the left ear did not improve in a normal way when the listening level was increased. Subjects hypersensitive to loud sound had significantly better speech recognition in noise at the lower test level than subjects who were not hypersensitive. Self-reported stress/anxiety was similar for all groups. In conclusion, hearing dysfunctions were found in subjects with tinnitus and other auditory problems, combined with normal or near-normal pure tone thresholds. The teachers, mostly regarded as a group

  15. Non-Monotonic Relation Between Noise Exposure Severity and Neuronal Hyperactivity in the Auditory Midbrain

    Directory of Open Access Journals (Sweden)

    Lara Li Hesse

    2016-08-01

    Full Text Available The occurrence of tinnitus can be linked to hearing loss in the majority of cases, but there is nevertheless a large degree of unexplained heterogeneity in the relation between hearing loss and tinnitus. Part of the problem might be that hearing loss is usually quantified in terms of increased hearing thresholds, which only provides limited information about the underlying cochlear damage. Moreover, noise exposure that does not cause hearing threshold loss can still lead to hidden hearing loss (HHL), i.e. functional deafferentation of auditory nerve fibres (ANFs) through loss of synaptic ribbons in inner hair cells. Whilst it is known that increased hearing thresholds can trigger increases in spontaneous neural activity in the central auditory system, i.e. a putative neural correlate of tinnitus, the central effects of HHL have not yet been investigated. Here, we exposed mice to octave-band noise at 100 and 105 dB SPL, to generate HHL and permanent increases of hearing thresholds, respectively. Deafferentation of ANFs was confirmed through measurement of auditory brainstem responses and cochlear immunohistochemistry. Acute extracellular recordings from the auditory midbrain (inferior colliculus) demonstrated increases in spontaneous neuronal activity (a putative neural correlate of tinnitus) in both groups. Surprisingly, the increase in spontaneous activity was most pronounced in the mice with HHL, suggesting that the relation between hearing loss and neuronal hyperactivity might be more complex than currently understood. Our computational model indicated that these differences in neuronal hyperactivity could arise from different degrees of deafferentation of low-threshold ANFs in the two exposure groups. Our results demonstrate that HHL is sufficient to induce changes in central auditory processing, and they also indicate a non-monotonic relationship between cochlear damage and neuronal hyperactivity, suggesting an explanation for why tinnitus might

  16. Effects of contralateral noise on the 20-Hz auditory steady state response--magnetoencephalography study.

    Directory of Open Access Journals (Sweden)

    Hajime Usubuchi

    Full Text Available The auditory steady state response (ASSR) is an oscillatory brain response which is phase-locked to the rhythm of an auditory stimulus. ASSRs have been recorded in response to a wide frequency range of modulation and/or repetition, but the physiological features of the ASSRs are somewhat different depending on the modulation frequency. Recently, the 20-Hz ASSR has been emphasized in clinical examinations, especially in the area of psychiatry. However, little is known about the physiological properties of the 20-Hz ASSR, compared to those of the 40-Hz and 80-Hz ASSRs. The effects of contralateral noise on the ASSR are known to depend on the modulation frequency used to evoke the ASSR. However, the effects of contralateral noise on the 20-Hz ASSR are not known. Here we assessed the effects of contralateral white noise at a level of 70 dB SPL on the 20-Hz and 40-Hz ASSRs using a helmet-shaped magnetoencephalography system in 9 healthy volunteers (8 males and 1 female, mean age 31.2 years). The ASSRs were elicited by monaural 1000-Hz 5-s tone bursts amplitude-modulated at 20 and 39 Hz and presented at 80 dB SPL. Contralateral noise caused significant suppression of both the 20-Hz and 40-Hz ASSRs, although suppression was significantly smaller for the 20-Hz ASSRs than the 40-Hz ASSRs. Moreover, the greatest suppression of both 20-Hz and 40-Hz ASSRs occurred in the right hemisphere when stimuli were presented to the right ear with contralateral noise. The present study newly showed that 20-Hz ASSRs are suppressed by contralateral noise, which may be important both for characterization of the 20-Hz ASSR and for interpretation in clinical situations. Physicians must be aware that the 20-Hz ASSR is significantly suppressed by sound (e.g. masking noise or binaural stimulation) applied to the contralateral ear.

  17. Implicit Talker Training Improves Comprehension of Auditory Speech in Noise

    Directory of Open Access Journals (Sweden)

    Jens Kreitewolf

    2017-09-01

    Full Text Available Previous studies have shown that listeners are better able to understand speech when they are familiar with the talker's voice. In most of these studies, talker familiarity was ensured by explicit voice training; that is, listeners learned to identify the familiar talkers. In the real world, however, the characteristics of familiar talkers are learned incidentally, through communication. The present study investigated whether speech comprehension benefits from implicit voice training; that is, through exposure to talkers' voices without listeners explicitly trying to identify them. During four training sessions, listeners heard short sentences containing a single verb (e.g., “he writes”), spoken by one talker. The sentences were mixed with noise, and listeners identified the verb within each sentence while their speech-reception thresholds (SRTs) were measured. In a final test session, listeners performed the same task, but this time they heard different sentences spoken by the familiar talker and three unfamiliar talkers. Familiar and unfamiliar talkers were counterbalanced across listeners. Half of the listeners performed a test session in which the four talkers were presented in separate blocks (blocked paradigm). For the other half, talkers varied randomly from trial to trial (interleaved paradigm). The results showed that listeners had lower SRTs when the speech was produced by the familiar talker than by the unfamiliar talkers. The type of talker presentation (blocked vs. interleaved) had no effect on this familiarity benefit. These findings suggest that listeners implicitly learn talker-specific information during a speech-comprehension task, and exploit this information to improve the comprehension of novel speech material from familiar talkers.

  18. Background MR gradient noise and non-auditory BOLD activations: a data-driven perspective.

    Science.gov (United States)

    Haller, Sven; Homola, György A; Scheffler, Klaus; Beckmann, Christian F; Bartsch, Andreas J

    2009-07-28

    The effect of echoplanar imaging (EPI) acoustic background noise on blood oxygenation level dependent (BOLD) activations was investigated. Two EPI pulse sequences were compared: (i) conventional EPI with a pulsating sound component of typically 8-10 Hz, which is a potent physiological stimulus, and (ii) the more recently developed continuous-sound EPI, which is perceived as less distractive despite equivalent peak sound pressure levels. Sixteen healthy subjects performed an established demanding visual n-back working memory task. Using an exploratory data analysis technique (tensorial probabilistic independent component analysis; tensor-PICA), we studied the inter-session/within-subject response variability introduced by continuous-sound versus conventional EPI acoustic background noise in addition to temporal and spatial signal characteristics. The analysis revealed a task-related component associated with the established higher-level working memory and motor feedback response network, which exhibited a significant 19% increase in its average effect size for the continuous-sound as opposed to conventional EPI. Stimulus-related lower-level activations, such as primary visual areas, were not modified. EPI acoustic background noise influences much more than the auditory system per se. This analysis provides additional evidence for an enhancement of task-related, extra-auditory BOLD activations by continuous-sound EPI due to less distractive acoustic background gradient noise.
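
    Tensor-PICA itself is part of FSL's MELODIC; as a loose stand-in, the sketch below illustrates the general idea of decomposing an fMRI data matrix into independent components with scikit-learn's FastICA. It is not the analysis used in the study, and the data dimensions are invented:

```python
import numpy as np
from sklearn.decomposition import FastICA

# Hypothetical data: timepoints x voxels, already motion-corrected and normalized
n_timepoints, n_voxels = 200, 5000
data = np.random.randn(n_timepoints, n_voxels)

ica = FastICA(n_components=20, random_state=0, max_iter=1000)
time_courses = ica.fit_transform(data)       # (timepoints, components)
spatial_maps = ica.components_               # (components, voxels)

# A task-related component can then be identified by correlating each
# component time course with the task regressor (e.g., the n-back block design).
```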

  19. Effects of scanner acoustic noise on intrinsic brain activity during auditory stimulation.

    Science.gov (United States)

    Yakunina, Natalia; Kang, Eun Kyoung; Kim, Tae Su; Min, Ji-Hoon; Kim, Sam Soo; Nam, Eui-Cheol

    2015-10-01

    Although the effects of scanner background noise (SBN) during functional magnetic resonance imaging (fMRI) have been extensively investigated for the brain regions involved in auditory processing, its impact on other types of intrinsic brain activity has largely been neglected. The present study evaluated the influence of SBN on a number of intrinsic connectivity networks (ICNs) during auditory stimulation by comparing the results obtained using sparse temporal acquisition (STA) with those using continuous acquisition (CA). Fourteen healthy subjects were presented with classical music pieces in a block paradigm during two sessions of STA and CA. A volume-matched CA dataset (CAm) was generated by subsampling the CA dataset to temporally match it with the STA data. Independent component analysis was performed on the concatenated STA-CAm datasets, and voxel data, time courses, power spectra, and functional connectivity were compared. The ICA revealed 19 ICNs; the auditory, default mode, salience, and frontoparietal networks showed greater activity in the STA. The spectral peaks in 17 networks corresponded to the stimulation cycles in the STA, while only five networks displayed this correspondence in the CA. The dorsal default mode and salience networks exhibited stronger correlations with the stimulus waveform in the STA. SBN appeared to influence not only the areas of auditory response but also the majority of other ICNs, including attention and sensory networks. Therefore, SBN should be regarded as a serious nuisance factor during fMRI studies investigating intrinsic brain activity under external stimulation or task loads.

  20. Effects of scanner acoustic noise on intrinsic brain activity during auditory stimulation

    Energy Technology Data Exchange (ETDEWEB)

    Yakunina, Natalia [Kangwon National University, Institute of Medical Science, School of Medicine, Chuncheon (Korea, Republic of); Kangwon National University Hospital, Neuroscience Research Institute, Chuncheon (Korea, Republic of); Kang, Eun Kyoung [Kangwon National University Hospital, Department of Rehabilitation Medicine, Chuncheon (Korea, Republic of); Kim, Tae Su [Kangwon National University Hospital, Department of Otolaryngology, Chuncheon (Korea, Republic of); Kangwon National University, School of Medicine, Department of Otolaryngology, Chuncheon (Korea, Republic of); Min, Ji-Hoon [University of Michigan, Department of Biopsychology, Cognition, and Neuroscience, Ann Arbor, MI (United States); Kim, Sam Soo [Kangwon National University Hospital, Neuroscience Research Institute, Chuncheon (Korea, Republic of); Kangwon National University, School of Medicine, Department of Radiology, Chuncheon (Korea, Republic of); Nam, Eui-Cheol [Kangwon National University Hospital, Neuroscience Research Institute, Chuncheon (Korea, Republic of); Kangwon National University, School of Medicine, Department of Otolaryngology, Chuncheon (Korea, Republic of)

    2015-10-15

    Although the effects of scanner background noise (SBN) during functional magnetic resonance imaging (fMRI) have been extensively investigated for the brain regions involved in auditory processing, its impact on other types of intrinsic brain activity has largely been neglected. The present study evaluated the influence of SBN on a number of intrinsic connectivity networks (ICNs) during auditory stimulation by comparing the results obtained using sparse temporal acquisition (STA) with those using continuous acquisition (CA). Fourteen healthy subjects were presented with classical music pieces in a block paradigm during two sessions of STA and CA. A volume-matched CA dataset (CAm) was generated by subsampling the CA dataset to temporally match it with the STA data. Independent component analysis was performed on the concatenated STA-CAm datasets, and voxel data, time courses, power spectra, and functional connectivity were compared. The ICA revealed 19 ICNs; the auditory, default mode, salience, and frontoparietal networks showed greater activity in the STA. The spectral peaks in 17 networks corresponded to the stimulation cycles in the STA, while only five networks displayed this correspondence in the CA. The dorsal default mode and salience networks exhibited stronger correlations with the stimulus waveform in the STA. SBN appeared to influence not only the areas of auditory response but also the majority of other ICNs, including attention and sensory networks. Therefore, SBN should be regarded as a serious nuisance factor during fMRI studies investigating intrinsic brain activity under external stimulation or task loads. (orig.)

  1. Effects of scanner acoustic noise on intrinsic brain activity during auditory stimulation

    International Nuclear Information System (INIS)

    Yakunina, Natalia; Kang, Eun Kyoung; Kim, Tae Su; Min, Ji-Hoon; Kim, Sam Soo; Nam, Eui-Cheol

    2015-01-01

    Although the effects of scanner background noise (SBN) during functional magnetic resonance imaging (fMRI) have been extensively investigated for the brain regions involved in auditory processing, its impact on other types of intrinsic brain activity has largely been neglected. The present study evaluated the influence of SBN on a number of intrinsic connectivity networks (ICNs) during auditory stimulation by comparing the results obtained using sparse temporal acquisition (STA) with those using continuous acquisition (CA). Fourteen healthy subjects were presented with classical music pieces in a block paradigm during two sessions of STA and CA. A volume-matched CA dataset (CAm) was generated by subsampling the CA dataset to temporally match it with the STA data. Independent component analysis was performed on the concatenated STA-CAm datasets, and voxel data, time courses, power spectra, and functional connectivity were compared. The ICA revealed 19 ICNs; the auditory, default mode, salience, and frontoparietal networks showed greater activity in the STA. The spectral peaks in 17 networks corresponded to the stimulation cycles in the STA, while only five networks displayed this correspondence in the CA. The dorsal default mode and salience networks exhibited stronger correlations with the stimulus waveform in the STA. SBN appeared to influence not only the areas of auditory response but also the majority of other ICNs, including attention and sensory networks. Therefore, SBN should be regarded as a serious nuisance factor during fMRI studies investigating intrinsic brain activity under external stimulation or task loads. (orig.)

  2. Selective attention and the auditory vertex potential. 2: Effects of signal intensity and masking noise

    Science.gov (United States)

    Schwent, V. L.; Hillyard, S. A.; Galambos, R.

    1975-01-01

    A randomized sequence of tone bursts was delivered to subjects at short inter-stimulus intervals, with the tones originating from one of three spatially and frequency-specific channels. The subject's task was to count the tones in one of the three channels at a time, ignoring the other two, and press a button after each tenth tone. In different conditions, tones were given at high and low intensities and with or without background white noise to mask the tones. The N1 component of the auditory vertex potential was found to be larger in response to attended-channel tones relative to unattended tones. This selective enhancement of N1 was minimal for loud tones presented without noise and increased markedly at the lower tone intensity and in the noise-added conditions.

  3. The Effect of Noise on the Relationship Between Auditory Working Memory and Comprehension in School-Age Children.

    Science.gov (United States)

    Sullivan, Jessica R; Osman, Homira; Schafer, Erin C

    2015-06-01

    The objectives of the current study were to examine the effect of noise (-5 dB SNR) on auditory comprehension and to examine its relationship with working memory. It was hypothesized that noise has a negative impact on information processing, auditory working memory, and comprehension. Children with normal hearing between the ages of 8 and 10 years were administered working memory and comprehension tasks in quiet and in noise. The comprehension measure comprised 5 domains: main idea, details, reasoning, vocabulary, and understanding messages. Performance on auditory working memory and comprehension tasks was significantly poorer in noise than in quiet, with the reasoning, details, understanding, and vocabulary subtests particularly affected in noise. The relationship between working memory and comprehension was stronger in noise than in quiet, suggesting an increased contribution of working memory. These data suggest that school-age children's auditory working memory and comprehension are negatively affected by noise. Performance on comprehension tasks in noise is strongly related to demands placed on working memory, supporting the theory that degraded listening conditions draw resources away from the primary task.

  4. Recent progress in the field of non-auditory health effects of noise. Trends and research needs

    NARCIS (Netherlands)

    Kluizenaar, Y. de; Matsui, T.

    2017-01-01

    With the aim to identify recent research achievements, current trends in research, remaining gaps of knowledge and priority areas of future research in the field of non-auditory health effects of noise, recent research progress was reviewed. A search was performed in PubMed (search terms “noise AND

  5. Auditory-somatosensory bimodal stimulation desynchronizes brain circuitry to reduce tinnitus in guinea pigs and humans.

    Science.gov (United States)

    Marks, Kendra L; Martel, David T; Wu, Calvin; Basura, Gregory J; Roberts, Larry E; Schvartz-Leyzac, Kara C; Shore, Susan E

    2018-01-03

    The dorsal cochlear nucleus is the first site of multisensory convergence in mammalian auditory pathways. Principal output neurons, the fusiform cells, integrate auditory nerve inputs from the cochlea with somatosensory inputs from the head and neck. In previous work, we developed a guinea pig model of tinnitus induced by noise exposure and showed that the fusiform cells in these animals exhibited increased spontaneous activity and cross-unit synchrony, which are physiological correlates of tinnitus. We delivered repeated bimodal auditory-somatosensory stimulation to the dorsal cochlear nucleus of guinea pigs with tinnitus, choosing a stimulus interval known to induce long-term depression (LTD). Twenty minutes per day of LTD-inducing bimodal (but not unimodal) stimulation reduced physiological and behavioral evidence of tinnitus in the guinea pigs after 25 days. Next, we applied the same bimodal treatment to 20 human subjects with tinnitus using a double-blinded, sham-controlled, crossover study. Twenty-eight days of LTD-inducing bimodal stimulation reduced tinnitus loudness and intrusiveness. Unimodal auditory stimulation did not deliver either benefit. Bimodal auditory-somatosensory stimulation that induces LTD in the dorsal cochlear nucleus may hold promise for suppressing chronic tinnitus, which reduces quality of life for millions of tinnitus sufferers worldwide. Copyright © 2018 The Authors, some rights reserved; exclusive licensee American Association for the Advancement of Science. No claim to original U.S. Government Works.

  6. Hearing an Illusory Vowel in Noise : Suppression of Auditory Cortical Activity

    NARCIS (Netherlands)

    Riecke, Lars; Vanbussel, Mieke; Hausfeld, Lars; Baskent, Deniz; Formisano, Elia; Esposito, Fabrizio

    2012-01-01

    Human hearing is constructive. For example, when a voice is partially replaced by an extraneous sound (e.g., on the telephone due to a transmission problem), the auditory system may restore the missing portion so that the voice can be perceived as continuous (Miller and Licklider, 1950; for review,

  7. Hierarchical processing of auditory objects in humans.

    Directory of Open Access Journals (Sweden)

    Sukhbinder Kumar

    2007-06-01

    Full Text Available This work examines the computational architecture used by the brain during the analysis of the spectral envelope of sounds, an important acoustic feature for defining auditory objects. Dynamic causal modelling and Bayesian model selection were used to evaluate a family of 16 network models explaining functional magnetic resonance imaging responses in the right temporal lobe during spectral envelope analysis. The models encode different hypotheses about the effective connectivity between Heschl's Gyrus (HG), containing the primary auditory cortex, planum temporale (PT), and superior temporal sulcus (STS), and the modulation of that coupling during spectral envelope analysis. In particular, we aimed to determine whether information processing during spectral envelope analysis takes place in a serial or parallel fashion. The analysis provides strong support for a serial architecture with connections from HG to PT and from PT to STS and an increase of the HG to PT connection during spectral envelope analysis. The work supports a computational model of auditory object processing, based on the abstraction of spectro-temporal "templates" in the PT before further analysis of the abstracted form in anterior temporal lobe areas.
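
    With equal model priors, Bayesian model selection over the 16 connectivity models reduces to normalizing the exponentiated model evidences into posterior model probabilities. A toy numerical sketch with made-up log evidences (the values are not from the study):

```python
import numpy as np

def posterior_model_probabilities(log_evidences):
    """Posterior p(m | y) for equal model priors, from log model evidences."""
    log_ev = np.asarray(log_evidences, dtype=float)
    log_ev -= log_ev.max()                    # subtract max for numerical stability
    p = np.exp(log_ev)
    return p / p.sum()

# 16 hypothetical log evidences, one per connectivity model
log_evidences = np.random.randn(16) * 3
posteriors = posterior_model_probabilities(log_evidences)
best_model = int(np.argmax(posteriors))       # the winning architecture under this toy data
```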

  8. A rapid appraisal of traffic policemen about auditory effects of traffic noise pollution from Ambala city

    Directory of Open Access Journals (Sweden)

    Abhishek Singh

    2015-01-01

    Full Text Available Context: Traffic policemen are at an increased risk of suffering from the hazards of noise pollution because they are engaged in controlling traffic, particularly at heavy traffic junctions. The effect is greater in this subgroup because they are continuously exposed to it. Aim: The present study aimed to assess the knowledge, attitudes and practices of traffic policemen regarding the auditory effects of traffic noise pollution in Ambala city. Settings and Design: Cross-sectional workplace survey. Materials and Methods: The present descriptive study was carried out in different traffic zones of Ambala city during April-June 2013. The study population consisted of 100 traffic policemen working at different traffic intersections of Ambala city. Statistical Analysis Used: A structured interview schedule was used to collect the data. SPSS version 17.0 was used for analysis. Interpretation of data was performed using percentages and proportions. Results: The majority (75%) of the study subjects had been exposed to traffic noise pollution for more than 5 years. Of the total subjects, 5% reported below-average hearing on self-assessment of hearing ability, and 17% accepted that they miss some conversation while talking on the phone. Most (98%) of the traffic police did not use any personal protective equipment (PPE) such as earplugs or earmuffs, and the non-availability of such equipment (90%) was the commonly reported reason. Conclusions: The study concludes that traffic policemen are not well aware of the impending auditory effects of traffic noise pollution. Duty rotation, duty scheduling and other forms of preventive modalities for exposure limitation are suggested.

  9. Brainstem auditory responses to resolved and unresolved harmonics of a synthetic vowel in quiet and noise.

    Science.gov (United States)

    Laroche, Marilyn; Dajani, Hilmi R; Prévost, François; Marcoux, André M

    2013-01-01

    This study investigated speech auditory brainstem responses (speech ABR) with variants of a synthetic vowel in quiet and in background noise. Its objectives were to study the noise robustness of the brainstem response at the fundamental frequency F0 and at the first formant F1, evaluate how the resolved/unresolved harmonics regions in speech contribute to the response at F0, and investigate the origin of the response at F0 to resolved and unresolved harmonics in speech. In total, 18 normal-hearing subjects (11 women, aged 18-33 years) participated in this study. Speech ABRs were recorded using variants of a 300 msec formant-synthesized /a/ vowel in quiet and in white noise. The first experiment employed three variants containing the first three formants F1 to F3, F1 only, and F2 and F3 only with relative formant levels following those reported in the literature. The second experiment employed three variants containing F1 only, F2 only, and F3 only, with the formants equalized to the same level and the signal-to-noise ratio (SNR) maintained at -5 dB. Overall response latency was estimated, and the amplitude and local SNR of the envelope following response at F0 and of the frequency following response at F1 were compared for the different stimulus variants in quiet and in noise. The response at F0 was more robust to noise than that at F1. There were no statistically significant differences in the response at F0 caused by the three stimulus variants in both experiments in quiet. However, the response at F0 with the variant dominated by resolved harmonics was more robust to noise than the response at F0 with the stimulus variants dominated by unresolved harmonics. The latencies of the responses in all cases were very similar in quiet, but the responses at F0 due to resolved and unresolved harmonics combined nonlinearly when both were present in the stimulus. Speech ABR has been suggested as a marker of central auditory processing. The results of this study support
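
    Responses at F0 and F1 in studies like this are typically quantified by reading the spectral amplitude of the averaged response at the target frequency and comparing it with neighbouring bins to obtain a local SNR. The sketch below illustrates that generic computation; the 100-Hz F0, sampling rate, and toy response are assumptions, not values from the study:

```python
import numpy as np

def spectral_amplitude_and_local_snr(response, fs, f_target, n_noise_bins=10):
    """Amplitude at f_target and its local SNR, estimated from the FFT of the
    averaged response. Noise floor = mean amplitude of neighbouring bins."""
    spec = np.abs(np.fft.rfft(response)) / len(response)
    freqs = np.fft.rfftfreq(len(response), 1.0 / fs)
    k = int(np.argmin(np.abs(freqs - f_target)))
    neighbours = np.r_[spec[k - n_noise_bins:k], spec[k + 1:k + 1 + n_noise_bins]]
    snr_db = 20 * np.log10(spec[k] / neighbours.mean())
    return spec[k], snr_db

fs = 10000
t = np.arange(int(0.3 * fs)) / fs            # 300-ms epoch, matching the stimulus duration
response = np.sin(2 * np.pi * 100 * t) + 0.3 * np.random.randn(len(t))  # toy EFR at F0 = 100 Hz
amp_f0, snr_f0 = spectral_amplitude_and_local_snr(response, fs, 100.0)
```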

  10. Post training REMs coincident auditory stimulation enhances memory in humans.

    Science.gov (United States)

    Smith, C; Weeden, K

    1990-06-01

    Sleep activity was monitored in 20 freshman college students for two consecutive nights. Subjects were assigned to 4 equal groups and all were asked to learn a complex logic task before bed on the second night. Two groups of subjects learned the task with a constant clicking noise in the background (cued groups), while two groups simply learned the task (non cued). During the night, one cued and one non cued group were presented with auditory clicks during REM sleep such as to coincide with all REMs of at least 100 microvolts. The second cued group was given auditory clicks during REM sleep, but only during the REMs "quiet" times. The second non-cued control group was never given any nighttime auditory stimulations. The cued REMs coincident group showed a significant 23% improvement in task performance when tested one week later. The non cued REMs coincident group showed only an 8.8% improvement which was not significant. The cued REMs quiet and non-stimulated control groups showed no change in task performance when retested. The results were interpreted as support for the idea that the cued auditory stimulation induced a "recall" of the learned material during the REM sleep state in order for further memory processing to take place.

  11. Intracerebral evidence of rhythm transform in the human auditory cortex.

    Science.gov (United States)

    Nozaradan, Sylvie; Mouraux, André; Jonas, Jacques; Colnat-Coulbois, Sophie; Rossion, Bruno; Maillard, Louis

    2017-07-01

Musical entrainment is shared by all human cultures and the perception of a periodic beat is a cornerstone of this entrainment behavior. Here, we investigated whether beat perception might have its roots in the earliest stages of auditory cortical processing. Local field potentials were recorded from 8 patients implanted with depth-electrodes in Heschl's gyrus and the planum temporale (55 recording sites in total), usually considered as human primary and secondary auditory cortices. Using a frequency-tagging approach, we show that both low-frequency (<30 Hz) and high-frequency (>30 Hz) neural activities in these structures faithfully track auditory rhythms through frequency-locking to the rhythm envelope. A selective gain in amplitude of the response frequency-locked to the beat frequency was observed for the low-frequency activities but not for the high-frequency activities, and was sharper in the planum temporale, especially for the more challenging syncopated rhythm. Hence, this gain process is not systematic in all activities produced in these areas and depends on the complexity of the rhythmic input. Moreover, this gain was disrupted when the rhythm was presented at fast speed, revealing low-pass response properties which could account for the propensity to perceive a beat only within the musical tempo range. Together, these observations show that, even though part of these neural transforms of rhythms could already take place in subcortical auditory processes, the earliest auditory cortical processes shape the neural representation of rhythmic inputs in favor of the emergence of a periodic beat.
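
    The frequency-tagging readout described above can be illustrated, in simplified form, as noise-corrected spectral amplitudes at the envelope frequencies of the rhythm and a relative gain at the putative beat frequency. The frequencies, sampling rate and simulated signal below are assumptions for illustration, not values from the study.

        # Sketch of a frequency-tagging readout: noise-corrected amplitudes at
        # rhythm-related frequencies, plus the relative gain at the beat frequency.
        import numpy as np

        def tagged_amplitudes(signal, fs, freqs_hz, n_neighbors=4):
            spec = np.abs(np.fft.rfft(signal)) / len(signal)
            fbin = np.fft.rfftfreq(len(signal), 1.0 / fs)
            out = {}
            for f in freqs_hz:
                k = np.argmin(np.abs(fbin - f))
                # Local noise floor: neighboring bins, skipping the two adjacent ones.
                neigh = np.r_[spec[k - n_neighbors:k - 1], spec[k + 2:k + n_neighbors + 1]]
                out[f] = spec[k] - neigh.mean()
            return out

        fs = 512
        t = np.arange(0, 16, 1.0 / fs)                  # 16 s of trial-averaged signal
        lfp = (2.0 * np.sin(2 * np.pi * 1.25 * t)       # putative beat frequency
               + np.sin(2 * np.pi * 2.5 * t)            # faster envelope frequency
               + 0.3 * np.random.randn(t.size))
        amps = tagged_amplitudes(lfp, fs, freqs_hz=[1.25, 2.5, 5.0])
        beat_gain = amps[1.25] / (sum(amps.values()) / len(amps))
        print(amps, f"relative gain at beat frequency: {beat_gain:.2f}")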

  12. The tradeoff between signal detection and recognition rules auditory sensitivity under variable background noise conditions.

    Science.gov (United States)

    Lugli, Marco

    2015-12-07

Animal acoustic communication commonly takes place under masked conditions. For instance, sound signals relevant for mating and survival are very often masked by background noise, which makes their detection and recognition by organisms difficult. Ambient noise (AN) varies in level and shape among different habitats, but remarkable variations in time and space also occur within the same habitat. Variable AN conditions mask the hearing thresholds of the receiver in complex and unpredictable ways, thereby causing distortions in sound perception. When communication takes place in a noisy environment, a highly sensitive system might confer no advantage to the receiver compared to a less sensitive one. The effects of noise masking on auditory thresholds and hearing-related functions are well known, and the potential role of AN in the evolution of a species' auditory sensitivity has been recognized by a few authors. The mechanism of the underlying selection process has never been explored, however. Here I present a simple fitness model that seeks the best sensitivity of a hearing system performing the detection and recognition of sound under variable AN conditions. The model predicts higher sensitivity (i.e. lower hearing thresholds) as the best strategy for species living in quiet habitats and lower sensitivity (i.e. higher hearing thresholds) as the best strategy for those living in noisy habitats, provided the cost of incorrect recognition is not low. The tradeoff between detection and recognition of acoustic signals appears to be a key factor determining the best level of hearing sensitivity of a species when acoustic communication is corrupted by noise. Copyright © 2015 Elsevier Ltd. All rights reserved.

  13. Auditory stream segregation using bandpass noises: evidence from event-related potentials

    Directory of Open Access Journals (Sweden)

    Yingjiu eNie

    2014-09-01

Full Text Available The current study measured neural responses to investigate auditory stream segregation of noise stimuli with or without clear spectral contrast. Sequences of alternating A and B noise bursts were presented to elicit stream segregation in normal-hearing listeners. The successive B bursts in each sequence maintained an equal amount of temporal separation with manipulations introduced on the last stimulus. The last B burst was either delayed for 50% of the sequences or not delayed for the other 50%. The A bursts were jittered in between every two adjacent B bursts. To study the effects of spectral separation on streaming, the A and B bursts were further manipulated by using either bandpass-filtered noises widely spaced in center frequency or broadband noises. Event-related potentials (ERPs) to the last B bursts were analyzed to compare the neural responses to the delay vs. no-delay trials in both passive and attentive listening conditions. In the passive listening condition, a trend for a possible late mismatch negativity (MMN) or late discriminative negativity (LDN) response was observed only when the A and B bursts were spectrally separate, suggesting that spectral separation in the A and B burst sequences could be conducive to stream segregation at the pre-attentive level. In the attentive condition, a P300 response was consistently elicited regardless of whether there was spectral separation between the A and B bursts, indicating the facilitative role of voluntary attention in stream segregation. The results suggest that reliable ERP measures can be used as indirect indicators for auditory stream segregation in conditions of weak spectral contrast. These findings have important implications for cochlear implant (CI) studies – as spectral information available through a CI device or simulation is substantially degraded, it may require more attention to achieve stream segregation.
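
    A minimal sketch of the difference-wave analysis implied above (delay minus no-delay ERPs, with mean amplitudes taken in candidate MMN/LDN and P300 windows) might look as follows; the epoch length, sampling rate, electrode and window edges are assumptions, not the study's actual parameters.

        # Sketch: difference-wave analysis for delay vs. no-delay trials.
        import numpy as np

        fs = 500                                  # Hz (assumed)
        times = np.arange(-0.1, 0.6, 1.0 / fs)    # epoch: -100 to 600 ms around B onset

        # Hypothetical single-trial epochs (trials x samples) for one electrode.
        rng = np.random.default_rng(0)
        delay_trials = rng.normal(0.0, 2.0, (120, times.size))
        nodelay_trials = rng.normal(0.0, 2.0, (120, times.size))

        diff_wave = delay_trials.mean(axis=0) - nodelay_trials.mean(axis=0)

        def mean_amp(wave, t, t0, t1):
            """Mean amplitude of `wave` between t0 and t1 seconds."""
            mask = (t >= t0) & (t <= t1)
            return wave[mask].mean()

        print("late MMN/LDN window (300-500 ms):", mean_amp(diff_wave, times, 0.30, 0.50))
        print("P300 window (250-450 ms):        ", mean_amp(diff_wave, times, 0.25, 0.45))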

  14. Modulatory Effects of Attention on Lateral Inhibition in the Human Auditory Cortex.

    Directory of Open Access Journals (Sweden)

    Alva Engell

Full Text Available Reduced neural processing of a tone is observed when it is presented after a sound whose spectral range closely frames the frequency of the tone. This observation might be explained by the mechanism of lateral inhibition (LI) due to inhibitory interneurons in the auditory system. So far, several characteristics of bottom-up influences on LI have been identified, while the influence of top-down processes such as directed attention on LI has not been investigated. Hence, the study at hand aims at investigating the modulatory effects of focused attention on LI in the human auditory cortex. In the magnetoencephalograph, we presented two types of masking sounds (white noise vs. white noise passing through a notch filter centered at a specific frequency), followed by a test tone with a frequency corresponding to the center-frequency of the notch filter. Simultaneously, subjects were presented with visual input on a screen. To modulate the focus of attention, subjects were instructed to concentrate either on the auditory input or the visual stimuli. More specifically, on one half of the trials, subjects were instructed to detect small deviations in loudness in the masking sounds, while on the other half of the trials subjects were asked to detect target stimuli on the screen. The results revealed a reduction in neural activation due to LI, which was larger during auditory compared to visual focused attention. Attentional modulations of LI were observed in two post-N1m time intervals. These findings underline the robustness of reduced neural activation due to LI in the auditory cortex and point towards the important role of attention in the modulation of this mechanism in more evaluative processing stages.

  15. Modulatory Effects of Attention on Lateral Inhibition in the Human Auditory Cortex.

    Science.gov (United States)

    Engell, Alva; Junghöfer, Markus; Stein, Alwina; Lau, Pia; Wunderlich, Robert; Wollbrink, Andreas; Pantev, Christo

    2016-01-01

Reduced neural processing of a tone is observed when it is presented after a sound whose spectral range closely frames the frequency of the tone. This observation might be explained by the mechanism of lateral inhibition (LI) due to inhibitory interneurons in the auditory system. So far, several characteristics of bottom-up influences on LI have been identified, while the influence of top-down processes such as directed attention on LI has not been investigated. Hence, the study at hand aims at investigating the modulatory effects of focused attention on LI in the human auditory cortex. In the magnetoencephalograph, we presented two types of masking sounds (white noise vs. white noise passing through a notch filter centered at a specific frequency), followed by a test tone with a frequency corresponding to the center-frequency of the notch filter. Simultaneously, subjects were presented with visual input on a screen. To modulate the focus of attention, subjects were instructed to concentrate either on the auditory input or the visual stimuli. More specifically, on one half of the trials, subjects were instructed to detect small deviations in loudness in the masking sounds, while on the other half of the trials subjects were asked to detect target stimuli on the screen. The results revealed a reduction in neural activation due to LI, which was larger during auditory compared to visual focused attention. Attentional modulations of LI were observed in two post-N1m time intervals. These findings underline the robustness of reduced neural activation due to LI in the auditory cortex and point towards the important role of attention in the modulation of this mechanism in more evaluative processing stages.
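
    The notched masker used in these two records can be approximated as white noise passed through a zero-phase band-stop filter centred on the test-tone frequency. The sketch below assumes a sampling rate, notch width and filter order for illustration; they are not the values used by the authors.

        # Sketch: broadband vs. notch-filtered white-noise maskers.
        import numpy as np
        from scipy.signal import butter, sosfiltfilt

        fs = 44100                      # assumed sampling rate (Hz)
        dur = 3.0                       # masker duration (s)
        test_tone_hz = 1000.0           # assumed test-tone frequency
        notch_halfwidth = 0.25          # +/- 1/4 octave around the test tone

        noise = np.random.randn(int(fs * dur))          # broadband masker

        low = test_tone_hz * 2.0 ** (-notch_halfwidth)
        high = test_tone_hz * 2.0 ** notch_halfwidth
        sos = butter(6, [low, high], btype="bandstop", fs=fs, output="sos")
        notched_noise = sosfiltfilt(sos, noise)         # zero-phase band-stop filtering

        # `noise` is the broadband masker; `notched_noise` spectrally frames the tone.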

  16. Feature-Selective Attention Adaptively Shifts Noise Correlations in Primary Auditory Cortex.

    Science.gov (United States)

    Downer, Joshua D; Rapone, Brittany; Verhein, Jessica; O'Connor, Kevin N; Sutter, Mitchell L

    2017-05-24

Sensory environments often contain an overwhelming amount of information, with both relevant and irrelevant information competing for neural resources. Feature attention mediates this competition by selecting the sensory features needed to form a coherent percept. How attention affects the activity of populations of neurons to support this process is poorly understood because population coding is typically studied through simulations in which one sensory feature is encoded without competition. Therefore, to study the effects of feature attention on population-based neural coding, investigations must be extended to include stimuli with both relevant and irrelevant features. We measured noise correlations (r_noise) within small neural populations in primary auditory cortex while rhesus macaques performed a novel feature-selective attention task. We found that the effect of feature-selective attention on r_noise depended not only on the population tuning to the attended feature, but also on the tuning to the distractor feature. To attempt to explain how these observed effects might support enhanced perceptual performance, we propose an extension of a simple and influential model in which shifts in r_noise can simultaneously enhance the representation of the attended feature while suppressing the distractor. These findings present a novel mechanism by which attention modulates neural populations to support sensory processing in cluttered environments. SIGNIFICANCE STATEMENT Although feature-selective attention constitutes one of the building blocks of listening in natural environments, its neural bases remain obscure. To address this, we developed a novel auditory feature-selective attention task and measured noise correlations (r_noise) in rhesus macaque A1 during task performance. Unlike previous studies showing that the effect of attention on r_noise depends on population tuning to the attended feature, we show that the effect of attention depends on the tuning

  17. Differences in auditory timing between human and nonhuman primates

    NARCIS (Netherlands)

    Honing, H.; Merchant, H.

    2014-01-01

    The gradual audiomotor evolution hypothesis is proposed as an alternative interpretation to the auditory timing mechanisms discussed in Ackermann et al.'s article. This hypothesis accommodates the fact that the performance of nonhuman primates is comparable to humans in single-interval tasks (such

  18. Using auditory-visual speech to probe the basis of noise-impaired consonant-vowel perception in dyslexia and auditory neuropathy

    Science.gov (United States)

    Ramirez, Joshua; Mann, Virginia

    2005-08-01

    Both dyslexics and auditory neuropathy (AN) subjects show inferior consonant-vowel (CV) perception in noise, relative to controls. To better understand these impairments, natural acoustic speech stimuli that were masked in speech-shaped noise at various intensities were presented to dyslexic, AN, and control subjects either in isolation or accompanied by visual articulatory cues. AN subjects were expected to benefit from the pairing of visual articulatory cues and auditory CV stimuli, provided that their speech perception impairment reflects a relatively peripheral auditory disorder. Assuming that dyslexia reflects a general impairment of speech processing rather than a disorder of audition, dyslexics were not expected to similarly benefit from an introduction of visual articulatory cues. The results revealed an increased effect of noise masking on the perception of isolated acoustic stimuli by both dyslexic and AN subjects. More importantly, dyslexics showed less effective use of visual articulatory cues in identifying masked speech stimuli and lower visual baseline performance relative to AN subjects and controls. Last, a significant positive correlation was found between reading ability and the ameliorating effect of visual articulatory cues on speech perception in noise. These results suggest that some reading impairments may stem from a central deficit of speech processing.

  19. Can you hear me now? Musical training shapes functional brain networks for selective auditory attention and hearing speech in noise

    Directory of Open Access Journals (Sweden)

    Dana L Strait

    2011-06-01

Full Text Available Even in the quietest of rooms, our senses are perpetually inundated by a barrage of sounds, requiring the auditory system to adapt to a variety of listening conditions in order to extract signals of interest (e.g., one speaker’s voice amidst others). Brain networks that promote selective attention are thought to sharpen the neural encoding of a target signal, suppressing competing sounds and enhancing perceptual performance. Here, we ask: does musical training benefit cortical mechanisms that underlie selective attention to speech? To answer this question, we assessed the impact of selective auditory attention on cortical auditory-evoked response variability in musicians and nonmusicians. Outcomes indicate strengthened brain networks for selective auditory attention in musicians in that musicians but not nonmusicians demonstrate decreased prefrontal response variability with auditory attention. Results are interpreted in the context of previous work from our laboratory documenting perceptual and subcortical advantages in musicians for the hearing and neural encoding of speech in background noise. Musicians’ neural proficiency for selectively engaging and sustaining auditory attention to language indicates a potential benefit of music for auditory training. Given the importance of auditory attention for the development of language-related skills, musical training may aid in the prevention, habilitation and remediation of children with a wide range of attention-based language and learning impairments.

  20. Noise Stress Induces an Epidermal Growth Factor Receptor/Xeroderma Pigmentosum-A Response in the Auditory Nerve.

    Science.gov (United States)

    Guthrie, O'neil W

    2017-03-01

    In response to toxic stressors, cancer cells defend themselves by mobilizing one or more epidermal growth factor receptor (EGFR) cascades that employ xeroderma pigmentosum-A (XPA) to repair damaged genes. Recent experiments discovered that neurons within the auditory nerve exhibit basal levels of EGFR+XPA co-expression. This finding implied that auditory neurons in particular or neurons in general have the capacity to mobilize an EGFR+XPA defense. Therefore, the current study tested the hypothesis that noise stress would alter the expression pattern of EGFR/XPA within the auditory nerve. Design-based stereology was used to quantify the proportion of neurons that expressed EGFR, XPA, and EGFR+XPA with and without noise stress. The results revealed an intricate neuronal response that is suggestive of alterations to both co-expression and individual expression of EGFR and XPA. In both the apical and middle cochlear coils, the noise stress depleted EGFR+XPA expression. Furthermore, there was a reduction in the proportion of neurons that expressed XPA-alone in the middle coils. However, the noise stress caused a significant increase in the proportion of neurons that expressed EGFR-alone in the middle coils. The basal cochlear coils failed to mobilize a significant response to the noise stress. These results suggest that EGFR and XPA might be part of the molecular defense repertoire of the auditory nerve.

  1. Noise Stress Induces an Epidermal Growth Factor Receptor/Xeroderma Pigmentosum–A Response in the Auditory Nerve

    Science.gov (United States)

    Guthrie, O’neil W.

    2017-01-01

    In response to toxic stressors, cancer cells defend themselves by mobilizing one or more epidermal growth factor receptor (EGFR) cascades that employ xeroderma pigmentosum–A (XPA) to repair damaged genes. Recent experiments discovered that neurons within the auditory nerve exhibit basal levels of EGFR+XPA co-expression. This finding implied that auditory neurons in particular or neurons in general have the capacity to mobilize an EGFR+XPA defense. Therefore, the current study tested the hypothesis that noise stress would alter the expression pattern of EGFR/XPA within the auditory nerve. Design-based stereology was used to quantify the proportion of neurons that expressed EGFR, XPA, and EGFR+XPA with and without noise stress. The results revealed an intricate neuronal response that is suggestive of alterations to both co-expression and individual expression of EGFR and XPA. In both the apical and middle cochlear coils, the noise stress depleted EGFR+XPA expression. Furthermore, there was a reduction in the proportion of neurons that expressed XPA-alone in the middle coils. However, the noise stress caused a significant increase in the proportion of neurons that expressed EGFR-alone in the middle coils. The basal cochlear coils failed to mobilize a significant response to the noise stress. These results suggest that EGFR and XPA might be part of the molecular defense repertoire of the auditory nerve. PMID:28056182

  2. Cortical oscillations in auditory perception and speech: evidence for two temporal windows in human auditory cortex

    Directory of Open Access Journals (Sweden)

    Huan eLuo

    2012-05-01

Full Text Available Natural sounds, including vocal communication sounds, contain critical information at multiple time scales. Two essential temporal modulation rates in speech have been argued to be in the low gamma band (~20-80 ms duration information) and the theta band (~150-300 ms), corresponding to segmental and syllabic modulation rates, respectively. On one hypothesis, auditory cortex implements temporal integration using time constants closely related to these values. The neural correlates of a proposed dual temporal window mechanism in human auditory cortex remain poorly understood. We recorded MEG responses from participants listening to non-speech auditory stimuli with different temporal structures, created by concatenating frequency-modulated segments of varied segment durations. We show that these non-speech stimuli with temporal structure matching speech-relevant scales (~25 ms and ~200 ms) elicit reliable phase tracking in the corresponding associated oscillatory frequencies (low gamma and theta bands). In contrast, stimuli with non-matching temporal structure do not. Furthermore, the topography of theta band phase tracking shows rightward lateralization while gamma band phase tracking occurs bilaterally. The results support the hypothesis that there exists multi-time resolution processing in cortex on discontinuous scales and provide evidence for an asymmetric organization of temporal analysis (asymmetrical sampling in time, AST). The data argue for a macroscopic-level neural mechanism underlying multi-time resolution processing: the sliding and resetting of intrinsic temporal windows on privileged time scales.
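
    Phase tracking of the kind reported here is often quantified as inter-trial phase coherence (ITPC) after band-pass filtering; the sketch below shows one such computation under assumed band edges, sampling rate and epoch structure, not those of the MEG study.

        # Sketch: inter-trial phase coherence (ITPC) in theta and low-gamma bands.
        import numpy as np
        from scipy.signal import butter, sosfiltfilt, hilbert

        def itpc(trials, fs, band):
            """trials: (n_trials, n_samples) for one sensor; band: (lo, hi) in Hz."""
            sos = butter(4, band, btype="bandpass", fs=fs, output="sos")
            phases = np.angle(hilbert(sosfiltfilt(sos, trials, axis=-1), axis=-1))
            # Length of the mean unit phase vector across trials, per time point.
            return np.abs(np.exp(1j * phases).mean(axis=0))

        fs = 600
        trials = np.random.randn(80, 3 * fs)        # 80 hypothetical 3-s epochs
        theta_itpc = itpc(trials, fs, (4, 8))
        gamma_itpc = itpc(trials, fs, (25, 45))
        print(theta_itpc.mean(), gamma_itpc.mean()) # near 0 if random, toward 1 if locked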

  3. The Contribution of Auditory and Cognitive Factors to Intelligibility of Words and Sentences in Noise.

    Science.gov (United States)

    Heinrich, Antje; Knight, Sarah

    2016-01-01

Understanding the causes of speech-in-noise (SiN) perception difficulties is complex, and is made even more difficult by the fact that listening situations can vary widely in target and background sounds. While there is general agreement that both auditory and cognitive factors are important, their exact relationship to SiN perception across various listening situations remains unclear. This study manipulated the characteristics of the listening situation in two ways: first, target stimuli were either isolated words, or words heard in the context of low- (LP) and high-predictability (HP) sentences; second, the background sound, speech-modulated noise, was presented at two signal-to-noise ratios. Speech intelligibility was measured for 30 older listeners (aged 62-84) with age-normal hearing and related to individual differences in cognition (working memory, inhibition and linguistic skills) and hearing (PTA(0.25-8 kHz) and temporal processing). The results showed that while the effect of hearing thresholds on intelligibility was rather uniform, the influence of cognitive abilities was more specific to a certain listening situation. By revealing a complex picture of relationships between intelligibility and cognition, these results may help us understand some of the inconsistencies in the literature regarding cognitive contributions to speech perception.

  4. Early continuous white noise exposure alters auditory spatial sensitivity and expression of GAD65 and GABAA receptor subunits in rat auditory cortex.

    Science.gov (United States)

    Xu, Jinghong; Yu, Liping; Cai, Rui; Zhang, Jiping; Sun, Xinde

    2010-04-01

Sensory experiences have important roles in the functional development of the mammalian auditory cortex. Here, we show how early continuous noise rearing influences spatial sensitivity in the rat primary auditory cortex (A1) and its underlying mechanisms. By rearing infant rat pups under conditions of continuous, moderate-level white noise, we found that noise rearing markedly attenuated the spatial sensitivity of A1 neurons. Compared with rats reared under normal conditions, spike counts of A1 neurons were more poorly modulated by changes in stimulus location, and their preferred locations were distributed over a larger area. We further show that early continuous noise rearing induced significant decreases in glutamic acid decarboxylase 65 and gamma-aminobutyric acid (GABA)(A) receptor alpha1 subunit expression, and an increase in GABA(A) receptor alpha3 expression, which indicates a return to the juvenile form of the GABA(A) receptor, with no effect on the expression of N-methyl-D-aspartate receptors. These observations indicate that noise rearing has powerful adverse effects on the maturation of cortical GABAergic inhibition, which might be responsible for the reduced spatial sensitivity.

  5. Long-Term Impairment of Sound Processing in the Auditory Midbrain by Daily Short-Term Exposure to Moderate Noise

    Directory of Open Access Journals (Sweden)

    Liang Cheng

    2017-01-01

Full Text Available Most people are exposed daily to environmental noise at moderate levels for short durations. The aim of the present study was to determine the effects of daily short-term exposure to moderate noise on sound level processing in the auditory midbrain. Sound processing properties of auditory midbrain neurons were recorded in anesthetized mice exposed to moderate noise (80 dB SPL, 2 h/d) for 6 weeks and were compared with those from age-matched controls. Neurons in exposed mice had a higher minimum threshold and maximum response intensity, a longer first spike latency, and a higher slope and narrower dynamic range for the rate-level function. However, these observed changes were greater in neurons with a best frequency within the noise exposure frequency range compared with those outside the frequency range. These sound processing properties also remained abnormal after a 12-week period of recovery in a quiet laboratory environment after completion of noise exposure. In conclusion, even daily short-term exposure to moderate noise can cause long-term impairment of sound level processing in a frequency-specific manner in auditory midbrain neurons.

  6. Functional changes in the human auditory cortex in ageing.

    Directory of Open Access Journals (Sweden)

    Oliver Profant

Full Text Available Hearing loss, presbycusis, is one of the most common sensory declines in the ageing population. Presbycusis is characterised by a deterioration in the processing of temporal sound features as well as a decline in speech perception, thus indicating a possible central component. With the aim to explore the central component of presbycusis, we studied the function of the auditory cortex by functional MRI in two groups of elderly subjects (>65 years) and compared the results with young subjects (<30 years). The elderly group with expressed presbycusis (EP) differed from the elderly group with mild presbycusis (MP) in hearing thresholds measured by pure tone audiometry, presence and amplitudes of transient otoacoustic emissions (TEOAE) and distortion-product oto-acoustic emissions (DPOAE), as well as in speech-understanding under noisy conditions. Acoustically evoked activity (pink noise centered around 350 Hz, 700 Hz, 1.5 kHz, 3 kHz, 8 kHz), recorded by BOLD fMRI from an area centered on Heschl's gyrus, was used to determine age-related changes at the level of the auditory cortex. The fMRI showed only minimal activation in response to the 8 kHz stimulation, despite the fact that all subjects heard the stimulus. Both elderly groups showed greater activation in response to acoustical stimuli in the temporal lobes in comparison with young subjects. In addition, activation in the right temporal lobe was more expressed than in the left temporal lobe in both elderly groups, whereas in the young control subjects (YC) leftward lateralization was present. No statistically significant differences in activation of the auditory cortex were found between the MP and EP groups. The greater extent of cortical activation in elderly subjects in comparison with young subjects, with an asymmetry towards the right side, may serve as a compensatory mechanism for the impaired processing of auditory information appearing as a consequence of ageing.

  7. Functional Changes in the Human Auditory Cortex in Ageing

    Science.gov (United States)

    Profant, Oliver; Tintěra, Jaroslav; Balogová, Zuzana; Ibrahim, Ibrahim; Jilek, Milan; Syka, Josef

    2015-01-01

Hearing loss, presbycusis, is one of the most common sensory declines in the ageing population. Presbycusis is characterised by a deterioration in the processing of temporal sound features as well as a decline in speech perception, thus indicating a possible central component. With the aim to explore the central component of presbycusis, we studied the function of the auditory cortex by functional MRI in two groups of elderly subjects (>65 years) and compared the results with young subjects (<30 years). The elderly group with expressed presbycusis (EP) differed from the elderly group with mild presbycusis (MP) in hearing thresholds measured by pure tone audiometry, presence and amplitudes of transient otoacoustic emissions (TEOAE) and distortion-product oto-acoustic emissions (DPOAE), as well as in speech-understanding under noisy conditions. Acoustically evoked activity (pink noise centered around 350 Hz, 700 Hz, 1.5 kHz, 3 kHz, 8 kHz), recorded by BOLD fMRI from an area centered on Heschl’s gyrus, was used to determine age-related changes at the level of the auditory cortex. The fMRI showed only minimal activation in response to the 8 kHz stimulation, despite the fact that all subjects heard the stimulus. Both elderly groups showed greater activation in response to acoustical stimuli in the temporal lobes in comparison with young subjects. In addition, activation in the right temporal lobe was more expressed than in the left temporal lobe in both elderly groups, whereas in the young control subjects (YC) leftward lateralization was present. No statistically significant differences in activation of the auditory cortex were found between the MP and EP groups. The greater extent of cortical activation in elderly subjects in comparison with young subjects, with an asymmetry towards the right side, may serve as a compensatory mechanism for the impaired processing of auditory information appearing as a consequence of ageing. PMID:25734519

  8. Assessment of noise pollution in and around a sensitive zone in North India and its non-auditory impacts.

    Science.gov (United States)

    Khaiwal, Ravindra; Singh, Tanbir; Tripathy, Jaya Prasad; Mor, Suman; Munjal, Sanjay; Patro, Binod; Panda, Naresh

    2016-10-01

Noise pollution in hospitals is recognized as a serious health hazard. Considering this, the current study aimed to map noise pollution levels and to explore the self-reported non-auditory effects of noise in a tertiary medical institute. The study was conducted in an 1800-bed tertiary hospital where 27 sites (outdoor, indoor, roadside and residential areas) were monitored for noise exposure using a sound level meter for 24 h. A detailed noise survey was also conducted around the sampling sites using a structured questionnaire to understand the opinion of the public regarding the impact of noise on their daily lives. The equivalent sound pressure level (Leq) was found to be higher than the permissible limits at all sites, both during daytime and at night. The maximum equivalent sound pressure level (Lmax) during the day was higher (>80 dB) at the emergency department and around the main entrance of the hospital campus. Almost all the respondents (97%) regarded traffic as the major source of noise. About three-fourths (74%) reported irritation with loud noise, whereas 40% of respondents reported headache due to noise. Less than one-third of respondents (29%) reported loss of sleep due to noise and 8% reported hypertension, which could be related to the disturbance caused by noise. Noise levels in and around the hospital were well above the permissible standards. The recent Global Burden of Disease highlights the increasing risk of non-communicable diseases. The non-auditory effects studied in the current work add to the risk factors associated with non-communicable diseases. Hence, there is a need to address the issue of noise pollution and the associated health risks, especially for vulnerable populations. Copyright © 2016 The Authors. Published by Elsevier B.V. All rights reserved.

  9. Auditory processing in the brainstem and audiovisual integration in humans studied with fMRI

    NARCIS (Netherlands)

    Slabu, Lavinia Mihaela

    2008-01-01

    Functional magnetic resonance imaging (fMRI) is a powerful technique because of the high spatial resolution and the noninvasiveness. The applications of the fMRI to the auditory pathway remain a challenge due to the intense acoustic scanner noise of approximately 110 dB SPL. The auditory system

  10. Direct Viewing of Dyslexics' Compensatory Strategies in Speech in Noise Using Auditory Classification Images.

    Science.gov (United States)

    Varnet, Léo; Meunier, Fanny; Trollé, Gwendoline; Hoen, Michel

    2016-01-01

A vast majority of dyslexic children exhibit a phonological deficit, particularly noticeable in phonemic identification or discrimination tasks. The gap in performance between dyslexic and normotypical listeners appears to decrease into adulthood, suggesting that some individuals with dyslexia develop compensatory strategies. Some dyslexic adults, however, remain impaired in more challenging listening situations such as in the presence of background noise. This paper addresses the question of the compensatory strategies employed, using the recently developed Auditory Classification Image (ACI) methodology. The results of 18 dyslexics taking part in a phoneme categorization task in noise were compared with those of 18 normotypical age-matched controls. By fitting a penalized Generalized Linear Model to the data of each participant, we obtained his/her ACI, a map of the time-frequency regions he/she relied on to perform the task. Even though dyslexics performed significantly less well than controls, we were unable to detect a robust difference between the mean ACIs of the two groups. This is partly due to the considerable heterogeneity in listening strategies among a subgroup of 7 low-performing dyslexics, as confirmed by a complementary analysis. When excluding these participants to restrict our comparison to the 11 dyslexics performing as well as their average-reading peers, we found a significant difference at the F3 onset of the first syllable, and a trend toward a difference at the F4 onset, suggesting that these listeners can compensate for their deficit by relying upon additional allophonic cues.
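
    The ACI approach can be sketched as a penalized regression of the listener's trial-by-trial responses on the time-frequency representation of the noise in each trial. The sketch below substitutes a plain L2-penalized logistic regression for the smoothness-penalized GLM used in the published method; data shapes and values are placeholders.

        # Sketch of the Auditory Classification Image idea with placeholder data.
        import numpy as np
        from sklearn.linear_model import LogisticRegression

        n_trials, n_freq, n_time = 2000, 32, 40
        rng = np.random.default_rng(1)
        noise_tf = rng.normal(size=(n_trials, n_freq, n_time))  # per-trial noise spectrograms
        responses = rng.integers(0, 2, size=n_trials)           # listener's binary answers

        X = noise_tf.reshape(n_trials, -1)
        model = LogisticRegression(penalty="l2", C=0.1, max_iter=1000).fit(X, responses)

        aci = model.coef_.reshape(n_freq, n_time)   # map of time-frequency weights
        # Positive weights mark regions where noise energy pushed responses toward "1".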

  11. Crossmodal plasticity in auditory, visual and multisensory cortical areas following noise-induced hearing loss in adulthood.

    Science.gov (United States)

    Schormans, Ashley L; Typlt, Marei; Allman, Brian L

    2017-01-01

    Complete or partial hearing loss results in an increased responsiveness of neurons in the core auditory cortex of numerous species to visual and/or tactile stimuli (i.e., crossmodal plasticity). At present, however, it remains uncertain how adult-onset partial hearing loss affects higher-order cortical areas that normally integrate audiovisual information. To that end, extracellular electrophysiological recordings were performed under anesthesia in noise-exposed rats two weeks post-exposure (0.8-20 kHz at 120 dB SPL for 2 h) and age-matched controls to characterize the nature and extent of crossmodal plasticity in the dorsal auditory cortex (AuD), an area outside of the auditory core, as well as in the neighboring lateral extrastriate visual cortex (V2L), an area known to contribute to audiovisual processing. Computer-generated auditory (noise burst), visual (light flash) and combined audiovisual stimuli were delivered, and the associated spiking activity was used to determine the response profile of each neuron sampled (i.e., unisensory, subthreshold multisensory or bimodal). In both the AuD cortex and the multisensory zone of the V2L cortex, the maximum firing rates were unchanged following noise exposure, and there was a relative increase in the proportion of neurons responsive to visual stimuli, with a concomitant decrease in the number of neurons that were solely responsive to auditory stimuli despite adjusting the sound intensity to account for each rat's hearing threshold. These neighboring cortical areas differed, however, in how noise-induced hearing loss affected audiovisual processing; the total proportion of multisensory neurons significantly decreased in the V2L cortex (control 38.8 ± 3.3% vs. noise-exposed 27.1 ± 3.4%), and dramatically increased in the AuD cortex (control 23.9 ± 3.3% vs. noise-exposed 49.8 ± 6.1%). Thus, following noise exposure, the cortical area showing the greatest relative degree of multisensory convergence

  12. Complex-tone pitch representations in the human auditory system

    DEFF Research Database (Denmark)

    Bianchi, Federica

Understanding how the human auditory system processes the physical properties of an acoustical stimulus to give rise to a pitch percept is a fascinating aspect of hearing research. Since most natural sounds are harmonic complex tones, this work focused on the nature of pitch-relevant cues ... that are necessary for the auditory system to retrieve the pitch of complex sounds. The existence of different pitch-coding mechanisms for low-numbered (spectrally resolved) and high-numbered (unresolved) harmonics was investigated by comparing pitch-discrimination performance across different cohorts of listeners ... In listeners with SNHL, it is likely that HI listeners rely on the enhanced envelope cues to retrieve the pitch of unresolved harmonics. Hence, the relative importance of pitch cues may be altered in HI listeners, whereby envelope cues may be used instead of TFS cues to obtain a similar performance in pitch ...

  13. An examination of the effects of various noise on physiological sensibility responses by using human EEG

    International Nuclear Information System (INIS)

    Cho, W. H.; Lee, J. K.; Son, T. Y.; Hwang, S. H.; Choi, H.; Lee, M. S.

    2013-01-01

This study investigated human stress levels based on electroencephalogram (EEG) data and carried out a subjective evaluation of noise. Visual stimuli are commonly used to probe human emotional state, and relatively more previous work has used visual than auditory stimuli; because auditory studies are fewer, auditory stimulation was chosen for the present study. Twelve human subjects were exposed to classic piano, ocean wave, army alarm, ambulance, and mosquito noises. Two groups of sounds, comfortable and uncomfortable, were used to confirm that the experimental setup could distinguish between clearly different conditions. EEG data were collected during the experimental session. The subjects were tested in a soundproof chamber and asked to minimize blinking, head movement, and swallowing during the experiment. Each session began with a relaxation phase, during which the subjects relaxed in silence for 10 minutes; this was followed by a 20-second noise exposure. The alpha band activities of the subjects were significantly decreased for the ambulance and mosquito noises compared with the classic piano and ocean wave noises. Relative to classic piano, alpha band activity decreased by 12.8 ± 2.3% for the ocean wave noise, by 32.0 ± 5.4% for the army alarm noise, by 34.5 ± 6.7% for the ambulance noise, and by 58.3 ± 9.1% for the mosquito noise. In contrast, beta band activities were significantly increased for the ambulance and mosquito noises compared with classic piano and ocean wave. Relative to classic piano, beta band activity increased by 7.9 ± 1.7% for the ocean wave noise, by 20.6 ± 5.3% for the army alarm noise, by 48.0 ± 7.5% for the ambulance noise, and by 61.9 ± 11.2% for the mosquito noise.
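
    The alpha- and beta-band percentage changes reported above can be illustrated with a conventional band-power computation based on Welch's PSD estimate; the sampling rate, band edges and simulated segments below are assumptions, not the study's recording parameters.

        # Sketch: percentage change of alpha (8-13 Hz) and beta (13-30 Hz) band power
        # between a baseline condition and a noise condition.
        import numpy as np
        from scipy.signal import welch

        def band_power(eeg, fs, lo, hi):
            f, psd = welch(eeg, fs=fs, nperseg=2 * fs)
            sel = (f >= lo) & (f <= hi)
            return np.trapz(psd[sel], f[sel])

        def percent_change(cond, baseline, fs, band):
            p_c, p_b = band_power(cond, fs, *band), band_power(baseline, fs, *band)
            return 100.0 * (p_c - p_b) / p_b

        fs = 256
        piano = np.random.randn(20 * fs)        # hypothetical 20-s EEG segments
        mosquito = np.random.randn(20 * fs)
        print("alpha % change:", percent_change(mosquito, piano, fs, (8, 13)))
        print("beta  % change:", percent_change(mosquito, piano, fs, (13, 30)))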

  14. An examination of the effects of various noise on physiological sensibility responses by using human EEG

    Energy Technology Data Exchange (ETDEWEB)

    Cho, W. H.; Lee, J. K.; Son, T. Y.; Hwang, S. H.; Choi, H. [Sungkyunkwan University, Suwon (Korea, Republic of); Lee, M. S. [Hyundai Motor Company, Hwaseong (Korea, Republic of)

    2013-12-15

This study investigated human stress levels based on electroencephalogram (EEG) data and carried out a subjective evaluation of noise. Visual stimuli are commonly used to probe human emotional state, and relatively more previous work has used visual than auditory stimuli; because auditory studies are fewer, auditory stimulation was chosen for the present study. Twelve human subjects were exposed to classic piano, ocean wave, army alarm, ambulance, and mosquito noises. Two groups of sounds, comfortable and uncomfortable, were used to confirm that the experimental setup could distinguish between clearly different conditions. EEG data were collected during the experimental session. The subjects were tested in a soundproof chamber and asked to minimize blinking, head movement, and swallowing during the experiment. Each session began with a relaxation phase, during which the subjects relaxed in silence for 10 minutes; this was followed by a 20-second noise exposure. The alpha band activities of the subjects were significantly decreased for the ambulance and mosquito noises compared with the classic piano and ocean wave noises. Relative to classic piano, alpha band activity decreased by 12.8 ± 2.3% for the ocean wave noise, by 32.0 ± 5.4% for the army alarm noise, by 34.5 ± 6.7% for the ambulance noise, and by 58.3 ± 9.1% for the mosquito noise. In contrast, beta band activities were significantly increased for the ambulance and mosquito noises compared with classic piano and ocean wave. Relative to classic piano, beta band activity increased by 7.9 ± 1.7% for the ocean wave noise, by 20.6 ± 5.3% for the army alarm noise, by 48.0 ± 7.5% for the ambulance noise, and by 61.9 ± 11.2% for the mosquito noise.

  15. Relations between perceptual measures of temporal processing, auditory-evoked brainstem responses and speech intelligibility in noise

    DEFF Research Database (Denmark)

    Papakonstantinou, Alexandra; Strelcyk, Olaf; Dau, Torsten

    2011-01-01

This study investigates behavioural and objective measures of temporal auditory processing and their relation to the ability to understand speech in noise. The experiments were carried out on a homogeneous group of seven hearing-impaired listeners with normal sensitivity at low frequencies (up to 1 kHz) and steeply sloping hearing losses above 1 kHz. For comparison, data were also collected for five normal-hearing listeners. Temporal processing was addressed at low frequencies by means of psychoacoustical frequency discrimination, binaural masked detection and amplitude modulation (AM) detection. In addition, auditory brainstem responses (ABRs) to clicks and broadband rising chirps were recorded. Furthermore, speech reception thresholds (SRTs) were determined for Danish sentences in speech-shaped noise. The main findings were: (1) SRTs were neither correlated with hearing sensitivity

  16. Relations Between the Intelligibility of Speech in Noise and Psychophysical Measures of Hearing Measured in Four Languages Using the Auditory Profile Test Battery

    NARCIS (Netherlands)

    van Esch, T. E. M.; Dreschler, W. A.

    2015-01-01

    The aim of the present study was to determine the relations between the intelligibility of speech in noise and measures of auditory resolution, loudness recruitment, and cognitive function. The analyses were based on data published earlier as part of the presentation of the Auditory Profile, a test

  17. A quiet NICU for improved infants’ health, development and well-being : A systems approach to reducing noise and auditory alarms

    NARCIS (Netherlands)

    Freudenthal, A.; Van Stuijvenberg, M.; Van Goudoever, J.B.

    2012-01-01

Noise is a direct cause of health problems, long-lasting auditory problems and developmental problems. Preterm infants are especially at risk with respect to auditory and neurocognitive development. Sound levels are very high at the neonatal intensive care unit (NICU) and may contribute to the frequently

  18. A quiet NICU for improved infants' health, development and well-being : a systems approach to reducing noise and auditory alarms

    NARCIS (Netherlands)

    Freudenthal, A.; van Stuijvenberg, M.; van Goudoever, J. B.

Noise is a direct cause of health problems, long-lasting auditory problems and developmental problems. Preterm infants are especially at risk with respect to auditory and neurocognitive development. Sound levels are very high at the neonatal intensive care unit (NICU) and may contribute to the frequently

  19. A quiet NICU for improved infants' health, development and well-being : A systems approach to reducing noise and auditory alarms

    NARCIS (Netherlands)

    Freudenthal, A.; Van Stuijvenberg, M.; Van Goudoever, J.B.

    2012-01-01

Noise is a direct cause of health problems, long-lasting auditory problems and developmental problems. Preterm infants are especially at risk with respect to auditory and neurocognitive development. Sound levels are very high at the neonatal intensive care unit (NICU) and may contribute to the frequently

  20. Effects of noise-induced hearing loss on parvalbumin and perineuronal net expression in the mouse primary auditory cortex.

    Science.gov (United States)

    Nguyen, Anna; Khaleel, Haroun M; Razak, Khaleel A

    2017-07-01

    Noise induced hearing loss is associated with increased excitability in the central auditory system but the cellular correlates of such changes remain to be characterized. Here we tested the hypothesis that noise-induced hearing loss causes deterioration of perineuronal nets (PNNs) in the auditory cortex of mice. PNNs are specialized extracellular matrix components that commonly enwrap cortical parvalbumin (PV) containing GABAergic interneurons. Compared to somatosensory and visual cortex, relatively less is known about PV/PNN expression patterns in the primary auditory cortex (A1). Whether changes to cortical PNNs follow acoustic trauma remains unclear. The first aim of this study was to characterize PV/PNN expression in A1 of adult mice. PNNs increase excitability of PV+ inhibitory neurons and confer protection to these neurons against oxidative stress. Decreased PV/PNN expression may therefore lead to a reduction in cortical inhibition. The second aim of this study was to examine PV/PNN expression in superficial (I-IV) and deep cortical layers (V-VI) following noise trauma. Exposing mice to loud noise caused an increase in hearing threshold that lasted at least 30 days. PV and PNN expression in A1 was analyzed at 1, 10 and 30 days following the exposure. No significant changes were observed in the density of PV+, PNN+, or PV/PNN co-localized cells following hearing loss. However, a significant layer- and cell type-specific decrease in PNN intensity was seen following hearing loss. Some changes were present even at 1 day following noise exposure. Attenuation of PNN may contribute to changes in excitability in cortex following noise trauma. The regulation of PNN may open up a temporal window for altered excitability in the adult brain that is then stabilized at a new and potentially pathological level such as in tinnitus. Copyright © 2017 Elsevier B.V. All rights reserved.

  1. Bimodal stimulus timing-dependent plasticity in primary auditory cortex is altered after noise exposure with and without tinnitus.

    Science.gov (United States)

    Basura, Gregory J; Koehler, Seth D; Shore, Susan E

    2015-12-01

    Central auditory circuits are influenced by the somatosensory system, a relationship that may underlie tinnitus generation. In the guinea pig dorsal cochlear nucleus (DCN), pairing spinal trigeminal nucleus (Sp5) stimulation with tones at specific intervals and orders facilitated or suppressed subsequent tone-evoked neural responses, reflecting spike timing-dependent plasticity (STDP). Furthermore, after noise-induced tinnitus, bimodal responses in DCN were shifted from Hebbian to anti-Hebbian timing rules with less discrete temporal windows, suggesting a role for bimodal plasticity in tinnitus. Here, we aimed to determine if multisensory STDP principles like those in DCN also exist in primary auditory cortex (A1), and whether they change following noise-induced tinnitus. Tone-evoked and spontaneous neural responses were recorded before and 15 min after bimodal stimulation in which the intervals and orders of auditory-somatosensory stimuli were randomized. Tone-evoked and spontaneous firing rates were influenced by the interval and order of the bimodal stimuli, and in sham-controls Hebbian-like timing rules predominated as was seen in DCN. In noise-exposed animals with and without tinnitus, timing rules shifted away from those found in sham-controls to more anti-Hebbian rules. Only those animals with evidence of tinnitus showed increased spontaneous firing rates, a purported neurophysiological correlate of tinnitus in A1. Together, these findings suggest that bimodal plasticity is also evident in A1 following noise damage and may have implications for tinnitus generation and therapeutic intervention across the central auditory circuit. Copyright © 2015 the American Physiological Society.

  2. Attention effects at auditory periphery derived from human scalp potentials: displacement measure of potentials.

    Science.gov (United States)

    Ikeda, Kazunari; Hayashi, Akiko; Sekiguchi, Takahiro; Era, Shukichi

    2006-10-01

It is known in humans that electrophysiological measures such as the auditory brainstem response (ABR) make it difficult to identify attention effects at the auditory periphery, whereas the centrifugal effect has been detected by measuring otoacoustic emissions. This research developed a measure responsive to the shift of human scalp potentials within a brief post-stimulus period (13 ms), namely the displacement percentage, and applied it in an experiment to retrieve the peripheral attention effect. In the present experimental paradigm, tone pips were presented to the left ear, whereas the other ear was masked by white noise. Twelve participants each completed two conditions, either ignoring or attending to the tone pips. Relative to the averaged scalp potentials in the ignoring condition, a shift of the potentials was found within the early component range during the attentive condition, and the displacement percentage then revealed a significant magnitude difference between the two conditions. These results suggest that, using a measure representing the potential shift itself, the peripheral effect of attention can be detected from human scalp potentials.

  3. Reorganization of auditory map and pitch discrimination in adult rats chronically exposed to low-level ambient noise

    Directory of Open Access Journals (Sweden)

    Weimin eZheng

    2012-09-01

Full Text Available Behavioral adaptation to a changing environment is critical for an animal’s survival. How well the brain can modify its functional properties based on experience essentially defines the limits of behavioral adaptation. In adult animals the extent to which experience shapes brain function has not been fully explored. Moreover, the perceptual consequences of experience-induced changes in the brains of adults remain unknown. Here we show that the tonotopic map in the primary auditory cortex of adult rats living with low-level ambient noise underwent a dramatic reorganization. Behaviorally, chronic noise exposure impaired fine, but not coarse, pitch discrimination. When tested in a noisy environment, the noise-exposed rats performed as well as in a quiet environment, whereas the control rats performed poorly. This suggests that noise-exposed animals had adapted to living in a noisy environment. Behavioral pattern analyses revealed that stress or distraction engendered by the noisy background could not account for the poor performance of the control rats in a noisy environment. A reorganized auditory map may therefore have served as the neural substrate for the consistent performance of the noise-exposed rats in a noisy environment.

  4. Acceptance of background noise, working memory capacity, and auditory evoked potentials in subjects with normal hearing.

    Science.gov (United States)

    Brännström, K Jonas; Zunic, Edita; Borovac, Aida; Ibertsson, Tina

    2012-01-01

The acceptable noise level (ANL) test is a method for quantifying the amount of background noise that subjects accept when listening to speech. Large variations in ANL have been seen between normal-hearing subjects and between studies of normal-hearing subjects, but few explanatory variables have been identified. The aim was to explore a possible relationship between a Swedish version of the ANL test, working memory capacity (WMC), and auditory evoked potentials (AEPs). ANL, WMC, and AEP were tested in a counterbalanced order across subjects. Twenty-one normal-hearing subjects participated in the study (14 females and 7 males; aged 20-39 yr with an average of 25.7 yr). The reported data consist of age, pure-tone average (PTA), most comfortable level (MCL), background noise level (BNL), ANL (i.e., MCL - BNL), AEP latencies, AEP amplitudes, and WMC. Spearman's rank correlation coefficient was calculated between the collected variables to investigate associations. A principal component analysis (PCA) with Varimax rotation was conducted on the collected variables to explore underlying factors and estimate interactions between the tested variables. Subjects were also pooled into two groups depending on their results on the WMC test, one group with a score lower than the average and one with a score higher than the average. Comparisons between these two groups were made using the Mann-Whitney U-test with Bonferroni correction for multiple comparisons. A negative association was found between ANL and WMC but not between AEP and ANL or WMC. Furthermore, ANL is derived from MCL and BNL, and a significant positive association was found between BNL and WMC. However, no significant associations were seen between AEP latencies and amplitudes and the demographic variables, MCL, and BNL. The PCA identified two underlying factors: one that contained MCL, BNL, ANL, and WMC and another that contained the latency for wave Na and the amplitudes for waves V and Na-Pa. Using the variables in the first factor
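
    The correlation and group-comparison parts of this analysis can be sketched as Spearman rank correlations among the collected measures plus Mann-Whitney U tests between low- and high-WMC groups with a Bonferroni-corrected criterion; the variable names and values below are placeholders, not the study's data.

        # Sketch: Spearman correlations and Bonferroni-corrected Mann-Whitney U tests.
        import numpy as np
        from scipy.stats import spearmanr, mannwhitneyu

        rng = np.random.default_rng(2)
        n = 21
        data = {
            "ANL": rng.normal(5, 3, n),
            "BNL": rng.normal(60, 5, n),
            "WMC": rng.normal(40, 6, n),
            "waveV_latency": rng.normal(5.6, 0.2, n),
        }

        rho, p = spearmanr(data["ANL"], data["WMC"])
        print(f"ANL vs WMC: rho={rho:.2f}, p={p:.3f}")

        # Median split on WMC, then compare the other measures across the two groups.
        low = data["WMC"] < np.median(data["WMC"])
        tests = ["ANL", "BNL", "waveV_latency"]
        alpha_corr = 0.05 / len(tests)              # Bonferroni-corrected criterion
        for name in tests:
            u, p = mannwhitneyu(data[name][low], data[name][~low])
            print(f"{name}: U={u:.0f}, p={p:.3f}, significant={p < alpha_corr}")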

  5. Predicting hearing thresholds in occupational noise-induced hearing loss by auditory steady state responses.

    Science.gov (United States)

    Attias, Joseph; Karawani, Hanin; Shemesh, Rafi; Nageris, Ben

    2014-01-01

    Currently available behavioral tools for the assessment of noise-induced hearing loss (NIHL) depend on the reliable cooperation of the subject. Furthermore, in workers' compensation cases, there is considerable financial gain to be had from exaggerating symptoms, such that accurate assessment of true hearing threshold levels is essential. An alternative objective physiologic tool for assessing NIHL is the auditory steady state response (ASSR) test, which combines frequency specificity with a high level of auditory stimulation, making it applicable for the evaluation of subjects with a moderate to severe deficit. The primary aim of the study was to assess the value of the multifrequency ASSR test in predicting the behavioral warble-tone audiogram in a large sample of young subjects with NIHL of varying severity or with normal hearing. The secondary goal was to assess suprathreshold ASSR growth functions in these two groups. The study group included 157 subjects regularly exposed to high levels of occupational noise, who attended a university-associated audiological clinic for evaluation of NIHL from 2009 through 2011. All underwent a behavioral audiogram, and on the basis of the findings, were divided into those with NIHL (108 subjects, 216 ears) or normal hearing (49 subjects, 98 ears). The accuracy of the ASSR threshold estimations for frequencies of 500, 1000, 2000, and 4000 Hz was compared between groups, and the specificity and sensitivity of the ASSR test in differentiating ears with or without NIHL was calculated using receiver operating characteristic analysis. Linear regression analysis was used to formulate an equation to predict the behavioral warble-tone audiogram at each test frequency using ASSR thresholds. Multifrequency ASSR amplitude growth as a function of stimulus intensity was compared between the NIHL and normal-hearing groups for 1000 Hz and 4000 Hz carrier frequencies. In the subjects with NIHL, ASSR thresholds to various frequencies were
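
    The two analysis steps described here (a per-frequency linear regression predicting the behavioral warble-tone threshold from the ASSR threshold, and an ROC analysis of how well ASSR thresholds separate NIHL from normal-hearing ears) can be sketched as follows, using simulated placeholder data rather than the study's measurements.

        # Sketch: threshold-prediction regression and ROC analysis with simulated data.
        import numpy as np
        from sklearn.metrics import roc_auc_score, roc_curve

        rng = np.random.default_rng(3)
        n_nihl, n_norm = 216, 98                    # ears, as in the study sample
        assr_nihl = rng.normal(55, 10, n_nihl)      # simulated ASSR thresholds (dB HL)
        assr_norm = rng.normal(20, 8, n_norm)
        behav_nihl = assr_nihl - 10 + rng.normal(0, 5, n_nihl)  # simulated behavioral thresholds

        # (1) Regression: behavioral threshold ~ slope * ASSR threshold + intercept.
        slope, intercept = np.polyfit(assr_nihl, behav_nihl, 1)
        print(f"predicted behavioral threshold = {slope:.2f} * ASSR + {intercept:.1f} dB")

        # (2) ROC: how well does the ASSR threshold separate NIHL from normal ears?
        labels = np.r_[np.ones(n_nihl), np.zeros(n_norm)]
        scores = np.r_[assr_nihl, assr_norm]
        fpr, tpr, cutoffs = roc_curve(labels, scores)
        print("AUC:", roc_auc_score(labels, scores))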

  6. Auditory Peripheral Processing of Degraded Speech

    National Research Council Canada - National Science Library

    Ghitza, Oded

    2003-01-01

    ...". The underlying thesis is that the auditory periphery contributes to the robust performance of humans in speech reception in noise through a concerted contribution of the efferent feedback system...

  7. Persistent neural activity in auditory cortex is related to auditory working memory in humans and nonhuman primates.

    Science.gov (United States)

    Huang, Ying; Matysiak, Artur; Heil, Peter; König, Reinhard; Brosch, Michael

    2016-07-20

    Working memory is the cognitive capacity of short-term storage of information for goal-directed behaviors. Where and how this capacity is implemented in the brain are unresolved questions. We show that auditory cortex stores information by persistent changes of neural activity. We separated activity related to working memory from activity related to other mental processes by having humans and monkeys perform different tasks with varying working memory demands on the same sound sequences. Working memory was reflected in the spiking activity of individual neurons in auditory cortex and in the activity of neuronal populations, that is, in local field potentials and magnetic fields. Our results provide direct support for the idea that temporary storage of information recruits the same brain areas that also process the information. Because similar activity was observed in the two species, the cellular bases of some auditory working memory processes in humans can be studied in monkeys.

  8. Temporal integration of sequential auditory events: silent period in sound pattern activates human planum temporale.

    Science.gov (United States)

    Mustovic, Henrietta; Scheffler, Klaus; Di Salle, Francesco; Esposito, Fabrizio; Neuhoff, John G; Hennig, Jürgen; Seifritz, Erich

    2003-09-01

    Temporal integration is a fundamental process that the brain carries out to construct coherent percepts from serial sensory events. This process critically depends on the formation of memory traces reconciling past with present events and is particularly important in the auditory domain where sensory information is received both serially and in parallel. It has been suggested that buffers for transient auditory memory traces reside in the auditory cortex. However, previous studies investigating "echoic memory" did not distinguish between brain response to novel auditory stimulus characteristics on the level of basic sound processing and a higher level involving matching of present with stored information. Here we used functional magnetic resonance imaging in combination with a regular pattern of sounds repeated every 100 ms and deviant interspersed stimuli of 100-ms duration, which were either brief presentations of louder sounds or brief periods of silence, to probe the formation of auditory memory traces. To avoid interaction with scanner noise, the auditory stimulation sequence was implemented into the image acquisition scheme. Compared to increased loudness events, silent periods produced specific neural activation in the right planum temporale and temporoparietal junction. Our findings suggest that this area posterior to the auditory cortex plays a critical role in integrating sequential auditory events and is involved in the formation of short-term auditory memory traces. This function of the planum temporale appears to be fundamental in the segregation of simultaneous sound sources.

  9. Computer-based auditory phoneme discrimination training improves speech recognition in noise in experienced adult cochlear implant listeners.

    Science.gov (United States)

    Schumann, Annette; Serman, Maja; Gefeller, Olaf; Hoppe, Ulrich

    2015-03-01

    Specific computer-based auditory training may be a useful complement to the rehabilitation process for cochlear implant (CI) listeners to achieve sufficient speech intelligibility. This study evaluated the effectiveness of a computerized, phoneme-discrimination training programme. The study employed a pretest-post-test design; participants were randomly assigned to the training or control group. Over a period of three weeks, the training group was instructed to train in phoneme discrimination via computer, twice a week. Sentence recognition in different noise conditions (moderate to difficult) was tested pre- and post-training, and six months after the training was completed. The control group was tested and retested within one month. Twenty-seven adult CI listeners who had been using cochlear implants for more than two years participated in the programme; 15 adults in the training group, 12 adults in the control group. Besides significant improvements for the trained phoneme-identification task, a generalized training effect was noted via significantly improved sentence recognition in moderate noise. No significant changes were noted in the difficult noise conditions. Improved performance was maintained over an extended period. Phoneme-discrimination training improves experienced CI listeners' speech perception in noise. Additional research is needed to optimize auditory training for individual benefit.

  10. Non-auditory effects of noise in industry. IV. A field study on industrial noise and blood pressure

    NARCIS (Netherlands)

    Verbeek, J. H.; van Dijk, F. J.; de Vries, F. F.

    1987-01-01

    Audiometry and casual blood pressure measurements were carried out among industrial workers exposed to noise levels exceeding 80 dB(A). Workers with long-term noise exposure had increased blood pressure, after correction for age. Only a weak correlation was observed between noise-induced hearing

  11. Diffusion tractography of the subcortical auditory system in a postmortem human brain

    OpenAIRE

    Sitek, Kevin

    2017-01-01

    The subcortical auditory system is challenging to identify with standard human brain imaging techniques: MRI signal decreases toward the center of the brain as well as at higher resolution, both of which are necessary for imaging small brainstem auditory structures. Using high-resolution diffusion-weighted MRI, we asked: Can we identify auditory structures and connections in high-resolution ex vivo images? Which structures and connections can be mapped in vivo?

  12. A review of the auditory and non-auditory effects of exposure to noise on women\\'s health

    Directory of Open Access Journals (Sweden)

    Shiva Soury

    2017-09-01

    Conclusion: The present review showed that women's exposure to occupational noise has specific effects in addition to hearing loss and general physiological effects, and that these effects, in all circumstances and especially during pregnancy, can be more consequential for women than for men. Therefore, it is essential to consider the physiological and psychological characteristics of women, especially pregnant women, in occupational health monitoring programs and periodic medical examinations.

  13. Distractor Effect of Auditory Rhythms on Self-Paced Tapping in Chimpanzees and Humans

    Science.gov (United States)

    Hattori, Yuko; Tomonaga, Masaki; Matsuzawa, Tetsuro

    2015-01-01

    Humans tend to spontaneously align their movements in response to visual (e.g., swinging pendulum) and auditory rhythms (e.g., hearing music while walking). Particularly in the case of the response to auditory rhythms, neuroscientific research has indicated that motor resources are also recruited while perceiving an auditory rhythm (or regular pulse), suggesting a tight link between the auditory and motor systems in the human brain. However, the evolutionary origin of spontaneous responses to auditory rhythms is unclear. Here, we report that chimpanzees and humans show a similar distractor effect in perceiving isochronous rhythms during rhythmic movement. We used isochronous auditory rhythms as distractor stimuli during self-paced alternate tapping of two keys of an electronic keyboard by humans and chimpanzees. When the tempo was similar to their spontaneous motor tempo, tapping onset was influenced by intermittent entrainment to auditory rhythms. Although this effect itself is not an advanced rhythmic ability such as dancing or singing, our results suggest that, to some extent, the biological foundation for spontaneous responses to auditory rhythms was already deeply rooted in the common ancestor of chimpanzees and humans, 6 million years ago. This also suggests the possibility of a common attentional mechanism, as proposed by the dynamic attending theory, underlying the effect of perceiving external rhythms on motor movement. PMID:26132703

  14. Distractor Effect of Auditory Rhythms on Self-Paced Tapping in Chimpanzees and Humans.

    Directory of Open Access Journals (Sweden)

    Yuko Hattori

    Full Text Available Humans tend to spontaneously align their movements in response to visual (e.g., swinging pendulum) and auditory rhythms (e.g., hearing music while walking). Particularly in the case of the response to auditory rhythms, neuroscientific research has indicated that motor resources are also recruited while perceiving an auditory rhythm (or regular pulse), suggesting a tight link between the auditory and motor systems in the human brain. However, the evolutionary origin of spontaneous responses to auditory rhythms is unclear. Here, we report that chimpanzees and humans show a similar distractor effect in perceiving isochronous rhythms during rhythmic movement. We used isochronous auditory rhythms as distractor stimuli during self-paced alternate tapping of two keys of an electronic keyboard by humans and chimpanzees. When the tempo was similar to their spontaneous motor tempo, tapping onset was influenced by intermittent entrainment to auditory rhythms. Although this effect itself is not an advanced rhythmic ability such as dancing or singing, our results suggest that, to some extent, the biological foundation for spontaneous responses to auditory rhythms was already deeply rooted in the common ancestor of chimpanzees and humans, 6 million years ago. This also suggests the possibility of a common attentional mechanism, as proposed by the dynamic attending theory, underlying the effect of perceiving external rhythms on motor movement.

  15. Distractor Effect of Auditory Rhythms on Self-Paced Tapping in Chimpanzees and Humans.

    Science.gov (United States)

    Hattori, Yuko; Tomonaga, Masaki; Matsuzawa, Tetsuro

    2015-01-01

    Humans tend to spontaneously align their movements in response to visual (e.g., swinging pendulum) and auditory rhythms (e.g., hearing music while walking). Particularly in the case of the response to auditory rhythms, neuroscientific research has indicated that motor resources are also recruited while perceiving an auditory rhythm (or regular pulse), suggesting a tight link between the auditory and motor systems in the human brain. However, the evolutionary origin of spontaneous responses to auditory rhythms is unclear. Here, we report that chimpanzees and humans show a similar distractor effect in perceiving isochronous rhythms during rhythmic movement. We used isochronous auditory rhythms as distractor stimuli during self-paced alternate tapping of two keys of an electronic keyboard by humans and chimpanzees. When the tempo was similar to their spontaneous motor tempo, tapping onset was influenced by intermittent entrainment to auditory rhythms. Although this effect itself is not an advanced rhythmic ability such as dancing or singing, our results suggest that, to some extent, the biological foundation for spontaneous responses to auditory rhythms was already deeply rooted in the common ancestor of chimpanzees and humans, 6 million years ago. This also suggests the possibility of a common attentional mechanism, as proposed by the dynamic attending theory, underlying the effect of perceiving external rhythms on motor movement.

  16. Processamento auditivo de militares expostos a ruído ocupacional [Auditory processing of servicemen exposed to occupational noise]

    Directory of Open Access Journals (Sweden)

    Carla Cassandra de Souza Santos

    2008-03-01

    Full Text Available PURPOSE: to evaluate the auditory processing of military personnel exposed to occupational noise. METHODS: 41 servicemen exposed to noise for more than 10 years were evaluated, divided into Group A (n = 16, without hearing loss) and Group B (n = 25, with hearing loss). Basic audiologic evaluation and auditory processing tests (Filtered Speech, SSW in Portuguese, and Pitch Pattern Sequence tests) were carried out. RESULTS: there were high incidences of auditory processing alterations, especially on the Filtered Speech test (43.75% and 68% in groups A and B, respectively) and the Pitch Pattern Sequence test (68.75% and 48% in groups A and B, respectively). The SSW test was not efficient for evaluating the central auditory abilities of individuals exposed to high sound pressure levels. CONCLUSION: occupational noise exposure interferes with the auditory processing of military personnel. Alterations in the central auditory pathway can be observed regardless of the presence of peripheral hearing alteration.

  17. Memory performance on the Auditory Inference Span Test is independent of background noise type for young adults with normal hearing at high speech intelligibility

    OpenAIRE

    Niklas Rönnberg; Mary Rudner; Thomas Lunner; Stefan Stenfelt

    2014-01-01

    Listening in noise is often perceived to be effortful. This is partly because cognitive resources are engaged in separating the target signal from background noise, leaving fewer resources for storage and processing of the content of the message in working memory. The Auditory Inference Span Test (AIST) is designed to assess listening effort by measuring the ability to maintain and process heard information. The aim of this study was to use AIST to investigate the effect of background noise t...

  18. Attention-driven auditory cortex short-term plasticity helps segregate relevant sounds from noise

    OpenAIRE

    Ahveninen, Jyrki; Hämäläinen, Matti; Jääskeläinen, Iiro P.; Ahlfors, Seppo P.; Huang, Samantha; Lin, Fa-Hsuan; Raij, Tommi; Sams, Mikko; Vasios, Christos E.; Belliveau, John W.

    2011-01-01

    How can we concentrate on relevant sounds in noisy environments? A “gain model” suggests that auditory attention simply amplifies relevant and suppresses irrelevant afferent inputs. However, it is unclear whether this suffices when attended and ignored features overlap to stimulate the same neuronal receptive fields. A “tuning model” suggests that, in addition to gain, attention modulates feature selectivity of auditory neurons. We recorded magnetoencephalography, EEG, and functional MRI (fMR...

  19. Effects of background noise on inter-trial phase coherence and auditory N1-P2 responses to speech stimuli.

    Science.gov (United States)

    Koerner, Tess K; Zhang, Yang

    2015-10-01

    This study investigated the effects of a speech-babble background noise on inter-trial phase coherence (ITPC, also referred to as phase locking value (PLV)) and auditory event-related responses (AERP) to speech sounds. Specifically, we analyzed EEG data from 11 normal hearing subjects to examine whether ITPC can predict noise-induced variations in the obligatory N1-P2 complex response. N1-P2 amplitude and latency data were obtained for the /bu/syllable in quiet and noise listening conditions. ITPC data in delta, theta, and alpha frequency bands were calculated for the N1-P2 responses in the two passive listening conditions. Consistent with previous studies, background noise produced significant amplitude reduction and latency increase in N1 and P2, which were accompanied by significant ITPC decreases in all the three frequency bands. Correlation analyses further revealed that variations in ITPC were able to predict the amplitude and latency variations in N1-P2. The results suggest that trial-by-trial analysis of cortical neural synchrony is a valuable tool in understanding the modulatory effects of background noise on AERP measures. Copyright © 2015 Elsevier B.V. All rights reserved.
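
    For readers unfamiliar with the ITPC/PLV measure used here, the following is a minimal sketch of how it is typically computed from single-trial epochs: band-limit each trial, take the instantaneous phase, and measure the length of the mean unit phase vector across trials. The sampling rate, filter settings, and simulated data are assumptions, not the authors' pipeline.

```python
# Minimal sketch (assumptions, not the authors' pipeline): inter-trial phase
# coherence (ITPC / phase-locking value) in a frequency band, computed from a
# hypothetical array of single-trial EEG epochs.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def itpc(epochs, fs, band):
    """epochs: (n_trials, n_samples) array; returns ITPC per time sample."""
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    filtered = filtfilt(b, a, epochs, axis=1)      # band-limit each trial
    phase = np.angle(hilbert(filtered, axis=1))    # instantaneous phase per trial
    # Length of the mean unit phase vector across trials:
    # 0 = random phases, 1 = perfectly phase-locked.
    return np.abs(np.mean(np.exp(1j * phase), axis=0))

fs = 500                                           # sampling rate (Hz), assumed
epochs = np.random.randn(100, fs)                  # 100 simulated 1-s trials
theta_itpc = itpc(epochs, fs, (4, 8))              # ITPC in the theta band
print(theta_itpc.max())
```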

  20. Empathy and the somatotopic auditory mirror system in humans

    NARCIS (Netherlands)

    Gazzola, Valeria; Aziz-Zadeh, Lisa; Keysers, Christian

    2006-01-01

    How do we understand the actions of other individuals if we can only hear them? Auditory mirror neurons respond both while monkeys perform hand or mouth actions and while they listen to sounds of similar actions [1, 2]. This system might be critical for auditory action understanding and language

  1. Single-Sided Deafness: Impact of Cochlear Implantation on Speech Perception in Complex Noise and on Auditory Localization Accuracy.

    Science.gov (United States)

    Döge, Julia; Baumann, Uwe; Weissgerber, Tobias; Rader, Tobias

    2017-12-01

    To assess auditory localization accuracy and speech reception threshold (SRT) in complex noise conditions in adult patients with acquired single-sided deafness, after intervention with a cochlear implant (CI) in the deaf ear. Nonrandomized, open, prospective patient series. Tertiary referral university hospital. Eleven patients with late-onset single-sided deafness (SSD) and normal hearing in the unaffected ear, who received a CI. All patients were experienced CI users. Unilateral cochlear implantation. Speech perception was tested in a complex multitalker equivalent noise field consisting of multiple sound sources. Speech reception thresholds in noise were determined in aided (with CI) and unaided conditions. Localization accuracy was assessed in complete darkness. Acoustic stimuli were radiated by multiple loudspeakers distributed in the frontal horizontal plane between -60 and +60 degrees. In the aided condition, results show slightly improved speech reception scores compared with the unaided condition in most of the patients. For 8 of the 11 subjects, SRT was improved between 0.37 and 1.70 dB. Three of the 11 subjects showed deteriorations between 1.22 and 3.24 dB SRT. Median localization error decreased significantly by 12.9 degrees compared with the unaided condition. CI in single-sided deafness is an effective treatment to improve the auditory localization accuracy. Speech reception in complex noise conditions is improved to a lesser extent in 73% of the participating CI SSD patients. However, the absence of true binaural interaction effects (summation, squelch) impedes further improvements. The development of speech processing strategies that respect binaural interaction seems to be mandatory to advance speech perception in demanding listening situations in SSD patients.

  2. Relations Between the Intelligibility of Speech in Noise and Psychophysical Measures of Hearing Measured in Four Languages Using the Auditory Profile Test Battery

    Directory of Open Access Journals (Sweden)

    T. E. M. Van Esch

    2015-12-01

    Full Text Available The aim of the present study was to determine the relations between the intelligibility of speech in noise and measures of auditory resolution, loudness recruitment, and cognitive function. The analyses were based on data published earlier as part of the presentation of the Auditory Profile, a test battery implemented in four languages. Tests of the intelligibility of speech, resolution, loudness recruitment, and lexical decision making were administered using headphones in five centers: in Germany, the Netherlands, Sweden, and the United Kingdom. Correlations and stepwise linear regression models were calculated. In total, 72 hearing-impaired listeners aged 22 to 91 years with a broad range of hearing losses were included in the study. Several significant correlations were found with the intelligibility of speech in noise. Stepwise linear regression analyses showed that pure-tone average, age, spectral and temporal resolution, and loudness recruitment were significant predictors of the intelligibility of speech in fluctuating noise. Complex interrelationships between auditory factors and the intelligibility of speech in noise were revealed using the Auditory Profile data set in four languages. After taking into account the effects of pure-tone average and age, spectral and temporal resolution and loudness recruitment had an added value in the prediction of variation among listeners with respect to the intelligibility of speech in noise. The results of the lexical decision making test were not related to the intelligibility of speech in noise, in the population studied.
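
    A forward stepwise linear regression of the kind described above can be sketched as follows. The predictor names mirror those discussed in the abstract, but the data and the 0.05 entry criterion are hypothetical and do not come from the Auditory Profile data set.

```python
# Hedged sketch of a forward stepwise linear regression of speech-in-noise
# intelligibility on candidate auditory/cognitive predictors; predictor names
# and data are hypothetical placeholders.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 72
X = pd.DataFrame({
    "pta": rng.normal(45, 15, n),          # pure-tone average (dB HL)
    "age": rng.normal(65, 12, n),
    "spectral_res": rng.normal(0, 1, n),
    "temporal_res": rng.normal(0, 1, n),
    "recruitment": rng.normal(0, 1, n),
})
y = 0.08 * X["pta"] + 0.03 * X["age"] + 0.5 * X["spectral_res"] + rng.normal(0, 1, n)

selected, remaining = [], list(X.columns)
while remaining:
    # At each step, add the candidate with the smallest p-value if it is below 0.05.
    pvals = {}
    for cand in remaining:
        fit = sm.OLS(y, sm.add_constant(X[selected + [cand]])).fit()
        pvals[cand] = fit.pvalues[cand]
    best = min(pvals, key=pvals.get)
    if pvals[best] >= 0.05:
        break
    selected.append(best)
    remaining.remove(best)
print("selected predictors:", selected)
```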

  3. The Efficacy of Short-term Gated Audiovisual Speech Training for Improving Auditory Sentence Identification in Noise in Elderly Hearing Aid Users

    Science.gov (United States)

    Moradi, Shahram; Wahlin, Anna; Hällgren, Mathias; Rönnberg, Jerker; Lidestam, Björn

    2017-01-01

    This study aimed to examine the efficacy and maintenance of short-term (one-session) gated audiovisual speech training for improving auditory sentence identification in noise in experienced elderly hearing-aid users. Twenty-five hearing aid users (16 men and 9 women), with an average age of 70.8 years, were randomly divided into an experimental (audiovisual training, n = 14) and a control (auditory training, n = 11) group. Participants underwent gated speech identification tasks comprising Swedish consonants and words presented at 65 dB sound pressure level with a 0 dB signal-to-noise ratio (steady-state broadband noise), in audiovisual or auditory-only training conditions. The Hearing-in-Noise Test was employed to measure participants’ auditory sentence identification in noise before the training (pre-test), promptly after training (post-test), and 1 month after training (one-month follow-up). The results showed that audiovisual training improved auditory sentence identification in noise promptly after the training (post-test vs. pre-test scores); furthermore, this improvement was maintained 1 month after the training (one-month follow-up vs. pre-test scores). Such improvement was not observed in the control group, neither promptly after the training nor at the one-month follow-up. However, no significant between-groups difference nor an interaction between groups and session was observed. Conclusion: Audiovisual training may be considered in aural rehabilitation of hearing aid users to improve listening capabilities in noisy conditions. However, the lack of a significant between-groups effect (audiovisual vs. auditory) or an interaction between group and session calls for further research. PMID:28348542

  4. The human brain maintains contradictory and redundant auditory sensory predictions.

    Directory of Open Access Journals (Sweden)

    Marika Pieszek

    Full Text Available Computational and experimental research has revealed that auditory sensory predictions are derived from regularities of the current environment by using internal generative models. However, so far, what has not been addressed is how the auditory system handles situations giving rise to redundant or even contradictory predictions derived from different sources of information. To this end, we measured error signals in the event-related brain potentials (ERPs) in response to violations of auditory predictions. Sounds could be predicted on the basis of overall probability, i.e., one sound was presented frequently and another sound rarely. Furthermore, each sound was predicted by an informative visual cue. Participants' task was to use the cue and to discriminate the two sounds as fast as possible. Violations of the probability based prediction (i.e., a rare sound) as well as violations of the visual-auditory prediction (i.e., an incongruent sound) elicited error signals in the ERPs (Mismatch Negativity [MMN] and Incongruency Response [IR]). Particular error signals were observed even in case the overall probability and the visual symbol predicted different sounds. That is, the auditory system concurrently maintains and tests contradictory predictions. Moreover, if the same sound was predicted, we observed an additive error signal (scalp potential and primary current density) equaling the sum of the specific error signals. Thus, the auditory system maintains and tolerates functionally independently represented redundant and contradictory predictions. We argue that the auditory system exploits all currently active regularities in order to optimally prepare for future events.

  5. Effects of noise exposure on neonatal auditory brainstem response thresholds in pregnant guinea pigs at different gestational periods.

    Science.gov (United States)

    Morimoto, Chihiro; Nario, Kazuhiko; Nishimura, Tadashi; Shimokura, Ryota; Hosoi, Hiroshi; Kitahara, Tadashi

    2017-01-01

    Noise exposure during pregnancy has been reported to cause fetal hearing impairment. However, little is known about the effects of noise exposure during various gestational stages on postnatal hearing. In the present study, we investigated the effects of noise exposure during the early, mid-, and late gestational periods on the auditory brainstem response (ABR) of newborn guinea pigs. Pregnant guinea pigs were exposed to a 4-kHz pure tone at a 120-dB sound pressure level for 4 h. We divided the animals into four groups as follows: the control, early gestational exposure, mid-gestational exposure, and late gestational exposure groups. ABR thresholds and latencies in newborns were recorded using 1-, 2-, and 4-kHz tone bursts on postnatal days 1, 7, 14, and 28. Changes in ABR thresholds and latencies were compared across the 4 × 4 and 4 × 3 factorial combinations mentioned above (gestational periods × postnatal days, gestational periods × frequencies). Thresholds were lowest in the control group. This is the first study to show that noise exposure during the early, mid-, and late gestational periods significantly elevated ABR thresholds in neonatal guinea pigs. © 2016 Japan Society of Obstetrics and Gynecology.

  6. Functional studies of the human auditory cortex, auditory memory and musical hallucinations

    International Nuclear Information System (INIS)

    Goycoolea, Marcos; Mena, Ismael; Neubauer, Sonia

    2004-01-01

    Objectives. 1. To determine which areas of the cerebral cortex are activated by stimulating the left ear with pure tones, and what type of stimulation occurs (e.g., excitatory or inhibitory) in these different areas. 2. To use this information as an initial step to develop a normal functional database for future studies. 3. To try to determine if there is a biological substrate to the process of recalling previous auditory perceptions and, if possible, suggest a locus for auditory memory. Method. Brain perfusion single photon emission computerized tomography (SPECT) evaluation was conducted: 1-2) using auditory stimulation with pure tones in 4 volunteers with normal hearing, and 3) in a patient with bilateral profound hearing loss who had auditory perception of previous musical experiences and who was injected with Tc99m HMPAO while she was having the sensation of hearing a well-known melody. Results. Both in the patient with auditory hallucinations and in the normal controls stimulated with pure tones, there was a statistically significant increase in perfusion in Brodmann's area 39, more intense on the right side (right to left p < 0.05). There was weaker activation in the adjacent area 40, and intense activation in the executive frontal cortex areas 6, 8, 9, and 10 of Brodmann. There was also activation of area 7 of Brodmann, an audio-visual association area, more marked on the right side in the patient and in the stimulated normal controls. In the subcortical structures, the patient with hallucinations also showed marked activation in both lentiform nuclei, the thalamus, and the caudate nuclei, more intense in the right hemisphere (5, 4.7 and 4.2 S.D. above the mean, respectively) than in the left hemisphere (5, 3.3, and 3 S.D. above the normal mean, respectively). Similar findings were observed in normal controls. Conclusions. After auditory stimulation with pure tones in the left ear of normal female volunteers, there is bilateral activation of area 39

  7. Interaction of streaming and attention in human auditory cortex.

    Science.gov (United States)

    Gutschalk, Alexander; Rupp, André; Dykstra, Andrew R

    2015-01-01

    Serially presented tones are sometimes segregated into two perceptually distinct streams. An ongoing debate is whether this basic streaming phenomenon reflects automatic processes or requires attention focused to the stimuli. Here, we examined the influence of focused attention on streaming-related activity in human auditory cortex using magnetoencephalography (MEG). Listeners were presented with a dichotic paradigm in which left-ear stimuli consisted of canonical streaming stimuli (ABA_ or ABAA) and right-ear stimuli consisted of a classical oddball paradigm. In phase one, listeners were instructed to attend the right-ear oddball sequence and detect rare deviants. In phase two, they were instructed to attend the left ear streaming stimulus and report whether they heard one or two streams. The frequency difference (ΔF) of the sequences was set such that the smallest and largest ΔF conditions generally induced one- and two-stream percepts, respectively. Two intermediate ΔF conditions were chosen to elicit bistable percepts (i.e., either one or two streams). Attention enhanced the peak-to-peak amplitude of the P1-N1 complex, but only for ambiguous ΔF conditions, consistent with the notion that automatic mechanisms for streaming tightly interact with attention and that the latter is of particular importance for ambiguous sound sequences.

  8. Human Auditory and Adjacent Nonauditory Cerebral Cortices Are Hypermetabolic in Tinnitus as Measured by Functional Near-Infrared Spectroscopy (fNIRS).

    Science.gov (United States)

    Issa, Mohamad; Bisconti, Silvia; Kovelman, Ioulia; Kileny, Paul; Basura, Gregory J

    2016-01-01

    Tinnitus is the phantom perception of sound in the absence of an acoustic stimulus. To date, the purported neural correlates of tinnitus from animal models have not been adequately characterized with translational technology in the human brain. The aim of the present study was to measure changes in oxy-hemoglobin concentration from regions of interest (ROI; auditory cortex) and non-ROI (adjacent nonauditory cortices) during auditory stimulation and silence in participants with subjective tinnitus appreciated equally in both ears and in nontinnitus controls using functional near-infrared spectroscopy (fNIRS). Control and tinnitus participants with normal/near-normal hearing were tested during a passive auditory task. Hemodynamic activity was monitored over ROI and non-ROI under episodic periods of auditory stimulation with 750 or 8000 Hz tones, broadband noise, and silence. During periods of silence, tinnitus participants maintained increased hemodynamic responses in ROI, while a significant deactivation was seen in controls. Interestingly, non-ROI activity was also increased in the tinnitus group as compared to controls during silence. The present results demonstrate that both auditory and select nonauditory cortices have elevated hemodynamic activity in participants with tinnitus in the absence of an external auditory stimulus, a finding that may reflect basic science neural correlates of tinnitus that ultimately contribute to phantom sound perception.

  9. Human Auditory and Adjacent Nonauditory Cerebral Cortices Are Hypermetabolic in Tinnitus as Measured by Functional Near-Infrared Spectroscopy (fNIRS

    Directory of Open Access Journals (Sweden)

    Mohamad Issa

    2016-01-01

    Full Text Available Tinnitus is the phantom perception of sound in the absence of an acoustic stimulus. To date, the purported neural correlates of tinnitus from animal models have not been adequately characterized with translational technology in the human brain. The aim of the present study was to measure changes in oxy-hemoglobin concentration from regions of interest (ROI; auditory cortex) and non-ROI (adjacent nonauditory cortices) during auditory stimulation and silence in participants with subjective tinnitus appreciated equally in both ears and in nontinnitus controls using functional near-infrared spectroscopy (fNIRS). Control and tinnitus participants with normal/near-normal hearing were tested during a passive auditory task. Hemodynamic activity was monitored over ROI and non-ROI under episodic periods of auditory stimulation with 750 or 8000 Hz tones, broadband noise, and silence. During periods of silence, tinnitus participants maintained increased hemodynamic responses in ROI, while a significant deactivation was seen in controls. Interestingly, non-ROI activity was also increased in the tinnitus group as compared to controls during silence. The present results demonstrate that both auditory and select nonauditory cortices have elevated hemodynamic activity in participants with tinnitus in the absence of an external auditory stimulus, a finding that may reflect basic science neural correlates of tinnitus that ultimately contribute to phantom sound perception.

  10. Achilles' ear? Inferior human short-term and recognition memory in the auditory modality.

    Science.gov (United States)

    Bigelow, James; Poremba, Amy

    2014-01-01

    Studies of the memory capabilities of nonhuman primates have consistently revealed a relative weakness for auditory compared to visual or tactile stimuli: extensive training is required to learn auditory memory tasks, and subjects are only capable of retaining acoustic information for a brief period of time. Whether a parallel deficit exists in human auditory memory remains an outstanding question. In the current study, a short-term memory paradigm was used to test human subjects' retention of simple auditory, visual, and tactile stimuli that were carefully equated in terms of discriminability, stimulus exposure time, and temporal dynamics. Mean accuracy did not differ significantly among sensory modalities at very short retention intervals (1-4 s). However, at longer retention intervals (8-32 s), accuracy for auditory stimuli fell substantially below that observed for visual and tactile stimuli. In the interest of extending the ecological validity of these findings, a second experiment tested recognition memory for complex, naturalistic stimuli that would likely be encountered in everyday life. Subjects were able to identify all stimuli when retention was not required, however, recognition accuracy following a delay period was again inferior for auditory compared to visual and tactile stimuli. Thus, the outcomes of both experiments provide a human parallel to the pattern of results observed in nonhuman primates. The results are interpreted in light of neuropsychological data from nonhuman primates, which suggest a difference in the degree to which auditory, visual, and tactile memory are mediated by the perirhinal and entorhinal cortices.

  11. Achilles' ear? Inferior human short-term and recognition memory in the auditory modality.

    Directory of Open Access Journals (Sweden)

    James Bigelow

    Full Text Available Studies of the memory capabilities of nonhuman primates have consistently revealed a relative weakness for auditory compared to visual or tactile stimuli: extensive training is required to learn auditory memory tasks, and subjects are only capable of retaining acoustic information for a brief period of time. Whether a parallel deficit exists in human auditory memory remains an outstanding question. In the current study, a short-term memory paradigm was used to test human subjects' retention of simple auditory, visual, and tactile stimuli that were carefully equated in terms of discriminability, stimulus exposure time, and temporal dynamics. Mean accuracy did not differ significantly among sensory modalities at very short retention intervals (1-4 s). However, at longer retention intervals (8-32 s), accuracy for auditory stimuli fell substantially below that observed for visual and tactile stimuli. In the interest of extending the ecological validity of these findings, a second experiment tested recognition memory for complex, naturalistic stimuli that would likely be encountered in everyday life. Subjects were able to identify all stimuli when retention was not required, however, recognition accuracy following a delay period was again inferior for auditory compared to visual and tactile stimuli. Thus, the outcomes of both experiments provide a human parallel to the pattern of results observed in nonhuman primates. The results are interpreted in light of neuropsychological data from nonhuman primates, which suggest a difference in the degree to which auditory, visual, and tactile memory are mediated by the perirhinal and entorhinal cortices.

  12. Achilles’ Ear? Inferior Human Short-Term and Recognition Memory in the Auditory Modality

    Science.gov (United States)

    Bigelow, James; Poremba, Amy

    2014-01-01

    Studies of the memory capabilities of nonhuman primates have consistently revealed a relative weakness for auditory compared to visual or tactile stimuli: extensive training is required to learn auditory memory tasks, and subjects are only capable of retaining acoustic information for a brief period of time. Whether a parallel deficit exists in human auditory memory remains an outstanding question. In the current study, a short-term memory paradigm was used to test human subjects’ retention of simple auditory, visual, and tactile stimuli that were carefully equated in terms of discriminability, stimulus exposure time, and temporal dynamics. Mean accuracy did not differ significantly among sensory modalities at very short retention intervals (1–4 s). However, at longer retention intervals (8–32 s), accuracy for auditory stimuli fell substantially below that observed for visual and tactile stimuli. In the interest of extending the ecological validity of these findings, a second experiment tested recognition memory for complex, naturalistic stimuli that would likely be encountered in everyday life. Subjects were able to identify all stimuli when retention was not required, however, recognition accuracy following a delay period was again inferior for auditory compared to visual and tactile stimuli. Thus, the outcomes of both experiments provide a human parallel to the pattern of results observed in nonhuman primates. The results are interpreted in light of neuropsychological data from nonhuman primates, which suggest a difference in the degree to which auditory, visual, and tactile memory are mediated by the perirhinal and entorhinal cortices. PMID:24587119

  13. Effects of selective attention on the electrophysiological representation of concurrent sounds in the human auditory cortex.

    Science.gov (United States)

    Bidet-Caulet, Aurélie; Fischer, Catherine; Besle, Julien; Aguera, Pierre-Emmanuel; Giard, Marie-Helene; Bertrand, Olivier

    2007-08-29

    In noisy environments, we use auditory selective attention to actively ignore distracting sounds and select relevant information, as during a cocktail party to follow one particular conversation. The present electrophysiological study aims at deciphering the spatiotemporal organization of the effect of selective attention on the representation of concurrent sounds in the human auditory cortex. Sound onset asynchrony was manipulated to induce the segregation of two concurrent auditory streams. Each stream consisted of amplitude modulated tones at different carrier and modulation frequencies. Electrophysiological recordings were performed in epileptic patients with pharmacologically resistant partial epilepsy, implanted with depth electrodes in the temporal cortex. Patients were presented with the stimuli while they either performed an auditory distracting task or actively selected one of the two concurrent streams. Selective attention was found to affect steady-state responses in the primary auditory cortex, and transient and sustained evoked responses in secondary auditory areas. The results provide new insights on the neural mechanisms of auditory selective attention: stream selection during sound rivalry would be facilitated not only by enhancing the neural representation of relevant sounds, but also by reducing the representation of irrelevant information in the auditory cortex. Finally, they suggest a specialization of the left hemisphere in the attentional selection of fine-grained acoustic information.

  14. Auditory nuclei: distinctive response patterns to white noise and tones in unanesthetized cats.

    Science.gov (United States)

    Galin, D

    1964-10-09

    Electrical responses to "white" noise and tonal stimuli were recorded from unanesthetized cats with permanently implanted bipolar electrodes. The cochlear nucleus, inferior colliculus, and medial geniculate each showed distinctive patterns of evoked activity. White noise and tones produced qualitatively different types of response. A decrease in activity characterized the response of the inferior colliculus to tonal stimuli.

  15. The Effects of Acoustic White Noise on the Rat Central Auditory System During the Fetal and Critical Neonatal Periods: A Stereological Study.

    Science.gov (United States)

    Salehi, Mohammad Saied; Namavar, Mohammad Reza; Tamadon, Amin; Bahmani, Raziyeh; Jafarzadeh Shirazi, Mohammad Reza; Khazali, Homayoun; Dargahi, Leila; Pandamooz, Sareh; Mohammad-Rezazadeh, Farzad; Rashidi, Fatemeh Sadat

    2017-01-01

    To evaluate the effects of long-term, moderate-level noise exposure during crucial developmental periods in rat pups on stereological parameters of the medial geniculate body (MGB) and auditory cortex. Twenty-four male offspring of 12 pregnant rats were divided into four groups: a fetal-to-critical period group, exposed to noise from the last 10 days of fetal life till postnatal day (PND) 29; a fetal period group, exposed to noise during the last 10 days of fetal life; a critical period group, exposed to noise from PND 15 till PND 29; and a control group. White noise at 90 dB for 2 h per day was used. Analysis of variance was performed using Proc GLM, followed by mean comparison with Duncan's multiple range test. The numerical density of neurons in the MGB of the fetal-to-critical period group was lower than in the control group. Similar results were seen for the numerical density of neurons in layers IV and VI of the auditory cortex. Furthermore, no significant difference was observed in the volume of the auditory cortex among groups, and only MGB volume in the fetal-to-critical period group was higher than in the other groups. The estimated total number of neurons in the MGB was not significantly different among groups. It seems necessary to prevent long-term, moderate-level noise exposure during the fetal-to-critical neonatal period.
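
    The study's statistics were run in SAS (Proc GLM with Duncan's multiple range test). As a rough, hedged analogue, the sketch below runs a one-way ANOVA followed by a Tukey HSD post hoc (a different, more conservative post hoc than Duncan's test) on simulated neuron-density data for the four groups.

```python
# Conceptual analogue only: the study used SAS Proc GLM with Duncan's multiple
# range test; this sketch runs a one-way ANOVA with a Tukey HSD post hoc on
# simulated neuron-density data (group means and sample sizes are made up).
import numpy as np
import pandas as pd
from statsmodels.formula.api import ols
from statsmodels.stats.anova import anova_lm
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(3)
groups = ["control", "fetal", "critical", "fetal_to_critical"]
df = pd.DataFrame({
    "group": np.repeat(groups, 6),                 # 6 pups per group (assumed)
    "density": np.concatenate([rng.normal(m, 5, 6) for m in (100, 97, 95, 85)]),
})

print(anova_lm(ols("density ~ C(group)", data=df).fit()))   # one-way ANOVA table
print(pairwise_tukeyhsd(df["density"], df["group"]))        # pairwise group comparisons
```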

  16. Human auditory steady state responses to binaural and monaural beats.

    Science.gov (United States)

    Schwarz, D W F; Taylor, P

    2005-03-01

    Binaural beat sensations depend upon a central combination of two different temporally encoded tones, separately presented to the two ears. We tested the feasibility to record an auditory steady state evoked response (ASSR) at the binaural beat frequency in order to find a measure for temporal coding of sound in the human EEG. We stimulated each ear with a distinct tone, both differing in frequency by 40Hz, to record a binaural beat ASSR. As control, we evoked a beat ASSR in response to both tones in the same ear. We band-pass filtered the EEG at 40Hz, averaged with respect to stimulus onset and compared ASSR amplitudes and phases, extracted from a sinusoidal non-linear regression fit to a 40Hz period average. A 40Hz binaural beat ASSR was evoked at a low mean stimulus frequency (400Hz) but became undetectable beyond 3kHz. Its amplitude was smaller than that of the acoustic beat ASSR, which was evoked at low and high frequencies. Both ASSR types had maxima at fronto-central leads and displayed a fronto-occipital phase delay of several ms. The dependence of the 40Hz binaural beat ASSR on stimuli at low, temporally coded tone frequencies suggests that it may objectively assess temporal sound coding ability. The phase shift across the electrode array is evidence for more than one origin of the 40Hz oscillations. The binaural beat ASSR is an evoked response, with novel diagnostic potential, to a signal that is not present in the stimulus, but generated within the brain.
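
    The amplitude and phase extraction described here (a sinusoidal regression fit to a 40 Hz period average) can be sketched as follows. The sampling rate, simulated response, and noise level are assumptions, not the authors' recordings.

```python
# Minimal sketch (assumed data, not the authors' code): extract the amplitude
# and phase of a 40 Hz ASSR by fitting a sinusoid to a stimulus-locked period
# average of the band-pass-filtered EEG.
import numpy as np
from scipy.optimize import curve_fit

fs = 1000                                   # sampling rate (Hz), assumed
t = np.arange(0, 0.025, 1 / fs)             # one 40 Hz period (25 ms)
f0 = 40.0

# Simulated period average: a 0.4 uV response at 40 Hz plus residual noise.
avg = 0.4 * np.sin(2 * np.pi * f0 * t + 1.0) + 0.05 * np.random.randn(t.size)

def model(t, amp, phase, offset):
    return amp * np.sin(2 * np.pi * f0 * t + phase) + offset

(amp, phase, offset), _ = curve_fit(model, t, avg, p0=[0.1, 0.0, 0.0])
print(f"ASSR amplitude ~ {abs(amp):.2f} uV, phase ~ {phase:.2f} rad")
```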

  17. Noise-evoked otoacoustic emissions in humans

    NARCIS (Netherlands)

    Maat, B; Wit, HP; van Dijk, P

    2000-01-01

    Click-evoked otoacoustic emissions (CEOAEs) and acoustical responses evoked by bandlimited Gaussian noise (noise-evoked otoacoustic emissions; NEOAEs) were measured in three normal-hearing subjects. For the NEOAEs the first- and second-order Wiener kernel and polynomial correlation functions up to

  18. Learning-dependent plasticity in human auditory cortex during appetitive operant conditioning.

    Science.gov (United States)

    Puschmann, Sebastian; Brechmann, André; Thiel, Christiane M

    2013-11-01

    Animal experiments provide evidence that learning to associate an auditory stimulus with a reward causes representational changes in auditory cortex. However, most studies did not investigate the temporal formation of learning-dependent plasticity during the task but rather compared auditory cortex receptive fields before and after conditioning. We here present a functional magnetic resonance imaging study on learning-related plasticity in the human auditory cortex during operant appetitive conditioning. Participants had to learn to associate a specific category of frequency-modulated tones with a reward. Only participants who learned this association developed learning-dependent plasticity in left auditory cortex over the course of the experiment. No differential responses to reward predicting and nonreward predicting tones were found in auditory cortex in nonlearners. In addition, learners showed similar learning-induced differential responses to reward-predicting and nonreward-predicting tones in the ventral tegmental area and the nucleus accumbens, two core regions of the dopaminergic neurotransmitter system. This may indicate a dopaminergic influence on the formation of learning-dependent plasticity in auditory cortex, as it has been suggested by previous animal studies. Copyright © 2012 Wiley Periodicals, Inc.

  19. Cortical pitch regions in humans respond primarily to resolved harmonics and are located in specific tonotopic regions of anterior auditory cortex.

    Science.gov (United States)

    Norman-Haignere, Sam; Kanwisher, Nancy; McDermott, Josh H

    2013-12-11

    Pitch is a defining perceptual property of many real-world sounds, including music and speech. Classically, theories of pitch perception have differentiated between temporal and spectral cues. These cues are rendered distinct by the frequency resolution of the ear, such that some frequencies produce "resolved" peaks of excitation in the cochlea, whereas others are "unresolved," providing a pitch cue only via their temporal fluctuations. Despite longstanding interest, the neural structures that process pitch, and their relationship to these cues, have remained controversial. Here, using fMRI in humans, we report the following: (1) consistent with previous reports, all subjects exhibited pitch-sensitive cortical regions that responded substantially more to harmonic tones than frequency-matched noise; (2) the response of these regions was mainly driven by spectrally resolved harmonics, although they also exhibited a weak but consistent response to unresolved harmonics relative to noise; (3) the response of pitch-sensitive regions to a parametric manipulation of resolvability tracked psychophysical discrimination thresholds for the same stimuli; and (4) pitch-sensitive regions were localized to specific tonotopic regions of anterior auditory cortex, extending from a low-frequency region of primary auditory cortex into a more anterior and less frequency-selective region of nonprimary auditory cortex. These results demonstrate that cortical pitch responses are located in a stereotyped region of anterior auditory cortex and are predominantly driven by resolved frequency components in a way that mirrors behavior.

  20. Effect of fMRI acoustic noise on non-auditory working memory task: comparison between continuous and pulsed sound emitting EPI.

    Science.gov (United States)

    Haller, Sven; Bartsch, Andreas J; Radue, Ernst W; Klarhöfer, Markus; Seifritz, Erich; Scheffler, Klaus

    2005-11-01

    Conventional blood oxygenation level-dependent (BOLD) based functional magnetic resonance imaging (fMRI) is accompanied by substantial acoustic gradient noise. This noise can influence the performance as well as neuronal activations. Conventional fMRI typically has a pulsed noise component, which is a particularly efficient auditory stimulus. We investigated whether the elimination of this pulsed noise component in a recent modification of continuous-sound fMRI modifies neuronal activations in a cognitively demanding non-auditory working memory task. Sixteen normal subjects performed a letter variant n-back task. Brain activity and psychomotor performance was examined during fMRI with continuous-sound fMRI and conventional fMRI. We found greater BOLD responses in bilateral medial frontal gyrus, left middle frontal gyrus, left middle temporal gyrus, left hippocampus, right superior frontal gyrus, right precuneus and right cingulate gyrus with continuous-sound compared to conventional fMRI. Conversely, BOLD responses were greater in bilateral cingulate gyrus, left middle and superior frontal gyrus and right lingual gyrus with conventional compared to continuous-sound fMRI. There were no differences in psychomotor performance between both scanning protocols. Although behavioral performance was not affected, acoustic gradient noise interferes with neuronal activations in non-auditory cognitive tasks and represents a putative systematic confound.

  1. The effects of industrial noise of higher spectrum on the workers’ auditory perception abilities.

    Science.gov (United States)

    Mihailović, Dobrivoje; Đurić, Nenad; Kovačević, Ivana; Mihailović, Đorđe

    2016-08-01

    Results of previous studies gave support to the idea that machines in power plants produce noise of different levels of loudness and frequency, and that it could cause deterioration of the hearing ability of workers. As a matter of fact, noise-induced hearing loss is the most widespread occupational disease nowadays. As noise is a complex acoustic phenomenon, more factors have to be considered when studying it, such as frequency, intensity and the period of exposure. The aim of this study was to find whether there are differences in the absolute threshold of hearing, at different sound frequencies, between workers on factory production lines who are constantly exposed to industrial noise of higher spectrum and those exposed to noise of standard spectrum. The study included 308 workers employed on the production line of the “Knjaz Miloš” factory in Aranđelovac. A total of 205 of them were working in conditions of higher-spectrum noise (4,000 Hz – 8,000 Hz) and 103 workers were exposed to a standard noise spectrum (31.5 Hz – 2,000.0 Hz). The objective measures of noise (frequency and amplitude) were acquired with a phonometer, and measures of the absolute threshold of hearing for both ears were obtained with an audiometer at nine sound frequencies. Data were statistically analyzed by establishing the significance of differences between the absolute thresholds of hearing of the two groups at all nine frequency levels. It was found that the absolute threshold of hearing is significantly higher for the group exposed to high-frequency noise at the 4,000 Hz and 8,000 Hz frequencies. Reduction of hearing sensitivity is evident for those exposed to higher-spectrum noise, particularly at the higher frequencies. Employees are often unaware of its effects because they are the results of prolonged exposure. Therefore, working in those conditions requires preventive measures and regular testing of the hearing
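
    A per-frequency comparison of absolute thresholds between the two exposure groups, with correction for the nine tests, might look like the sketch below. The frequencies, the choice of Mann-Whitney U tests, and the simulated thresholds are assumptions; the abstract does not specify which test was used.

```python
# Illustrative sketch only: comparing absolute hearing thresholds between the
# higher-spectrum and standard-spectrum noise groups at each of nine test
# frequencies, with Bonferroni correction. All data are simulated.
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(4)
freqs_hz = [125, 250, 500, 1000, 2000, 3000, 4000, 6000, 8000]  # assumed frequencies
n_high, n_std = 205, 103

for f in freqs_hz:
    shift = 10 if f >= 4000 else 0              # simulate a loss only at high frequencies
    high = rng.normal(15 + shift, 8, n_high)    # thresholds, higher-spectrum group (dB HL)
    std = rng.normal(15, 8, n_std)              # thresholds, standard-spectrum group
    _, p = mannwhitneyu(high, std, alternative="two-sided")
    print(f"{f} Hz: Bonferroni-corrected p = {min(p * len(freqs_hz), 1.0):.4f}")
```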

  2. Task-specific reorganization of the auditory cortex in deaf humans.

    Science.gov (United States)

    Bola, Łukasz; Zimmermann, Maria; Mostowski, Piotr; Jednoróg, Katarzyna; Marchewka, Artur; Rutkowski, Paweł; Szwed, Marcin

    2017-01-24

    The principles that guide large-scale cortical reorganization remain unclear. In the blind, several visual regions preserve their task specificity; ventral visual areas, for example, become engaged in auditory and tactile object-recognition tasks. It remains open whether task-specific reorganization is unique to the visual cortex or, alternatively, whether this kind of plasticity is a general principle applying to other cortical areas. Auditory areas can become recruited for visual and tactile input in the deaf. Although nonhuman data suggest that this reorganization might be task specific, human evidence has been lacking. Here we enrolled 15 deaf and 15 hearing adults into a functional MRI experiment during which they discriminated between temporally complex sequences of stimuli (rhythms). Both deaf and hearing subjects performed the task visually, in the central visual field. In addition, hearing subjects performed the same task in the auditory modality. We found that the visual task robustly activated the auditory cortex in deaf subjects, peaking in the posterior-lateral part of high-level auditory areas. This activation pattern was strikingly similar to the pattern found in hearing subjects performing the auditory version of the task. Although performing the visual task in deaf subjects induced an increase in functional connectivity between the auditory cortex and the dorsal visual cortex, no such effect was found in hearing subjects. We conclude that in deaf humans the high-level auditory cortex switches its input modality from sound to vision but preserves its task-specific activation pattern independent of input modality. Task-specific reorganization thus might be a general principle that guides cortical plasticity in the brain.

  3. Cross-modal Association between Auditory and Visuospatial Information in Mandarin Tone Perception in Noise by Native and Non-native Perceivers

    Directory of Open Access Journals (Sweden)

    Beverly Hannah

    2017-12-01

    Full Text Available Speech perception involves multiple input modalities. Research has indicated that perceivers establish cross-modal associations between auditory and visuospatial events to aid perception. Such intermodal relations can be particularly beneficial for speech development and learning, where infants and non-native perceivers need additional resources to acquire and process new sounds. This study examines how facial articulatory cues and co-speech hand gestures mimicking pitch contours in space affect non-native Mandarin tone perception. Native English as well as Mandarin perceivers identified tones embedded in noise with either congruent or incongruent Auditory-Facial (AF) and Auditory-Facial-Gestural (AFG) inputs. Native Mandarin results showed the expected ceiling-level performance in the congruent AF and AFG conditions. In the incongruent conditions, while AF identification was primarily auditory-based, AFG identification was partially based on gestures, demonstrating the use of gestures as valid cues in tone identification. The English perceivers' performance was poor in the congruent AF condition, but improved significantly in AFG. While the incongruent AF identification showed some reliance on facial information, incongruent AFG identification relied more on gestural than auditory-facial information. These results indicate positive effects of facial and especially gestural input on non-native tone perception, suggesting that cross-modal (visuospatial) resources can be recruited to aid auditory perception when phonetic demands are high. The current findings may inform patterns of tone acquisition and development, suggesting how multi-modal speech enhancement principles may be applied to facilitate speech learning.

  4. Noise Attenuation Estimation for Maximum Length Sequences in Deconvolution Process of Auditory Evoked Potentials

    Directory of Open Access Journals (Sweden)

    Xian Peng

    2017-01-01

    Full Text Available The use of maximum length sequence (m-sequence) has been found beneficial for recovering both linear and nonlinear components at rapid stimulation. Since m-sequence is fully characterized by a primitive polynomial of different orders, the selection of polynomial order can be problematic in practice. Usually, the m-sequence is repetitively delivered in a looped fashion. Ensemble averaging is carried out as the first step and followed by the cross-correlation analysis to deconvolve linear/nonlinear responses. According to the classical noise reduction property based on additive noise model, theoretical equations have been derived in measuring noise attenuation ratios (NARs) after the averaging and correlation processes in the present study. A computer simulation experiment was conducted to test the derived equations, and a nonlinear deconvolution experiment was also conducted using order 7 and 9 m-sequences to address this issue with real data. Both theoretical and experimental results show that the NAR is essentially independent of the m-sequence order and is decided by the total length of valid data, as well as stimulation rate. The present study offers a guideline for m-sequence selections, which can be used to estimate required recording time and signal-to-noise ratio in designing m-sequence experiments.
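
    To illustrate the kind of processing described above, the following is a minimal sketch (not the authors' code) of m-sequence deconvolution by circular cross-correlation, using an order-7 sequence and a toy evoked-response kernel; the kernel shape and noise level are illustrative assumptions.

      import numpy as np
      from scipy.signal import max_len_seq

      order = 7
      m = 2.0 * max_len_seq(order)[0] - 1.0            # one period, length 127, values in {-1, +1}
      h = np.exp(-np.arange(m.size) / 5.0)             # toy "evoked response" kernel

      # Looped stimulation: the recording is the circular convolution of stimulus and response,
      # plus additive physiological/recording noise.
      recorded = np.real(np.fft.ifft(np.fft.fft(m) * np.fft.fft(h)))
      recorded += 0.5 * np.random.randn(m.size)

      # Deconvolution: circular cross-correlation with the m-sequence recovers the response.
      recovered = np.real(np.fft.ifft(np.fft.fft(recorded) * np.conj(np.fft.fft(m)))) / m.size

      # Empirical noise attenuation: pass pure noise through the same correlation step.
      noise_in = 0.5 * np.random.randn(m.size)
      noise_out = np.real(np.fft.ifft(np.fft.fft(noise_in) * np.conj(np.fft.fft(m)))) / m.size
      print("NAR ~", np.sqrt(np.mean(noise_in**2) / np.mean(noise_out**2)))

    In this toy version the attenuation grows with the amount of data entering the correlation, consistent with the conclusion above that the NAR depends on the total length of valid data rather than on the polynomial order itself.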

  5. Acquisition, Analyses and Interpretation of fMRI Data: A Study on the Effective Connectivity in Human Primary Auditory Cortices

    International Nuclear Information System (INIS)

    Ahmad Nazlim Yusoff; Mazlyfarina Mohamad; Khairiah Abdul Hamid

    2011-01-01

    A study on the effective connectivity characteristics in auditory cortices was conducted on five healthy Malay male subjects aged 20 to 40 years using functional magnetic resonance imaging (fMRI), statistical parametric mapping (SPM5) and dynamic causal modelling (DCM). A silent imaging paradigm was used to reduce the scanner sound artefacts on functional images. The subjects were instructed to pay attention to a white noise stimulus presented binaurally at an intensity level 70 dB above the normal hearing level. Functional specialisation was studied using Matlab-based SPM5 software by means of fixed effects (FFX), random effects (RFX) and conjunction analyses. Individual analyses on all subjects indicate asymmetrical bilateral activation between the left and right auditory cortices in Brodmann areas (BA) 22, 41 and 42, involving the primary and secondary auditory cortices. The three auditory areas in the right and left auditory cortices were selected for determining effective connectivity by constructing nine network models. Effective connectivity was determined in four of the five subjects; the remaining subject was excluded because the BA22 coordinates were located too far from the BA22 coordinates obtained from the group analysis. DCM results showed the existence of effective connectivity between the three selected auditory areas in both auditory cortices. In the right auditory cortex, BA42 is identified as the input centre, with unidirectional parallel effective connectivities BA42→BA41 and BA42→BA22. For the left auditory cortex, the input is BA41, with unidirectional parallel effective connectivities BA41→BA42 and BA41→BA22. The connectivity between the activated auditory areas suggests the existence of a signal pathway in the auditory cortices even when the subject is listening to noise. (author)

  6. Selective attention reduces physiological noise in the external ear canals of humans. II: Visual attention

    Science.gov (United States)

    Walsh, Kyle P.; Pasanen, Edward G.; McFadden, Dennis

    2014-01-01

    Human subjects performed in several behavioral conditions requiring, or not requiring, selective attention to visual stimuli. Specifically, the attentional task was to recognize strings of digits that had been presented visually. A nonlinear version of the stimulus-frequency otoacoustic emission (SFOAE), called the nSFOAE, was collected during the visual presentation of the digits. The segment of the physiological response discussed here occurred during brief silent periods immediately following the SFOAE-evoking stimuli. For all subjects tested, the physiological-noise magnitudes were substantially weaker (less noisy) during the tasks requiring the most visual attention. Effect sizes for the differences were >2.0. Our interpretation is that cortico-olivo influences adjusted the magnitude of efferent activation during the SFOAE-evoking stimulation depending upon the attention task in effect, and then that magnitude of efferent activation persisted throughout the silent period where it also modulated the physiological noise present. Because the results were highly similar to those obtained when the behavioral conditions involved auditory attention, similar mechanisms appear to operate both across modalities and within modalities. Supplementary measurements revealed that the efferent activation was spectrally global, as it was for auditory attention. PMID:24732070

  7. Noise exposure and auditory thresholds of German airline pilots: a cross-sectional study.

    Science.gov (United States)

    Müller, Reinhard; Schneider, Joachim

    2017-05-30

    The cockpit workplace of airline pilots is a noisy environment. This study examines the hearing thresholds of pilots with respect to ambient noise and communication sound. The hearing of 487 German pilots was analysed by audiometry in the frequency range of 125 Hz-16 kHz in varying age groups. Cockpit noise (free-field) data and communication sound (acoustic manikin) measurements were evaluated. The ambient noise levels in cockpits were found to be between 74 and 80 dB(A), and the sound pressure levels under the headset were found to be between 84 and 88 dB(A). The left-right threshold differences at 3, 4 and 6 kHz show evidence of impaired hearing at the left ear, which worsens by age. In the age groups <40/≥40 years the mean differences at 3 kHz are 2/3 dB, at 4 kHz 2/4 dB and at 6 kHz 1/6 dB. In the pilot group which used mostly the left ear for communication tasks (43 of 45 are in the older age group) the mean difference at 3 kHz is 6 dB, at 4 kHz 7 dB and at 6 kHz 10 dB. The pilots who used the headset only at the right ear also show worse hearing at the left ear of 2 dB at 3 kHz, 3 dB at 4 kHz and at 6 kHz. The frequency-corrected exposure levels under the headset are 7-11 dB(A) higher than the ambient noise with an averaged signal-to-noise ratio for communication of about 10 dB(A). The left ear seems to be more susceptible to hearing loss than the right ear. Active noise reduction systems allow for a reduced sound level for the communication signal below the upper exposure action value of 85 dB(A) and allow for a more relaxed working environment for pilots. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2017. All rights reserved. No commercial use is permitted unless otherwise expressly granted.
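
    The relationship between the reported levels follows from energetic (power) addition of sound pressure levels. The snippet below is an illustrative calculation only; the 77 dB(A) noise and 87 dB(A) speech figures are assumptions chosen to be consistent with the ranges reported above, not values from the study.

      import math

      def db_sum(*levels_db):
          """Energetic sum of sound pressure levels given in dB."""
          return 10.0 * math.log10(sum(10.0 ** (level / 10.0) for level in levels_db))

      noise_db = 77.0       # assumed ambient noise reaching the ear, dB(A)
      speech_db = 87.0      # assumed communication signal level, dB(A)

      total_db = db_sum(noise_db, speech_db)   # ~87.4 dB(A): roughly 10 dB above the ambient noise alone
      snr_db = speech_db - noise_db            # 10 dB(A), matching the reported average communication SNR
      print(round(total_db, 1), snr_db)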

  8. The Encoding of Sound Source Elevation in the Human Auditory Cortex.

    Science.gov (United States)

    Trapeau, Régis; Schönwiesner, Marc

    2018-03-28

    Spatial hearing is a crucial capacity of the auditory system. While the encoding of horizontal sound direction has been extensively studied, very little is known about the representation of vertical sound direction in the auditory cortex. Using high-resolution fMRI, we measured voxelwise sound elevation tuning curves in human auditory cortex and show that sound elevation is represented by broad tuning functions preferring lower elevations as well as secondary narrow tuning functions preferring individual elevation directions. We changed the ear shape of participants (male and female) with silicone molds for several days. This manipulation reduced or abolished the ability to discriminate sound elevation and flattened cortical tuning curves. Tuning curves recovered their original shape as participants adapted to the modified ears and regained elevation perception over time. These findings suggest that the elevation tuning observed in low-level auditory cortex did not arise from the physical features of the stimuli but is contingent on experience with spectral cues and covaries with the change in perception. One explanation for this observation may be that the tuning in low-level auditory cortex underlies the subjective perception of sound elevation. SIGNIFICANCE STATEMENT This study addresses two fundamental questions about the brain representation of sensory stimuli: how the vertical spatial axis of auditory space is represented in the auditory cortex and whether low-level sensory cortex represents physical stimulus features or subjective perceptual attributes. Using high-resolution fMRI, we show that vertical sound direction is represented by broad tuning functions preferring lower elevations as well as secondary narrow tuning functions preferring individual elevation directions. In addition, we demonstrate that the shape of these tuning functions is contingent on experience with spectral cues and covaries with the change in perception, which may indicate that the tuning in low-level auditory cortex underlies the subjective perception of sound elevation.

  9. Noise exposure of immature rats can induce different age-dependent extra-auditory alterations that can be partially restored by rearing animals in an enriched environment.

    Science.gov (United States)

    Molina, S J; Capani, F; Guelman, L R

    2016-04-01

    It has been previously shown that different extra-auditory alterations can be induced in animals exposed to noise at 15 days. However, data regarding exposure of younger animals, which do not have a functional auditory system, have not been obtained yet. In addition, the possibility of finding a helpful strategy to restore these changes has not been explored so far. Therefore, the aims of the present work were to test age-related differences in diverse hippocampal-dependent behavioral measurements that might be affected in noise-exposed rats, as well as to evaluate the effectiveness of a potential neuroprotective strategy, the enriched environment (EE), on noise-induced behavioral alterations. Male Wistar rats of 7 and 15 days were exposed to moderate levels of noise for two hours. At weaning, animals were separated and reared either in standard or in EE cages for one week. At 28 days of age, different hippocampal-dependent behavioral assessments were performed. Results show that rats exposed to noise at 7 and 15 days were differentially affected. Moreover, EE was effective in restoring all altered variables when animals were exposed at 7 days, while only a few were restored in rats exposed at 15 days. The present findings suggest that noise exposure was capable of triggering significant hippocampal-related behavioral alterations that differed depending on the age of exposure. In addition, it could be proposed that hearing structures do not seem to be necessarily involved in the generation of noise-induced hippocampal-related behaviors, as these were observed even in animals with an immature auditory pathway. Finally, it could be hypothesized that the differential restoration achieved by EE rearing might also depend on the degree of maturation at the time of exposure and the variable evaluated, with younger animals being more susceptible to environmental manipulations. Copyright © 2016 Elsevier B.V. All rights reserved.

  10. Memory performance on the Auditory Inference Span Test is independent of background noise type for young adults with normal hearing at high speech intelligibility.

    Science.gov (United States)

    Rönnberg, Niklas; Rudner, Mary; Lunner, Thomas; Stenfelt, Stefan

    2014-01-01

    Listening in noise is often perceived to be effortful. This is partly because cognitive resources are engaged in separating the target signal from background noise, leaving fewer resources for storage and processing of the content of the message in working memory. The Auditory Inference Span Test (AIST) is designed to assess listening effort by measuring the ability to maintain and process heard information. The aim of this study was to use AIST to investigate the effect of background noise types and signal-to-noise ratio (SNR) on listening effort, as a function of working memory capacity (WMC) and updating ability (UA). The AIST was administered in three types of background noise: steady-state speech-shaped noise, amplitude modulated speech-shaped noise, and unintelligible speech. Three SNRs targeting 90% speech intelligibility or better were used in each of the three noise types, giving nine different conditions. The reading span test assessed WMC, while UA was assessed with the letter memory test. Twenty young adults with normal hearing participated in the study. Results showed that AIST performance was not influenced by noise type at the same intelligibility level, but became worse with worse SNR when background noise was speech-like. Performance on AIST also decreased with increasing memory load level. Correlations between AIST performance and the cognitive measurements suggested that WMC is of more importance for listening when SNRs are worse, while UA is of more importance for listening in easier SNRs. The results indicated that in young adults with normal hearing, the effort involved in listening in noise at high intelligibility levels is independent of the noise type. However, when noise is speech-like and intelligibility decreases, listening effort increases, probably due to extra demands on cognitive resources added by the informational masking created by the speech fragments and vocal sounds in the background noise.

  11. Memory performance on the Auditory Inference Span Test is independent of background noise type for young adults with normal hearing at high speech intelligibility

    Directory of Open Access Journals (Sweden)

    Niklas eRönnberg

    2014-12-01

    Full Text Available Listening in noise is often perceived to be effortful. This is partly because cognitive resources are engaged in separating the target signal from background noise, leaving fewer resources for storage and processing of the content of the message in working memory. The Auditory Inference Span Test (AIST) is designed to assess listening effort by measuring the ability to maintain and process heard information. The aim of this study was to use AIST to investigate the effect of background noise types and signal-to-noise ratio (SNR) on listening effort, as a function of working memory capacity (WMC) and updating ability (UA). The AIST was administered in three types of background noise: steady-state speech-shaped noise, amplitude modulated speech-shaped noise, and unintelligible speech. Three SNRs targeting 90% speech intelligibility or better were used in each of the three noise types, giving nine different conditions. The reading span test assessed WMC, while UA was assessed with the letter memory test. Twenty young adults with normal hearing participated in the study. Results showed that AIST performance was not influenced by noise type at the same intelligibility level, but became worse with worse SNR when background noise was speech-like. Performance on AIST also decreased with increasing memory load level. Correlations between AIST performance and the cognitive measurements suggested that WMC is of more importance for listening when SNRs are worse, while UA is of more importance for listening in easier SNRs. The results indicated that in young adults with normal hearing, the effort involved in listening in noise at high intelligibility levels is independent of the noise type. However, when noise is speech-like and intelligibility decreases, listening effort increases, probably due to extra demands on cognitive resources added by the informational masking created by the speech fragments and vocal sounds in the background noise.

  12. Dynamic Correlations between Intrinsic Connectivity and Extrinsic Connectivity of the Auditory Cortex in Humans.

    Science.gov (United States)

    Cui, Zhuang; Wang, Qian; Gao, Yayue; Wang, Jing; Wang, Mengyang; Teng, Pengfei; Guan, Yuguang; Zhou, Jian; Li, Tianfu; Luan, Guoming; Li, Liang

    2017-01-01

    The arrival of sound signals in the auditory cortex (AC) triggers both local and inter-regional signal propagations over time up to hundreds of milliseconds and builds up both intrinsic functional connectivity (iFC) and extrinsic functional connectivity (eFC) of the AC. However, interactions between iFC and eFC are largely unknown. Using intracranial stereo-electroencephalographic recordings in people with drug-refractory epilepsy, this study mainly investigated the temporal dynamics of the relationship between iFC and eFC of the AC. The results showed that a Gaussian wideband-noise burst markedly elicited potentials in both the AC and numerous higher-order cortical regions outside the AC (non-auditory cortices). Granger causality analyses revealed that in the earlier time window, iFC of the AC was positively correlated with both eFC from the AC to the inferior temporal gyrus and that to the inferior parietal lobule. In later periods, by contrast, the iFC of the AC was positively correlated with eFC from the precentral gyrus to the AC and that from the insula to the AC. In conclusion, dual-directional interactions occur between iFC and eFC of the AC at different time windows following the sound stimulation and may form the foundation underlying various central auditory processes, including auditory sensory memory, object formation, and integration of sensory, perceptual, attentional, motor, emotional, and executive processes.

  13. Dynamic Correlations between Intrinsic Connectivity and Extrinsic Connectivity of the Auditory Cortex in Humans

    Directory of Open Access Journals (Sweden)

    Zhuang Cui

    2017-08-01

    Full Text Available The arrival of sound signals in the auditory cortex (AC) triggers both local and inter-regional signal propagations over time up to hundreds of milliseconds and builds up both intrinsic functional connectivity (iFC) and extrinsic functional connectivity (eFC) of the AC. However, interactions between iFC and eFC are largely unknown. Using intracranial stereo-electroencephalographic recordings in people with drug-refractory epilepsy, this study mainly investigated the temporal dynamics of the relationship between iFC and eFC of the AC. The results showed that a Gaussian wideband-noise burst markedly elicited potentials in both the AC and numerous higher-order cortical regions outside the AC (non-auditory cortices). Granger causality analyses revealed that in the earlier time window, iFC of the AC was positively correlated with both eFC from the AC to the inferior temporal gyrus and that to the inferior parietal lobule. In later periods, by contrast, the iFC of the AC was positively correlated with eFC from the precentral gyrus to the AC and that from the insula to the AC. In conclusion, dual-directional interactions occur between iFC and eFC of the AC at different time windows following the sound stimulation and may form the foundation underlying various central auditory processes, including auditory sensory memory, object formation, and integration of sensory, perceptual, attentional, motor, emotional, and executive processes.
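
    The directed-influence analysis mentioned in these two records can be illustrated with a generic Granger-causality test on simulated time series. This is a schematic sketch only; the channel roles, lag, and coefficients are assumptions, not the authors' pipeline or parameters.

      import numpy as np
      from statsmodels.tsa.stattools import grangercausalitytests

      rng = np.random.default_rng(0)
      n = 2000
      source = rng.standard_normal(n)                  # e.g., a simulated AC contact
      target = np.zeros(n)                             # e.g., a simulated non-auditory contact
      for t in range(2, n):
          # The target depends on the source with a 2-sample lag, plus its own history and noise.
          target[t] = 0.6 * target[t - 1] + 0.8 * source[t - 2] + 0.5 * rng.standard_normal()

      # Column order matters: the test asks whether the 2nd column Granger-causes the 1st.
      data = np.column_stack([target, source])
      results = grangercausalitytests(data, maxlag=4, verbose=False)
      f_stat, p_value = results[2][0]["ssr_ftest"][:2]  # F-test at lag 2
      print(f"F = {f_stat:.1f}, p = {p_value:.3g}")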

  14. Effects of long-term non-traumatic noise exposure on the adult central auditory system. Hearing problems without hearing loss.

    Science.gov (United States)

    Eggermont, Jos J

    2017-09-01

    It is known that hearing loss induces plastic changes in the brain, causing loudness recruitment and hyperacusis, increased spontaneous firing rates and neural synchrony, reorganizations of the cortical tonotopic maps, and tinnitus. Much less is known about the central effects of exposure to sounds that cause a temporary hearing loss, affect the ribbon synapses in the inner hair cells, and cause a loss of high-threshold auditory nerve fibers. In contrast, there is a wealth of information about central effects of long-duration sound exposures at levels ≤80 dB SPL that do not even cause a temporary hearing loss. The central effects for these moderate-level exposures described in this review include changes in central gain, increased spontaneous firing rates and neural synchrony, and reorganization of the cortical tonotopic map. A putative mechanism is outlined, and the effect of the acoustic environment during the recovery process is illustrated. Parallels are drawn with hearing problems in humans with long-duration exposures to occupational noise but with clinically normal hearing. Copyright © 2016 Elsevier B.V. All rights reserved.

  15. Neural coding and perception of pitch in the normal and impaired human auditory system

    DEFF Research Database (Denmark)

    Santurette, Sébastien

    2011-01-01

    that the use of spectral cues remained plausible. Simulations of auditory-nerve representations of the complex tones further suggested that a spectrotemporal mechanism combining precise timing information across auditory channels might best account for the behavioral data. Overall, this work provides insights...... investigated using psychophysical methods. First, hearing loss was found to affect the perception of binaural pitch, a pitch sensation created by the binaural interaction of noise stimuli. Specifically, listeners without binaural pitch sensation showed signs of retrocochlear disorders. Despite adverse effects...... of reduced frequency selectivity on binaural pitch perception, the ability to accurately process the temporal fine structure (TFS) of sounds at the output of the cochlear filters was found to be essential for perceiving binaural pitch. Monaural TFS processing also played a major and independent role...

  16. Mapping auditory core, lateral belt, and parabelt cortices in the human superior temporal gyrus

    DEFF Research Database (Denmark)

    Sweet, Robert A; Dorph-Petersen, Karl-Anton; Lewis, David A

    2005-01-01

    The goal of the present study was to determine whether the architectonic criteria used to identify the core, lateral belt, and parabelt auditory cortices in macaque monkeys (Macaca fascicularis) could be used to identify homologous regions in humans (Homo sapiens). Current evidence indicates...

  17. Level-Dependent Nonlinear Hearing Protector Model in the Auditory Hazard Assessment Algorithm for Humans

    Science.gov (United States)

    2015-04-01

    Fragmentary record text; the recoverable content references Real Ear Attenuation at Threshold (REAT) testing of hearing protection devices (Berger, 1986) and use of the Auditory Hazard Assessment Algorithm for Humans (AHAAH).

  18. Searching for the optimal stimulus eliciting auditory brainstem responses in humans

    DEFF Research Database (Denmark)

    Fobel, Oliver; Dau, Torsten

    2004-01-01

    -chirp, was based on estimates of human basilar membrane (BM) group delays derived from stimulus-frequency otoacoustic emissions (SFOAE) at a sound pressure level of 40 dB [Shera and Guinan, in Recent Developments in Auditory Mechanics (2000)]. The other chirp, referred to as the A-chirp, was derived from latency...

  19. Mapping the after-effects of theta burst stimulation on the human auditory cortex with functional imaging.

    Science.gov (United States)

    Andoh, Jamila; Zatorre, Robert J

    2012-09-12

    Auditory cortex pertains to the processing of sound, which is at the basis of speech or music-related processing. However, despite considerable recent progress, the functional properties and lateralization of the human auditory cortex are far from being fully understood. Transcranial Magnetic Stimulation (TMS) is a non-invasive technique that can transiently or lastingly modulate cortical excitability via the application of localized magnetic field pulses, and represents a unique method of exploring plasticity and connectivity. It has only recently begun to be applied to understand auditory cortical function. An important issue in using TMS is that the physiological consequences of the stimulation are difficult to establish. Although many TMS studies make the implicit assumption that the area targeted by the coil is the area affected, this need not be the case, particularly for complex cognitive functions which depend on interactions across many brain regions. One solution to this problem is to combine TMS with functional magnetic resonance imaging (fMRI). The idea here is that fMRI will provide an index of changes in brain activity associated with TMS. Thus, fMRI would give an independent means of assessing which areas are affected by TMS and how they are modulated. In addition, fMRI allows the assessment of functional connectivity, which represents a measure of the temporal coupling between distant regions. It can thus be useful not only to measure the net activity modulation induced by TMS in given locations, but also the degree to which the network properties are affected by TMS, via any observed changes in functional connectivity. Different approaches exist to combine TMS and functional imaging according to the temporal order of the methods. Functional MRI can be applied before, during, after, or both before and after TMS. Recently, some studies interleaved TMS and fMRI in order to provide online mapping of the functional changes induced by TMS. However, this

  20. Behavioral lifetime of human auditory sensory memory predicted by physiological measures.

    Science.gov (United States)

    Lu, Z L; Williamson, S J; Kaufman, L

    1992-12-04

    Noninvasive magnetoencephalography makes it possible to identify the cortical area in the human brain whose activity reflects the decay of passive sensory storage of information about auditory stimuli (echoic memory). The lifetime for decay of the neuronal activation trace in primary auditory cortex was found to predict the psychophysically determined duration of memory for the loudness of a tone. Although memory for the loudness of a specific tone is lost, the remembered loudness decays toward the global mean of all of the loudnesses to which a subject is exposed in a series of trials.
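
    The reported decay toward the global mean can be captured by a simple relaxation model. The snippet below is a toy formalization under assumed parameters (time constant and loudness values in arbitrary units), not the authors' model.

      import numpy as np

      def remembered_loudness(t, initial, global_mean, tau):
          """Remembered loudness after t seconds, relaxing exponentially toward the session mean."""
          return global_mean + (initial - global_mean) * np.exp(-t / tau)

      # Example: a tone remembered at loudness 80 drifts toward a session mean of 70 over seconds.
      for t in (0.0, 1.0, 2.0, 5.0):
          print(t, round(remembered_loudness(t, initial=80.0, global_mean=70.0, tau=1.6), 1))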

  1. Binaural fusion and the representation of virtual pitch in the human auditory cortex.

    Science.gov (United States)

    Pantev, C; Elbert, T; Ross, B; Eulitz, C; Terhardt, E

    1996-10-01

    The auditory system derives the pitch of complex tones from the tone's harmonics. Research in psychoacoustics predicted that binaural fusion was an important feature of pitch processing. Based on neuromagnetic human data, the first neurophysiological confirmation of binaural fusion in hearing is presented. The centre of activation within the cortical tonotopic map corresponds to the location of the perceived pitch and not to the locations that are activated when the single frequency constituents are presented. This is also true when the different harmonics of a complex tone are presented dichotically. We conclude that the pitch processor includes binaural fusion to determine the particular pitch location which is activated in the auditory cortex.

  2. Comparing Auditory Noise Treatment with Stimulant Medication on Cognitive Task Performance in Children with Attention Deficit Hyperactivity Disorder: Results from a Pilot Study.

    Science.gov (United States)

    Söderlund, Göran B W; Björk, Christer; Gustafsson, Peik

    2016-01-01

    Recent research has shown that acoustic white noise (80 dB) can improve task performance in people with attention deficits and/or Attention Deficit Hyperactivity Disorder (ADHD). This is attributed to the phenomenon of stochastic resonance in which a certain amount of noise can improve performance in a brain that is not working at its optimum. We compare here the effect of noise exposure with the effect of stimulant medication on cognitive task performance in ADHD. The aim of the present study was to compare the effects of auditory noise exposure with stimulant medication for ADHD children on a cognitive test battery. A group of typically developed children (TDC) took the same tests as a comparison. Twenty children with ADHD of combined or inattentive subtypes and twenty TDC matched for age and gender performed three different tests (word recall, spanboard and n-back task) during exposure to white noise (80 dB) and in a silent condition. The ADHD children were tested with and without central stimulant medication. In the spanboard and word recall tasks, but not in the 2-back task, white noise exposure led to significant improvements for both non-medicated and medicated ADHD children. No significant effects of medication were found on any of the three tasks. This pilot study shows that exposure to white noise resulted in a task improvement that was larger than the one with stimulant medication, thus opening up the possibility of using auditory noise as an alternative, non-pharmacological treatment of cognitive ADHD symptoms.

  3. Comparing Auditory Noise Treatment with Stimulant Medication on Cognitive Task Performance in Children with Attention Deficit Hyperactivity Disorder: Results from a Pilot Study

    Directory of Open Access Journals (Sweden)

    Göran B W Söderlund

    2016-09-01

    Full Text Available Background: Recent research has shown that acoustic white noise (80 dB) can improve task performance in people with attention deficits and/or Attention Deficit Hyperactivity Disorder (ADHD). This is attributed to the phenomenon of stochastic resonance in which a certain amount of noise can improve performance in a brain that is not working at its optimum. We compare here the effect of noise exposure with the effect of stimulant medication on cognitive task performance in ADHD. The aim of the present study was to compare the effects of auditory noise exposure with stimulant medication for ADHD children on a cognitive test battery. A group of typically developed children (TDC) took the same tests as a comparison. Methods: Twenty children with ADHD of combined or inattentive subtypes and twenty typically developed children matched for age and gender performed three different tests (word recall, spanboard and n-back task) during exposure to white noise (80 dB) and in a silent condition. The ADHD children were tested with and without central stimulant medication. Results: In the spanboard and word recall tasks, but not in the 2-back task, white noise exposure led to significant improvements for both non-medicated and medicated ADHD children. No significant effects of medication were found on any of the three tasks. Conclusion: This pilot study shows that exposure to white noise resulted in a task improvement that was larger than the one with stimulant medication, thus opening up the possibility of using auditory noise as an alternative, non-pharmacological treatment of cognitive ADHD symptoms.
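
    The stochastic-resonance idea invoked in these two records can be demonstrated with a toy threshold detector: a subthreshold signal is transmitted best at an intermediate noise level, and worse when noise is either too weak or too strong. The snippet is purely illustrative; the threshold, signal amplitude, and noise levels are assumptions.

      import numpy as np

      rng = np.random.default_rng(1)
      t = np.linspace(0.0, 10.0, 5000)
      signal = 0.8 * np.sin(2 * np.pi * 1.0 * t)       # peak 0.8, below the detection threshold of 1.0
      threshold = 1.0

      for noise_sd in (0.1, 0.3, 2.0):
          detected = (signal + noise_sd * rng.standard_normal(t.size)) > threshold
          # How faithfully do threshold crossings track the signal? Typically best at moderate noise.
          quality = np.corrcoef(detected.astype(float), signal)[0, 1]
          print(f"noise sd {noise_sd:4.2f}: output/signal correlation = {quality:.2f}")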

  4. Mouth and Voice: A Relationship between Visual and Auditory Preference in the Human Superior Temporal Sulcus.

    Science.gov (United States)

    Zhu, Lin L; Beauchamp, Michael S

    2017-03-08

    Cortex in and around the human posterior superior temporal sulcus (pSTS) is known to be critical for speech perception. The pSTS responds to both the visual modality (especially biological motion) and the auditory modality (especially human voices). Using fMRI in single subjects with no spatial smoothing, we show that visual and auditory selectivity are linked. Regions of the pSTS were identified that preferred visually presented moving mouths (presented in isolation or as part of a whole face) or moving eyes. Mouth-preferring regions responded strongly to voices and showed a significant preference for vocal compared with nonvocal sounds. In contrast, eye-preferring regions did not respond to either vocal or nonvocal sounds. The converse was also true: regions of the pSTS that showed a significant response to speech or preferred vocal to nonvocal sounds responded more strongly to visually presented mouths than eyes. These findings can be explained by environmental statistics. In natural environments, humans see visual mouth movements at the same time as they hear voices, while there is no auditory accompaniment to visual eye movements. The strength of a voxel's preference for visual mouth movements was strongly correlated with the magnitude of its auditory speech response and its preference for vocal sounds, suggesting that visual and auditory speech features are coded together in small populations of neurons within the pSTS. SIGNIFICANCE STATEMENT Humans interacting face to face make use of auditory cues from the talker's voice and visual cues from the talker's mouth to understand speech. The human posterior superior temporal sulcus (pSTS), a brain region known to be important for speech perception, is complex, with some regions responding to specific visual stimuli and others to specific auditory stimuli. Using BOLD fMRI, we show that the natural statistics of human speech, in which voices co-occur with mouth movements, are reflected in the neural architecture of the pSTS.

  5. Early continuous white noise exposure alters l-alpha-amino-3-hydroxy-5-methyl-4-isoxazole propionic acid receptor subunit glutamate receptor 2 and gamma-aminobutyric acid type a receptor subunit beta3 protein expression in rat auditory cortex.

    Science.gov (United States)

    Xu, Jinghong; Yu, Liping; Zhang, Jiping; Cai, Rui; Sun, Xinde

    2010-02-15

    Auditory experience during the postnatal critical period is essential for the normal maturation of auditory function. Previous studies have shown that rearing infant rat pups under conditions of continuous moderate-level noise delayed the emergence of adult-like topographic representational order and the refinement of response selectivity in the primary auditory cortex (A1) beyond normal developmental benchmarks and indefinitely blocked the closure of a brief, critical-period window. To gain insight into the molecular mechanisms of these physiological changes after noise rearing, we studied expression of the AMPA receptor subunit GluR2 and GABA(A) receptor subunit beta3 in the auditory cortex after noise rearing. Our results show that continuous moderate-level noise rearing during the early stages of development decreases the expression levels of GluR2 and GABA(A)beta3. Furthermore, noise rearing also induced a significant decrease in the level of GABA(A) receptors relative to AMPA receptors. However, in adult rats, noise rearing did not have significant effects on GluR2 and GABA(A)beta3 expression or the ratio between the two units. These changes could have a role in the cellular mechanisms involved in the delayed maturation of auditory receptive field structure and topographic organization of A1 after noise rearing. Copyright 2009 Wiley-Liss, Inc.

  6. Differences in Speech Recognition Between Children with Attention Deficits and Typically Developed Children Disappear When Exposed to 65 dB of Auditory Noise.

    Science.gov (United States)

    Söderlund, Göran B W; Jobs, Elisabeth Nilsson

    2016-01-01

    The most common neuropsychiatric condition in children is attention deficit hyperactivity disorder (ADHD), affecting ∼6-9% of the population. ADHD is distinguished by inattention and hyperactive, impulsive behaviors as well as poor performance in various cognitive tasks often leading to failures at school. Sensory and perceptual dysfunctions have also been noticed. Prior research has mainly focused on limitations in executive functioning where differences are often explained by deficits in pre-frontal cortex activation. Less notice has been given to sensory perception and subcortical functioning in ADHD. Recent research has shown that children with an ADHD diagnosis have a deviant auditory brain stem response compared to healthy controls. The aim of the present study was to investigate whether the speech recognition threshold differs between attentive children and children with ADHD symptoms in two environmental sound conditions, with and without external noise. Previous research has shown that children with attention deficits can benefit from white noise exposure during cognitive tasks, and here we investigate whether this noise benefit is present during an auditory perceptual task. For this purpose we used a modified Hagerman's speech recognition test in which children with and without attention deficits performed a binaural speech recognition task to assess the speech recognition threshold in no-noise and noise conditions (65 dB). Results showed that the inattentive group displayed a higher speech recognition threshold than typically developed children and that the difference in speech recognition threshold disappeared when exposed to noise at a suprathreshold level. From this we conclude that inattention can partly be explained by sensory perceptual limitations that can possibly be ameliorated through noise exposure.

  7. Differences in Speech Recognition Between Children with Attention Deficits and Typically Developed Children Disappear when Exposed to 65 dB of Auditory Noise

    Directory of Open Access Journals (Sweden)

    Göran B W Söderlund

    2016-01-01

    Full Text Available The most common neuropsychiatric condition in children is attention deficit hyperactivity disorder (ADHD), affecting approximately 6-9% of the population. ADHD is distinguished by inattention and hyperactive, impulsive behaviors as well as poor performance in various cognitive tasks often leading to failures at school. Sensory and perceptual dysfunctions have also been noticed. Prior research has mainly focused on limitations in executive functioning where differences are often explained by deficits in pre-frontal cortex activation. Less notice has been given to sensory perception and subcortical functioning in ADHD. Recent research has shown that children with an ADHD diagnosis have a deviant auditory brain stem response compared to healthy controls. The aim of the present study was to investigate whether the speech recognition threshold differs between attentive children and children with ADHD symptoms in two environmental sound conditions, with and without external noise. Previous research has shown that children with attention deficits can benefit from white noise exposure during cognitive tasks, and here we investigate whether this noise benefit is present during an auditory perceptual task. For this purpose we used a modified Hagerman’s speech recognition test in which children with and without attention deficits performed a binaural speech recognition task to assess the speech recognition threshold in no-noise and noise conditions (65 dB). Results showed that the inattentive group displayed a higher speech recognition threshold than typically developed children (TDC) and that the difference in speech recognition threshold disappeared when exposed to noise at a suprathreshold level. From this we conclude that inattention can partly be explained by sensory perceptual limitations that can possibly be ameliorated through noise exposure.

  8. Jet Fuel Exacerbated Noise-Induced Hearing Loss: Focus on Prediction of Central Auditory Processing Dysfunction

    Science.gov (United States)

    2017-09-01

    Fragmentary record text from a US Air Force School of Aerospace Medicine technical report (Air Force Research Laboratory, 711th Human Performance Wing, Airman Systems Directorate, Bioeffects Division; reporting period Oct 2015 to Mar 2017); the recoverable content mentions a section on lipid class determination for partition coefficient prediction.

  9. Tuning In to Sound: Frequency-Selective Attentional Filter in Human Primary Auditory Cortex

    Science.gov (United States)

    Da Costa, Sandra; van der Zwaag, Wietske; Miller, Lee M.; Clarke, Stephanie

    2013-01-01

    Cocktail parties, busy streets, and other noisy environments pose a difficult challenge to the auditory system: how to focus attention on selected sounds while ignoring others? Neurons of primary auditory cortex, many of which are sharply tuned to sound frequency, could help solve this problem by filtering selected sound information based on frequency-content. To investigate whether this occurs, we used high-resolution fMRI at 7 tesla to map the fine-scale frequency-tuning (1.5 mm isotropic resolution) of primary auditory areas A1 and R in six human participants. Then, in a selective attention experiment, participants heard low (250 Hz)- and high (4000 Hz)-frequency streams of tones presented at the same time (dual-stream) and were instructed to focus attention onto one stream versus the other, switching back and forth every 30 s. Attention to low-frequency tones enhanced neural responses within low-frequency-tuned voxels relative to high, and when attention switched the pattern quickly reversed. Thus, like a radio, human primary auditory cortex is able to tune into attended frequency channels and can switch channels on demand. PMID:23365225

  10. Evidence for cue-independent spatial representation in the human auditory cortex during active listening.

    Science.gov (United States)

    Higgins, Nathan C; McLaughlin, Susan A; Rinne, Teemu; Stecker, G Christopher

    2017-09-05

    Few auditory functions are as important or as universal as the capacity for auditory spatial awareness (e.g., sound localization). That ability relies on sensitivity to acoustical cues, particularly interaural time and level differences (ITD and ILD), that correlate with sound-source locations. Under nonspatial listening conditions, cortical sensitivity to ITD and ILD takes the form of broad contralaterally dominated response functions. It is unknown, however, whether that sensitivity reflects representations of the specific physical cues or a higher-order representation of auditory space (i.e., integrated cue processing), nor is it known whether responses to spatial cues are modulated by active spatial listening. To investigate, sensitivity to parametrically varied ITD or ILD cues was measured using fMRI during spatial and nonspatial listening tasks. Task type varied across blocks where targets were presented in one of three dimensions: auditory location, pitch, or visual brightness. Task effects were localized primarily to lateral posterior superior temporal gyrus (pSTG) and modulated binaural-cue response functions differently in the two hemispheres. Active spatial listening (location tasks) enhanced both contralateral and ipsilateral responses in the right hemisphere but maintained or enhanced contralateral dominance in the left hemisphere. Two observations suggest integrated processing of ITD and ILD. First, overlapping regions in medial pSTG exhibited significant sensitivity to both cues. Second, successful classification of multivoxel patterns was observed for both cue types and, critically, for cross-cue classification. Together, these results suggest a higher-order representation of auditory space in the human auditory cortex that at least partly integrates the specific underlying cues.
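
    Cross-cue classification of the kind described above can be sketched generically: train a decoder on voxel patterns labeled by one cue and test it on patterns labeled by the other. The code below is a schematic simulation, not the authors' analysis; the pattern model, trial counts, and classifier choice are assumptions.

      import numpy as np
      from sklearn.linear_model import LogisticRegression

      rng = np.random.default_rng(0)
      n_trials, n_voxels = 80, 50
      shared_axis = rng.standard_normal(n_voxels)        # hypothetical cue-independent "left vs right" pattern

      def simulate(side_labels, noise_sd=1.0):
          """Simulate voxel patterns that carry side information along the shared axis."""
          signs = 2 * side_labels - 1                     # 0 -> -1 (left), 1 -> +1 (right)
          return np.outer(signs, shared_axis) + noise_sd * rng.standard_normal((side_labels.size, n_voxels))

      itd_labels = rng.integers(0, 2, n_trials)           # sides defined by ITD
      ild_labels = rng.integers(0, 2, n_trials)           # sides defined by ILD
      itd_patterns, ild_patterns = simulate(itd_labels), simulate(ild_labels)

      clf = LogisticRegression(max_iter=1000).fit(itd_patterns, itd_labels)   # train on ITD trials
      print("cross-cue accuracy:", clf.score(ild_patterns, ild_labels))       # test on ILD trials

    Above-chance transfer in such a scheme is what would be expected from a cue-independent (integrated) spatial representation.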

  11. Towards Clinical Application of Neurotrophic Factors to the Auditory Nerve; Assessment of Safety and Efficacy by a Systematic Review of Neurotrophic Treatments in Humans

    NARCIS (Netherlands)

    Bezdjian, Aren; Kraaijenga, Véronique J C; Ramekers, Dyan; Versnel, Huib; Thomeer, Hans G X M; Klis, Sjaak F L; Grolman, Wilko

    2016-01-01

    Animal studies have evidenced protection of the auditory nerve by exogenous neurotrophic factors. In order to assess clinical applicability of neurotrophic treatment of the auditory nerve, the safety and efficacy of neurotrophic therapies in various human disorders were systematically reviewed.

  12. Selective attention modulates human auditory brainstem responses: relative contributions of frequency and spatial cues.

    Directory of Open Access Journals (Sweden)

    Alexandre Lehmann

    Full Text Available Selective attention is the mechanism that allows focusing one's attention on a particular stimulus while filtering out a range of other stimuli, for instance, on a single conversation in a noisy room. Attending to one sound source rather than another changes activity in the human auditory cortex, but it is unclear whether attention to different acoustic features, such as voice pitch and speaker location, modulates subcortical activity. Studies using a dichotic listening paradigm indicated that auditory brainstem processing may be modulated by the direction of attention. We investigated whether endogenous selective attention to one of two speech signals affects amplitude and phase locking in auditory brainstem responses when the signals were either discriminable by frequency content alone, or by frequency content and spatial location. Frequency-following responses to the speech sounds were significantly modulated in both conditions. The modulation was specific to the task-relevant frequency band. The effect was stronger when both frequency and spatial information were available. Patterns of response were variable between participants, and were correlated with psychophysical discriminability of the stimuli, suggesting that the modulation was biologically relevant. Our results demonstrate that auditory brainstem responses are susceptible to efferent modulation related to behavioral goals. Furthermore they suggest that mechanisms of selective attention actively shape activity at early subcortical processing stages according to task relevance and based on frequency and spatial cues.

  13. Nonverbal arithmetic in humans: light from noise.

    Science.gov (United States)

    Cordes, Sara; Gallistel, C R; Gelman, Rochel; Latham, Peter

    2007-10-01

    Animal and human data suggest the existence of a cross-species system of analog number representation (e.g., Cordes, Gelman, Gallistel, & Whalen, 2001; Meck & Church, 1983), which may mediate the computation of statistical regularities in the environment (Gallistel, Gelman, & Cordes, 2006). However, evidence of arithmetic manipulation of these nonverbal magnitude representations is sparse and lacking in depth. This study uses the analysis of variability as a tool for understanding properties of these combinatorial processes. Human subjects participated in tasks requiring responses dependent upon the addition, subtraction, or reproduction of nonverbal counts. Variance analyses revealed that the magnitude of both inputs and answer contributed to the variability in the arithmetic responses, with operand variability dominating. Other contributing factors to the observed variability and implications for logarithmic versus scalar models of magnitude representation are discussed in light of these results.
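
    The scalar-variability assumption behind such variance analyses can be made concrete with a short simulation in which each nonverbal magnitude is represented with noise proportional to its size, so the variability of an addition response reflects both operands. The Weber fraction and operand values below are arbitrary illustrative assumptions, not estimates from the study.

      import numpy as np

      rng = np.random.default_rng(0)
      w = 0.15                                    # assumed Weber fraction (coefficient of variation)

      def represent(n, trials=100000):
          """Noisy analog representation of count n with scalar variability (sd proportional to n)."""
          return rng.normal(loc=n, scale=w * n, size=trials)

      summed = represent(8) + represent(12)       # nonverbal "8 + 12"
      print(summed.mean())                        # ~20
      print(summed.std())                         # ~ w * sqrt(8**2 + 12**2): both operands contribute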

  14. Encoding of frequency-modulation (FM) rates in human auditory cortex.

    Science.gov (United States)

    Okamoto, Hidehiko; Kakigi, Ryusuke

    2015-12-14

    Frequency-modulated sounds play an important role in our daily social life. However, it currently remains unclear whether frequency modulation rates affect neural activity in the human auditory cortex. In the present study, using magnetoencephalography, we investigated the auditory evoked N1m and sustained field responses elicited by temporally repeated and superimposed frequency-modulated sweeps that were matched in the spectral domain, but differed in frequency modulation rates (1, 4, 16, and 64 octaves per sec). The results demonstrated that higher-rate frequency-modulated sweeps elicited smaller N1m and larger sustained field responses. Frequency modulation rate thus had a significant impact on human brain responses, providing a key for disentangling a series of natural frequency-modulated sounds such as speech and music.

  15. Bioconversion of Scutellaria baicalensis extract can increase recovery of auditory function in a mouse model of noise-induced hearing loss.

    Science.gov (United States)

    Rodriguez, Isabel; Hong, Bin Na; Nam, Youn Hee; Kim, Eun Young; Park, Geun Ha; Ji, Min Gun; Kang, Tong Ho

    2017-09-01

    In noise-induced hearing loss (NIHL), noise exposure damages cochlear sensory hair cells, which lack the capacity to regenerate. Following noise insult, intense metabolic activity occurs, resulting in a cochlear free radical imbalance. Oxidative stress and antioxidant enzyme alterations, including lipoxygenase upregulation, have been linked to chronic inflammation, which contributes to hearing impairment. We previously proposed Scutellaria baicalensis (SB) extract as an alternative therapeutic for preventing NIHL and attributed its pharmacological effects to baicalein. Although baicalein was most effective, its concentration in SB extract is much lower compared to baicalin. In this study, we performed enzymatic bioconversion using a Sumizyme (SM) enzyme to increase the baicalein concentration in SB extract and consequently improve its therapeutic efficacy. HPLC analysis revealed that the baicalein concentration in SB extract after bioconversion (BSB) was significantly increased. Moreover, BSB-treated mice exhibited significantly improved auditory function compared with control mice and tended to have improved auditory function compared with SB-treated mice. We also demonstrated in a zebrafish model that BSB effectively stimulates hair cell regeneration, whereas SB did not achieve the same effect. Finally, when the abilities of SB and BSB to inhibit lipoxygenase (LOX) were compared, BSB showed greater efficacy. Cumulatively, our data suggest that BSB exhibits improved pharmacological properties for treating NIHL compared with SB. Copyright © 2017 Elsevier Masson SAS. All rights reserved.

  16. Extensive cochleotopic mapping of human auditory cortical fields obtained with phase-encoding FMRI.

    Directory of Open Access Journals (Sweden)

    Ella Striem-Amit

    Full Text Available The primary sensory cortices are characterized by a topographical mapping of basic sensory features which is considered to deteriorate in higher-order areas in favor of complex sensory features. Recently, however, retinotopic maps were also discovered in the higher-order visual, parietal and prefrontal cortices. The discovery of these maps enabled the distinction between visual regions, clarified their function and hierarchical processing. Could such extension of topographical mapping to high-order processing regions apply to the auditory modality as well? This question has been studied previously in animal models but only sporadically in humans, whose anatomical and functional organization may differ from that of animals (e.g., unique verbal functions and Heschl's gyrus curvature). Here we applied fMRI spectral analysis to investigate the cochleotopic organization of the human cerebral cortex. We found multiple mirror-symmetric novel cochleotopic maps covering most of the core and high-order human auditory cortex, including regions considered non-cochleotopic, stretching all the way to the superior temporal sulcus. These maps suggest that topographical mapping persists well beyond the auditory core and belt, and that the mirror-symmetry of topographical preferences may be a fundamental principle across sensory modalities.
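
    The spectral (phase-encoding) analysis mentioned above can be sketched for a single voxel: take the Fourier transform of the voxel time series, read off the amplitude and phase at the stimulus cycling frequency, and convert the phase to a position along the swept frequency axis. The TR, run length, number of sweep cycles, and response amplitude below are assumed for illustration, not taken from the study, and no hemodynamic-delay correction is applied.

      import numpy as np

      tr = 2.0                  # assumed repetition time (s)
      n_vols = 240              # volumes in the run
      n_cycles = 8              # sweeps of the tone-frequency range within the run

      # Toy voxel: a response locked to the sweep (its phase encodes the preferred frequency) plus noise.
      phase_true = -1.2
      voxel = 1.5 * np.cos(2 * np.pi * n_cycles * np.arange(n_vols) / n_vols + phase_true)
      voxel += np.random.randn(n_vols)

      spectrum = np.fft.rfft(voxel - voxel.mean())
      amp = np.abs(spectrum)
      phase = np.angle(spectrum[n_cycles])               # phase at the stimulation frequency

      # Coherence-like statistic: energy at the stimulus frequency relative to all other bins.
      coherence = amp[n_cycles] / np.sqrt(np.sum(amp[1:] ** 2))
      preferred_position = (phase % (2 * np.pi)) / (2 * np.pi)   # 0..1 along the swept frequency range
      print(round(coherence, 2), round(preferred_position, 2))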

  17. Plasticity of the human auditory cortex related to musical training.

    Science.gov (United States)

    Pantev, Christo; Herholz, Sibylle C

    2011-11-01

    During the last decades music neuroscience has become a rapidly growing field within the area of neuroscience. Music is particularly well suited for studying neuronal plasticity in the human brain because musical training is more complex and multimodal than most other daily life activities, and because prospective and professional musicians usually pursue the training with high and long-lasting commitment. Therefore, music has increasingly been used as a tool for the investigation of human cognition and its underlying brain mechanisms. Music relates to many brain functions like perception, action, cognition, emotion, learning and memory, and therefore music is an ideal tool to investigate how the human brain works and how different brain functions interact. Novel findings have been obtained in the field of cortical plasticity induced by musical training. The positive effects that music in its various forms has on the healthy human brain are not only important in the framework of basic neuroscience, but they will also strongly affect practices in neurorehabilitation. Copyright © 2011 Elsevier Ltd. All rights reserved.

  18. NR2B subunit-dependent long-term potentiation enhancement in the rat cortical auditory system in vivo following masking of patterned auditory input by white noise exposure during early postnatal life.

    Science.gov (United States)

    Hogsden, Jennifer L; Dringenberg, Hans C

    2009-08-01

    The composition of N-methyl-D-aspartate (NMDA) receptor subunits influences the degree of synaptic plasticity expressed during development and into adulthood. Here, we show that theta-burst stimulation of the medial geniculate nucleus reliably induced NMDA receptor-dependent long-term potentiation (LTP) of field postsynaptic potentials recorded in the primary auditory cortex (A1) of urethane-anesthetized rats. Furthermore, substantially greater levels of LTP were elicited in juvenile animals (30-37 days old; approximately 55% maximal potentiation) than in adult animals (approximately 30% potentiation). Masking patterned sound via continuous white noise exposure during early postnatal life (from postnatal day 5 to postnatal day 50-60) resulted in enhanced, juvenile-like levels of LTP (approximately 70% maximal potentiation) relative to age-matched controls reared in unaltered acoustic environments (approximately 30%). Rats reared in white noise and then placed in unaltered acoustic environments for 40-50 days showed levels of LTP comparable to those of adult controls, indicating that white noise rearing results in a form of developmental arrest that can be overcome by subsequent patterned sound exposure. We explored the mechanisms mediating white noise-induced plasticity enhancements by local NR2B subunit antagonist application in A1. NR2B subunit antagonists (Ro 25-6981 or ifenprodil) completely reversed white noise-induced LTP enhancement at concentrations that did not affect LTP in adult or age-matched controls. We conclude that white noise exposure during early postnatal life results in the maintenance of juvenile-like, higher levels of plasticity in A1, an effect that appears to be critically dependent on NR2B subunit activation.

  19. Predicting dynamic range and intensity discrimination for electrical pulse-train stimuli using a stochastic auditory nerve model: the effects of stimulus noise.

    Science.gov (United States)

    Xu, Yifang; Collins, Leslie M

    2005-06-01

    This work investigates dynamic range and intensity discrimination for electrical pulse-train stimuli that are modulated by noise using a stochastic auditory nerve model. Based on a hypothesized monotonic relationship between loudness and the number of spikes elicited by a stimulus, theoretical prediction of the uncomfortable level has previously been determined by comparing spike counts to a fixed threshold, Nucl. However, no specific rule for determining Nucl has been suggested. Our work determines the uncomfortable level based on the excitation pattern of the neural response in a normal ear. The number of fibers corresponding to the portion of the basilar membrane driven by a stimulus at an uncomfortable level in a normal ear is related to Nucl at an uncomfortable level of the electrical stimulus. Intensity discrimination limens are predicted using signal detection theory via the probability mass function of the neural response and via experimental simulations. The results show that the uncomfortable level for pulse-train stimuli increases slightly as noise level increases. Combining this with our previous threshold predictions, we hypothesize that the dynamic range for noise-modulated pulse-train stimuli should increase with additive noise. However, since our predictions indicate that intensity discrimination under noise degrades, overall intensity coding performance may not improve significantly.
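
    The signal-detection logic behind such intensity-discrimination predictions can be illustrated with a deliberately simplified spike-count model: assume Poisson-like total counts, compute d' between two levels, and find the level increment that reaches a criterion d'. The rate-level function, fiber count, and criterion below are assumptions for illustration, not the parameters of the stochastic model in the study.

      import numpy as np

      def mean_count(level_db, n_fibers=1000, slope=12.0, threshold_db=30.0):
          """Toy rate-level function: expected total spike count across the modeled fiber population."""
          return n_fibers * np.clip(slope * (level_db - threshold_db) / 100.0, 0.0, None)

      def d_prime(level_db, delta_db):
          m1, m2 = mean_count(level_db), mean_count(level_db + delta_db)
          return (m2 - m1) / np.sqrt(0.5 * (m1 + m2))   # Poisson-like counts: variance equals mean

      # Find the smallest increment giving d' >= 1 (a common criterion) on a 60 dB pedestal.
      deltas = np.linspace(0.01, 3.0, 300)
      dprimes = np.array([d_prime(60.0, d) for d in deltas])
      jnd = deltas[np.argmax(dprimes >= 1.0)]
      print(f"predicted intensity JND ~ {jnd:.2f} dB")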

  20. Intonational speech prosody encoding in the human auditory cortex.

    Science.gov (United States)

    Tang, C; Hamilton, L S; Chang, E F

    2017-08-25

    Speakers of all human languages regularly use intonational pitch to convey linguistic meaning, such as to emphasize a particular word. Listeners extract pitch movements from speech and evaluate the shape of intonation contours independent of each speaker's pitch range. We used high-density electrocorticography to record neural population activity directly from the brain surface while participants listened to sentences that varied in intonational pitch contour, phonetic content, and speaker. Cortical activity at single electrodes over the human superior temporal gyrus selectively represented intonation contours. These electrodes were intermixed with, yet functionally distinct from, sites that encoded different information about phonetic features or speaker identity. Furthermore, the representation of intonation contours directly reflected the encoding of speaker-normalized relative pitch but not absolute pitch. Copyright © 2017 The Authors, some rights reserved; exclusive licensee American Association for the Advancement of Science. No claim to original U.S. Government Works.

  1. Noise

    Science.gov (United States)

    Noise is all around you, from televisions and radios to lawn mowers and washing machines. Normally, you ... sensitive structures of the inner ear and cause noise-induced hearing loss. More than 30 million Americans ...

  2. Towards a general framework for including noise impacts in LCA

    NARCIS (Netherlands)

    Cucurachi, Stefano; Heijungs, Reinout; Ohlau, Katrin

    Purpose: Several types of damage have been associated with human exposure to noise. These include auditory effects, i.e., hearing impairment, but also non-auditory physiological effects such as hypertension and ischemic heart disease, and psychological effects such as annoyance, depression, and sleep

  3. Influence of age, spatial memory, and ocular fixation on localization of auditory, visual, and bimodal targets by human subjects.

    Science.gov (United States)

    Dobreva, Marina S; O'Neill, William E; Paige, Gary D

    2012-12-01

    A common complaint of the elderly is difficulty identifying and localizing auditory and visual sources, particularly in competing background noise. Spatial errors in the elderly may pose challenges and even threats to self and others during everyday activities, such as localizing sounds in a crowded room or driving in traffic. In this study, we investigated the influence of aging, spatial memory, and ocular fixation on the localization of auditory, visual, and combined auditory-visual (bimodal) targets. Head-restrained young and elderly subjects localized targets in a dark, echo-attenuated room using a manual laser pointer. Localization accuracy and precision (repeatability) were quantified for both ongoing and transient (remembered) targets at response delays up to 10 s. Because eye movements bias auditory spatial perception, localization was assessed under target fixation (eyes free, pointer guided by foveal vision) and central fixation (eyes fixed straight ahead, pointer guided by peripheral vision) conditions. Spatial localization across the frontal field in young adults demonstrated (1) horizontal overshoot and vertical undershoot for ongoing auditory targets under target fixation conditions, but near-ideal horizontal localization with central fixation; (2) accurate and precise localization of ongoing visual targets guided by foveal vision under target fixation that degraded when guided by peripheral vision during central fixation; (3) overestimation in horizontal central space (±10°) of remembered auditory, visual, and bimodal targets with increasing response delay. In comparison with young adults, elderly subjects showed (1) worse precision in most paradigms, especially when localizing with peripheral vision under central fixation; (2) greatly impaired vertical localization of auditory and bimodal targets; (3) increased horizontal overshoot in the central field for remembered visual and bimodal targets across response delays; (4) greater vulnerability to
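
    As a point of reference for the accuracy and precision measures used above, the following minimal sketch quantifies constant error (accuracy) and response scatter (precision) from repeated pointing responses to a single target azimuth. The simulated responses, target angle, and overshoot factor are illustrative assumptions, not the study's data.

```python
# Hedged sketch: accuracy (signed bias) and precision (repeatability) of
# localization responses. All values below are simulated placeholders.
import numpy as np

rng = np.random.default_rng(0)
target_az = 20.0                                       # target azimuth (degrees)
responses = target_az * 1.1 + rng.normal(0, 2.5, 20)   # 10% overshoot plus scatter

constant_error = responses.mean() - target_az          # accuracy: signed bias
precision = responses.std(ddof=1)                      # precision: response SD
print(f"bias = {constant_error:+.1f} deg, precision (SD) = {precision:.1f} deg")
```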

  4. Frequency-specific attentional modulation in human primary auditory cortex and midbrain.

    Science.gov (United States)

    Riecke, Lars; Peters, Judith C; Valente, Giancarlo; Poser, Benedikt A; Kemper, Valentin G; Formisano, Elia; Sorger, Bettina

    2018-07-01

    Paying selective attention to an audio frequency selectively enhances activity within primary auditory cortex (PAC) at the tonotopic site (frequency channel) representing that frequency. Animal PAC neurons achieve this 'frequency-specific attentional spotlight' by adapting their frequency tuning, yet comparable evidence in humans is scarce. Moreover, whether the spotlight operates in human midbrain is unknown. To address these issues, we studied the spectral tuning of frequency channels in human PAC and inferior colliculus (IC), using 7-T functional magnetic resonance imaging (FMRI) and frequency mapping, while participants focused on different frequency-specific sounds. We found that shifts in frequency-specific attention alter the response gain, but not tuning profile, of PAC frequency channels. The gain modulation was strongest in low-frequency channels and varied near-monotonically across the tonotopic axis, giving rise to the attentional spotlight. We observed less prominent, non-tonotopic spatial patterns of attentional modulation in IC. These results indicate that the frequency-specific attentional spotlight in human PAC as measured with FMRI arises primarily from tonotopic gain modulation, rather than adapted frequency tuning. Moreover, frequency-specific attentional modulation of afferent sound processing in human IC seems to be considerably weaker, suggesting that the spotlight diminishes toward this lower-order processing stage. Our study sheds light on how the human auditory pathway adapts to the different demands of selective hearing. Copyright © 2018 The Authors. Published by Elsevier Inc. All rights reserved.
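
    The distinction tested above, response-gain modulation versus a change in tuning profile, can be made concrete with a toy channel model. The Gaussian tuning curve on a log-frequency axis and all parameter values below are assumptions for illustration only; normalizing each curve by its peak would leave the gain-change curve identical in shape to the unattended one while revealing the narrower tuning-change curve, which is the logic behind separating the two accounts.

```python
# Hedged toy model: a frequency channel whose response is either scaled by a
# gain factor (same tuning shape) or given a sharper tuning profile. Parameters
# are illustrative assumptions, not fitted values from the study.
import numpy as np

def channel_response(freqs_hz, best_freq_hz, bw_oct, gain):
    """Gaussian tuning curve on a log2-frequency axis, scaled by a gain factor."""
    d_oct = np.log2(freqs_hz / best_freq_hz)
    return gain * np.exp(-0.5 * (d_oct / bw_oct) ** 2)

freqs = np.logspace(np.log2(250), np.log2(8000), 50, base=2)
unattended    = channel_response(freqs, best_freq_hz=1000, bw_oct=0.5, gain=1.0)
gain_change   = channel_response(freqs, best_freq_hz=1000, bw_oct=0.5, gain=1.4)  # same shape, higher gain
tuning_change = channel_response(freqs, best_freq_hz=1000, bw_oct=0.3, gain=1.0)  # sharper shape, same gain
```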

  5. Auditory capacities in Middle Pleistocene humans from the Sierra de Atapuerca in Spain.

    Science.gov (United States)

    Martínez, I; Rosa, M; Arsuaga, J-L; Jarabo, P; Quam, R; Lorenzo, C; Gracia, A; Carretero, J-M; Bermúdez de Castro, J-M; Carbonell, E

    2004-07-06

    Human hearing differs from that of chimpanzees and most other anthropoids in maintaining a relatively high sensitivity from 2 kHz up to 4 kHz, a region that contains relevant acoustic information in spoken language. Knowledge of the auditory capacities in human fossil ancestors could greatly enhance the understanding of when this human pattern emerged during the course of our evolutionary history. Here we use a comprehensive physical model to analyze the influence of skeletal structures on the acoustic filtering of the outer and middle ears in five fossil human specimens from the Middle Pleistocene site of the Sima de los Huesos in the Sierra de Atapuerca of Spain. Our results show that the skeletal anatomy in these hominids is compatible with a human-like pattern of sound power transmission through the outer and middle ear at frequencies up to 5 kHz, suggesting that they already had auditory capacities similar to those of living humans in this frequency range.

  6. Sensitivity of human auditory cortex to rapid frequency modulation revealed by multivariate representational similarity analysis.

    Science.gov (United States)

    Joanisse, Marc F; DeSouza, Diedre D

    2014-01-01

    Functional Magnetic Resonance Imaging (fMRI) was used to investigate the extent, magnitude, and pattern of brain activity in response to rapid frequency-modulated sounds. We examined this by manipulating the direction (rise vs. fall) and the rate (fast vs. slow) of the apparent pitch of iterated rippled noise (IRN) bursts. Acoustic parameters were selected to capture features used in phoneme contrasts; however, the stimuli themselves were not perceived as speech per se. Participants were scanned as they passively listened to sounds in an event-related paradigm. Univariate analyses revealed a greater level and extent of activation in bilateral auditory cortex in response to frequency-modulated sweeps compared to steady-state sounds. This effect was stronger in the left hemisphere. However, no regions showed selectivity for either rate or direction of frequency modulation. In contrast, multivoxel pattern analysis (MVPA) revealed feature-specific encoding for direction of modulation in auditory cortex bilaterally. Moreover, this effect was strongest when analyses were restricted to anatomical regions lying outside Heschl's gyrus. We found no support for feature-specific encoding of frequency modulation rate. Differential findings of modulation rate and direction of modulation are discussed with respect to their relevance to phonetic discrimination.
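
    Iterated rippled noise of the general delay-and-add type used above can be generated with a few lines of signal processing. The sampling rate, delay, gain, and iteration count below are illustrative rather than the study's parameters; sweeping the delay over time would produce the rising or falling pitch contours described.

```python
# Hedged sketch: iterated rippled noise via a delay-and-add network. A fixed
# delay d gives an apparent pitch near 1/d; the parameter values are illustrative.
import numpy as np

def iterated_rippled_noise(duration_s, delay_s, n_iter, gain=1.0, fs=44100, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(int(duration_s * fs))
    d = int(round(delay_s * fs))
    for _ in range(n_iter):                      # delay-and-add iterations
        delayed = np.concatenate([np.zeros(d), x[:-d]])
        x = x + gain * delayed
    return x / np.max(np.abs(x))                 # normalize to +/-1

# 250-ms burst with an 8-ms delay -> apparent pitch near 1/0.008 = 125 Hz.
burst = iterated_rippled_noise(0.25, 0.008, n_iter=16)
```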

  7. Noise exposure alters long-term neural firing rates and synchrony in primary auditory and rostral belt cortices following bimodal stimulation.

    Science.gov (United States)

    Takacs, Joseph D; Forrest, Taylor J; Basura, Gregory J

    2017-12-01

    We previously demonstrated that bimodal stimulation (spinal trigeminal nucleus [Sp5] paired with a best-frequency tone) altered neural tone-evoked and spontaneous firing rates (SFRs) in primary auditory cortex (A1) 15 min after pairing in guinea pigs with and without noise-induced tinnitus. Neural responses were enhanced (+10 ms) or suppressed (0 ms) based on the bimodal pairing interval. Here we investigated whether bimodal stimulation leads to long-term (up to 2 h) changes in tone-evoked firing rates, SFRs, and neural synchrony (a correlate of tinnitus), and whether the long-term bimodal effects are altered following noise exposure. To obviate the effects of permanent hearing loss on the results, firing rates and neural synchrony were measured three weeks following unilateral (left ear) noise exposure and a temporary threshold shift. Simultaneous extracellular single-unit recordings were made from contralateral (to noise) A1 and the dorsal rostral belt (RB), an associative auditory cortical region thought to influence A1, before and after bimodal stimulation at pairing intervals of 0 ms (simultaneous Sp5-tone) and +10 ms (Sp5 precedes tone). Sixty and 120 min after 0-ms pairing, tone-evoked firing rates and SFRs were suppressed in sham A1, an effect preserved only 120 min following pairing in noise-exposed animals. Stimulation at +10 ms affected only SFRs 120 min after pairing in sham and noise-exposed A1. Within sham RB, pairing at 0 and +10 ms persistently suppressed tone-evoked firing rates and SFRs, while 0-ms pairing in noise-exposed animals markedly enhanced tone-evoked firing rates and SFRs for up to 2 h. Together, these findings suggest that bimodal stimulation has long-lasting effects in A1 that also extend to the associative RB, that these effects are altered by noise, and that they may have persistent implications for how noise-damaged brains process multi-sensory information. Moreover, prior to bimodal stimulation, noise damage increased neural synchrony in A1, RB, and between A1 and RB neurons. Bimodal stimulation led to persistent changes in neural synchrony in

  8. Cross-Modal Perception of Noise-in-Music: Audiences Generate Spiky Shapes in Response to Auditory Roughness in a Novel Electroacoustic Concert Setting.

    Science.gov (United States)

    Liew, Kongmeng; Lindborg, PerMagnus; Rodrigues, Ruth; Styles, Suzy J

    2018-01-01

    Noise has become integral to electroacoustic music aesthetics. In this paper, we define noise as sound that is high in auditory roughness, and examine its effect on cross-modal mapping between sound and visual shape in participants. In order to preserve the ecological validity of contemporary music aesthetics, we developed Rama, a novel interface, for presenting experimentally controlled blocks of electronically generated sounds that varied systematically in roughness, and actively collected data from audience interaction. These sounds were then embedded as musical drones within the overall sound design of a multimedia performance with live musicians. Audience members listened to these sounds and collectively voted to create the shape of a visual graphic, presented as part of the audio-visual performance. The results of the concert setting were replicated in a controlled laboratory environment to corroborate the findings. Results show a consistent effect of auditory roughness on shape design, with rougher sounds corresponding to spikier shapes. We discuss the implications, as well as evaluate the audience interface.

  9. Cross-Modal Perception of Noise-in-Music: Audiences Generate Spiky Shapes in Response to Auditory Roughness in a Novel Electroacoustic Concert Setting

    Directory of Open Access Journals (Sweden)

    Kongmeng Liew

    2018-02-01

    Full Text Available Noise has become integral to electroacoustic music aesthetics. In this paper, we define noise as sound that is high in auditory roughness, and examine its effect on cross-modal mapping between sound and visual shape in participants. In order to preserve the ecological validity of contemporary music aesthetics, we developed Rama, a novel interface, for presenting experimentally controlled blocks of electronically generated sounds that varied systematically in roughness, and actively collected data from audience interaction. These sounds were then embedded as musical drones within the overall sound design of a multimedia performance with live musicians. Audience members listened to these sounds and collectively voted to create the shape of a visual graphic, presented as part of the audio–visual performance. The results of the concert setting were replicated in a controlled laboratory environment to corroborate the findings. Results show a consistent effect of auditory roughness on shape design, with rougher sounds corresponding to spikier shapes. We discuss the implications, as well as evaluate the audience interface.

  10. The possible influence of noise frequency components on the health of exposed industrial workers - A review

    Directory of Open Access Journals (Sweden)

    K V Mahendra Prashanth

    2011-01-01

    Full Text Available Noise is a common occupational health hazard in most industrial settings. An assessment of noise and its adverse health effects based on noise intensity alone is inadequate; for an efficient evaluation of noise effects, frequency-spectrum analysis should also be included. This paper aims to substantiate the importance of studying the contribution of noise frequencies when evaluating health effects and their association with physiological processes within the human body. Additionally, a review of studies published between 1988 and 2009 that investigate the impact of industrial/occupational noise on auditory and non-auditory effects, and the probable association and contribution of noise frequency components to these effects, is presented. The relevant studies in English were identified in Medknow, Medline, Wiley, Elsevier, and Springer publications. Data were extracted from studies that fulfilled the following criterion: a title and/or abstract involving industrial/occupational noise exposure in relation to auditory and non-auditory (health) effects. Significant data on study characteristics, including noise frequency characteristics, were considered in the assessment. It is demonstrated that only a few studies have considered frequency contributions in their investigations, and these address auditory rather than non-auditory effects. The data suggest that significant adverse health effects of industrial noise include auditory and heart-related problems. The study provides strong evidence for the claim that noise with a dominant frequency component around 4 kHz has auditory effects but, being deficient in data, fails to show any influence of noise frequency components on non-auditory effects. Furthermore, specific noise levels and frequencies predicting the corresponding health impacts have not yet been validated. There is a need for further research to clarify the importance of the dominant noise frequency
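
    The review's argument that overall intensity should be supplemented by frequency analysis can be illustrated with a simple octave-band level computation. The band centres, filter order, and placeholder signal below are assumptions; a real assessment would use calibrated recordings and standardized octave-band filters.

```python
# Hedged sketch: octave-band levels of a noise recording, so that dominant
# frequency components (e.g., energy near 4 kHz) can be identified. The signal
# is a random placeholder and levels are in dB re full scale, not calibrated SPL.
import numpy as np
from scipy.signal import butter, sosfiltfilt

def octave_band_levels(x, fs, centers=(125, 250, 500, 1000, 2000, 4000, 8000)):
    levels = {}
    for fc in centers:
        lo, hi = fc / np.sqrt(2), fc * np.sqrt(2)          # octave band edges
        sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
        band = sosfiltfilt(sos, x)
        levels[fc] = 10 * np.log10(np.mean(band ** 2))      # band power in dB
    return levels

fs = 32000
recording = np.random.default_rng(0).standard_normal(2 * fs)  # placeholder noise
print(octave_band_levels(recording, fs))
```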

  11. Effects of noise pollution over the blood serum immunoglobulins and auditory system on the VFM airport workers, Van, Turkey.

    Science.gov (United States)

    Akan, Zafer; Körpinar, Mehmet Ali; Tulgar, Metin

    2011-06-01

    Noise pollution is a common health problem in developing countries. Highways and airports in particular generate noise pollution at different levels and across many frequencies. In this study, we focused on the effect of noise pollution in airports. This work aimed to measure noise pollution levels at Van Ferit Melen (VFM) airport and the effect of noise pollution on immunoglobulin A, G, and M levels among VFM airport workers in Turkey. Apron and terminal workers were found to be exposed to high noise levels (>80 dB(A)) without any protective precautions. Noise-induced temporary threshold shifts and noise-induced permanent threshold shifts were detected among the apron workers (p < 0.05). These findings suggest that the noise pollution in the VFM airport could lead to hearing loss and changes in blood serum immunoglobulin levels of airport workers. Blood serum immunoglobulin changes might be due to vibrational effects of noise pollution. Airport workers should apply protective precautions against the effects of noise pollution in the VFM airport.

  12. Social and emotional values of sounds influence human (Homo sapiens) and non-human primate (Cercopithecus campbelli) auditory laterality.

    Directory of Open Access Journals (Sweden)

    Muriel Basile

    Full Text Available Recent decades have provided evidence of auditory laterality in vertebrates, offering important new insights into the origin of human language. Factors such as the social (e.g., specificity, familiarity) and emotional value of sounds have been shown to influence hemispheric specialization. However, little is known about the crossed effect of these two factors in animals. In addition, human-animal comparative studies using the same methodology are rare. In our study, we adapted the head-turn paradigm, a widely used non-invasive method, to 8-9-year-old schoolgirls and to adult female Campbell's monkeys, focusing on head and/or eye orientations in response to sound playbacks. We broadcast communicative signals (monkeys: calls; humans: speech) emitted by familiar individuals presenting distinct degrees of social value (female monkeys: conspecific group members vs heterospecific neighbours; human girls: from the same vs a different classroom) and emotional value (monkeys: contact vs threat calls; humans: friendly vs aggressive intonation). We found a crossed-categorical effect of social and emotional values in both species, since only "negative" voices from same-class/group members elicited significant auditory laterality (Wilcoxon tests: monkeys, T = 0, p = 0.03; girls, T = 4.5, p = 0.03). Moreover, we found differences between species, as a left- and a right-hemisphere preference was found in humans and monkeys, respectively. Furthermore, while monkeys almost exclusively responded by turning their head, girls sometimes also just moved their eyes. This study supports theories defending differential roles played by the two hemispheres in primates' auditory laterality and indicates that more systematic species comparisons are needed before proposing evolutionary scenarios. Moreover, the choice of sound stimuli and behavioural measures in such studies should be the focus of careful attention.

  13. Individual differences in sound-in-noise perception are related to the strength of short-latency neural responses to noise.

    Directory of Open Access Journals (Sweden)

    Ekaterina Vinnik

    2011-02-01

    Full Text Available Important sounds can be easily missed or misidentified in the presence of extraneous noise. We describe an auditory illusion in which a continuous ongoing tone becomes inaudible during a brief, non-masking noise burst more than one octave away, which is unexpected given the frequency resolution of human hearing. Participants strongly susceptible to this illusory discontinuity did not perceive illusory auditory continuity (in which a sound subjectively continues during a burst of masking noise) when the noises were short, yet did so at longer noise durations. Participants who were not prone to illusory discontinuity showed robust early electroencephalographic responses at 40-66 ms after noise burst onset, whereas those prone to the illusion lacked these early responses. These data suggest that short-latency neural responses to auditory scene components reflect subsequent individual differences in the parsing of auditory scenes.

  14. Task-specific modulation of human auditory evoked responses in a delayed-match-to-sample task

    Directory of Open Access Journals (Sweden)

    Feng eRong

    2011-05-01

    Full Text Available In this study, we focus our investigation on task-specific cognitive modulation of early cortical auditory processing in the human cerebral cortex. During the experiments, we acquired whole-head magnetoencephalography (MEG) data while participants were performing an auditory delayed-match-to-sample (DMS) task and associated control tasks. Using a spatial-filtering beamformer technique to simultaneously estimate multiple source activities inside the human brain, we observed a significant DMS-specific suppression of the auditory evoked response to the second stimulus in a sound pair, with the center of the effect located in the vicinity of the left auditory cortex. For the right auditory cortex, a non-invariant suppression effect was observed in both DMS and control tasks. Furthermore, analysis of coherence revealed a beta-band (12-20 Hz) DMS-specific enhancement of functional interaction between sources in the left auditory cortex and those in the left inferior frontal gyrus, which has been shown to be involved in short-term memory processing during the delay period of the DMS task. Our findings support the view that early evoked cortical responses to incoming acoustic stimuli can be modulated by task-specific cognitive functions by means of frontal-temporal functional interactions.

  15. Differences between human auditory event-related potentials (AERPs) measured at 2 and 4 months after birth.

    Science.gov (United States)

    van den Heuvel, Marion I; Otte, Renée A; Braeken, Marijke A K A; Winkler, István; Kushnerenko, Elena; Van den Bergh, Bea R H

    2015-07-01

    Infant auditory event-related potentials (AERPs) show a series of marked changes during the first year of life. These AERP changes indicate important advances in early development. The current study examined AERP differences between 2- and 4-month-old infants. An auditory oddball paradigm was delivered to infants with a frequent repetitive tone and three rare auditory events. The three rare events included a shorter than the regular inter-stimulus interval (ISI-deviant), white noise segments, and environmental sounds. The results suggest that the N250 infantile AERP component emerges during this period in response to white noise but not to environmental sounds, possibly indicating a developmental step towards separating acoustic deviance from contextual novelty. The scalp distribution of the AERP response to both the white noise and the environmental sounds shifted towards frontal areas and AERP peak latencies were overall lower in infants at 4 than at 2 months of age. These observations indicate improvements in the speed of sound processing and maturation of the frontal attentional network in infants during this period. Copyright © 2015 Elsevier B.V. All rights reserved.

  16. Aging Affects Adaptation to Sound-Level Statistics in Human Auditory Cortex.

    Science.gov (United States)

    Herrmann, Björn; Maess, Burkhard; Johnsrude, Ingrid S

    2018-02-21

    Optimal perception requires efficient and adaptive neural processing of sensory input. Neurons in nonhuman mammals adapt to the statistical properties of acoustic feature distributions such that they become sensitive to sounds that are most likely to occur in the environment. However, whether human auditory responses adapt to stimulus statistical distributions and how aging affects adaptation to stimulus statistics is unknown. We used MEG to study how exposure to different distributions of sound levels affects adaptation in auditory cortex of younger (mean: 25 years; n = 19) and older (mean: 64 years; n = 20) adults (male and female). Participants passively listened to two sound-level distributions with different modes (either 15 or 45 dB sensation level). In a control block with long interstimulus intervals, allowing neural populations to recover from adaptation, neural response magnitudes were similar between younger and older adults. Critically, both age groups demonstrated adaptation to sound-level stimulus statistics, but adaptation was altered for older compared with younger people: in the older group, neural responses continued to be sensitive to sound level under conditions in which responses were fully adapted in the younger group. The lack of full adaptation to the statistics of the sensory environment may be a physiological mechanism underlying the known difficulty that older adults have with filtering out irrelevant sensory information. SIGNIFICANCE STATEMENT Behavior requires efficient processing of acoustic stimulation. Animal work suggests that neurons accomplish efficient processing by adjusting their response sensitivity depending on statistical properties of the acoustic environment. Little is known about the extent to which this adaptation to stimulus statistics generalizes to humans, particularly to older humans. We used MEG to investigate how aging influences adaptation to sound-level statistics. Listeners were presented with sounds drawn from

  17. Connectivity in the human brain dissociates entropy and complexity of auditory inputs.

    Science.gov (United States)

    Nastase, Samuel A; Iacovella, Vittorio; Davis, Ben; Hasson, Uri

    2015-03-01

    Complex systems are described according to two central dimensions: (a) the randomness of their output, quantified via entropy; and (b) their complexity, which reflects the organization of a system's generators. Whereas some approaches hold that complexity can be reduced to uncertainty or entropy, an axiom of complexity science is that signals with very high or very low entropy are generated by relatively non-complex systems, while complex systems typically generate outputs with entropy peaking between these two extremes. In understanding their environment, individuals would benefit from coding for both input entropy and complexity; entropy indexes uncertainty and can inform probabilistic coding strategies, whereas complexity reflects a concise and abstract representation of the underlying environmental configuration, which can serve independent purposes, e.g., as a template for generalization and rapid comparisons between environments. Using functional neuroimaging, we demonstrate that, in response to passively processed auditory inputs, functional integration patterns in the human brain track both the entropy and complexity of the auditory signal. Connectivity between several brain regions scaled monotonically with input entropy, suggesting sensitivity to uncertainty, whereas connectivity between other regions tracked entropy in a convex manner consistent with sensitivity to input complexity. These findings suggest that the human brain simultaneously tracks the uncertainty of sensory data and effectively models their environmental generators. Copyright © 2014. Published by Elsevier Inc.
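
    As a minimal illustration of the entropy dimension discussed above, the sketch below computes the first-order Shannon entropy of three toy tone sequences. The deterministic four-tone cycle and the random four-tone sequence score nearly identically by this measure even though their generators differ greatly, which is exactly why the study treats complexity as a dimension distinct from entropy. The sequences are illustrative, not the study's stimuli.

```python
# Hedged sketch: first-order Shannon entropy of symbol sequences. The example
# sequences are placeholders for the tone sequences discussed above.
import numpy as np

def shannon_entropy(sequence):
    """Entropy (bits/symbol) of the empirical symbol distribution."""
    _, counts = np.unique(sequence, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

rng = np.random.default_rng(0)
constant   = ["A"] * 64                                 # one repeated tone
cycle      = ["A", "B", "C", "D"] * 16                  # deterministic cycle
random_seq = list(rng.choice(list("ABCD"), size=64))    # independent random draws

for name, seq in [("constant", constant), ("cycle", cycle), ("random", random_seq)]:
    print(f"{name:8s} {shannon_entropy(seq):.2f} bits/symbol")
```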

  18. A general auditory bias for handling speaker variability in speech? Evidence in humans and songbirds

    Directory of Open Access Journals (Sweden)

    Buddhamas eKriengwatana

    2015-08-01

    Full Text Available Different speakers produce the same speech sound differently, yet listeners are still able to reliably identify the speech sound. How listeners can adjust their perception to compensate for speaker differences in speech, and whether these compensatory processes are unique to humans, is still not fully understood. In this study we compare the ability of humans and zebra finches to categorize vowels despite speaker variation in speech, in order to test the hypothesis that accommodating speaker and gender differences in isolated vowels can be achieved without prior experience with speaker-related variability. Using a behavioural Go/No-go task and identical stimuli, we compared Australian English adults' (naïve to Dutch) and zebra finches' (naïve to human speech) ability to categorize /ɪ/ and /ɛ/ vowels of a novel Dutch speaker after learning to discriminate those vowels from only one other speaker. Experiments 1 and 2 presented vowels of two speakers interspersed or blocked, respectively. Results demonstrate that categorization of vowels is possible without prior exposure to speaker-related variability in speech for zebra finches, and in non-native vowel categories for humans. Therefore, this study is the first to provide evidence for what might be a species-shared auditory bias that may supersede speaker-related information during vowel categorization. It additionally provides behavioural evidence contradicting a prior hypothesis that accommodation of speaker differences is achieved via the use of formant ratios. Therefore, investigations of alternative accounts of vowel normalization that incorporate the possibility of an auditory bias for disregarding inter-speaker variability are warranted.

  19. Shaping the aging brain: Role of auditory input patterns in the emergence of auditory cortical impairments

    Directory of Open Access Journals (Sweden)

    Brishna Soraya Kamal

    2013-09-01

    Full Text Available Age-related impairments in the primary auditory cortex (A1) include poor tuning selectivity, neural desynchronization and degraded responses to low-probability sounds. These changes have been largely attributed to reduced inhibition in the aged brain, and are thought to contribute to substantial hearing impairment in both humans and animals. Since many of these changes can be partially reversed with auditory training, it has been speculated that they might not be purely degenerative, but might rather represent negative plastic adjustments to noisy or distorted auditory signals reaching the brain. To test this hypothesis, we examined the impact of exposing young adult rats to 8 weeks of low-grade broadband noise on several aspects of A1 function and structure. We then characterized the same A1 elements in aging rats for comparison. We found that the impact of noise exposure on A1 tuning selectivity, temporal processing of auditory signal and responses to oddball tones was almost indistinguishable from the effect of natural aging. Moreover, noise exposure resulted in a reduction in the population of parvalbumin inhibitory interneurons and cortical myelin as previously documented in the aged group. Most of these changes reversed after returning the rats to a quiet environment. These results support the hypothesis that age-related changes in A1 have a strong activity-dependent component and indicate that the presence or absence of clear auditory input patterns might be a key factor in sustaining adult A1 function.

  20. Music-induced cortical plasticity and lateral inhibition in the human auditory cortex as foundations for tonal tinnitus treatment.

    Science.gov (United States)

    Pantev, Christo; Okamoto, Hidehiko; Teismann, Henning

    2012-01-01

    Over the past 15 years, we have studied plasticity in the human auditory cortex by means of magnetoencephalography (MEG). Two main topics nurtured our curiosity: the effects of musical training on plasticity in the auditory system, and the effects of lateral inhibition. One of our plasticity studies found that listening to notched music for 3 h inhibited the neuronal activity in the auditory cortex that corresponded to the center-frequency of the notch, suggesting suppression of neural activity by lateral inhibition. Subsequent research on this topic found that suppression was notably dependent upon the notch width employed, that the lower notch-edge induced stronger attenuation of neural activity than the higher notch-edge, and that auditory focused attention strengthened the inhibitory networks. Crucially, the overall effects of lateral inhibition on human auditory cortical activity were stronger than the habituation effects. Based on these results we developed a novel treatment strategy for tonal tinnitus: tailor-made notched music training (TMNMT). By notching the music energy spectrum around the individual tinnitus frequency, we intended to attract lateral inhibition to auditory neurons involved in tinnitus perception. So far, the training strategy has been evaluated in two studies. The results of the initial long-term controlled study (12 months) supported the validity of the treatment concept: subjective tinnitus loudness and annoyance were significantly reduced after TMNMT but not when notching spared the tinnitus frequencies. Correspondingly, tinnitus-related auditory evoked fields (AEFs) were significantly reduced after training. The subsequent short-term (5 days) training study indicated that training was more effective in the case of tinnitus frequencies ≤ 8 kHz compared to tinnitus frequencies >8 kHz, and that training should be employed over the long term in order to induce more persistent effects. Further development and evaluation of TMNMT therapy
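
    The notching operation described above, removing the energy around an individual tinnitus frequency from a music signal, can be sketched with an ordinary band-stop filter. The one-octave notch width, filter order, and the synthetic "music" below are illustrative assumptions; this is not the TMNMT implementation itself.

```python
# Hedged sketch: removing a one-octave band centred on an assumed tinnitus
# frequency from a signal with a zero-phase band-stop filter. Parameters and
# the placeholder signal are illustrative only.
import numpy as np
from scipy.signal import butter, sosfiltfilt

def notch_music(signal, fs, tinnitus_hz, octave_width=1.0, order=6):
    half = 2 ** (octave_width / 2.0)
    low, high = tinnitus_hz / half, tinnitus_hz * half     # notch edges (Hz)
    sos = butter(order, [low, high], btype="bandstop", fs=fs, output="sos")
    return sosfiltfilt(sos, signal)

fs = 44100
t = np.arange(0, 2.0, 1.0 / fs)
music_like = np.sum([np.sin(2 * np.pi * f * t) for f in (440, 880, 4000, 5000)], axis=0)
notched = notch_music(music_like, fs, tinnitus_hz=4500)    # assumed 4.5-kHz tinnitus pitch
```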

  1. Evolution of the auditory ossicles in extant hominids: metric variation in African apes and humans

    Science.gov (United States)

    Quam, Rolf M; Coleman, Mark N; Martínez, Ignacio

    2014-01-01

    The auditory ossicles in primates have proven to be a reliable source of phylogenetic information. Nevertheless, to date, very little data have been published on the metric dimensions of the ear ossicles in African apes and humans. The present study relies on the largest samples of African ape ear ossicles studied to date to address questions of taxonomic differences and the evolutionary transformation of the ossicles in gorillas, chimpanzees and humans. Both African ape taxa show a malleus that is characterized by a long and slender manubrium and relatively short corpus, whereas humans show the opposite constellation of a short and thick manubrium and relatively long corpus. These changes in the manubrium are plausibly linked with changes in the size of the tympanic membrane. The main difference between the incus in African apes and humans seems to be related to changes in the functional length. Compared with chimpanzees, human incudes are larger in nearly all dimensions, except articular facet height, and show a more open angle between the axes. The gorilla incus resembles humans more closely in its metric dimensions, including functional length, perhaps as a result of the dramatically larger body size compared with chimpanzees. The differences between the stapedes of humans and African apes are primarily size-related, with humans being larger in nearly all dimensions. Nevertheless, some distinctions between the African apes were found in the obturator foramen and head height. Although correlations between metric variables in different ossicles were generally lower than those between variables in the same bone, variables of the malleus/incus complex appear to be more strongly correlated than those of the incus/stapes complex, perhaps reflecting the different embryological and evolutionary origins of the ossicles. The middle ear lever ratio for the African apes is similar to other haplorhines, but humans show the lowest lever ratio within primates. Very low levels

  2. Noise Effects on Human Performance: A Meta-Analytic Synthesis

    Science.gov (United States)

    Szalma, James L.; Hancock, Peter A.

    2011-01-01

    Noise is a pervasive and influential source of stress. Whether through the acute effects of impulse noise or the chronic influence of prolonged exposure, the challenge of noise confronts many who must accomplish vital performance duties in its presence. Although noise has diffuse effects, which are shared in common with many other chronic forms of…

  3. Effect of Bluetooth headset and mobile phone electromagnetic fields on the human auditory nerve.

    Science.gov (United States)

    Mandalà, Marco; Colletti, Vittorio; Sacchetto, Luca; Manganotti, Paolo; Ramat, Stefano; Marcocci, Alessandro; Colletti, Liliana

    2014-01-01

    The possibility that long-term mobile phone use increases the incidence of astrocytoma, glioma and acoustic neuroma has been investigated in several studies. Recently, our group showed that direct exposure (in a surgical setting) to cell phone electromagnetic fields (EMFs) induces deterioration of auditory evoked cochlear nerve compound action potential (CNAP) in humans. To verify whether the use of Bluetooth devices reduces these effects, we conducted the present study with the same experimental protocol. Randomized trial. Twelve patients underwent retrosigmoid vestibular neurectomy to treat definite unilateral Ménière's disease while being monitored with acoustically evoked CNAPs to assess direct mobile phone exposure or alternatively the EMF effects of Bluetooth headsets. We found no short-term effects of Bluetooth EMFs on the auditory nervous structures, whereas direct mobile phone EMF exposure confirmed a significant decrease in CNAPs amplitude and an increase in latency in all subjects. The outcomes of the present study show that, contrary to the finding that the latency and amplitude of CNAPs are very sensitive to EMFs produced by the tested mobile phone, the EMFs produced by a common Bluetooth device do not induce any significant change in cochlear nerve activity. The conditions of exposure, therefore, differ from those of everyday life, in which various biological tissues may reduce the EMF affecting the cochlear nerve. Nevertheless, these novel findings may have important safety implications. © 2013 The American Laryngological, Rhinological and Otological Society, Inc.

  4. What's that sound? Matches with auditory long-term memory induce gamma activity in human EEG.

    Science.gov (United States)

    Lenz, Daniel; Schadow, Jeanette; Thaerig, Stefanie; Busch, Niko A; Herrmann, Christoph S

    2007-04-01

    In recent years, the cognitive functions of human gamma-band activity (30-100 Hz) have moved steadily into scientific focus. Not only have bottom-up-driven influences on 40-Hz activity been observed; top-down processes also appear to modulate responses in this frequency band. Among the various functions that have been related to gamma activity, a pivotal role has been assigned to memory processes. Visual experiments suggested that gamma activity is involved in matching visual input to memory representations. Based on these findings, we hypothesized that such memory-related modulations of gamma activity exist in the auditory modality as well. Thus, we chose environmental sounds for which subjects already had a long-term memory (LTM) representation and compared them to unknown, but physically similar, sounds. Twenty-one subjects had to classify sounds as 'recognized' or 'unrecognized' while EEG was recorded. Our data show significantly stronger activity in the induced gamma band for recognized sounds in the time window between 300 and 500 ms after stimulus onset, with a central topography. The results suggest that induced gamma-band activity reflects matches between sounds and their representations in auditory LTM.
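
    One common way to quantify induced (non-phase-locked) gamma-band activity in a post-stimulus window is to band-pass each single trial, take the Hilbert envelope, and average the envelopes across trials. The sketch below follows that generic recipe on simulated data; it is not necessarily the analysis pipeline used in the study, and the sampling rate, filter band, and simulated 40-Hz burst are assumptions.

```python
# Hedged sketch: induced gamma-band envelope in a 300-500 ms window, computed
# from simulated single-trial EEG. All data and parameters are illustrative.
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

fs = 500                                    # Hz, assumed EEG sampling rate
t = np.arange(-0.2, 0.8, 1.0 / fs)          # epoch from -200 to 800 ms
rng = np.random.default_rng(1)

def make_trial():
    """Noise plus a non-phase-locked 40-Hz burst between 300 and 500 ms."""
    trial = rng.standard_normal(t.size)
    window = (t >= 0.3) & (t <= 0.5)
    trial[window] += 2.0 * np.sin(2 * np.pi * 40 * t[window] + rng.uniform(0, 2 * np.pi))
    return trial

trials = np.array([make_trial() for _ in range(30)])

sos = butter(4, [30, 100], btype="bandpass", fs=fs, output="sos")
envelopes = np.abs(hilbert(sosfiltfilt(sos, trials, axis=-1), axis=-1))
induced = envelopes.mean(axis=0)            # envelope averaging keeps induced activity

window = (t >= 0.3) & (t <= 0.5)
baseline = (t >= -0.2) & (t < 0.0)
print("gamma envelope, 300-500 ms vs baseline:",
      round(induced[window].mean(), 2), "vs", round(induced[baseline].mean(), 2))
```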

  5. Reducing the Effects of Background Noise during Auditory Functional Magnetic Resonance Imaging of Speech Processing: Qualitative and Quantitative Comparisons between Two Image Acquisition Schemes and Noise Cancellation

    Science.gov (United States)

    Blackman, Graham A.; Hall, Deborah A.

    2011-01-01

    Purpose: The intense sound generated during functional magnetic resonance imaging (fMRI) complicates studies of speech and hearing. This experiment evaluated the benefits of using active noise cancellation (ANC), which attenuates the level of the scanner sound at the participant's ear by up to 35 dB around the peak at 600 Hz. Method: Speech and…

  6. Neurophysiological evidence for context-dependent encoding of sensory input in human auditory cortex.

    Science.gov (United States)

    Sussman, Elyse; Steinschneider, Mitchell

    2006-02-23

    Attention biases the way in which sound information is stored in auditory memory. Little is known, however, about the contribution of stimulus-driven processes in forming and storing coherent sound events. An electrophysiological index of cortical auditory change detection (mismatch negativity [MMN]) was used to assess whether sensory memory representations could be biased toward one organization over another (one or two auditory streams) without attentional control. Results revealed that sound representations held in sensory memory biased the organization of subsequent auditory input. The results demonstrate that context-dependent sound representations modulate stimulus-dependent neural encoding at early stages of auditory cortical processing.

  7. Maturational changes in ear advantage for monaural word recognition in noise among listeners with central auditory processing disorders

    Directory of Open Access Journals (Sweden)

    Mohsin Ahmed Shaikh

    2017-02-01

    Full Text Available This study aimed to investigate differences between ears in performance on a monaural word recognition in noise test among individuals across a broad range of ages assessed for (CAPD. Word recognition scores in quiet and in speech noise were collected retrospectively from the medical files of 107 individuals between the ages of 7 and 30 years who were diagnosed with (CAPD. No ear advantage was found on the word recognition in noise task in groups less than ten years. Performance in both ears was equally poor. Right ear performance improved across age groups, with scores of individuals above age 10 years falling within the normal range. In contrast, left ear performance remained essentially stable and in the impaired range across all age groups. Findings indicate poor left hemispheric dominance for speech perception in noise in children below the age of 10 years with (CAPD. However, a right ear advantage on this monaural speech in noise task was observed for individuals 10 years and older.

  8. Dissociable neural response signatures for slow amplitude and frequency modulation in human auditory cortex.

    Science.gov (United States)

    Henry, Molly J; Obleser, Jonas

    2013-01-01

    Natural auditory stimuli are characterized by slow fluctuations in amplitude and frequency. However, the degree to which the neural responses to slow amplitude modulation (AM) and frequency modulation (FM) are capable of conveying independent time-varying information, particularly with respect to speech communication, is unclear. In the current electroencephalography (EEG) study, participants listened to amplitude- and frequency-modulated narrow-band noises with a 3-Hz modulation rate, and the resulting neural responses were compared. Spectral analyses revealed similar spectral amplitude peaks for AM and FM at the stimulation frequency (3 Hz), but amplitude at the second harmonic frequency (6 Hz) was much higher for FM than for AM. Moreover, the phase delay of neural responses with respect to the full-band stimulus envelope was shorter for FM than for AM. Finally, the critical analysis involved classification of single trials as being in response to either AM or FM based on either phase or amplitude information. Time-varying phase, but not amplitude, was sufficient to accurately classify AM and FM stimuli based on single-trial neural responses. Taken together, the current results support the dissociable nature of cortical signatures of slow AM and FM. These cortical signatures potentially provide an efficient means to dissect simultaneously communicated slow temporal and spectral information in acoustic communication signals.
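
    The stimulus contrast at the heart of the study, slow AM versus slow FM of a narrow-band noise, can be sketched as follows. The AM version scales the carrier envelope at 3 Hz; the FM version shifts the whole noise band up and down by modulating the phase of the analytic signal. Centre frequency, bandwidth, and modulation depths are illustrative assumptions, not the study's stimulus parameters.

```python
# Hedged sketch: 3-Hz amplitude-modulated and frequency-modulated narrow-band
# noise. All parameter values are illustrative.
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

fs = 44100
t = np.arange(0, 2.0, 1.0 / fs)
rng = np.random.default_rng(0)

# Narrow-band noise carrier: white noise band-passed around 1 kHz.
sos = butter(4, [900, 1100], btype="bandpass", fs=fs, output="sos")
carrier = sosfiltfilt(sos, rng.standard_normal(t.size))

fmod = 3.0                                          # modulation rate (Hz)

# AM: scale the carrier by a 3-Hz raised sinusoid.
am = carrier * (1.0 + 0.8 * np.sin(2 * np.pi * fmod * t))

# FM: shift the band's instantaneous frequency by +/-100 Hz at 3 Hz,
# by rotating the analytic signal with the integrated frequency deviation.
freq_dev = 100.0 * np.sin(2 * np.pi * fmod * t)     # Hz
phase = 2 * np.pi * np.cumsum(freq_dev) / fs        # integrate deviation to phase
fm = np.real(hilbert(carrier) * np.exp(1j * phase))
```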

  9. Speech Processing to Improve the Perception of Speech in Background Noise for Children With Auditory Processing Disorder and Typically Developing Peers.

    Science.gov (United States)

    Flanagan, Sheila; Zorilă, Tudor-Cătălin; Stylianou, Yannis; Moore, Brian C J

    2018-01-01

    Auditory processing disorder (APD) may be diagnosed when a child has listening difficulties but has normal audiometric thresholds. For adults with normal hearing and with mild-to-moderate hearing impairment, an algorithm called spectral shaping with dynamic range compression (SSDRC) has been shown to increase the intelligibility of speech when background noise is added after the processing. Here, we assessed the effect of such processing using 8 children with APD and 10 age-matched control children. The loudness of the processed and unprocessed sentences was matched using a loudness model. The task was to repeat back sentences produced by a female speaker when presented with either speech-shaped noise (SSN) or a male competing speaker (CS) at two signal-to-background ratios (SBRs). Speech identification was significantly better with SSDRC processing than without, for both groups. The benefit of SSDRC processing was greater for the SSN than for the CS background. For the SSN, scores were similar for the two groups at both SBRs. For the CS, the APD group performed significantly more poorly than the control group. The overall improvement produced by SSDRC processing could be useful for enhancing communication in a classroom where the teacher's voice is broadcast using a wireless system.
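
    Presenting a target sentence against a background at a fixed signal-to-background ratio (SBR) reduces to scaling the background relative to the target. The study matched loudness with a loudness model; the sketch below uses plain RMS scaling only as a simpler stand-in, and the target and background signals are placeholders.

```python
# Hedged sketch: mixing a target with a background at a specified SBR using
# RMS scaling (a stand-in for the loudness-model matching used in the study).
import numpy as np

def rms(x):
    return np.sqrt(np.mean(x ** 2))

def mix_at_sbr(target, background, sbr_db):
    """Scale the background so that 20*log10(rms(target)/rms(background)) = sbr_db."""
    gain = rms(target) / (rms(background) * 10 ** (sbr_db / 20.0))
    return target + gain * background

fs = 16000
t = np.arange(0, 1.0, 1.0 / fs)
target = np.sin(2 * np.pi * 220 * t)                            # placeholder "sentence"
background = np.random.default_rng(0).standard_normal(t.size)   # placeholder noise
mixture = mix_at_sbr(target, background, sbr_db=0.0)
```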

  10. Human annoyance and reactions to hotel room specific noises

    Science.gov (United States)

    Everhard, Ian L.

    2004-05-01

    A new formula is presented where multiple annoyance sources and transmission loss values of any partition are combined to produce a new single number rating of annoyance. The explanation of the formula is based on theoretical psychoacoustics and survey testing used to create variables used to weight the results. An imaginary hotel room is processed through the new formula and is rated based on theoretical survey results that would be taken by guests of the hotel. The new single number rating compares the multiple sources of annoyance to a single imaginary unbiased source where absolute level is the only factor in stimulating a linear rise in annoyance [Fidell et al., J. Acoust. Soc. Am. 66, 1427 (1979); D. M. Jones and D. E. Broadbent, ``Human performance and noise,'' in Handbook of Noise Control, 3rd ed., edited by C. M. Harris (ASA, New York, 1998), Chap. 24; J. P. Conroy and J. S. Roland, ``STC Field Testing and Results,'' in Sound and Vibration Magazine, Acoustical Publications, pp. 10-15 (July 2003)].

  11. Motor-Auditory-Visual Integration: The Role of the Human Mirror Neuron System in Communication and Communication Disorders

    Science.gov (United States)

    Le Bel, Ronald M.; Pineda, Jaime A.; Sharma, Anu

    2009-01-01

    The mirror neuron system (MNS) is a trimodal system composed of neuronal populations that respond to motor, visual, and auditory stimulation, such as when an action is performed, observed, heard or read about. In humans, the MNS has been identified using neuroimaging techniques (such as fMRI and mu suppression in the EEG). It reflects an…

  12. Functional Mapping of the Human Auditory Cortex: fMRI Investigation of a Patient with Auditory Agnosia from Trauma to the Inferior Colliculus.

    Science.gov (United States)

    Poliva, Oren; Bestelmeyer, Patricia E G; Hall, Michelle; Bultitude, Janet H; Koller, Kristin; Rafal, Robert D

    2015-09-01

    To use functional magnetic resonance imaging to map the auditory cortical fields that are activated, or nonreactive, to sounds in patient M.L., who has auditory agnosia caused by trauma to the inferior colliculi. The patient cannot recognize speech or environmental sounds. Her discrimination is greatly facilitated by context and visibility of the speaker's facial movements, and under forced-choice testing. Her auditory temporal resolution is severely compromised. Her discrimination is more impaired for words differing in voice onset time than place of articulation. Words presented to her right ear are extinguished with dichotic presentation; auditory stimuli in the right hemifield are mislocalized to the left. We used functional magnetic resonance imaging to examine cortical activations to different categories of meaningful sounds embedded in a block design. Sounds activated the caudal sub-area of M.L.'s primary auditory cortex (hA1) bilaterally and her right posterior superior temporal gyrus (auditory dorsal stream), but not the rostral sub-area (hR) of her primary auditory cortex or the anterior superior temporal gyrus in either hemisphere (auditory ventral stream). Auditory agnosia reflects dysfunction of the auditory ventral stream. The ventral and dorsal auditory streams are already segregated as early as the primary auditory cortex, with the ventral stream projecting from hR and the dorsal stream from hA1. M.L.'s leftward localization bias, preserved audiovisual integration, and phoneme perception are explained by preserved processing in her right auditory dorsal stream.

  13. Deriving cochlear delays in humans using otoacoustic emissions and auditory evoked potentials

    DEFF Research Database (Denmark)

    Pigasse, Gilles

    A great deal of the processing of incoming sounds to the auditory system occurs within the cochlea. The organ of Corti within the cochlea has differing mechanical properties along its length that broadly give rise to frequency selectivity. Its stiffness is at a maximum at the base and decreases...... relation between frequency and travel time in the cochlea defines the cochlear delay. This delay is directly associated with the signal analysis occurring in the inner ear and is therefore of primary interest for gaining a better knowledge of this organ. It is possible to estimate the cochlear delay by direct...... and invasive techniques, but these disrupt the normal functioning of the cochlea and are usually conducted in animals. In order to obtain an estimate of the cochlear delay that is closer to the normally functioning human cochlea, the present project investigates non-invasive methods in normal-hearing adults...

  14. Noise pollution in opencast mines - its impact on human environment

    International Nuclear Information System (INIS)

    Tripathy, D.P.; Patnaik, N.K.

    1994-01-01

    Noise could be defined as sound without agreeable musical quality or as unwanted sound. The problem of noise has been accentuated in the mining industry due to increased mechanisation. In opencast mines, noise is generated in almost all mining operations, thereby becoming an integral part of the mining environment. Prolonged exposure to high levels of noise (>90 dBA) proves harmful and may culminate in noise-induced hearing loss (NIHL). Noise may also bring about other physiological disorders which could lead to irritability and lowered efficiency. Before initiating any administrative, engineering, and medical measures against noise hazards, noise surveys are essential. They help in identifying the noise pollution sources and quantifying the risk exposures of workers. Effective anti-noise measures can accordingly be formulated and implemented. The present paper discusses the results of noise studies in a limestone and dolomite quarry and analyses the SPL (dBA) produced by different machinery in this mine. Further, it focuses on the adverse effects of noise and lists the instruments available for noise monitoring. It also presents the noise standards recommended in India and abroad and suggests noise abatement strategies to be adopted for protecting workers against NIHL. 7 refs., 2 figs., 6 tabs., 1 app

  15. Brain stem auditory potentials evoked by clicks in the presence of high-pass filtered noise in dogs.

    Science.gov (United States)

    Poncelet, L; Deltenre, P; Coppens, A; Michaux, C; Coussart, E

    2006-04-01

    This study evaluates the effects of a high-frequency hearing loss, simulated by the high-pass-noise masking method, on click-evoked brain stem auditory evoked potential (BAEP) characteristics in dogs. BAEP were obtained in response to rarefaction and condensation click stimuli from 60 dB normal hearing level (NHL, corresponding to 89 dB sound pressure level) down to wave V threshold, using steps of 5 dB, in eleven 58- to 80-day-old Beagle puppies. Responses were added, providing an equivalent to alternate-polarity clicks, and subtracted, providing the rarefaction-condensation difference potential (RCDP). The procedure was repeated while constant-level, high-pass filtered (HPF) noise was superimposed on the click. Cut-off frequencies of the successively used filters were 8, 4, 2 and 1 kHz. For each condition, wave V and RCDP thresholds, and the slope of the wave V latency-intensity curve (LIC), were collected. The intensity range over which RCDP could not be recorded (pre-RCDP range) was calculated. Compared with the no-noise condition, the pre-RCDP range significantly diminished and the wave V threshold significantly increased when the cut-off of the superimposed HPF noise reached the 4 kHz region. The wave V LIC slope became significantly steeper with the 2 kHz HPF noise. In this non-invasive model of high-frequency hearing loss, impaired hearing at frequencies of 8 kHz and above escaped detection through click-BAEP study in dogs. Frequencies above 13 kHz were, however, not specifically addressed in this study.
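
    The add/subtract step mentioned above is a simple arithmetic operation on the averaged responses to the two click polarities; the sketch below shows it on placeholder waveforms. The sampling rate and analysis window are assumptions, and whether the sum and difference are halved is a convention choice rather than something stated in the abstract.

```python
# Hedged sketch: deriving the alternate-polarity equivalent and the rarefaction-
# condensation difference potential (RCDP) from averaged responses. The averaged
# waveforms are random placeholders, not recorded BAEP data.
import numpy as np

fs = 20000                                        # Hz, assumed sampling rate
t = np.arange(0, 0.010, 1.0 / fs)                 # 10-ms analysis window
rng = np.random.default_rng(0)

rarefaction_avg = rng.standard_normal(t.size)     # placeholder averaged response
condensation_avg = rng.standard_normal(t.size)    # placeholder averaged response

alternating_equiv = 0.5 * (rarefaction_avg + condensation_avg)  # cancels polarity-following parts
rcdp = 0.5 * (rarefaction_avg - condensation_avg)               # retains polarity-following parts
```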

  16. Music-induced cortical plasticity and lateral inhibition in the human auditory cortex as foundations for tonal tinnitus treatment

    Directory of Open Access Journals (Sweden)

    Christo ePantev

    2012-06-01

    Full Text Available Over the past 15 years, we have studied plasticity in the human auditory cortex by means of magnetoencephalography (MEG. Two main topics nurtured our curiosity: the effects of musical training on plasticity in the auditory system, and the effects of lateral inhibition. One of our plasticity studies found that listening to notched music for three hours inhibited the neuronal activity in the auditory cortex that corresponded to the center-frequency of the notch, suggesting suppression of neural activity by lateral inhibition. Crucially, the overall effects of lateral inhibition on human auditory cortical activity were stronger than the habituation effects. Based on these results we developed a novel treatment strategy for tonal tinnitus - tailor-made notched music training (TMNMT. By notching the music energy spectrum around the individual tinnitus frequency, we intended to attract lateral inhibition to auditory neurons involved in tinnitus perception. So far, the training strategy has been evaluated in two studies. The results of the initial long-term controlled study (12 months supported the validity of the treatment concept: subjective tinnitus loudness and annoyance were significantly reduced after TMNMT but not when notching spared the tinnitus frequencies. Correspondingly, tinnitus-related auditory evoked fields (AEFs were significantly reduced after training. The subsequent short-term (5 days training study indicated that training was more effective in the case of tinnitus frequencies ≤ 8 kHz compared to tinnitus frequencies > 8 kHz, and that training should be employed over a long-term in order to induce more persistent effects. Further development and evaluation of TMNMT therapy are planned. A goal is to transfer this novel, completely non-invasive, and low-cost treatment approach for tonal tinnitus into routine clinical practice.

  17. The effect of human activity noise on the acoustic quality in open plan office

    DEFF Research Database (Denmark)

    Dehlbæk, Tania Stenholt; Jeong, Cheol-Ho; Brunskog, Jonas

    2016-01-01

    A disadvantage of open plan offices is the noise annoyance. Noise problems in open plan offices have been dealt with in several studies, and standards have been set up. Still, what has not been taken into account is the effect of human activity noise on acoustic conditions. In this study......, measurements of the general office noise levels and the room acoustic conditions according to ISO 3382-3 have been carried out in five open plan offices. Probability density functions of the sound pressure level have been obtained, and the human activity noise has been identified. Results showed a decrease...... in STI-values including the human activity noise compared to STI-values including only technical background noise as the standard recommends. Furthermore, at 500 Hz a regression analysis showed that the density of people in a room, absorption area, reverberation time as well as the ISO 3382-3 parameter
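
    The probability-density description of measured sound pressure levels mentioned above can be sketched with an empirical histogram and percentile statistics (e.g., L90 as a background estimate, L10 as an activity-dominated estimate). The level samples below are simulated placeholders, not office measurements.

```python
# Hedged sketch: empirical PDF and percentile levels of measured sound pressure
# levels. The simulated samples stand in for 1-s LAeq measurements.
import numpy as np

rng = np.random.default_rng(0)
background = rng.normal(38.0, 1.5, size=6000)    # quiet ventilation background (dB)
activity = rng.normal(55.0, 4.0, size=2000)      # human activity bursts (dB)
levels_db = np.concatenate([background, activity])

hist, edges = np.histogram(levels_db, bins=40, density=True)   # empirical PDF
l90 = np.percentile(levels_db, 10)   # level exceeded 90% of the time
l10 = np.percentile(levels_db, 90)   # level exceeded 10% of the time
print("L90 =", round(l90, 1), "dB;  L10 =", round(l10, 1), "dB")
```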

  18. The Effect of Early Visual Deprivation on the Neural Bases of Auditory Processing.

    Science.gov (United States)

    Guerreiro, Maria J S; Putzar, Lisa; Röder, Brigitte

    2016-02-03

    Transient congenital visual deprivation affects visual and multisensory processing. In contrast, the extent to which it affects auditory processing has not been investigated systematically. Research in permanently blind individuals has revealed brain reorganization during auditory processing, involving both intramodal and crossmodal plasticity. The present study investigated the effect of transient congenital visual deprivation on the neural bases of auditory processing in humans. Cataract-reversal individuals and normally sighted controls performed a speech-in-noise task while undergoing functional magnetic resonance imaging. Although there were no behavioral group differences, groups differed in auditory cortical responses: in the normally sighted group, auditory cortex activation increased with increasing noise level, whereas in the cataract-reversal group, no activation difference was observed across noise levels. An auditory activation of visual cortex was not observed at the group level in cataract-reversal individuals. The present data suggest prevailing auditory processing advantages after transient congenital visual deprivation, even many years after sight restoration. The present study demonstrates that people whose sight was restored after a transient period of congenital blindness show more efficient cortical processing of auditory stimuli (here speech), similarly to what has been observed in congenitally permanently blind individuals. These results underscore the importance of early sensory experience in permanently shaping brain function. Copyright © 2016 the authors 0270-6474/16/361620-11$15.00/0.

  19. Frequency-Selective Attention in Auditory Scenes Recruits Frequency Representations Throughout Human Superior Temporal Cortex.

    Science.gov (United States)

    Riecke, Lars; Peters, Judith C; Valente, Giancarlo; Kemper, Valentin G; Formisano, Elia; Sorger, Bettina

    2017-05-01

    A sound of interest may be tracked amid other salient sounds by focusing attention on its characteristic features including its frequency. Functional magnetic resonance imaging findings have indicated that frequency representations in human primary auditory cortex (AC) contribute to this feat. However, attentional modulations were examined at relatively low spatial and spectral resolutions, and frequency-selective contributions outside the primary AC could not be established. To address these issues, we compared blood oxygenation level-dependent (BOLD) responses in the superior temporal cortex of human listeners while they identified single frequencies versus listened selectively for various frequencies within a multifrequency scene. Using best-frequency mapping, we observed that the detailed spatial layout of attention-induced BOLD response enhancements in primary AC follows the tonotopy of stimulus-driven frequency representations-analogous to the "spotlight" of attention enhancing visuospatial representations in retinotopic visual cortex. Moreover, using an algorithm trained to discriminate stimulus-driven frequency representations, we could successfully decode the focus of frequency-selective attention from listeners' BOLD response patterns in nonprimary AC. Our results indicate that the human brain facilitates selective listening to a frequency of interest in a scene by reinforcing the fine-grained activity pattern throughout the entire superior temporal cortex that would be evoked if that frequency was present alone. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
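
    A hedged sketch of the cross-decoding step described above: a classifier is trained on stimulus-driven multivoxel patterns evoked by single frequencies and then applied to patterns recorded during frequency-selective listening. The linear support vector machine and the array names are illustrative assumptions, not the authors' exact analysis pipeline.

    ```python
    # Sketch: decode the attended frequency from BOLD patterns using a classifier
    # trained only on stimulus-driven (single-frequency) patterns.
    import numpy as np
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import LinearSVC

    def cross_decode(X_stim, y_stim, X_attend, y_attend):
        """X_*: (trials, voxels) patterns; y_*: frequency labels per trial."""
        clf = make_pipeline(StandardScaler(), LinearSVC())
        clf.fit(X_stim, y_stim)              # learn stimulus-driven frequency patterns
        predicted = clf.predict(X_attend)    # decode the focus of attention
        return float(np.mean(predicted == y_attend))

    # accuracy = cross_decode(X_single_tones, tone_labels, X_scene, attended_labels)
    ```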

  20. Auditory Modeling for Noisy Speech Recognition

    National Research Council Canada - National Science Library

    2000-01-01

    ... digital filtering for noise cancellation which interfaces to speech recognition software. It uses auditory features in speech recognition training, and provides applications to multilingual spoken language translation...

  1. Perceptual consequences of disrupted auditory nerve activity.

    Science.gov (United States)

    Zeng, Fan-Gang; Kong, Ying-Yee; Michalewski, Henry J; Starr, Arnold

    2005-06-01

    Perceptual consequences of disrupted auditory nerve activity were systematically studied in 21 subjects who had been clinically diagnosed with auditory neuropathy (AN), a recently defined disorder characterized by normal outer hair cell function but disrupted auditory nerve function. Neurological and electrophysical evidence suggests that disrupted auditory nerve activity is due to desynchronized or reduced neural activity or both. Psychophysical measures showed that the disrupted neural activity has minimal effects on intensity-related perception, such as loudness discrimination, pitch discrimination at high frequencies, and sound localization using interaural level differences. In contrast, the disrupted neural activity significantly impairs timing related perception, such as pitch discrimination at low frequencies, temporal integration, gap detection, temporal modulation detection, backward and forward masking, signal detection in noise, binaural beats, and sound localization using interaural time differences. These perceptual consequences are the opposite of what is typically observed in cochlear-impaired subjects who have impaired intensity perception but relatively normal temporal processing after taking their impaired intensity perception into account. These differences in perceptual consequences between auditory neuropathy and cochlear damage suggest the use of different neural codes in auditory perception: a suboptimal spike count code for intensity processing, a synchronized spike code for temporal processing, and a duplex code for frequency processing. We also proposed two underlying physiological models based on desynchronized and reduced discharge in the auditory nerve to successfully account for the observed neurological and behavioral data. These methods and measures cannot differentiate between these two AN models, but future studies using electric stimulation of the auditory nerve via a cochlear implant might. These results not only show the unique

  2. Representation of auditory-filter phase characteristics in the cortex of human listeners

    DEFF Research Database (Denmark)

    Rupp, A.; Sieroka, N.; Gutschalk, A.

    2008-01-01

    consistent with the perceptual data obtained with the same stimuli and with results from simulations of neural activity at the output of cochlear preprocessing. These findings demonstrate that phase effects in peripheral auditory processing are accurately reflected up to the level of the auditory cortex....

  3. Efeitos auditivos da exposição combinada: interação entre monóxido de carbono, ruído e tabagismo Auditory effects of combined exposure: interaction between carbon monoxide, noise and smoking

    Directory of Open Access Journals (Sweden)

    Débora Gonçalves Ferreira

    2012-12-01

    Full Text Available PURPOSE: To analyze the auditory effects of combined exposure to carbon monoxide (CO) and noise, and the impact of smoking. METHODS: Participants were 80 male workers, smokers and non-smokers, from a steel company; 40 were exposed to CO and noise and 40 only to noise. A retrospective analysis was conducted of the data on environmental risks (CO and noise) and of the information in the medical records related to hearing health and to blood carboxyhemoglobin (COHb) concentrations. The baseline and most recent pure-tone audiograms were analyzed, along with the auditory thresholds as a function of smoking, type of exposure (CO and noise, or noise only), exposure time, noise level, and age. RESULTS: Both the CO concentration and the noise levels were above the tolerance limits established by Regulatory Standard 15 of the Brazilian Ministry of Labor. The group exposed to CO and noise showed more cases of noise-induced hearing loss (22.5%) than the group exposed only to noise (7.5%), and also showed a significant worsening of the auditory thresholds at 3, 4 and 6 kHz. Age, length of employment, type of exposure, noise level and smoking all significantly influenced the participants' auditory thresholds. Smoking potentiated the effects of both CO and noise on the auditory system. CONCLUSION: Significant auditory effects were identified in the hearing of steel-industry workers exposed to CO.

  4. Event-related brain potential correlates of human auditory sensory memory-trace formation.

    Science.gov (United States)

    Haenschel, Corinna; Vernon, David J; Dwivedi, Prabuddh; Gruzelier, John H; Baldeweg, Torsten

    2005-11-09

    The event-related potential (ERP) component mismatch negativity (MMN) is a neural marker of human echoic memory. MMN is elicited by deviant sounds embedded in a stream of frequent standards, reflecting the deviation from an inferred memory trace of the standard stimulus. The strength of this memory trace is thought to be proportional to the number of repetitions of the standard tone, visible as the progressive enhancement of MMN with number of repetitions (MMN memory-trace effect). However, no direct ERP correlates of the formation of echoic memory traces are currently known. This study set out to investigate changes in ERPs to different numbers of repetitions of standards, delivered in a roving-stimulus paradigm in which the frequency of the standard stimulus changed randomly between stimulus trains. Normal healthy volunteers (n = 40) were engaged in two experimental conditions: during passive listening and while actively discriminating changes in tone frequency. As predicted, MMN increased with increasing number of standards. However, this MMN memory-trace effect was caused mainly by enhancement with stimulus repetition of a slow positive wave from 50 to 250 ms poststimulus in the standard ERP, which is termed here "repetition positivity" (RP). This RP was recorded from frontocentral electrodes when participants were passively listening to or actively discriminating changes in tone frequency. RP may represent a human ERP correlate of rapid and stimulus-specific adaptation, a candidate neuronal mechanism underlying sensory memory formation in the auditory cortex.
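
    To make the repetition-positivity measure described above concrete, the sketch below averages the 50-250 ms amplitude of standard-tone epochs after grouping them by how often the standard has already been repeated; RP shows up as this amplitude growing across bins. Epoch layout, electrode selection, and the bin edges are assumptions for illustration.

    ```python
    # Sketch: repetition positivity as mean 50-250 ms amplitude of the standard ERP,
    # binned by the number of preceding repetitions. Shapes and bins are assumptions.
    import numpy as np

    def mean_amplitude(epochs, times, tmin=0.05, tmax=0.25):
        """epochs: (n_trials, n_samples) frontocentral data; times in seconds."""
        window = (times >= tmin) & (times <= tmax)
        return epochs[:, window].mean(axis=1)

    def repetition_positivity(epochs, times, n_repetitions, edges=(0, 2, 6, 36)):
        """Mean amplitude per repetition-count bin; RP = increase across bins."""
        amps = mean_amplitude(epochs, times)
        profile = {}
        for lo, hi in zip(edges[:-1], edges[1:]):
            in_bin = (n_repetitions > lo) & (n_repetitions <= hi)
            profile[f"{lo + 1}-{hi} repetitions"] = float(amps[in_bin].mean())
        return profile
    ```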

  5. Activations of human auditory cortex to phonemic and nonphonemic vowels during discrimination and memory tasks.

    Science.gov (United States)

    Harinen, Kirsi; Rinne, Teemu

    2013-08-15

    We used fMRI to investigate activations within human auditory cortex (AC) to vowels during vowel discrimination, vowel (categorical n-back) memory, and visual tasks. Based on our previous studies, we hypothesized that the vowel discrimination task would be associated with increased activations in the anterior superior temporal gyrus (STG), while the vowel memory task would enhance activations in the posterior STG and inferior parietal lobule (IPL). In particular, we tested the hypothesis that activations in the IPL during vowel memory tasks are associated with categorical processing. Namely, activations due to categorical processing should be higher during tasks performed on nonphonemic (hard to categorize) than on phonemic (easy to categorize) vowels. As expected, we found distinct activation patterns during vowel discrimination and vowel memory tasks. Further, these task-dependent activations were different during tasks performed on phonemic or nonphonemic vowels. However, activations in the IPL associated with the vowel memory task were not stronger during nonphonemic than phonemic vowel blocks. Together these results demonstrate that activations in human AC to vowels depend on both the requirements of the behavioral task and the phonemic status of the vowels. Copyright © 2013 Elsevier Inc. All rights reserved.

  6. Real-time classification of auditory sentences using evoked cortical activity in humans

    Science.gov (United States)

    Moses, David A.; Leonard, Matthew K.; Chang, Edward F.

    2018-06-01

    Objective. Recent research has characterized the anatomical and functional basis of speech perception in the human auditory cortex. These advances have made it possible to decode speech information from activity in brain regions like the superior temporal gyrus, but no published work has demonstrated this ability in real-time, which is necessary for neuroprosthetic brain-computer interfaces. Approach. Here, we introduce a real-time neural speech recognition (rtNSR) software package, which was used to classify spoken input from high-resolution electrocorticography signals in real-time. We tested the system with two human subjects implanted with electrode arrays over the lateral brain surface. Subjects listened to multiple repetitions of ten sentences, and rtNSR classified what was heard in real-time from neural activity patterns using direct sentence-level and HMM-based phoneme-level classification schemes. Main results. We observed single-trial sentence classification accuracies of 90% or higher for each subject with less than 7 minutes of training data, demonstrating the ability of rtNSR to use cortical recordings to perform accurate real-time speech decoding in a limited vocabulary setting. Significance. Further development and testing of the package with different speech paradigms could influence the design of future speech neuroprosthetic applications.

  7. Selective attention and the auditory vertex potential. I - Effects of stimulus delivery rate. II - Effects of signal intensity and masking noise

    Science.gov (United States)

    Schwent, V. L.; Hillyard, S. A.; Galambos, R.

    1976-01-01

    The effects of varying the rate of delivery of dichotic tone-pip stimuli on selective attention, measured by evoked-potential amplitudes and signal detectability scores, were studied. The subjects attended to one channel (ear) of tones, ignored the other, and pressed a button whenever occasional targets - tones of a slightly higher pitch - were detected in the attended ear. Under separate conditions, randomized interstimulus intervals were short, medium, and long. Another study compared the effects of attention on the N1 component of the auditory evoked potential for tone pips presented alone and when white noise was added to make the tones barely above the detectability threshold in a three-channel listening task. Major conclusions are that (1) N1 is enlarged to stimuli in an attended channel only in the short interstimulus-interval condition (averaging 350 msec), (2) N1 and P3 are related to different modes of selective attention, and (3) attention selectivity in the multichannel listening task is greater when tones are faint and/or difficult to detect.

  8. Recovery function of the human brain stem auditory-evoked potential.

    Science.gov (United States)

    Kevanishvili, Z; Lagidze, Z

    1979-01-01

    Amplitude reduction and peak latency prolongation were observed in the human brain stem auditory-evoked potential (BEP) with preceding (conditioning) stimulation. At a conditioning interval (CI) of 5 ms the alteration of BEP was greater than at a CI of 10 ms. At a CI of 10 ms the amplitudes of some BEP components (e.g. waves I and II) were more decreased than those of others (e.g. wave V), while the peak latency prolongation did not show any obvious component selectivity. At a CI of 5 ms, the extent of the amplitude decrement of individual BEP components differed less, while the increase in the peak latencies of the later components was greater than that of the earlier components. The alterations of the parameters of the test BEPs at both CIs are ascribed to the desynchronization of intrinsic neural events. The differential amplitude reduction at a CI of 10 ms is explained by the different durations of neural firings determining various effects of desynchronization upon the amplitudes of individual BEP components. The decrease in the extent of the component selectivity and the preferential increase in the peak latencies of the later BEP components observed at a CI of 5 ms are explained by the intensification of the mechanism of the relative refractory period.

  9. A Review of Adverse Effects of Road Traffic Noise on Human Health

    Science.gov (United States)

    Singh, Devi; Kumari, Neeraj; Sharma, Pooja

    Noise pollution due to road traffic is a potential threat to human health. It is a global hazard, and rapid urbanization and exponential traffic growth have aggravated the problem. Populations residing along busy traffic lanes are continuously exposed to sound levels above the permissible limits. This constant exposure to noise pollution is a cause for concern, as it leads to several adverse impacts on human health. Traffic noise causes irritation and annoyance, sleep disturbances, cardiovascular disease, increased risk of stroke, diabetes, hypertension and loss of hearing, and it results in decreased work performance. The present review highlights the serious health hazards of road traffic noise (RTN), which need to be curbed. Preventive measures against noise pollution can help combat noise-induced health hazards and improve work performance.

  10. Repetition suppression and repetition enhancement underlie auditory memory-trace formation in the human brain: an MEG study.

    Science.gov (United States)

    Recasens, Marc; Leung, Sumie; Grimm, Sabine; Nowak, Rafal; Escera, Carles

    2015-03-01

    The formation of echoic memory traces has traditionally been inferred from the enhanced responses to its deviations. The mismatch negativity (MMN), an auditory event-related potential (ERP) elicited between 100 and 250ms after sound deviation is an indirect index of regularity encoding that reflects a memory-based comparison process. Recently, repetition positivity (RP) has been described as a candidate ERP correlate of direct memory trace formation. RP consists of repetition suppression and enhancement effects occurring in different auditory components between 50 and 250ms after sound onset. However, the neuronal generators engaged in the encoding of repeated stimulus features have received little interest. This study intends to investigate the neuronal sources underlying the formation and strengthening of new memory traces by employing a roving-standard paradigm, where trains of different frequencies and different lengths are presented randomly. Source generators of repetition enhanced (RE) and suppressed (RS) activity were modeled using magnetoencephalography (MEG) in healthy subjects. Our results show that, in line with RP findings, N1m (~95-150ms) activity is suppressed with stimulus repetition. In addition, we observed the emergence of a sustained field (~230-270ms) that showed RE. Source analysis revealed neuronal generators of RS and RE located in both auditory and non-auditory areas, like the medial parietal cortex and frontal areas. The different timing and location of neural generators involved in RS and RE points to the existence of functionally separated mechanisms devoted to acoustic memory-trace formation in different auditory processing stages of the human brain. Copyright © 2014 Elsevier Inc. All rights reserved.

  11. Frequency-specific attentional modulation in human primary auditory cortex and midbrain

    NARCIS (Netherlands)

    Riecke, Lars; Peters, Judith C; Valente, Giancarlo; Poser, Benedikt A; Kemper, Valentin G; Formisano, Elia; Sorger, Bettina

    2018-01-01

    Paying selective attention to an audio frequency selectively enhances activity within primary auditory cortex (PAC) at the tonotopic site (frequency channel) representing that frequency. Animal PAC neurons achieve this 'frequency-specific attentional spotlight' by adapting their frequency tuning,

  12. Sustained selective attention to competing amplitude-modulations in human auditory cortex.

    Science.gov (United States)

    Riecke, Lars; Scharke, Wolfgang; Valente, Giancarlo; Gutschalk, Alexander

    2014-01-01

    Auditory selective attention plays an essential role for identifying sounds of interest in a scene, but the neural underpinnings are still incompletely understood. Recent findings demonstrate that neural activity that is time-locked to a particular amplitude-modulation (AM) is enhanced in the auditory cortex when the modulated stream of sounds is selectively attended to under sensory competition with other streams. However, the target sounds used in the previous studies differed not only in their AM, but also in other sound features, such as carrier frequency or location. Thus, it remains uncertain whether the observed enhancements reflect AM-selective attention. The present study aims at dissociating the effect of AM frequency on response enhancement in auditory cortex by using an ongoing auditory stimulus that contains two competing targets differing exclusively in their AM frequency. Electroencephalography results showed a sustained response enhancement for auditory attention compared to visual attention, but not for AM-selective attention (attended AM frequency vs. ignored AM frequency). In contrast, the response to the ignored AM frequency was enhanced, although a brief trend toward response enhancement occurred during the initial 15 s. Together with the previous findings, these observations indicate that selective enhancement of attended AMs in auditory cortex is adaptive under sustained AM-selective attention. This finding has implications for our understanding of cortical mechanisms for feature-based attentional gain control.

  14. Functional Imaging of Human Vestibular Cortex Activity Elicited by Skull Tap and Auditory Tone Burst

    Science.gov (United States)

    Noohi, Fatemeh; Kinnaird, Catherine; Wood, Scott; Bloomberg, Jacob; Mulavara, Ajitkumar; Seidler, Rachael

    2014-01-01

    The aim of the current study was to characterize the brain activation in response to two modes of vestibular stimulation: skull tap and auditory tone burst. The auditory tone burst has been used in previous studies to elicit saccular Vestibular Evoked Myogenic Potentials (VEMP) (Colebatch & Halmagyi 1992; Colebatch et al. 1994). Some researchers have reported that air-conducted skull tap elicits both saccular and utricle VEMPs, while being faster and less irritating for the subjects (Curthoys et al. 2009, Wackym et al., 2012). However, it is not clear whether the skull tap and auditory tone burst elicit the same pattern of cortical activity. Both forms of stimulation target the otolith response, which provides a measurement of vestibular function independent from the semicircular canals. This is of high importance for studying vestibular disorders related to otolith deficits. Previous imaging studies have documented activity in the anterior and posterior insula, superior temporal gyrus, inferior parietal lobule, pre- and postcentral gyri, inferior frontal gyrus, and the anterior cingulate cortex in response to different modes of vestibular stimulation (Bottini et al., 1994; Dieterich et al., 2003; Emri et al., 2003; Schlindwein et al., 2008; Janzen et al., 2008). Here we hypothesized that the skull tap elicits a similar pattern of cortical activity to the auditory tone burst. Subjects put on a set of MR-compatible skull tappers and headphones inside the 3T GE scanner while lying in the supine position with eyes closed. All subjects received both forms of the stimulation; however, the order of stimulation with auditory tone burst and air-conducted skull tap was counterbalanced across subjects. Pneumatically powered skull tappers were placed bilaterally on the cheekbones. The vibration of the cheekbone was transmitted to the vestibular cortex, resulting in a vestibular response (Halmagyi et al., 1995). Auditory tone bursts were also delivered for comparison. To validate

  15. The gap-startle paradigm to assess auditory temporal processing: Bridging animal and human research.

    Science.gov (United States)

    Fournier, Philippe; Hébert, Sylvie

    2016-05-01

    The gap-prepulse inhibition of the acoustic startle (GPIAS) paradigm is the primary test used in animal research to identify gap detection thresholds and impairment. When a silent gap is presented shortly before a loud startling stimulus, the startle reflex is inhibited, and the extent of inhibition is assumed to reflect detection. Here, we applied the same paradigm in humans. One hundred and fifty-seven normal-hearing participants were tested using one of five gap durations (5, 25, 50, 100, 200 ms) in one of two paradigms: with the gap either embedded in, or following, the continuous background noise. The duration-inhibition relationship was observable for both conditions but followed different patterns. In the gap-embedded paradigm, GPIAS increased significantly with gap duration up to 50 ms and then more slowly up to 200 ms (trend only). In contrast, in the gap-following paradigm, significant inhibition (different from 0) was observable only at gap durations from 50 to 200 ms. The finding that different patterns are found depending on gap position within the background noise is compatible with distinct mechanisms underlying each of the two paradigms. © 2016 Society for Psychophysiological Research.
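
    The inhibition measure used in this paradigm can be written down compactly: percent inhibition is one minus the ratio of startle magnitude on gap trials to startle magnitude on no-gap trials. The sketch below assumes peak-to-peak response epochs grouped by gap duration; the data layout is an illustrative assumption.

    ```python
    # Sketch: GPIAS percent inhibition per gap duration,
    # 100 * (1 - startle_with_gap / startle_without_gap).
    import numpy as np

    def startle_magnitude(epoch):
        """Peak-to-peak amplitude of one startle-response epoch."""
        return np.ptp(epoch)

    def gpias_percent_inhibition(gap_trials, nogap_trials):
        """gap_trials: {gap_ms: [epochs]}; nogap_trials: [epochs]."""
        baseline = np.mean([startle_magnitude(e) for e in nogap_trials])
        return {gap_ms: 100.0 * (1.0 - np.mean([startle_magnitude(e) for e in epochs]) / baseline)
                for gap_ms, epochs in gap_trials.items()}

    # inhibition = gpias_percent_inhibition({5: e5, 25: e25, 50: e50, 100: e100, 200: e200}, e_nogap)
    ```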

  16. STUDY ON NOISE LEVEL GENERATED BY HUMAN ACTIVITIES IN SIBIU CITY, ROMANIA

    Directory of Open Access Journals (Sweden)

    Cristina STANCA-MOISE

    2014-10-01

    Full Text Available In this paper, an analysis and monitoring of open-air noise sources from air, rail and road traffic in Sibiu is proposed. From the centralized data obtained from the analysis of measurements performed with noise-level equipment, we concluded that the noise and vibration produced by means of transportation (air, road, rail) can affect human health if they exceed the permissible limits. Noise is present in and part of our lives, and it is a source of pollution of which modern man is often not conscious.

  17. Auditory evacuation beacons

    NARCIS (Netherlands)

    Wijngaarden, S.J. van; Bronkhorst, A.W.; Boer, L.C.

    2005-01-01

    Auditory evacuation beacons can be used to guide people to safe exits, even when vision is totally obscured by smoke. Conventional beacons make use of modulated noise signals. Controlled evacuation experiments show that such signals require explicit instructions and are often misunderstood. A new

  18. Acute stress alters auditory selective attention in humans independent of HPA: a study of evoked potentials.

    Directory of Open Access Journals (Sweden)

    Ludger Elling

    Full Text Available BACKGROUND: Acute stress is a stereotypical but multimodal response to a present or imminent challenge overcharging an organism. Among the different branches of this multimodal response, the consequences of glucocorticoid secretion have been extensively investigated, mostly in connection with long-term memory (LTM). However, stress responses comprise other endocrine signaling and altered neuronal activity wholly independent of pituitary regulation. To date, knowledge of the impact of such "paracorticoidal" stress responses on higher cognitive functions is scarce. We investigated the impact of an ecological stressor on the ability to direct selective attention using event-related potentials in humans. Based on research in rodents, we assumed that a stress-induced imbalance of catecholaminergic transmission would impair this ability. METHODOLOGY/PRINCIPAL FINDINGS: The stressor consisted of a single cold pressor test. Auditory negative difference (Nd) and mismatch negativity (MMN) were recorded in a tonal dichotic listening task. A time series of such tasks confirmed an increased distractibility occurring 4-7 minutes after onset of the stressor as reflected by an attenuated Nd. Salivary cortisol began to rise 8-11 minutes after onset when no further modulations in the event-related potentials (ERP) occurred, thus precluding a causal relationship. This effect may be attributed to a stress-induced activation of mesofrontal dopaminergic projections. It may also be attributed to an activation of noradrenergic projections. Known characteristics of the modulation of ERP by different stress-related ligands were used for further disambiguation of causality. The conjuncture of an attenuated Nd and an increased MMN might be interpreted as indicating a dopaminergic influence. The selective effect on the late portion of the Nd provides another tentative clue for this. CONCLUSIONS/SIGNIFICANCE: Prior studies have deliberately tracked the adrenocortical influence

  19. Monaural and binaural contributions to interaural-level-difference sensitivity in human auditory cortex.

    Science.gov (United States)

    Stecker, G Christopher; McLaughlin, Susan A; Higgins, Nathan C

    2015-10-15

    Whole-brain functional magnetic resonance imaging was used to measure blood-oxygenation-level-dependent (BOLD) responses in human auditory cortex (AC) to sounds with intensity varying independently in the left and right ears. Echoplanar images were acquired at 3 Tesla with sparse image acquisition once per 12-second block of sound stimulation. Combinations of binaural intensity and stimulus presentation rate were varied between blocks, and selected to allow measurement of response-intensity functions in three configurations: monaural 55-85 dB SPL, binaural 55-85 dB SPL with intensity equal in both ears, and binaural with average binaural level of 70 dB SPL and interaural level differences (ILD) ranging ±30 dB (i.e., favoring the left or right ear). Comparison of response functions equated for contralateral intensity revealed that BOLD-response magnitudes (1) generally increased with contralateral intensity, consistent with positive drive of the BOLD response by the contralateral ear, (2) were larger for contralateral monaural stimulation than for binaural stimulation, consistent with negative effects (e.g., inhibition) of ipsilateral input, which were strongest in the left hemisphere, and (3) also increased with ipsilateral intensity when contralateral input was weak, consistent with additional, positive, effects of ipsilateral stimulation. Hemispheric asymmetries in the spatial extent and overall magnitude of BOLD responses were generally consistent with previous studies demonstrating greater bilaterality of responses in the right hemisphere and stricter contralaterality in the left hemisphere. Finally, comparison of responses to fast (40/s) and slow (5/s) stimulus presentation rates revealed significant rate-dependent adaptation of the BOLD response that varied across ILD values. Copyright © 2015. Published by Elsevier Inc.

  20. Discrimination of timbre in early auditory responses of the human brain.

    Directory of Open Access Journals (Sweden)

    Jaeho Seol

    Full Text Available BACKGROUND: The issue of how differences in timbre are represented in the neural response has still not been well addressed, particularly with regard to the relevant brain mechanisms. Here we employ phasing and clipping of tones to produce auditory stimuli that differ in timbre, capturing its multidimensional nature. We investigated the auditory response, as well as sensory gating, using magnetoencephalography (MEG). METHODOLOGY/PRINCIPAL FINDINGS: Thirty-five healthy subjects without hearing deficit participated in the experiments. Pairs of tones that were either the same or different in timbre were presented in a conditioning (S1)-testing (S2) paradigm with an interval of 500 ms. The magnitudes of the auditory M50 and M100 responses differed with timbre in both hemispheres. This result supports the idea that timbre, at least as manipulated by phasing and clipping, is discriminated in early auditory processing. The effect of S1 on the second response of a pair appeared in the M100 of the left hemisphere, whereas only in the right hemisphere did both the M50 and M100 responses to S2 reflect whether the two stimuli in a pair were the same or not. Both M50 and M100 magnitudes differed with presentation order (S1 vs. S2) for both same and different conditions in both hemispheres. CONCLUSIONS/SIGNIFICANCE: Our results demonstrate that the auditory response depends on timbre characteristics. Moreover, it was revealed that auditory sensory gating is determined not by the stimulus that directly evokes the response, but rather by whether or not the two stimuli are identical in timbre.

  1. Evidence of functional connectivity between auditory cortical areas revealed by amplitude modulation sound processing.

    Science.gov (United States)

    Guéguin, Marie; Le Bouquin-Jeannès, Régine; Faucon, Gérard; Chauvel, Patrick; Liégeois-Chauvel, Catherine

    2007-02-01

    The human auditory cortex includes several interconnected areas. A better understanding of the mechanisms involved in auditory cortical functions requires a detailed knowledge of neuronal connectivity between functional cortical regions. In human, it is difficult to track in vivo neuronal connectivity. We investigated the interarea connection in vivo in the auditory cortex using a method of directed coherence (DCOH) applied to depth auditory evoked potentials (AEPs). This paper presents simultaneous AEPs recordings from insular gyrus (IG), primary and secondary cortices (Heschl's gyrus and planum temporale), and associative areas (Brodmann area [BA] 22) with multilead intracerebral electrodes in response to sinusoidal modulated white noises in 4 epileptic patients who underwent invasive monitoring with depth electrodes for epilepsy surgery. DCOH allowed estimation of the causality between 2 signals recorded from different cortical sites. The results showed 1) a predominant auditory stream within the primary auditory cortex from the most medial region to the most lateral one whatever the modulation frequency, 2) unidirectional functional connection from the primary to secondary auditory cortex, 3) a major auditory propagation from the posterior areas to the anterior ones, particularly at 8, 16, and 32 Hz, and 4) a particular role of Heschl's sulcus dispatching information to the different auditory areas. These findings suggest that cortical processing of auditory information is performed in serial and parallel streams. Our data showed that the auditory propagation could not be associated to a unidirectional traveling wave but to a constant interaction between these areas that could reflect the large adaptive and plastic capacities of auditory cortex. The role of the IG is discussed.

  2. Electrical noise modulates perception of electrical pulses in humans: sensation enhancement via stochastic resonance.

    Science.gov (United States)

    Iliopoulos, Fivos; Nierhaus, Till; Villringer, Arno

    2014-03-01

    Although noise is usually considered to be harmful for signal detection and information transmission, stochastic resonance (SR) describes the counterintuitive phenomenon of noise enhancing the detection and transmission of weak input signals. In mammalian sensory systems, SR-related phenomena may arise both in the peripheral and the central nervous system. Here, we investigate behavioral SR effects of subliminal electrical noise stimulation on the perception of somatosensory stimuli in humans. We compare the likelihood to detect near-threshold pulses of different intensities applied on the left index finger during presence vs. absence of subliminal noise on the same or an adjacent finger. We show that (low-pass) noise can enhance signal detection when applied on the same finger. This enhancement is strong for near-threshold pulses below the 50% detection threshold and becomes stronger when near-threshold pulses are applied as brief trains. The effect reverses at pulse intensities above threshold, especially when noise is replaced by subliminal sinusoidal stimulation, arguing for a peripheral direct current addition. Unfiltered noise applied on longer pulses enhances detection of all pulse intensities. Noise applied to an adjacent finger has two opposing effects: an inhibiting effect (presumably due to lateral inhibition) and an enhancing effect (most likely due to SR in the central nervous system). In summary, we demonstrate that subliminal noise can significantly modulate detection performance of near-threshold stimuli. Our results indicate SR effects in the peripheral and central nervous system.
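
    The core stochastic-resonance claim, that an intermediate amount of noise transmits a subthreshold signal best, can be illustrated with a toy threshold detector: the correlation between a subthreshold sinusoid and the thresholded output rises and then falls as noise increases. All parameter values below are arbitrary illustrations, not the stimulation settings of the study.

    ```python
    # Toy stochastic-resonance demo: a subthreshold sinusoid plus noise passed
    # through a hard threshold; input-output correlation peaks at moderate noise.
    import numpy as np

    rng = np.random.default_rng(0)

    def sr_correlation(noise_rms, threshold=1.0, amp=0.8, f_hz=5.0, fs=1000.0, dur_s=10.0):
        t = np.arange(0.0, dur_s, 1.0 / fs)
        subthreshold = amp * np.sin(2.0 * np.pi * f_hz * t)     # never crosses threshold alone
        output = (subthreshold + rng.normal(0.0, noise_rms, t.size) > threshold).astype(float)
        if output.std() == 0.0:                                  # no threshold crossings
            return 0.0
        return float(np.corrcoef(subthreshold, output)[0, 1])

    for noise_rms in (0.0, 0.05, 0.2, 0.5, 1.0, 3.0):
        print(f"noise RMS {noise_rms:4.2f} -> correlation {sr_correlation(noise_rms):.3f}")
    ```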

  3. Occupational Styrene Exposure on Auditory Function Among Adults: A Systematic Review of Selected Workers

    Directory of Open Access Journals (Sweden)

    Francis T. Pleban

    2017-12-01

    Full Text Available A review study was conducted to examine the adverse effects of styrene, styrene mixtures, and styrene and/or styrene mixtures combined with noise on the auditory system in humans employed in occupational settings. The search included peer-reviewed articles published in English, involving human volunteers, and spanning a 25-year period (1990–2015). Studies included peer-reviewed journal articles, case–control studies, and case reports. Animal studies were excluded. An initial search identified 40 studies; after screening for inclusion, 13 studies were retrieved for full-text examination and review. As a whole, the results range from no association to mild associations between styrene exposure and auditory dysfunction, noting relatively small sample sizes. However, four studies investigating styrene together with other organic solvent mixtures and noise suggested that combined exposure to styrene and organic solvent mixtures may be more ototoxic than exposure to noise alone. There is little literature examining the effect of styrene on auditory functioning in humans. Nonetheless, the findings suggest that public health professionals and policy makers should be made aware of future research needs pertaining to hearing impairment and ototoxicity from styrene. It is recommended that chronically styrene-exposed individuals be routinely evaluated with a comprehensive audiological test battery to detect early signs of auditory dysfunction. Keywords: auditory system, human exposure, ototoxicity, styrene

  4. Auditory Pattern Memory: Mechanisms of Tonal Sequence Discrimination by Human Observers

    Science.gov (United States)

    1988-10-30

    and Creelman (1977) in a study of categorical perception. Tanner’s model included a short-term decaying memory for the acoustic input to the system plus... auditory pattern components, J. Acoust. Soc. Am., 76, 1037-1044. Macmillan, N. A., Kaplan, H. L., & Creelman, C. D. (1977). The psychophysics of

  5. Functional Imaging of Human Vestibular Cortex Activity Elicited by Skull Tap and Auditory Tone Burst

    Science.gov (United States)

    Noohi, F.; Kinnaird, C.; Wood, S.; Bloomberg, J.; Mulavara, A.; Seidler, R.

    2016-01-01

    The current study characterizes brain activation in response to two modes of vestibular stimulation: skull tap and auditory tone burst. The auditory tone burst has been used in previous studies to elicit either the vestibulo-spinal reflex (saccular-mediated colic Vestibular Evoked Myogenic Potentials (cVEMP)), or the ocular muscle response (utricle-mediated ocular VEMP (oVEMP)). Some researchers have reported that air-conducted skull tap elicits both saccular and utricle-mediated VEMPs, while being faster and less irritating for the subjects. However, it is not clear whether the skull tap and auditory tone burst elicit the same pattern of cortical activity. Both forms of stimulation target the otolith response, which provides a measurement of vestibular function independent from semicircular canals. This is of high importance for studying otolith-specific deficits, including gait and balance problems that astronauts experience upon returning to earth. Previous imaging studies have documented activity in the anterior and posterior insula, superior temporal gyrus, inferior parietal lobule, inferior frontal gyrus, and the anterior cingulate cortex in response to different modes of vestibular stimulation. Here we hypothesized that skull taps elicit similar patterns of cortical activity as the auditory tone bursts, and previous vestibular imaging studies. Subjects wore bilateral MR compatible skull tappers and headphones inside the 3T GE scanner, while lying in the supine position, with eyes closed. Subjects received both forms of the stimulation in a counterbalanced fashion. Pneumatically powered skull tappers were placed bilaterally on the cheekbones. The vibration of the cheekbone was transmitted to the vestibular system, resulting in the vestibular cortical response. Auditory tone bursts were also delivered for comparison. To validate our stimulation method, we measured the ocular VEMP outside of the scanner. This measurement showed that both skull tap and auditory

  6. Application of Linear Mixed-Effects Models in Human Neuroscience Research: A Comparison with Pearson Correlation in Two Auditory Electrophysiology Studies.

    Science.gov (United States)

    Koerner, Tess K; Zhang, Yang

    2017-02-27

    Neurophysiological studies are often designed to examine relationships between measures from different testing conditions, time points, or analysis techniques within the same group of participants. Appropriate statistical techniques that can take into account repeated measures and multivariate predictor variables are integral and essential to successful data analysis and interpretation. This work implements and compares conventional Pearson correlations and linear mixed-effects (LME) regression models using data from two recently published auditory electrophysiology studies. For the specific research questions in both studies, the Pearson correlation test is inappropriate for determining strengths between the behavioral responses for speech-in-noise recognition and the multiple neurophysiological measures as the neural responses across listening conditions were simply treated as independent measures. In contrast, the LME models allow a systematic approach to incorporate both fixed-effect and random-effect terms to deal with the categorical grouping factor of listening conditions, between-subject baseline differences in the multiple measures, and the correlational structure among the predictor variables. Together, the comparative data demonstrate the advantages as well as the necessity to apply mixed-effects models to properly account for the built-in relationships among the multiple predictor variables, which has important implications for proper statistical modeling and interpretation of human behavior in terms of neural correlates and biomarkers.
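
    A hedged sketch of the contrast drawn above, using long-format data with repeated measures per participant: a Pearson correlation treats every row as independent, while a linear mixed-effects model adds a fixed effect for listening condition and a per-subject random intercept. Column names are illustrative assumptions.

    ```python
    # Sketch: Pearson correlation vs. a linear mixed-effects model on repeated-measures data.
    import pandas as pd
    import scipy.stats as stats
    import statsmodels.formula.api as smf

    def compare_models(df: pd.DataFrame):
        """df columns (assumed): subject, listening_condition, neural_measure, behavior_score."""
        # Pearson correlation ignores that rows from the same subject are not independent.
        r, p = stats.pearsonr(df["neural_measure"], df["behavior_score"])

        # LME: fixed effects for the neural measure and listening condition,
        # random intercept per subject for between-subject baseline differences.
        lme = smf.mixedlm("behavior_score ~ neural_measure + C(listening_condition)",
                          data=df, groups=df["subject"]).fit()
        return (r, p), lme

    # (r, p), lme = compare_models(df); print(lme.summary())
    ```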

  7. Impacts of pavement types on in-vehicle noise and human health.

    Science.gov (United States)

    Li, Qing; Qiao, Fengxiang; Yu, Lei

    2016-01-01

    Noise is a major source of pollution that can affect the human physiology and living environment. According to the World Health Organization (WHO), an exposure for longer than 24 hours to noise levels above 70 dB(A) may damage human hearing sensitivity, induce adverse health effects, and cause anxiety to residents nearby roadways. Pavement type with different roughness is one of the associated sources that may contribute to in-vehicle noise. Most previous studies have focused on the impact of pavement type on the surrounding acoustic environment of roadways, and given little attention to in-vehicle noise levels. This paper explores the impacts of different pavement types on in-vehicle noise levels and the associated adverse health effects. An old concrete pavement and a pavement with a thin asphalt overlay were chosen as the test beds. The in-vehicle noise caused by the asphalt and concrete pavements were measured, as well as the drivers' corresponding heart rates and reported riding comfort. Results show that the overall in-vehicle sound levels are higher than 70 dB(A) even at midnight. The newly overlaid asphalt pavement reduced in-vehicle noise at a driving speed of 96.5 km/hr by approximately 6 dB(A). Further, on the concrete pavement with higher roughness, driver heart rates were significantly higher than on the asphalt pavement. Drivers reported feeling more comfortable when driving on asphalt than on concrete pavement. Further tests on more drivers with different demographic characteristics, along highways with complicated configurations, and an examination of more factors contributing to in-vehicle noise are recommended, in addition to measuring additional physical symptoms of both drivers and passengers. While there have been many previous noise-related studies, few have addressed in-vehicle noise. Most studies have focused on the noise that residents have complained about, such as neighborhood traffic noise. As yet, there have been no complaints by

  8. Effects of exposure to noise and indoor air pollution on human perception and symptoms

    DEFF Research Database (Denmark)

    Witterseh, Thomas; Wargocki, Pawel; Fang, Lei

    1999-01-01

    The objective of the present study was to investigate human perception and SBS symptoms when people are exposed simultaneously to different levels of air pollution and ventilation noise. The air quality in an office was modified by placing or removing a carpet, and the background noise level was modified by playing a recording of ventilation noise. Thirty female subjects, six at a time, occupied the office for 4.4 hours. The subjects assessed the air quality, the noise, and the indoor environment upon entering the office and on six occasions during occupation. Furthermore, SBS symptoms... of the occupants were recorded throughout the exposure period. During occupation, the subjects performed simulated office work. The results show that elevated air pollution and noise in an office can interact and negatively affect office workers by increasing the prevalence of SBS symptoms. A moderate increase...

  9. Motor-auditory-visual integration: The role of the human mirror neuron system in communication and communication disorders.

    Science.gov (United States)

    Le Bel, Ronald M; Pineda, Jaime A; Sharma, Anu

    2009-01-01

    The mirror neuron system (MNS) is a trimodal system composed of neuronal populations that respond to motor, visual, and auditory stimulation, such as when an action is performed, observed, heard or read about. In humans, the MNS has been identified using neuroimaging techniques (such as fMRI and mu suppression in the EEG). It reflects an integration of motor-auditory-visual information processing related to aspects of language learning including action understanding and recognition. Such integration may also form the basis for language-related constructs such as theory of mind. In this article, we review the MNS system as it relates to the cognitive development of language in typically developing children and in children at-risk for communication disorders, such as children with autism spectrum disorder (ASD) or hearing impairment. Studying MNS development in these children may help illuminate an important role of the MNS in children with communication disorders. Studies with deaf children are especially important because they offer potential insights into how the MNS is reorganized when one modality, such as audition, is deprived during early cognitive development, and this may have long-term consequences on language maturation and theory of mind abilities. Readers will be able to (1) understand the concept of mirror neurons, (2) identify cortical areas associated with the MNS in animal and human studies, (3) discuss the use of mu suppression in the EEG for measuring the MNS in humans, and (4) discuss MNS dysfunction in children with (ASD).

  10. ICBEN review of research on the biological effects of noise 2011-2014

    Science.gov (United States)

    Basner, Mathias; Brink, Mark; Bristow, Abigail; de Kluizenaar, Yvonne; Finegold, Lawrence; Hong, Jiyoung; Janssen, Sabine A; Klaeboe, Ronny; Leroux, Tony; Liebl, Andreas; Matsui, Toshihito; Schwela, Dieter; Sliwinska-Kowalska, Mariola; Sörqvist, Patrik

    2015-01-01

    The mandate of the International Commission on Biological Effects of Noise (ICBEN) is to promote a high level of scientific research concerning all aspects of noise-induced effects on human beings and animals. In this review, ICBEN team chairs and co-chairs summarize relevant findings, publications, developments, and policies related to the biological effects of noise, with a focus on the period 2011-2014 and for the following topics: Noise-induced hearing loss; nonauditory effects of noise; effects of noise on performance and behavior; effects of noise on sleep; community response to noise; and interactions with other agents and contextual factors. Occupational settings and transport have been identified as the most prominent sources of noise that affect health. These reviews demonstrate that noise is a prevalent and often underestimated threat for both auditory and nonauditory health and that strategies for the prevention of noise and its associated negative health consequences are needed to promote public health. PMID:25774609

  11. Long-term exposure to noise impairs cortical sound processing and attention control.

    Science.gov (United States)

    Kujala, Teija; Shtyrov, Yury; Winkler, Istvan; Saher, Marieke; Tervaniemi, Mari; Sallinen, Mikael; Teder-Sälejärvi, Wolfgang; Alho, Kimmo; Reinikainen, Kalevi; Näätänen, Risto

    2004-11-01

    Long-term exposure to noise impairs human health, causing pathological changes in the inner ear as well as other anatomical and physiological deficits. Numerous individuals are daily exposed to excessive noise. However, there is a lack of systematic research on the effects of noise on cortical function. Here we report data showing that long-term exposure to noise has a persistent effect on central auditory processing and leads to concurrent behavioral deficits. We found that speech-sound discrimination was impaired in noise-exposed individuals, as indicated by behavioral responses and the mismatch negativity brain response. Furthermore, irrelevant sounds increased the distractibility of the noise-exposed subjects, which was shown by increased interference in task performance and aberrant brain responses. These results demonstrate that long-term exposure to noise has long-lasting detrimental effects on central auditory processing and attention control.

  12. The effects of tones in noise on human annoyance and performance

    Science.gov (United States)

    Lee, Joonhee

    Building mechanical equipment often generates prominent tones because most systems include rotating parts like fans and pumps. These tonal noises can cause unpleasant user experiences in spaces and, in turn, lead to increased complaints by building occupants. Currently, architectural engineers can apply the noise criteria guidelines in standards or publications to achieve acceptable noise conditions for assorted types of spaces. However, these criteria do not apply well if the noise contains perceptible tones. The annoyance thresholds experienced by the general population with regard to the degree of tones in noise are a significant piece of knowledge that has not been well established. Thus, this dissertation addresses the relationship between human perception and noises with tones in the built environment. Four phases of subjective testing were conducted in an indoor acoustic testing chamber at the University of Nebraska to achieve the research objective. The results indicate that even the least prominent tones in noises can significantly decrease the cognitive performance of participants on a mentally demanding task. Factorial repeated-measures analysis of variance of the test results showed that tonality has a crucial influence on the working memory capacity of subjects, whereas loudness levels alone did not. A multidimensional annoyance model, incorporating psycho-acoustical attributes of noise in addition to loudness and tonality, has been proposed to predict annoyance more accurately.

  13. How does image noise affect actual and predicted human gaze allocation in assessing image quality?

    Science.gov (United States)

    Röhrbein, Florian; Goddard, Peter; Schneider, Michael; James, Georgina; Guo, Kun

    2015-07-01

    A central research question in natural vision is how to allocate fixation to extract informative cues for scene perception. With high quality images, psychological and computational studies have made significant progress in understanding and predicting human gaze allocation in scene exploration. However, it is unclear whether these findings can be generalised to degraded naturalistic visual inputs. In this eye-tracking and computational study, we methodically distorted both man-made and natural scenes with a Gaussian low-pass filter, a circular averaging filter, and additive Gaussian white noise, and monitored participants' gaze behaviour in assessing perceived image qualities. Compared with original high quality images, distorted images attracted fewer fixations but longer fixation durations, shorter saccade distances and a stronger central fixation bias. This impact of image noise manipulation on gaze distribution was mainly determined by noise intensity rather than noise type, and was more pronounced for natural scenes than for man-made scenes. We further compared four high-performing visual attention models in predicting human gaze allocation in degraded scenes, and found that model performance lacked human-like sensitivity to noise type and intensity, and was considerably worse than human performance measured as inter-observer variance. Furthermore, the central fixation bias is a major predictor for human gaze allocation, which becomes more prominent with increased noise intensity. Our results indicate a crucial role of external noise intensity in determining scene-viewing gaze behaviour, which should be considered in the development of realistic human-vision-inspired attention models. Copyright © 2015 Elsevier Ltd. All rights reserved.
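
    The three distortions named above map directly onto standard image-processing operations; the sketch below applies them to a grayscale image scaled to [0, 1]. Kernel sizes and the noise standard deviation are arbitrary illustrative values, not the levels used in the experiment.

    ```python
    # Sketch: Gaussian low-pass filtering, circular (disk) averaging, and additive
    # Gaussian white noise applied to a grayscale image in [0, 1].
    import numpy as np
    from scipy import ndimage

    rng = np.random.default_rng(0)

    def gaussian_lowpass(img, sigma=3.0):
        return ndimage.gaussian_filter(img, sigma=sigma)

    def circular_average(img, radius=5):
        y, x = np.ogrid[-radius:radius + 1, -radius:radius + 1]
        disk = (x ** 2 + y ** 2 <= radius ** 2).astype(float)
        return ndimage.convolve(img, disk / disk.sum(), mode="reflect")

    def additive_white_noise(img, sigma=0.1):
        return np.clip(img + rng.normal(0.0, sigma, img.shape), 0.0, 1.0)

    # degraded = [gaussian_lowpass(img), circular_average(img), additive_white_noise(img)]
    ```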

  14. How does stochastic resonance work within the human brain? - Psychophysics of internal and external noise

    International Nuclear Information System (INIS)

    Aihara, Takatsugu; Kitajo, Keiichi; Nozaki, Daichi; Yamamoto, Yoshiharu

    2010-01-01

    We review how research on stochastic resonance (SR) in neuroscience has evolved and point out that the previous studies have overlooked the interaction between internal and external noise. We propose a new psychometric function incorporating SR effects, and show that a Bayesian adaptive method applied to the function efficiently estimates the parameters of the function. Using this procedure in visual detection experiments, we provide significant insight into the relationship between internal and external noise in SR within the human brain.

  15. Mudança significativa do limiar auditivo em trabalhadores expostos a diferentes níveis de ruído Significant auditory threshold shift among workers exposed to different noise levels

    Directory of Open Access Journals (Sweden)

    Flavia Cardoso Oliva

    2011-09-01

    Full Text Available PURPOSE: To assess hearing and the occurrence of significant auditory threshold shifts in meat-processing facility workers exposed to noise levels below national and international standards and regulations, and to compare them with workers exposed to noise levels considered excessive. METHODS: A database containing longitudinal information on 266 workers was used. Workers with a minimum of three audiometric examinations and with available noise-exposure data were selected, leaving 63 records, which were classified into three noise-exposure levels: 79 to 84.9 dB(A), 85 to 89.9 dB(A), and 90 to 98.8 dB(A). The occurrence of hearing loss and of significant auditory threshold shifts was assessed for the participants in each subgroup. RESULTS: Differences were found at all frequencies when comparing the mean auditory thresholds for each frequency as a function of noise-exposure level. The correlation between the occurrence of hearing loss induced by high sound pressure levels and years of noise exposure within the current company was significant (R=0.373; p=0.079). Permanent auditory threshold shifts were found at all three noise-exposure levels. CONCLUSION: The findings of the present study suggest an association between significant auditory threshold shifts in workers and years of exposure to noise considered to be low risk.

  16. Central auditory masking by an illusory tone.

    Directory of Open Access Journals (Sweden)

    Christopher J Plack

    Full Text Available Many natural sounds fluctuate over time. The detectability of sounds in a sequence can be reduced by prior stimulation in a process known as forward masking. Forward masking is thought to reflect neural adaptation or neural persistence in the auditory nervous system, but it has been unclear where in the auditory pathway this processing occurs. To address this issue, the present study used a "Huggins pitch" stimulus, the perceptual effects of which depend on central auditory processing. Huggins pitch is an illusory tonal sensation produced when the same noise is presented to the two ears except for a narrow frequency band that is different (decorrelated) between the ears. The pitch sensation depends on the combination of the inputs to the two ears, a process that first occurs at the level of the superior olivary complex in the brainstem. Here it is shown that a Huggins pitch stimulus produces more forward masking in the frequency region of the decorrelation than a noise stimulus identical to the Huggins-pitch stimulus except with perfect correlation between the ears. This stimulus has a peripheral neural representation that is identical to that of the Huggins-pitch stimulus. The results show that processing in, or central to, the superior olivary complex can contribute to forward masking in human listeners.

  17. Induction of plasticity in the human motor cortex by pairing an auditory stimulus with TMS

    Directory of Open Access Journals (Sweden)

    Paul Fredrick Sowman

    2014-06-01

    Full Text Available Acoustic stimuli can cause a transient increase in the excitability of the motor cortex. The current study leverages this phenomenon to develop a method for testing the integrity of auditorimotor integration and the capacity for auditorimotor plasticity. We demonstrate that appropriately timed transcranial magnetic stimulation (TMS) of the hand area, paired with auditorily mediated excitation of the motor cortex, induces an enhancement of motor cortex excitability that lasts beyond the time of stimulation. This result demonstrates for the first time that paired associative stimulation (PAS)-induced plasticity within the motor cortex is applicable with auditory stimuli. We propose that the method developed here might provide a useful tool for future studies that measure auditory-motor connectivity in communication disorders.

  18. Benzodiazepine temazepam suppresses the transient auditory 40-Hz response amplitude in humans.

    Science.gov (United States)

    Jääskeläinen, I P; Hirvonen, J; Saher, M; Pekkonen, E; Sillanaukee, P; Näätänen, R; Tiitinen, H

    1999-06-18

    To discern the role of the GABA(A) receptors in the generation and attentive modulation of the transient auditory 40-Hz response, the effects of the benzodiazepine temazepam (10 mg) were studied in 10 healthy social drinkers, using a double-blind placebo-controlled design. Three hundred Hertz standard and 330 Hz rare deviant tones were presented to the left, and 1000 Hz standards and 1100 Hz deviants to the right ear of the subjects. Subjects attended to a designated ear and were to detect deviants therein while ignoring tones to the other. Temazepam significantly suppressed the amplitude of the 40-Hz response, the effect being equal for attended and non-attended tone responses. This suggests involvement of GABA(A) receptors in transient auditory 40-Hz response generation, however, not in the attentive modulation of the 40-Hz response.

  19. Non-linear laws of echoic memory and auditory change detection in humans

    OpenAIRE

    Inui, Koji; Urakawa, Tomokazu; Yamashiro, Koya; Otsuru, Naofumi; Nishihara, Makoto; Takeshima, Yasuyuki; Keceli, Sumru; Kakigi, Ryusuke

    2010-01-01

    Abstract Background The detection of any abrupt change in the environment is important to survival. Since memory of preceding sensory conditions is necessary for detecting changes, such a change-detection system relates closely to the memory system. Here we used an auditory change-related N1 subcomponent (change-N1) of event-related brain potentials to investigate cortical mechanisms underlying change detection and echoic memory. Results Change-N1 was elicited by a simple paradigm with two to...

  20. Contralateral Noise Stimulation Delays P300 Latency in School-Aged Children.

    Science.gov (United States)

    Ubiali, Thalita; Sanfins, Milaine Dominici; Borges, Leticia Reis; Colella-Santos, Maria Francisca

    2016-01-01

    The auditory cortex modulates auditory afferents through the olivocochlear system, which innervates the outer hair cells and the afferent neurons under the inner hair cells in the cochlea. Most of the studies that investigated efferent activity in humans focused on evaluating the suppression of otoacoustic emissions by stimulating the contralateral ear with noise, which assesses the activation of the medial olivocochlear bundle. The neurophysiology and the mechanisms involving efferent activity in higher regions of the auditory pathway, however, are still unknown. Also, the lack of studies investigating the effects of noise on the human auditory cortex, especially in the paediatric population, points to the need for recording the late auditory potentials in noise conditions. Assessing the auditory efferents in school-aged children is highly important due to some of their attributed functions, such as selective attention and signal detection in noise, which are important abilities related to the development of language and academic skills. For this reason, the aim of the present study was to evaluate the effects of noise on the P300 responses of children with normal hearing. P300 was recorded in 27 children aged from 8 to 14 years with normal hearing in two conditions: with and without contralateral white noise stimulation. P300 latencies were significantly longer in the presence of contralateral noise. No significant changes were observed for the amplitude values. Contralateral white noise stimulation delayed P300 latency in a group of school-aged children with normal hearing. These results suggest a possible influence of medial olivocochlear activation on P300 responses under noise conditions.

  1. Contralateral Noise Stimulation Delays P300 Latency in School-Aged Children.

    Directory of Open Access Journals (Sweden)

    Thalita Ubiali

    Full Text Available The auditory cortex modulates auditory afferents through the olivocochlear system, which innervates the outer hair cells and the afferent neurons under the inner hair cells in the cochlea. Most of the studies that investigated efferent activity in humans focused on evaluating the suppression of otoacoustic emissions by stimulating the contralateral ear with noise, which assesses the activation of the medial olivocochlear bundle. The neurophysiology and the mechanisms involving efferent activity in higher regions of the auditory pathway, however, are still unknown. Also, the lack of studies investigating the effects of noise on the human auditory cortex, especially in the paediatric population, points to the need for recording the late auditory potentials in noise conditions. Assessing the auditory efferents in school-aged children is highly important due to some of their attributed functions, such as selective attention and signal detection in noise, which are important abilities related to the development of language and academic skills. For this reason, the aim of the present study was to evaluate the effects of noise on the P300 responses of children with normal hearing. P300 was recorded in 27 children aged from 8 to 14 years with normal hearing in two conditions: with and without contralateral white noise stimulation. P300 latencies were significantly longer in the presence of contralateral noise. No significant changes were observed for the amplitude values. Contralateral white noise stimulation delayed P300 latency in a group of school-aged children with normal hearing. These results suggest a possible influence of medial olivocochlear activation on P300 responses under noise conditions.

  2. Auditory agnosia.

    Science.gov (United States)

    Slevc, L Robert; Shell, Alison R

    2015-01-01

    Auditory agnosia refers to impairments in sound perception and identification despite intact hearing, cognitive functioning, and language abilities (reading, writing, and speaking). Auditory agnosia can be general, affecting all types of sound perception, or can be (relatively) specific to a particular domain. Verbal auditory agnosia (also known as (pure) word deafness) refers to deficits specific to speech processing, environmental sound agnosia refers to difficulties confined to non-speech environmental sounds, and amusia refers to deficits confined to music. These deficits can be apperceptive, affecting basic perceptual processes, or associative, affecting the relation of a perceived auditory object to its meaning. This chapter discusses what is known about the behavioral symptoms and lesion correlates of these different types of auditory agnosia (focusing especially on verbal auditory agnosia), evidence for the role of a rapid temporal processing deficit in some aspects of auditory agnosia, and the few attempts to treat the perceptual deficits associated with auditory agnosia. A clear picture of auditory agnosia has been slow to emerge, hampered by the considerable heterogeneity in behavioral deficits, associated brain damage, and variable assessments across cases. Despite this lack of clarity, these striking deficits in complex sound processing continue to inform our understanding of auditory perception and cognition. © 2015 Elsevier B.V. All rights reserved.

  3. Optical coherence tomography noise modeling and fundamental bounds on human retinal layer segmentation accuracy (Conference Presentation)

    Science.gov (United States)

    DuBose, Theodore B.; Milanfar, Peyman; Izatt, Joseph A.; Farsiu, Sina

    2016-03-01

    The human retina is composed of several layers, visible by in vivo optical coherence tomography (OCT) imaging. To enhance diagnostics of retinal diseases, several algorithms have been developed to automatically segment one or more of the boundaries of these layers. OCT images are corrupted by noise, which is frequently the result of the detector noise and speckle, a type of coherent noise resulting from the presence of several scatterers in each voxel. However, it is unknown what the empirical distribution of noise in each layer of the retina is, and how the magnitude and distribution of the noise affects the lower bounds of segmentation accuracy. Five healthy volunteers were imaged using a spectral domain OCT probe from Bioptigen, Inc, centered at 850nm with 4.6µm full width at half maximum axial resolution. Each volume was segmented by expert manual graders into nine layers. The histograms of intensities in each layer were then fit to seven possible noise distributions from the literature on speckle and image processing. Using these empirical noise distributions and empirical estimates of the intensity of each layer, the Cramer-Rao lower bound (CRLB), a measure of the variance of an estimator, was calculated for each boundary layer. Additionally, the optimum bias of a segmentation algorithm was calculated, and a corresponding biased CRLB was calculated, which represents the improved performance an algorithm can achieve by using prior knowledge, such as the smoothness and continuity of layer boundaries. Our general mathematical model can be easily adapted for virtually any OCT modality.
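
    The distribution-fitting step described above can be approximated by maximum-likelihood fits of candidate noise models to the intensities of one segmented layer, compared by AIC. The sketch below uses synthetic data and a few scipy.stats distributions as stand-ins for the seven candidates in the study; it does not reproduce the authors' CRLB derivation.

```python
import numpy as np
from scipy import stats

# Synthetic stand-in for the pixel intensities of one segmented retinal layer
rng = np.random.default_rng(0)
layer_intensities = rng.gamma(shape=3.0, scale=10.0, size=5000)

candidates = {
    "rayleigh": stats.rayleigh,
    "gamma": stats.gamma,
    "lognorm": stats.lognorm,
    "rice": stats.rice,
}

results = {}
for name, dist in candidates.items():
    params = dist.fit(layer_intensities)          # maximum-likelihood fit
    loglik = np.sum(dist.logpdf(layer_intensities, *params))
    results[name] = 2 * len(params) - 2 * loglik  # Akaike information criterion

best = min(results, key=results.get)
print("AIC per candidate:", {k: round(v, 1) for k, v in results.items()})
print("best-fitting noise model:", best)
```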

  4. A Method for Simulation of Rotorcraft Fly-In Noise for Human Response Studies

    Science.gov (United States)

    Rizzi, Stephen A.; Christian, Andrew

    2015-01-01

    The low frequency content of rotorcraft noise allows it to be heard over great distances. This factor contributes to the disruption of natural quiet in national parks and wilderness areas, and can lead to annoyance in populated areas. Further, it can result in detection at greater distances compared to higher altitude fixed wing aircraft operations. Human response studies conducted in the field are made difficult since test conditions are difficult to control. Specifically, compared to fixed wing aircraft, the source noise itself may significantly vary over time even for nominally steady flight conditions, and the propagation of that noise is more variable due to low altitude meteorological conditions. However, it is possible to create the salient features of rotorcraft fly-in noise in a more controlled laboratory setting through recent advancements made in source noise synthesis, propagation modeling and reproduction. This paper concentrates on the first two of these. In particular, the rotorcraft source noise pressure time history is generated using single blade passage signatures from the main and tail rotors. These may be obtained from either acoustic source noise predictions or back-propagation of ground-based measurements. Propagation effects include atmospheric absorption, spreading loss, Doppler shift, and ground plane reflections.
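
    A minimal sketch of how a few of these propagation effects might be applied to a synthesized blade-passage signature is given below, using a harmonic series at an assumed main-rotor blade passage frequency; only spherical spreading loss and a constant Doppler factor are modeled, and all values are illustrative rather than taken from the paper (atmospheric absorption and ground reflections are omitted).

```python
import numpy as np

fs = 8000                        # sample rate (Hz)
t = np.arange(0, 2.0, 1 / fs)
bpf = 11.0                       # illustrative main-rotor blade passage frequency (Hz)

# Source signature: decaying harmonic series at the blade passage frequency
source = sum((1.0 / n) * np.sin(2 * np.pi * n * bpf * t) for n in range(1, 20))

def propagate(p_src, r, v_radial, c=343.0, r_ref=1.0):
    """Apply spherical spreading loss and a constant Doppler frequency shift.

    p_src    : source pressure time history at reference distance r_ref (m)
    r        : receiver distance (m)
    v_radial : source velocity toward the receiver (m/s), positive = approaching
    """
    doppler = c / (c - v_radial)                  # frequency scale factor
    t_new = np.arange(0, t[-1], 1 / fs)
    shifted = np.interp(t_new * doppler, t, p_src)  # time-compress to impose the shift
    return (r_ref / r) * shifted                  # 1/r spreading loss

received = propagate(source, r=500.0, v_radial=40.0)
```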

  5. Human observer detection experiments with mammograms and power-law noise

    International Nuclear Information System (INIS)

    Burgess, Arthur E.; Jacobson, Francine L.; Judy, Philip F.

    2001-01-01

    We determined contrast thresholds for lesion detection as a function of lesion size in both mammograms and filtered noise backgrounds with the same average power spectrum, P(f) = B/f³. Experiments were done using hybrid images with digital images of tumors added to digitized normal backgrounds, displayed on a monochrome monitor. Four tumors were extracted from digitized specimen radiographs. The lesion sizes were varied by digital rescaling to cover the range from 0.5 to 16 mm. Amplitudes were varied to determine the value required for 92% correct detection in two-alternative forced-choice (2AFC) and 90% for search experiments. Three observers participated, two physicists and a radiologist. The 2AFC mammographic results demonstrated a novel contrast-detail (CD) diagram with threshold amplitudes that increased steadily (with slope of 0.3) with increasing size for lesions larger than 1 mm. The slopes for prewhitening model observers were about 0.4. Human efficiency relative to these models was as high as 90%. The CD diagram slopes for the 2AFC experiments with filtered noise were 0.44 for humans and 0.5 for models. Human efficiency relative to the ideal observer was about 40%. The difference in efficiencies for the two types of backgrounds indicates that breast structure cannot be considered to be pure random noise for 2AFC experiments. Instead, 2AFC human detection with mammographic backgrounds is limited by a combination of noise and deterministic masking effects. The search experiments also gave thresholds that increased with lesion size. However, there was no difference in human results for mammographic and filtered noise backgrounds, suggesting that breast structure can be considered to be pure random noise for this task. Our conclusion is that, in spite of the fact that mammographic backgrounds have nonstationary statistics, models based on statistical decision theory can still be applied successfully to estimate human performance
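
    Filtered-noise backgrounds with an average power spectrum P(f) = B/f³ can be generated by shaping white Gaussian noise in the frequency domain, as in the following sketch (synthetic 2-D backgrounds of arbitrary size and amplitude; this is a generic recipe, not necessarily the authors' exact procedure).

```python
import numpy as np

def power_law_background(n=512, exponent=3.0, seed=0):
    """Generate an n x n noise image whose power spectrum falls off as 1/f**exponent."""
    rng = np.random.default_rng(seed)
    white = rng.normal(size=(n, n))
    spectrum = np.fft.fft2(white)

    fx = np.fft.fftfreq(n)
    fy = np.fft.fftfreq(n)
    f = np.sqrt(fx[None, :] ** 2 + fy[:, None] ** 2)
    f[0, 0] = f[0, 1]                        # avoid division by zero at DC

    # Amplitude filter = sqrt(power filter) = f**(-exponent/2)
    shaped = spectrum * f ** (-exponent / 2.0)
    img = np.real(np.fft.ifft2(shaped))
    return (img - img.mean()) / img.std()    # zero mean, unit variance

background = power_law_background()
```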

  6. Distributed neural signatures of natural audiovisual speech and music in the human auditory cortex.

    Science.gov (United States)

    Salmi, Juha; Koistinen, Olli-Pekka; Glerean, Enrico; Jylänki, Pasi; Vehtari, Aki; Jääskeläinen, Iiro P; Mäkelä, Sasu; Nummenmaa, Lauri; Nummi-Kuisma, Katarina; Nummi, Ilari; Sams, Mikko

    2017-08-15

    During a conversation or when listening to music, auditory and visual information are combined automatically into audiovisual objects. However, it is still poorly understood how specific type of visual information shapes neural processing of sounds in lifelike stimulus environments. Here we applied multi-voxel pattern analysis to investigate how naturally matching visual input modulates supratemporal cortex activity during processing of naturalistic acoustic speech, singing and instrumental music. Bayesian logistic regression classifiers with sparsity-promoting priors were trained to predict whether the stimulus was audiovisual or auditory, and whether it contained piano playing, speech, or singing. The predictive performances of the classifiers were tested by leaving one participant at a time for testing and training the model using the remaining 15 participants. The signature patterns associated with unimodal auditory stimuli encompassed distributed locations mostly in the middle and superior temporal gyrus (STG/MTG). A pattern regression analysis, based on a continuous acoustic model, revealed that activity in some of these MTG and STG areas were associated with acoustic features present in speech and music stimuli. Concurrent visual stimulus modulated activity in bilateral MTG (speech), lateral aspect of right anterior STG (singing), and bilateral parietal opercular cortex (piano). Our results suggest that specific supratemporal brain areas are involved in processing complex natural speech, singing, and piano playing, and other brain areas located in anterior (facial speech) and posterior (music-related hand actions) supratemporal cortex are influenced by related visual information. Those anterior and posterior supratemporal areas have been linked to stimulus identification and sensory-motor integration, respectively. Copyright © 2017 Elsevier Inc. All rights reserved.

  7. Logarithmic laws of echoic memory and auditory change detection in humans

    OpenAIRE

    Koji Inui; Tomokazu Urakawa; Koya Yamashiro; Naofumi Otsuru; Yasuyuki Takeshima; Ryusuke Kakigi

    2009-01-01

    The cortical mechanisms underlying echoic memory and change detection were investigated using an auditory change-related component (N100c) of event-related brain potentials. N100c was elicited by paired sound stimuli, a standard followed by a deviant, while subjects watched a silent movie. The amplitude of N100c elicited by a fixed sound pressure deviance (70 dB vs. 75 dB) was negatively correlated with the logarithm of the interval between the standard sound and deviant sound (1 ~ 1000 ms), ...

  8. Estimation of Human Workload from the Auditory Steady-State Response Recorded via a Wearable Electroencephalography System during Walking

    Directory of Open Access Journals (Sweden)

    Yusuke Yokota

    2017-06-01

    Full Text Available Workload in the human brain can be a useful marker of internal brain state. However, due to technical limitations, previous workload studies have been unable to record brain activity via conventional electroencephalography (EEG) and magnetoencephalography (MEG) devices in mobile participants. In this study, we used a wearable EEG system to estimate workload while participants walked in a naturalistic environment. Specifically, we used the auditory steady-state response (ASSR), which is an oscillatory brain activity evoked by repetitive auditory stimuli, as an estimation index of workload. Participants performed three types of N-back tasks, which were expected to impose different workloads, while walking at a constant speed. We used a binaural 500 Hz pure tone with amplitude modulation at 40 Hz to evoke the ASSR. We found that the phase-locking index (PLI) of ASSR activity was significantly correlated with the degree of task difficulty, even for EEG data from a few electrodes. Thus, the ASSR appears to be an effective indicator of workload during walking in an ecologically valid environment.
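
    The phase-locking index at the 40-Hz modulation frequency can be computed from single-trial epochs as the magnitude of the mean unit phase vector across trials. The sketch below runs on simulated epochs; the sampling rate, trial count, and noise level are placeholders, and the filtering, electrode selection, and artifact handling of the actual study are omitted.

```python
import numpy as np

fs = 250               # EEG sample rate (Hz), illustrative
f_mod = 40.0           # ASSR modulation frequency (Hz)
n_trials = 60
t = np.arange(0, 1.0, 1 / fs)   # 1-second epochs
rng = np.random.default_rng(0)

# Simulated epochs: weak 40-Hz response with a consistent phase, plus noise
epochs = 0.5 * np.sin(2 * np.pi * f_mod * t + 0.3) + rng.normal(0, 2.0, (n_trials, t.size))

def phase_locking_index(epochs, fs, freq):
    """PLI at `freq`: |mean over trials of exp(i * phase)|; 1 means perfect phase locking."""
    spectra = np.fft.rfft(epochs, axis=1)
    freqs = np.fft.rfftfreq(epochs.shape[1], 1 / fs)
    bin_idx = np.argmin(np.abs(freqs - freq))
    phases = np.angle(spectra[:, bin_idx])
    return np.abs(np.mean(np.exp(1j * phases)))

print(f"PLI at {f_mod:.0f} Hz: {phase_locking_index(epochs, fs, f_mod):.2f}")
```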

  9. White noise improves learning by modulating activity in dopaminergic midbrain regions and right superior temporal sulcus.

    Science.gov (United States)

    Rausch, Vanessa H; Bauch, Eva M; Bunzeck, Nico

    2014-07-01

    In neural systems, information processing can be facilitated by adding an optimal level of white noise. Although this phenomenon, the so-called stochastic resonance, has traditionally been linked with perception, recent evidence indicates that white noise may also exert positive effects on cognitive functions, such as learning and memory. The underlying neural mechanisms, however, remain unclear. Here, on the basis of recent theories, we tested the hypothesis that auditory white noise, when presented during the encoding of scene images, enhances subsequent recognition memory performance and modulates activity within the dopaminergic midbrain (i.e., substantia nigra/ventral tegmental area, SN/VTA). Indeed, in a behavioral experiment, we show in healthy humans that auditory white noise, but not control sounds such as a sinus tone, slightly improves recognition memory. In an fMRI experiment, white noise selectively enhances stimulus-driven phasic activity in the SN/VTA and auditory cortex. Moreover, it induces stronger connectivity between SN/VTA and right STS, which, in addition, exhibited a positive correlation with subsequent memory improvement by white noise. Our results suggest that the beneficial effects of auditory white noise on learning depend on dopaminergic neuromodulation and enhanced connectivity between midbrain regions and the STS, a key player in attention modulation. Moreover, they indicate that white noise could be particularly useful to facilitate learning in conditions where changes of the mesolimbic system are causally related to memory deficits, including healthy and pathological aging.

  10. Auditory Sketches: Very Sparse Representations of Sounds Are Still Recognizable.

    Directory of Open Access Journals (Sweden)

    Vincent Isnard

    Full Text Available Sounds in our environment like voices, animal calls or musical instruments are easily recognized by human listeners. Understanding the key features underlying this robust sound recognition is an important question in auditory science. Here, we studied the recognition by human listeners of new classes of sounds: acoustic and auditory sketches, sounds that are severely impoverished but still recognizable. Starting from a time-frequency representation, a sketch is obtained by keeping only sparse elements of the original signal, here, by means of a simple peak-picking algorithm. Two time-frequency representations were compared: a biologically grounded one, the auditory spectrogram, which simulates peripheral auditory filtering, and a simple acoustic spectrogram, based on a Fourier transform. Three degrees of sparsity were also investigated. Listeners were asked to recognize the category to which a sketch sound belongs: singing voices, bird calls, musical instruments, and vehicle engine noises. Results showed that, with the exception of voice sounds, very sparse representations of sounds (10 features, or energy peaks, per second) could be recognized above chance. No clear differences could be observed between the acoustic and the auditory sketches. For the voice sounds, however, a completely different pattern of results emerged, with at-chance or even below-chance recognition performances, suggesting that the important features of the voice, whatever they are, were removed by the sketch process. Overall, these perceptual results were well correlated with a model of auditory distances, based on spectro-temporal excitation patterns (STEPs). This study confirms the potential of these new classes of sounds, acoustic and auditory sketches, to study sound recognition.
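
    A crude version of the peak-picking step can be implemented by computing a spectrogram and keeping only the strongest time-frequency bins, roughly ten per second, and zeroing the rest. The sketch below is only a schematic of that sparsification idea on a synthetic test sound; the auditory-model spectrogram and the resynthesis stage used in the study are not reproduced.

```python
import numpy as np
from scipy.signal import spectrogram

fs = 16000
t = np.arange(0, 2.0, 1 / fs)
# Toy test sound: two tones plus noise, standing in for a natural recording
sound = (np.sin(2 * np.pi * 440 * t) + 0.5 * np.sin(2 * np.pi * 1200 * t)
         + 0.1 * np.random.default_rng(0).normal(size=t.size))

def sparsify(sound, fs, peaks_per_second=10):
    """Keep only the strongest spectrogram bins, about `peaks_per_second` per second."""
    f, times, S = spectrogram(sound, fs=fs, nperseg=512, noverlap=256)
    n_keep = max(1, int(peaks_per_second * times[-1]))
    threshold = np.sort(S.ravel())[-n_keep]      # value of the n_keep-th largest bin
    mask = S >= threshold
    return f, times, S * mask                    # sparse "sketch" of the time-frequency plane

f, times, sparse_spec = sparsify(sound, fs)
print("bins kept:", int((sparse_spec > 0).sum()), "of", sparse_spec.size)
```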

  11. Development of the auditory system

    Science.gov (United States)

    Litovsky, Ruth

    2015-01-01

    Auditory development involves changes in the peripheral and central nervous system along the auditory pathways, and these occur naturally, and in response to stimulation. Human development occurs along a trajectory that can last decades, and is studied using behavioral psychophysics, as well as physiologic measurements with neural imaging. The auditory system constructs a perceptual space that takes information from objects and groups, segregates sounds, and provides meaning and access to communication tools such as language. Auditory signals are processed in a series of analysis stages, from peripheral to central. Coding of information has been studied for features of sound, including frequency, intensity, loudness, and location, in quiet and in the presence of maskers. In the latter case, the ability of the auditory system to perform an analysis of the scene becomes highly relevant. While some basic abilities are well developed at birth, there is a clear prolonged maturation of auditory development well into the teenage years. Maturation involves auditory pathways. However, non-auditory changes (attention, memory, cognition) play an important role in auditory development. The ability of the auditory system to adapt in response to novel stimuli is a key feature of development throughout the nervous system, known as neural plasticity. PMID:25726262

  12. Animal models for auditory streaming

    Science.gov (United States)

    Itatani, Naoya

    2017-01-01

    Sounds in the natural environment need to be assigned to acoustic sources to evaluate complex auditory scenes. Separating sources will affect the analysis of auditory features of sounds. As the benefits of assigning sounds to specific sources accrue to all species communicating acoustically, the ability for auditory scene analysis is widespread among different animals. Animal studies allow for a deeper insight into the neuronal mechanisms underlying auditory scene analysis. Here, we will review the paradigms applied in the study of auditory scene analysis and streaming of sequential sounds in animal models. We will compare the psychophysical results from the animal studies to the evidence obtained in human psychophysics of auditory streaming, i.e. in a task commonly used for measuring the capability for auditory scene analysis. Furthermore, the neuronal correlates of auditory streaming will be reviewed in different animal models and the observations of the neurons’ response measures will be related to perception. The across-species comparison will reveal whether similar demands in the analysis of acoustic scenes have resulted in similar perceptual and neuronal processing mechanisms in the wide range of species being capable of auditory scene analysis. This article is part of the themed issue ‘Auditory and visual scene analysis’. PMID:28044022

  13. Effects of first formant onset frequency on [-voice] judgments result from auditory processes not specific to humans.

    Science.gov (United States)

    Kluender, K R; Lotto, A J

    1994-02-01

    When F1-onset frequency is lower, longer F1 cut-back (VOT) is required for human listeners to perceive synthesized stop consonants as voiceless. K. R. Kluender [J. Acoust. Soc. Am. 90, 83-96 (1991)] found comparable effects of F1-onset frequency on the "labeling" of stop consonants by Japanese quail (coturnix coturnix japonica) trained to distinguish stop consonants varying in F1 cut-back. In that study, CVs were synthesized with natural-like rising F1 transitions, and endpoint training stimuli differed in the onset frequency of F1 because a longer cut-back resulted in a higher F1 onset. In order to assess whether earlier results were due to auditory predispositions or due to animals having learned the natural covariance between F1 cut-back and F1-onset frequency, the present experiment was conducted with synthetic continua having either a relatively low (375 Hz) or high (750 Hz) constant-frequency F1. Six birds were trained to respond differentially to endpoint stimuli from three series of synthesized /CV/s varying in duration of F1 cut-back. Second and third formant transitions were appropriate for labial, alveolar, or velar stops. Despite the fact that there was no opportunity for animal subjects to use experienced covariation of F1-onset frequency and F1 cut-back, quail typically exhibited shorter labeling boundaries (more voiceless stops) for intermediate stimuli of the continua when F1 frequency was higher. Responses by human subjects listening to the same stimuli were also collected. Results lend support to the earlier conclusion that part or all of the effect of F1 onset frequency on perception of voicing may be adequately explained by general auditory processes.(ABSTRACT TRUNCATED AT 250 WORDS)

  14. The effects of aging on lifetime of auditory sensory memory in humans.

    Science.gov (United States)

    Cheng, Chia-Hsiung; Lin, Yung-Yang

    2012-02-01

    The amplitude change of cortical responses to repeated stimulation with respect to different interstimulus intervals (ISIs) is considered an index of sensory memory. To determine the effect of aging on the lifetime of auditory sensory memory, N100m responses were recorded in young, middle-aged, and elderly healthy volunteers (n=15 for each group). Trains of 5 successive tones were presented with an inter-train interval of 10 s. In separate sessions, the within-train ISIs were 0.5, 1, 2, 4, and 8 s. The amplitude ratio between N100m responses to the first and fifth stimuli (S5/S1 N100m ratio) within each ISI condition was obtained to reflect the recovery cycle profile. The recovery function time constant (τ) was smaller in the elderly (1.06±0.26 s), suggesting an age-related shortening of auditory sensory memory. Copyright © 2011 Elsevier B.V. All rights reserved.

  15. Laterality of basic auditory perception.

    Science.gov (United States)

    Sininger, Yvonne S; Bhatara, Anjali

    2012-01-01

    Laterality (left-right ear differences) of auditory processing was assessed using basic auditory skills: (1) gap detection, (2) frequency discrimination, and (3) intensity discrimination. Stimuli included tones (500, 1000, and 4000 Hz) and wide-band noise presented monaurally to each ear of typical adult listeners. The hypothesis tested was that processing of tonal stimuli would be enhanced by left ear (LE) stimulation and noise by right ear (RE) presentations. To investigate the limits of laterality by (1) spectral width, a narrow-band noise (NBN) of 450-Hz bandwidth was evaluated using intensity discrimination, and (2) stimulus duration, 200, 500, and 1000 ms duration tones were evaluated using frequency discrimination. A left ear advantage (LEA) was demonstrated with tonal stimuli in all experiments, but an expected REA for noise stimuli was not found. The NBN stimulus demonstrated no LEA and was characterised as a noise. No change in laterality was found with changes in stimulus durations. The LEA for tonal stimuli is felt to be due to more direct connections between the left ear and the right auditory cortex, which has been shown to be primary for spectral analysis and tonal processing. The lack of a REA for noise stimuli is unexplained. Sex differences in laterality for noise stimuli were noted but were not statistically significant. This study did establish a subtle but clear pattern of LEA for processing of tonal stimuli.

  16. Nonlinear dynamics of human locomotion: effects of rhythmic auditory cueing on local dynamic stability

    Directory of Open Access Journals (Sweden)

    Philippe Terrier

    2013-09-01

    Full Text Available It has been observed that time series of gait parameters (stride length (SL), stride time (ST) and stride speed (SS)) exhibit long-term persistence and fractal-like properties. Synchronizing steps with rhythmic auditory stimuli modifies the persistent fluctuation pattern to anti-persistence. Another nonlinear method estimates the degree of resilience of gait control to small perturbations, i.e. the local dynamic stability (LDS). The method makes use of the maximal Lyapunov exponent, which estimates how fast a nonlinear system embedded in a reconstructed state space (attractor) diverges after an infinitesimal perturbation. We propose to use an instrumented treadmill to simultaneously measure basic gait parameters (time series of SL, ST and SS, from which the statistical persistence among consecutive strides can be assessed) and the trajectory of the center of pressure (from which the LDS can be estimated). In 20 healthy participants, the response to rhythmic auditory cueing (RAC) of LDS and of statistical persistence (assessed with detrended fluctuation analysis (DFA)) was compared. By analyzing the divergence curves, we observed that long-term LDS (computed as the reverse of the average logarithmic rate of divergence between the 4th and the 10th strides downstream from nearest neighbors in the reconstructed attractor) was strongly enhanced (relative change +47%). That is likely the indication of a more dampened dynamics. The change in short-term LDS (divergence over one step) was smaller (+3%). DFA results (scaling exponents) confirmed an anti-persistent pattern in ST, SL and SS. Long-term LDS (but not short-term LDS) and scaling exponents exhibited a significant correlation between them (r=0.7). Both phenomena probably result from the more conscious/voluntary gait control that is required by RAC. We suggest that LDS and statistical persistence should be used to evaluate the efficiency of cueing therapy in patients with neurological gait disorders.
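
    Detrended fluctuation analysis, used above to quantify the persistence of stride-to-stride fluctuations, can be sketched as follows: integrate the mean-removed series, detrend it in windows of increasing size, and read the scaling exponent from the slope of log fluctuation versus log window size (alpha near 0.5 indicates uncorrelated noise, above 0.5 persistence, below 0.5 anti-persistence). This is a bare-bones illustration on synthetic stride times, not the authors' processing pipeline.

```python
import numpy as np

def dfa_exponent(x, window_sizes=None):
    """Return the DFA scaling exponent alpha of a 1-D series (order-1 detrending)."""
    x = np.asarray(x, dtype=float)
    if window_sizes is None:
        window_sizes = np.unique(
            np.logspace(np.log10(4), np.log10(len(x) // 4), 15).astype(int))
    y = np.cumsum(x - x.mean())                   # integrated profile
    fluct = []
    for n in window_sizes:
        n_windows = len(y) // n
        segments = y[: n_windows * n].reshape(n_windows, n)
        t = np.arange(n)
        rms = []
        for seg in segments:
            trend = np.polyval(np.polyfit(t, seg, 1), t)   # local linear trend
            rms.append(np.sqrt(np.mean((seg - trend) ** 2)))
        fluct.append(np.mean(rms))
    alpha, _ = np.polyfit(np.log(window_sizes), np.log(fluct), 1)
    return alpha

# Synthetic stride-time series: white noise stands in for real gait data
strides = np.random.default_rng(0).normal(1.05, 0.02, 512)
print(f"DFA alpha ~ {dfa_exponent(strides):.2f}  (about 0.5 for uncorrelated data)")
```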

  17. Robotic Discovery of the Auditory Scene

    National Research Council Canada - National Science Library

    Martinson, E; Schultz, A

    2007-01-01

    .... Motivated by the large negative effect of ambient noise sources on robot audition, the long-term goal is to provide awareness of the auditory scene to a robot, so that it may more effectively act...

  18. Auditory Spatial Layout

    Science.gov (United States)

    Wightman, Frederic L.; Jenison, Rick

    1995-01-01

    All auditory sensory information is packaged in a pair of acoustical pressure waveforms, one at each ear. While there is obvious structure in these waveforms, that structure (temporal and spectral patterns) bears no simple relationship to the structure of the environmental objects that produced them. The properties of auditory objects and their layout in space must be derived completely from higher level processing of the peripheral input. This chapter begins with a discussion of the peculiarities of acoustical stimuli and how they are received by the human auditory system. A distinction is made between the ambient sound field and the effective stimulus to differentiate the perceptual distinctions among various simple classes of sound sources (ambient field) from the known perceptual consequences of the linear transformations of the sound wave from source to receiver (effective stimulus). Next, the definition of an auditory object is dealt with, specifically the question of how the various components of a sound stream become segregated into distinct auditory objects. The remainder of the chapter focuses on issues related to the spatial layout of auditory objects, both stationary and moving.

  19. Word Recognition in Auditory Cortex

    Science.gov (United States)

    DeWitt, Iain D. J.

    2013-01-01

    Although spoken word recognition is more fundamental to human communication than text recognition, knowledge of word-processing in auditory cortex is comparatively impoverished. This dissertation synthesizes current models of auditory cortex, models of cortical pattern recognition, models of single-word reading, results in phonetics and results in…

  20. Neural oscillations in auditory working memory

    OpenAIRE

    Wilsch, A.

    2015-01-01

    The present thesis investigated memory load and memory decay in auditory working memory. Alpha power as a marker for memory load served as the primary indicator for load and decay fluctuations hypothetically reflecting functional inhibition of irrelevant information. Memory load was induced by presenting auditory signals (syllables and pure-tone sequences) in noise because speech-in-noise has been shown before to increase memory load. The aim of the thesis was to assess with magnetoencephalog...

  1. Association between language development and auditory processing disorders

    Directory of Open Access Journals (Sweden)

    Caroline Nunes Rocha-Muniz

    2014-06-01

    Full Text Available INTRODUCTION: It is crucial to understand the complex processing of acoustic stimuli along the auditory pathway; comprehension of this complex processing can facilitate our understanding of the processes that underlie normal and altered human communication. AIM: To investigate the performance and lateralization effects on auditory processing assessment in children with specific language impairment (SLI), relating these findings to those obtained in children with auditory processing disorder (APD) and typical development (TD). MATERIAL AND METHODS: Prospective study. Seventy-five children, aged 6-12 years, were separated into three groups: 25 children with SLI, 25 children with APD, and 25 children with TD. All went through the following tests: speech-in-noise test, Dichotic Digit test and Pitch Pattern Sequencing test. RESULTS: The effects of lateralization were observed only in the SLI group, with the left ear presenting much lower scores than the right ear. The inter-group analysis showed that in all tests children from the APD and SLI groups had significantly poorer performance compared to the TD group. Moreover, the SLI group presented worse results than the APD group. CONCLUSION: This study has shown, in children with SLI, an inefficient processing of essential sound components and an effect of lateralization. These findings may indicate that the neural processes (required for auditory processing) are different between auditory processing and speech disorders.

  2. Failing to Get the Gist of What’s Being Said: Background Noise Impairs Higher Order Cognitive Processing

    OpenAIRE

    John Everett Marsh; Robert Ljung; Anatole Nöstl; Emma Threadgold; Tom A Campbell

    2015-01-01

    A dynamic interplay is known to exist between auditory processing and human cognition. For example, prior investigations of speech-in-noise have revealed that there is more to learning than just listening: even if all words within a spoken list are correctly heard in noise, later memory for those words is typically impoverished. At such low signal-to-noise ratios, even when listeners could identify the words, they could not necessarily remember them. These investigations supported a view th...

  3. Wavelet-domain de-noising of OCT images of human brain malignant glioma

    Science.gov (United States)

    Dolganova, I. N.; Aleksandrova, P. V.; Beshplav, S.-I. T.; Chernomyrdin, N. V.; Dubyanskaya, E. N.; Goryaynov, S. A.; Kurlov, V. N.; Reshetov, I. V.; Potapov, A. A.; Tuchin, V. V.; Zaytsev, K. I.

    2018-04-01

    We have proposed a wavelet-domain de-noising technique for imaging of human brain malignant glioma by optical coherence tomography (OCT). It involves OCT image decomposition using the direct fast wavelet transform, thresholding of the obtained wavelet spectrum, and an inverse fast wavelet transform for image reconstruction. By selecting both the wavelet basis and the thresholding procedure, we have found an optimal wavelet filter whose application improves differentiation of the considered brain tissue classes, i.e. malignant glioma and normal/intact tissue. Namely, it allows reducing the scattering noise in the OCT images while retaining the signal decrement for each tissue class. Therefore, the observed results reveal wavelet-domain de-noising as a prospective tool for improved characterization of biological tissue using OCT.
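
    The pipeline described above (forward wavelet decomposition, thresholding of the coefficients, inverse reconstruction) can be sketched with the PyWavelets package, assuming it is installed. The wavelet basis ('sym4') and the universal soft threshold used here are illustrative choices rather than the optimal filter identified in the paper, and the input is a synthetic image standing in for an OCT B-scan.

```python
import numpy as np
import pywt  # PyWavelets, assumed to be installed

def wavelet_denoise(image, wavelet="sym4", level=3):
    """Soft-threshold the detail coefficients of a 2-D wavelet decomposition."""
    coeffs = pywt.wavedec2(image, wavelet, level=level)
    # Estimate the noise level from the finest diagonal detail band (robust MAD estimator)
    sigma = np.median(np.abs(coeffs[-1][-1])) / 0.6745
    threshold = sigma * np.sqrt(2 * np.log(image.size))   # universal threshold
    new_coeffs = [coeffs[0]]
    for detail_bands in coeffs[1:]:
        new_coeffs.append(tuple(pywt.threshold(band, threshold, mode="soft")
                                for band in detail_bands))
    return pywt.waverec2(new_coeffs, wavelet)

# Synthetic noisy image as a stand-in for a real B-scan
rng = np.random.default_rng(0)
clean = np.outer(np.hanning(256), np.hanning(256))
noisy = clean + rng.normal(0, 0.1, clean.shape)
denoised = wavelet_denoise(noisy)
```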

  4. Extending and Applying the EPIC Architecture for Human Cognition and Performance: Auditory and Spatial Components

    Science.gov (United States)

    2016-03-01

    Multi-Channel Speech Processing Summary of Previous Work As before, during this period, the project work was focused on Goal 1. The Annual Reports for...is so poor that it resembles the effect of a noise masker. The explanation is that the multiple speech maskers overlap so much that there are fewer...a more detailed explanation for speech understanding in the presence of masking signals. At present we have only been surveying the relevant

  5. Occupational Styrene Exposure on Auditory Function Among Adults: A Systematic Review of Selected Workers.

    Science.gov (United States)

    Pleban, Francis T; Oketope, Olutosin; Shrestha, Laxmi

    2017-12-01

    A review study was conducted to examine the adverse effects of styrene, styrene mixtures, or styrene and/or styrene mixtures combined with noise on the auditory system of humans employed in occupational settings. The search included peer-reviewed articles published in the English language involving human volunteers and spanning a 25-year period (1990-2015). Studies included peer-reviewed journal articles, case-control studies, and case reports. Animal studies were excluded. An initial search identified 40 studies. After screening for inclusion, 13 studies were retrieved for full-text examination and review. As a whole, the results ranged from no association to mild associations between styrene exposure and auditory dysfunction, noting the relatively small sample sizes. However, four studies investigating styrene together with other organic solvent mixtures and noise suggested that combined exposure to styrene/organic-solvent mixtures and noise may be more ototoxic than exposure to noise alone. There is little literature examining the effect of styrene on auditory functioning in humans. Nonetheless, the findings suggest that public health professionals and policy makers should be made aware of future research needs pertaining to hearing impairment and ototoxicity from styrene. It is recommended that chronically styrene-exposed individuals be routinely evaluated with a comprehensive audiological test battery to detect early signs of auditory dysfunction.

  6. A 3 year update on the influence of noise on performance and behavior

    Directory of Open Access Journals (Sweden)

    Charlotte Clark

    2012-01-01

    Full Text Available The effect of noise exposure on human performance and behavior continues to be a focus for research activities. This paper reviews developments in the field over the past 3 years, highlighting current areas of research, recent findings, and ongoing research in two main research areas: Field studies of noise effects on children's cognition and experimental studies of auditory distraction. Overall, the evidence for the effects of external environmental noise on children's cognition has strengthened in recent years, with the use of larger community samples and better noise characterization. Studies have begun to establish exposure-effect thresholds for noise effects on cognition. However, the evidence remains predominantly cross-sectional and future research needs to examine whether sound insulation might lessen the effects of external noise on children's learning. Research has also begun to explore the link between internal classroom acoustics and children's learning, aiming to further inform the design of the internal acoustic environment. Experimental studies of the effects of noise on cognitive performance are also reviewed, including functional differences in varieties of auditory distraction, semantic auditory distraction, individual differences in susceptibility to auditory distraction, and the role of cognitive control on the effects of noise on understanding and memory of target speech materials. In general, the results indicate that there are at least two functionally different types of auditory distraction: One due to the interruption of processes (as a result of attention being captured by the sound), another due to interference between processes. The magnitude of the former type is related to individual differences in cognitive control capacities (e.g., working memory capacity); the magnitude of the latter is not. Few studies address noise effects on behavioral outcomes, emphasizing the need for researchers to explore noise

  7. A 3 year update on the influence of noise on performance and behavior.

    Science.gov (United States)

    Clark, Charlotte; Sörqvist, Patrik

    2012-01-01

    The effect of noise exposure on human performance and behavior continues to be a focus for research activities. This paper reviews developments in the field over the past 3 years, highlighting current areas of research, recent findings, and ongoing research in two main research areas: Field studies of noise effects on children's cognition and experimental studies of auditory distraction. Overall, the evidence for the effects of external environmental noise on children's cognition has strengthened in recent years, with the use of larger community samples and better noise characterization. Studies have begun to establish exposure-effect thresholds for noise effects on cognition. However, the evidence remains predominantly cross-sectional and future research needs to examine whether sound insulation might lessen the effects of external noise on children's learning. Research has also begun to explore the link between internal classroom acoustics and children's learning, aiming to further inform the design of the internal acoustic environment. Experimental studies of the effects of noise on cognitive performance are also reviewed, including functional differences in varieties of auditory distraction, semantic auditory distraction, individual differences in susceptibility to auditory distraction, and the role of cognitive control on the effects of noise on understanding and memory of target speech materials. In general, the results indicate that there are at least two functionally different types of auditory distraction: One due to the interruption of processes (as a result of attention being captured by the sound), another due to interference between processes. The magnitude of the former type is related to individual differences in cognitive control capacities (e.g., working memory capacity); the magnitude of the latter is not. Few studies address noise effects on behavioral outcomes, emphasizing the need for researchers to explore noise effects on behavior in more

  8. Non-linear laws of echoic memory and auditory change detection in humans.

    Science.gov (United States)

    Inui, Koji; Urakawa, Tomokazu; Yamashiro, Koya; Otsuru, Naofumi; Nishihara, Makoto; Takeshima, Yasuyuki; Keceli, Sumru; Kakigi, Ryusuke

    2010-07-03

    The detection of any abrupt change in the environment is important to survival. Since memory of preceding sensory conditions is necessary for detecting changes, such a change-detection system relates closely to the memory system. Here we used an auditory change-related N1 subcomponent (change-N1) of event-related brain potentials to investigate cortical mechanisms underlying change detection and echoic memory. Change-N1 was elicited by a simple paradigm with two tones, a standard followed by a deviant, while subjects watched a silent movie. The amplitude of change-N1 elicited by a fixed sound pressure deviance (70 dB vs. 75 dB) was negatively correlated with the logarithm of the interval between the standard sound and deviant sound (1, 10, 100, or 1000 ms), while positively correlated with the logarithm of the duration of the standard sound (25, 100, 500, or 1000 ms). The amplitude of change-N1 elicited by a deviance in sound pressure, sound frequency, and sound location was correlated with the logarithm of the magnitude of physical differences between the standard and deviant sounds. The present findings suggest that temporal representation of echoic memory is non-linear and Weber-Fechner law holds for the automatic cortical response to sound changes within a suprathreshold range. Since the present results show that the behavior of echoic memory can be understood through change-N1, change-N1 would be a useful tool to investigate memory systems.
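
    The logarithmic dependencies reported above can be written compactly as a Weber-Fechner-type relation for the change-related response; the following schematic form and its notation are illustrative summaries, not the authors' own equations.

```latex
% Schematic summary of the reported logarithmic dependencies (illustrative notation):
% A        = change-N1 amplitude
% ISI      = interval between the standard and the deviant (ms)
% T_std    = duration of the standard sound (ms)
% \Delta S = physical magnitude of the deviance
\[
  A \;\approx\; a_0 \;-\; a_1 \log(\mathrm{ISI}) \;+\; a_2 \log(T_{\mathrm{std}})
      \;+\; a_3 \log(\Delta S), \qquad a_1,\, a_2,\, a_3 > 0 .
\]
```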

  9. Sensorineural hearing loss degrades behavioral and physiological measures of human spatial selective auditory attention

    Science.gov (United States)

    Dai, Lengshi; Best, Virginia; Shinn-Cunningham, Barbara G.

    2018-01-01

    Listeners with sensorineural hearing loss often have trouble understanding speech amid other voices. While poor spatial hearing is often implicated, direct evidence is weak; moreover, studies suggest that reduced audibility and degraded spectrotemporal coding may explain such problems. We hypothesized that poor spatial acuity leads to difficulty deploying selective attention, which normally filters out distracting sounds. In listeners with normal hearing, selective attention causes changes in the neural responses evoked by competing sounds, which can be used to quantify the effectiveness of attentional control. Here, we used behavior and electroencephalography to explore whether control of selective auditory attention is degraded in hearing-impaired (HI) listeners. Normal-hearing (NH) and HI listeners identified a simple melody presented simultaneously with two competing melodies, each simulated from different lateral angles. We quantified performance and attentional modulation of cortical responses evoked by these competing streams. Compared with NH listeners, HI listeners had poorer sensitivity to spatial cues, performed more poorly on the selective attention task, and showed less robust attentional modulation of cortical responses. Moreover, across NH and HI individuals, these measures were correlated. While both groups showed cortical suppression of distracting streams, this modulation was weaker in HI listeners, especially when attending to a target at midline, surrounded by competing streams. These findings suggest that hearing loss interferes with the ability to filter out sound sources based on location, contributing to communication difficulties in social situations. These findings also have implications for technologies aiming to use neural signals to guide hearing aid processing. PMID:29555752

  10. Developmental programming of auditory learning

    Directory of Open Access Journals (Sweden)

    Melania Puddu

    2012-10-01

    Full Text Available The basic structures involved in the development of auditory function, and consequently in language acquisition, are directed by the genetic code, but the expression of individual genes may be altered by exposure to environmental factors which, if favorable, orient development in the proper direction, leading it towards normality, and, if unfavorable, deviate it from its physiological course. Early sensorial experience during the foetal period (i.e. the intrauterine noise floor, sounds coming from the outside and attenuated by the uterine filter, particularly the mother's voice, and the modifications induced by them at the cochlear level) represents the first example of programming in one of the earliest critical periods in the development of the auditory system. This review will examine the factors that influence the developmental programming of auditory learning from the womb to infancy. In particular it focuses on the following points: the prenatal auditory experience and the plastic phenomena presumably induced by it in the auditory system, from the basilar membrane to the cortex; the involvement of these phenomena in language acquisition and in the perception of the communicative intention of language after birth; and the consequences of auditory deprivation in critical periods of auditory development (i.e. premature interruption of foetal life).

  11. Auditory Neuropathy

    Science.gov (United States)

    ... children and adults with auditory neuropathy. Cochlear implants (electronic devices that compensate for damaged or nonworking parts ...

  12. Optimizing CT technique to reduce radiation dose: effect of changes in kVp, iterative reconstruction, and noise index on dose and noise in a human cadaver.

    Science.gov (United States)

    Chang, Kevin J; Collins, Scott; Li, Baojun; Mayo-Smith, William W

    2017-06-01

    For assessment of the effect of varying the peak kilovoltage (kVp), the adaptive statistical iterative reconstruction technique (ASiR), and automatic dose modulation on radiation dose and image noise in a human cadaver, a cadaver torso underwent CT scanning at 80, 100, 120 and 140 kVp, each at ASiR settings of 0, 30 and 50%, and noise indices (NIs) of 5.5, 11 and 22. The volume CT dose index (CTDIvol), image noise, and attenuation values of liver and fat were analyzed for 20 data sets. Size-specific dose estimates (SSDEs) and liver-to-fat contrast-to-noise ratios (CNRs) were calculated. Values for different combinations of kVp, ASiR, and NI were compared. The CTDIvol varied by a power of 2 with kVp values between 80 and 140 without ASiR. Increasing ASiR levels allowed a larger decrease in CTDIvol and SSDE at higher kVp than at lower kVp while image noise was held constant. In addition, CTDIvol and SSDE decreased with increasing NI at each kVp, but the decrease was greater at higher kVp than at lower kVp. Image noise increased with decreasing kVp despite a fixed NI; however, this noise could be offset with the use of ASiR. The CT number of the liver remained unchanged whereas that of fat decreased as the kVp decreased. Image noise and dose vary in a complicated manner when the kVp, ASiR, and NI are varied in a human cadaver. Optimization of CT protocols will require balancing of the effects of each of these parameters to maximize image quality while minimizing dose.
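
    The two quantities tracked throughout this abstract can be written down directly: at fixed tube current, CTDIvol is commonly approximated as scaling with roughly the square of the tube potential, and the liver-to-fat CNR is the attenuation difference divided by the image noise. The sketch below only illustrates these two relations with invented numbers; it is not the study's data or protocol.

```python
# Illustrative relations only; HU values, noise, and reference doses are invented placeholders.

def scaled_ctdi(ctdi_ref, kvp_ref, kvp, exponent=2.0):
    """Approximate CTDIvol scaling with tube potential at fixed mAs (power-law model)."""
    return ctdi_ref * (kvp / kvp_ref) ** exponent

def liver_fat_cnr(hu_liver, hu_fat, noise_sd):
    """Liver-to-fat contrast-to-noise ratio."""
    return abs(hu_liver - hu_fat) / noise_sd

print("CTDIvol at 80 kVp, scaled from 10 mGy at 120 kVp:",
      round(scaled_ctdi(10.0, 120, 80), 2), "mGy")
print("Example CNR:", round(liver_fat_cnr(60.0, -100.0, 15.0), 2))
```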

  13. GF-GC Theory of Human Cognition: Differentiation of Short-Term Auditory and Visual Memory Factors.

    Science.gov (United States)

    McGhee, Ron; Lieberman, Lewis

    1994-01-01

    Study sought to determine whether separate short-term auditory and visual memory factors would emerge given a sufficient number of markers in a factor matrix. A principal component factor analysis with varimax rotation was performed. Short-term visual and short-term auditory memory factors emerged as expected. (RJM)

  14. Differences between human auditory event-related potentials (AERPs) measured at 2 and 4 months after birth

    NARCIS (Netherlands)

    van den Heuvel, Marion I.; Otte, Renee A.; Braeken, Marijke A. K. A.; Winkler, Istvan; Kushnerenko, Elena; Van den Bergh, Bea R. H.

    2015-01-01

    Infant auditory event-related potentials (AERPs) show a series of marked changes during the first year of life. These AERP changes indicate important advances in early development. The current study examined AERP differences between 2- and 4-month-old infants. An auditory oddball paradigm was

  15. Auditory hallucinations.

    Science.gov (United States)

    Blom, Jan Dirk

    2015-01-01

    Auditory hallucinations constitute a phenomenologically rich group of endogenously mediated percepts which are associated with psychiatric, neurologic, otologic, and other medical conditions, but which are also experienced by 10-15% of all healthy individuals in the general population. The group of phenomena is probably best known for its verbal auditory subtype, but it also includes musical hallucinations, echo of reading, exploding-head syndrome, and many other types. The subgroup of verbal auditory hallucinations has been studied extensively with the aid of neuroimaging techniques, and from those studies emerges an outline of a functional as well as a structural network of widely distributed brain areas involved in their mediation. The present chapter provides an overview of the various types of auditory hallucination described in the literature, summarizes our current knowledge of the auditory networks involved in their mediation, and draws on ideas from the philosophy of science and network science to reconceptualize the auditory hallucinatory experience, and point out directions for future research into its neurobiologic substrates. In addition, it provides an overview of known associations with various clinical conditions and of the existing evidence for pharmacologic and non-pharmacologic treatments. © 2015 Elsevier B.V. All rights reserved.

  16. Non-linear laws of echoic memory and auditory change detection in humans

    Directory of Open Access Journals (Sweden)

    Takeshima Yasuyuki

    2010-07-01

    Full Text Available Abstract Background The detection of any abrupt change in the environment is important to survival. Since memory of preceding sensory conditions is necessary for detecting changes, such a change-detection system relates closely to the memory system. Here we used an auditory change-related N1 subcomponent (change-N1) of event-related brain potentials to investigate cortical mechanisms underlying change detection and echoic memory. Results Change-N1 was elicited by a simple paradigm with two tones, a standard followed by a deviant, while subjects watched a silent movie. The amplitude of change-N1 elicited by a fixed sound pressure deviance (70 dB vs. 75 dB) was negatively correlated with the logarithm of the interval between the standard sound and deviant sound (1, 10, 100, or 1000 ms), while positively correlated with the logarithm of the duration of the standard sound (25, 100, 500, or 1000 ms). The amplitude of change-N1 elicited by a deviance in sound pressure, sound frequency, and sound location was correlated with the logarithm of the magnitude of physical differences between the standard and deviant sounds. Conclusions The present findings suggest that temporal representation of echoic memory is non-linear and that the Weber-Fechner law holds for the automatic cortical response to sound changes within a suprathreshold range. Since the present results show that the behavior of echoic memory can be understood through change-N1, change-N1 would be a useful tool to investigate memory systems.
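
    The logarithmic relationships described above can be summarized by a fit of the form amplitude = a + b*log10(x). The sketch below fits such a model to invented change-N1 magnitudes; only the form of the fit mirrors the analysis, and all numbers are placeholders.

```python
# Hedged sketch: fitting a Weber-Fechner-style relation, amplitude ~ a + b*log10(x),
# to hypothetical change-N1 magnitudes (values invented for illustration only).
import numpy as np

intervals_ms = np.array([1, 10, 100, 1000])        # standard-to-deviant interval
amplitude_uv = np.array([4.2, 3.4, 2.5, 1.8])      # hypothetical change-N1 magnitude (uV)

slope, intercept = np.polyfit(np.log10(intervals_ms), amplitude_uv, 1)
print(f"amplitude ~ {intercept:.2f} + {slope:.2f} * log10(interval in ms)")
```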

  17. [Communication and auditory behavior obtained by auditory evoked potentials in mammals, birds, amphibians, and reptiles].

    Science.gov (United States)

    Arch-Tirado, Emilio; Collado-Corona, Miguel Angel; Morales-Martínez, José de Jesús

    2004-01-01

    Amphibians: Rana catesbeiana (bullfrog, 30 animals); reptiles: Sceloporus torquatus (common small lizard, 22 animals); birds: Columba livia (common dove, 20 animals); and mammals: Cavia porcellus (guinea pig, 20 animals). Regarding lodging, all animals were maintained at the Institute of Human Communication Disorders, were fed special food for each species, and had water available ad libitum. Regarding the procedure, for recording brainstem auditory evoked potentials the amphibians, birds, and mammals were anesthetized with ketamine at 20, 25, and 50 mg/kg, respectively, by injection; reptiles were anesthetized by cooling (6 degrees C). Needle electrodes were placed along an imaginary line on the mid-sagittal line between both ears and eyes, behind the right ear, and behind the left ear. Stimulation was carried out in a noise-free site by means of a loudspeaker in free field. The signal was filtered between 100 and 3,000 Hz and analyzed on a computer for evoked potentials (Racia APE 78). In amphibians, the evoked wave responses showed greater latency than in the other species. In reptiles, latency was shorter than in amphibians. Birds showed the lowest latency values, while guinea pig latencies were greater than those of doves; however, guinea pigs responded to stimulation at 10 dB, demonstrating the best auditory threshold of the four species studied. Finally, it was corroborated that the auditory threshold of each species decreases as one advances along the phylogenetic scale. From these recordings, we are able to say that the brainstem evoked potential response becomes more complex and shows smaller absolute latency values as we advance along the phylogenetic scale; accordingly, the auditory thresholds found are in better agreement with the phylogenetic scale among the studied species. These data indicate to us that the seeking of auditory information is more complex in more

  18. Demodulation Processes in Auditory Perception

    National Research Council Canada - National Science Library

    Feth, Lawrence

    1997-01-01

    The long range goal of this project was the understanding of human auditory processing of information conveyed by complex, time varying signals such as speech, music or important environmental sounds...

  19. Auditory and Visual Sensations

    CERN Document Server

    Ando, Yoichi

    2010-01-01

    Professor Yoichi Ando, acoustic architectural designer of the Kirishima International Concert Hall in Japan, presents a comprehensive rational-scientific approach to designing performance spaces. His theory is based on systematic psychoacoustical observations of spatial hearing and listener preferences, whose neuronal correlates are observed in the neurophysiology of the human brain. A correlation-based model of neuronal signal processing in the central auditory system is proposed in which temporal sensations (pitch, timbre, loudness, duration) are represented by an internal autocorrelation representation, and spatial sensations (sound location, size, diffuseness related to envelopment) are represented by an internal interaural crosscorrelation function. Together these two internal central auditory representations account for the basic auditory qualities that are relevant for listening to music and speech in indoor performance spaces. Observed psychological and neurophysiological commonalities between auditor...

  20. Dealing with noise and physiological artifacts in human EEG recordings: empirical mode methods

    Science.gov (United States)

    Runnova, Anastasiya E.; Grubov, Vadim V.; Khramova, Marina V.; Hramov, Alexander E.

    2017-04-01

    In this paper we propose a new method for removing noise and physiological artifacts from human EEG recordings based on empirical mode decomposition (the Hilbert-Huang transform). As physiological artifacts we consider specific oscillatory patterns that cause problems during EEG analysis and can be detected with additional signals recorded simultaneously with the EEG (ECG, EMG, EOG, etc.). The algorithm of the proposed method comprises the following steps: empirical mode decomposition of the EEG signal, selection of the empirical modes containing artifacts, removal of these empirical modes, and reconstruction of the initial EEG signal. We demonstrate the efficiency of the method on the example of filtering eye-movement artifacts out of a human EEG signal.
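
    A minimal sketch of this general approach is given below, assuming the PyEMD package for empirical mode decomposition and a simultaneously recorded EOG channel as the artifact reference; the correlation threshold and all signals are illustrative assumptions, not the authors' implementation.

```python
# Sketch of EMD-based artifact removal (assumes the PyEMD package, pip install EMD-signal).
import numpy as np
from PyEMD import EMD

def remove_eog_artifacts(eeg, eog, corr_threshold=0.4):
    """Decompose EEG into intrinsic mode functions (IMFs), drop IMFs that
    correlate strongly with the EOG reference, and reconstruct the EEG."""
    imfs = EMD()(eeg)                              # array of IMFs, shape (n_imfs, n_samples)
    kept = [imf for imf in imfs
            if abs(np.corrcoef(imf, eog)[0, 1]) < corr_threshold]
    return np.sum(kept, axis=0) if kept else np.zeros_like(eeg)

# Toy usage with synthetic data (10 s at 250 Hz); sparse spikes stand in for eye movements.
fs = 250
t = np.arange(0, 10, 1 / fs)
rng = np.random.default_rng(0)
eog = (rng.random(t.size) > 0.995) * 50.0          # blink-like artifact reference
eeg = np.sin(2 * np.pi * 10 * t) + eog + 0.5 * rng.standard_normal(t.size)
clean = remove_eog_artifacts(eeg, eog)
```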

  1. Generation of human auditory steady-state responses (SSRs). II: Addition of responses to individual stimuli.

    Science.gov (United States)

    Santarelli, R; Maurizi, M; Conti, G; Ottaviani, F; Paludetti, G; Pettorossi, V E

    1995-03-01

    In order to investigate the generation of the 40 Hz steady-state response (SSR), auditory potentials evoked by clicks were recorded in 16 healthy subjects under two stimulation conditions. First, repetition rates of 7.9 and 40 Hz were used to obtain individual middle latency responses (MLRs) and 40 Hz-SSRs, respectively. In the second condition, click trains of eight clicks were presented at a 40 Hz repetition rate with an inter-train interval of 126 ms. From the whole train response we extracted: (1) the response segment occurring after the last click of the train (last click response, LCR), and (2) a modified LCR (mLCR) obtained by removing from the LCR the amplitude enhancement due to the overlap of the responses to the clicks preceding the last one within the stimulus train. In comparison to MLRs, the most relevant feature of the evoked activity following the last click of the train (LCRs, mLCRs) was the appearance, in the 50-110 ms latency range, of one (in 11 subjects) or two (in 2 subjects) additional positive-negative deflections having the same periodicity as the MLR waves. The grand average (GA) of the 40 Hz-SSRs was compared with three predictions synthesized by superimposing: (1) the GA of MLRs, (2) the GA of LCRs, (3) the GA of mLCRs. Both the MLR and mLCR predictions reproduced the recorded signal in amplitude, whereas the amplitude of the LCR prediction was almost twice that of the 40 Hz-SSR. With regard to phase, the MLR, LCR and mLCR predictions all closely matched the recorded signal. Our findings confirm the effectiveness of the linear addition mechanism in the generation of the 40 Hz-SSR. However, the responses to individual stimuli within the 40 Hz-SSR differ from MLRs because of additional periodic activity. These results suggest that phenomena related to the resonant frequency of the activated system may play a role in the mechanisms which interact to generate the 40 Hz-SSR.
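
    The linear-addition idea tested above can be illustrated by shifting a single-stimulus response by the 40 Hz inter-click interval (25 ms) and summing the copies. The sketch below uses a toy damped oscillation in place of a recorded MLR; all values are assumptions.

```python
# Hedged sketch of a superposition prediction: copies of one response, shifted
# by the inter-click interval, are summed to predict the steady-state response.
import numpy as np

fs = 1000                                          # assumed sampling rate (Hz)
t = np.arange(0, 0.2, 1 / fs)                      # 200-ms single-response epoch
toy_mlr = np.exp(-t / 0.05) * np.sin(2 * np.pi * 30 * t)   # stand-in for an MLR

def predict_ssr(single_response, rate_hz, n_clicks, fs):
    """Superimpose time-shifted copies of a single-stimulus response."""
    shift = int(round(fs / rate_hz))               # samples between clicks (25 ms at 40 Hz)
    out = np.zeros(len(single_response) + shift * (n_clicks - 1))
    for k in range(n_clicks):
        out[k * shift:k * shift + len(single_response)] += single_response
    return out

ssr_prediction = predict_ssr(toy_mlr, rate_hz=40, n_clicks=8, fs=fs)
```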

  2. Left Superior Temporal Gyrus Is Coupled to Attended Speech in a Cocktail-Party Auditory Scene.

    Science.gov (United States)

    Vander Ghinst, Marc; Bourguignon, Mathieu; Op de Beeck, Marc; Wens, Vincent; Marty, Brice; Hassid, Sergio; Choufani, Georges; Jousmäki, Veikko; Hari, Riitta; Van Bogaert, Patrick; Goldman, Serge; De Tiège, Xavier

    2016-02-03

    Using a continuous listening task, we evaluated the coupling between the listener's cortical activity and the temporal envelopes of different sounds in a multitalker auditory scene using magnetoencephalography and corticovocal coherence analysis. Neuromagnetic signals were recorded from 20 right-handed healthy adult humans who listened to five different recorded stories (attended speech streams), one without any multitalker background (No noise) and four mixed with a "cocktail party" multitalker background noise at four signal-to-noise ratios (5, 0, -5, and -10 dB) to produce speech-in-noise mixtures, here referred to as Global scene. Coherence analysis revealed that the modulations of the attended speech stream, presented without multitalker background, were coupled at ∼0.5 Hz to the activity of both superior temporal gyri, whereas the modulations at 4-8 Hz were coupled to the activity of the right supratemporal auditory cortex. In the cocktail party conditions, with the multitalker background noise, the coupling at both frequencies was stronger for the attended speech stream than for the unattended Multitalker background. The coupling strengths decreased as the level of the Multitalker background increased. During the cocktail party conditions, the ∼0.5 Hz coupling became left-hemisphere dominant, compared with bilateral coupling without the multitalker background, whereas the 4-8 Hz coupling remained right-hemisphere lateralized in both conditions. The brain activity was not coupled to the multitalker background or to its individual talkers. The results highlight the key role of the listener's left superior temporal gyri in extracting the slow ∼0.5 Hz modulations, likely reflecting the attended speech stream within a multitalker auditory scene. When people listen to one person in a "cocktail party," their auditory cortex mainly follows the attended speech stream rather than the entire auditory scene. However, how the brain extracts the attended speech stream from the whole
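
    Coupling of this kind is commonly quantified as spectral coherence between the speech temporal envelope and a cortical signal. The sketch below only illustrates that computation, using scipy and simulated signals; it is not the authors' corticovocal coherence pipeline.

```python
# Illustrative coherence between a speech envelope and one cortical channel.
import numpy as np
from scipy.signal import coherence, hilbert

fs = 200                                           # assumed post-downsampling rate (Hz)
t = np.arange(0, 60, 1 / fs)                       # one minute of data
rng = np.random.default_rng(1)

speech = rng.standard_normal(t.size)               # stand-in for a speech waveform
envelope = np.abs(hilbert(speech))                 # temporal envelope of the speech
cortical = 0.3 * envelope + rng.standard_normal(t.size)   # toy signal tracking the envelope

f, coh = coherence(envelope, cortical, fs=fs, nperseg=4 * fs)   # 4-s windows, 0.25-Hz bins
slow_band = coh[(f >= 0.25) & (f <= 1.0)].mean()   # around the ~0.5-Hz modulations
theta_band = coh[(f >= 4.0) & (f <= 8.0)].mean()   # the 4-8-Hz modulations
print(f"coherence ~0.5 Hz: {slow_band:.2f}, 4-8 Hz: {theta_band:.2f}")
```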

  3. Acoustic fMRI noise : Linear time-invariant system model

    NARCIS (Netherlands)

    Sierra, Carlos V. Rizzo; Versluis, Maarten J.; Hoogduin, Johannes M.; Duifhuis, Hendrikus (Diek)

    Functional magnetic resonance imaging (fMRI) enables sites of brain activation to be localized in human subjects. For auditory system studies, however, the acoustic noise generated by the scanner tends to interfere with the assessments of this activation. Understanding and modeling fMRI acoustic

  4. Mutism and auditory agnosia due to bilateral insular damage--role of the insula in human communication.

    Science.gov (United States)

    Habib, M; Daquin, G; Milandre, L; Royere, M L; Rey, M; Lanteri, A; Salamon, G; Khalil, R

    1995-03-01

    We report a case of transient mutism and persistent auditory agnosia due to two successive ischemic infarcts mainly involving the insular cortex in both hemispheres. During the 'mutic' period, which lasted about 1 month, the patient did not respond to any auditory stimuli and made no effort to communicate. On follow-up examinations, language competences had reappeared almost intact, but a massive auditory agnosia for non-verbal sounds was observed. From close inspection of the lesion sites, as determined with brain magnetic resonance imaging, and from a study of auditory evoked potentials, it is concluded that bilateral insular damage was crucial to both the expressive and receptive components of the syndrome. The role of the insula in verbal and non-verbal communication is discussed in the light of anatomical descriptions of the pattern of connectivity of the insular cortex.

  5. Enhanced peripheral visual processing in congenitally deaf humans is supported by multiple brain regions, including primary auditory cortex

    OpenAIRE

    Scott, Gregory D.; Karns, Christina M.; Dow, Mark W.; Stevens, Courtney; Neville, Helen J.

    2014-01-01

    Brain reorganization associated with altered sensory experience clarifies the critical role of neuroplasticity in development. An example is enhanced peripheral visual processing associated with congenital deafness, but the neural systems supporting this have not been fully characterized. A gap in our understanding of deafness-enhanced peripheral vision is the contribution of primary auditory cortex. Previous studies of auditory cortex that use anatomical normalization across participants wer...

  6. A general auditory bias for handling speaker variability in speech? Evidence in humans and songbirds

    NARCIS (Netherlands)

    Kriengwatana, B.; Escudero, P.; Kerkhoven, A.H.; ten Cate, C.

    2015-01-01

    Different speakers produce the same speech sound differently, yet listeners are still able to reliably identify the speech sound. How listeners can adjust their perception to compensate for speaker differences in speech, and whether these compensatory processes are unique only to humans, is still

  7. Neuromagnetic Representation of Musical Register Information in Human Auditory Cortex

    NARCIS (Netherlands)

    Andermann, M.; Van Dinther, C.H.B.A.; Patterson, R.D.; Rupp, A.

    2011-01-01

    Pulse-resonance sounds like vowels or instrumental tones contain acoustic information about the physical size of the sound source (pulse rate) and body resonators (resonance scale). Previous research has revealed correlates of these variables in humans using functional neuroimaging. Here, we report

  8. How do auditory cortex neurons represent communication sounds?

    Science.gov (United States)

    Gaucher, Quentin; Huetz, Chloé; Gourévitch, Boris; Laudanski, Jonathan; Occelli, Florian; Edeline, Jean-Marc

    2013-11-01

    A major goal in auditory neuroscience is to characterize how communication sounds are represented at the cortical level. The present review aims at investigating the role of auditory cortex in the processing of speech, bird songs and other vocalizations, which all are spectrally and temporally highly structured sounds. Whereas earlier studies have simply looked for neurons exhibiting higher firing rates to particular conspecific vocalizations over their modified, artificially synthesized versions, more recent studies determined the coding capacity of temporal spike patterns, which are prominent in primary and non-primary areas (and also in non-auditory cortical areas). In several cases, this information seems to be correlated with the behavioral performance of human or animal subjects, suggesting that spike-timing based coding strategies might set the foundations of our perceptive abilities. Also, it is now clear that the responses of auditory cortex neurons are highly nonlinear and that their responses to natural stimuli cannot be predicted from their responses to artificial stimuli such as moving ripples and broadband noises. Since auditory cortex neurons cannot follow rapid fluctuations of the vocalizations envelope, they only respond at specific time points during communication sounds, which can serve as temporal markers for integrating the temporal and spectral processing taking place at subcortical relays. Thus, the temporal sparse code of auditory cortex neurons can be considered as a first step for generating high level representations of communication sounds independent of the acoustic characteristic of these sounds. This article is part of a Special Issue entitled "Communication Sounds and the Brain: New Directions and Perspectives". Copyright © 2013 Elsevier B.V. All rights reserved.

  9. The Physiological Bases of Hidden Noise-Induced Hearing Loss: Protocol for a Functional Neuroimaging Study.

    Science.gov (United States)

    Dewey, Rebecca Susan; Hall, Deborah A; Guest, Hannah; Prendergast, Garreth; Plack, Christopher J; Francis, Susan T

    2018-03-09

    Rodent studies indicate that noise exposure can cause permanent damage to synapses between inner hair cells and high-threshold auditory nerve fibers, without permanently altering threshold sensitivity. These demonstrations of what is commonly known as hidden hearing loss have been confirmed in several rodent species, but the implications for human hearing are unclear. Our Medical Research Council-funded program aims to address this unanswered question, by investigating functional consequences of the damage to the human peripheral and central auditory nervous system that results from cumulative lifetime noise exposure. Behavioral and neuroimaging techniques are being used in a series of parallel studies aimed at detecting hidden hearing loss in humans. The planned neuroimaging study aims to (1) identify central auditory biomarkers associated with hidden hearing loss; (2) investigate whether there are any additive contributions from tinnitus or diminished sound tolerance, which are often comorbid with hearing problems; and (3) explore the relation between subcortical functional magnetic resonance imaging (fMRI) measures and the auditory brainstem response (ABR). Individuals aged 25 to 40 years with pure tone hearing thresholds ≤20 dB hearing level over the range 500 Hz to 8 kHz and no contraindications for MRI or signs of ear disease will be recruited into the study. Lifetime noise exposure will be estimated using an in-depth structured interview. Auditory responses throughout the central auditory system will be recorded using ABR and fMRI. Analyses will focus predominantly on correlations between lifetime noise exposure and auditory response characteristics. This paper reports the study protocol. The funding was awarded in July 2013. Enrollment for the study described in this protocol commenced in February 2017 and was completed in December 2017. Results are expected in 2018. This challenging and comprehensive study will have the potential to impact diagnostic

  10. BDNF Increases Survival and Neuronal Differentiation of Human Neural Precursor Cells Cotransplanted with a Nanofiber Gel to the Auditory Nerve in a Rat Model of Neuronal Damage

    Directory of Open Access Journals (Sweden)

    Yu Jiao

    2014-01-01

    Full Text Available Objectives. To study possible nerve regeneration of a damaged auditory nerve by the use of stem cell transplantation. Methods. We transplanted HNPCs to the rat AN trunk by the internal auditory meatus (IAM). Furthermore, we studied if addition of BDNF affects survival and phenotypic differentiation of the grafted HNPCs. A bioactive nanofiber gel (PA gel), in selected groups mixed with BDNF, was applied close to the implanted cells. Before transplantation, all rats had been deafened by a round window niche application of β-bungarotoxin. This neurotoxin causes a selective toxic destruction of the AN while keeping the hair cells intact. Results. Overall, HNPCs survived well for up to six weeks in all groups. However, transplants receiving the BDNF-containing PA gel demonstrated significantly higher numbers of HNPCs and neuronal differentiation. At six weeks, a majority of the HNPCs had migrated into the brain stem and differentiated. Differentiated human cells as well as neurites were observed in the vicinity of the cochlear nucleus. Conclusion. Our results indicate that human neural precursor cell (HNPC) integration with host tissue benefits from additional brain-derived neurotrophic factor (BDNF) treatment and that these cells appear to be good candidates for further regenerative studies on the auditory nerve (AN).

  11. Towards Clinical Application of Neurotrophic Factors to the Auditory Nerve; Assessment of Safety and Efficacy by a Systematic Review of Neurotrophic Treatments in Humans

    Directory of Open Access Journals (Sweden)

    Aren Bezdjian

    2016-11-01

    Full Text Available Animal studies have evidenced protection of the auditory nerve by exogenous neurotrophic factors. In order to assess clinical applicability of neurotrophic treatment of the auditory nerve, the safety and efficacy of neurotrophic therapies in various human disorders were systematically reviewed. Outcomes of our literature search included disorder, neurotrophic factor, administration route, therapeutic outcome, and adverse event. From 2103 articles retrieved, 20 randomized controlled trials including 3974 patients were selected. Amyotrophic lateral sclerosis (53%) was the most frequently reported indication for neurotrophic therapy, followed by diabetic polyneuropathy (28%). Ciliary neurotrophic factor (50%), nerve growth factor (24%) and insulin-like growth factor (21%) were most often used. Injection site reaction was a frequently occurring adverse event (61%), followed by asthenia (24%) and gastrointestinal disturbances (20%). Eighteen out of 20 trials deemed neurotrophic therapy to be safe, and six out of 17 studies concluded the neurotrophic therapy to be effective. Positive outcomes were generally small or contradicted by other studies. Most non-neurodegenerative diseases treated by targeted deliveries of neurotrophic factors were considered safe and effective. Hence, since local delivery to the cochlea is feasible, translation from animal studies to human trials in treating auditory nerve degeneration seems promising.

  12. Complexity and multifractality of neuronal noise in mouse and human hippocampal epileptiform dynamics

    Science.gov (United States)

    Serletis, Demitre; Bardakjian, Berj L.; Valiante, Taufik A.; Carlen, Peter L.

    2012-10-01

    Fractal methods offer an invaluable means of investigating turbulent nonlinearity in non-stationary biomedical recordings from the brain. Here, we investigate properties of complexity (i.e. the correlation dimension, maximum Lyapunov exponent, 1/f^γ noise and approximate entropy) and multifractality in background neuronal noise-like activity underlying epileptiform transitions recorded at the intracellular and local network scales from two in vitro models: the whole-intact mouse hippocampus and lesional human hippocampal slices. Our results show evidence for reduced dynamical complexity and multifractal signal features following transition to the ictal epileptiform state. These findings suggest that pathological breakdown in multifractal complexity coincides with loss of signal variability or heterogeneity, consistent with an unhealthy ictal state that is far from the equilibrium of turbulent yet healthy fractal dynamics in the brain. Thus, it appears that background noise-like activity successfully captures complex and multifractal signal features that may, at least in part, be used to classify and identify brain state transitions in the healthy and epileptic brain, offering potential promise for therapeutic neuromodulatory strategies for afflicted patients suffering from epilepsy and other related neurological disorders. This paper is based on chapter 5 of Serletis (2010, PhD dissertation, Department of Physiology, Institute of Biomaterials and Biomedical Engineering, University of Toronto).
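
    One of the measures named above, Lempel-Ziv complexity, can be computed on a binarized trace as in the sketch below; the median-threshold binarization and the random test signal are assumptions for illustration only, not the authors' code.

```python
# Sketch of Lempel-Ziv complexity (LZ76-style phrase counting) on a binarized signal.
import numpy as np

def lempel_ziv_complexity(bits):
    """Count the number of distinct phrases parsed left to right."""
    s = "".join(map(str, bits))
    i, count, n = 0, 0, len(s)
    while i < n:
        k = 1
        # grow the current phrase until it no longer occurs in the preceding history
        while i + k <= n and s[i:i + k] in s[:i + k - 1]:
            k += 1
        count += 1
        i += k
    return count

signal = np.random.randn(2000)                     # stand-in for a voltage trace
binary = (signal > np.median(signal)).astype(int)  # simple median binarization
print("Lempel-Ziv complexity:", lempel_ziv_complexity(binary))
```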

  13. Enhanced peripheral visual processing in congenitally deaf humans is supported by multiple brain regions, including primary auditory cortex

    Directory of Open Access Journals (Sweden)

    Gregory D. Scott

    2014-03-01

    Full Text Available Brain reorganization associated with altered sensory experience clarifies the critical role of neuroplasticity in development. An example is enhanced peripheral visual processing associated with congenital deafness, but the neural systems supporting this have not been fully characterized. A gap in our understanding of deafness-enhanced peripheral vision is the contribution of primary auditory cortex. Previous studies of auditory cortex that use anatomical normalization across participants were limited by inter-subject variability of Heschl’s gyrus. In addition to reorganized auditory cortex (cross-modal plasticity), a second gap in our understanding is the contribution of altered modality-specific cortices (visual intramodal plasticity in this case), as well as supramodal and multisensory cortices, especially when target detection is required across contrasts. Here we address these gaps by comparing fMRI signal change for peripheral versus perifoveal visual stimulation (11-15° vs. 2-7°) in congenitally deaf and hearing participants in a blocked experimental design with two analytical approaches: a Heschl’s gyrus region of interest analysis and a whole brain analysis. Our results using individually-defined primary auditory cortex (Heschl’s gyrus) indicate that fMRI signal change for more peripheral stimuli was greater than perifoveal in deaf but not in hearing participants. Whole-brain analyses revealed differences between deaf and hearing participants for peripheral versus perifoveal visual processing in extrastriate visual cortex including primary auditory cortex, MT+/V5, superior-temporal auditory and multisensory and/or supramodal regions, such as posterior parietal cortex, frontal eye fields, anterior cingulate, and supplementary eye fields. Overall, these data demonstrate the contribution of neuroplasticity in multiple systems including primary auditory cortex, supramodal and multisensory regions, to altered visual processing in

  14. Binaural auditory beats affect vigilance performance and mood.

    Science.gov (United States)

    Lane, J D; Kasian, S J; Owens, J E; Marsh, G R

    1998-01-01

    When two tones of slightly different frequency are presented separately to the left and right ears the listener perceives a single tone that varies in amplitude at a frequency equal to the frequency difference between the two tones, a perceptual phenomenon known as the binaural auditory beat. Anecdotal reports suggest that binaural auditory beats within the electroencephalograph frequency range can entrain EEG activity and may affect states of consciousness, although few scientific studies have been published. This study compared the effects of binaural auditory beats in the EEG beta and EEG theta/delta frequency ranges on mood and on performance of a vigilance task to investigate their effects on subjective and objective measures of arousal. Participants (n = 29) performed a 30-min visual vigilance task on three different days while listening to pink noise containing simple tones or binaural beats either in the beta range (16 and 24 Hz) or the theta/delta range (1.5 and 4 Hz). However, participants were kept blind to the presence of binaural beats to control expectation effects. Presentation of beta-frequency binaural beats yielded more correct target detections and fewer false alarms than presentation of theta/delta frequency binaural beats. In addition, the beta-frequency beats were associated with less negative mood. Results suggest that the presentation of binaural auditory beats can affect psychomotor performance and mood. This technology may have applications for the control of attention and arousal and the enhancement of human performance.
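
    A binaural-beat stimulus of the kind used here is simply two pure tones, offset by the desired beat frequency, delivered one per ear. The sketch below generates such a stereo pair; the carrier frequency and duration are illustrative assumptions (the study embedded the tones in pink noise).

```python
# Toy binaural-beat stimulus: left and right tones differing by the beat frequency.
import numpy as np

fs = 44100
duration_s = 5.0
t = np.arange(int(fs * duration_s)) / fs
carrier_hz = 400.0                                 # assumed carrier frequency
beat_hz = 16.0                                     # beta-range beat, as in the study

left = np.sin(2 * np.pi * carrier_hz * t)
right = np.sin(2 * np.pi * (carrier_hz + beat_hz) * t)
stereo = np.stack([left, right], axis=1)           # column 0 = left ear, column 1 = right ear
# Over headphones, listeners report an amplitude fluctuation at beat_hz even though
# neither ear receives a 16-Hz acoustic component.
```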

  15. Auditory short-term memory in the primate auditory cortex

    OpenAIRE

    Scott, Brian H.; Mishkin, Mortimer

    2015-01-01

    Sounds are fleeting, and assembling the sequence of inputs at the ear into a coherent percept requires auditory memory across various time scales. Auditory short-term memory comprises at least two components: an active 'working memory' bolstered by rehearsal, and a sensory trace that may be passively retained. Working memory relies on representations recalled from long-term memory, and their rehearsal may require phonological mechanisms unique to humans. The sensory component, passive sho...

  16. Time-frequency analysis with temporal and spectral resolution as the human auditory system

    DEFF Research Database (Denmark)

    Agerkvist, Finn T.

    1992-01-01

    The human perception of sound is a suitable area for the application of a simultaneous time-frequency analysis, since the ear is selective in both domains. A perfect reconstruction filter bank with bandwidths approximating the critical bands is presented. The orthogonality of the filter makes it possible to examine the masking effect with realistic signals. The tree structure of the filter bank makes it difficult to obtain well-attenuated stop-bands. The use of filters of different length solves this problem...

  17. Central auditory processing outcome after stroke in children

    Directory of Open Access Journals (Sweden)

    Karla M. I. Freiria Elias

    2014-09-01

    Full Text Available Objective To investigate central auditory processing in children with unilateral stroke and to verify whether the hemisphere affected by the lesion influenced auditory competence. Method 23 children (13 male) between 7 and 16 years old were evaluated through speech-in-noise tests (auditory closure), the dichotic digit test and staggered spondaic word test (selective attention), and the pitch pattern and duration pattern sequence tests (temporal processing), and their results were compared with those of control children. Auditory competence was established according to performance in auditory analysis ability. Results Similar performance between groups was verified in auditory closure ability, together with pronounced deficits in selective attention and temporal processing abilities. Most children with stroke showed auditory ability impaired to a moderate degree. Conclusion Children with stroke showed deficits in auditory processing, and the degree of impairment was not related to the hemisphere affected by the lesion.

  18. The psychosis-like effects of Δ(9)-tetrahydrocannabinol are associated with increased cortical noise in healthy humans.

    Science.gov (United States)

    Cortes-Briones, Jose A; Cahill, John D; Skosnik, Patrick D; Mathalon, Daniel H; Williams, Ashley; Sewell, R Andrew; Roach, Brian J; Ford, Judith M; Ranganathan, Mohini; D'Souza, Deepak Cyril

    2015-12-01

    Drugs that induce psychosis may do so by increasing the level of task-irrelevant random neural activity or neural noise. Increased levels of neural noise have been demonstrated in psychotic disorders. We tested the hypothesis that neural noise could also be involved in the psychotomimetic effects of delta-9-tetrahydrocannabinol (Δ(9)-THC), the principal active constituent of cannabis. Neural noise was indexed by measuring the level of randomness in the electroencephalogram during the prestimulus baseline period of an oddball task using Lempel-Ziv complexity, a nonlinear measure of signal randomness. The acute, dose-related effects of Δ(9)-THC on Lempel-Ziv complexity and signal power were studied in humans (n = 24) who completed 3 test days during which they received intravenous Δ(9)-THC (placebo, .015 and .03 mg/kg) in a double-blind, randomized, crossover, and counterbalanced design. Δ(9)-THC increased neural noise in a dose-related manner. Furthermore, there was a strong positive relationship between neural noise and the psychosis-like positive and disorganization symptoms induced by Δ(9)-THC, which was independent of total signal power. Instead, there was no relationship between noise and negative-like symptoms. In addition, Δ(9)-THC reduced total signal power during both active drug conditions compared with placebo, but no relationship was detected between signal power and psychosis-like symptoms. At doses that produced psychosis-like effects, Δ(9)-THC increased neural noise in humans in a dose-dependent manner. Furthermore, increases in neural noise were related with increases in Δ(9)-THC-induced psychosis-like symptoms but not negative-like symptoms. These findings suggest that increases in neural noise may contribute to the psychotomimetic effects of Δ(9)-THC. Published by Elsevier Inc.

  19. Sensory augmentation: integration of an auditory compass signal into human perception of space

    Science.gov (United States)

    Schumann, Frank; O’Regan, J. Kevin

    2017-01-01

    Bio-mimetic approaches to restoring sensory function show great promise in that they rapidly produce perceptual experience, but have the disadvantage of being invasive. In contrast, sensory substitution approaches are non-invasive, but may lead to cognitive rather than perceptual experience. Here we introduce a new non-invasive approach that leads to fast and truly perceptual experience like bio-mimetic techniques. Instead of building on existing circuits at the neural level as done in bio-mimetics, we piggy-back on sensorimotor contingencies at the stimulus level. We convey head orientation to geomagnetic North, a reliable spatial relation not normally sensed by humans, by mimicking sensorimotor contingencies of distal sounds via head-related transfer functions. We demonstrate rapid and long-lasting integration into the perception of self-rotation. Short training with amplified or reduced rotation gain in the magnetic signal can expand or compress the perceived extent of vestibular self-rotation, even with the magnetic signal absent in the test. We argue that it is the reliability of the magnetic signal that allows vestibular spatial recalibration, and the coding scheme mimicking sensorimotor contingencies of distal sounds that permits fast integration. Hence we propose that contingency-mimetic feedback has great potential for creating sensory augmentation devices that achieve fast and genuinely perceptual experiences. PMID:28195187

  20. Computational spectrotemporal auditory model with applications to acoustical information processing

    Science.gov (United States)

    Chi, Tai-Shih

    A computational spectrotemporal auditory model based on neurophysiological findings in early auditory and cortical stages is described. The model provides a unified multiresolution representation of the spectral and temporal features of sound likely critical in the perception of timbre. Several types of complex stimuli are used to demonstrate the spectrotemporal information preserved by the model. As shown by these examples, this two-stage model reflects the apparent progressive loss of temporal dynamics along the auditory pathway from the rapid phase-locking (several kHz in the auditory nerve), to moderate rates of synchrony (several hundred Hz in the midbrain), to much lower rates of modulations in the cortex (around 30 Hz). To complete this model, several projection-based reconstruction algorithms are implemented to resynthesize the sound from the representations with reduced dynamics. One particular application of this model is to assess speech intelligibility. The spectro-temporal modulation transfer functions (MTFs) of this model are investigated and shown to be consistent with the salient trends in the human MTFs (derived from human detection thresholds), which exhibit a lowpass function with respect to both spectral and temporal dimensions, with 50% bandwidths of about 16 Hz and 2 cycles/octave. Therefore, the model is used to demonstrate the potential relevance of these MTFs to the assessment of speech intelligibility in noise and reverberant conditions. Another useful feature is the phase singularity that emerges in the scale space generated by this multiscale auditory model. The singularity is shown to have certain robust properties and to carry crucial information about the spectral profile. This claim is justified by the perceptually tolerable resynthesized sounds obtained from the nonconvex singularity set. In addition, the singularity set is demonstrated to encode the pitch and formants at different scales. These properties make the singularity set very suitable for traditional

  1. Estimating individual listeners’ auditory-filter bandwidth in simultaneous and non-simultaneous masking

    DEFF Research Database (Denmark)

    Buchholz, Jörg; Caminade, Sabine; Strelcyk, Olaf

    2010-01-01

    Frequency selectivity in the human auditory system is often measured using simultaneous masking of tones presented in notched noise. Based on such masking data, the equivalent rectangular bandwidth (ERB) of the auditory filters can be derived by applying the power spectrum model of masking. In previous studies based on forward masking, only bandwidth estimates averaged across a number of subjects have been considered. The present study is concerned with bandwidth estimates in simultaneous and forward masking in individual normal-hearing subjects. In order to investigate the reliability of the individual estimates, a statistical resampling method is applied. It is demonstrated that a rather large set of experimental data is required to reliably estimate auditory-filter bandwidth, particularly in the case of simultaneous masking. The poor overall reliability of the filter...
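
    For orientation, a widely used approximation of the normal-hearing ERB as a function of centre frequency is the Glasberg and Moore (1990) formula sketched below; the study itself estimates individual ERBs from the notched-noise masking data rather than from this formula.

```python
# Glasberg & Moore (1990) approximation: ERB(f) = 24.7 * (4.37 * f/1000 + 1), f in Hz.
def erb_hz(centre_frequency_hz: float) -> float:
    """Equivalent rectangular bandwidth (Hz) of the normal auditory filter."""
    return 24.7 * (4.37 * centre_frequency_hz / 1000.0 + 1.0)

for f in (500, 1000, 2000, 4000):
    print(f"{f} Hz: ERB ~ {erb_hz(f):.1f} Hz")
```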

  2. Effects of white noise on event-related potentials in somatosensory Go/No-go paradigms.

    Science.gov (United States)

    Ohbayashi, Wakana; Kakigi, Ryusuke; Nakata, Hiroki

    2017-09-06

    Exposure to auditory white noise has been shown to facilitate human cognitive function. This phenomenon is termed stochastic resonance, and a moderate amount of auditory noise has been suggested to benefit individuals in hypodopaminergic states. The present study investigated the effects of white noise on the N140 and P300 components of event-related potentials in somatosensory Go/No-go paradigms. A Go or No-go stimulus was presented to the second or fifth digit of the left hand, respectively, with equal probability. Participants performed somatosensory Go/No-go paradigms while hearing three different white noise levels (45, 55, and 65 dB conditions). The peak amplitudes of Go-P300 and No-go-P300 in the ERP waveforms were significantly larger under the 55 dB condition than under the 45 and 65 dB conditions. White noise did not affect the peak latency of N140 or P300, or the peak amplitude of N140. Behavioral data (reaction time, SD of reaction time, and error rates) showed no effect of white noise. This is the first event-related potential study to show that exposure to auditory white noise at 55 dB enhanced the amplitude of P300 during Go/No-go paradigms, reflecting changes in the neural activation of response execution and inhibition processing.
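
    The stochastic-resonance idea invoked above can be illustrated with a toy threshold detector: a subthreshold periodic signal is transmitted best at a moderate noise level. All parameters in the sketch are arbitrary and unrelated to the study's stimuli.

```python
# Toy stochastic resonance: detector output tracks a subthreshold signal best
# at an intermediate noise level (an inverted-U pattern).
import numpy as np

rng = np.random.default_rng(2)
fs, dur = 1000, 10.0
t = np.arange(int(fs * dur)) / fs
signal = 0.8 * np.sin(2 * np.pi * 2.0 * t)         # peak 0.8, below the threshold of 1.0
threshold = 1.0

def detector_correlation(noise_sd):
    noisy = signal + rng.normal(0.0, noise_sd, t.size)
    detections = (noisy > threshold).astype(float)  # hard-threshold "neuron"
    if detections.std() == 0:                       # no threshold crossings at all
        return 0.0
    return np.corrcoef(detections, signal)[0, 1]

for sd in (0.05, 0.3, 3.0):                         # too little / moderate / too much noise
    print(f"noise sd = {sd}: output-signal correlation = {detector_correlation(sd):.2f}")
```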

  3. Auditory and extra-auditory effects of occupational exposure to noise and vibration

    Directory of Open Access Journals (Sweden)

    Márcia Fernandes

    2002-10-01

    compact hydraulic excavators. The 73 participants underwent an interview, otoscopy, and pure-tone audiometry. Regarding general health, group 2 workers, exposed to whole-body vibration, presented the highest number of complaints. Results: All the participants from group 1 used hearing protectors, and 11% of them complained about tinnitus. Not all workers from group 2 used hearing protectors, and 17% of them reported tinnitus. However, group 1, exposed to hand-arm vibration, was the group with the highest percentage of abnormal audiograms. Conclusion: This study revealed a series of weaknesses in the health surveillance of these populations and indicated the need for the implementation of preventive programs focusing on their exposure to noise and vibration.

  4. Motion processing after sight restoration: No competition between visual recovery and auditory compensation.

    Science.gov (United States)

    Bottari, Davide; Kekunnaya, Ramesh; Hense, Marlene; Troje, Nikolaus F; Sourav, Suddha; Röder, Brigitte

    2018-02-15

    The present study tested whether or not functional adaptations following congenital blindness are maintained in humans after sight restoration and whether they interfere with visual recovery. In permanently congenitally blind individuals, both intramodal plasticity (e.g. changes in auditory cortex) and crossmodal plasticity (e.g. an activation of visual cortex by auditory stimuli) have been observed. Both phenomena were hypothesized to contribute to improved auditory functions. For example, it has been shown that early permanently blind individuals outperform sighted controls in auditory motion processing and that auditory motion stimuli elicit activity in typical visual motion areas. Yet it is unknown what happens to these behavioral adaptations and cortical reorganizations when sight is restored, that is, whether compensatory auditory changes are lost and to what degree visual motion processing is reinstated. Here we employed a combined behavioral-electrophysiological approach in a group of sight-recovery individuals with a history of a transient phase of congenital blindness lasting for several months to several years. They, as well as two control groups, one with visual impairments, one normally sighted, were tested in a visual and an auditory motion discrimination experiment. Task difficulty was manipulated by varying the visual motion coherence and the signal-to-noise ratio, respectively. The congenital cataract-reversal individuals showed lower performance in the visual global motion task than both control groups. At the same time, they outperformed both control groups in auditory motion processing, suggesting that at least some compensatory behavioral adaptation as a consequence of complete blindness from birth was maintained. Alpha oscillatory activity during the visual task was significantly lower in congenital cataract-reversal individuals, and they did not show ERPs modulated by visual motion coherence, as observed in both control groups. In

  5. Musical noise reduction using an adaptive filter

    Science.gov (United States)

    Hanada, Takeshi; Murakami, Takahiro; Ishida, Yoshihisa; Hoya, Tetsuya

    2003-10-01

    This paper presents a method for reducing a particular type of noise (musical noise). Musical noise is an artifact produced by Spectral Subtraction (SS), one of the most conventional methods for speech enhancement. It is a tin-like sound that is annoying to the human auditory system. The duration of musical noise is considerably shorter than that of speech, and its frequency components are random and isolated. In ordinary SS-based methods, musical noise is removed by post-processing. However, the output of the ordinary post-processing is delayed because the post-processing uses succeeding frames. To address this problem, we propose a novel method using an adaptive filter. In the proposed system, the observed noisy signal is used as the input signal to the adaptive filter, and the output of SS is used as the reference signal. In this paper we employ the normalized LMS (least mean square) algorithm for the adaptive filter. Simulation results show that the proposed method improves the intelligibility of the enhanced speech in comparison with the conventional method.
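
    A rough sketch of the scheme described in the abstract is given below: frame-wise magnitude spectral subtraction produces the reference signal, and a normalized LMS filter driven by the noisy input is adapted towards it. Frame length, filter order, step size, and the synthetic signals are all assumptions, not the paper's settings.

```python
# Hedged sketch: spectral subtraction (SS) followed by an NLMS adaptive filter
# whose input is the noisy signal and whose desired response is the SS output.
import numpy as np

def spectral_subtraction(noisy, noise_sample, frame=256):
    """Frame-wise magnitude spectral subtraction with 50% overlap-add."""
    hop, win = frame // 2, np.hanning(frame)
    noise_mag = np.abs(np.fft.rfft(noise_sample[:frame] * win))   # crude noise estimate
    out = np.zeros(len(noisy))
    for start in range(0, len(noisy) - frame, hop):
        seg = noisy[start:start + frame] * win
        spec = np.fft.rfft(seg)
        mag = np.maximum(np.abs(spec) - noise_mag, 0.0)           # subtract the noise floor
        out[start:start + frame] += np.fft.irfft(mag * np.exp(1j * np.angle(spec)))
    return out

def nlms(x, desired, order=32, mu=0.5, eps=1e-6):
    """Normalized LMS: adapt a filter on input x towards the desired (SS) signal."""
    w, y = np.zeros(order), np.zeros(len(x))
    for n in range(order, len(x)):
        u = x[n - order:n][::-1]
        y[n] = w @ u
        e = desired[n] - y[n]
        w += mu * e * u / (u @ u + eps)
    return y

fs = 8000
t = np.arange(fs) / fs
clean = np.sin(2 * np.pi * 440 * t)                # stand-in for speech
noise = 0.5 * np.random.randn(fs)
noisy = clean + noise
ss_output = spectral_subtraction(noisy, noise)     # may contain musical noise
enhanced = nlms(noisy, ss_output)                  # adaptive-filter output
```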

  6. Bottom-up driven involuntary auditory evoked field change: constant sound sequencing amplifies but does not sharpen neural activity.

    Science.gov (United States)

    Okamoto, Hidehiko; Stracke, Henning; Lagemann, Lothar; Pantev, Christo

    2010-01-01

    The capability of involuntarily tracking certain sound signals during the simultaneous presence of noise is essential in human daily life. Previous studies have demonstrated that top-down auditory focused attention can enhance excitatory and inhibitory neural activity, resulting in sharpening of frequency tuning of auditory neurons. In the present study, we investigated bottom-up driven involuntary neural processing of sound signals in noisy environments by means of magnetoencephalography. We contrasted two sound signal sequencing conditions: "constant sequencing" versus "random sequencing." Based on a pool of 16 different frequencies, either identical (constant sequencing) or pseudorandomly chosen (random sequencing) test frequencies were presented blockwise together with band-eliminated noises to nonattending subjects. The results demonstrated that the auditory evoked fields elicited in the constant sequencing condition were significantly enhanced compared with the random sequencing condition. However, the enhancement was not significantly different between different band-eliminated noise conditions. Thus the present study confirms that by constant sound signal sequencing under nonattentive listening the neural activity in human auditory cortex can be enhanced, but not sharpened. Our results indicate that bottom-up driven involuntary neural processing may mainly amplify excitatory neural networks, but may not effectively enhance inhibitory neural circuits.

  7. Auditory short-term memory in the primate auditory cortex.

    Science.gov (United States)

    Scott, Brian H; Mishkin, Mortimer

    2016-06-01

    Sounds are fleeting, and assembling the sequence of inputs at the ear into a coherent percept requires auditory memory across various time scales. Auditory short-term memory comprises at least two components: an active 'working memory' bolstered by rehearsal, and a sensory trace that may be passively retained. Working memory relies on representations recalled from long-term memory, and their rehearsal may require phonological mechanisms unique to humans. The sensory component, passive short-term memory (pSTM), is tractable to study in nonhuman primates, whose brain architecture and behavioral repertoire are comparable to our own. This review discusses recent advances in the behavioral and neurophysiological study of auditory memory with a focus on single-unit recordings from macaque monkeys performing delayed-match-to-sample (DMS) tasks. Monkeys appear to employ pSTM to solve these tasks, as evidenced by the impact of interfering stimuli on memory performance. In several regards, pSTM in monkeys resembles pitch memory in humans, and may engage similar neural mechanisms. Neural correlates of DMS performance have been observed throughout the auditory and prefrontal cortex, defining a network of areas supporting auditory STM with parallels to that supporting visual STM. These correlates include persistent neural firing, or a suppression of firing, during the delay period of the memory task, as well as suppression or (less commonly) enhancement of sensory responses when a sound is repeated as a 'match' stimulus. Auditory STM is supported by a distributed temporo-frontal network in which sensitivity to stimulus history is an intrinsic feature of auditory processing. This article is part of a Special Issue entitled SI: Auditory working memory. Published by Elsevier B.V.

  8. Physiological activation of the human cerebral cortex during auditory perception and speech revealed by regional increases in cerebral blood flow

    DEFF Research Database (Denmark)

    Lassen, N A; Friberg, L

    1988-01-01

    by measuring regional cerebral blood flow (CBF) after intracarotid Xenon-133 injection are reviewed, with emphasis on tests involving auditory perception and speech, an approach allowing Wernicke's and Broca's areas and their contralateral homologues to be visualized in vivo. The completely atraumatic tomographic CBF...

  9. Maps of the Auditory Cortex.

    Science.gov (United States)

    Brewer, Alyssa A; Barton, Brian

    2016-07-08

    One of the fundamental properties of the mammalian brain is that sensory regions of cortex are formed of multiple, functionally specialized cortical field maps (CFMs). Each CFM comprises two orthogonal topographical representations, reflecting two essential aspects of sensory space. In auditory cortex, auditory field maps (AFMs) are defined by the combination of tonotopic gradients, representing the spectral aspects of sound (i.e., tones), with orthogonal periodotopic gradients, representing the temporal aspects of sound (i.e., period or temporal envelope). Converging evidence from cytoarchitectural and neuroimaging measurements underlies the definition of 11 AFMs across core and belt regions of human auditory cortex, with likely homology to those of macaque. On a macrostructural level, AFMs are grouped into cloverleaf clusters, an organizational structure also seen in visual cortex. Future research can now use these AFMs to investigate specific stages of auditory processing, key for understanding behaviors such as speech perception and multimodal sensory integration.

  10. Effects of train noise and vibration on human heart rate during sleep: an experimental study.

    Science.gov (United States)

    Croy, Ilona; Smith, Michael G; Waye, Kerstin Persson

    2013-05-28

    Transportation of goods on railways is increasing and the majority of the increased numbers of freight trains run during the night. Transportation noise has adverse effects on sleep structure, affects the heart rate (HR) during sleep and may be linked to cardiovascular disease. Freight trains also generate vibration and little is known regarding the impact of vibration on human sleep. A laboratory study was conducted to examine how a realistic nocturnal railway traffic scenario influences HR during sleep. Case-control. Healthy participants. 24 healthy volunteers (11 men, 13 women, 19-28 years) spent six consecutive nights in the sleep laboratory. All participants slept during one habituation night, one control and four experimental nights in which train noise and vibration were reproduced. In the experimental nights, 20 or 36 trains with low-vibration or high-vibration characteristics were presented. Polysomnographical data and ECG were recorded. The train exposure led to a significant change of HR within 1 min of exposure onset (p=0.002), characterised by an initial and a delayed increase of HR. The high-vibration condition provoked an average increase of at least 3 bpm per train in 79% of the participants. Cardiac responses were in general higher in the high-vibration condition than in the low-vibration condition (p=0.006). No significant effect of noise sensitivity and gender was revealed, although there was a tendency for men to exhibit stronger HR acceleration than women. Freight trains provoke HR accelerations during sleep, and the vibration characteristics of the trains are of special importance. In the long term, this may affect cardiovascular functioning of persons living close to railways.

  11. Probing neural mechanisms underlying auditory stream segregation in humans by transcranial direct current stimulation (tDCS).

    Science.gov (United States)

    Deike, Susann; Deliano, Matthias; Brechmann, André

    2016-10-01

    One hypothesis concerning the neural underpinnings of auditory streaming states that frequency tuning of tonotopically organized neurons in primary auditory fields in combination with physiological forward suppression is necessary for the separation of representations of high-frequency A and low-frequency B tones. The extent of spatial overlap between the tonotopic activations of A and B tones is thought to underlie the perceptual organization of streaming sequences into one coherent or two separate streams. The present study attempts to interfere with these mechanisms by transcranial direct current stimulation (tDCS) and to probe behavioral outcomes reflecting the perception of ABAB streaming sequences. We hypothesized that tDCS by modulating cortical excitability causes a change in the separateness of the representations of A and B tones, which leads to a change in the proportions of one-stream and two-stream percepts. To test this, 22 subjects were presented with ambiguous ABAB sequences of three different frequency separations (∆F) and had to decide on their current percept after receiving sham, anodal, or cathodal tDCS over the left auditory cortex. We could confirm our hypothesis at the most ambiguous ∆F condition of 6 semitones. For anodal compared with sham and cathodal stimulation, we found a significant decrease in the proportion of two-stream perception and an increase in the proportion of one-stream perception. The results demonstrate the feasibility of using tDCS to probe mechanisms underlying auditory streaming through the use of various behavioral measures. Moreover, this approach allows one to probe the functions of auditory regions and their interactions with other processing stages. Copyright © 2016 The Authors. Published by Elsevier Ltd.. All rights reserved.

  12. Long Term Memory for Noise: Evidence of Robust Encoding of Very Short Temporal Acoustic Patterns.

    Science.gov (United States)

    Viswanathan, Jayalakshmi; Rémy, Florence; Bacon-Macé, Nadège; Thorpe, Simon J

    2016-01-01

    Recent research has demonstrated that humans are able to implicitly encode and retain repeating patterns in meaningless auditory noise. Our study aimed at testing the robustness of long-term implicit recognition memory for these learned patterns. Participants performed a cyclic/non-cyclic discrimination task, during which they were presented with either 1-s cyclic noises (CNs) (the two halves of the noise were identical) or 1-s plain random noises (Ns). Among CNs and Ns presented once, target CNs were implicitly presented multiple times within a block, and implicit recognition of these target CNs was tested 4 weeks later using a similar cyclic/non-cyclic discrimination task. Furthermore, robustness of implicit recognition memory was tested by presenting participants with looped (shifting the origin) and scrambled (chopping sounds into 10- and 20-ms bits before shuffling) versions of the target CNs. We found that participants had robust implicit recognition memory for learned noise patterns after 4 weeks, right from the first presentation. Additionally, this memory was remarkably resistant to acoustic transformations, such as looping and scrambling of the sounds. Finally, implicit recognition of sounds was dependent on participants' discrimination performance during learning. Our findings suggest that meaningless temporal features as short as 10 ms can be implicitly stored in long-term auditory memory. Moreover, successful encoding and storage of such fine features may vary between participants, possibly depending on individual attention and auditory discrimination abilities. Significance Statement Meaningless auditory patterns could be implicitly encoded and stored in long-term memory. Acoustic transformations of learned meaningless patterns could be implicitly recognized after 4 weeks. Implicit long-term memories can be formed for meaningless auditory features as short as 10 ms. Successful encoding and long-term implicit recognition of meaningless patterns may
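
    The stimulus types described above are easy to sketch: a cyclic noise (CN) repeats the same half-second segment twice, a looped version shifts its origin, and a scrambled version shuffles short bits. The sampling rate and bit duration below are assumptions for illustration.

```python
# Sketch of the cyclic-noise (CN) stimuli and their transformations.
import numpy as np

fs = 44100                                         # assumed sampling rate
rng = np.random.default_rng(42)

plain_noise = rng.standard_normal(fs)              # 1-s plain random noise (N)
seed = rng.standard_normal(fs // 2)                # 0.5-s seed segment
cyclic_noise = np.concatenate([seed, seed])        # 1-s CN: two identical halves

looped = np.roll(cyclic_noise, fs // 4)            # shift the origin, same repeating content

bit = int(0.010 * fs)                              # 10-ms bits (20 ms was also used)
bits = cyclic_noise[:(len(cyclic_noise) // bit) * bit].reshape(-1, bit)
scrambled = rng.permutation(bits).reshape(-1)      # chop the sound into bits and shuffle
```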

  13. A late-emerging auditory deficit in autism.

    Science.gov (United States)

    Erviti, Mayalen; Semal, Catherine; Wright, Beverly A; Amestoy, Anouck; Bouvard, Manuel P; Demany, Laurent

    2015-05-01

    Individuals with autism spectrum disorders (ASD) show enhanced perceptual and memory abilities in the domain of pitch, but also perceptual deficits in other auditory domains. The present study investigated their skills with respect to "echoic memory," a form of short-term sensory memory intimately tied to auditory perception, using a developmental perspective. We tested 23 high-functioning participants with ASD and 26 typically developing (TD) participants, distributed across two age groups (children vs. young adults; mean ages: ∼11 and ∼21 years). By means of an adaptive psychophysical procedure, we measured the longest period for which periodic (i.e., repeated) noise could be reliably discriminated from nonperiodic (i.e., plain random) noise. On each experimental trial, a single noise sample was presented to the participant, who had to classify this sound as periodic or nonperiodic. The TD adults performed, on average, much better than the other three groups, who performed similarly overall. As a function of practice, the measured thresholds improved for the TD participants, but did not change for the ASD participants. Thresholds were not correlated with performance in a test assessing verbal memory. The variance of the participants' response biases was larger among the ASD participants than among the TD participants. The results mainly suggest that echoic memory takes a long time to fully develop in TD humans, and that this development stops prematurely in persons with ASD. (c) 2015 APA, all rights reserved.

  14. Failing to get the gist of what's being said: background noise impairs higher-order cognitive processing

    OpenAIRE

    Marsh, John E.; Ljung, Robert; Nöstl, Anatole; Threadgold, Emma; Campbell, Tom A.

    2015-01-01

    A dynamic interplay is known to exist between auditory processing and human cognition. For example, prior investigations of speech-in-noise have revealed there is more to learning than just listening: Even if all words within a spoken list are correctly heard in noise, later memory for those words is typically impoverished. These investigations supported a view that there is a "gap" between the intelligibility of speech and memory for that speech. Here, the notion was that this gap between sp...

  15. Magnified Neural Envelope Coding Predicts Deficits in Speech Perception in Noise.

    Science.gov (United States)

    Millman, Rebecca E; Mattys, Sven L; Gouws, André D; Prendergast, Garreth

    2017-08-09

    Verbal communication in noisy backgrounds is challenging. Understanding speech in background noise that fluctuates in intensity over time is particularly difficult for hearing-impaired listeners with a sensorineural hearing loss (SNHL). The reduction in fast-acting cochlear compression associated with SNHL exaggerates the perceived fluctuations in intensity in amplitude-modulated sounds. SNHL-induced changes in the coding of amplitude-modulated sounds may have a detrimental effect on the ability of SNHL listeners to understand speech in the presence of modulated background noise. To date, direct evidence for a link between magnified envelope coding and deficits in speech identification in modulated noise has been absent. Here, magnetoencephalography was used to quantify the effects of SNHL on phase locking to the temporal envelope of modulated noise (envelope coding) in human auditory cortex. Our results show that SNHL enhances the amplitude of envelope coding in posteromedial auditory cortex, whereas it enhances the fidelity of envelope coding in posteromedial and posterolateral auditory cortex. This dissociation was more evident in the right hemisphere, demonstrating functional lateralization in enhanced envelope coding in SNHL listeners. However, enhanced envelope coding was not perceptually beneficial. Our results also show that both hearing thresholds and, to a lesser extent, magnified cortical envelope coding in left posteromedial auditory cortex predict speech identification in modulated background noise. We propose a framework in which magnified envelope coding in posteromedial auditory cortex disrupts the segregation of speech from background noise, leading to deficits in speech perception in modulated background noise. SIGNIFICANCE STATEMENT People with hearing loss struggle to follow conversations in noisy environments. Background noise that fluctuates in intensity over time poses a particular challenge. Using magnetoencephalography, we demonstrate

  16. Self-organized critical noise amplification in human closed loop control

    Directory of Open Access Journals (Sweden)

    Felix Patzelt

    2007-11-01

    When humans perform closed loop control tasks, such as upright standing or balancing a stick, their behavior exhibits non-Gaussian fluctuations with long-tailed distributions. The origin of these fluctuations is not known. Here, we investigate if they are caused by self-organized critical noise amplification, which emerges in control systems when an unstable dynamics becomes stabilized by an adaptive controller that has finite memory. Starting from this theory, we formulate a realistic model of adaptive closed loop control by including constraints on memory and delays. To test this model, we performed psychophysical experiments where humans balanced an unstable target on a screen. It turned out that the model reproduces the long tails of the distributions together with other characteristic features of the human control dynamics. Fine-tuning the model to match the experimental dynamics identifies parameters characterizing a subject's control system which can be independently tested. Our results suggest that the nervous system involved in closed loop motor control nearly optimally estimates system parameters on-line from very short epochs of past observations.
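    The mechanism described above - an unstable dynamics stabilized by an adaptive controller with finite memory, whose estimation errors intermittently amplify noise - can be illustrated with a toy simulation. The Python sketch below is a minimal caricature of that idea (it omits the delays and other constraints of the authors' model), with all parameter values chosen for illustration only.

      import numpy as np

      rng = np.random.default_rng(1)

      def simulate(a=1.1, memory=5, noise_sd=1e-3, n_steps=100_000):
          """Unstable map x[t+1] = a*x[t] + u[t] + noise, stabilized by a controller
          that re-estimates 'a' at each step from only the last `memory` observations
          and cancels it (u[t] = -a_hat * x[t])."""
          x, xs = 0.0, np.empty(n_steps)
          hist_x, hist_y = [], []                 # finite memory of (x[k], a*x[k] + noise[k])
          a_hat = a                               # start from the correct estimate
          for t in range(n_steps):
              u = -a_hat * x
              x_next = a * x + u + noise_sd * rng.standard_normal()
              hist_x.append(x); hist_y.append(x_next - u)
              hist_x, hist_y = hist_x[-memory:], hist_y[-memory:]
              denom = sum(xk * xk for xk in hist_x)
              if denom > 0:                       # least-squares estimate over the short memory
                  a_hat = sum(xk * yk for xk, yk in zip(hist_x, hist_y)) / denom
              xs[t] = x = x_next
          return xs

      x = simulate()
      # Good control drives x towards the noise floor, which degrades the parameter
      # estimate, which in turn lets the instability transiently amplify the noise:
      # the distribution of x can develop heavy, non-Gaussian tails of the kind discussed above.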

  17. Resolution improvement and noise reduction in human pinhole SPECT using a multi-ray approach and the SHINE method

    International Nuclear Information System (INIS)

    Seret, A.; Vanhove, C.; Defrise, M.

    2009-01-01

    Purpose: This work aimed at quantifying the gains in spatial resolution and noise that could be achieved when using resolution modelling based on a multi-ray approach and, additionally, the Statistical and Heuristic Noise Extraction (SHINE) method in human pinhole single photon emission tomography (PH-SPECT). Methods: PH-SPECT scans of two line phantoms and one homogeneous cylinder were recorded using parameters suited for studies of the human neck area. They were reconstructed using a pinhole-dedicated ordered-subsets expectation-maximisation algorithm including a resolution recovery technique based on 7 or 21 rays. Optionally, the SPECT data were SHINE pre-processed. Transverse and axial full widths at half-maximum (FWHM) were obtained from the line phantoms. The noise was quantified using the coefficient of variation (COV) derived from the uniform phantom. Two human PH-SPECT studies of the thyroid (a hot nodule and a very low uptake) were processed with the same algorithms. Results: Depending on the number of iterations, FWHM decreased by 30 to 50% when using the multi-ray approach in the reconstruction process. The SHINE method did not affect the resolution but decreased the COV by at least 20%, and by 45% when combined with the multi-ray method. The two human studies illustrated the gain in spatial resolution and the decrease in noise afforded both by the multi-ray reconstruction and the SHINE method. Conclusion: Iterative reconstruction with resolution modelling makes it possible to obtain high resolution human PH-SPECT studies with reduced noise content. The SHINE method affords an additional noise reduction without compromising the resolution. (orig.)
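    The two figures of merit used above - full width at half-maximum of a line profile and coefficient of variation over a uniform region - are straightforward to compute. The following Python sketch shows one common way to do so (half-maximum crossings found by linear interpolation); it is a generic illustration, not the processing used in the study.

      import numpy as np

      def fwhm(profile, pixel_mm=1.0):
          """Full width at half-maximum of a 1-D line profile, with linear
          interpolation of the two half-maximum crossings."""
          p = np.asarray(profile, dtype=float)
          half = p.max() / 2.0
          above = np.where(p >= half)[0]
          lo, hi = above[0], above[-1]
          left = lo - (p[lo] - half) / (p[lo] - p[lo - 1]) if lo > 0 else float(lo)
          right = hi + (p[hi] - half) / (p[hi] - p[hi + 1]) if hi < len(p) - 1 else float(hi)
          return (right - left) * pixel_mm

      def cov(uniform_roi):
          """Coefficient of variation (noise index) over a uniform region of interest."""
          roi = np.asarray(uniform_roi, dtype=float)
          return roi.std() / roi.mean()

      # Quick check on a Gaussian line-spread profile with sigma = 8 pixels:
      x = np.arange(101)
      profile = np.exp(-0.5 * ((x - 50) / 8.0) ** 2)
      print(fwhm(profile))   # ~18.8, i.e. about 2.355 * sigma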

  18. The discovery of human auditory-motor entrainment and its role in the development of neurologic music therapy.

    Science.gov (United States)

    Thaut, Michael H

    2015-01-01

    The discovery of rhythmic auditory-motor entrainment in clinical populations was a historical breakthrough in demonstrating for the first time a neurological mechanism linking music to retraining brain and behavioral functions. Early pilot studies from this research center were followed up by a systematic line of research studying rhythmic auditory stimulation on motor therapies for stroke, Parkinson's disease, traumatic brain injury, cerebral palsy, and other movement disorders. The comprehensive effects on improving multiple aspects of motor control established the first neuroscience-based clinical method in music, which became the bedrock for the later development of neurologic music therapy. The discovery of entrainment fundamentally shifted and extended the view of the therapeutic properties of music from a psychosocially dominated view to a view using the structural elements of music to retrain motor control, speech and language function, and cognitive functions such as attention and memory. © 2015 Elsevier B.V. All rights reserved.

  19. Nicotine, auditory sensory memory and attention in a human ketamine model of schizophrenia: moderating influence of a hallucinatory trait

    Directory of Open Access Journals (Sweden)

    Verner Knott

    2012-09-01

    Background: The procognitive actions of the nicotinic acetylcholine receptor (nAChR) agonist nicotine are believed, in part, to motivate the excessive cigarette smoking in schizophrenia, a disorder associated with deficits in multiple cognitive domains, including low level auditory sensory processes and higher order attention-dependent operations. Objectives: As N-methyl-D-aspartate receptor (NMDAR) hypofunction has been shown to contribute to these cognitive impairments, the primary aims of this healthy volunteer study were: (a) to shed light on the separate and interactive roles of nAChR and NMDAR systems in the modulation of auditory sensory memory (and sustained attention), as indexed by the auditory event-related brain potential (ERP) – mismatch negativity (MMN), and (b) to examine how these effects are moderated by a predisposition to auditory hallucinations/delusions (HD). Methods: In a randomized, double-blind, placebo-controlled design involving a low intravenous dose of ketamine (0.04 mg/kg) and a 4 mg dose of nicotine gum, MMN and performance on a rapid visual information processing (RVIP) task of sustained attention were examined in 24 healthy controls psychometrically stratified as being lower (L-HD, n = 12) or higher (H-HD) for HD propensity. Results: Ketamine significantly slowed MMN, and reduced MMN in H-HD, with amplitude attenuation being blocked by the co-administration of nicotine. Nicotine significantly enhanced response speed (reaction time) and accuracy (increased % hits and d΄, and reduced false alarms) on the RVIP, with the improvement in performance accuracy being prevented when nicotine was administered with ketamine. Both % hits and d΄, as well as reaction time, were poorer in H-HD (vs. L-HD), and while hit rate and d΄ were increased by nicotine in H-HD, reaction time was slowed by ketamine in L-HD. Conclusions: Nicotine alleviated ketamine-induced sensory memory impairments and improved attention, particularly in individuals prone to HD.

  20. Human dorsal and ventral auditory streams subserve rehearsal-based and echoic processes during verbal working memory.

    Science.gov (United States)

    Buchsbaum, Bradley R; Olsen, Rosanna K; Koch, Paul; Berman, Karen Faith

    2005-11-23

    To hear a sequence of words and repeat them requires sensory-motor processing and something more: temporary storage. We investigated neural mechanisms of verbal memory by using fMRI and a task designed to tease apart perceptually based ("echoic") memory from phonological-articulatory memory. Sets of two- or three-word pairs were presented bimodally, followed by a cue indicating from which modality (auditory or visual) items were to be retrieved and rehearsed over a delay. Whereas delay-period activation in the planum temporale (PT) was insensitive to the source modality and was sustained throughout the delay, the superior temporal gyrus (STG) activated more vigorously when the retrieved items had arrived in the auditory modality and showed only transient delay-period activity. Functional connectivity analysis revealed two topographically distinct fronto-temporal circuits, with STG co-activating more strongly with ventrolateral prefrontal cortex and PT co-activating more strongly with dorsolateral prefrontal cortex. These findings argue for separate contributions of the ventral and dorsal auditory streams to verbal working memory.

  1. Age-Associated Reduction of Asymmetry in Human Central Auditory Function: A 1H-Magnetic Resonance Spectroscopy Study

    Directory of Open Access Journals (Sweden)

    Xianming Chen

    2013-01-01

    The aim of this study was to investigate the effects of age on hemispheric asymmetry in the auditory cortex after pure tone stimulation. Ten young and 8 older healthy volunteers took part in this study. Two-dimensional multivoxel 1H-magnetic resonance spectroscopy scans were performed before and after stimulation. The ratios of N-acetylaspartate (NAA), glutamate/glutamine (Glx), and γ-amino butyric acid (GABA) to creatine (Cr) were determined and compared between the two groups. The distribution of metabolites between the left and right auditory cortex was also determined. Before stimulation, left and right side NAA/Cr and right side GABA/Cr were significantly lower, whereas right side Glx/Cr was significantly higher in the older group compared with the young group. After stimulation, left and right side NAA/Cr and GABA/Cr were significantly lower, whereas left side Glx/Cr was significantly higher in the older group compared with the young group. There was obvious asymmetry in right side Glx/Cr and left side GABA/Cr after stimulation in the young group, but not in the older group. In summary, there is marked hemispheric asymmetry in auditory cortical metabolites following pure tone stimulation in young, but not older, adults. This reduced asymmetry in older adults may at least in part underlie the speech perception difficulties/presbycusis experienced by aging adults.

  2. Human-Avatar Symbiosis for the Treatment of Auditory Verbal Hallucinations in Schizophrenia through Virtual/Augmented Reality and Brain-Computer Interfaces.

    Science.gov (United States)

    Fernández-Caballero, Antonio; Navarro, Elena; Fernández-Sotos, Patricia; González, Pascual; Ricarte, Jorge J; Latorre, José M; Rodriguez-Jimenez, Roberto

    2017-01-01

    This perspective paper addresses the future of alternative treatments that take advantage of a social and cognitive approach, with regard to the pharmacological therapy of auditory verbal hallucinations (AVH), in patients with schizophrenia. AVH are the perception of voices in the absence of auditory stimulation and represent a severe mental health symptom. Virtual/augmented reality (VR/AR) and brain-computer interfaces (BCI) are technologies whose use is growing in a range of medical and psychological applications. Our position is that their combined use in computer-based therapies offers still unforeseen possibilities for the treatment of physical and mental disabilities. The paper therefore anticipates that researchers and clinicians will follow a pathway toward human-avatar symbiosis for AVH by taking full advantage of new technologies. This outlook entails addressing challenging issues in the understanding of non-pharmacological treatment of schizophrenia-related disorders and the exploitation of VR/AR and BCI to achieve a real human-avatar symbiosis.

  3. Auditory Masking Effects on Speech Fluency in Apraxia of Speech and Aphasia: Comparison to Altered Auditory Feedback

    Science.gov (United States)

    Jacks, Adam; Haley, Katarina L.

    2015-01-01

    Purpose: To study the effects of masked auditory feedback (MAF) on speech fluency in adults with aphasia and/or apraxia of speech (APH/AOS). We hypothesized that adults with AOS would increase speech fluency when speaking with noise. Altered auditory feedback (AAF; i.e., delayed/frequency-shifted feedback) was included as a control condition not…

  4. A European Perspective on Auditory Processing Disorder-Current Knowledge and Future Research Focus

    Directory of Open Access Journals (Sweden)

    Vasiliki (Vivian) Iliadou

    2017-11-01

    Current notions of "hearing impairment," as reflected in clinical audiological practice, do not acknowledge the needs of individuals who have normal hearing pure tone sensitivity but who experience auditory processing difficulties in everyday life that are indexed by reduced performance in other, more sophisticated audiometric tests such as speech audiometry in noise or complex non-speech sound perception. This disorder, defined as "Auditory Processing Disorder" (APD) or "Central Auditory Processing Disorder," is classified in the current tenth version of the International Classification of Diseases as H93.25 and in the forthcoming beta eleventh version. APDs may have detrimental effects on the affected individual, including low self-esteem, anxiety, and depression, and symptoms may persist into adulthood. These disorders may interfere with learning per se and with communication, social, emotional, and academic-work aspects of life. The objective of the present paper is to define a baseline European APD consensus formulated by experienced clinicians and researchers in this specific field of human auditory science. A secondary aim is to identify issues that future research needs to address in order to further clarify the nature of APD and thus assist in optimum diagnosis and evidence-based management. This European consensus presents the main symptoms, conditions, and specific medical history elements that should lead to auditory processing evaluation. Consensus on the definition of the disorder, the optimum diagnostic pathway, and appropriate management is highlighted alongside a perspective on future research focus.

  5. Large-Scale Analysis of Auditory Segregation Behavior Crowdsourced via a Smartphone App.

    Directory of Open Access Journals (Sweden)

    Sundeep Teki

    The human auditory system is adept at detecting sound sources of interest from a complex mixture of several other simultaneous sounds. The ability to selectively attend to the speech of one speaker whilst ignoring other speakers and background noise is of vital biological significance-the capacity to make sense of complex 'auditory scenes' is significantly impaired in aging populations as well as those with hearing loss. We investigated this problem by designing a synthetic signal, termed the 'stochastic figure-ground' stimulus that captures essential aspects of complex sounds in the natural environment. Previously, we showed that under controlled laboratory conditions, young listeners sampled from the university subject pool (n = 10) performed very well in detecting targets embedded in the stochastic figure-ground signal. Here, we presented a modified version of this cocktail party paradigm as a 'game' featured in a smartphone app (The Great Brain Experiment) and obtained data from a large population with diverse demographical patterns (n = 5148). Despite differences in paradigms and experimental settings, the observed target-detection performance by users of the app was robust and consistent with our previous results from the psychophysical study. Our results highlight the potential use of smartphone apps in capturing robust large-scale auditory behavioral data from normal healthy volunteers, which can also be extended to study auditory deficits in clinical populations with hearing impairments and central auditory disorders.

  6. Large-Scale Analysis of Auditory Segregation Behavior Crowdsourced via a Smartphone App.

    Science.gov (United States)

    Teki, Sundeep; Kumar, Sukhbinder; Griffiths, Timothy D

    2016-01-01

    The human auditory system is adept at detecting sound sources of interest from a complex mixture of several other simultaneous sounds. The ability to selectively attend to the speech of one speaker whilst ignoring other speakers and background noise is of vital biological significance-the capacity to make sense of complex 'auditory scenes' is significantly impaired in aging populations as well as those with hearing loss. We investigated this problem by designing a synthetic signal, termed the 'stochastic figure-ground' stimulus that captures essential aspects of complex sounds in the natural environment. Previously, we showed that under controlled laboratory conditions, young listeners sampled from the university subject pool (n = 10) performed very well in detecting targets embedded in the stochastic figure-ground signal. Here, we presented a modified version of this cocktail party paradigm as a 'game' featured in a smartphone app (The Great Brain Experiment) and obtained data from a large population with diverse demographical patterns (n = 5148). Despite differences in paradigms and experimental settings, the observed target-detection performance by users of the app was robust and consistent with our previous results from the psychophysical study. Our results highlight the potential use of smartphone apps in capturing robust large-scale auditory behavioral data from normal healthy volunteers, which can also be extended to study auditory deficits in clinical populations with hearing impairments and central auditory disorders.
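    The 'stochastic figure-ground' stimulus described above consists of a sequence of random multi-tone chords in which a small set of frequency components repeats across consecutive chords, forming the 'figure' to be detected. The Python sketch below is a schematic illustration under assumed parameter values (chord duration, frequency pool, coherence level); it is not the authors' stimulus code.

      import numpy as np

      rng = np.random.default_rng(2)
      fs = 44100

      def sfg_stimulus(n_chords=40, chord_ms=50, n_background=10,
                       coherence=4, figure_chords=(15, 25), freq_pool=None):
          """Random multi-tone chords; a fixed set of `coherence` frequencies repeats
          across the chords indexed by `figure_chords`, forming the 'figure'."""
          if freq_pool is None:
              freq_pool = np.geomspace(200.0, 7000.0, 120)   # log-spaced candidate tones
          n = int(fs * chord_ms / 1000)
          t = np.arange(n) / fs
          figure_freqs = rng.choice(freq_pool, size=coherence, replace=False)
          chords = []
          for c in range(n_chords):
              freqs = list(rng.choice(freq_pool, size=n_background, replace=False))
              if figure_chords[0] <= c < figure_chords[1]:
                  freqs += list(figure_freqs)                # repeated (coherent) components
              chord = sum(np.sin(2 * np.pi * f * t + rng.uniform(0, 2 * np.pi)) for f in freqs)
              chords.append(chord / len(freqs))
          return np.concatenate(chords)

      stim = sfg_stimulus()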

  7. First to Flush: The Effects of Ambient Noise on Songbird Flight Initiation Distances and Implications for Human Experiences with Nature

    Directory of Open Access Journals (Sweden)

    Alissa R. Petrelli

    2017-06-01

    Throughout the world, birds represent the primary type of wildlife that people experience on a daily basis. However, a growing body of evidence suggests that alterations to the acoustic environment can negatively affect birds as well as humans in a variety of ways, and altered acoustics from noise pollution has the potential to influence human interactions with wild birds. Birds respond to approaching humans in a manner analogous to approaching predators, but the context of the interaction can also greatly influence the distance at which a bird initiates flight or escape behavior (i.e., flight initiation distance or FID). Here, we hypothesized that reliance on different sensory modalities to balance foraging and threat detection can influence how birds respond to approaching threats in the presence of background noise. We surveyed 12 songbird species in California and Wyoming and categorized each species into one of three foraging guilds: ground foragers, canopy gleaners, and hawking flycatchers, and predicted FIDs to decrease, remain the same, and increase with noise exposure, respectively. Contrary to expectations, the canopy gleaning and flycatching guilds exhibited mixed responses, with some species exhibiting unchanged FIDs with noise while others exhibited increased FIDs with noise. However, FIDs of all ground foraging species and one canopy gleaner decreased with noise levels. Additionally, we found no evidence of phylogenetic structure among species' mean FID responses and only weak phylogenetic structure for the relationship between FIDs and noise levels. Although our results provide mixed support for foraging strategy as a predictor of bird response to noise, our finding that most of the species we surveyed have shorter FIDs with increases in noise levels suggests that human observers may be able to approach ground foraging species more closely under noisy conditions. From an ecological perspective, however, it remains unclear whether

  8. Research on road traffic noise and human health in India: Review of literature from 1991 to current

    Directory of Open Access Journals (Sweden)

    Dibyendu Banerjee

    2012-01-01

    This article reviews the literature on research conducted during the last two decades on traffic noise impacts in India. Road traffic noise studies in India are few and restricted to the metropolitan areas. Over the years, the studies have focused on monitoring, recording, analysis, modeling, and, to some extent, mapping-related themes. Hardly any studies address physiological effects or sleep in an exposure-effect context. Most impact studies have been associated with annoyance and attitudinal surveys only. Little scientific literature exists on the effects of traffic noise on human physiology in the Indian context. This review finds that very few studies relate traffic noise to health impacts. All of them are subjective response studies, and only a small portion of them quantify the exposure-effect chain and model the noise index against annoyance. The reviewed papers show that road traffic noise is a cause of annoyance, to varying degrees, among respondents. A generalization of impacts and a meta-analysis were not possible due to the variability of the study designs and reported outputs.

  9. Did You Listen to the Beat? Auditory Steady-State Responses in the Human Electroencephalogram at 4 and 7 Hz Modulation Rates Reflect Selective Attention.

    Science.gov (United States)

    Jaeger, Manuela; Bleichner, Martin G; Bauer, Anna-Katharina R; Mirkovic, Bojana; Debener, Stefan

    2018-02-27

    The acoustic envelope of human speech correlates with the syllabic rate (4-8 Hz) and carries important information for intelligibility, which is typically compromised in multi-talker, noisy environments. In order to better understand the dynamics of selective auditory attention to low frequency modulated sound sources, we conducted a two-stream auditory steady-state response (ASSR) selective attention electroencephalogram (EEG) study. The two streams consisted of 4 and 7 Hz amplitude and frequency modulated sounds presented from the left and right side. One of two streams had to be attended while the other had to be ignored. The attended stream always contained a target, allowing for the behavioral confirmation of the attention manipulation. EEG ASSR power analysis revealed a significant increase in 7 Hz power for the attend compared to the ignore conditions. There was no significant difference in 4 Hz power when the 4 Hz stream had to be attended compared to when it had to be ignored. This lack of 4 Hz attention modulation could be explained by a distracting effect of a third frequency at 3 Hz (beat frequency) perceivable when the 4 and 7 Hz streams are presented simultaneously. Taken together, our results show that low frequency modulations at syllabic rate are modulated by selective spatial attention. Whether attention effects act as an enhancement of the attended stream or a suppression of the to-be-ignored stream may depend on how well auditory streams can be segregated.
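    The 3 Hz "beat frequency" invoked above is simply the difference between the two modulation rates (7 - 4 = 3 Hz); a component at that rate appears whenever the two envelopes interact nonlinearly. The Python sketch below is a minimal numeric check using a product as a stand-in nonlinearity; it is not the authors' analysis.

      import numpy as np

      fs = 1000.0                       # envelope-rate sampling is enough for this check
      t = np.arange(0, 10, 1 / fs)      # 10 s
      m4 = np.cos(2 * np.pi * 4 * t)    # 4-Hz modulator (one stream)
      m7 = np.cos(2 * np.pi * 7 * t)    # 7-Hz modulator (the other stream)

      # A simple nonlinearity (here a product, standing in for e.g. compressive
      # processing of the summed envelopes) creates a difference component at 3 Hz:
      interaction = m4 * m7
      spectrum = np.abs(np.fft.rfft(interaction)) / len(t)
      freqs = np.fft.rfftfreq(len(t), 1 / fs)
      print(freqs[spectrum > 0.1])      # -> [ 3. 11.]  (difference and sum components)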

  10. Risk factor noise - otological aspects

    Energy Technology Data Exchange (ETDEWEB)

    Haas, E

    1984-06-11

    After a short review of the anatomy and physiology of the inner ear, the pathogenesis of chronic noise-induced hearing loss is discussed. Exposure to noise first results in a temporary, reversible threshold shift. But if the exposure to noise is exceedingly high, or if the rest period is insufficient for recovery, a state of so-called auditory fatigue develops, finally leading to noise-induced hearing loss, a condition which is considered irreversible. Noise perception varies greatly among individuals, and it is therefore impossible to determine a certain noise intensity above which noise lesions are to be expected. It is generally accepted that long-term exposure to noise above 85 dB(A) may lead to hearing loss in a portion of the exposed persons.

  11. Effect of noise on humans; Effet du bruit sur l'homme

    Energy Technology Data Exchange (ETDEWEB)

    Jouhaneau, J. [Conservatoire National des Arts et Metiers, CNAM, 75 - Paris (France)]

    2001-07-01

    Noise remains one of the most poorly understood pollution sources with respect to its effects on humans and to its economic and social impacts. This poor understanding stems from the difficulty of measuring the real short-, medium-, and long-term consequences of noise aggression on living organisms, which can adapt to and mask part of these effects. In addition, noise has several subjective components and can be perceived differently from one person to another, with reactions that are sometimes contradictory or ambiguous. Content: 1 - definitions of noise; 2 - auditory effects of noise (discomfort, loss of audition); 3 - non-auditory effects (physiological stress, physiological disturbances); 4 - other disturbances (effects on sleep, effects on performance); 5 - theories about the mechanisms of noise action. (J.S.)

  12. Frequently updated noise threat maps created with use of supercomputing grid

    Directory of Open Access Journals (Sweden)

    Szczodrak Maciej

    2014-09-01

    Innovative supercomputing grid services devoted to noise threat evaluation are presented. The services described in this paper concern two issues: the first is related to noise mapping, while the second focuses on assessment of the noise dose and its influence on the human hearing system. The services discussed were developed within the PL-Grid Plus Infrastructure, which brings together Polish academic supercomputer centers. Selected experimental results achieved using the proposed services are presented. The assessment of environmental noise threats includes the creation of noise maps using either offline or online data acquired through a grid of monitoring stations. A concept for estimating source model parameters from measured sound levels, for the purpose of creating frequently updated noise maps, is presented. Connecting the noise mapping grid service with a distributed sensor network makes it possible to update noise maps automatically for a specified time period. Moreover, a unique attribute of the developed software is the estimation of the auditory effects evoked by exposure to noise. The estimation method uses a modified psychoacoustic model of hearing and is based on the calculated noise level values and on the given exposure period. Potential use scenarios of the grid services for research or educational purposes are introduced. Presenting the predicted hearing threshold shift caused by exposure to excessive noise can raise public awareness of noise threats.
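    For orientation, noise-dose assessments of the kind mentioned above typically start from an equivalent continuous level and a permissible exposure time. The Python sketch below shows a generic, simplified calculation with a 3 dB exchange rate and an 85 dB(A)/8 h criterion (common in occupational standards); the grid service itself uses a more detailed psychoacoustic model of hearing, so this is an illustrative assumption, not its algorithm.

      import numpy as np

      def leq_db(pressure_pa, fs):
          """Equivalent continuous sound level in dB re 20 uPa for a calibrated
          pressure waveform (frequency weighting omitted for brevity)."""
          p0 = 20e-6
          return 10 * np.log10(np.mean((np.asarray(pressure_pa) / p0) ** 2))

      def allowed_hours(level_dba, criterion_db=85.0, criterion_hours=8.0, exchange_db=3.0):
          """Permissible daily exposure time under a 3-dB exchange rate."""
          return criterion_hours / 2 ** ((level_dba - criterion_db) / exchange_db)

      def noise_dose_percent(level_dba, exposure_hours):
          """Daily noise dose: 100 % corresponds to the full permissible exposure."""
          return 100.0 * exposure_hours / allowed_hours(level_dba)

      print(noise_dose_percent(91.0, 4.0))   # 91 dBA for 4 h -> 200 % of the daily dose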

  13. Language experience shapes processing of pitch-relevant information in the human brainstem and auditory cortex: electrophysiological evidence.

    Science.gov (United States)

    Krishnan, Ananthanarayan; Gandour, Jackson T

    2014-12-01

    Pitch is a robust perceptual attribute that plays an important role in speech, language, and music. As such, it provides an analytic window to evaluate how neural activity relevant to pitch undergoes transformation from early sensory to later cognitive stages of processing in a well-coordinated hierarchical network that is subject to experience-dependent plasticity. We review recent evidence of language experience-dependent effects in pitch processing based on comparisons of native vs. nonnative speakers of a tonal language from electrophysiological recordings in the auditory brainstem and auditory cortex. We present evidence that shows enhanced representation of linguistically-relevant pitch dimensions or features at both the brainstem and cortical levels, with a stimulus-dependent preferential activation of the right hemisphere in native speakers of a tone language. We argue that neural representation of pitch-relevant information in the brainstem and early sensory level processing in the auditory cortex is shaped by the perceptual salience of domain-specific features. While both stages of processing are shaped by language experience, neural representations are transformed and fundamentally different at each biological level of abstraction. The representation of pitch-relevant information in the brainstem is more fine-grained spectrotemporally, as it reflects sustained neural phase-locking to pitch-relevant periodicities contained in the stimulus. In contrast, the cortical pitch-relevant neural activity reflects primarily a series of transient temporal neural events synchronized to certain temporal attributes of the pitch contour. We argue that experience-dependent enhancement of pitch representation for Chinese listeners most likely reflects an interaction between higher-level cognitive processes and early sensory-level processing to improve representations of behaviorally-relevant features that contribute optimally to perception. It is our view that long

  14. Rate of decay of auditory sensation

    NARCIS (Netherlands)

    Plomp, R.

    1964-01-01

    The rate of decay of auditory sensation was investigated by measuring the minimum silent interval that must be introduced between two noise pulses to be perceived. The value of this critical time Δt was determined for different intensity levels of both the first and the second pulse. It is shown

  15. Auditory Training for Children with Processing Disorders.

    Science.gov (United States)

    Katz, Jack; Cohen, Carolyn F.

    1985-01-01

    The article provides an overview of central auditory processing (CAP) dysfunction and reviews research on approaches to improve perceptual skills; to provide discrimination training for communicative and reading disorders; to increase memory and analysis skills and dichotic listening; to provide speech-in-noise training; and to amplify speech as…

  16. Objective measures of binaural masking level differences and comodulation masking release based on late auditory evoked potentials

    DEFF Research Database (Denmark)

    Epp, Bastian; Yasin, Ifat; Verhey, Jesko L.

    2013-01-01

    The audibility of important sounds is often hampered due to the presence of other masking sounds. The present study investigates if a correlate of the audibility of a tone masked by noise is found in late auditory evoked potentials measured from human listeners. The audibility of the target sound at a fixed physical intensity is varied by introducing auditory cues of (i) interaural target signal phase disparity and (ii) coherent masker level fluctuations in different frequency regions. In agreement with previous studies, psychoacoustical experiments showed that both stimulus manipulations result in a masking release (i: binaural masking level difference; ii: comodulation masking release) compared to a condition where those cues are not present. Late auditory evoked potentials (N1, P2) were recorded for the stimuli at a constant masker level, but different signal levels within the same set of listeners...
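    The first cue above (an interaural phase disparity of the target) corresponds to the classic N0Spi configuration: the masker is identical at the two ears while the tone is phase-inverted in one ear, which lowers detection thresholds relative to the diotic N0S0 case (the binaural masking level difference). The Python sketch below generates the two configurations; levels and durations are illustrative assumptions, not the study's stimuli.

      import numpy as np

      rng = np.random.default_rng(4)
      fs, dur = 44100, 0.5
      t = np.arange(int(fs * dur)) / fs

      noise = rng.standard_normal(len(t))          # diotic masker (identical at both ears)
      tone = 0.05 * np.sin(2 * np.pi * 500 * t)    # 500-Hz target; level is illustrative

      n0s0 = np.stack([noise + tone, noise + tone])    # target in phase at the two ears
      n0spi = np.stack([noise + tone, noise - tone])   # target phase-inverted in one ear
      # Listeners detect the tone at a lower level in the N0Spi configuration than in
      # N0S0; this masking release is the binaural masking level difference (BMLD).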

  17. External auditory exostoses in the Xuchang and Xujiayao human remains: Patterns and implications among eastern Eurasian Middle and Late Pleistocene crania.

    Science.gov (United States)

    Trinkaus, Erik; Wu, Xiu-Jie

    2017-01-01

    In the context of Middle and Late Pleistocene eastern Eurasian human crania, the external auditory exostoses (EAE) of the late archaic Xuchang 1 and 2 and the Xujiayao 15 early Late Pleistocene human temporal bones are described. Xujiayao 15 has small EAE (Grade 1), Xuchang 1 presents bilateral medium EAE (Grade 2), and Xuchang 2 exhibits bilaterally large EAE (Grade 3), especially on the right side. These cranial remains join the other eastern Eurasian later Pleistocene humans in providing frequencies of 61% (N = 18) and 58% (N = 12) respectively for archaic and early modern human samples. These values are near the upper limits of recent human frequencies, and they imply frequent aquatic exposure among these Pleistocene humans. In addition, the medial extents of the Xuchang 1 and 2 EAE would have impinged on their tympanic membranes, and the large EAE of Xuchang 2 would have resulted in cerumen impaction. Both effects would have produced conductive hearing loss, a serious impairment in a Pleistocene foraging context.

  18. Neural Correlates of Auditory Perceptual Awareness and Release from Informational Masking Recorded Directly from Human Cortex: A Case Study

    Directory of Open Access Journals (Sweden)

    Andrew R Dykstra

    2016-10-01

    In complex acoustic environments, even salient supra-threshold sounds sometimes go unperceived, a phenomenon known as informational masking. The neural basis of informational masking (and its release) has not been well characterized, particularly outside auditory cortex. We combined electrocorticography in a neurosurgical patient undergoing invasive epilepsy monitoring with trial-by-trial perceptual reports of isochronous target-tone streams embedded in random multi-tone maskers. Awareness of such masker-embedded target streams was associated with a focal negativity between 100 and 200 ms and high-gamma activity between 50 and 250 ms (both in auditory cortex on the posterolateral superior temporal gyrus) as well as a broad P3b-like potential (between ~300 and 600 ms) with generators in ventrolateral frontal and lateral temporal cortex. Unperceived target tones elicited drastically reduced versions of such responses, if at all. While it remains unclear whether these responses reflect conscious perception, itself, as opposed to pre- or post-perceptual processing, the results suggest that conscious perception of target sounds in complex listening environments may engage diverse neural mechanisms in distributed brain areas.

  19. Noise suppression by noise

    OpenAIRE

    Vilar, J. M. G. (José M. G.), 1972-; Rubí Capaceti, José Miguel

    2001-01-01

    We have analyzed the interplay between an externally added noise and the intrinsic noise of systems that relax fast towards a stationary state, and found that increasing the intensity of the external noise can reduce the total noise of the system. We have established a general criterion for the appearance of this phenomenon and discussed two examples in detail.

  20. Ocular-following responses to white noise stimuli in humans reveal a novel nonlinearity that results from temporal sampling.

    Science.gov (United States)

    Sheliga, Boris M; Quaia, Christian; FitzGibbon, Edmond J; Cumming, Bruce G

    2016-01-01

    White noise stimuli are frequently used to study the visual processing of broadband images in the laboratory. A common goal is to describe how responses are derived from Fourier components in the image. We investigated this issue by recording the ocular-following responses (OFRs) to white noise stimuli in human subjects. For a given speed we compared OFRs to unfiltered white noise with those to noise filtered with band-pass filters and notch filters. Removing components with low spatial frequency (SF) reduced OFR magnitudes, and the SF associated with the greatest reduction matched the SF that produced the maximal response when presented alone. This reduction declined rapidly with SF, compatible with a winner-take-all operation. Removing higher SF components increased OFR magnitudes. For higher speeds this effect became larger and propagated toward lower SFs. All of these effects were quantitatively well described by a model that combined two factors: (a) an excitatory drive that reflected the OFRs to individual Fourier components and (b) a suppression by higher SF channels where the temporal sampling of the display led to flicker. This nonlinear interaction has an important practical implication: Even with high refresh rates (150 Hz), the temporal sampling introduced by visual displays has a significant impact on visual processing. For instance, we show that this distorts speed tuning curves, shifting the peak to lower speeds. Careful attention to spectral content, in the light of this nonlinearity, is necessary to minimize the resulting artifact when using white noise patterns undergoing apparent motion.
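    The band-pass and notch manipulations described above can be implemented by zeroing Fourier components of a white-noise pattern inside or outside a chosen spatial-frequency band. The Python sketch below illustrates this for a 1-D pattern; the pixel size and band edges are illustrative assumptions, not the study's values.

      import numpy as np

      rng = np.random.default_rng(3)

      def filter_noise(n_pixels=512, pixel_deg=0.05, band=(0.5, 2.0), mode="bandpass"):
          """1-D white-noise pattern filtered in the spatial-frequency domain.
          mode='bandpass' keeps only components inside `band` (cycles/deg);
          mode='notch' removes them instead."""
          noise = rng.standard_normal(n_pixels)
          spec = np.fft.rfft(noise)
          sf = np.fft.rfftfreq(n_pixels, d=pixel_deg)            # spatial-frequency axis
          in_band = (sf >= band[0]) & (sf <= band[1])
          mask = in_band if mode == "bandpass" else ~in_band
          return np.fft.irfft(spec * mask, n=n_pixels)

      bp = filter_noise(mode="bandpass")     # keeps the 0.5-2 c/deg components
      nt = filter_noise(mode="notch")        # removes them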

  1. Modification of sudden onset auditory ERP by involuntary attention to visual stimuli.

    Science.gov (United States)

    Oray, Serkan; Lu, Zhong-Lin; Dawson, Michael E

    2002-03-01

    To investigate the cross-modal nature of the exogenous attention system, we studied how involuntary attention in the visual modality affects ERPs elicited by sudden onset of events in the auditory modality. Relatively loud auditory white noise bursts were presented to subjects with random and long inter-trial intervals. The noise bursts were either presented alone, or paired with a visual stimulus with a visual to auditory onset asynchrony of 120 ms. In a third condition, the visual stimuli were shown alone. All three conditions, auditory alone, visual alone, and paired visual/auditory, were randomly inter-mixed and presented with equal probabilities. Subjects were instructed to fixate on a point in front of them without task instructions concerning either the auditory or visual stimuli. ERPs were recorded from 28 scalp sites throughout every experimental session. Compared to ERPs in the auditory alone condition, pairing the auditory noise bursts with the visual stimulus reduced the amplitude of the auditory N100 component at Cz by 40% and the auditory P200/P300 component at Cz by 25%. No significant topographical change was observed in the scalp distributions of the N100 and P200/P300. Our results suggest that involuntary attention to visual stimuli suppresses early sensory (N100) as well as late cognitive (P200/P300) processing of sudden auditory events. The activation of the exogenous attention system by sudden auditory onset can be modified by involuntary visual attention in a cross-modal, passive prepulse inhibition paradigm.

  2. Long term memory for noise: evidence of robust encoding of very short temporal acoustic patterns.

    Directory of Open Access Journals (Sweden)

    Jayalakshmi Viswanathan

    2016-11-01

    Recent research has demonstrated that humans are able to implicitly encode and retain repeating patterns in meaningless auditory noise. Our study aimed at testing the robustness of long-term implicit recognition memory for these learned patterns. Participants performed a cyclic/non-cyclic discrimination task, during which they were presented with either 1-s cyclic noises (CNs) (the two halves of the noise were identical) or 1-s plain random noises (Ns). Among CNs and Ns presented once, target CNs were implicitly presented multiple times within a block, and implicit recognition of these target CNs was tested 4 weeks later using a similar cyclic/non-cyclic discrimination task. Furthermore, robustness of implicit recognition memory was tested by presenting participants with looped (shifting the origin) and scrambled (chopping sounds into 10- and 20-ms bits before shuffling) versions of the target CNs. We found that participants had robust implicit recognition memory for learned noise patterns after 4 weeks, right from the first presentation. Additionally, this memory was remarkably resistant to acoustic transformations, such as looping and scrambling of the sounds. Finally, implicit recognition of sounds was dependent on participants' discrimination performance during learning. Our findings suggest that meaningless temporal features as short as 10 ms can be implicitly stored in long-term auditory memory. Moreover, successful encoding and storage of such fine features may vary between participants, possibly depending on individual attention and auditory discrimination abilities.

  3. Analysis of impact noise induced by hitting of titanium head golf driver.

    Science.gov (United States)

    Kim, Young Ho; Kim, Young Chul; Lee, Jun Hee; An, Yong-Hwi; Park, Kyung Tae; Kang, Kyung Min; Kang, Yeon June

    2014-11-01

    The impact of a titanium head golf driver against a golf ball creates a short-duration, high-frequency impact noise. We analyzed the spectra of these impact noises and evaluated the auditory hazards from exposure to them. Noises made by 10 titanium head golf drivers, with five maximal hits each, were collected, and the spectra of the pure impact sounds were studied using a noise analysis program. The noise was measured at 1.7 m (position A) and 3.4 m (position B) from the hitting point in front of the hitter, and at 3.4 m (position C) behind the hitting point. The average time duration was measured, and auditory risk units (ARUs) at position A were calculated using the Auditory Hazard Assessment Algorithm for Humans. The average peak levels at position A were 119.9 dBA at the sound pressure level (SPL) peak and 100.0 dBA at the overall octave level. The average peak levels (SPL and overall octave level) at position B were 111.6 and 96.5 dBA, respectively, and at position C were 111.5 and 96.7 dBA, respectively. The average time duration and ARUs measured at position A were 120.6 ms and 194.9 units, respectively. Although impact noises made by titanium head golf drivers showed relatively low ARUs, individuals who play golf frequently may be susceptible to hearing loss due to repeated exposure to this intense impact noise of short duration and high frequency. Unprotected exposure to impact noises should be limited to prevent cochleovestibular disorders.
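    For reference, a peak level reported in dB SPL maps directly onto a peak pressure re 20 uPa. The Python sketch below shows the unweighted calculation from a calibrated pressure waveform (the dBA figures above additionally apply A-weighting, which is omitted here for brevity); the example waveform is purely illustrative.

      import numpy as np

      def peak_spl_db(pressure_pa):
          """Peak sound pressure level in dB re 20 uPa from a calibrated pressure
          waveform (no frequency weighting applied)."""
          return 20 * np.log10(np.max(np.abs(pressure_pa)) / 20e-6)

      # Illustrative check: a 119.9-dB peak corresponds to a peak pressure of about
      # 20e-6 * 10**(119.9 / 20) ~= 19.8 Pa.
      print(peak_spl_db(np.array([0.0, 19.8, -5.0])))   # ~119.9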

  4. Proceedings of the 6th International Congress on Noise as a Public Health Problem, volume 2

    Science.gov (United States)

    1993-07-01

    The 160 papers from the congress are presented. Topics covered include the following: noise induced hearing loss; noise and communication; community response to noise; noise and animal life; non-auditory physiological effects; influence of noise on performance and behavior; noise and disturbed sleep; and regulations and standards.

  5. Noise-induced cochlear synaptopathy in rhesus monkeys (Macaca mulatta).

    Science.gov (United States)

    Valero, M D; Burton, J A; Hauser, S N; Hackett, T A; Ramachandran, R; Liberman, M C

    2017-09-01

    Cochlear synaptopathy can result from various insults, including acoustic trauma, aging, ototoxicity, or chronic conductive hearing loss. For example, moderate noise exposure in mice can destroy up to ∼50% of synapses between auditory nerve fibers (ANFs) and inner hair cells (IHCs) without affecting outer hair cells (OHCs) or thresholds, because the synaptopathy occurs first in high-threshold ANFs. However, the fiber loss likely impairs temporal processing and hearing-in-noise, a classic complaint of those with sensorineural hearing loss. Non-human primates appear to be less vulnerable to noise-induced hair-cell loss than rodents, but their susceptibility to synaptopathy has not been studied. Because establishing a non-human primate model may be important in the development of diagnostics and therapeutics, we examined cochlear innervation and the damaging effects of acoustic overexposure in young adult rhesus macaques. Anesthetized animals were exposed bilaterally to narrow-band noise centered at 2 kHz at various sound-pressure levels for 4 h. Cochlear function was assayed for up to 8 weeks following exposure via auditory brainstem responses (ABRs) and otoacoustic emissions (OAEs). A moderate loss of synaptic connections (mean of 12-27% in the basal half of the cochlea) followed temporary threshold shifts (TTS), despite minimal hair-cell loss. A dramatic loss of synapses (mean of 50-75% in the basal half of the cochlea) was seen on IHCs surviving noise exposures that produced permanent threshold shifts (PTS) and widespread hair-cell loss. Higher noise levels were required to produce PTS in macaques compared to rodents, suggesting that primates are less vulnerable to hair-cell loss. However, the phenomenon of noise-induced cochlear synaptopathy in primates is similar to that seen in rodents. Copyright © 2017 Elsevier B.V. All rights reserved.

  6. An analysis of nonlinear dynamics underlying neural activity related to auditory induction in the rat auditory cortex.

    Science.gov (United States)

    Noto, M; Nishikawa, J; Tateno, T

    2016-03-24

    A sound interrupted by silence is perceived as discontinuous. However, when high-intensity noise is inserted during the silence, the missing sound may be perceptually restored and be heard as uninterrupted. This illusory phenomenon is called auditory induction. Recent electrophysiological studies have revealed that auditory induction is associated with the primary auditory cortex (A1). Although experimental evidence has been accumulating, the neural mechanisms underlying auditory induction in A1 neurons are poorly understood. To elucidate this, we used both experimental and computational approaches. First, using an optical imaging method, we characterized population responses across auditory cortical fields to sound and identified five subfields in rats. Next, we examined neural population activity related to auditory induction with high temporal and spatial resolution in the rat auditory cortex (AC), including the A1 and several other AC subfields. Our imaging results showed that tone-burst stimuli interrupted by a silent gap elicited early phasic responses to the first tone and similar or smaller responses to the second tone following the gap. In contrast, tone stimuli interrupted by broadband noise (BN), considered to cause auditory induction, considerably suppressed or eliminated responses to the tone following the noise. Additionally, tone-burst stimuli that were interrupted by notched noise centered at the tone frequency, which is considered to decrease the strength of auditory induction, partially restored the second responses from the suppression caused by BN. To phenomenologically mimic the neural population activity in the A1 and thus investigate the mechanisms underlying auditory induction, we constructed a computational model from the periphery through the AC, including a nonlinear dynamical system. The computational model successfully reproduced some of the above-mentioned experimental results. Therefore, our results suggest that a nonlinear, self

  7. Implicit learning of predictable sound sequences modulates human brain responses at different levels of the auditory hierarchy

    Directory of Open Access Journals (Sweden)

    Françoise Lecaignard

    2015-09-01

    Deviant stimuli, violating regularities in a sensory environment, elicit the Mismatch Negativity (MMN), largely described in the Event-Related Potential literature. While it is widely accepted that the MMN reflects more than basic change detection, a comprehensive description of the mental processes modulating this response is still lacking. Within the framework of predictive coding, deviance processing is part of an inference process in which prediction errors (the mismatch between incoming sensations and predictions established through experience) are minimized. In this view, the MMN is a measure of prediction error, which yields specific expectations regarding its modulation by various experimental factors. In particular, it predicts that the MMN should decrease as the occurrence of a deviance becomes more predictable. We conducted a passive oddball EEG study and manipulated the predictability of sound sequences by means of different temporal structures. Importantly, our design allows comparing mismatch responses elicited by predictable and unpredictable violations of a simple repetition rule and therefore departs from previous studies that investigate violations of different time-scale regularities. We observed a decrease of the MMN with predictability and, interestingly, a similar effect at earlier latencies, within 70 ms after deviance onset. Following these pre-attentive responses, a reduced P3a was measured in the case of predictable deviants. We conclude that early and late deviance responses reflect prediction errors, triggering belief updating within the auditory hierarchy. Besides, in this passive study, such perceptual inference appears to be modulated by higher-level implicit learning of sequence statistical structures. Our findings argue for a hierarchical model of auditory processing where predictive coding enables implicit extraction of environmental regularities.
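    As a schematic of the design logic above, the sketch below generates standard/deviant label sequences in which the same number of deviants is placed either at a fixed period (predictable) or at random positions (unpredictable). This is a simplified stand-in for the authors' temporal-structure manipulation; all parameters are illustrative.

      import numpy as np

      rng = np.random.default_rng(5)

      def oddball_sequence(n_tones=400, deviant_every=8, predictable=True):
          """Return an array of 'std'/'dev' labels for a passive oddball block."""
          labels = np.array(["std"] * n_tones, dtype=object)
          if predictable:
              positions = np.arange(deviant_every - 1, n_tones, deviant_every)
          else:
              n_dev = n_tones // deviant_every
              positions = rng.choice(n_tones, size=n_dev, replace=False)
          labels[positions] = "dev"
          return labels

      seq_pred = oddball_sequence(predictable=True)     # deviants at a fixed period
      seq_unpred = oddball_sequence(predictable=False)  # same deviant rate, random timing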

  8. Children's auditory working memory performance in degraded listening conditions.

    Science.gov (United States)

    Osman, Homira; Sullivan, Jessica R

    2014-08-01

    The objectives of this study were to determine (a) whether school-age children with typical hearing demonstrate poorer auditory working memory performance in multitalker babble at degraded signal-to-noise ratios than in quiet; and (b) whether the amount of cognitive demand of the task contributed to differences in performance in noise. It was hypothesized that stressing the working memory system with the presence of noise would impede working memory processes in real time and result in poorer working memory performance in degraded conditions. Twenty children with typical hearing between 8 and 10 years old were tested using 4 auditory working memory tasks (Forward Digit Recall, Backward Digit Recall, Listening Recall Primary, and Listening Recall Secondary). Stimuli were from the standardized Working Memory Test Battery for Children. Each task was administered in quiet and in 4-talker babble noise at 0 dB and -5 dB signal-to-noise ratios. Children's auditory working memory performance was systematically decreased in the presence of multitalker babble noise compared with quiet. Differences between low-complexity and high-complexity tasks were observed, with children performing more poorly on tasks with greater storage and processing demands. There was no interaction between noise and complexity of task. All tasks were negatively impacted similarly by the addition of noise. Auditory working memory performance was negatively impacted by the presence of multitalker babble noise. Regardless of complexity of task, noise had a similar effect on performance. These findings suggest that the addition of noise inhibits auditory working memory processes in real time for school-age children.
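    Presenting the recorded items in four-talker babble at 0 or -5 dB SNR amounts to scaling the babble relative to the speech RMS. The Python sketch below shows the standard RMS-based calculation; the function and variable names are illustrative, not from the study's materials.

      import numpy as np

      def mix_at_snr(speech, babble, snr_db):
          """Scale a babble masker so that the speech-to-babble ratio equals snr_db
          (RMS-based), then return the mixture. Arrays must share the sampling rate."""
          speech = np.asarray(speech, dtype=float)
          babble = np.asarray(babble, dtype=float)[:len(speech)]
          rms = lambda x: np.sqrt(np.mean(x ** 2))
          gain = rms(speech) / (rms(babble) * 10 ** (snr_db / 20))
          return speech + gain * babble

      # e.g. mix_at_snr(digit_recording, four_talker_babble, snr_db=-5) for the harder condition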

  9. Auditory Perspective Taking

    National Research Council Canada - National Science Library

    Martinson, Eric; Brock, Derek

    2006-01-01

    .... From this knowledge of another's auditory perspective, a conversational partner can then adapt his or her auditory output to overcome a variety of environmental challenges and insure that what is said is intelligible...

  10. Auditory midbrain processing is differentially modulated by auditory and visual cortices: An auditory fMRI study.

    Science.gov (United States)

    Gao, Patrick P; Zhang, Jevin W; Fan, Shu-Juan; Sanes, Dan H; Wu, Ed X

    2015-12-01

    The cortex contains extensive descending projections, yet the impact of cortical input on brainstem processing remains poorly understood. In the central auditory system, the auditory cortex contains direct and indirect pathways (via brainstem cholinergic cells) to nuclei of the auditory midbrain, called the inferior colliculus (IC). While these projections modulate auditory processing throughout the IC, single neuron recordings have sampled only a small fraction of cells during stimulation of the corticofugal pathway. Furthermore, assessments of cortical feedback have not been extended to sensory modalities other than audition. To address these issues, we devised blood-oxygen-level-dependent (BOLD) functional magnetic resonance imaging (fMRI) paradigms to measure the sound-evoked responses throughout the rat IC and investigated the effects of bilateral ablation of either auditory or visual cortices. Auditory cortex ablation increased the gain of IC responses to noise stimuli (primarily in the central nucleus of the IC) and decreased response selectivity to forward species-specific vocalizations (versus temporally reversed ones, most prominently in the external cortex of the IC). In contrast, visual cortex ablation decreased the gain and induced a much smaller effect on response selectivity. The results suggest that auditory cortical projections normally exert a large-scale and net suppressive influence on specific IC subnuclei, while visual cortical projections provide a facilitatory influence. Meanwhile, auditory cortical projections enhance the midbrain response selectivity to species-specific vocalizations. We also probed the role of the indirect cholinergic projections in the auditory system in the descending modulation process by pharmacologically blocking muscarinic cholinergic receptors. This manipulation did not affect the gain of IC responses but significantly reduced the response selectivity to vocalizations. The results imply that auditory cortical

  11. Auditory Connections and Functions of Prefrontal Cortex

    Directory of Open Access Journals (Sweden)

    Bethany ePlakke

    2014-07-01

    Full Text Available The functional auditory system extends from the ears to the frontal lobes with successively more complex functions occurring as one ascends the hierarchy of the nervous system. Several areas of the frontal lobe receive afferents from both early and late auditory processing regions within the temporal lobe. Afferents from the early part of the cortical auditory system, the auditory belt cortex, which are presumed to carry information regarding auditory features of sounds, project to only a few prefrontal regions and are most dense in the ventrolateral prefrontal cortex (VLPFC. In contrast, projections from the parabelt and the rostral superior temporal gyrus (STG most likely convey more complex information and target a larger, widespread region of the prefrontal cortex. Neuronal responses reflect these anatomical projections as some prefrontal neurons exhibit responses to features in acoustic stimuli, while other neurons display task-related responses. For example, recording studies in non-human primates indicate that VLPFC is responsive to complex sounds including vocalizations and that VLPFC neurons in area 12/47 respond to sounds with similar acoustic morphology. In contrast, neuronal responses during auditory working memory involve a wider region of the prefrontal cortex. In humans, the frontal lobe is involved in auditory detection, discrimination, and working memory. Past research suggests that dorsal and ventral subregions of the prefrontal cortex process different types of information with dorsal cortex processing spatial/visual information and ventral cortex processing non-spatial/auditory information. While this is apparent in the non-human primate and in some neuroimaging studies, most research in humans indicates that specific task conditions, stimuli or previous experience may bias the recruitment of specific prefrontal regions, suggesting a more flexible role for the frontal lobe during auditory cognition.

  12. Auditory connections and functions of prefrontal cortex

    Science.gov (United States)

    Plakke, Bethany; Romanski, Lizabeth M.

    2014-01-01

    The functional auditory system extends from the ears to the frontal lobes with successively more complex functions occurring as one ascends the hierarchy of the nervous system. Several areas of the frontal lobe receive afferents from both early and late auditory processing regions within the temporal lobe. Afferents from the early part of the cortical auditory system, the auditory belt cortex, which are presumed to carry information regarding auditory features of sounds, project to only a few prefrontal regions and are most dense in the ventrolateral prefrontal cortex (VLPFC). In contrast, projections from the parabelt and the rostral superior temporal gyrus (STG) most likely convey more complex information and target a larger, widespread region of the prefrontal cortex. Neuronal responses reflect these anatomical projections as some prefrontal neurons exhibit responses to features in acoustic stimuli, while other neurons display task-related responses. For example, recording studies in non-human primates indicate that VLPFC is responsive to complex sounds including vocalizations and that VLPFC neurons in area 12/47 respond to sounds with similar acoustic morphology. In contrast, neuronal responses during auditory working memory involve a wider region of the prefrontal cortex. In humans, the frontal lobe is involved in auditory detection, discrimination, and working memory. Past research suggests that dorsal and ventral subregions of the prefrontal cortex process different types of information with dorsal cortex processing spatial/visual information and ventral cortex processing non-spatial/auditory information. While this is apparent in the non-human primate and in some neuroimaging studies, most research in humans indicates that specific task conditions, stimuli or previous experience may bias the recruitment of specific prefrontal regions, suggesting a more flexible role for the frontal lobe during auditory cognition. PMID:25100931

  13. Auditory-vocal mirroring in songbirds.

    Science.gov (United States)

    Mooney, Richard

    2014-01-01

    Mirror neurons are theorized to serve as a neural substrate for spoken language in humans, but the existence and functions of auditory-vocal mirror neurons in the human brain remain largely matters of speculation. Songbirds resemble humans in their capacity for vocal learning and depend on their learned songs to facilitate courtship and individual recognition. Recent neurophysiological studies have detected putative auditory-vocal mirror neurons in a sensorimotor region of the songbird's brain that plays an important role in expressive and receptive aspects of vocal communication. This review discusses the auditory and motor-related properties of these cells, considers their potential role in song learning and communication in relation to classical studies of birdsong, and points to the circuit and developmental mechanisms that may give rise to auditory-vocal mirroring in the songbird's brain.

  14. Using the Auditory Hazard Assessment Algorithm for Humans (AHAAH) With Hearing Protection Software, Release MIL-STD-1474E

    Science.gov (United States)

    2013-12-01


  15. Human response to wind turbine noise - perception, annoyance and moderating factors

    Energy Technology Data Exchange (ETDEWEB)

    Pedersen, Eja

    2007-05-15

    The aims of this thesis were to describe and gain an understanding of how people who live in the vicinity of wind turbines are affected by wind turbine noise, and how individual, situational and visual factors, as well as sound properties, moderate the response. Methods: A cross-sectional study was carried out in a flat, mainly rural area in Sweden, with the objective to estimate the prevalence of noise annoyance and to examine the dose-response relationship between A-weighted sound pressure levels (SPLs) and perception of and annoyance with wind turbine noise. Subjective responses were obtained through a questionnaire (n = 513; response rate: 68%) and outdoor, A-weighted SPLs were calculated for each respondent. To gain a deeper understanding of the observed noise annoyance, 15 people living in an area were interviewed using open-ended questions. The interviews were analysed using the comparative method of Grounded Theory (GT). An additional cross-sectional study, mainly exploring the influence of individual and situational factors, was carried out in seven areas in Sweden that differed with regard to terrain (flat or complex) and degree of urbanization (n = 765; response rate: 58%). To further explore the impact of visual factors, data from the two cross-sectional studies were tested with structural equation modelling. A proposed model of the influence of visual attitude on noise annoyance, also comprising the influence of noise level and general attitude, was tested among respondents who could see wind turbines versus respondents who could not see wind turbines from their dwelling, and respondents living in flat versus complex terrain. Dose-response relationships were found both for perception of noise and for noise annoyance in relation to A-weighted SPLs. The risk of annoyance was enhanced among respondents who could see at least one turbine from their dwelling and among those living in a rural area in comparison with a suburban area. Noise from wind turbines was

  16. Human response to wind turbine noise - perception, annoyance and moderating factors

    International Nuclear Information System (INIS)

    Pedersen, Eja

    2007-05-01

    The aims of this thesis were to describe and gain an understanding of how people who live in the vicinity of wind turbines are affected by wind turbine noise, and how individual, situational and visual factors, as well as sound properties, moderate the response. Methods: A cross-sectional study was carried out in a flat, mainly rural area in Sweden, with the objective to estimate the prevalence of noise annoyance and to examine the dose-response relationship between A-weighted sound pressure levels (SPLs) and perception of and annoyance with wind turbine noise. Subjective responses were obtained through a questionnaire (n = 513; response rate: 68%) and outdoor, A-weighted SPLs were calculated for each respondent. To gain a deeper understanding of the observed noise annoyance, 15 people living in an area were interviewed using open-ended questions. The interviews were analysed using the comparative method of Grounded Theory (GT). An additional cross-sectional study, mainly exploring the influence of individual and situational factors, was carried out in seven areas in Sweden that differed with regard to terrain (flat or complex) and degree of urbanization (n = 765; response rate: 58%). To further explore the impact of visual factors, data from the two cross-sectional studies were tested with structural equation modelling. A proposed model of the influence of visual attitude on noise annoyance, also comprising the influence of noise level and general attitude, was tested among respondents who could see wind turbines versus respondents who could not see wind turbines from their dwelling, and respondents living in flat versus complex terrain. Dose-response relationships were found both for perception of noise and for noise annoyance in relation to A-weighted SPLs. The risk of annoyance was enhanced among respondents who could see at least one turbine from their dwelling and among those living in a rural area in comparison with a suburban area. Noise from wind turbines was

  17. Left auditory cortex is involved in pairwise comparisons of the direction of frequency modulated tones

    Directory of Open Access Journals (Sweden)

    Nicole eAngenstein

    2013-07-01

    Full Text Available Evaluating series of complex sounds like those in speech and music requires sequential comparisons to extract task-relevant relations between subsequent sounds. With the present functional magnetic resonance imaging (fMRI) study, we investigated whether sequential comparison of a specific acoustic feature within pairs of tones leads to a change in lateralized processing in the auditory cortex of humans. For this we used the active categorization of the direction (up versus down) of slow frequency modulated (FM) tones. Several studies suggest that this task is mainly processed in the right auditory cortex. These studies, however, tested only the categorization of the FM direction of each individual tone. In the present study we ask the question whether the right lateralized processing changes when, in addition, the FM direction is compared within pairs of successive tones. For this we use an experimental approach involving contralateral noise presentation in order to explore the contributions made by the left and right auditory cortex in the completion of the auditory task. This method has already been applied to confirm the right-lateralized processing of the FM direction of individual tones. In the present study, the subjects were required to perform, in addition, a sequential comparison of the FM direction in pairs of tones. The results suggest a division of labor between the two hemispheres such that the FM direction of each individual tone is mainly processed in the right auditory cortex whereas the sequential comparison of this feature between tones in a pair is probably performed in the left auditory cortex.

  18. Impact of Educational Level on Performance on Auditory Processing Tests.

    Science.gov (United States)

    Murphy, Cristina F B; Rabelo, Camila M; Silagi, Marcela L; Mansur, Letícia L; Schochat, Eliane

    2016-01-01

    Research has demonstrated that a higher level of education is associated with better performance on cognitive tests among middle-aged and elderly people. However, the effects of education on auditory processing skills have not yet been evaluated. Previous demonstrations of sensory-cognitive interactions in the aging process indicate the potential importance of this topic. Therefore, the primary purpose of this study was to investigate the performance of middle-aged and elderly people with different levels of formal education on auditory processing tests. A total of 177 adults with no evidence of cognitive, psychological or neurological conditions took part in the research. The participants completed a series of auditory assessments, including dichotic digit, frequency pattern and speech-in-noise tests. A working memory test was also performed to investigate the extent to which auditory processing and cognitive performance were associated. The results demonstrated positive but weak correlations between years of schooling and performance on all of the tests applied. The factor "years of schooling" was also one of the best predictors of frequency pattern and speech-in-noise test performance. Additionally, performance on the working memory, frequency pattern and dichotic digit tests was also correlated, suggesting that the influence of educational level on auditory processing performance might be associated with the cognitive demand of the auditory processing tests rather than with auditory sensory aspects themselves. Longitudinal research is required to investigate the causal relationship between educational level and auditory processing skills.

  19. Effect of noise and filtering on largest Lyapunov exponent of time series associated with human walking.

    Science.gov (United States)

    Mehdizadeh, Sina; Sanjari, Mohammad Ali

    2017-11-07

    This study aimed to determine the effect of added noise, filtering and time series length on the largest Lyapunov exponent (LyE) value calculated for time series obtained from a passive dynamic walker. The simplest passive dynamic walker model, comprising two massless legs connected by a frictionless hinge joint at the hip, was adopted to generate walking time series. The generated time series was used to construct a state space with an embedding dimension of 3 and a time delay of 100 samples. The LyE was calculated as the exponential rate of divergence of neighboring trajectories of the state space using Rosenstein's algorithm. To determine the effect of noise on LyE values, seven levels of Gaussian white noise (SNR = 55-25 dB in 5 dB steps) were added to the time series. In addition, filtering was performed using a range of cutoff frequencies from 3 Hz to 19 Hz in 2 Hz steps. The LyE was calculated for both noise-free and noisy time series with different lengths of 6, 50, 100 and 150 strides. Results demonstrated a high percent error for LyE in the presence of noise. Therefore, these observations suggest that Rosenstein's algorithm might not perform well in the presence of added experimental noise. Furthermore, findings indicated that at least 50 walking strides are required to calculate LyE to account for the effect of noise. Finally, observations support that a conservative filtering of the time series with a high cutoff frequency might be more appropriate prior to calculating LyE. Copyright © 2017 Elsevier Ltd. All rights reserved.
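
    The two manipulations described in this abstract are straightforward to reproduce. The sketch below is a simplified reading, not the authors' code: it adds Gaussian white noise to a time series at a requested SNR and estimates the largest Lyapunov exponent with a Rosenstein-style nearest-neighbour divergence fit, using the embedding dimension (3) and delay (100 samples) quoted above. The toy signal, Theiler window and fit length are illustrative assumptions.

```python
import numpy as np

def add_noise_at_snr(signal, snr_db, rng=None):
    """Add Gaussian white noise to `signal` at the requested SNR in dB."""
    rng = np.random.default_rng() if rng is None else rng
    noise_power = np.mean(signal ** 2) / (10 ** (snr_db / 10))
    return signal + rng.normal(0.0, np.sqrt(noise_power), size=signal.shape)

def delay_embed(x, dim=3, tau=100):
    """Reconstruct the state space by time-delay embedding."""
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[i * tau: i * tau + n] for i in range(dim)])

def lyapunov_rosenstein(x, dim=3, tau=100, theiler=50, t_fit=100):
    """Largest Lyapunov exponent (per sample) via a Rosenstein-style divergence fit."""
    X = delay_embed(x, dim, tau)
    n = len(X)
    # Pairwise distances; exclude temporally close points (Theiler window).
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    idx = np.arange(n)
    d[np.abs(idx[:, None] - idx[None, :]) < theiler] = np.inf
    nn = np.argmin(d, axis=1)            # nearest neighbour of every point
    steps = np.arange(1, t_fit)
    log_div = []
    for k in steps:                      # mean log divergence after k steps
        valid = (idx + k < n) & (nn + k < n)
        dists = np.linalg.norm(X[idx[valid] + k] - X[nn[valid] + k], axis=1)
        log_div.append(np.mean(np.log(dists[dists > 0])))
    slope, _ = np.polyfit(steps, log_div, 1)   # LyE = slope of the divergence curve
    return slope

# Toy comparison of a clean and a noisy series (an SNR level from the abstract's range).
t = np.linspace(0, 60, 2000)
clean = np.sin(t) + 0.3 * np.sin(2.7 * t)
noisy = add_noise_at_snr(clean, snr_db=25)
print(lyapunov_rosenstein(clean), lyapunov_rosenstein(noisy))
```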

  20. Sentence Syntax and Content in the Human Temporal Lobe: An fMRI Adaptation Study in Auditory and Visual Modalities

    Energy Technology Data Exchange (ETDEWEB)

    Devauchelle, A.D.; Dehaene, S.; Pallier, C. [INSERM, Gif sur Yvette (France); Devauchelle, A.D.; Dehaene, S.; Pallier, C. [CEA, DSV, I2BM, NeuroSpin, F-91191 Gif Sur Yvette (France); Devauchelle, A.D.; Pallier, C. [Univ. Paris 11, Orsay (France); Oppenheim, C. [Univ Paris 05, Ctr Hosp St Anne, Paris (France); Rizzi, L. [Univ Siena, CISCL, I-53100 Siena (Italy); Dehaene, S. [Coll France, F-75231 Paris (France)

    2009-07-01

    Priming effects have been well documented in behavioral psycholinguistics experiments: The processing of a word or a sentence is typically facilitated when it shares lexico-semantic or syntactic features with a previously encountered stimulus. Here, we used fMRI priming to investigate which brain areas show adaptation to the repetition of a sentence's content or syntax. Participants read or listened to sentences organized in series which could or could not share similar syntactic constructions and/or lexico-semantic content. The repetition of lexico-semantic content yielded adaptation in most of the temporal and frontal sentence processing network, both in the visual and the auditory modalities, even when the same lexico-semantic content was expressed using variable syntactic constructions. No fMRI adaptation effect was observed when the same syntactic construction was repeated. Yet behavioral priming was observed at both syntactic and semantic levels in a separate experiment where participants detected sentence endings. We discuss a number of possible explanations for the absence of syntactic priming in the fMRI experiments, including the possibility that the conglomerate of syntactic properties defining 'a construction' is not an actual object assembled during parsing. (authors)

  1. Sentence Syntax and Content in the Human Temporal Lobe: An fMRI Adaptation Study in Auditory and Visual Modalities

    International Nuclear Information System (INIS)

    Devauchelle, A.D.; Dehaene, S.; Pallier, C.; Devauchelle, A.D.; Dehaene, S.; Pallier, C.; Devauchelle, A.D.; Pallier, C.; Oppenheim, C.; Rizzi, L.; Dehaene, S.

    2009-01-01

    Priming effects have been well documented in behavioral psycholinguistics experiments: The processing of a word or a sentence is typically facilitated when it shares lexico-semantic or syntactic features with a previously encountered stimulus. Here, we used fMRI priming to investigate which brain areas show adaptation to the repetition of a sentence's content or syntax. Participants read or listened to sentences organized in series which could or could not share similar syntactic constructions and/or lexico-semantic content. The repetition of lexico-semantic content yielded adaptation in most of the temporal and frontal sentence processing network, both in the visual and the auditory modalities, even when the same lexico-semantic content was expressed using variable syntactic constructions. No fMRI adaptation effect was observed when the same syntactic construction was repeated. Yet behavioral priming was observed at both syntactic and semantic levels in a separate experiment where participants detected sentence endings. We discuss a number of possible explanations for the absence of syntactic priming in the fMRI experiments, including the possibility that the conglomerate of syntactic properties defining 'a construction' is not an actual object assembled during parsing. (authors)

  2. A Brain System for Auditory Working Memory.

    Science.gov (United States)

    Kumar, Sukhbinder; Joseph, Sabine; Gander, Phillip E; Barascud, Nicolas; Halpern, Andrea R; Griffiths, Timothy D

    2016-04-20

    The brain basis for auditory working memory, the process of actively maintaining sounds in memory over short periods of time, is controversial. Using functional magnetic resonance imaging in human participants, we demonstrate that the maintenance of single tones in memory is associated with activation in auditory cortex. In addition, sustained activation was observed in hippocampus and inferior frontal gyrus. Multivoxel pattern analysis showed that patterns of activity in auditory cortex and left inferior frontal gyrus distinguished the tone that was maintained in memory. Functional connectivity during maintenance was demonstrated between auditory cortex and both the hippocampus and inferior frontal cortex. The data support a system for auditory working memory based on the maintenance of sound-specific representations in auditory cortex by projections from higher-order areas, including the hippocampus and frontal cortex. In this work, we demonstrate a system for maintaining sound in working memory based on activity in auditory cortex, hippocampus, and frontal cortex, and functional connectivity among them. Specifically, our work makes three advances from the previous work. First, we robustly demonstrate hippocampal involvement in all phases of auditory working memory (encoding, maintenance, and retrieval): the role of hippocampus in working memory is controversial. Second, using a pattern classification technique, we show that activity in the auditory cortex and inferior frontal gyrus is specific to the maintained tones in working memory. Third, we show long-range connectivity of auditory cortex to hippocampus and frontal cortex, which may be responsible for keeping such representations active during working memory maintenance. Copyright © 2016 Kumar et al.

  3. Exposure to environmental noise and risk for male infertility: A population-based cohort study

    International Nuclear Information System (INIS)

    Min, Kyoung-Bok; Min, Jin-Young

    2017-01-01

    Background: Noise is associated with poor reproductive health. A number of animal studies have suggested possible effects of exposure to high noise levels on fertility; to date, little such research has been performed in humans. Objectives: We examined an association between daytime and nocturnal noise exposures over four years (2002–2005) and subsequent male infertility. Methods: We used the National Health Insurance Service-National Sample Cohort (2002–2013), a population-wide health insurance claims dataset. A total of 206,492 males of reproductive age (20–59 years) with no history of congenital malformations were followed up for an 8-year period (2006–2013). Male infertility was defined as per ICD-10 code N46. Data on noise exposure was obtained from the National Noise Information System. Exposure levels of daytime and night time noise were extrapolated using geographic information systems and collated with the subjects' administrative district code, and individual exposure levels assigned. Results: During the study period, 3293 (1.6%) had a diagnosis of infertility. Although there was no association of infertility with 1-dB increments in noise exposure, a non-linear dose-response relationship was observed between infertility and quartiles of daytime and night time noise after adjustment for confounding variables (i.e., age, income, residential area, exercise, smoking, alcohol drinking, blood sugar, body mass index, medical histories, and particulate pollution). Based on WHO criteria, adjusted odds for infertility were significantly increased (OR = 1.14; 95% CI, 1.05–1.23) in males exposed to night time noise ≥ 55 dB. Conclusion: We found a significant association between exposure to environmental noise for four years and the subsequent incidence of male infertility, suggesting long-term exposure to noise has a role in the pathogenesis of male infertility. - Highlights: • Noise is widespread and imposes auditory and non-auditory health

  4. Effects of Auditory Stimuli on Visual Velocity Perception

    Directory of Open Access Journals (Sweden)

    Michiaki Shibata

    2011-10-01

    Full Text Available We investigated the effects of auditory stimuli on the perceived velocity of a moving visual stimulus. Previous studies have reported that the duration of visual events is perceived as being longer for events filled with auditory stimuli than for events not filled with auditory stimuli, i.e., the so-called “filled-duration illusion.” In this study, we have shown that auditory stimuli also affect the perceived velocity of a moving visual stimulus. In Experiment 1, a moving comparison stimulus (4.2∼5.8 deg/s) was presented together with filled (or unfilled) white-noise bursts or with no sound. The standard stimulus was a moving visual stimulus (5 deg/s) presented before or after the comparison stimulus. The participants had to judge which stimulus was moving faster. The results showed that the perceived velocity in the auditory-filled condition was lower than that in the auditory-unfilled and no-sound conditions. In Experiment 2, we investigated the effects of auditory stimuli on velocity adaptation. The results showed that the effects of velocity adaptation in the auditory-filled condition were weaker than those in the no-sound condition. These results indicate that auditory stimuli tend to decrease the perceived velocity of a moving visual stimulus.

  5. Some remarks on the effects of drugs, lack of sleep and loud noise on human performance.

    NARCIS (Netherlands)

    Sanders, A.F. & A.A. Bunt.

    1971-01-01

    Some literature is reviewed on the effects of some drugs (amphetamine, hypnotics, alcohol), loud noise and sleep loss in tests of time estimation, decision making, long-term performance and short-term memory. Results are most clear with respect to amphetamine, hypnotics and lack of sleep, in that

  6. The Relationship between Types of Attention and Auditory Processing Skills: Reconsidering Auditory Processing Disorder Diagnosis

    Science.gov (United States)

    Stavrinos, Georgios; Iliadou, Vassiliki-Maria; Edwards, Lindsey; Sirimanna, Tony; Bamiou, Doris-Eva

    2018-01-01

    Measures of attention have been found to correlate with specific auditory processing tests in samples of children suspected of Auditory Processing Disorder (APD), but these relationships have not been adequately investigated. Despite evidence linking auditory attention and deficits/symptoms of APD, measures of attention are not routinely used in APD diagnostic protocols. The aim of the study was to examine the relationship between auditory and visual attention tests and auditory processing tests in children with APD and to assess whether a proposed diagnostic protocol for APD, including measures of attention, could provide useful information for APD management. A pilot study including 27 children, aged 7–11 years, referred for APD assessment was conducted. The validated test of everyday attention for children, with visual and auditory attention tasks, the listening in spatialized noise sentences test, the children's communication checklist questionnaire and tests from a standard APD diagnostic test battery were administered. Pearson's partial correlation analysis examining the relationship between these tests and Cochrane's Q test analysis comparing proportions of diagnosis under each proposed battery were conducted. Divided auditory and divided auditory-visual attention strongly correlated with the dichotic digits test, r = 0.68, p attention battery identified as having Attention Deficits (ADs). The proposed APD battery excluding AD cases did not have a significantly different diagnosis proportion than the standard APD battery. Finally, the newly proposed diagnostic battery, identifying an inattentive subtype of APD, identified five children who would have otherwise been considered not having ADs. The findings show that a subgroup of children with APD demonstrates underlying sustained and divided attention deficits. Attention deficits in children with APD appear to be centred around the auditory modality but further examination of types of attention in both

  7. Investigation of a glottal related harmonics-to-noise ratio and spectral tilt as indicators of glottal noise in synthesized and human voice signals.

    LENUS (Irish Health Repository)

    Murphy, Peter J

    2008-03-01

    The harmonics-to-noise ratio (HNR) of the voiced speech signal has implicitly been used to infer information regarding the turbulent noise level at the glottis. However, two problems exist for inferring glottal noise attributes from the HNR of the speech waveform: (i) the measure is fundamental frequency (f0) dependent for equal levels of glottal noise, and (ii) any deviation from signal periodicity affects the ratio, not just turbulent noise. An alternative harmonics-to-noise ratio formulation [glottal related HNR (GHNR')] is proposed to overcome the former problem. In GHNR' a mean over the spectral range of interest of the HNRs at specific harmonic/between-harmonic frequencies (expressed in linear scale) is calculated. For the latter issue [(ii)] two spectral tilt measures are shown, using synthesis data, to be sensitive to glottal noise while at the same time being comparatively insensitive to other glottal aperiodicities. The theoretical development predicts that the spectral tilt measures reduce as noise levels increase. A conventional HNR estimator, GHNR', and two spectral tilt measures are applied to a data set of 13 pathological and 12 normal voice samples. One of the tilt measures and GHNR' are shown to provide statistically significant differentiating power over a conventional HNR estimator.
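
    As a concrete illustration of the GHNR' idea described above, a mean, in linear scale, of HNRs taken at harmonic versus between-harmonic frequencies, the sketch below computes such a ratio from a single voiced frame when f0 is known. This is one plausible reading of the abstract rather than the authors' implementation; the frame length, window and bin-picking rule are assumptions.

```python
import numpy as np

def glottal_related_hnr(frame, fs, f0, max_freq=4000):
    """Rough GHNR'-style estimate: mean (linear-scale) ratio of spectral energy at
    harmonic bins to energy at between-harmonic bins, assuming a known f0."""
    window = np.hanning(len(frame))
    spectrum = np.abs(np.fft.rfft(frame * window)) ** 2
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / fs)
    ratios = []
    k = 1
    while (k + 0.5) * f0 < max_freq:
        harm = spectrum[np.argmin(np.abs(freqs - k * f0))]             # at harmonic k
        between = spectrum[np.argmin(np.abs(freqs - (k + 0.5) * f0))]  # between harmonics
        if between > 0:
            ratios.append(harm / between)   # linear scale, as the abstract specifies
        k += 1
    return 10 * np.log10(np.mean(ratios))   # report the mean ratio in dB

# Toy usage: a synthetic "voiced" frame with additive noise.
fs, f0, dur = 16000, 120.0, 0.04
t = np.arange(int(fs * dur)) / fs
voiced = sum(np.sin(2 * np.pi * k * f0 * t) / k for k in range(1, 30))
noisy = voiced + 0.05 * np.random.default_rng(0).standard_normal(len(t))
print(glottal_related_hnr(noisy, fs, f0))
```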

  8. Direct recordings from the auditory cortex in a cochlear implant user.

    Science.gov (United States)

    Nourski, Kirill V; Etler, Christine P; Brugge, John F; Oya, Hiroyuki; Kawasaki, Hiroto; Reale, Richard A; Abbas, Paul J; Brown, Carolyn J; Howard, Matthew A

    2013-06-01

    Electrical stimulation of the auditory nerve with a cochlear implant (CI) is the method of choice for treatment of severe-to-profound hearing loss. Understanding how the human auditory cortex responds to CI stimulation is important for advances in stimulation paradigms and rehabilitation strategies. In this study, auditory cortical responses to CI stimulation were recorded intracranially in a neurosurgical patient to examine directly the functional organization of the auditory cortex and compare the findings with those obtained in normal-hearing subjects. The subject was a bilateral CI user with a 20-year history of deafness and refractory epilepsy. As part of the epilepsy treatment, a subdural grid electrode was implanted over the left temporal lobe. Pure tones, click trains, sinusoidal amplitude-modulated noise, and speech were presented via the auxiliary input of the right CI speech processor. Additional experiments were conducted with bilateral CI stimulation. Auditory event-related changes in cortical activity, characterized by the averaged evoked potential and event-related band power, were localized to posterolateral superior temporal gyrus. Responses were stable across recording sessions and were abolished under general anesthesia. Response latency decreased and magnitude increased with increasing stimulus level. More apical intracochlear stimulation yielded the largest responses. Cortical evoked potentials were phase-locked to the temporal modulations of periodic stimuli and speech utterances. Bilateral electrical stimulation resulted in minimal artifact contamination. This study demonstrates the feasibility of intracranial electrophysiological recordings of responses to CI stimulation in a human subject, shows that cortical response properties may be similar to those obtained in normal-hearing individuals, and provides a basis for future comparisons with extracranial recordings.

  9. Manipulation of Auditory Inputs as Rehabilitation Therapy for Maladaptive Auditory Cortical Reorganization

    Directory of Open Access Journals (Sweden)

    Hidehiko Okamoto

    2018-01-01

    Full Text Available Neurophysiological and neuroimaging data suggest that the brains of not only children but also adults are reorganized based on sensory inputs and behaviors. Plastic changes in the brain are generally beneficial; however, maladaptive cortical reorganization in the auditory cortex may lead to hearing disorders such as tinnitus and hyperacusis. Recent studies attempted to noninvasively visualize pathological neural activity in the living human brain and reverse maladaptive cortical reorganization by the suitable manipulation of auditory inputs in order to alleviate detrimental auditory symptoms. The effects of the manipulation of auditory inputs on the maladaptively reorganized brain are reviewed herein. The findings obtained indicate that rehabilitation therapy based on the manipulation of auditory inputs is an effective and safe approach for hearing disorders. The appropriate manipulation of sensory inputs guided by the visualization of pathological brain activities using recent neuroimaging techniques may contribute to the establishment of new clinical applications for affected individuals.

  10. Auditory evoked field measurement using magneto-impedance sensors

    Energy Technology Data Exchange (ETDEWEB)

    Wang, K., E-mail: o-kabou@echo.nuee.nagoya-u.ac.jp; Tajima, S.; Song, D.; Uchiyama, T. [Graduate School of Engineering, Nagoya University, Nagoya (Japan); Hamada, N.; Cai, C. [Aichi Steel Corporation, Tokai (Japan)

    2015-05-07

    The magnetic field of the human brain is extremely weak, and it is mostly measured and monitored in the magnetoencephalography method using superconducting quantum interference devices. In this study, in order to measure the weak magnetic field of the brain, we constructed a Magneto-Impedance sensor (MI sensor) system that can cancel out the background noise without any magnetic shield. Based on our previous studies of brain wave measurements, we used two MI sensors in this system for monitoring both cerebral hemispheres. In this study, we recorded and compared the auditory evoked field signals of the subject, including the N100 (or N1) and the P300 (or P3) brain waves. The results suggest that the MI sensor can be applied to brain activity measurement.

  11. Neuronal Correlates of Auditory Streaming in Monkey Auditory Cortex for Tone Sequences without Spectral Differences

    Directory of Open Access Journals (Sweden)

    Stanislava Knyazeva

    2018-01-01

    Full Text Available This study finds a neuronal correlate of auditory perceptual streaming in the primary auditory cortex for sequences of tone complexes that have the same amplitude spectrum but a different phase spectrum. Our finding is based on microelectrode recordings of multiunit activity from 270 cortical sites in three awake macaque monkeys. The monkeys were presented with repeated sequences of a tone triplet that consisted of an A tone, a B tone, another A tone and then a pause. The A and B tones were composed of unresolved harmonics formed by adding the harmonics in cosine phase, in alternating phase, or in random phase. A previous psychophysical study on humans revealed that when the A and B tones are similar, humans integrate them into a single auditory stream; when the A and B tones are dissimilar, humans segregate them into separate auditory streams. We found that the similarity of neuronal rate responses to the triplets was highest when all A and B tones had cosine phase. Similarity was intermediate when the A tones had cosine phase and the B tones had alternating phase. Similarity was lowest when the A tones had cosine phase and the B tones had random phase. The present study corroborates and extends previous reports, showing similar correspondences between neuronal activity in the primary auditory cortex and auditory streaming of sound sequences. It also is consistent with Fishman’s population separation model of auditory streaming.

  12. Neuronal Correlates of Auditory Streaming in Monkey Auditory Cortex for Tone Sequences without Spectral Differences.

    Science.gov (United States)

    Knyazeva, Stanislava; Selezneva, Elena; Gorkin, Alexander; Aggelopoulos, Nikolaos C; Brosch, Michael

    2018-01-01

    This study finds a neuronal correlate of auditory perceptual streaming in the primary auditory cortex for sequences of tone complexes that have the same amplitude spectrum but a different phase spectrum. Our finding is based on microelectrode recordings of multiunit activity from 270 cortical sites in three awake macaque monkeys. The monkeys were presented with repeated sequences of a tone triplet that consisted of an A tone, a B tone, another A tone and then a pause. The A and B tones were composed of unresolved harmonics formed by adding the harmonics in cosine phase, in alternating phase, or in random phase. A previous psychophysical study on humans revealed that when the A and B tones are similar, humans integrate them into a single auditory stream; when the A and B tones are dissimilar, humans segregate them into separate auditory streams. We found that the similarity of neuronal rate responses to the triplets was highest when all A and B tones had cosine phase. Similarity was intermediate when the A tones had cosine phase and the B tones had alternating phase. Similarity was lowest when the A tones had cosine phase and the B tones had random phase. The present study corroborates and extends previous reports, showing similar correspondences between neuronal activity in the primary auditory cortex and auditory streaming of sound sequences. It also is consistent with Fishman's population separation model of auditory streaming.

  13. Auditory function in the Tc1 mouse model of down syndrome suggests a limited region of human chromosome 21 involved in otitis media.

    Directory of Open Access Journals (Sweden)

    Stephanie Kuhn

    Full Text Available Down syndrome is one of the most common congenital disorders leading to a wide range of health problems in humans, including frequent otitis media. The Tc1 mouse carries a significant part of human chromosome 21 (Hsa21) in addition to the full set of mouse chromosomes and shares many phenotypes observed in humans affected by Down syndrome with trisomy of chromosome 21. However, it is unknown whether Tc1 mice exhibit a hearing phenotype and might thus represent a good model for understanding the hearing loss that is common in Down syndrome. In this study we carried out a structural and functional assessment of hearing in Tc1 mice. Auditory brainstem response (ABR) measurements in Tc1 mice showed normal thresholds compared to littermate controls, and ABR waveform latencies and amplitudes were equivalent to controls. The gross anatomy of the middle and inner ears was also similar between Tc1 and control mice. The physiological properties of cochlear sensory receptors (inner and outer hair cells: IHCs and OHCs) were investigated using single-cell patch clamp recordings from the acutely dissected cochleae. Adult Tc1 IHCs exhibited normal resting membrane potentials and expressed all K+ currents characteristic of control hair cells. However, the size of the large conductance (BK) Ca2+-activated K+ current (IK,f), which enables rapid voltage responses essential for accurate sound encoding, was increased in Tc1 IHCs. All physiological properties investigated in OHCs were indistinguishable between the two genotypes. The normal functional hearing and the gross structural anatomy of the middle and inner ears in the Tc1 mouse contrast with that observed in the Ts65Dn model of Down syndrome, which shows otitis media. Genes that are trisomic in Ts65Dn but disomic in Tc1 may predispose to otitis media when an additional copy is active.

  14. Auditory memory for temporal characteristics of sound.

    Science.gov (United States)

    Zokoll, Melanie A; Klump, Georg M; Langemann, Ulrike

    2008-05-01

    This study evaluates auditory memory for variations in the rate of sinusoidal amplitude modulation (SAM) of noise bursts in the European starling (Sturnus vulgaris). To estimate the extent of the starling's auditory short-term memory store, a delayed non-matching-to-sample paradigm was applied. The birds were trained to discriminate between a series of identical "sample stimuli" and a single "test stimulus". The birds classified SAM rates of sample and test stimuli as being either the same or different. Memory performance of the birds was measured as the percentage of correct classifications. Auditory memory persistence time was estimated as a function of the delay between sample and test stimuli. Memory performance was significantly affected by the delay between sample and test and by the number of sample stimuli presented before the test stimulus, but was not affected by the difference in SAM rate between sample and test stimuli. The individuals' auditory memory persistence times varied between 2 and 13 s. The starlings' auditory memory persistence in the present study for signals varying in the temporal domain was significantly shorter compared to that of a previous study (Zokoll et al. in J Acoust Soc Am 121:2842, 2007) applying tonal stimuli varying in the spectral domain.

  15. A Temporal White Noise Analysis for Extracting the Impulse Response Function of the Human Electroretinogram.

    Science.gov (United States)

    Zele, Andrew J; Feigl, Beatrix; Kambhampati, Pradeep K; Aher, Avinash; McKeefry, Declan; Parry, Neil; Maguire, John; Murray, Ian; Kremers, Jan

    2017-11-01

    We introduce a method for determining the impulse response function (IRF) of the ERG derived from responses to temporal white noise (TWN) stimuli. This white noise ERG (wnERG) was recorded in participants with normal trichromatic vision to full-field (Ganzfeld) and 39.3° diameter focal stimuli at mesopic and photopic mean luminances and at different TWN contrasts. The IRF was obtained by cross-correlating the TWN stimulus with the wnERG. We show that wnERG recordings are highly repeatable, with good signal-to-noise ratio, and do not lead to blink artifacts. The wnERG resembles a flash ERG waveform with an initial negativity (N1) followed by a positivity (P1), with amplitudes that are linearly related to stimulus contrast. These N1 and N1-P1 components showed commonalities in implicit times with the a- and b-waves of flash ERGs. There was a clear transition from rod- to cone-driven wnERGs at ∼1 photopic cd.m-2. We infer that oscillatory potentials found with the flash ERG, but not the wnERG, may reflect retinal nonlinearities due to the compression of energy into a short time period during a stimulus flash. The wnERG provides a new approach to study the physiology of the retina using a stimulation method with adaptation and contrast conditions similar to natural scenes to allow for independent variation of stimulus strength and mean luminance, which is not possible with the conventional flash ERG. The white noise ERG methodology will be of benefit for clinical studies and animal models in the evaluation of hypotheses related to cellular redundancy to understand the effects of disease on specific visual pathways.
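
    The core step reported above, deriving the IRF by cross-correlating the temporal white-noise stimulus with the recorded wnERG, amounts to reverse correlation. The sketch below shows that computation on simulated data; the sampling rate, noise level and kernel are made-up stand-ins, not values from the study.

```python
import numpy as np

def impulse_response_from_white_noise(stimulus, response, n_lags=200):
    """Estimate an impulse response by cross-correlating a white-noise stimulus
    with the recorded response (reverse correlation)."""
    stim = stimulus - np.mean(stimulus)
    resp = response - np.mean(response)
    irf = np.array([np.dot(stim[: len(stim) - lag], resp[lag:]) for lag in range(n_lags)])
    return irf / np.sum(stim ** 2)   # normalise by stimulus power

# Toy usage: recover a known kernel from a simulated linear response plus noise.
rng = np.random.default_rng(1)
stim = rng.normal(size=20000)               # temporal white-noise contrast sequence
true_irf = np.exp(-np.arange(200) / 30) * np.sin(2 * np.pi * np.arange(200) / 80)
resp = np.convolve(stim, true_irf)[: len(stim)] + 0.5 * rng.normal(size=len(stim))
est = impulse_response_from_white_noise(stim, resp)
```

    Because the stimulus is white, the cross-correlation at each lag is, up to a scale factor, an estimate of the linear kernel, which is why `est` should resemble `true_irf`.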

  16. Amygdala and auditory cortex exhibit distinct sensitivity to relevant acoustic features of auditory emotions.

    Science.gov (United States)

    Pannese, Alessia; Grandjean, Didier; Frühholz, Sascha

    2016-12-01

    Discriminating between auditory signals of different affective value is critical to successful social interaction. It is commonly held that acoustic decoding of such signals occurs in the auditory system, whereas affective decoding occurs in the amygdala. However, given that the amygdala receives direct subcortical projections that bypass the auditory cortex, it is possible that some acoustic decoding occurs in the amygdala as well, when the acoustic features are relevant for affective discrimination. We tested this hypothesis by combining functional neuroimaging with the neurophysiological phenomena of repetition suppression (RS) and repetition enhancement (RE) in human listeners. Our results show that both amygdala and auditory cortex responded differentially to physical voice features, suggesting that the amygdala and auditory cortex decode the affective quality of the voice not only by processing the emotional content from previously processed acoustic features, but also by processing the acoustic features themselves, when these are relevant to the identification of the voice's affective value. Specifically, we found that the auditory cortex is sensitive to spectral high-frequency voice cues when discriminating vocal anger from vocal fear and joy, whereas the amygdala is sensitive to vocal pitch when discriminating between negative vocal emotions (i.e., anger and fear). Vocal pitch is an instantaneously recognized voice feature, which is potentially transferred to the amygdala by direct subcortical projections. These results together provide evidence that, besides the auditory cortex, the amygdala too processes acoustic information, when this is relevant to the discrimination of auditory emotions. Copyright © 2016 Elsevier Ltd. All rights reserved.

  17. The relation between working memory capacity and auditory lateralization in children with auditory processing disorders.

    Science.gov (United States)

    Moossavi, Abdollah; Mehrkian, Saiedeh; Lotfi, Yones; Faghihzadeh, Soghrat; sajedi, Hamed

    2014-11-01

    Auditory processing disorder (APD) describes a complex and heterogeneous disorder characterized by poor speech perception, especially in noisy environments. APD may be responsible for a range of sensory processing deficits associated with learning difficulties. There is no general consensus about the nature of APD and how the disorder should be assessed or managed. This study assessed the effect of cognitive abilities (working memory capacity) on sound lateralization in children with auditory processing disorders, in order to determine how "auditory cognition" interacts with APD. The participants in this cross-sectional comparative study were 20 typically developing children and 17 children with a diagnosed auditory processing disorder (9-11 years old). Sound lateralization abilities were investigated using inter-aural time (ITD) differences and inter-aural intensity (IID) differences with two stimuli (high pass and low pass noise) in nine perceived positions. Working memory capacity was evaluated using the non-word repetition, and forward and backward digit span tasks. Linear regression was employed to measure the degree of association between working memory capacity and localization tests between the two groups. Children in the APD group had consistently lower scores than typically developing subjects in lateralization and working memory capacity measures. The results showed working memory capacity had a significantly negative correlation with ITD errors, especially with the high pass noise stimulus, but not with IID errors in APD children. The study highlights the impact of working memory capacity on auditory lateralization. The findings of this research indicate that the extent to which working memory influences auditory processing depends on the type of auditory processing and the nature of the stimulus/listening situation. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.

  18. Objective measures of binaural masking level differences and comodulation masking release based on late auditory evoked potentials.

    Science.gov (United States)

    Epp, Bastian; Yasin, Ifat; Verhey, Jesko L

    2013-12-01

    The audibility of important sounds is often hampered due to the presence of other masking sounds. The present study investigates if a correlate of the audibility of a tone masked by noise is found in late auditory evoked potentials measured from human listeners. The audibility of the target sound at a fixed physical intensity is varied by introducing auditory cues of (i) interaural target signal phase disparity and (ii) coherent masker level fluctuations in different frequency regions. In agreement with previous studies, psychoacoustical experiments showed that both stimulus manipulations result in a masking release (i: binaural masking level difference; ii: comodulation masking release) compared to a condition where those cues are not present. Late auditory evoked potentials (N1, P2) were recorded for the stimuli at a constant masker level, but different signal levels within the same set of listeners who participated in the psychoacoustical experiment. The data indicate differences in N1 and P2 between stimuli with and without interaural phase disparities. However, differences for stimuli with and without coherent masker modulation were only found for P2, i.e., only P2 is sensitive to the increase in audibility, irrespective of the cue that caused the masking release. The amplitude of P2 is consistent with the psychoacoustical finding of an addition of the masking releases when both cues are present. Even though it cannot be concluded where along the auditory pathway the audibility is represented, the P2 component of auditory evoked potentials is a candidate for an objective measure of audibility in the human auditory system. Copyright © 2013 Elsevier B.V. All rights reserved.

  19. How Might People Near National Roads Be Affected by Traffic Noise as Electric Vehicles Increase in Number? A Laboratory Study of Subjective Evaluations of Environmental Noise.

    Science.gov (United States)

    Walker, Ian; Kennedy, John; Martin, Susanna; Rice, Henry

    2016-01-01

    We face a likely shift to electric vehicles (EVs) but the environmental and human consequences of this are not yet well understood. Simulated auditory traffic scenes were synthesized from recordings of real conventional and EVs. These sounded similar to what might be heard by a person near a major national road. Versions of the simulation had 0%, 20%, 40%, 60%, 80% and 100% EVs. Participants heard the auditory scenes in random order, rating each on five perceptual dimensions such as pleasant-unpleasant and relaxing-stressful. Ratings of traffic noise were, overall, towards the negative end of these scales, but improved significantly when there were high proportions of EVs in the traffic mix, particularly when there were 80% or 100% EVs. This suggests a shift towards a high proportion of EVs is likely to improve the subjective experiences of people exposed to traffic noise from major roads. The effects were not a simple result of EVs being quieter: ratings of bandpass-filtered versions of the recordings suggested that people's perceptions of traffic noise were specifically influenced by energy in the 500-2000 Hz band. Engineering countermeasures to reduce noise in this band might be effective for improving the subjective experience of people living or working near major roads, even for conventional vehicles; energy in the 0-100 Hz band was particularly associated with people identifying sound as 'quiet' and, again, this might feed into engineering to reduce the impact of traffic noise on people.
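
    The frequency-band analysis mentioned above (ratings tracking energy in the 500-2000 Hz band, and 'quiet' judgements tracking energy below about 100 Hz) can be approximated with simple band-energy measures. The sketch below is a generic illustration, not the study's analysis code; the filter order, the 20 Hz lower edge of the low band and the placeholder signal are assumptions.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def band_energy(x, fs, lo, hi, order=4):
    """Mean-square energy of x within the [lo, hi] Hz band (Butterworth bandpass)."""
    sos = butter(order, [lo, hi], btype="bandpass", fs=fs, output="sos")
    return np.mean(sosfiltfilt(sos, x) ** 2)

# Placeholder 5-second "traffic recording"; the bands follow the abstract.
fs = 44100
x = np.random.default_rng(2).normal(size=fs * 5)
mid_band = band_energy(x, fs, 500, 2000)    # band associated with negative ratings
low_band = band_energy(x, fs, 20, 100)      # stand-in for the 0-100 Hz band
print(mid_band, low_band)
```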

  20. Attending to auditory memory.

    Science.gov (United States)

    Zimmermann, Jacqueline F; Moscovitch, Morris; Alain, Claude

    2016-06-01

    Attention to memory describes the process of attending to memory traces when the object is no longer present. It has been studied primarily for representations of visual stimuli with only few studies examining attention to sound object representations in short-term memory. Here, we review the interplay of attention and auditory memory with an emphasis on 1) attending to auditory memory in the absence of related external stimuli (i.e., reflective attention) and 2) effects of existing memory on guiding attention. Attention to auditory memory is discussed in the context of change deafness, and we argue that failures to detect changes in our auditory environments are most likely the result of a faulty comparison system of incoming and stored information. Also, objects are the primary building blocks of auditory attention, but attention can also be directed to individual features (e.g., pitch). We review short-term and long-term memory guided modulation of attention based on characteristic features, location, and/or semantic properties of auditory objects, and propose that auditory attention to memory pathways emerge after sensory memory. A neural model for auditory attention to memory is developed, which comprises two separate pathways in the parietal cortex, one involved in attention to higher-order features and the other involved in attention to sensory information. This article is part of a Special Issue entitled SI: Auditory working memory. Copyright © 2015 Elsevier B.V. All rights reserved.

  1. Acoustic Trauma Changes the Parvalbumin-Positive Neurons in Rat Auditory Cortex

    Directory of Open Access Journals (Sweden)

    Congli Liu

    2018-01-01

    Full Text Available Acoustic trauma has been reported to damage the auditory periphery and central system, and compromised cortical inhibition is involved in auditory disorders such as hyperacusis and tinnitus. Parvalbumin-containing neurons (PV neurons), a subset of GABAergic neurons, greatly shape and synchronize neural network activities. However, the change of PV neurons following acoustic trauma remains to be elucidated. The present study investigated how auditory cortical PV neurons change following unilateral 1 hour noise exposure (left ear, one octave band noise centered at 16 kHz, 116 dB SPL). Noise exposure elevated the auditory brainstem response threshold of the exposed ear when examined 7 days later. More detectable PV neurons were observed in both sides of the auditory cortex of noise-exposed rats when compared to control. The detectable PV neurons of the left auditory cortex (ipsilateral to the exposed ear) outnumbered those of the right auditory cortex (contralateral to the exposed ear). Quantification of Western blotted bands revealed a higher expression level of PV protein in the left cortex. These findings of more active PV neurons in noise-exposed rats suggested that a compensatory mechanism might be initiated to maintain a stable state of the brain.

  2. Noise Pollution Control System in the Hospital Environment

    Science.gov (United States)

    Figueroa Gallo, LM; Olivera, JM

    2016-04-01

    Problems related to environmental noise are not a new subject, but they have become a major issue because human activities have increased in complexity and intensity with technological advances. Numerous international studies have dealt with the exposure of critical patients to noisy environments such as Neonatal Intensive Care Units; their results show that noise can disrupt the organization of the developing brain, damage the delicate auditory structures and cause biorhythm disorders, especially in preterm infants. The objective of this paper is to present the development and implementation of a control system that includes technical, management and training aspects to regulate the levels of specific noise sources in the neonatal hospitalization environment. For this purpose, different tools were applied: observations, surveys, procedures, an electronic control device and a training program for a Neonatal Service Unit. As a result, all noise sources were identified (some of them eliminable); all categories of the unit's permanent staff participated voluntarily; environmental noise measurements yielded values between 62.5 and 64.6 dBA, with maxima between 86.1 and 89.7 dBA; a noise control device was designed and installed, and the staff is being trained in noise reduction best practices.

  3. Noise Pollution Control System in the Hospital Environment

    International Nuclear Information System (INIS)

    Figueroa Gallo, LM; Olivera, JM

    2016-01-01

    Problems related to environmental noise are not a new subject, but they have become a major issue because human activities have increased in complexity and intensity with technological advances. Numerous international studies have dealt with the exposure of critical patients to noisy environments such as Neonatal Intensive Care Units; their results show that noise can disrupt the organization of the developing brain, damage the delicate auditory structures and cause biorhythm disorders, especially in preterm infants. The objective of this paper is to present the development and implementation of a control system that includes technical, management and training aspects to regulate the levels of specific noise sources in the neonatal hospitalization environment. For this purpose, different tools were applied: observations, surveys, procedures, an electronic control device and a training program for a Neonatal Service Unit. As a result, all noise sources were identified (some of them eliminable); all categories of the unit's permanent staff participated voluntarily; environmental noise measurements yielded values between 62.5 and 64.6 dBA, with maxima between 86.1 and 89.7 dBA; a noise control device was designed and installed, and the staff is being trained in noise reduction best practices. (paper)

  4. Loud Music Exposure and Cochlear Synaptopathy in Young Adults: Isolated Auditory Brainstem Response Effects but No Perceptual Consequences.

    Science.gov (United States)

    Grose, John H; Buss, Emily; Hall, Joseph W

    2017-01-01

    The purpose of this study was to test the hypothesis that listeners with frequent exposure to loud music exhibit deficits in suprathreshold auditory performance consistent with cochlear synaptopathy. Young adults with normal audiograms were recruited who either did (n = 31) or did not (n = 30) have a history of frequent attendance at loud music venues where the typical sound levels could be expected to result in temporary threshold shifts. A test battery was administered that comprised three sets of procedures: (a) electrophysiological tests including distortion product otoacoustic emissions, auditory brainstem responses, envelope following responses, and the acoustic change complex evoked by an interaural phase inversion; (b) psychoacoustic tests including temporal modulation detection, spectral modulation detection, and sensitivity to interaural phase; and (c) speech tests including filtered phoneme recognition and speech-in-noise recognition. The results demonstrated that a history of loud music exposure can lead to a profile of peripheral auditory function that is consistent with an interpretation of cochlear synaptopathy in humans, namely, modestly abnormal auditory brainstem response Wave I/Wave V ratios in the presence of normal distortion product otoacoustic emissions and normal audiometric thresholds. However, there were no other electrophysiological, psychophysical, or speech perception effects. The absence of any behavioral effects in suprathreshold sound processing indicated that, even if cochlear synaptopathy is a valid pathophysiological condition in humans, its perceptual sequelae are either too diffuse or too inconsequential to permit a simple differential diagnosis of hidden hearing loss.
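
    For readers unfamiliar with the Wave I/Wave V ratio, the sketch below shows one way such a measure could be computed from an averaged ABR waveform. The latency windows are rough illustrative assumptions (roughly 1-2.5 ms for Wave I and 5-7 ms for Wave V after stimulus onset), not values taken from this study, and the function names are hypothetical.

        import numpy as np

        def wave_amplitude(abr_uv: np.ndarray, fs_hz: float,
                           t_start_ms: float, t_end_ms: float) -> float:
            """Peak-to-trough amplitude (microvolts) within a latency window."""
            i0 = int(t_start_ms * 1e-3 * fs_hz)
            i1 = int(t_end_ms * 1e-3 * fs_hz)
            segment = abr_uv[i0:i1]
            return float(segment.max() - segment.min())

        def wave_i_v_ratio(abr_uv: np.ndarray, fs_hz: float) -> float:
            """Wave I / Wave V amplitude ratio using approximate latency windows."""
            wave_i = wave_amplitude(abr_uv, fs_hz, 1.0, 2.5)
            wave_v = wave_amplitude(abr_uv, fs_hz, 5.0, 7.0)
            return wave_i / wave_v

        # Hypothetical usage with a 10-ms averaged waveform sampled at 20 kHz:
        # ratio = wave_i_v_ratio(averaged_abr_microvolts, fs_hz=20000.0)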

  5. Auditory Processing Disorder (For Parents)

    Science.gov (United States)

    ... role. Auditory cohesion problems: This is when higher-level listening tasks are difficult. Auditory cohesion skills — drawing inferences from conversations, understanding riddles, or comprehending verbal math problems — require heightened auditory processing and language levels. ...

  6. Cortical activity patterns predict robust speech discrimination ability in noise

    Science.gov (United States)

    Shetake, Jai A.; Wolf, Jordan T.; Cheung, Ryan J.; Engineer, Crystal T.; Ram, Satyananda K.; Kilgard, Michael P.

    2012-01-01

    The neural mechanisms that support speech discrimination in noisy conditions are poorly understood. In quiet conditions, spike timing information appears to be used in the discrimination of speech sounds. In this study, we evaluated the hypothesis that spike timing is also used to distinguish between speech sounds in noisy conditions that significantly degrade neural responses to speech sounds. We tested speech sound discrimination in rats and recorded primary auditory cortex (A1) responses to speech sounds in background noise of different intensities and spectral compositions. Our behavioral results indicate that rats, like humans, are able to accurately discriminate consonant sounds even in the presence of background noise that is as loud as the speech signal. Our neural recordings confirm that speech sounds evoke degraded but detectable responses in noise. Finally, we developed a novel neural classifier that mimics behavioral discrimination. The classifier discriminates between speech sounds by comparing the A1 spatiotemporal activity patterns evoked on single trials with the average spatiotemporal patterns evoked by known sounds. Unlike classifiers in most previous studies, this classifier is not provided with the stimulus onset time. Neural activity analyzed with the use of relative spike timing was well correlated with behavioral speech discrimination in quiet and in noise. Spike timing information integrated over longer intervals was required to accurately predict rat behavioral speech discrimination in noisy conditions. The similarity of neural and behavioral discrimination of speech in noise suggests that humans and rats may employ similar brain mechanisms to solve this problem. PMID:22098331
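
    To make the classifier idea concrete, here is a minimal template-matching sketch in Python. It compares a single-trial spatiotemporal response (recording sites x time bins) against the average pattern for each known sound and picks the closest one. Euclidean distance is used only for illustration, the onset-agnostic aspect of the actual classifier (which is not given the stimulus onset time) is omitted, and all names and dimensions below are hypothetical.

        import numpy as np

        def classify_trial(trial: np.ndarray, templates: dict) -> str:
            """Assign a single-trial response (n_sites x n_time_bins spike counts)
            to the sound whose average spatiotemporal template it matches best,
            using Euclidean distance as the similarity metric."""
            distances = {label: np.linalg.norm(trial - template)
                         for label, template in templates.items()}
            return min(distances, key=distances.get)

        # Hypothetical example: 20 sites, 40 one-millisecond bins per response.
        rng = np.random.default_rng(0)
        templates = {"dad": rng.poisson(2.0, (20, 40)).astype(float),
                     "sad": rng.poisson(2.0, (20, 40)).astype(float)}
        noisy_trial = templates["dad"] + rng.normal(0.0, 1.0, (20, 40))
        print(classify_trial(noisy_trial, templates))  # expected to print "dad"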

  7. Neuronal effects of nicotine during auditory selective attention.

    Science.gov (United States)

    Smucny, Jason; Olincy, Ann; Eichman, Lindsay S; Tregellas, Jason R

    2015-06-01

    Although the attention-enhancing effects of nicotine have been behaviorally and neurophysiologically well-documented, its localized functional effects during selective attention are poorly understood. In this study, we examined the neuronal effects of nicotine during auditory selective attention in healthy human nonsmokers. We hypothesized to observe significant effects of nicotine in attention-associated brain areas, driven by nicotine-induced increases in activity as a function of increasing task demands. A single-blind, prospective, randomized crossover design was used to examine neuronal response associated with a go/no-go task after 7 mg nicotine or placebo patch administration in 20 individuals who underwent functional magnetic resonance imaging at 3T. The task design included two levels of difficulty (ordered vs. random stimuli) and two levels of auditory distraction (silence vs. noise). Significant treatment × difficulty × distraction interaction effects on neuronal response were observed in the hippocampus, ventral parietal cortex, and anterior cingulate. In contrast to our hypothesis, U and inverted U-shaped dependencies were observed between the effects of nicotine on response and task demands, depending on the brain area. These results suggest that nicotine may differentially affect neuronal response depending on task conditions. These results have important theoretical implications for understanding how cholinergic tone may influence the neurobiology of selective attention.

  8. Does exposure to noise from human activities compromise sensory information from cephalopod statocysts?

    Science.gov (United States)

    Solé, Marta; Lenoir, Marc; Durfort, Mercè; López-Bejar, Manel; Lombarte, Antoni; van der Schaar, Mike; André, Michel

    2013-10-01

    Many anthropogenic noise sources nowadays contribute to the general noise budget of the oceans. The extent to which sound in the sea impacts and affects marine life is a topic of considerable current interest both to the scientific community and to the general public. Cephalopods potentially represent a group of species whose ecology may be influenced by artificial noise that would have a direct consequence on the functionality and sensitivity of their sensory organs, the statocysts, which are responsible for their equilibrium and movements in the water column. Controlled Exposure Experiments, including the use of a 50-400 Hz sweep (RL = 157 ± 5 dB re 1 μPa, with peak levels up to SPL = 175 dB re 1 μPa), revealed lesions in the statocysts of four cephalopod species of the Mediterranean Sea exposed to low-frequency sounds: Sepia officinalis (n = 76), Octopus vulgaris (n = 4), Loligo vulgaris (n = 5), and Illex coindetii (n = 2). The analysis was performed with scanning (SEM) and transmission (TEM) electron microscopy of the whole inner structure of the cephalopods' statocysts, especially the macula and crista. All exposed individuals presented the same lesions and the same incremental effects over time, consistent with the massive acoustic trauma observed in other species exposed to much higher sound intensities: immediately after exposure, damage was observed in the macula statica princeps (msp) and in the crista sensory epithelium. Kinocilia on hair cells were missing, bent, or flaccid. A number of hair cells showed protruding apical poles and ruptured lateral plasma membranes, most probably resulting from the extrusion of cytoplasmic material. Hair cells were also partially ejected from the sensory epithelium, and spherical holes corresponding to missing hair cells were visible in the epithelium. The cytoplasmic content of the damaged hair cells showed obvious changes, including the presence of numerous vacuoles

  9. Hidden Hearing Loss and Computational Models of the Auditory Pathway: Predicting Speech Intelligibility Decline

    Science.gov (United States)

    2016-11-28

    Christopher J. Smalt ... representation of speech intelligibility in noise. The auditory-periphery model of Zilany et al. (JASA 2009, 2014) is used to make predictions of ... auditory nerve (AN) responses to speech stimuli under a variety of difficult listening conditions. The resulting cochlear neurogram, a spectrogram
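
    The record is truncated above, but the idea of a cochlear neurogram (a spectrogram-like picture of simulated auditory-nerve activity) can be conveyed with a crude stand-in. The sketch below is not the Zilany et al. model; it merely band-pass filters a waveform into a few channels and smooths the rectified output of each. The channel spacing, filter order, and smoothing window are arbitrary illustrative choices.

        import numpy as np
        from scipy.signal import butter, sosfiltfilt

        def toy_neurogram(signal: np.ndarray, fs: float,
                          center_freqs_hz=(250, 500, 1000, 2000, 4000)) -> np.ndarray:
            """Very rough neurogram-like representation: half-octave band-pass
            channels followed by rectification and 5-ms smoothing. An
            illustrative stand-in, not an auditory-periphery model."""
            channels = []
            for fc in center_freqs_hz:
                lo, hi = fc / 2 ** 0.25, fc * 2 ** 0.25
                sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
                envelope = np.abs(sosfiltfilt(sos, signal))
                win = max(1, int(0.005 * fs))  # 5-ms moving average
                channels.append(np.convolve(envelope, np.ones(win) / win, mode="same"))
            return np.vstack(channels)  # shape: (n_channels, n_samples)

        # Example: neurogram of a 1-kHz tone sampled at 16 kHz.
        fs = 16000.0
        t = np.arange(0, 0.2, 1.0 / fs)
        print(toy_neurogram(np.sin(2 * np.pi * 1000 * t), fs).shape)  # (5, 3200)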

  10. Influence of memory, attention, IQ and age on auditory temporal processing tests: preliminary study

    OpenAIRE

    Murphy, Cristina Ferraz Borges; Zachi, Elaine Cristina; Roque, Daniela Tsubota; Ventura, Dora Selma Fix; Schochat, Eliane

    2014-01-01

    PURPOSE: To investigate the existence of correlations between the performance of children in auditory temporal tests (Frequency Pattern and Gaps in Noise - GIN) and IQ, attention, memory and age measurements. METHOD: Fifteen typically developing individuals between the ages of 7 and 12 years, with normal hearing, participated in the study. Auditory temporal processing tests (GIN and Frequency Pattern), as well as a Memory test (Digit Span), Attention tests (auditory and visual modality) and ...

  11. The fast detection of rare auditory feature conjunctions in the human brain as revealed by cortical gamma-band electroencephalogram.

    Science.gov (United States)

    Ruusuvirta, T; Huotilainen, M

    2005-01-01

    Natural environments typically contain temporal scatters of sounds emitted from multiple sources. The sounds may often physically stand out from one another in their conjoined rather than simple features. This poses a particular challenge for the brain to detect which of these sounds are rare and, therefore, potentially important for survival. We recorded gamma-band (32-40 Hz) electroencephalographic (EEG) oscillations from the scalp of adult humans who passively listened to a repeated tone carrying frequent and rare conjunctions of its frequency and intensity. EEG oscillations that this tone induced, rather than evoked, differed in amplitude between the two conjunction types within the 56-ms analysis window from tone onset. Our finding suggests that, perhaps with the support of its non-phase-locked synchrony in the gamma band, the human brain is able to detect rare sounds as feature conjunctions very rapidly.
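
    As a sketch of how induced (non-phase-locked) gamma activity is commonly separated from evoked (phase-locked) activity, the Python fragment below band-pass filters single-trial epochs to 32-40 Hz, takes the evoked amplitude from the trial average, and takes the induced amplitude from single-trial envelopes after the evoked waveform has been removed from each trial. This is one standard convention, offered as an illustration rather than the authors' exact analysis pipeline.

        import numpy as np
        from scipy.signal import butter, sosfiltfilt, hilbert

        def evoked_and_induced_gamma(trials: np.ndarray, fs: float,
                                     band=(32.0, 40.0)):
            """trials: (n_trials, n_samples) EEG epochs aligned to tone onset.
            Returns per-sample evoked and induced gamma amplitude estimates."""
            sos = butter(4, band, btype="bandpass", fs=fs, output="sos")
            filtered = sosfiltfilt(sos, trials, axis=-1)
            evoked_waveform = filtered.mean(axis=0)
            evoked_amplitude = np.abs(hilbert(evoked_waveform))
            residual = filtered - evoked_waveform  # remove the phase-locked part
            induced_amplitude = np.abs(hilbert(residual, axis=-1)).mean(axis=0)
            # Amplitudes can then be averaged over an analysis window, e.g. the
            # first 56 ms after tone onset, before comparing conditions.
            return evoked_amplitude, induced_amplitude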

  12. Tinnitus with a normal audiogram: Relation to noise exposure but no evidence for cochlear synaptopathy.

    Science.gov (United States)

    Guest, Hannah; Munro, Kevin J; Prendergast, Garreth; Howe, Simon; Plack, Christopher J

    2017-02-01

    In rodents, exposure to high-level noise can destroy synapses between inner hair cells and auditory nerve fibers, without causing hair cell loss or permanent threshold elevation. Such "cochlear synaptopathy" is associated with amplitude reductions in wave I of the auditory brainstem response (ABR) at moderate-to-high sound levels. Similar ABR results have been reported in humans with tinnitus and normal audiometric thresholds, leading to the suggestion that tinnitus in these cases might be a consequence of synaptopathy. However, the ABR is an indirect measure of synaptopathy and it is unclear whether the results in humans reflect the same mechanisms demonstrated in rodents. Measures of noise exposure were not obtained in the human studies, and high frequency audiometric loss may have impacted ABR amplitudes. To clarify the role of cochlear synaptopathy in tinnitus with a normal audiogram, we recorded ABRs, envelope following responses (EFRs), and noise exposure histories in young adults with tinnitus and matched controls. Tinnitus was associated with significantly greater lifetime noise exposure, despite close matching for age, sex, and audiometric thresholds up to 14 kHz. However, tinnitus was not associated with reduced ABR wave I amplitude, nor with significant effects on EFR measures of synaptopathy. These electrophysiological measures were also uncorrelated with lifetime noise exposure, providing no evidence of noise-induced synaptopathy in this cohort, despite a wide range of exposures. In young adults with normal audiograms, tinnitus may be related not to cochlear synaptopathy but to other effects of noise exposure. Copyright © 2016 The Authors. Published by Elsevier B.V. All rights reserved.
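
    For context on how "lifetime noise exposure" can be reduced to a single number, the sketch below accumulates exposure energy across reported activities using the standard relation that 1 Pa² RMS corresponds to roughly 94 dB SPL. The unit (Pa²·h), the function names, and the example activities are illustrative assumptions; the structured-interview instrument actually used by the authors is not reproduced here.

        def exposure_pa2h(level_db_spl: float, hours: float) -> float:
            """Exposure energy for one activity, in Pa-squared hours.
            1 Pa^2 RMS is approximately 94 dB SPL (re 20 micropascals)."""
            return 10 ** ((level_db_spl - 94.0) / 10.0) * hours

        def lifetime_exposure(activities) -> float:
            """Sum exposure energy over (level_dB_SPL, total_hours) pairs."""
            return sum(exposure_pa2h(level, hours) for level, hours in activities)

        # Hypothetical history: 200 h of concerts at ~100 dB SPL plus
        # 1000 h of personal-audio listening at ~85 dB SPL.
        print(lifetime_exposure([(100.0, 200.0), (85.0, 1000.0)]))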