WorldWideScience

Sample records for humans auditory noise

  1. Ubiquitous crossmodal Stochastic Resonance in humans: auditory noise facilitates tactile, visual and proprioceptive sensations.

    Directory of Open Access Journals (Sweden)

    Eduardo Lugo

    Full Text Available BACKGROUND: Stochastic resonance is a nonlinear phenomenon whereby the addition of noise can improve the detection of weak stimuli. An optimal amount of added noise results in the maximum enhancement, whereas further increases in noise intensity only degrade detection or information content. The phenomenon does not occur in linear systems, where the addition of noise to either the system or the stimulus only degrades the signal quality. Stochastic resonance (SR) has been extensively studied in different physical systems and has been extended to human sensory systems, where it can be classified as unimodal, central, behavioral and, recently, crossmodal. What has not been explored, however, is the extent of this crossmodal SR in humans; for instance, whether under the same auditory noise conditions crossmodal SR persists across different sensory systems. METHODOLOGY/PRINCIPAL FINDINGS: Using physiological and psychophysical techniques, we demonstrate that the same auditory noise can enhance the sensitivity of tactile, visual and proprioceptive system responses to weak signals. Specifically, we show that the effective auditory noise significantly increased tactile sensations of the finger, decreased luminance and contrast visual thresholds, and significantly changed EMG recordings of the leg muscles during posture maintenance. CONCLUSIONS/SIGNIFICANCE: We conclude that crossmodal SR is a ubiquitous phenomenon in humans that can be interpreted within an energy and frequency model of multisensory neurons' spontaneous activity. Initially, the energy and frequency content of the multisensory neurons' activity (supplied by the weak signals) is not enough to be detected, but when the auditory noise enters the brain, it generates a general activation among multisensory neurons of different regions, modifying their original activity. The result is an integrated activation that promotes sensitivity transitions, and the signals are then perceived. A physiologically
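
    As a rough illustration of the stochastic-resonance principle summarized above (a sketch not taken from the study; the signal, threshold and noise values are hypothetical), a sub-threshold signal passed through a hard threshold is never detected on its own, its thresholded output tracks the signal best at an intermediate noise level, and tracking degrades again as the noise grows:

        import numpy as np

        rng = np.random.default_rng(0)
        fs = 1000.0                                # sampling rate (Hz)
        t = np.arange(0, 2.0, 1.0 / fs)
        signal = 0.8 * np.sin(2 * np.pi * 5 * t)   # weak 5 Hz signal, peak below threshold
        threshold = 1.0                            # hard detection threshold

        def output_signal_correlation(noise_sd):
            # Correlate the thresholded (0/1) output with the underlying signal.
            # Too little noise gives no threshold crossings; too much gives crossings
            # everywhere; an intermediate level tracks the signal best.
            noisy = signal + rng.normal(0.0, noise_sd, size=t.size)
            out = (noisy > threshold).astype(float)
            if out.std() == 0.0:
                return 0.0
            return float(np.corrcoef(out, signal)[0, 1])

        for sd in [0.01, 0.1, 0.3, 1.0, 3.0, 10.0]:
            print(f"noise sd = {sd:5.2f}   output/signal correlation = {output_signal_correlation(sd):.3f}")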

  2. Auditory sustained field responses to periodic noise

    Directory of Open Access Journals (Sweden)

    Keceli Sumru

    2012-01-01

    Full Text Available Abstract Background Auditory sustained responses have been recently suggested to reflect neural processing of speech sounds in the auditory cortex. As periodic fluctuations below the pitch range are important for speech perception, it is necessary to investigate how low frequency periodic sounds are processed in the human auditory cortex. Auditory sustained responses have been shown to be sensitive to temporal regularity but the relationship between the amplitudes of auditory evoked sustained responses and the repetitive rates of auditory inputs remains elusive. As the temporal and spectral features of sounds enhance different components of sustained responses, previous studies with click trains and vowel stimuli presented diverging results. In order to investigate the effect of repetition rate on cortical responses, we analyzed the auditory sustained fields evoked by periodic and aperiodic noises using magnetoencephalography. Results Sustained fields were elicited by white noise and repeating frozen noise stimuli with repetition rates of 5-, 10-, 50-, 200- and 500 Hz. The sustained field amplitudes were significantly larger for all the periodic stimuli than for white noise. Although the sustained field amplitudes showed a rising and falling pattern within the repetition rate range, the response amplitudes to 5 Hz repetition rate were significantly larger than to 500 Hz. Conclusions The enhanced sustained field responses to periodic noises show that cortical sensitivity to periodic sounds is maintained for a wide range of repetition rates. Persistence of periodicity sensitivity below the pitch range suggests that in addition to processing the fundamental frequency of voice, sustained field generators can also resolve low frequency temporal modulations in speech envelope.

  3. Human Auditory Communication Disturbances Due To Road Traffic Noise Pollution in Calabar City, Nigeria

    Directory of Open Access Journals (Sweden)

    E. O. Obisung

    2016-10-01

    Full Text Available A study of auditory communication disturbances due to road transportation noise in Calabar Urban City, Nigeria was carried out. Both subjective (psycho-social) and objective (acoustical) measurements were made over a period of twelve months. Questionnaire/interview schedules containing pertinent questions were administered randomly to 500 respondents aged 15 years and above who had a good level of literacy skills (reading and writing) and had been living in houses sited along, or parallel to, busy roads with heavy traffic volume for at least three (3) years. The questionnaires provided the psycho-social responses of the respondents used in this study: their reactions to road traffic noise effects on communication activities (listening to radio, watching television, verbal communication between individuals, speech communication, and telephone/GSM communication). Acoustical measurements were made at the facades of respondents' houses facing the road using a precision digital sound level meter, Bruel and Kjaer (B & K) type 732, following the ISO 1996 standards. The meter read the road traffic noise levels at the measurement sites (facades of respondents' houses). The results obtained in this study show that residents of Calabar City suffer serious communication interference as a result of excessive road traffic noise levels. The noise indices used in this study were LAeq and Ldn. Noise levels obtained were over 93 dB(A) (daytime) and 60 dB(A) (nighttime) for LAeq, and 80 dB(A) for Ldn. These far exceeded the recommended values of 45-55 dB(A) and 70 dB(A) for LAeq and Ldn, respectively. A-weighted sound pressure levels (SPLs) ranged between 87.0 and 100.0 dB(A). It was also observed that over 98% of the respondents reported their television watching/radio listening disturbed, 99% reported telephone/GSM communication disturbed, 98% reported face-to-face verbal conversation disturbed, and 98% reported speech communication disturbed. The background noise levels (BNLs) of
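
    The two indices used above, LAeq (equivalent continuous A-weighted level) and Ldn (day-night average level with a night-time penalty), are energy averages of sound levels. A minimal Python sketch of how they are typically computed from hourly levels follows; the 10 dB night penalty, the assumed 22:00-07:00 night period, and the example that reuses the daytime/night-time figures reported above are conventional or illustrative assumptions, not details of this study's method.

        import numpy as np

        def laeq(levels_db):
            # Energy-average (equivalent continuous) level of equal-length intervals.
            levels_db = np.asarray(levels_db, dtype=float)
            return 10.0 * np.log10(np.mean(10.0 ** (levels_db / 10.0)))

        NIGHT_HOURS = (22, 23, 0, 1, 2, 3, 4, 5, 6)   # assumed night period, 22:00-07:00

        def ldn(hourly_db, penalty_db=10.0):
            # Day-night average level: 24 hourly levels, with a penalty
            # (conventionally 10 dB) added to night-time hours before averaging.
            hourly_db = np.asarray(hourly_db, dtype=float)
            adjusted = hourly_db.copy()
            adjusted[list(NIGHT_HOURS)] += penalty_db
            return laeq(adjusted)

        # Illustration only: 93 dB(A) in every daytime hour, 60 dB(A) at night.
        hours = np.array([60.0 if h in NIGHT_HOURS else 93.0 for h in range(24)])
        print(f"LAeq,24h = {laeq(hours):.1f} dB(A), Ldn = {ldn(hours):.1f} dB(A)")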

  4. Auditory and non-auditory effects of noise on health

    NARCIS (Netherlands)

    Basner, M.; Babisch, W.; Davis, A.; Brink, M.; Clark, C.; Janssen, S.A.; Stansfeld, S.

    2013-01-01

    Noise is pervasive in everyday life and can cause both auditory and non-auditory health effects. Noise-induced hearing loss remains highly prevalent in occupational settings, and is increasingly caused by social noise exposure (eg, through personal music players). Our understanding of molecular mec

  6. The Adverse Effects of Heavy Metals with and without Noise Exposure on the Human Peripheral and Central Auditory System: A Literature Review

    Directory of Open Access Journals (Sweden)

    Marie-Josée Castellanos

    2016-12-01

    Full Text Available Exposure to some chemicals in the workplace can lead to occupational chemical-induced hearing loss. Attention has mainly focused on the adverse auditory effects of solvents. However, other chemicals such as heavy metals have also been identified as ototoxic agents. The aim of this work was to review the current scientific knowledge about the adverse auditory effects of heavy metal exposure with and without co-exposure to noise in humans. PubMed and Medline were accessed to find suitable articles. A total of 49 articles met the inclusion criteria. Results from the review showed that no evidence is available about the ototoxic effects of manganese in humans. Contradictory results have been found for arsenic, lead and mercury, as well as for the possible interaction between heavy metals and noise. All studies included in this review found that exposure to cadmium and to mixtures of heavy metals induces auditory dysfunction. Most of the studies investigating the adverse auditory effects of heavy metals in humans have investigated populations exposed to lead. Some of these studies suggest peripheral and central auditory dysfunction induced by lead exposure. It is concluded that further evidence from human studies about the adverse auditory effects of heavy metal exposure is still required. Despite this issue, audiologists and other hearing health care professionals should be aware of the possible auditory effects of heavy metals.

  7. The Adverse Effects of Heavy Metals with and without Noise Exposure on the Human Peripheral and Central Auditory System: A Literature Review.

    Science.gov (United States)

    Castellanos, Marie-Josée; Fuente, Adrian

    2016-12-09

    Exposure to some chemicals in the workplace can lead to occupational chemical-induced hearing loss. Attention has mainly focused on the adverse auditory effects of solvents. However, other chemicals such as heavy metals have also been identified as ototoxic agents. The aim of this work was to review the current scientific knowledge about the adverse auditory effects of heavy metal exposure with and without co-exposure to noise in humans. PubMed and Medline were accessed to find suitable articles. A total of 49 articles met the inclusion criteria. Results from the review showed that no evidence is available about the ototoxic effects of manganese in humans. Contradictory results have been found for arsenic, lead and mercury, as well as for the possible interaction between heavy metals and noise. All studies included in this review found that exposure to cadmium and to mixtures of heavy metals induces auditory dysfunction. Most of the studies investigating the adverse auditory effects of heavy metals in humans have investigated populations exposed to lead. Some of these studies suggest peripheral and central auditory dysfunction induced by lead exposure. It is concluded that further evidence from human studies about the adverse auditory effects of heavy metal exposure is still required. Despite this issue, audiologists and other hearing health care professionals should be aware of the possible auditory effects of heavy metals.

  8. Auditory intensity processing: Effect of MRI background noise.

    Science.gov (United States)

    Angenstein, Nicole; Stadler, Jörg; Brechmann, André

    2016-03-01

    Studies on active auditory intensity discrimination in humans have shown equivocal results regarding the lateralization of processing. Whereas experiments with moderate background noise found evidence for right-lateralized processing of intensity, functional magnetic resonance imaging (fMRI) studies with background scanner noise suggest more left-lateralized processing. With the present fMRI study, we compared the task-dependent lateralization of intensity processing between a conventional continuous echo planar imaging (EPI) sequence with loud background scanner noise and a fast low-angle shot (FLASH) sequence with soft background scanner noise. To determine the lateralization of processing, we employed the contralateral noise procedure. Linearly frequency modulated (FM) tones were presented monaurally with and without contralateral noise. During both the EPI and the FLASH measurements, the left auditory cortex was more strongly involved than the right auditory cortex while participants categorized the intensity of FM tones, as shown by a strong effect of the additional contralateral noise on activity in the left auditory cortex. Thus, even a massive reduction in background scanner noise still leads to a significantly left-lateralized effect, suggesting that the reversed lateralization in fMRI studies with loud background noise, in contrast to studies with softer backgrounds, cannot be fully explained by the MRI background noise.

  9. Acoustic Noise of MRI Scans of the Internal Auditory Canal and Potential for Intracochlear Physiological Changes

    CERN Document Server

    Busada, M A; Ibrahim, G; Huckans, J H

    2012-01-01

    Magnetic resonance imaging (MRI) is a widely used medical imaging technique for assessing the health of the auditory (vestibulocochlear) nerve. A well-known problem with MRI machines is that the acoustic noise they generate during a scan can cause auditory temporary threshold shifts (TTS) in humans. In addition, studies have shown that excessive noise in general can cause rapid physiological changes in constituents of the auditory nerve within the cochlea. Here, we report in situ measurements of the acoustic noise from a 1.5 Tesla MRI machine (GE Signa) during scans specific to auditory nerve assessment. The measured average and maximum noise levels corroborate earlier investigations where TTS occurred. We briefly discuss the potential for physiological changes to the intracochlear branches of the auditory nerve as well as iatrogenic misdiagnoses of intralabyrinthine and intracochlear schwannomas due to hypertrophy of the auditory nerve within the cochlea during MRI assessment.

  10. Non-auditory Effect of Noise Pollution and Its Risk on Human Brain Activity in Different Audio Frequency Using Electroencephalogram Complexity.

    Science.gov (United States)

    Allahverdy, Armin; Jafari, Amir Homayoun

    2016-10-01

    Noise pollution is one of the most harmful ambient disturbances. It may cause many deficits in the ability and activity of persons in urban and industrial areas, and it may also cause many kinds of psychopathies. Therefore, it is very important to measure the risk of this pollution in different areas. This study was conducted in the Department of Medical Physics and Biomedical Engineering, Tehran University of Medical Sciences, from June to September 2015; different frequencies of noise pollution were played for volunteers while a 16-channel EEG signal was recorded synchronously. The complexity of the EEG signals was then measured using the fractal dimension and the relative power of the beta sub-band of the EEG. The results show that the average complexity of brain activity increases in the middle of the audio frequency range and that the complexity map of brain activity changes at different frequencies, which can reveal the effects of frequency changes on human brain activity. The complexity of the EEG is a good measure for ranking the annoyance and the non-auditory risk of noise pollution on human brain activity.
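
    The two complexity measures mentioned in this abstract, fractal dimension and relative beta-band power, can be computed from a single EEG channel roughly as follows. This is a hedged Python sketch using the standard Higuchi algorithm and a Welch spectrum; the sampling rate, band limits and placeholder data are illustrative assumptions, not the study's actual parameters.

        import numpy as np
        from scipy.signal import welch

        def higuchi_fd(x, kmax=10):
            # Higuchi fractal dimension of a 1-D signal (standard algorithm).
            x = np.asarray(x, dtype=float)
            n = x.size
            mean_lengths = []
            for k in range(1, kmax + 1):
                lengths = []
                for m in range(k):
                    idx = np.arange(m, n, k)
                    if idx.size < 2:
                        continue
                    # normalised curve length of the sub-sampled series
                    lengths.append(np.sum(np.abs(np.diff(x[idx]))) * (n - 1) / ((idx.size - 1) * k * k))
                mean_lengths.append(np.mean(lengths))
            k_vals = np.arange(1, kmax + 1)
            slope, _ = np.polyfit(np.log(1.0 / k_vals), np.log(mean_lengths), 1)
            return slope

        def relative_beta_power(x, fs, band=(13.0, 30.0)):
            # Relative power of the beta band, estimated from a Welch periodogram.
            f, pxx = welch(x, fs=fs, nperseg=min(len(x), int(2 * fs)))
            mask = (f >= band[0]) & (f <= band[1])
            return np.trapz(pxx[mask], f[mask]) / np.trapz(pxx, f)

        # Hypothetical usage on one EEG channel (placeholder data, 250 Hz, 30 s):
        fs = 250.0
        eeg = np.random.default_rng(1).standard_normal(int(30 * fs))
        print(higuchi_fd(eeg), relative_beta_power(eeg, fs))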

  11. Biomedical Simulation Models of Human Auditory Processes

    Science.gov (United States)

    Bicak, Mehmet M. A.

    2012-01-01

    Detailed acoustic engineering models were developed to explore the noise propagation mechanisms associated with noise attenuation and the transmission paths created when using hearing protectors such as earplugs and headsets in high-noise environments. Biomedical finite element (FE) models are developed based on volumetric computed tomography scan data, which provide explicit geometry of the external ear, ear canal, middle-ear ossicular bones and cochlea. Results from these studies have enabled a greater understanding of hearing-protector-to-flesh dynamics as well as the prioritization of noise propagation mechanisms. Prioritization of noise mechanisms can form an essential framework for exploring new design principles and methods in both earplug and earcup applications. These models are currently being used in the development of a novel hearing protection evaluation system that can provide experimentally correlated psychoacoustic noise attenuation. Moreover, these FE models can be used to simulate the effects of blast-related impulse noise on human auditory mechanisms and brain tissue.

  12. Context-Dependent Encoding in the Human Auditory Brainstem Relates to Hearing Speech in Noise: Implications for Developmental Dyslexia

    National Research Council Canada - National Science Library

    Chandrasekaran, Bharath; Hornickel, Jane; Skoe, Erika; Nicol, Trent; Kraus, Nina

    2009-01-01

    We examined context-dependent encoding of speech in children with and without developmental dyslexia by measuring auditory brainstem responses to a speech syllable presented in a repetitive or variable context...

  13. Evaluation of Evidence for Altered Behavior and Auditory Deficits in Fishes Due to Human-Generated Noise Sources

    Science.gov (United States)

    2006-04-01

    (Rutilus rutilus). Some of the roach were exposed to cobalt, which reversibly blocks the responsiveness of lateral line receptors (Karlsen and Sand) ... cartilaginous fishes, such as pelagic and benthic sharks, skates, and rays, since their auditory systems have potentially important variations in

  14. Functional sex differences in human primary auditory cortex

    NARCIS (Netherlands)

    Ruytjens, Liesbet; Georgiadis, Janniko R.; Holstege, Gert; Wit, Hero P.; Albers, Frans W. J.; Willemsen, Antoon T. M.

    2007-01-01

    Background We used PET to study cortical activation during auditory stimulation and found sex differences in the human primary auditory cortex (PAC). Regional cerebral blood flow (rCBF) was measured in 10 male and 10 female volunteers while listening to sounds (music or white noise) and during a baseline (no auditory stimulation).

  16. Noise perception in the workplace and auditory and extra-auditory symptoms referred by university professors.

    Science.gov (United States)

    Servilha, Emilse Aparecida Merlin; Delatti, Marina de Almeida

    2012-01-01

    To investigate the correlation between noise in the work environment and auditory and extra-auditory symptoms referred by university professors, eighty-five professors answered a questionnaire about identification, functional status, and health. The relationship between occupational noise and auditory and extra-auditory symptoms was investigated, with statistical analysis using a significance level of 5%. None of the professors indicated an absence of noise. Responses were grouped into Always (A) (n=21) and Not Always (NA) (n=63). Significant sources of noise were the yard and another class, both classified as high intensity, together with poor acoustics and echo. There was no association between referred noise and health complaints, such as digestive, hormonal, osteoarticular, dental, circulatory, respiratory and emotional complaints. There was also no association between referred noise and hearing complaints, although group A showed a higher occurrence of responses regarding noise nuisance, hearing difficulty, dizziness/vertigo, tinnitus, and earache. There was an association between referred noise and voice alterations, with group NA presenting a higher percentage of cases with voice alterations than group A. The university environment was considered noisy; however, there was no association with auditory and extra-auditory symptoms. Hearing complaints were more evident among professors in group A. Professors' health is a multi-dimensional product and, therefore, noise cannot be considered the only aggravating factor.

  17. Phonological Processing In Human Auditory Cortical Fields

    Directory of Open Access Journals (Sweden)

    David L Woods

    2011-04-01

    Full Text Available We used population-based cortical-surface analysis of functional magnetic resonance imaging (fMRI) data to characterize the processing of consonant-vowel-consonant syllables (CVCs) and spectrally matched amplitude-modulated noise bursts (AMNBs) in human auditory cortex as subjects attended to auditory or visual stimuli in an intermodal selective attention paradigm. Average auditory cortical field (ACF) locations were defined using tonotopic mapping in a previous study. Activations in auditory cortex were defined by two stimulus-preference gradients: (1) medial belt ACFs preferred AMNBs while lateral belt and parabelt fields preferred CVCs; this preference extended into core ACFs, with medial regions of primary auditory cortex (A1) and the rostral field (R) preferring AMNBs and lateral regions preferring CVCs. (2) Anterior ACFs showed smaller activations but more clearly defined stimulus preferences than did posterior ACFs. Stimulus preference gradients were unaffected by auditory attention, suggesting that different ACFs are specialized for the automatic processing of different spectrotemporal sound features.

  18. DEVELOPING ‘STANDARD NOVEL ‘VAD’ TECHNIQUE’ AND ‘NOISE FREE SIGNALS’ FOR SPEECH AUDITORY BRAINSTEM RESPONSES FOR HUMAN SUBJECTS

    OpenAIRE

    Ranganadh Narayanam

    2016-01-01

    In this research, as a first step, we concentrated on collecting non-intracortical EEG data of brainstem speech-evoked potentials from human subjects in an audiology lab at the University of Ottawa. The problems we considered are among the most advanced and most essential problems of interest worldwide in the area of auditory neural signal processing: the first problem is Voice Activity Detection (VAD) in speech auditory brainstem responses (ABR); the second problem is to identify the best De-...

  19. Functional sex differences in human primary auditory cortex

    Energy Technology Data Exchange (ETDEWEB)

    Ruytjens, Liesbet [University Medical Center Groningen, Department of Otorhinolaryngology, Groningen (Netherlands); University Medical Center Utrecht, Department Otorhinolaryngology, P.O. Box 85500, Utrecht (Netherlands); Georgiadis, Janniko R. [University of Groningen, University Medical Center Groningen, Department of Anatomy and Embryology, Groningen (Netherlands); Holstege, Gert [University of Groningen, University Medical Center Groningen, Center for Uroneurology, Groningen (Netherlands); Wit, Hero P. [University Medical Center Groningen, Department of Otorhinolaryngology, Groningen (Netherlands); Albers, Frans W.J. [University Medical Center Utrecht, Department Otorhinolaryngology, P.O. Box 85500, Utrecht (Netherlands); Willemsen, Antoon T.M. [University Medical Center Groningen, Department of Nuclear Medicine and Molecular Imaging, Groningen (Netherlands)

    2007-12-15

    We used PET to study cortical activation during auditory stimulation and found sex differences in the human primary auditory cortex (PAC). Regional cerebral blood flow (rCBF) was measured in 10 male and 10 female volunteers while listening to sounds (music or white noise) and during a baseline (no auditory stimulation). We found a sex difference in activation of the left and right PAC when comparing music to noise. The PAC was more activated by music than by noise in both men and women. But this difference between the two stimuli was significantly higher in men than in women. To investigate whether this difference could be attributed to either music or noise, we compared both stimuli with the baseline and revealed that noise gave a significantly higher activation in the female PAC than in the male PAC. Moreover, the male group showed a deactivation in the right prefrontal cortex when comparing noise to the baseline, which was not present in the female group. Interestingly, the auditory and prefrontal regions are anatomically and functionally linked and the prefrontal cortex is known to be engaged in auditory tasks that involve sustained or selective auditory attention. Thus we hypothesize that differences in attention result in a different deactivation of the right prefrontal cortex, which in turn modulates the activation of the PAC and thus explains the sex differences found in the activation of the PAC. Our results suggest that sex is an important factor in auditory brain studies. (orig.)

  20. The effects of noise exposure and musical training on suprathreshold auditory processing and speech perception in noise.

    Science.gov (United States)

    Yeend, Ingrid; Beach, Elizabeth Francis; Sharma, Mridula; Dillon, Harvey

    2017-09-01

    Recent animal research has shown that exposure to single episodes of intense noise causes cochlear synaptopathy without affecting hearing thresholds. It has been suggested that the same may occur in humans. If so, it is hypothesized that this would result in impaired encoding of sound and lead to difficulties hearing at suprathreshold levels, particularly in challenging listening environments. The primary aim of this study was to investigate the effect of noise exposure on auditory processing, including the perception of speech in noise, in adult humans. A secondary aim was to explore whether musical training might improve some aspects of auditory processing and thus counteract or ameliorate any negative impacts of noise exposure. In a sample of 122 participants (63 female) aged 30-57 years with normal or near-normal hearing thresholds, we conducted audiometric tests, including tympanometry, audiometry, acoustic reflexes, otoacoustic emissions and medial olivocochlear responses. We also assessed temporal and spectral processing by determining thresholds for the detection of amplitude modulation and temporal fine structure. We assessed speech-in-noise perception, and conducted tests of attention, memory and sentence closure. We also calculated participants' accumulated lifetime noise exposure and administered questionnaires to assess self-reported listening difficulty and musical training. The results showed no clear link between participants' lifetime noise exposure and performance on any of the auditory processing or speech-in-noise tasks. Musical training was associated with better performance on the auditory processing tasks, but not on the speech-in-noise perception tasks. The results indicate that sentence closure skills, working memory, attention, extended high frequency hearing thresholds and medial olivocochlear suppression strength are important factors that are related to the ability to process speech in noise.

  1. Reduction of internal noise in auditory perceptual learning.

    Science.gov (United States)

    Jones, Pete R; Moore, David R; Amitay, Sygal; Shub, Daniel E

    2013-02-01

    This paper examines what mechanisms underlie auditory perceptual learning. Fifteen normal hearing adults performed two-alternative, forced choice, pure tone frequency discrimination for four sessions. External variability was introduced by adding a zero-mean Gaussian random variable to the frequency of each tone. Measures of internal noise, encoding efficiency, bias, and inattentiveness were derived using four methods (model fit, classification boundary, psychometric function, and double-pass consistency). The four methods gave convergent estimates of internal noise, which was found to decrease from 4.52 to 2.93 Hz with practice. No group-mean changes in encoding efficiency, bias, or inattentiveness were observed. It is concluded that learned improvements in frequency discrimination primarily reflect a reduction in internal noise. Data from highly experienced listeners and neural networks performing the same task are also reported. These results also indicated that auditory learning represents internal noise reduction, potentially through the re-weighting of frequency-specific channels.
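
    The double-pass consistency logic used in this study to separate internal from external noise can be illustrated with a small simulation. This is a hedged sketch of the general method, not the authors' model: the decision rule, trial counts and noise values below are hypothetical. The same externally jittered stimuli are judged twice, and the response agreement between the two passes falls as internal noise grows relative to the frozen external noise.

        import numpy as np

        rng = np.random.default_rng(2)

        def double_pass(n_trials=2000, delta_f=2.0, sigma_ext=4.5, sigma_int=3.0):
            # Simulate a 2AFC frequency-discrimination observer judging the *same*
            # externally jittered stimuli on two passes; only internal noise differs.
            ext = rng.normal(0.0, sigma_ext, size=n_trials)   # frozen external jitter
            evidence = delta_f + ext                          # frequency difference seen by the observer
            resp1 = (evidence + rng.normal(0.0, sigma_int, size=n_trials)) > 0
            resp2 = (evidence + rng.normal(0.0, sigma_int, size=n_trials)) > 0
            p_correct = 0.5 * (np.mean(resp1) + np.mean(resp2))
            p_agreement = np.mean(resp1 == resp2)
            return p_correct, p_agreement

        # Larger internal noise -> lower agreement between passes for identical stimuli.
        for s_int in [1.0, 3.0, 6.0]:
            pc, agr = double_pass(sigma_int=s_int)
            print(f"sigma_int = {s_int:3.1f} Hz   P(correct) = {pc:.2f}   P(agreement) = {agr:.2f}")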

  2. Ion channel noise can explain firing correlation in auditory nerves.

    Science.gov (United States)

    Moezzi, Bahar; Iannella, Nicolangelo; McDonnell, Mark D

    2016-10-01

    Neural spike trains are commonly characterized as a Poisson point process. However, the Poisson assumption is a poor model for spiking in auditory nerve fibres because it is known that interspike intervals display positive correlation over long time scales and negative correlation over shorter time scales. We have therefore developed a biophysical model based on the well-known Meddis model of the peripheral auditory system, to produce simulated auditory nerve fibre spiking statistics that more closely match the firing correlations observed in empirical data. We achieve this by introducing biophysically realistic ion channel noise to an inner hair cell membrane potential model that includes fractal fast potassium channels and deterministic slow potassium channels. We succeed in producing simulated spike train statistics that match empirically observed firing correlations. Our model thus replicates macro-scale stochastic spiking statistics in the auditory nerve fibres due to modeling stochasticity at the micro-scale of potassium channels.

  3. Hearing an illusory vowel in noise: suppression of auditory cortical activity.

    Science.gov (United States)

    Riecke, Lars; Vanbussel, Mieke; Hausfeld, Lars; Başkent, Deniz; Formisano, Elia; Esposito, Fabrizio

    2012-06-06

    Human hearing is constructive. For example, when a voice is partially replaced by an extraneous sound (e.g., on the telephone due to a transmission problem), the auditory system may restore the missing portion so that the voice can be perceived as continuous (Miller and Licklider, 1950; for review, see Bregman, 1990; Warren, 1999). The neural mechanisms underlying this continuity illusion have been studied mostly with schematic stimuli (e.g., simple tones) and are still a matter of debate (for review, see Petkov and Sutter, 2011). The goal of the present study was to elucidate how these mechanisms operate under more natural conditions. Using psychophysics and electroencephalography (EEG), we assessed simultaneously the perceived continuity of a human vowel sound through interrupting noise and the concurrent neural activity. We found that vowel continuity illusions were accompanied by a suppression of the 4 Hz EEG power in auditory cortex (AC) that was evoked by the vowel interruption. This suppression was stronger than the suppression accompanying continuity illusions of a simple tone. Finally, continuity perception and 4 Hz power depended on the intactness of the sound that preceded the vowel (i.e., the auditory context). These findings show that a natural sound may be restored during noise due to the suppression of 4 Hz AC activity evoked early during the noise. This mechanism may attenuate sudden pitch changes, adapt the resistance of the auditory system to extraneous sounds across auditory scenes, and provide a useful model for assisted hearing devices.

  4. Auditory peripersonal space in humans: a case of auditory-tactile extinction.

    Science.gov (United States)

    Làdavas, E; Pavani, F; Farnè, A

    2001-01-01

    Animal experiments have shown that the spatial correspondence between auditory and tactile receptive fields of ventral pre-motor neurons provides a map of auditory peripersonal space around the head. This allows neurons to localize a near sound with respect to the head. In the present study, we demonstrated the existence of an auditory peripersonal space around the head in humans. In a right-brain damaged patient with tactile extinction, a sound delivered near the ipsilesional side of the head extinguished a tactile stimulus delivered to the contralesional side of the head (cross-modal auditory-tactile extinction). In contrast, when an auditory stimulus was presented far from the head, cross-modal extinction was dramatically reduced. This spatially specific cross-modal extinction was found only when a complex sound like a white noise burst was presented; pure tones did not produce spatially specific cross-modal extinction. These results show a high degree of functional similarity between the characteristics of the auditory peripersonal space representation in humans and monkeys. This similarity suggests that analogous physiological substrates might be responsible for coding this multisensory integrated representation of peripersonal space in human and non-human primates.

  5. Temporal envelope processing in the human auditory cortex: response and interconnections of auditory cortical areas.

    Science.gov (United States)

    Gourévitch, Boris; Le Bouquin Jeannès, Régine; Faucon, Gérard; Liégeois-Chauvel, Catherine

    2008-03-01

    Temporal envelope processing in the human auditory cortex has an important role in language analysis. In this paper, depth recordings of local field potentials in response to amplitude modulated white noises were used to design maps of activation in primary, secondary and associative auditory areas and to study the propagation of the cortical activity between them. The comparison of activations between auditory areas was based on a signal-to-noise ratio associated with the response to amplitude modulation (AM). The functional connectivity between cortical areas was quantified by the directed coherence (DCOH) applied to auditory evoked potentials. This study shows the following reproducible results on twenty subjects: (1) the primary auditory cortex (PAC), the secondary cortices (secondary auditory cortex (SAC) and planum temporale (PT)), the insular gyrus, the Brodmann area (BA) 22 and the posterior part of T1 gyrus (T1Post) respond to AM in both hemispheres. (2) A stronger response to AM was observed in SAC and T1Post of the left hemisphere independent of the modulation frequency (MF), and in the left BA22 for MFs 8 and 16Hz, compared to those in the right. (3) The activation and propagation features emphasized at least four different types of temporal processing. (4) A sequential activation of PAC, SAC and BA22 areas was clearly visible at all MFs, while other auditory areas may be more involved in parallel processing upon a stream originating from primary auditory area, which thus acts as a distribution hub. These results suggest that different psychological information is carried by the temporal envelope of sounds relative to the rate of amplitude modulation.

  6. Noise exposure and auditory effects on preschool personnel

    Directory of Open Access Journals (Sweden)

    Fredrik Sjödin

    2012-01-01

    Full Text Available Hearing impairments and tinnitus are being reported to an increasing extent by preschool employees. The investigation included 101 employees at 17 preschools in Umeå county, Sweden. Individual noise recordings and stationary recordings in dining rooms and play halls were conducted at two departments per preschool. The effects of noise exposure were assessed through audiometric screenings and questionnaires. The average individual noise exposure was close to 71 dB(A), with individual differences but small differences between the preschools. The noise levels in the dining rooms and play halls were about 64 dB(A), with small differences between the investigated types of rooms and preschools. The hearing loss of the employees was significantly higher for the frequencies tested when compared with an unexposed control group in Sweden. Symptoms of tinnitus were reported by about 31% of the employees. Annoyance was rated as somewhat to very annoying, and the voices of the children were the most annoying noise source. The dB(A) level and the fluctuation of the noise exposure were significantly correlated with the number of children per department. The preschool sound environment is complex, and our findings indicate that it is hazardous with regard to auditory disorders. The fluctuation of the noise is of special interest for further research.

  7. Noise exposure and auditory effects on preschool personnel.

    Science.gov (United States)

    Sjödin, Fredrik; Kjellberg, Anders; Knutsson, Anders; Landström, Ulf; Lindberg, Lennart

    2012-01-01

    Hearing impairments and tinnitus are being reported to an increasing extent by preschool employees. The investigation included 101 employees at 17 preschools in Umeå county, Sweden. Individual noise recordings and stationary recordings in dining rooms and play halls were conducted at two departments per preschool. The effects of noise exposure were assessed through audiometric screenings and questionnaires. The average individual noise exposure was close to 71 dB(A), with individual differences but small differences between the preschools. The noise levels in the dining rooms and play halls were about 64 dB(A), with small differences between the investigated types of rooms and preschools. The hearing loss of the employees was significantly higher for the frequencies tested when compared with an unexposed control group in Sweden. Symptoms of tinnitus were reported by about 31% of the employees. Annoyance was rated as somewhat to very annoying, and the voices of the children were the most annoying noise source. The dB(A) level and the fluctuation of the noise exposure were significantly correlated with the number of children per department. The preschool sound environment is complex, and our findings indicate that it is hazardous with regard to auditory disorders. The fluctuation of the noise is of special interest for further research.

  8. Frequency-specific modulation of population-level frequency tuning in human auditory cortex

    Directory of Open Access Journals (Sweden)

    Roberts Larry E

    2009-01-01

    Full Text Available Abstract Background Under natural circumstances, attention plays an important role in extracting relevant auditory signals from simultaneously present, irrelevant noises. Excitatory and inhibitory neural activity, enhanced by attentional processes, seems to sharpen frequency tuning, contributing to improved auditory performance, especially in noisy environments. In the present study, we investigated auditory magnetic fields in humans that were evoked by pure tones embedded in band-eliminated noises during two different stimulus sequencing conditions (constant vs. random) under focused auditory attention, by means of magnetoencephalography (MEG). Results In total, we used identical auditory stimuli between conditions, but presented them in a different order, thereby manipulating the neural processing and the auditory performance of the listeners. Constant stimulus sequencing blocks were characterized by the simultaneous presentation of pure tones of identical frequency with band-eliminated noises, whereas random sequencing blocks were characterized by the simultaneous presentation of pure tones of random frequencies and band-eliminated noises. We demonstrated that auditory evoked neural responses were larger in the constant sequencing compared to the random sequencing condition, particularly when the simultaneously presented noises contained narrow stop-bands. Conclusion The present study confirmed that population-level frequency tuning in human auditory cortex can be sharpened in a frequency-specific manner. This frequency-specific sharpening may contribute to improved auditory performance during detection and processing of relevant sound inputs characterized by specific frequency distributions in noisy environments.

  9. Mapping tonotopy in human auditory cortex

    NARCIS (Netherlands)

    van Dijk, Pim; Langers, Dave R M; Moore, BCJ; Patterson, RD; Winter, IM; Carlyon, RP; Gockel, HE

    2013-01-01

    Tonotopy is arguably the most prominent organizational principle in the auditory pathway. Nevertheless, the layout of tonotopic maps in humans is still debated. We present neuroimaging data that robustly identify multiple tonotopic maps in the bilateral auditory cortex. In contrast with some earlier

  10. Signs of noise-induced neural degeneration in humans

    DEFF Research Database (Denmark)

    Holtegaard, Pernille; Olsen, Steen Østergaard

    2015-01-01

    Animal studies have demonstrated that noise exposure causes a primary and selective loss of auditory-nerve fibres with low spontaneous firing rates. This neuronal impairment, if also present in humans, can be assumed to affect the processing of supra-threshold stimuli, especially in the presence of noise. The aims were to investigate (1) whether noise-exposed listeners with thresholds within the "normal" range perform poorer, in terms of their speech recognition threshold in noise (SRTN), and (2) if auditory brainstem responses (ABR) reveal a lower amplitude of wave I in the noise-exposed listeners. A test group of noise/music-exposed individuals and a control group were...

  11. A computer model of auditory efferent suppression: implications for the recognition of speech in noise.

    Science.gov (United States)

    Brown, Guy J; Ferry, Robert T; Meddis, Ray

    2010-02-01

    The neural mechanisms underlying the ability of human listeners to recognize speech in the presence of background noise are still imperfectly understood. However, there is mounting evidence that the medial olivocochlear system plays an important role, via efferents that exert a suppressive effect on the response of the basilar membrane. The current paper presents a computer modeling study that investigates the possible role of this activity on speech intelligibility in noise. A model of auditory efferent processing [Ferry, R. T., and Meddis, R. (2007). J. Acoust. Soc. Am. 122, 3519-3526] is used to provide acoustic features for a statistical automatic speech recognition system, thus allowing the effects of efferent activity on speech intelligibility to be quantified. Performance of the "basic" model (without efferent activity) on a connected digit recognition task is good when the speech is uncorrupted by noise but falls when noise is present. However, recognition performance is much improved when efferent activity is applied. Furthermore, optimal performance is obtained when the amount of efferent activity is proportional to the noise level. The results obtained are consistent with the suggestion that efferent suppression causes a "release from adaptation" in the auditory-nerve response to noisy speech, which enhances its intelligibility.

  12. Auditory stream segregation using amplitude modulated bandpass noise

    Directory of Open Access Journals (Sweden)

    Yingjiu Nie

    2015-08-01

    Full Text Available The purpose of this study was to investigate the roles of spectral overlap and amplitude modulation (AM rate for stream segregation for noise signals, as well as to test the build-up effect based on these two cues. Segregation ability was evaluated using an objective paradigm with listeners’ attention focused on stream segregation. Stimulus sequences consisted of two interleaved sets of bandpass noise bursts (A and B bursts. The A and B bursts differed in spectrum, AM-rate, or both. The amount of the difference between the two sets of noise bursts was varied. Long and short sequences were studied to investigate the build-up effect for segregation based on spectral and AM-rate differences. Results showed the following: 1. Stream segregation ability increased with greater spectral separation. 2. Larger AM-rate separations were associated with stronger segregation abilities. 3. Spectral separation was found to elicit the build-up effect for the range of spectral differences assessed in the current study. 4. AM-rate separation interacted with spectral separation suggesting an additive effect of spectral separation and AM-rate separation on segregation build-up. The findings suggest that, when normal-hearing listeners direct their attention toward segregation, they are able to segregate auditory streams based on reduced spectral contrast cues that vary by the amount of spectral overlap. Further, regardless of the spectral separation they were able to use AM-rate difference as a secondary/weaker cue. Based on the spectral differences, listeners can segregate auditory streams better as the listening duration is prolonged—i.e. sparse spectral cues elicit build-up segregation; however, AM-rate differences only appear to elicit build-up when in combination with spectral difference cues.
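
    The A and B bursts described above are band-pass filtered noises carrying sinusoidal amplitude modulation. A minimal Python sketch of how such a burst can be generated is shown below; it is an illustrative stimulus generator only, and the sampling rate, band edges, modulation rate and depth are assumptions, not the study's parameter values.

        import numpy as np
        from scipy.signal import butter, sosfiltfilt

        def am_bandpass_burst(fs, dur, f_lo, f_hi, am_rate, am_depth=1.0, seed=None):
            # Band-pass filtered Gaussian noise burst with sinusoidal amplitude modulation.
            rng = np.random.default_rng(seed)
            t = np.arange(int(dur * fs)) / fs
            noise = rng.standard_normal(t.size)
            sos = butter(4, [f_lo, f_hi], btype="bandpass", fs=fs, output="sos")
            carrier = sosfiltfilt(sos, noise)
            modulator = 1.0 + am_depth * np.sin(2 * np.pi * am_rate * t)
            burst = carrier * modulator
            return burst / np.max(np.abs(burst))    # peak-normalised

        # e.g. a 100-ms burst of 1-2 kHz noise with 40 Hz amplitude modulation
        fs = 44100.0
        burst_a = am_bandpass_burst(fs, 0.1, 1000.0, 2000.0, 40.0, seed=0)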

  13. Modulating human auditory processing by transcranial electrical stimulation

    Directory of Open Access Journals (Sweden)

    Kai Heimrath

    2016-03-01

    Full Text Available Transcranial electrical stimulation (tES) has become a valuable research tool for the investigation of neurophysiological processes underlying human action and cognition. In recent years, striking evidence for the neuromodulatory effects of transcranial direct current stimulation (tDCS), transcranial alternating current stimulation (tACS), and transcranial random noise stimulation (tRNS) has emerged. However, while a wealth of knowledge has been gained about tES in the motor domain and, to a lesser extent, about its ability to modulate human cognition, surprisingly little is known about its impact on perceptual processing, particularly in the auditory domain. Moreover, while only a few studies have systematically investigated the impact of auditory tES, it has already been applied in a large number of clinical trials, leading to a remarkable imbalance between basic and clinical research on auditory tES. Here, we review the state of the art of tES application in the auditory domain, focusing on the impact of neuromodulation on acoustic perception and its potential for clinical application in the treatment of auditory-related disorders.

  14. Recent advances in research on non-auditory effects of community noise.

    Science.gov (United States)

    Belojević, Goran; Paunović, Katarina

    2016-01-01

    Non-auditory effects of noise on humans have been intensively studied in the last four decades. The International Commission on Biological Effects of Noise has been following scientific advances in this field by organizing international congresses from the first one in 1968 in Washington, DC, to the 11th congress in Nara, Japan, in 2014. There is already a large scientific body of evidence on the effects of noise on annoyance, communication, performance and behavior, mental health, sleep, and cardiovascular functions including relationship with hypertension and ischemic heart disease. In the last five years new issues in this field have been tackled. Large epidemiological studies on community noise have reported its relationship with breast cancer, stroke, type 2 diabetes, and obesity. It seems that noise-induced sleep disturbance may be one of the mediating factors in these effects. Given a large public health importance of the above-mentioned diseases, future studies should more thoroughly address the mechanisms underlying the reported association with community noise exposure. Keywords: noise; cancer; stroke; diabetes mellitus type 2; obesity

  15. Auditory training of speech recognition with interrupted and continuous noise maskers by children with hearing impairment.

    Science.gov (United States)

    Sullivan, Jessica R; Thibodeau, Linda M; Assmann, Peter F

    2013-01-01

    Previous studies have indicated that individuals with normal hearing (NH) experience a perceptual advantage for speech recognition in interrupted noise compared to continuous noise. In contrast, adults with hearing impairment (HI) and younger children with NH receive a minimal benefit. The objective of this investigation was to assess whether auditory training in interrupted noise would improve speech recognition in noise for children with HI and perhaps enhance their utilization of glimpsing skills. A partially-repeated measures design was used to evaluate the effectiveness of seven 1-h sessions of auditory training in interrupted and continuous noise. Speech recognition scores in interrupted and continuous noise were obtained from pre-, post-, and 3 months post-training from 24 children with moderate-to-severe hearing loss. Children who participated in auditory training in interrupted noise demonstrated a significantly greater improvement in speech recognition compared to those who trained in continuous noise. Those who trained in interrupted noise demonstrated similar improvements in both noise conditions while those who trained in continuous noise only showed modest improvements in the interrupted noise condition. This study presents direct evidence that auditory training in interrupted noise can be beneficial in improving speech recognition in noise for children with HI.

  16. Non-Auditory Health Hazard Vulnerability to Noise Pollution: Assessing Public Awareness Gap

    Directory of Open Access Journals (Sweden)

    Tanjir Ahmed

    2015-04-01

    Full Text Available In Dhaka, one of the top ten megacities in Asia and the capital of Bangladesh, the problem of noise-related pollution is prevalent. In almost every part of Dhaka city, the noise limits established by the W.H.O. are regularly exceeded, causing adverse health effects in its inhabitants. This sort of pollution is more acute in the central portion of Dhaka than in its periphery; therefore, if greater Dhaka is taken as the study area, the central area's problem may be underestimated. This study was prepared to determine the actual condition of the auditory and non-auditory health effects of noise among roadside people, and to provide recommendations to ameliorate them and consequently reduce noise levels in Dhaka city, as an effort to make Dhaka a better place to live. The results show that both the auditory and non-auditory effects of noise are at an alarming level in all zones of the city.

  17. The digits-in-noise test: assessing auditory speech recognition abilities in noise.

    Science.gov (United States)

    Smits, Cas; Theo Goverts, S; Festen, Joost M

    2013-03-01

    A speech-in-noise test which uses digit triplets in steady-state speech noise was developed. The test measures primarily the auditory, or bottom-up, speech recognition abilities in noise. Digit triplets were formed by concatenating single digits spoken by a male speaker. Level corrections were made to individual digits to create a set of homogeneous digit triplets with steep speech recognition functions. The test measures the speech reception threshold (SRT) in long-term average speech-spectrum noise via a 1-up, 1-down adaptive procedure with a measurement error of 0.7 dB. One training list is needed for naive listeners. No further learning effects were observed in 24 subsequent SRT measurements. The test was validated by comparing results on the test with results on the standard sentences-in-noise test. To avoid the confounding of hearing loss, age, and linguistic skills, these measurements were performed in normal-hearing subjects with simulated hearing loss. The signals were spectrally smeared and/or low-pass filtered at varying cutoff frequencies. After correction for measurement error the correlation coefficient between SRTs measured with both tests equaled 0.96. Finally, the feasibility of the test was approved in a study where reference SRT values were gathered in a representative set of 1386 listeners over 60 years of age.
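
    The 1-up, 1-down adaptive procedure described above converges on the SNR at which 50% of the digit triplets are repeated correctly, i.e. the speech reception threshold. The sketch below illustrates that logic with a simulated listener; the step size, number of trials, averaging rule and the simulated psychometric function are illustrative assumptions, not the published test parameters.

        import numpy as np

        rng = np.random.default_rng(3)

        def simulated_listener(snr_db, srt_true=-8.0, slope=1.0):
            # Hypothetical listener: probability correct is a logistic function of SNR.
            p_correct = 1.0 / (1.0 + np.exp(-slope * (snr_db - srt_true)))
            return rng.random() < p_correct

        def measure_srt(trial_correct, n_trials=24, start_snr=0.0, step_db=2.0):
            # 1-up, 1-down track: SNR decreases after a correct response and increases
            # after an incorrect one, converging on the 50%-correct point (the SRT).
            snr, track = start_snr, []
            for _ in range(n_trials):
                track.append(snr)
                snr += -step_db if trial_correct(snr) else step_db
            return float(np.mean(track[4:]))        # discard early trials, average the rest

        print(f"estimated SRT = {measure_srt(simulated_listener):.1f} dB SNR")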

  18. Noise-induced cell death in the mouse medial geniculate body and primary auditory cortex.

    Science.gov (United States)

    Basta, Dietmar; Tzschentke, Barbara; Ernst, Arne

    Noise-induced effects within the inner ear have been well investigated for several years. However, this peripheral damage cannot fully explain the audiological symptoms in noise-induced hearing loss (NIHL), e.g. tinnitus, recruitment, reduced speech intelligibility, hyperacusis. There are few reports on central noise effects. Noise can induce an apoptosis of neuronal tissue within the lower auditory pathway. Higher auditory structures (e.g. medial geniculate body, auditory cortex) are characterized by metabolic changes after noise exposure. However, little is known about the microstructural changes of the higher auditory pathway after noise exposure. The present paper was therefore aimed at investigating the cell density in the medial geniculate body (MGB) and the primary auditory cortex (AI) after noise exposure. Normal hearing mice were exposed to noise (10 kHz center frequency at 115 dB SPL for 3 h) at the age of 21 days under anesthesia (Ketamin/Rompun, 10:1). After 1 week, auditory brainstem response recordings (ABR) were performed in noise exposed and normal hearing animals. After fixation, the brain was microdissected and stained (Kluever-Barrera). The cell density in the MGB subdivisions and the AI were determined by counting the cells within a grid. Noise-exposed animals showed a significant ABR threshold shift over the whole frequency range. Cell density was significantly reduced in all subdivisions of the MGB and in layers IV-VI of AI. The present findings demonstrate a significant noise-induced change of the neuronal cytoarchitecture in central key areas of auditory processing. These changes could contribute to the complex psychoacoustic symptoms after NIHL.

  19. Functional properties of human auditory cortical fields

    Directory of Open Access Journals (Sweden)

    David L Woods

    2010-12-01

    Full Text Available While auditory cortex in non-human primates has been subdivided into multiple functionally-specialized auditory cortical fields (ACFs), the boundaries and functional specialization of human ACFs have not been defined. In the current study, we evaluated whether a widely accepted primate model of auditory cortex could explain regional tuning properties of fMRI activations on the cortical surface to attended and nonattended tones of different frequency, location, and intensity. The limits of auditory cortex were defined by voxels that showed significant activations to nonattended sounds. Three centrally-located fields with mirror-symmetric tonotopic organization were identified and assigned to the three core fields of the primate model, while surrounding activations were assigned to belt fields following procedures similar to those used in macaque fMRI studies. The functional properties of core, medial belt, and lateral belt field groups were then analyzed. Field groups were distinguished by tonotopic organization, frequency selectivity, intensity sensitivity, contralaterality, binaural enhancement, attentional modulation, and hemispheric asymmetry. In general, core fields showed greater sensitivity to sound properties than did belt fields, while belt fields showed greater attentional modulation than core fields. Significant distinctions in intensity sensitivity and contralaterality were seen between adjacent core fields A1 and R, while multiple differences in tuning properties were evident at boundaries between adjacent core and belt fields. The reliable differences in functional properties between fields and field groups suggest that the basic primate pattern of auditory cortex organization is preserved in humans. A comparison of the sizes of functionally-defined ACFs in humans and macaques reveals a significant relative expansion in human lateral belt fields implicated in the processing of speech.

  20. Temporal pattern of acoustic imaging noise asymmetrically modulates activation in the auditory cortex.

    Science.gov (United States)

    Ranaweera, Ruwan D; Kwon, Minseok; Hu, Shuowen; Tamer, Gregory G; Luh, Wen-Ming; Talavage, Thomas M

    2016-01-01

    This study investigated the hemisphere-specific effects of the temporal pattern of imaging related acoustic noise on auditory cortex activation. Hemodynamic responses (HDRs) to five temporal patterns of imaging noise corresponding to noise generated by unique combinations of imaging volume and effective repetition time (TR), were obtained using a stroboscopic event-related paradigm with extra-long (≥27.5 s) TR to minimize inter-acquisition effects. In addition to confirmation that fMRI responses in auditory cortex do not behave in a linear manner, temporal patterns of imaging noise were found to modulate both the shape and spatial extent of hemodynamic responses, with classically non-auditory areas exhibiting responses to longer duration noise conditions. Hemispheric analysis revealed the right primary auditory cortex to be more sensitive than the left to the presence of imaging related acoustic noise. Right primary auditory cortex responses were significantly larger during all the conditions. This asymmetry of response to imaging related acoustic noise could lead to different baseline activation levels during acquisition schemes using short TR, inducing an observed asymmetry in the responses to an intended acoustic stimulus through limitations of dynamic range, rather than due to differences in neuronal processing of the stimulus. These results emphasize the importance of accounting for the temporal pattern of the acoustic noise when comparing findings across different fMRI studies, especially those involving acoustic stimulation.

  1. Tinnitus alters resting state functional connectivity (RSFC) in human auditory and non-auditory brain regions as measured by functional near-infrared spectroscopy (fNIRS).

    Science.gov (United States)

    San Juan, Juan; Hu, Xiao-Su; Issa, Mohamad; Bisconti, Silvia; Kovelman, Ioulia; Kileny, Paul; Basura, Gregory

    2017-01-01

    Tinnitus, or phantom sound perception, leads to increased spontaneous neural firing rates and enhanced synchrony in central auditory circuits in animal models. These putative physiologic correlates of tinnitus to date have not been well translated in the brain of the human tinnitus sufferer. Using functional near-infrared spectroscopy (fNIRS) we recently showed that tinnitus in humans leads to maintained hemodynamic activity in auditory and adjacent, non-auditory cortices. Here we used fNIRS technology to investigate changes in resting state functional connectivity between human auditory and non-auditory brain regions in normal-hearing participants with bilateral subjective tinnitus and in controls before and after auditory stimulation. Hemodynamic activity was monitored over the region of interest (primary auditory cortex) and non-region of interest (adjacent non-auditory cortices) and functional brain connectivity was measured during a 60-second baseline/period of silence before and after a passive auditory challenge consisting of alternating pure tones (750 and 8000 Hz), broadband noise and silence. Functional connectivity was measured between all channel-pairs. Prior to stimulation, connectivity of the region of interest to the temporal and fronto-temporal region was decreased in tinnitus participants compared to controls. Overall, connectivity in tinnitus was differentially altered as compared to controls following sound stimulation. Enhanced connectivity was seen in both auditory and non-auditory regions in the tinnitus brain, while controls showed a decrease in connectivity following sound stimulation. In tinnitus, the strength of connectivity was increased between auditory cortex and fronto-temporal, fronto-parietal, temporal, occipito-temporal and occipital cortices. Together these data suggest that central auditory and non-auditory brain regions are modified in tinnitus and that resting functional connectivity measured by fNIRS technology may contribute to conscious phantom sound perception.
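
    An illustrative note on the connectivity measure described above: computing functional connectivity "between all channel-pairs" amounts to correlating every pair of channel time series over the baseline window. The sketch below does this on synthetic data; the channel count, sampling rate, and duration are invented placeholders, not the study's recordings or pipeline.

```python
# Minimal sketch (synthetic data, not the study's pipeline): resting-state
# functional connectivity as the Pearson correlation between every pair of
# fNIRS channel time series recorded during a baseline period of silence.
import numpy as np

rng = np.random.default_rng(0)
n_channels, fs, seconds = 8, 10, 60                              # hypothetical montage and sampling rate
hbo_baseline = rng.standard_normal((n_channels, fs * seconds))   # stand-in for HbO traces

conn = np.corrcoef(hbo_baseline)                                 # n_channels x n_channels matrix
print(np.round(conn, 2))

# In a pre/post design like the one described, the same matrix would be computed
# for the baselines before and after sound stimulation and the two compared.
```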

  2. Lateralization of Music Processing with Noises in the Auditory Cortex: An fNIRS Study

    OpenAIRE

    Hendrik eSantosa; Melissa Jiyoun Hong; Keum-Shik eHong

    2014-01-01

    The present study aims to determine the effects of background noise on the hemispheric lateralization in music processing by exposing fourteen subjects to four different auditory environments: music segments only, noise segments only, music+noise segments, and the entire music interfered by noise segments. The hemodynamic responses in both hemispheres caused by the perception of music in 10 different conditions were measured using functional near-infrared spectroscopy. As a feature to distingui...

  3. Lateralization of music processing with noises in the auditory cortex: an fNIRS study

    OpenAIRE

    Santosa, Hendrik; Hong, Melissa Jiyoun; Hong, Keum-Shik

    2014-01-01

    The present study aims to determine the effects of background noise on the hemispheric lateralization in music processing by exposing 14 subjects to four different auditory environments: music segments only, noise segments only, music + noise segments, and the entire music interfered by noise segments. The hemodynamic responses in both hemispheres caused by the perception of music in 10 different conditions were measured using functional near-infrared spectroscopy. As a feature to distinguish s...

  4. Auditory and Non-Auditory Contributions for Unaided Speech Recognition in Noise as a Function of Hearing Aid Use.

    Science.gov (United States)

    Gieseler, Anja; Tahden, Maike A S; Thiel, Christiane M; Wagener, Kirsten C; Meis, Markus; Colonius, Hans

    2017-01-01

    Differences in understanding speech in noise among hearing-impaired individuals cannot be explained entirely by hearing thresholds alone, suggesting the contribution of other factors beyond standard auditory ones as derived from the audiogram. This paper reports two analyses addressing individual differences in the explanation of unaided speech-in-noise performance among n = 438 elderly hearing-impaired listeners (mean age = 71.1 ± 5.8 years). The main analysis was designed to identify clinically relevant auditory and non-auditory measures for speech-in-noise prediction using auditory (audiogram, categorical loudness scaling) and cognitive tests (verbal-intelligence test, screening test of dementia), as well as questionnaires assessing various self-reported measures (health status, socio-economic status, and subjective hearing problems). Using stepwise linear regression analysis, 62% of the variance in unaided speech-in-noise performance was explained, with measures Pure-tone average (PTA), Age, and Verbal intelligence emerging as the three most important predictors. In the complementary analysis, those individuals with the same hearing loss profile were separated into hearing aid users (HAU) and non-users (NU), and were then compared regarding potential differences in the test measures and in explaining unaided speech-in-noise recognition. The groupwise comparisons revealed significant differences in auditory measures and self-reported subjective hearing problems, while no differences in the cognitive domain were found. Furthermore, groupwise regression analyses revealed that Verbal intelligence had a predictive value in both groups, whereas Age and PTA emerged as significant only in the NU group.
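
    The stepwise regression mentioned above can be sketched in a few lines of forward selection. The data below are synthetic and the stopping rule (adjusted R-squared) is an illustrative choice; only the predictor names are taken from the abstract.

```python
# Forward stepwise linear regression sketch (synthetic data, illustrative only).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 438
X = {"PTA": rng.normal(40, 15, n),
     "Age": rng.normal(71, 6, n),
     "VerbalIntelligence": rng.normal(100, 15, n)}
y = 0.6 * X["PTA"] + 0.2 * X["Age"] - 0.1 * X["VerbalIntelligence"] + rng.normal(0, 5, n)

selected, remaining, best_adj = [], list(X), 0.0
while remaining:
    # score each candidate predictor by the adjusted R^2 of the enlarged model
    scores = {}
    for cand in remaining:
        cols = np.column_stack([X[k] for k in selected + [cand]])
        scores[cand] = sm.OLS(y, sm.add_constant(cols)).fit().rsquared_adj
    best = max(scores, key=scores.get)
    if scores[best] <= best_adj:        # stop when adjusted R^2 no longer improves
        break
    selected.append(best)
    remaining.remove(best)
    best_adj = scores[best]

print("selection order:", selected, "| adjusted R^2:", round(best_adj, 2))
```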

  5. Auditory-neurophysiological responses to speech during early childhood: Effects of background noise.

    Science.gov (United States)

    White-Schwoch, Travis; Davies, Evan C; Thompson, Elaine C; Woodruff Carr, Kali; Nicol, Trent; Bradlow, Ann R; Kraus, Nina

    2015-10-01

    Early childhood is a critical period of auditory learning, during which children are constantly mapping sounds to meaning. But this auditory learning rarely occurs in ideal listening conditions; children are forced to listen against a relentless din. This background noise degrades the neural coding of these critical sounds, in turn interfering with auditory learning. Despite the importance of robust and reliable auditory processing during early childhood, little is known about the neurophysiology underlying speech processing in children so young. To better understand the physiological constraints these adverse listening scenarios impose on speech sound coding during early childhood, auditory-neurophysiological responses were elicited to a consonant-vowel syllable in quiet and background noise in a cohort of typically-developing preschoolers (ages 3-5 yr). Overall, responses were degraded in noise: they were smaller, less stable across trials, slower, and there was poorer coding of spectral content and the temporal envelope. These effects were exacerbated in response to the consonant transition relative to the vowel, suggesting that the neural coding of spectrotemporally-dynamic speech features is more tenuous in noise than the coding of static features, even in children this young. Neural coding of speech temporal fine structure, however, was more resilient to the addition of background noise than coding of temporal envelope information. Taken together, these results demonstrate that noise places a neurophysiological constraint on speech processing during early childhood by causing a breakdown in neural processing of speech acoustics. These results may explain why some listeners have inordinate difficulties understanding speech in noise. Speech-elicited auditory-neurophysiological responses offer objective insight into listening skills during early childhood by reflecting the integrity of neural coding in quiet and noise; this paper documents typical response

  6. Congruent Visual Speech Enhances Cortical Entrainment to Continuous Auditory Speech in Noise-Free Conditions.

    Science.gov (United States)

    Crosse, Michael J; Butler, John S; Lalor, Edmund C

    2015-10-21

    Congruent audiovisual speech enhances our ability to comprehend a speaker, even in noise-free conditions. When incongruent auditory and visual information is presented concurrently, it can hinder a listener's perception and even cause him or her to perceive information that was not presented in either modality. Efforts to investigate the neural basis of these effects have often focused on the special case of discrete audiovisual syllables that are spatially and temporally congruent, with less work done on the case of natural, continuous speech. Recent electrophysiological studies have demonstrated that cortical response measures to continuous auditory speech can be easily obtained using multivariate analysis methods. Here, we apply such methods to the case of audiovisual speech and, importantly, present a novel framework for indexing multisensory integration in the context of continuous speech. Specifically, we examine how the temporal and contextual congruency of ongoing audiovisual speech affects the cortical encoding of the speech envelope in humans using electroencephalography. We demonstrate that the cortical representation of the speech envelope is enhanced by the presentation of congruent audiovisual speech in noise-free conditions. Furthermore, we show that this is likely attributable to the contribution of neural generators that are not particularly active during unimodal stimulation and that it is most prominent at the temporal scale corresponding to syllabic rate (2-6 Hz). Finally, our data suggest that neural entrainment to the speech envelope is inhibited when the auditory and visual streams are incongruent both temporally and contextually. Seeing a speaker's face as he or she talks can greatly help in understanding what the speaker is saying. This is because the speaker's facial movements relay information about what the speaker is saying, but also, importantly, when the speaker is saying it. Studying how the brain uses this timing relationship to
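
    One common multivariate approach to indexing cortical tracking of the speech envelope, in the spirit of the response measures mentioned above, is a linear temporal response function fit by ridge regression. The sketch below is generic and runs on synthetic signals; the sampling rate, lag range, and regularization value are illustrative assumptions rather than the authors' settings.

```python
# Toy sketch of a ridge-regularized linear mapping between a speech envelope and
# an EEG channel, one common way to quantify cortical envelope tracking.
import numpy as np

def lagged_design(env, max_lag):
    """Stack time-shifted copies of the envelope (lags 0..max_lag samples)."""
    n = len(env)
    X = np.zeros((n, max_lag + 1))
    for k in range(max_lag + 1):
        X[k:, k] = env[:n - k]
    return X

fs, dur = 64, 60                                   # 64 Hz, 60 s of synthetic data
rng = np.random.default_rng(2)
env = np.abs(rng.standard_normal(fs * dur))        # stand-in for a speech envelope
true_trf = np.hanning(17)                          # a smooth "response" kernel
eeg = np.convolve(env, true_trf)[:fs * dur] + rng.standard_normal(fs * dur)

X = lagged_design(env, max_lag=16)
lam = 10.0                                         # ridge parameter (arbitrary here)
trf = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ eeg)
pred = X @ trf
r = np.corrcoef(pred, eeg)[0, 1]                   # envelope-tracking score
print("prediction accuracy r =", round(r, 2))
```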

  7. Review of weapon noise measurement and damage risk criteria: considerations for auditory protection and performance.

    Science.gov (United States)

    Nakashima, Ann; Farinaccio, Rocco

    2015-04-01

    Noise-induced hearing loss resulting from weapon noise exposure has been studied for decades. A summary of recent work in weapon noise signal analysis, current knowledge of hearing damage risk criteria, and auditory performance in impulse noise is presented. Most of the currently used damage risk criteria are based on data that cannot be replicated or verified. There is a need to address the effects of combined noise exposures, from similar or different weapons and continuous background noise, in future noise exposure regulations. Advancements in hearing protection technology have expanded the options available to soldiers. Individual selection of hearing protection devices that are best suited to the type of exposure, the auditory task requirements, and hearing status of the user could help to facilitate their use. However, hearing protection devices affect auditory performance, which in turn affects situational awareness in the field. This includes communication capability and the localization and identification of threats. Laboratory training using high-fidelity weapon noise recordings has the potential to improve the auditory performance of soldiers in the field, providing a low-cost tool to enhance readiness for combat.

  8. Auditory perception of a human walker.

    Science.gov (United States)

    Cottrell, David; Campbell, Megan E J

    2014-01-01

    When one hears footsteps in the hall, one is able to instantly recognise it as a person: this is an everyday example of auditory biological motion perception. Despite the familiarity of this experience, research into this phenomenon is in its infancy compared with visual biological motion perception. Here, two experiments explored sensitivity to, and recognition of, auditory stimuli of biological and nonbiological origin. We hypothesised that the cadence of a walker gives rise to a temporal pattern of impact sounds that facilitates the recognition of human motion from auditory stimuli alone. First, a series of detection tasks compared sensitivity to three carefully matched impact sounds: footsteps, a ball bouncing, and drumbeats. Unexpectedly, participants were no more sensitive to footsteps than to impact sounds of nonbiological origin. In the second experiment participants made discriminations between pairs of the same stimuli, in a series of recognition tasks in which the temporal pattern of impact sounds was manipulated to be either that of a walker or the pattern more typical of the source event (a ball bouncing or a drumbeat). Under these conditions, there was evidence that both temporal and nontemporal cues were important in recognising these stimuli. It is proposed that the interval between footsteps, which reflects a walker's cadence, is a cue for the recognition of the sounds of a human walking.

  9. High levels of sound pressure: acoustic reflex thresholds and auditory complaints of workers with noise exposure

    Directory of Open Access Journals (Sweden)

    Alexandre Scalli Mathias Duarte

    2015-08-01

    Full Text Available INTRODUCTION: The clinical evaluation of subjects with occupational noise exposure has been difficult due to the discrepancy between auditory complaints and auditory test results. This study aimed to evaluate the contralateral acoustic reflex thresholds of workers exposed to high levels of noise, and to compare these results to the subjects' auditory complaints. METHODS: This clinical retrospective study evaluated 364 workers between 1998 and 2005; their contralateral acoustic reflexes were compared to auditory complaints, age, and noise exposure time by chi-squared, Fisher's, and Spearman's tests. RESULTS: The workers' age ranged from 18 to 50 years (mean = 39.6), and noise exposure time from one to 38 years (mean = 17.3). We found that 15.1% (55) of the workers had bilateral hearing loss, 38.5% (140) had bilateral tinnitus, 52.8% (192) had abnormal sensitivity to loud sounds, and 47.2% (172) had speech recognition impairment. The variables hearing loss, speech recognition impairment, tinnitus, age group, and noise exposure time did not show a relationship with acoustic reflex thresholds; however, all complaints demonstrated a statistically significant relationship with Metz recruitment at 3000 and 4000 Hz bilaterally. CONCLUSION: There was no significant relationship between auditory complaints and acoustic reflexes.
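
    As a minimal illustration of the association tests named in the methods above (chi-squared, Fisher's exact, Spearman), the sketch below runs them on made-up contingency counts and value pairs; none of the numbers are from the study.

```python
# Association-test sketch with made-up data (not the study's values).
from scipy.stats import chi2_contingency, fisher_exact, spearmanr

# rows: complaint present / absent; columns: abnormal / normal acoustic reflex
table = [[40, 100],
         [30, 194]]

chi2, p_chi, dof, expected = chi2_contingency(table)
odds_ratio, p_fisher = fisher_exact(table)

# rank correlation between noise exposure time (years) and reflex threshold (dB HL)
rho, p_rho = spearmanr([1, 5, 10, 20, 38], [85, 90, 95, 100, 105])

print(f"chi-squared p = {p_chi:.3f}, Fisher p = {p_fisher:.3f}, Spearman rho = {rho:.2f}")
```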

  10. A review of the history, development and application of auditory weighting functions in humans and marine mammals.

    Science.gov (United States)

    Houser, Dorian S; Yost, William; Burkard, Robert; Finneran, James J; Reichmuth, Colleen; Mulsow, Jason

    2017-03-01

    This document reviews the history, development, and use of auditory weighting functions for noise impact assessment in humans and marine mammals. Advances from the modern era of electroacoustics, psychophysical studies of loudness, and other related hearing studies are reviewed with respect to the development and application of human auditory weighting functions, particularly A-weighting. The use of auditory weighting functions to assess the effects of environmental noise on humans, such as hearing damage-risk criteria, is presented, as well as lower-level effects such as annoyance and masking. The article also reviews marine mammal auditory weighting functions, the development of which has been fundamentally directed by the objective of predicting and preventing noise-induced hearing loss. Compared to the development of human auditory weighting functions, the development of marine mammal auditory weighting functions has faced additional challenges, including a large number of species that must be considered, a lack of audiometric information on most species, and small sample sizes for nearly all species for which auditory data are available. The review concludes with research recommendations to address data gaps and assumptions underlying marine mammal auditory weighting function design and application.
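
    For reference, the A-weighting curve highlighted above has a closed analytic form; the sketch below implements the standard expression (constants as standardized in IEC 61672), purely as an illustration of what an auditory weighting function looks like in practice, not as anything specific to this review.

```python
# Standard A-weighting curve (IEC 61672 analytic form).
import math

def a_weighting_db(f):
    """Relative response of the A-weighting filter at frequency f (Hz), in dB."""
    f2 = f * f
    ra = (12194.0**2 * f2**2) / (
        (f2 + 20.6**2)
        * math.sqrt((f2 + 107.7**2) * (f2 + 737.9**2))
        * (f2 + 12194.0**2)
    )
    return 20.0 * math.log10(ra) + 2.00   # +2.00 dB normalizes the curve to ~0 dB at 1 kHz

for freq in (63, 125, 250, 500, 1000, 2000, 4000, 8000):
    print(freq, "Hz:", round(a_weighting_db(freq), 1), "dB")
```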

  11. Computerized classification of auditory trauma: Results of an investigation on screening employees exposed to noise

    Science.gov (United States)

    Klockhoff, I.

    1977-01-01

    An automatic, computerized method was developed to classify results from a screening of employees exposed to noise, resulting in a fast and effective method of identifying and taking measures against auditory trauma. This technique also satisfies the urgent need for quick discovery of cases which deserve compensation in accordance with the Law on Industrial Accident Insurance. Unfortunately, use of this method increases the burden on the already overloaded investigatory resources of the auditory health care system.

  12. Auditory coding of human movement kinematics.

    Science.gov (United States)

    Vinken, Pia M; Kröger, Daniela; Fehse, Ursula; Schmitz, Gerd; Brock, Heike; Effenberg, Alfred O

    2013-01-01

    Although visual perception is dominant in motor perception, control, and learning, auditory information can enhance and modulate perceptual as well as motor processes in a multifaceted manner. During the last decades, new methods of auditory augmentation have been developed, with movement sonification as one of the most recent approaches, expanding auditory movement information also to usually mute phases of movement. Despite general evidence on the effectiveness of movement sonification in different fields of applied research, there is nearly no empirical proof of how sonification of gross motor human movement should be configured to achieve information-rich sound sequences. Such a lack of empirical proof exists for (a) the selection of suitable movement features, (b) effective kinetic-acoustical mapping patterns, and (c) the number of regarded dimensions of sonification. In this study we explore the informational content of artificial acoustical kinematics in terms of a kinematic movement sonification using an intermodal discrimination paradigm. In a repeated-measures design, we analysed discrimination rates of six everyday upper limb actions to evaluate the effectiveness of seven different kinds of kinematic-acoustical mappings as well as short-term learning effects. The kinematics of the upper limb actions were calculated based on inertial motion sensor data and transformed into seven different sonifications. Sound sequences were randomly presented to participants, and discrimination rates as well as confidence of choice were analysed. Data indicate an instantaneous comprehensibility of the artificial movement acoustics as well as short-term learning effects. No differences between different dimensional encodings became evident, thus indicating a high efficiency of intermodal pattern discrimination for the acoustically coded velocity distribution of the actions. Taken together, movement information related to continuous kinematic parameters can be
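
    As a toy illustration of a kinematic-acoustical mapping of the kind discussed above, the sketch below maps a limb-velocity profile to instantaneous pitch and renders it as a sine tone. The velocity profile, pitch range, and the choice of a single mapped dimension are arbitrary illustrative assumptions, not the mappings evaluated in the study.

```python
# Minimal parameter-mapping sonification sketch: velocity -> pitch -> sine tone.
import numpy as np

fs = 44100
t = np.linspace(0, 2.0, int(fs * 2.0), endpoint=False)
velocity = np.sin(np.pi * t / 2.0) ** 2            # stand-in for a reach-and-return movement
f_lo, f_hi = 220.0, 880.0                          # arbitrary pitch range (Hz)
freq = f_lo + (f_hi - f_lo) * velocity / velocity.max()
phase = 2 * np.pi * np.cumsum(freq) / fs           # integrate frequency to obtain phase
tone = 0.3 * np.sin(phase)                         # audio signal encoding the kinematics

# optional: write the result to disk for listening
# from scipy.io import wavfile; wavfile.write("sonified.wav", fs, tone.astype(np.float32))
```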

  13. Effects of broadband noise on cortical evoked auditory responses at different loudness levels in young adults.

    Science.gov (United States)

    Sharma, Mridula; Purdy, Suzanne C; Munro, Kevin J; Sawaya, Kathleen; Peter, Varghese

    2014-03-26

    Young adults with no history of hearing concerns were tested to investigate their /da/-evoked cortical auditory evoked potentials (P1-N1-P2) recorded from 32 scalp electrodes in the presence and absence of noise at three different loudness levels (soft, comfortable, and loud), at a fixed signal-to-noise ratio (+3 dB). P1 peak latency significantly increased at soft and loud levels, and N1 and P2 latencies increased at all three levels in the presence of noise, compared with the quiet condition. P1 amplitude was significantly larger in quiet than in noise conditions at the loudest level. N1 amplitude was larger in quiet than in noise for the soft level only. P2 amplitude was reduced in the presence of noise to a similar degree at all loudness levels. The differential effects of noise on P1, N1, and P2 suggest differences in auditory processes underlying these peaks. The combination of level and signal-to-noise ratio should be considered when using cortical auditory evoked potentials as an electrophysiological indicator of degraded speech processing.

  14. The reduced cochlear output and the failure to adapt the central auditory response causes tinnitus in noise exposed rats.

    Directory of Open Access Journals (Sweden)

    Lukas Rüttiger

    Full Text Available Tinnitus is proposed to be caused by decreased central input from the cochlea, followed by increased spontaneous and evoked subcortical activity that is interpreted as compensation for increased responsiveness of central auditory circuits. We compared equally noise-exposed rats separated into groups with and without tinnitus for differences in brain responsiveness relative to the degree of deafferentation in the periphery. We analyzed (1) the number of CtBP2/RIBEYE-positive particles in ribbon synapses of the inner hair cell (IHC) as a measure of deafferentation; (2) the fine structure of the amplitudes of auditory brainstem responses (ABR), reflecting differences in sound responses following decreased auditory nerve activity; and (3) the expression of the activity-regulated gene Arc in the auditory cortex (AC) to identify long-lasting central activity following sensory deprivation. Following moderate trauma, 30% of animals exhibited tinnitus, similar to the tinnitus prevalence among hearing-impaired humans. Although both tinnitus and no-tinnitus animals exhibited a reduced ABR wave I amplitude (generated by primary auditory nerve fibers), IHC ribbon loss and high-frequency hearing impairment were more severe in tinnitus animals, associated with significantly reduced amplitudes of the more centrally generated waves IV and V and less intense staining of Arc mRNA and protein in the AC. The observed severe IHC ribbon loss, the minimal restoration of ABR wave size, and the reduced cortical Arc expression suggest that tinnitus is linked to a failure to adapt central circuits to reduced cochlear input.

  15. You can't stop the music: reduced auditory alpha power and coupling between auditory and memory regions facilitate the illusory perception of music during noise.

    Science.gov (United States)

    Müller, Nadia; Keil, Julian; Obleser, Jonas; Schulz, Hannah; Grunwald, Thomas; Bernays, René-Ludwig; Huppertz, Hans-Jürgen; Weisz, Nathan

    2013-10-01

    Our brain has the capacity of providing an experience of hearing even in the absence of auditory stimulation. This can be seen as illusory conscious perception. While increasing evidence postulates that conscious perception requires specific brain states that systematically relate to specific patterns of oscillatory activity, the relationship between auditory illusions and oscillatory activity remains mostly unexplained. To investigate this we recorded brain activity with magnetoencephalography and collected intracranial data from epilepsy patients while participants listened to familiar as well as unknown music that was partly replaced by sections of pink noise. We hypothesized that participants have a stronger experience of hearing music throughout noise when the noise sections are embedded in familiar compared to unfamiliar music. This was supported by the behavioral results showing that participants rated the perception of music during noise as stronger when noise was presented in a familiar context. Time-frequency data show that the illusory perception of music is associated with a decrease in auditory alpha power pointing to increased auditory cortex excitability. Furthermore, the right auditory cortex is concurrently synchronized with the medial temporal lobe, putatively mediating memory aspects associated with the music illusion. We thus assume that neuronal activity in the highly excitable auditory cortex is shaped through extensive communication between the auditory cortex and the medial temporal lobe, thereby generating the illusion of hearing music during noise. Copyright © 2013 Elsevier Inc. All rights reserved.
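
    The alpha-power effect reported above comes from time-frequency analysis of MEG data. As a generic illustration of an alpha-band (8-12 Hz) power estimate, the sketch below applies Welch's method to a synthetic signal; it is not the authors' analysis pipeline.

```python
# Alpha-band (8-12 Hz) power estimate on a synthetic signal via Welch's method.
import numpy as np
from scipy.signal import welch

fs = 250
rng = np.random.default_rng(3)
t = np.arange(0, 10, 1 / fs)
signal = np.sin(2 * np.pi * 10 * t) + rng.standard_normal(t.size)   # 10 Hz rhythm + noise

freqs, psd = welch(signal, fs=fs, nperseg=fs * 2)
alpha = (freqs >= 8) & (freqs <= 12)
alpha_power = np.trapz(psd[alpha], freqs[alpha])   # integrate the PSD over the alpha band
print(round(alpha_power, 3))
```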

  16. Predicting the threshold of pulse-train electrical stimuli using a stochastic auditory nerve model: the effects of stimulus noise.

    Science.gov (United States)

    Xu, Yifang; Collins, Leslie M

    2004-04-01

    The incorporation of low levels of noise into an electrical stimulus has been shown to improve auditory thresholds in some human subjects (Zeng et al., 2000). In this paper, thresholds for noise-modulated pulse-train stimuli are predicted utilizing a stochastic neural-behavioral model of ensemble fiber responses to bi-phasic stimuli. The neural refractory effect is described using a Markov model for a noise-free pulse-train stimulus and a closed-form solution for the steady-state neural response is provided. For noise-modulated pulse-train stimuli, a recursive method using the conditional probability is utilized to track the neural responses to each successive pulse. A neural spike count rule has been presented for both threshold and intensity discrimination under the assumption that auditory perception occurs via integration over a relatively long time period (Bruce et al., 1999). An alternative approach originates from the hypothesis of the multilook model (Viemeister and Wakefield, 1991), which argues that auditory perception is based on several shorter time integrations and may suggest an NofM model for prediction of pulse-train threshold. This motivates analyzing the neural response to each individual pulse within a pulse train, which is considered to be the brief look. A logarithmic rule is hypothesized for pulse-train threshold. Predictions from the multilook model are shown to match trends in psychophysical data for noise-free stimuli that are not always matched by the long-time integration rule. Theoretical predictions indicate that threshold decreases as noise variance increases. Theoretical models of the neural response to pulse-train stimuli not only reduce calculational overhead but also facilitate utilization of signal detection theory and are easily extended to multichannel psychophysical tasks.
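
    The modeling framework described above (stochastic ensemble fiber responses combined with a spike-count decision rule) can be caricatured with a toy Monte Carlo sketch. The sketch below omits the Markov refractoriness model and every parameter value is invented for illustration; it only shows how a spike-count rule turns ensemble responses to a noise-modulated pulse train into a detection probability.

```python
# Toy Monte Carlo sketch: an ensemble of fibers with stochastic thresholds
# responds to a noise-modulated pulse train; "detection" uses a spike-count rule.
# Not the authors' model (no refractoriness); all parameters are illustrative.
import numpy as np

rng = np.random.default_rng(4)
n_fibers, n_pulses, n_trials = 300, 20, 200
mean_thr, thr_sd = 1.0, 0.05            # relative fiber threshold and its spread
criterion = n_fibers * n_pulses // 2    # spikes needed to report "detected"

def detection_prob(level, noise_sd):
    """Probability that the total spike count over the train exceeds criterion."""
    hits = 0
    for _ in range(n_trials):
        amps = level + noise_sd * rng.standard_normal(n_pulses)            # noisy pulse amplitudes
        thr = mean_thr + thr_sd * rng.standard_normal((n_fibers, n_pulses))
        spikes = int((amps[None, :] > thr).sum())
        hits += spikes >= criterion
    return hits / n_trials

for noise_sd in (0.0, 0.02, 0.05):
    probs = [detection_prob(level, noise_sd) for level in (0.95, 1.00, 1.05)]
    print(f"stimulus noise sd {noise_sd:.2f}: P(detect) at levels 0.95/1.00/1.05 =", probs)
```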

  17. The Effect of Noise on the Relationship between Auditory Working Memory and Comprehension in School-Age Children

    Science.gov (United States)

    Sullivan, Jessica R.; Osman, Homira; Schafer, Erin C.

    2015-01-01

    Purpose: The objectives of the current study were to examine the effect of noise (-5 dB SNR) on auditory comprehension and to examine its relationship with working memory. It was hypothesized that noise has a negative impact on information processing, auditory working memory, and comprehension. Method: Children with normal hearing between the ages…

  19. Auditory Effects of Exposure to Noise and Solvents: A Comparative Study

    Directory of Open Access Journals (Sweden)

    Lobato, Diolen Conceição Barros

    2014-01-01

    Full Text Available Introduction Industry workers are exposed to different environmental risk agents that, when combined, may potentiate risks to hearing. Objective To evaluate the effects of the combined exposure to noise and solvents on hearing in workers. Methods A transversal retrospective cohort study was performed through documentary analysis of an industry. The sample (n = 198) was divided into four groups: the noise group (NG), exposed only to noise; the noise and solvents group (NSG), exposed to noise and solvents; and the noise control group and noise and solvents control group (CNS), with no exposure. Results The NG showed 16.66% of cases suggestive of bilateral noise-induced hearing loss and the NSG showed 5.26%. The NG and NSG had worse thresholds than their respective control groups. Females were less susceptible to noise than males; however, when simultaneously exposed to solvents, hearing was affected in a similar way, resulting in significant differences (p < 0.05). The 40- to 49-year-old age group showed significantly worse (p < 0.05) auditory thresholds in the NSG compared with the CNS. Conclusion The results observed in this study indicate that simultaneous exposure to noise and solvents can damage the peripheral auditory system.

  20. Investigation into the response of the auditory and acoustic communications systems in the Beluga whale (Delphinapterus leucas) of the St. Lawrence River Estuary to noise, using vocal classification

    Science.gov (United States)

    Scheifele, Peter Martin

    2003-06-01

    Noise pollution has only recently become recognized as a potential danger to marine mammals in general, and to the Beluga Whale (Delphinapterus leucas) in particular. These small gregarious Odontocetes make extensive use of sound for social communication and pod cohesion. The St. Lawrence River Estuary is habitat to a small, critically endangered population of about 700 Beluga whales who congregate in four different sites in its upper estuary. The population is believed to be threatened by the stress of high-intensity, low frequency noise. One way to determine whether noise is having an effect on an animal's auditory ability might be to observe a natural and repeatable response of the auditory and vocal systems to varying noise levels. This can be accomplished by observing changes in animal vocalizations in response to auditory feedback. A response such as this observed in humans and some animals is known as the Lombard Vocal Response, which represents a reaction of the auditory system directly manifested by changes in vocalization level. In this research, this population of Beluga Whales was tested to determine whether a vocalization-as-a-function-of-noise phenomenon existed by using Hidden Markov "classified" vocalizations as targets for acoustical analyses. Correlation and regression analyses indicated that the phenomenon does exist and results of a human subjects experiment along with results from other animal species known to exhibit the response strongly implicate the Lombard Vocal Response in the Beluga.

  1. Fatigue Modeling via Mammalian Auditory System for Prediction of Noise Induced Hearing Loss.

    Science.gov (United States)

    Sun, Pengfei; Qin, Jun; Campbell, Kathleen

    2015-01-01

    Noise-induced hearing loss (NIHL) remains a severe health problem worldwide. Existing noise metrics and models for the evaluation of NIHL are limited in predicting gradually developing NIHL (GDHL) caused by high-level occupational noise. In this study, we proposed two auditory-fatigue-based models, the equal velocity level (EVL) and the complex velocity level (CVL), which combine high-cycle fatigue theory with a mammalian auditory model, to predict GDHL. The mammalian auditory model is introduced by combining the transfer function of the external-middle ear with the triple-path nonlinear (TRNL) filter to obtain velocities of the basilar membrane (BM) in the cochlea. The high-cycle fatigue theory is based on the assumption that GDHL can be considered a process of long-cycle mechanical fatigue failure of the organ of Corti. Furthermore, a series of chinchilla experimental data are used to validate the effectiveness of the proposed fatigue models. The regression analysis results show that both proposed fatigue models have high correlations with four hearing loss indices, indicating that the proposed models can accurately predict hearing loss in chinchilla. Results suggest that the CVL model is more accurate than the EVL model in predicting the auditory risk of exposure to hazardous occupational noise.

  2. Fatigue Modeling via Mammalian Auditory System for Prediction of Noise Induced Hearing Loss

    Directory of Open Access Journals (Sweden)

    Pengfei Sun

    2015-01-01

    Full Text Available Noise-induced hearing loss (NIHL) remains a severe health problem worldwide. Existing noise metrics and models for the evaluation of NIHL are limited in predicting gradually developing NIHL (GDHL) caused by high-level occupational noise. In this study, we proposed two auditory-fatigue-based models, the equal velocity level (EVL) and the complex velocity level (CVL), which combine high-cycle fatigue theory with a mammalian auditory model, to predict GDHL. The mammalian auditory model is introduced by combining the transfer function of the external-middle ear with the triple-path nonlinear (TRNL) filter to obtain velocities of the basilar membrane (BM) in the cochlea. The high-cycle fatigue theory is based on the assumption that GDHL can be considered a process of long-cycle mechanical fatigue failure of the organ of Corti. Furthermore, a series of chinchilla experimental data are used to validate the effectiveness of the proposed fatigue models. The regression analysis results show that both proposed fatigue models have high correlations with four hearing loss indices, indicating that the proposed models can accurately predict hearing loss in chinchilla. Results suggest that the CVL model is more accurate than the EVL model in predicting the auditory risk of exposure to hazardous occupational noise.

  3. Wiener-kernel analysis of responses to noise of chinchilla auditory-nerve fibers

    NARCIS (Netherlands)

    Recio-Spinoso, A; Temchin, AN; van Dijk, P; Fan, YH; Ruggero, MA

    2005-01-01

    Responses to broadband Gaussian white noise were recorded in auditory-nerve fibers of deeply anesthetized chinchillas and analyzed by computation of zeroth-, first-, and second-order Wiener kernels. The first-order kernels (similar to reverse correlations or "revcors") of fibers with characteristi
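
    The first-order kernel ("revcor") mentioned above is, in essence, the spike-triggered average of the white-noise stimulus. The sketch below computes it on synthetic data, with the "fiber" replaced by a simple linear filter plus threshold; nothing here reflects the chinchilla recordings themselves.

```python
# First-order Wiener kernel ("revcor") by spike-triggered averaging (synthetic data).
import numpy as np

fs = 10000
rng = np.random.default_rng(5)
noise = rng.standard_normal(fs * 5)                        # 5 s of Gaussian white noise
kernel_true = np.sin(2 * np.pi * 500 * np.arange(64) / fs) * np.hanning(64)
drive = np.convolve(noise, kernel_true, mode="full")[:noise.size]
spikes = np.flatnonzero(drive > 2.5 * drive.std())         # crude spike times (samples)

lags = 64
sta = np.zeros(lags)
valid = spikes[spikes >= lags]
for s in valid:                                            # average the stimulus preceding each spike
    sta += noise[s - lags + 1 : s + 1]
sta /= valid.size
revcor = sta[::-1]                                         # time-reversed STA ~ impulse-response estimate
```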

  4. JP-8 jet fuel can promote auditory impairment resulting from subsequent noise exposure in rats.

    Science.gov (United States)

    Fechter, Laurence D; Gearhart, Caroline; Fulton, Sherry; Campbell, Jerry; Fisher, Jeffrey; Na, Kwangsam; Cocker, David; Nelson-Miller, Alisa; Moon, Patrick; Pouyatos, Benoit

    2007-08-01

    We report on the transient and persistent effects of JP-8 jet fuel exposure on auditory function in rats. JP-8 has become the standard jet fuel utilized in the United States and North Atlantic Treaty Organization countries for military use and it is closely related to Jet A fuel, which is used in U.S. domestic aviation. Rats received JP-8 fuel (1000 mg/m³) by nose-only inhalation for 4 h and half of them were immediately subjected to an octave band of noise ranging between 97 and 105 dB in different experiments. The noise by itself produces a small, but permanent auditory impairment. The current permissible exposure level for JP-8 is 350 mg/m³. Additionally, a positive control group received only noise exposure, and a fourth group consisted of untreated control subjects. Exposures occurred either on 1 day or repeatedly on 5 successive days. Impairments in auditory function were assessed using distortion product otoacoustic emissions and compound action potential testing. In other rats, tissues were harvested following JP-8 exposure for assessment of hydrocarbon levels or glutathione (GSH) levels. A single JP-8 exposure by itself at 1000 mg/m³ did not disrupt auditory function. However, exposure to JP-8 and noise produced an additive disruption in outer hair cell function. Repeated 5-day JP-8 exposure at 1000 mg/m³ for 4 h produced impairment of outer hair cell function that was most evident at the first postexposure assessment time. Partial though not complete recovery was observed over a 4-week postexposure period. The adverse effects of repeated JP-8 exposures on auditory function were inconsistent, but combined treatment with JP-8 + noise yielded greater impairment of auditory function and hair cell loss than did noise by itself. Qualitative comparison of outer hair cell loss suggests an increase in outer hair cell death among rats treated with JP-8 + noise for 5 days as compared to noise alone. In most instances, hydrocarbon constituents of the fuel

  5. Auditory sensitivity in opiate addicts with and without a history of noise exposure

    Directory of Open Access Journals (Sweden)

    Vishakha Rawool

    2011-01-01

    Full Text Available Several case reports suggest that some individuals are susceptible to hearing loss from opioids. A combination of noise and opium exposure is possible in occupational settings such as military service or in recreational settings. According to the Drug Enforcement Agency of the U.S. Department of Justice, prescriptions for opiate-based drugs have skyrocketed in the past decade. Since both opium and noise independently can cause hearing loss, it is important to know the prevalence of hearing loss among individuals who are exposed to opium or both opium and noise. The purpose of this research was to evaluate auditory sensitivity in individuals with a history of opium abuse and/or occupational or nonoccupational noise exposure. Twenty-three men who reported opiate abuse served as participants in the study. Four of the individuals reported no history of noise exposure, 12 reported hobby-related noise exposure, and 7 reported occupational noise exposure, including 2 who also reported hobby-related noise exposure. Fifty percent (2/4) of the individuals without any noise exposure had a hearing loss, confirming previous reports that some of the population is vulnerable to the ototoxic effects of opioids. The percentage of the population with hearing loss increased with hobby-related (58%) and occupational (100%) noise exposure. Mixed MANOVA revealed a significant ear, frequency, and noise exposure interaction. Health professionals need to be aware of the possible ototoxic effects of opioids, since early detection of hearing loss from opium abuse may lead to cessation of abuse and prevention of further progression of hearing loss. The possibility that opium abuse may interact with noise exposure in determining auditory thresholds needs to be considered in noise-exposed individuals who are addicted to opiates. Possible mechanisms of cochlear damage from opium abuse, possible reasons for individual susceptibility, and recommendations for future studies are presented in the article.

  6. Musical experience and the aging auditory system: implications for cognitive abilities and hearing speech in noise.

    Directory of Open Access Journals (Sweden)

    Alexandra Parbery-Clark

    Full Text Available Much of our daily communication occurs in the presence of background noise, compromising our ability to hear. While understanding speech in noise is a challenge for everyone, it becomes increasingly difficult as we age. Although aging is generally accompanied by hearing loss, this perceptual decline cannot fully account for the difficulties experienced by older adults for hearing in noise. Decreased cognitive skills concurrent with reduced perceptual acuity are thought to contribute to the difficulty older adults experience understanding speech in noise. Given that musical experience positively impacts speech perception in noise in young adults (ages 18-30), we asked whether musical experience benefits an older cohort of musicians (ages 45-65), potentially offsetting the age-related decline in speech-in-noise perceptual abilities and associated cognitive function (i.e., working memory). Consistent with performance in young adults, older musicians demonstrated enhanced speech-in-noise perception relative to nonmusicians along with greater auditory, but not visual, working memory capacity. By demonstrating that speech-in-noise perception and related cognitive function are enhanced in older musicians, our results imply that musical training may reduce the impact of age-related auditory decline.

  7. Musical Experience and the Aging Auditory System: Implications for Cognitive Abilities and Hearing Speech in Noise

    Science.gov (United States)

    Parbery-Clark, Alexandra; Strait, Dana L.; Anderson, Samira; Hittner, Emily; Kraus, Nina

    2011-01-01

    Much of our daily communication occurs in the presence of background noise, compromising our ability to hear. While understanding speech in noise is a challenge for everyone, it becomes increasingly difficult as we age. Although aging is generally accompanied by hearing loss, this perceptual decline cannot fully account for the difficulties experienced by older adults for hearing in noise. Decreased cognitive skills concurrent with reduced perceptual acuity are thought to contribute to the difficulty older adults experience understanding speech in noise. Given that musical experience positively impacts speech perception in noise in young adults (ages 18–30), we asked whether musical experience benefits an older cohort of musicians (ages 45–65), potentially offsetting the age-related decline in speech-in-noise perceptual abilities and associated cognitive function (i.e., working memory). Consistent with performance in young adults, older musicians demonstrated enhanced speech-in-noise perception relative to nonmusicians along with greater auditory, but not visual, working memory capacity. By demonstrating that speech-in-noise perception and related cognitive function are enhanced in older musicians, our results imply that musical training may reduce the impact of age-related auditory decline. PMID:21589653

  8. Effects of acoustic noise on the auditory nerve compound action potentials evoked by electric pulse trains.

    Science.gov (United States)

    Nourski, Kirill V; Abbas, Paul J; Miller, Charles A; Robinson, Barbara K; Jeng, Fuh-Cherng

    2005-04-01

    This study investigated the effects of acoustic noise on the auditory nerve compound action potentials in response to electric pulse trains. Subjects were adult guinea pigs, implanted with a minimally invasive electrode to preserve acoustic sensitivity. Electrically evoked compound action potentials (ECAP) were recorded from the auditory nerve trunk in response to electric pulse trains both during and after the presentation of acoustic white noise. Simultaneously presented acoustic noise produced a decrease in ECAP amplitude. The effect of the acoustic masker on the electric probe was greatest at the onset of the acoustic stimulus and it was followed by a partial recovery of the ECAP amplitude. Following cessation of the acoustic noise, ECAP amplitude recovered over a period of approximately 100-200 ms. The effects of the acoustic noise were more prominent at lower electric pulse rates (interpulse intervals of 3 ms and higher). At higher pulse rates, the ECAP adaptation to the electric pulse train alone was larger and the acoustic noise, when presented, produced little additional effect. The observed effects of noise on ECAP were the greatest at high electric stimulus levels and, for a particular electric stimulus level, at high acoustic noise levels.

  9. Musical experience and the aging auditory system: implications for cognitive abilities and hearing speech in noise.

    Science.gov (United States)

    Parbery-Clark, Alexandra; Strait, Dana L; Anderson, Samira; Hittner, Emily; Kraus, Nina

    2011-05-11

    Much of our daily communication occurs in the presence of background noise, compromising our ability to hear. While understanding speech in noise is a challenge for everyone, it becomes increasingly difficult as we age. Although aging is generally accompanied by hearing loss, this perceptual decline cannot fully account for the difficulties experienced by older adults for hearing in noise. Decreased cognitive skills concurrent with reduced perceptual acuity are thought to contribute to the difficulty older adults experience understanding speech in noise. Given that musical experience positively impacts speech perception in noise in young adults (ages 18-30), we asked whether musical experience benefits an older cohort of musicians (ages 45-65), potentially offsetting the age-related decline in speech-in-noise perceptual abilities and associated cognitive function (i.e., working memory). Consistent with performance in young adults, older musicians demonstrated enhanced speech-in-noise perception relative to nonmusicians along with greater auditory, but not visual, working memory capacity. By demonstrating that speech-in-noise perception and related cognitive function are enhanced in older musicians, our results imply that musical training may reduce the impact of age-related auditory decline.

  10. Sensitivity of offset and onset cortical auditory evoked potentials to signals in noise.

    Science.gov (United States)

    Baltzell, Lucas S; Billings, Curtis J

    2014-02-01

    The purpose of this study was to determine the effects of SNR and signal level on the offset response of the cortical auditory evoked potential (CAEP). Successful listening often depends on how well the auditory system can extract target signals from competing background noise. Both signal onsets and offsets are encoded neurally and contribute to successful listening in noise. Neural onset responses to signals in noise demonstrate a strong sensitivity to signal-to-noise ratio (SNR) rather than signal level; however, the sensitivity of neural offset responses to these cues is not known. We analyzed the offset response from two previously published datasets for which only the onset response was reported. For both datasets, CAEPs were recorded from young normal-hearing adults in response to a 1000-Hz tone. For the first dataset, tones were presented at seven different signal levels without background noise, while the second dataset varied both signal level and SNR. Offset responses demonstrated sensitivity to absolute signal level in quiet, SNR, and to absolute signal level in noise. Offset sensitivity to signal level when presented in noise contrasts with previously published onset results. This sensitivity suggests a potential clinical measure of cortical encoding of signal level in noise.

  11. Noise Trauma Induced Plastic Changes in Brain Regions outside the Classical Auditory Pathway

    Science.gov (United States)

    Chen, Guang-Di; Sheppard, Adam; Salvi, Richard

    2017-01-01

    The effects of intense noise exposure on the classical auditory pathway have been extensively investigated; however, little is known about the effects of noise-induced hearing loss on non-classical auditory areas in the brain such as the lateral amygdala (LA) and striatum (Str). To address this issue, we compared the noise-induced changes in spontaneous and tone-evoked responses from multiunit clusters (MUC) in the LA and Str with those seen in auditory cortex (AC). High-frequency octave band noise (10–20 kHz) and narrow band noise (16–20 kHz) induced permanent threshold shifts (PTS) at high frequencies within and above the noise band but not at low frequencies. While the noise trauma significantly elevated spontaneous discharge rate (SR) in the AC, SRs in the LA and Str were only slightly increased across all frequencies. The high-frequency noise trauma affected tone-evoked firing rates in a frequency- and time-dependent manner and the changes appeared to be related to the severity of noise trauma. In the LA, tone-evoked firing rates were reduced at the high frequencies (trauma area), whereas firing rates were enhanced at the low frequencies or at the edge frequency, depending on the severity of hearing loss at the high frequencies. The firing rate temporal profile changed from a broad plateau to one sharp, delayed peak. In the AC, tone-evoked firing rates were depressed at high frequencies and enhanced at the low frequencies while the firing rate temporal profiles became substantially broader. In contrast, firing rates in the Str were generally decreased and firing rate temporal profiles became more phasic and less prolonged. The altered firing rate and pattern at low frequencies induced by high-frequency hearing loss could have perceptual consequences. The tone-evoked hyperactivity in low-frequency MUC could manifest as hyperacusis whereas the discharge pattern changes could affect temporal resolution and integration. PMID:26701290

  12. Tonotopic organization of human auditory association cortex.

    Science.gov (United States)

    Cansino, S; Williamson, S J; Karron, D

    1994-11-07

    Neuromagnetic studies of responses in human auditory association cortex for tone burst stimuli provide evidence for a tonotopic organization. The magnetic source image for the 100 ms component evoked by the onset of a tone is qualitatively similar to that of primary cortex, with responses lying deeper beneath the scalp for progressively higher tone frequencies. However, the tonotopic sequence of association cortex in three subjects is found largely within the superior temporal sulcus, although in the right hemisphere of one subject some sources may be closer to the inferior temporal sulcus. The locus of responses for individual subjects suggests a progression across the cortical surface that is approximately proportional to the logarithm of the tone frequency, as observed previously for primary cortex, with the span of 10 mm for each decade in frequency being comparable for the two areas.
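
    The spatial relationship reported above, a progression roughly proportional to the logarithm of tone frequency with about 10 mm per decade, amounts to a simple logarithmic map; the tiny sketch below restates it numerically (the reference frequency is an arbitrary illustrative choice).

```python
# Logarithmic tonotopic map: ~10 mm of cortex per decade of frequency (illustrative).
import math

def cortical_offset_mm(f_hz, f_ref_hz=250.0, mm_per_decade=10.0):
    """Approximate distance along the tonotopic axis relative to a reference frequency."""
    return mm_per_decade * math.log10(f_hz / f_ref_hz)

for f in (250, 500, 1000, 2000, 4000):
    print(f, "Hz ->", round(cortical_offset_mm(f), 1), "mm")
```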

  13. Human Auditory Processing: Insights from Cortical Event-related Potentials

    Directory of Open Access Journals (Sweden)

    Alexandra P. Key

    2016-04-01

    Full Text Available Human communication and language skills rely heavily on the ability to detect and process auditory inputs. This paper reviews possible applications of the event-related potential (ERP) technique to the study of cortical mechanisms supporting human auditory processing, including speech stimuli. Following a brief introduction to the ERP methodology, the remaining sections focus on demonstrating how ERPs can be used in humans to address research questions related to cortical organization, maturation and plasticity, as well as the effects of sensory deprivation and multisensory interactions. The review is intended to serve as a primer for researchers interested in using ERPs for the study of the human auditory system.

  14. Auditory effects of noise on infant and adult guinea pigs.

    Science.gov (United States)

    Danto, J; Caiazzo, A J

    1977-01-01

    This pilot study compared the susceptibility of the infant (48 hr) and adult (120 days) guinea pig to the effects of noise. Subjects were exposed to a narrow band of noise (center frequency 4 kHz) at an intensity of 115 dB sound pressure level (SPL) for 1 hr. Postexposure thresholds were obtained by a conditioned suppression technique. Results indicated that the infant animals displayed a mean hearing threshold of 25 dB SPL that significantly differed from the adult mean threshold of 7.5 dB SPL.

  15. Early life exposure to noise alters the representation of auditory localization cues in the auditory space map of the barn owl.

    Science.gov (United States)

    Efrati, Adi; Gutfreund, Yoram

    2011-05-01

    The auditory space map in the optic tectum (OT) (also known as superior colliculus in mammals) relies on the tuning of neurons to auditory localization cues that correspond to specific sound source locations. This study investigates the effects of early auditory experiences on the neural representation of binaural auditory localization cues. Young barn owls were raised in continuous omnidirectional broadband noise from before hearing onset to the age of ∼ 65 days. Data from these birds were compared with data from age-matched control owls and from normal adult owls (>200 days). In noise-reared owls, the tuning of tectal neurons for interaural level differences and interaural time differences was broader than in control owls. Moreover, in neurons from noise-reared owls, the interaural level differences tuning was biased towards sounds louder in the contralateral ear. A similar bias appeared, but to a much lesser extent, in age-matched control owls and was absent in adult owls. To follow the recovery process from noise exposure, we continued to survey the neural representations in the OT for an extended period of up to several months after removal of the noise. We report that all the noise-rearing effects tended to recover gradually following exposure to a normal acoustic environment. The results suggest that deprivation from experiencing normal acoustic localization cues disrupts the maturation of the auditory space map in the OT.

  16. Microscopic prediction of speech recognition for listeners with normal hearing in noise using an auditory model.

    Science.gov (United States)

    Jürgens, Tim; Brand, Thomas

    2009-11-01

    This study compares the phoneme recognition performance in speech-shaped noise of a microscopic model for speech recognition with the performance of normal-hearing listeners. "Microscopic" is defined twofold in terms of this model. First, the speech recognition rate is predicted on a phoneme-by-phoneme basis. Second, microscopic modeling means that the signal waveforms to be recognized are processed by mimicking elementary parts of human auditory processing. The model is based on an approach by Holube and Kollmeier [J. Acoust. Soc. Am. 100, 1703-1716 (1996)] and consists of a psychoacoustically and physiologically motivated preprocessing and a simple dynamic-time-warp speech recognizer. The model is evaluated while presenting nonsense speech in a closed-set paradigm. Averaged phoneme recognition rates, specific phoneme recognition rates, and phoneme confusions are analyzed. The influence of different perceptual distance measures and of the model's a priori knowledge is investigated. The results show that human performance can be predicted by this model using an optimal detector, i.e., identical speech waveforms for both training of the recognizer and testing. The best model performance is yielded by distance measures that focus mainly on small perceptual distances and neglect outliers.
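
    The recognizer component described above is a dynamic-time-warp (DTW) template matcher. As a generic illustration, the sketch below computes a DTW distance between two feature sequences; it is not the authors' implementation and leaves out the psychoacoustic preprocessing entirely.

```python
# Generic dynamic-time-warping (DTW) distance between two feature sequences.
import numpy as np

def dtw_distance(a, b):
    """a, b: (time, features) arrays. Returns the cumulative DTW alignment cost."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(a[i - 1] - b[j - 1])          # local frame distance
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

# toy usage: the same "phoneme" template at two speaking rates
t = np.linspace(0, 1, 50)[:, None]
template = np.hstack([np.sin(2 * np.pi * 3 * t), np.cos(2 * np.pi * 3 * t)])
test = template[::2]                                            # time-compressed version
print(round(dtw_distance(template, test), 3))
```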

  17. Functional maps of human auditory cortex: effects of acoustic features and attention.

    Directory of Open Access Journals (Sweden)

    David L Woods

    Full Text Available BACKGROUND: While human auditory cortex is known to contain tonotopically organized auditory cortical fields (ACFs), little is known about how processing in these fields is modulated by other acoustic features or by attention. METHODOLOGY/PRINCIPAL FINDINGS: We used functional magnetic resonance imaging (fMRI) and population-based cortical surface analysis to characterize the tonotopic organization of human auditory cortex and analyze the influence of tone intensity, ear of delivery, scanner background noise, and intermodal selective attention on auditory cortex activations. Medial auditory cortex surrounding Heschl's gyrus showed large sensory (unattended) activations with two mirror-symmetric tonotopic fields similar to those observed in non-human primates. Sensory responses in medial regions had symmetrical distributions with respect to the left and right hemispheres, were enlarged for tones of increased intensity, and were enhanced when sparse image acquisition reduced scanner acoustic noise. Spatial distribution analysis suggested that changes in tone intensity shifted activation within isofrequency bands. Activations to monaural tones were enhanced over the hemisphere contralateral to stimulation, where they produced activations similar to those produced by binaural sounds. Lateral regions of auditory cortex showed small sensory responses that were larger in the right than left hemisphere, lacked tonotopic organization, and were uninfluenced by acoustic parameters. Sensory responses in both medial and lateral auditory cortex decreased in magnitude throughout stimulus blocks. Attention-related modulations (ARMs) were larger in lateral than medial regions of auditory cortex and appeared to arise primarily in belt and parabelt auditory fields. ARMs lacked tonotopic organization, were unaffected by acoustic parameters, and had distributions that were distinct from those of sensory responses. Unlike the gradual adaptation seen for sensory responses

  18. The effects of concomitant Ginkgo intake on noise induced Hippocampus injury. Possible auditory clinical correlate

    Directory of Open Access Journals (Sweden)

    Alaa Abousetta

    2014-11-01

    Full Text Available This study was conducted to determine the injurious effects of noise on the hippocampus, and to show whether Ginkgo biloba (Gb) has any modulatory effect on hippocampal injury. Fifteen adult male albino rats were divided into three groups: a control group, a noise group and a protected group. The noise group was exposed to 100 dB sound pressure level (SPL) white noise, six hours/day for four consecutive weeks. The protected group was exposed to the same noise level with the administration of Gb extract to the animals (50 mg/kg daily) for 4 weeks. In the noise-exposed group, both the pyramidal cell layer and the dentate gyrus (DG) granular cell layer showed a decrease in thickness with loss and degeneration of many cells. The protected group showed preservation of many parameters as compared to the noise group, i.e. increased thickness of Cornu Ammonis area 3 (CA3) and DG, increased surface area of cells and increased vascularity. In conclusion, noise had detrimental effects on cells of Cornu Ammonis area 1 (CA1), CA3 and DG of the hippocampus. In view of this finding, the clinical auditory hazardous effects in people exposed to harmful noise, such as tinnitus, as well as memory disturbances and learning disabilities, might have a new dimension. The administration of Gb protected the hippocampus against the injurious effect of noise. The probable mechanism and usefulness of Gb in reducing the previously mentioned effects are discussed.

  19. Lateralization of Music Processing with Noises in the Auditory Cortex: An fNIRS Study

    Directory of Open Access Journals (Sweden)

    Hendrik Santosa

    2014-12-01

    Full Text Available The present study aimed to determine the effects of background noise on hemispheric lateralization in music processing by exposing fourteen subjects to four different auditory environments: music segments only, noise segments only, music+noise segments, and the entire music piece interfered with by noise segments. The hemodynamic responses in both hemispheres caused by the perception of music in 10 different conditions were measured using functional near-infrared spectroscopy. As a feature to distinguish stimulus-evoked hemodynamics, the difference between the mean and the minimum value of the hemodynamic response for a given stimulus was used. Right-hemispheric lateralization in music processing was about 75% when only music segments (instead of continuous music) were heard. If the stimuli were only noises, the lateralization was about 65%. But if the music was mixed with noises, right-hemispheric lateralization increased. In particular, if the noise level was slightly lower than the music level (i.e., music level 10~15%, noise level 10%), all subjects showed right-hemispheric lateralization: this is attributed to the subjects' effort to hear the music in the presence of noise. However, too much noise reduced the subjects' discerning efforts.

  20. A dynamic auditory-cognitive system supports speech-in-noise perception in older adults.

    Science.gov (United States)

    Anderson, Samira; White-Schwoch, Travis; Parbery-Clark, Alexandra; Kraus, Nina

    2013-06-01

    Understanding speech in noise is one of the most complex activities encountered in everyday life, relying on peripheral hearing, central auditory processing, and cognition. These abilities decline with age, and so older adults are often frustrated by a reduced ability to communicate effectively in noisy environments. Many studies have examined these factors independently; in the last decade, however, the idea of an auditory-cognitive system has emerged, recognizing the need to consider the processing of complex sounds in the context of dynamic neural circuits. Here, we used structural equation modeling to evaluate the interacting contributions of peripheral hearing, central processing, cognitive ability, and life experiences to understanding speech in noise. We recruited 120 older adults (ages 55-79) and evaluated their peripheral hearing status, cognitive skills, and central processing. We also collected demographic measures of life experiences, such as physical activity, intellectual engagement, and musical training. In our model, central processing and cognitive function predicted a significant proportion of variance in the ability to understand speech in noise. To a lesser extent, life experience predicted hearing-in-noise ability through modulation of brainstem function. Peripheral hearing levels did not significantly contribute to the model. Previous musical experience modulated the relative contributions of cognitive ability and lifestyle factors to hearing in noise. Our models demonstrate the complex interactions required to hear in noise and the importance of targeting cognitive function, lifestyle, and central auditory processing in the management of individuals who are having difficulty hearing in noise. Copyright © 2013 Elsevier B.V. All rights reserved.
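
    Full structural equation modeling requires dedicated tooling, but the flavor of the reported result, namely that central processing and cognition carry most of the explainable variance in speech-in-noise scores while peripheral hearing adds little, can be illustrated with an ordinary multiple regression on simulated data. Every variable name and coefficient below is a hypothetical assumption, not the study's data or model.

    import numpy as np

    rng = np.random.default_rng(1)
    n = 120  # same sample size as the study, but the data below are simulated

    # Hypothetical standardized predictors.
    central = rng.normal(size=n)     # central auditory processing composite
    cognition = rng.normal(size=n)   # cognitive composite
    peripheral = rng.normal(size=n)  # peripheral hearing (pure-tone thresholds)

    # Simulated speech-in-noise score driven mainly by central and cognitive factors.
    sin_score = (0.6 * central + 0.5 * cognition + 0.05 * peripheral
                 + rng.normal(scale=0.5, size=n))

    def r_squared(y, predictors):
        """Variance explained by an ordinary least-squares fit with intercept."""
        X = np.column_stack([np.ones(len(y))] + list(predictors))
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        resid = y - X @ beta
        return 1.0 - resid.var() / y.var()

    print("central + cognition:", round(r_squared(sin_score, [central, cognition]), 3))
    print("peripheral alone:   ", round(r_squared(sin_score, [peripheral]), 3))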

  1. Noise exposure and oxidative balance in auditory and extra-auditory structures in adult and developing animals. Pharmacological approaches aimed to minimize its effects.

    Science.gov (United States)

    Molina, S J; Miceli, M; Guelman, L R

    2016-07-01

    Noise coming from urban traffic, household appliances or discotheques might be as hazardous to the health of exposed people as occupational noise, because it may likewise cause hearing loss, changes in hormonal, cardiovascular and immune systems, and behavioral alterations. Besides, noise can affect sleep, work performance and productivity as well as communication skills. Moreover, exposure to noise can trigger an oxidative imbalance between reactive oxygen species (ROS) and the activity of antioxidant enzymes in different structures, which can contribute to tissue damage. In this review we systematized the information from reports concerning noise effects on cell oxidative balance in different tissues, focusing on auditory and non-auditory structures. We paid specific attention to in vivo studies, including results obtained in adult and developing subjects. Finally, we discussed the pharmacological strategies tested by different authors aimed at minimizing the damaging effects of noise on living beings.

  2. Auditory Recognition of Familiar and Unfamiliar Subjects with Wind Turbine Noise

    Directory of Open Access Journals (Sweden)

    Luigi Maffei

    2015-04-01

    Full Text Available Considering the wide growth of the wind turbine market over the last decade as well as their increasing power size, more and more potential conflicts have arisen in society due to the noise radiated by these plants. Our goal was to determine whether the annoyance caused by wind farms is related to aspects other than noise. To accomplish this, an auditory experiment on the recognition of wind turbine noise was conducted with people with long experience of wind turbine noise exposure and with people with no previous experience of this type of noise source. Our findings demonstrated that the trend of auditory recognition is the same for the two examined groups as far as increasing distance and decreasing equivalent sound levels and loudness are concerned. Significant differences between the two groups were observed as the distance increased. People with wind turbine noise experience showed a higher tendency to report false alarms than people without experience.

  3. Auditory recognition of familiar and unfamiliar subjects with wind turbine noise.

    Science.gov (United States)

    Maffei, Luigi; Masullo, Massimiliano; Gabriele, Maria Di; Votsi, Nefta-Eleftheria P; Pantis, John D; Senese, Vincenzo Paolo

    2015-04-17

    Considering the wide growth of the wind turbine market over the last decade as well as their increasing power size, more and more potential conflicts have arisen in society due to the noise radiated by these plants. Our goal was to determine whether the annoyance caused by wind farms is related to aspects other than noise. To accomplish this, an auditory experiment on the recognition of wind turbine noise was conducted with people with long experience of wind turbine noise exposure and with people with no previous experience of this type of noise source. Our findings demonstrated that the trend of auditory recognition is the same for the two examined groups as far as increasing distance and decreasing equivalent sound levels and loudness are concerned. Significant differences between the two groups were observed as the distance increased. People with wind turbine noise experience showed a higher tendency to report false alarms than people without experience.

  4. Auditory Processing in Noise: A Preschool Biomarker for Literacy.

    Directory of Open Access Journals (Sweden)

    Travis White-Schwoch

    2015-07-01

    Full Text Available Learning to read is a fundamental developmental milestone, and achieving reading competency has lifelong consequences. Although literacy development proceeds smoothly for many children, a subset struggle with this learning process, creating a need to identify reliable biomarkers of a child's future literacy that could facilitate early diagnosis and access to crucial early interventions. Neural markers of reading skills have been identified in school-aged children and adults; many pertain to the precision of information processing in noise, but it is unknown whether these markers are present in pre-reading children. Here, in a series of experiments in 112 children (ages 3-14 y), we show brain-behavior relationships between the integrity of the neural coding of speech in noise and phonology. We harness these findings into a predictive model of preliteracy, revealing that a 30-min neurophysiological assessment predicts performance on multiple pre-reading tests and, one year later, predicts preschoolers' performance across multiple domains of emergent literacy. This same neural coding model predicts literacy and diagnosis of a learning disability in school-aged children. These findings offer new insight into the biological constraints on preliteracy during early childhood, suggesting that neural processing of consonants in noise is fundamental for language and reading development. Pragmatically, these findings open doors to early identification of children at risk for language learning problems; this early identification may in turn facilitate access to early interventions that could prevent a life spent struggling to read.

  5. Inhibition in the Human Auditory Cortex.

    Directory of Open Access Journals (Sweden)

    Koji Inui

    Full Text Available Despite their indispensable roles in sensory processing, little is known about inhibitory interneurons in humans. Inhibitory postsynaptic potentials cannot be recorded non-invasively, at least in a pure form, in humans. We herein sought to clarify whether prepulse inhibition (PPI) in the auditory cortex reflected inhibition via interneurons using magnetoencephalography. An abrupt increase in sound pressure by 10 dB in a continuous sound was used to evoke the test response, and PPI was observed by inserting a weak (5 dB increase for 1 ms) prepulse. The time course of the inhibition evaluated by prepulses presented at 10-800 ms before the test stimulus showed at least two temporally distinct inhibitions peaking at approximately 20-60 and 600 ms that presumably reflected IPSPs by fast-spiking, parvalbumin-positive cells and somatostatin-positive, Martinotti cells, respectively. In another experiment, we confirmed that the degree of the inhibition depended on the strength of the prepulse, but not on the amplitude of the prepulse-evoked cortical response, indicating that the prepulse-evoked excitatory response and prepulse-evoked inhibition reflected activation in two different pathways. Although many diseases such as schizophrenia may involve deficits in the inhibitory system, we do not have appropriate methods to evaluate them; therefore, the easy and non-invasive method described herein may be clinically useful.

  6. Suprathreshold auditory processing deficits in noise: Effects of hearing loss and age.

    Science.gov (United States)

    Kortlang, Steffen; Mauermann, Manfred; Ewert, Stephan D

    2016-01-01

    People with sensorineural hearing loss generally suffer from a reduced ability to understand speech in complex acoustic listening situations, particularly when background noise is present. In addition to the loss of audibility, a mixture of suprathreshold processing deficits is possibly involved, like altered basilar membrane compression and related changes, as well as a reduced ability of temporal coding. A series of 6 monaural psychoacoustic experiments at 0.5, 2, and 6 kHz was conducted with 18 subjects, divided equally into groups of young normal-hearing, older normal-hearing and older hearing-impaired listeners, aiming at disentangling the effects of age and hearing loss on psychoacoustic performance in noise. Random frequency modulation detection thresholds (RFMDTs) with a low-rate modulator in wide-band noise, and discrimination of a phase-jittered Schroeder-phase from a random-phase harmonic tone complex are suggested to characterize the individual ability of temporal processing. The outcome was compared to thresholds of pure tones and narrow-band noise, loudness growth functions, auditory filter bandwidths, and tone-in-noise detection thresholds. At 500 Hz, results suggest a contribution of temporal fine structure (TFS) to pure-tone detection thresholds. Significant correlation with auditory thresholds and filter bandwidths indicated an impact of frequency selectivity on TFS usability in wide-band noise. When controlling for the effect of threshold sensitivity, the listener's age significantly correlated with tone-in-noise detection and RFMDTs in noise at 500 Hz, showing that older listeners were particularly affected by background noise at low carrier frequencies.

  7. Effect of noise pollution on hearing in auto-rickshaw drivers: A brainstem auditory-evoked potentials study

    Directory of Open Access Journals (Sweden)

    Bhupendra Marotrao Gathe

    2016-01-01

    Full Text Available Context: The auditory brainstem response is an important tool in the differential diagnosis and grading of hearing impairment. Many studies have been carried out to ascertain the effects of noise on human beings, but far fewer on transportation workers; hence, given this need and the utility of brainstem auditory-evoked potentials (BAEP), this study was conducted to analyze the effect of noise pollution on auto-rickshaw drivers (ARDs). Aim: The aim of this study was to evaluate wave I, II, III, IV, and V latencies in ARDs and compare them with control subjects in Central India. Settings and Design: This was a case-control study with ARDs as participants, whose BAEP patterns were compared with those of normal healthy individuals. Materials and Methods: We recorded BAEPs from fifty healthy control subjects and fifty ARDs from the same community, of the same sex and geographical setup. The absolute latencies were measured and compared. Recording was done using an RMS EMG EP MARK II machine manufactured by RMS Recorders and Medicare Systems, Chandigarh. Statistical Analysis Used: All subject data were entered in an Excel sheet and analyzed with Epi Info 6.0 software using Student's t-test. Results: All absolute wave latencies of II, III, IV, and V were prolonged in the ARDs as compared to control subjects. Conclusions: The prolongation of all absolute latencies of II, III, IV, and V suggests an abnormality in the brainstem auditory pathway, mainly affecting the retrocochlear pathways; this was greater in the group of ARDs with noise exposure >10 years than in the group exposed for <10 years, and was more significant in the right ear than the left.
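
    The statistical step described in this record reduces, for each wave, to a two-sample Student's t-test on absolute latencies. A sketch with made-up wave V latencies (the group sizes match the study, but the means and spreads are purely illustrative):

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(2)

    # Hypothetical wave V latencies in milliseconds (values are illustrative only).
    controls = rng.normal(loc=5.6, scale=0.20, size=50)
    drivers = rng.normal(loc=5.9, scale=0.25, size=50)

    t_stat, p_value = stats.ttest_ind(drivers, controls)
    print(f"wave V: t = {t_stat:.2f}, p = {p_value:.4f}, "
          f"mean prolongation = {drivers.mean() - controls.mean():.2f} ms")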

  18. Effects of Age-Related Hearing Loss and Background Noise on Neuromagnetic Activity from Auditory Cortex

    Directory of Open Access Journals (Sweden)

    Claude Alain

    2014-01-01

    Full Text Available Aging is often accompanied by hearing loss, which impacts how sounds are processed and represented along the ascending auditory pathways and within the auditory cortices. Here, we assess the impact of mild binaural hearing loss on older adults' ability to both process complex sounds embedded in noise and segregate a mistuned harmonic in an otherwise periodic stimulus. We measured auditory evoked fields (AEFs) using magnetoencephalography while participants were presented with complex tones that had either all harmonics in tune or had the third harmonic mistuned by 4 or 16% of its original value. The tones (75 dB sound pressure level, SPL) were presented without, with low (45 dBA SPL), or with moderate (65 dBA SPL) Gaussian noise. For each participant, we modeled the AEFs with a pair of dipoles in the superior temporal plane. We then examined the effects of hearing loss and noise on the amplitude and latency of the resulting source waveforms. In the present study, results revealed that similar noise-induced increases in N1m were present in older adults with and without hearing loss. Our results also showed that the P1m amplitude was larger in the hearing-impaired than normal-hearing adults. In addition, the object-related negativity (ORN) elicited by the mistuned harmonic was larger in hearing-impaired listeners. The enhanced P1m and ORN amplitude in the hearing-impaired older adults suggests that hearing loss increased neural excitability in auditory cortices, which could be related to deficits in inhibitory control.

  9. Assessment of an ICA-based noise reduction method for multi-channel auditory evoked potentials

    Science.gov (United States)

    Mirahmadizoghi, Siavash; Bell, Steven; Simpson, David

    2015-03-01

    In this work, a new independent component analysis (ICA) based method for noise reduction in evoked potentials is evaluated for auditory late responses (ALRs) captured with a 63-channel electroencephalogram (EEG) from 10 normal-hearing subjects. The performance of the new method is compared with a single-channel alternative in terms of signal-to-noise ratio (SNR), the number of channels with an SNR above an empirically derived statistical critical value, and an estimate of hearing threshold. The results show that the multichannel signal processing method can significantly enhance the quality of the signal and also detects hearing thresholds significantly lower than those obtained with the single-channel alternative.
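
    The general recipe behind ICA-based denoising of multichannel evoked potentials is to unmix the recording into independent components, discard components judged to be noise, and back-project the remainder. A sketch of that recipe using scikit-learn's FastICA follows; the toy data and the correlation-based component selection are assumptions made for illustration and are not the evaluated method.

    import numpy as np
    from sklearn.decomposition import FastICA

    # Toy multichannel data: one shared evoked waveform plus independent channel noise.
    rng = np.random.default_rng(3)
    t = np.linspace(0.0, 0.5, 500)
    evoked = np.sin(2 * np.pi * 10 * t) * np.exp(-5 * t)   # damped "evoked response"
    gains = rng.uniform(0.5, 1.5, size=8)                  # per-channel projection weights
    clean = np.outer(evoked, gains)                        # (samples, channels)
    eeg = clean + 0.8 * rng.normal(size=clean.shape)

    # Unmix the recording into independent components.
    ica = FastICA(n_components=8, random_state=0)
    sources = ica.fit_transform(eeg)

    # Naive component selection: keep the component most correlated with the
    # across-channel average (a rough proxy for the evoked signal); a real
    # pipeline would use a more principled criterion.
    avg = eeg.mean(axis=1)
    corrs = [abs(np.corrcoef(sources[:, c], avg)[0, 1]) for c in range(sources.shape[1])]
    keep = int(np.argmax(corrs))
    sources[:, [c for c in range(sources.shape[1]) if c != keep]] = 0.0
    denoised = ica.inverse_transform(sources)

    def snr_db(reference, estimate):
        """SNR of an estimate against a known reference waveform."""
        noise = estimate - reference
        return 10 * np.log10(np.mean(reference ** 2) / np.mean(noise ** 2))

    print("channel 0 SNR (dB) before:", round(snr_db(clean[:, 0], eeg[:, 0]), 1),
          "after:", round(snr_db(clean[:, 0], denoised[:, 0]), 1))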

  10. Comparison of Auditory Brainstem Response in Noise Induced Tinnitus and Non-Tinnitus Control Subjects

    Directory of Open Access Journals (Sweden)

    Ghassem Mohammadkhani

    2009-12-01

    Full Text Available Background and Aim: Tinnitus is an unpleasant sound which can cause behavioral disorders. According to the evidence, the origin of tinnitus lies not only in the peripheral but also in the central auditory system, so evaluation of central auditory system function is necessary. In this study, auditory brainstem responses (ABRs) were compared between noise-induced tinnitus and non-tinnitus control subjects. Materials and Methods: This cross-sectional, descriptive and analytic study was conducted on 60 cases in two groups: 30 noise-induced tinnitus subjects and 30 non-tinnitus controls. ABRs were recorded ipsilaterally and contralaterally, and their latencies and amplitudes were analyzed. Results: Mean interpeak latencies of III-V (p = 0.022) and I-V (p = 0.033) in the ipsilateral electrode array and mean absolute latencies of waves IV (p = 0.015) and V (p = 0.048) in the contralateral electrode array were significantly increased in the noise-induced tinnitus group relative to the control group. Conclusion: It can be concluded that neural transmission time in the brainstem is prolonged, and that there are signs of involvement of the medial nuclei of the olivary complex in addition to the lateral lemniscus.

  11. Lack of protection against gentamicin ototoxicity by auditory conditioning with noise

    Directory of Open Access Journals (Sweden)

    Alex Strose

    2014-10-01

    Full Text Available INTRODUCTION: Auditory conditioning consists of the pre-exposure to low levels of a potentially harmful agent to protect against a subsequent harmful presentation. OBJECTIVE: To confirm whether conditioning with an agent different from the one used to cause the trauma can also be effective. METHOD: Experimental study with 17 guinea pigs divided as follows: group Som: exposed to 85 dB broadband noise centered at 4 kHz, 30 minutes a day for 10 consecutive days; group Cont: intramuscular administration of gentamicin 160 mg/kg a day for 10 consecutive days; group Expt: conditioned with noise similarly to group Som and, after each noise presentation, received gentamicin similarly to group Cont. The animals were evaluated by distortion product otoacoustic emissions (DPOAEs), brainstem auditory evoked potentials (BAEPs) and scanning electron microscopy. RESULTS: The animals that were conditioned with noise did not show any protective effect compared to the ones that received only the ototoxic gentamicin administration. This lack of protection was observed both functionally and morphologically. CONCLUSION: Conditioning with 85 dB broadband noise, 30 min a day for 10 consecutive days, does not protect against an ototoxic gentamicin administration of 160 mg/kg a day for 10 consecutive days in the guinea pig.

  12. Tinnitus and other auditory problems - occupational noise exposure below risk limits may cause inner ear dysfunction.

    Directory of Open Access Journals (Sweden)

    Ann-Cathrine Lindblad

    Full Text Available The aim of the investigation was to study whether dysfunctions associated with the cochlea or its regulatory system can be found, and possibly explain hearing problems, in subjects with normal or near-normal audiograms. The design was a prospective study of subjects recruited from the general population. The included subjects were persons with auditory problems who had normal, or near-normal, pure tone hearing thresholds, who could be included in one of three subgroups: teachers (Education); people working with music (Music); and people with moderate or negligible noise exposure (Other). A fourth group included people with poorer pure tone hearing thresholds and a history of severe occupational noise (Industry). Total N = 193. The following hearing tests were used: pure tone audiometry with Békésy technique; transient evoked otoacoustic emissions and distortion product otoacoustic emissions, without and with contralateral noise; psychoacoustic modulation transfer function; forward masking; speech recognition in noise; and tinnitus matching. A questionnaire about occupations, noise exposure, stress/anxiety, muscular problems, medication, and heredity was administered to the participants. Forward masking results were significantly worse for Education and Industry than for the other groups, possibly associated with the inner hair cell area. Forward masking results were significantly correlated with louder matched tinnitus. For many subjects, speech recognition in noise in the left ear did not improve in a normal way when the listening level was increased. Subjects hypersensitive to loud sound had significantly better speech recognition in noise at the lower test level than subjects who were not hypersensitive. Self-reported stress/anxiety was similar for all groups. In conclusion, hearing dysfunctions were found in subjects with tinnitus and other auditory problems combined with normal or near-normal pure tone thresholds. The teachers, mostly regarded as a group

  13. An anatomical and functional topography of human auditory cortical areas

    Directory of Open Access Journals (Sweden)

    Michelle Moerel

    2014-07-01

    Full Text Available While advances in magnetic resonance imaging (MRI) throughout the last decades have enabled the detailed anatomical and functional inspection of the human brain non-invasively, to date there is no consensus regarding the precise subdivision and topography of the areas forming the human auditory cortex. Here, we propose a topography of the human auditory areas based on insights on the anatomical and functional properties of human auditory areas as revealed by studies of cyto- and myelo-architecture and fMRI investigations at ultra-high magnetic field (7 Tesla). Importantly, we illustrate that - whereas a group-based approach to analyze functional (tonotopic) maps is appropriate to highlight the main tonotopic axis - the examination of tonotopic maps at single subject level is required to detail the topography of primary and non-primary areas that may be more variable across subjects. Furthermore, we show that considering multiple maps indicative of anatomical (i.e. myelination) as well as of functional properties (e.g. broadness of frequency tuning) is helpful in identifying auditory cortical areas in individual human brains. We propose and discuss a topography of areas that is consistent with old and recent anatomical post mortem characterizations of the human auditory cortex and that may serve as a working model for neuroscience studies of auditory functions.

  14. An anatomical and functional topography of human auditory cortical areas.

    Science.gov (United States)

    Moerel, Michelle; De Martino, Federico; Formisano, Elia

    2014-01-01

    While advances in magnetic resonance imaging (MRI) throughout the last decades have enabled the detailed anatomical and functional inspection of the human brain non-invasively, to date there is no consensus regarding the precise subdivision and topography of the areas forming the human auditory cortex. Here, we propose a topography of the human auditory areas based on insights on the anatomical and functional properties of human auditory areas as revealed by studies of cyto- and myelo-architecture and fMRI investigations at ultra-high magnetic field (7 Tesla). Importantly, we illustrate that-whereas a group-based approach to analyze functional (tonotopic) maps is appropriate to highlight the main tonotopic axis-the examination of tonotopic maps at single subject level is required to detail the topography of primary and non-primary areas that may be more variable across subjects. Furthermore, we show that considering multiple maps indicative of anatomical (i.e., myelination) as well as of functional properties (e.g., broadness of frequency tuning) is helpful in identifying auditory cortical areas in individual human brains. We propose and discuss a topography of areas that is consistent with old and recent anatomical post-mortem characterizations of the human auditory cortex and that may serve as a working model for neuroscience studies of auditory functions.

  15. Modeling hemodynamic responses in auditory cortex at 1.5 T using variable duration imaging acoustic noise.

    Science.gov (United States)

    Hu, Shuowen; Olulade, Olumide; Castillo, Javier Gonzalez; Santos, Joseph; Kim, Sungeun; Tamer, Gregory G; Luh, Wen-Ming; Talavage, Thomas M

    2010-02-15

    A confound for functional magnetic resonance imaging (fMRI), especially for auditory studies, is the presence of imaging acoustic noise generated mainly as a byproduct of rapid gradient switching during volume acquisition and, to a lesser extent, the radiofrequency transmit. This work utilized a novel pulse sequence to present actual imaging acoustic noise for characterization of the induced hemodynamic responses and assessment of linearity in the primary auditory cortex with respect to noise duration. Results show that responses to brief duration (46 ms) imaging acoustic noise are highly nonlinear, while responses to longer duration (>1 s) imaging acoustic noise become approximately linear, with the right primary auditory cortex exhibiting a higher degree of nonlinearity than the left for the investigated noise durations. This study also assessed the spatial extent of activation induced by imaging acoustic noise, showing that the use of modeled responses (specific to imaging acoustic noise) as the reference waveform revealed additional activations in the auditory cortex not observed with a canonical gamma variate reference waveform, suggesting an improvement in detection sensitivity for imaging acoustic noise-induced activity. Longer duration (1.5 s) imaging acoustic noise was observed to induce activity that expanded outwards from Heschl's gyrus to cover the superior temporal gyrus as well as parts of the middle temporal gyrus and insula, potentially affecting higher level acoustic processing.
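
    Assessing linearity in this context amounts to asking whether measured responses match the prediction of a linear system, i.e. the stimulus boxcar convolved with an impulse response such as a gamma variate. A toy prediction for the two noise durations discussed above is sketched below; the HRF parameters are illustrative assumptions, not those of the study.

    import numpy as np
    from scipy.stats import gamma

    dt = 0.01                                  # 10-ms time grid
    t = np.arange(0.0, 30.0, dt)
    hrf = gamma.pdf(t, a=6, scale=0.9)         # gamma-variate impulse response (illustrative)
    hrf /= hrf.max()

    def predicted_response(duration_s):
        """Linear-system prediction: a boxcar of the given duration convolved with the HRF."""
        boxcar = (t < duration_s).astype(float)
        return np.convolve(boxcar, hrf)[: len(t)] * dt

    brief = predicted_response(0.046)   # 46-ms noise burst
    long_ = predicted_response(1.5)     # 1.5-s noise burst

    # Under linearity the peak response should scale roughly with stimulus duration here,
    # because both durations are much shorter than the HRF; measured brief-duration
    # responses that exceed this prediction indicate nonlinearity.
    print("predicted peak ratio (1.5 s / 46 ms):", round(long_.max() / brief.max(), 1))
    print("duration ratio:", round(1.5 / 0.046, 1))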

  16. Non-Monotonic Relation Between Noise Exposure Severity and Neuronal Hyperactivity in the Auditory Midbrain

    Directory of Open Access Journals (Sweden)

    Lara Li Hesse

    2016-08-01

    Full Text Available The occurrence of tinnitus can be linked to hearing loss in the majority of cases, but there is nevertheless a large degree of unexplained heterogeneity in the relation between hearing loss and tinnitus. Part of the problem might be that hearing loss is usually quantified in terms of increased hearing thresholds, which only provides limited information about the underlying cochlear damage. Moreover, noise exposure that does not cause hearing threshold loss can still lead to hidden hearing loss (HHL), i.e. functional deafferentation of auditory nerve fibres (ANFs) through loss of synaptic ribbons in inner hair cells. Whilst it is known that increased hearing thresholds can trigger increases in spontaneous neural activity in the central auditory system, i.e. a putative neural correlate of tinnitus, the central effects of HHL have not yet been investigated. Here, we exposed mice to octave-band noise at 100 and 105 dB SPL, to generate HHL and permanent increases of hearing thresholds, respectively. Deafferentation of ANFs was confirmed through measurement of auditory brainstem responses and cochlear immunohistochemistry. Acute extracellular recordings from the auditory midbrain (inferior colliculus) demonstrated increases in spontaneous neuronal activity (a putative neural correlate of tinnitus) in both groups. Surprisingly, the increase in spontaneous activity was most pronounced in the mice with HHL, suggesting that the relation between hearing loss and neuronal hyperactivity might be more complex than currently understood. Our computational model indicated that these differences in neuronal hyperactivity could arise from different degrees of deafferentation of low-threshold ANFs in the two exposure groups. Our results demonstrate that HHL is sufficient to induce changes in central auditory processing, and they also indicate a non-monotonic relationship between cochlear damage and neuronal hyperactivity, suggesting an explanation for why tinnitus might

  17. Conditions of auditory health at work: inquiry of the auditory effect in workers exposed to occupational noise

    Directory of Open Access Journals (Sweden)

    Lopes, Andréa Cintra

    2009-03-01

    Full Text Available Introduction: Individuals exposed to noise may develop a very common pathology: occupational noise-induced hearing loss. Objective: To investigate, by means of a cross-sectional study, the prevalence of occupational hearing loss in workers exposed to sound pressure levels over 85 dB SPL. Method: 400 records of workers exposed to sound pressure levels above 85 dB SPL, working in companies from different sectors, were analyzed. Results: In this sample, statistically significant differences were observed between low- and high-frequency thresholds, and work duration contributed to the worsening of high-frequency thresholds bilaterally. Regarding laterality, no significant differences were confirmed between the ears, and no correlation was found between tinnitus and hearing loss. Conclusion: Intensive work on hearing health promotion and/or hearing loss prevention must be emphasized, especially for workers exposed to high occupational noise levels, together with appropriate individual hearing protection equipment.

  18. Effects of contralateral noise on the 20-Hz auditory steady state response--magnetoencephalography study.

    Directory of Open Access Journals (Sweden)

    Hajime Usubuchi

    Full Text Available The auditory steady state response (ASSR) is an oscillatory brain response, which is phase locked to the rhythm of an auditory stimulus. ASSRs have been recorded in response to a wide frequency range of modulation and/or repetition, but the physiological features of the ASSRs are somewhat different depending on the modulation frequency. Recently, the 20-Hz ASSR has been emphasized in clinical examinations, especially in the area of psychiatry. However, little is known about the physiological properties of the 20-Hz ASSR, compared to those of the 40-Hz and 80-Hz ASSRs. The effects of contralateral noise on the ASSR are known to depend on the modulation frequency to evoke ASSR. However, the effects of contralateral noise on the 20-Hz ASSR are not known. Here we assessed the effects of contralateral white noise at a level of 70 dB SPL on the 20-Hz and 40-Hz ASSRs using a helmet-shaped magnetoencephalography system in 9 healthy volunteers (8 males and 1 female, mean age 31.2 years). The ASSRs were elicited by monaural 1000-Hz 5-s tone bursts amplitude-modulated at 20 and 39 Hz and presented at 80 dB SPL. Contralateral noise caused significant suppression of both the 20-Hz and 40-Hz ASSRs, although suppression was significantly smaller for the 20-Hz ASSRs than the 40-Hz ASSRs. Moreover, the greatest suppression of both 20-Hz and 40-Hz ASSRs occurred in the right hemisphere when stimuli were presented to the right ear with contralateral noise. The present study newly showed that 20-Hz ASSRs are suppressed by contralateral noise, which may be important both for characterization of the 20-Hz ASSR and for interpretation in clinical situations. Physicians must be aware that the 20-Hz ASSR is significantly suppressed by sound (e.g. masking noise or binaural stimulation) applied to the contralateral ear.

  19. Effects of contralateral noise on the 20-Hz auditory steady state response--magnetoencephalography study.

    Science.gov (United States)

    Usubuchi, Hajime; Kawase, Tetsuaki; Kanno, Akitake; Yahata, Izumi; Miyazaki, Hiromitsu; Nakasato, Nobukazu; Kawashima, Ryuta; Katori, Yukio

    2014-01-01

    The auditory steady state response (ASSR) is an oscillatory brain response, which is phase locked to the rhythm of an auditory stimulus. ASSRs have been recorded in response to a wide frequency range of modulation and/or repetition, but the physiological features of the ASSRs are somewhat different depending on the modulation frequency. Recently, the 20-Hz ASSR has been emphasized in clinical examinations, especially in the area of psychiatry. However, little is known about the physiological properties of the 20-Hz ASSR, compared to those of the 40-Hz and 80-Hz ASSRs. The effects of contralateral noise on the ASSR are known to depend on the modulation frequency to evoke ASSR. However, the effects of contralateral noise on the 20-Hz ASSR are not known. Here we assessed the effects of contralateral white noise at a level of 70 dB SPL on the 20-Hz and 40-Hz ASSRs using a helmet-shaped magnetoencephalography system in 9 healthy volunteers (8 males and 1 female, mean age 31.2 years). The ASSRs were elicited by monaural 1000-Hz 5-s tone bursts amplitude-modulated at 20 and 39 Hz and presented at 80 dB SPL. Contralateral noise caused significant suppression of both the 20-Hz and 40-Hz ASSRs, although suppression was significantly smaller for the 20-Hz ASSRs than the 40-Hz ASSRs. Moreover, the greatest suppression of both 20-Hz and 40-Hz ASSRs occurred in the right hemisphere when stimuli were presented to the right ear with contralateral noise. The present study newly showed that 20-Hz ASSRs are suppressed by contralateral noise, which may be important both for characterization of the 20-Hz ASSR and for interpretation in clinical situations. Physicians must be aware that the 20-Hz ASSR is significantly suppressed by sound (e.g. masking noise or binaural stimulation) applied to the contralateral ear.
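
    The stimulus used in studies of this kind, a pure-tone carrier amplitude-modulated at the rate of interest, and the read-out of the steady-state response as the spectral magnitude at the modulation frequency, can both be sketched in a few lines. The sampling rate, noise level and analysis details below are illustrative assumptions only.

    import numpy as np

    fs = 8000                       # sampling rate in Hz (illustrative)
    dur = 5.0                       # 5-s tone burst as in the study
    t = np.arange(0.0, dur, 1.0 / fs)

    def am_tone(carrier_hz=1000.0, mod_hz=39.0, depth=1.0):
        """Amplitude-modulated tone of the kind used to evoke an ASSR."""
        return (1 + depth * np.sin(2 * np.pi * mod_hz * t)) * np.sin(2 * np.pi * carrier_hz * t)

    stimulus = am_tone(mod_hz=39.0)   # or mod_hz=20.0 for the 20-Hz condition

    # Toy "response": a small 39-Hz oscillation buried in broadband noise.
    rng = np.random.default_rng(4)
    response = 0.1 * np.sin(2 * np.pi * 39.0 * t) + rng.normal(size=t.size)

    # ASSR amplitude read out as the spectral magnitude at the modulation frequency.
    spectrum = np.abs(np.fft.rfft(response)) / t.size
    freqs = np.fft.rfftfreq(t.size, 1.0 / fs)
    bin_39 = int(np.argmin(np.abs(freqs - 39.0)))
    print("response amplitude at 39 Hz:", round(2 * spectrum[bin_39], 3))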

  20. Effects of scanner acoustic noise on intrinsic brain activity during auditory stimulation

    Energy Technology Data Exchange (ETDEWEB)

    Yakunina, Natalia [Kangwon National University, Institute of Medical Science, School of Medicine, Chuncheon (Korea, Republic of); Kangwon National University Hospital, Neuroscience Research Institute, Chuncheon (Korea, Republic of); Kang, Eun Kyoung [Kangwon National University Hospital, Department of Rehabilitation Medicine, Chuncheon (Korea, Republic of); Kim, Tae Su [Kangwon National University Hospital, Department of Otolaryngology, Chuncheon (Korea, Republic of); Kangwon National University, School of Medicine, Department of Otolaryngology, Chuncheon (Korea, Republic of); Min, Ji-Hoon [University of Michigan, Department of Biopsychology, Cognition, and Neuroscience, Ann Arbor, MI (United States); Kim, Sam Soo [Kangwon National University Hospital, Neuroscience Research Institute, Chuncheon (Korea, Republic of); Kangwon National University, School of Medicine, Department of Radiology, Chuncheon (Korea, Republic of); Nam, Eui-Cheol [Kangwon National University Hospital, Neuroscience Research Institute, Chuncheon (Korea, Republic of); Kangwon National University, School of Medicine, Department of Otolaryngology, Chuncheon (Korea, Republic of)

    2015-10-15

    Although the effects of scanner background noise (SBN) during functional magnetic resonance imaging (fMRI) have been extensively investigated for the brain regions involved in auditory processing, its impact on other types of intrinsic brain activity has largely been neglected. The present study evaluated the influence of SBN on a number of intrinsic connectivity networks (ICNs) during auditory stimulation by comparing the results obtained using sparse temporal acquisition (STA) with those using continuous acquisition (CA). Fourteen healthy subjects were presented with classical music pieces in a block paradigm during two sessions of STA and CA. A volume-matched CA dataset (CAm) was generated by subsampling the CA dataset to temporally match it with the STA data. Independent component analysis was performed on the concatenated STA-CAm datasets, and voxel data, time courses, power spectra, and functional connectivity were compared. The ICA revealed 19 ICNs; the auditory, default mode, salience, and frontoparietal networks showed greater activity in the STA. The spectral peaks in 17 networks corresponded to the stimulation cycles in the STA, while only five networks displayed this correspondence in the CA. The dorsal default mode and salience networks exhibited stronger correlations with the stimulus waveform in the STA. SBN appeared to influence not only the areas of auditory response but also the majority of other ICNs, including attention and sensory networks. Therefore, SBN should be regarded as a serious nuisance factor during fMRI studies investigating intrinsic brain activity under external stimulation or task loads. (orig.)

  1. Selective attention and the auditory vertex potential. 2: Effects of signal intensity and masking noise

    Science.gov (United States)

    Schwent, V. L.; Hillyard, S. A.; Galambos, R.

    1975-01-01

    A randomized sequence of tone bursts was delivered to subjects at short inter-stimulus intervals with the tones originating from one of three spatially and frequency specific channels. The subject's task was to count the tones in one of the three channels at a time, ignoring the other two, and press a button after each tenth tone. In different conditions, tones were given at high and low intensities and with or without a background white noise to mask the tones. The N1 component of the auditory vertex potential was found to be larger in response to attended-channel tones in relation to unattended tones. This selective enhancement of N1 was minimal for loud tones presented without noise and increased markedly for the lower tone intensity and in the noise-added conditions.

  2. Hearing an Illusory Vowel in Noise: Suppression of Auditory Cortical Activity

    NARCIS (Netherlands)

    Riecke, Lars; Vanbussel, Mieke; Hausfeld, Lars; Baskent, Deniz; Formisano, Elia; Esposito, Fabrizio

    2012-01-01

    Human hearing is constructive. For example, when a voice is partially replaced by an extraneous sound (e.g., on the telephone due to a transmission problem), the auditory system may restore the missing portion so that the voice can be perceived as continuous (Miller and Licklider, 1950; for review,

  3. Myosin VIIA, important for human auditory function, is necessary for Drosophila auditory organ development.

    Directory of Open Access Journals (Sweden)

    Sokol V Todi

    Full Text Available BACKGROUND: Myosin VIIA (MyoVIIA) is an unconventional myosin necessary for vertebrate audition [1]-[5]. Human auditory transduction occurs in sensory hair cells with a staircase-like arrangement of apical protrusions called stereocilia. In these hair cells, MyoVIIA maintains stereocilia organization [6]. Severe mutations in the Drosophila MyoVIIA orthologue, crinkled (ck), are semi-lethal [7] and lead to deafness by disrupting antennal auditory organ (Johnston's Organ, JO) organization [8]. ck/MyoVIIA mutations result in apical detachment of auditory transduction units (scolopidia) from the cuticle that transmits antennal vibrations as mechanical stimuli to JO. PRINCIPAL FINDINGS: Using flies expressing GFP-tagged NompA, a protein required for auditory organ organization in Drosophila, we examined the role of ck/MyoVIIA in JO development and maintenance through confocal microscopy and extracellular electrophysiology. Here we show that ck/MyoVIIA is necessary early in the developing antenna for initial apical attachment of the scolopidia to the articulating joint. ck/MyoVIIA is also necessary to maintain scolopidial attachment throughout adulthood. Moreover, in the adult JO, ck/MyoVIIA genetically interacts with the non-muscle myosin II (through its regulatory light chain protein) and the myosin binding subunit of myosin II phosphatase. Such genetic interactions have not previously been observed in scolopidia. These factors are therefore candidates for modulating MyoVIIA activity in vertebrates. CONCLUSIONS: Our findings indicate that MyoVIIA plays evolutionarily conserved roles in auditory organ development and maintenance in invertebrates and vertebrates, enhancing our understanding of auditory organ development and function, as well as providing significant clues for future research.

  4. Hierarchical processing of auditory objects in humans.

    Directory of Open Access Journals (Sweden)

    Sukhbinder Kumar

    2007-06-01

    Full Text Available This work examines the computational architecture used by the brain during the analysis of the spectral envelope of sounds, an important acoustic feature for defining auditory objects. Dynamic causal modelling and Bayesian model selection were used to evaluate a family of 16 network models explaining functional magnetic resonance imaging responses in the right temporal lobe during spectral envelope analysis. The models encode different hypotheses about the effective connectivity between Heschl's Gyrus (HG), containing the primary auditory cortex, planum temporale (PT), and superior temporal sulcus (STS), and the modulation of that coupling during spectral envelope analysis. In particular, we aimed to determine whether information processing during spectral envelope analysis takes place in a serial or parallel fashion. The analysis provides strong support for a serial architecture with connections from HG to PT and from PT to STS and an increase of the HG to PT connection during spectral envelope analysis. The work supports a computational model of auditory object processing, based on the abstraction of spectro-temporal "templates" in the PT before further analysis of the abstracted form in anterior temporal lobe areas.

  5. Representation of speech in human auditory cortex: is it special?

    Science.gov (United States)

    Steinschneider, Mitchell; Nourski, Kirill V; Fishman, Yonatan I

    2013-11-01

    Successful categorization of phonemes in speech requires that the brain analyze the acoustic signal along both spectral and temporal dimensions. Neural encoding of the stimulus amplitude envelope is critical for parsing the speech stream into syllabic units. Encoding of voice onset time (VOT) and place of articulation (POA), cues necessary for determining phonemic identity, occurs within shorter time frames. An unresolved question is whether the neural representation of speech is based on processing mechanisms that are unique to humans and shaped by learning and experience, or is based on rules governing general auditory processing that are also present in non-human animals. This question was examined by comparing the neural activity elicited by speech and other complex vocalizations in primary auditory cortex of macaques, who are limited vocal learners, with that in Heschl's gyrus, the putative location of primary auditory cortex in humans. Entrainment to the amplitude envelope is neither specific to humans nor to human speech. VOT is represented by responses time-locked to consonant release and voicing onset in both humans and monkeys. Temporal representation of VOT is observed both for isolated syllables and for syllables embedded in the more naturalistic context of running speech. The fundamental frequency of male speakers is represented by more rapid neural activity phase-locked to the glottal pulsation rate in both humans and monkeys. In both species, the differential representation of stop consonants varying in their POA can be predicted by the relationship between the frequency selectivity of neurons and the onset spectra of the speech sounds. These findings indicate that the neurophysiology of primary auditory cortex is similar in monkeys and humans despite their vastly different experience with human speech, and that Heschl's gyrus is engaged in general auditory, and not language-specific, processing.

  6. Auditory stream segregation using bandpass noises: evidence from event-related potentials

    Directory of Open Access Journals (Sweden)

    Yingjiu Nie

    2014-09-01

    Full Text Available The current study measured neural responses to investigate auditory stream segregation of noise stimuli with or without clear spectral contrast. Sequences of alternating A and B noise bursts were presented to elicit stream segregation in normal-hearing listeners. The successive B bursts in each sequence maintained an equal amount of temporal separation with manipulations introduced on the last stimulus. The last B burst was either delayed for 50% of the sequences or not delayed for the other 50%. The A bursts were jittered in between every two adjacent B bursts. To study the effects of spectral separation on streaming, the A and B bursts were further manipulated by using either bandpass-filtered noises widely spaced in center frequency or broadband noises. Event-related potentials (ERPs) to the last B bursts were analyzed to compare the neural responses to the delay vs. no-delay trials in both passive and attentive listening conditions. In the passive listening condition, a trend for a possible late mismatch negativity (MMN) or late discriminative negativity (LDN) response was observed only when the A and B bursts were spectrally separate, suggesting that spectral separation in the A and B burst sequences could be conducive to stream segregation at the pre-attentive level. In the attentive condition, a P300 response was consistently elicited regardless of whether there was spectral separation between the A and B bursts, indicating the facilitative role of voluntary attention in stream segregation. The results suggest that reliable ERP measures can be used as indirect indicators for auditory stream segregation in conditions of weak spectral contrast. These findings have important implications for cochlear implant (CI) studies – as spectral information available through a CI device or simulation is substantially degraded, it may require more attention to achieve stream segregation.

  7. A frequency-selective feedback model of auditory efferent suppression and its implications for the recognition of speech in noise.

    Science.gov (United States)

    Clark, Nicholas R; Brown, Guy J; Jürgens, Tim; Meddis, Ray

    2012-09-01

    The potential contribution of the peripheral auditory efferent system to our understanding of speech in a background of competing noise was studied using a computer model of the auditory periphery and assessed using an automatic speech recognition system. A previous study had shown that a fixed efferent attenuation applied to all channels of a multi-channel model could improve the recognition of connected digit triplets in noise [G. J. Brown, R. T. Ferry, and R. Meddis, J. Acoust. Soc. Am. 127, 943-954 (2010)]. In the current study an anatomically justified feedback loop was used to automatically regulate separate attenuation values for each auditory channel. This arrangement resulted in a further enhancement of speech recognition over fixed-attenuation conditions. Comparisons between multi-talker babble and pink noise interference conditions suggest that the benefit originates from the model's ability to modify the amount of suppression in each channel separately according to the spectral shape of the interfering sounds.
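
    The central idea of a frequency-selective efferent feedback loop, in which each band-pass channel attenuates itself in proportion to a running estimate of its own output energy so that noisier bands are suppressed more strongly, can be caricatured as follows. The filterbank, time constant and gain rule are illustrative assumptions and do not reproduce the published model.

    import numpy as np
    from scipy.signal import butter, lfilter

    fs = 16000
    rng = np.random.default_rng(5)
    x = rng.normal(size=fs)   # 1 s of stand-in input (a speech-plus-noise mixture would go here)

    # A crude four-channel band-pass "periphery"; band edges are illustrative.
    bands = [(300, 700), (700, 1500), (1500, 3000), (3000, 6000)]

    def channel_with_feedback(signal, lo, hi, tau=0.05, strength=2.0):
        """Band-pass one channel, then attenuate it sample by sample in proportion to a
        leaky-integrated estimate of its own output energy: a toy analogue of efferent
        suppression adapting separately to the level in each band."""
        b, a = butter(2, [lo / (fs / 2), hi / (fs / 2)], btype="band")
        y = lfilter(b, a, signal)
        alpha = np.exp(-1.0 / (tau * fs))           # leaky-integrator coefficient
        energy = 0.0
        out = np.empty_like(y)
        for n, sample in enumerate(y):
            energy = alpha * energy + (1 - alpha) * sample ** 2
            gain = 1.0 / (1.0 + strength * energy)  # more energy -> stronger attenuation
            out[n] = gain * sample
        return out

    channels = [channel_with_feedback(x, lo, hi) for lo, hi in bands]
    print("per-channel RMS after suppression:",
          [round(float(np.sqrt(np.mean(c ** 2))), 3) for c in channels])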

  8. Auditory efferent activation in CBA mice exceeds that of C57s for varying levels of noise.

    Science.gov (United States)

    Frisina, Robert D; Newman, S R; Zhu, Xiaoxia

    2007-01-01

    The medial olivocochlear efferent (MOC) system enhances signals in noise and helps mediate auditory attention. Contralateral suppression (CS) of distortion product otoacoustic emissions (DPOAEs) has revealed age-related MOC declines. Here, differences in CS as a function of contralateral noise intensity (43-67 dB sound pressure level) were measured; 2f1-f2 DPOAE grams were recorded for young adult CBA and C57 mice. In CBAs, CS was a monotonic function of contralateral noise level. The C57s showed normal hearing, measured with DPOAE amplitudes and auditory brainstem response thresholds, but showed little CS, suggesting a loss of efferent dynamics preceding any deficiencies of the afferent auditory system.

  9. Active stream segregation specifically involves the left human auditory cortex.

    Science.gov (United States)

    Deike, Susann; Scheich, Henning; Brechmann, André

    2010-06-14

    An important aspect of auditory scene analysis is the sequential grouping of similar sounds into one "auditory stream" while keeping competing streams separate. In the present low-noise fMRI study we presented sequences of alternating high-pitch (A) and low-pitch (B) complex harmonic tones using acoustic parameters that allow the perception of either two separate streams or one alternating stream. However, the subjects were instructed to actively and continuously segregate the A from the B stream. This was controlled by the additional instruction to listen for rare level deviants only in the low-pitch stream. Compared to the control condition in which only one non-separable stream was presented the active segregation of the A from the B stream led to a selective increase of activation in the left auditory cortex (AC). Together with a similar finding from a previous study using a different acoustic cue for streaming, namely timbre, this suggests that the left auditory cortex plays a dominant role in active sequential stream segregation. However, we found cue differences within the left AC: Whereas in the posterior areas, including the planum temporale, activation increased for both acoustic cues, the anterior areas, including Heschl's gyrus, are only involved in stream segregation based on pitch.

  10. Broadened population-level frequency tuning in human auditory cortex of portable music player users.

    Directory of Open Access Journals (Sweden)

    Hidehiko Okamoto

    Full Text Available Nowadays, many people use portable players to enrich their daily life with enjoyable music. However, in noisy environments, the player volume is often set to extremely high levels in order to drown out the intense ambient noise and satisfy the appetite for music. Extensive and inappropriate usage of portable music players might cause subtle damages in the auditory system, which are not behaviorally detectable in an early stage of the hearing impairment progress. Here, by means of magnetoencephalography, we objectively examined detrimental effects of portable music player misusage on the population-level frequency tuning in the human auditory cortex. We compared two groups of young people: one group had listened to music with portable music players intensively for a long period of time, while the other group had not. Both groups performed equally and normally in standard audiological examinations (pure tone audiogram, speech test, and hearing-in-noise test). However, the objective magnetoencephalographic data demonstrated that the population-level frequency tuning in the auditory cortex of the portable music player users was significantly broadened compared to the non-users, when attention was distracted from the auditory modality; this group difference vanished when attention was directed to the auditory modality. Our conclusion is that extensive and inadequate usage of portable music players could cause subtle damages, which standard behavioral audiometric measures fail to detect in an early stage. However, these damages could lead to future irreversible hearing disorders, which would have a huge negative impact on the quality of life of those affected, and the society as a whole.

  11. Can you hear me now? Musical training shapes functional brain networks for selective auditory attention and hearing speech in noise

    Directory of Open Access Journals (Sweden)

    Dana L Strait

    2011-06-01

    Full Text Available Even in the quietest of rooms, our senses are perpetually inundated by a barrage of sounds, requiring the auditory system to adapt to a variety of listening conditions in order to extract signals of interest (e.g., one speaker’s voice amidst others. Brain networks that promote selective attention are thought to sharpen the neural encoding of a target signal, suppressing competing sounds and enhancing perceptual performance. Here, we ask: does musical training benefit cortical mechanisms that underlie selective attention to speech? To answer this question, we assessed the impact of selective auditory attention on cortical auditory-evoked response variability in musicians and nonmusicians. Outcomes indicate strengthened brain networks for selective auditory attention in musicians in that musicians but not nonmusicians demonstrate decreased prefrontal response variability with auditory attention. Results are interpreted in the context of previous work from our laboratory documenting perceptual and subcortical advantages in musicians for the hearing and neural encoding of speech in background noise. Musicians’ neural proficiency for selectively engaging and sustaining auditory attention to language indicates a potential benefit of music for auditory training. Given the importance of auditory attention for the development of language-related skills, musical training may aid in the prevention, habilitation and remediation of children with a wide range of attention-based language and learning impairments.

  12. Using auditory-visual speech to probe the basis of noise-impaired consonant-vowel perception in dyslexia and auditory neuropathy

    Science.gov (United States)

    Ramirez, Joshua; Mann, Virginia

    2005-08-01

    Both dyslexics and auditory neuropathy (AN) subjects show inferior consonant-vowel (CV) perception in noise, relative to controls. To better understand these impairments, natural acoustic speech stimuli that were masked in speech-shaped noise at various intensities were presented to dyslexic, AN, and control subjects either in isolation or accompanied by visual articulatory cues. AN subjects were expected to benefit from the pairing of visual articulatory cues and auditory CV stimuli, provided that their speech perception impairment reflects a relatively peripheral auditory disorder. Assuming that dyslexia reflects a general impairment of speech processing rather than a disorder of audition, dyslexics were not expected to similarly benefit from an introduction of visual articulatory cues. The results revealed an increased effect of noise masking on the perception of isolated acoustic stimuli by both dyslexic and AN subjects. More importantly, dyslexics showed less effective use of visual articulatory cues in identifying masked speech stimuli and lower visual baseline performance relative to AN subjects and controls. Last, a significant positive correlation was found between reading ability and the ameliorating effect of visual articulatory cues on speech perception in noise. These results suggest that some reading impairments may stem from a central deficit of speech processing.

  13. Auditory nerve representation of a complex communication sound in background noise.

    Science.gov (United States)

    Simmons, A M; Schwartz, J J; Ferragamo, M

    1992-05-01

    A population study of auditory nerve responses in the bullfrog, Rana catesbeiana, analyzed the relative contributions of spectral and temporal coding in representing a complex, species-specific communication signal at different stimulus intensities and in the presence of background noise. At stimulus levels of 70 and 80 dB SPL, levels which approximate that received during communication in the natural environment, average rate profiles plotted over fiber characteristic frequency do not reflect the detailed spectral fine structure of the synthetic call. Rate profiles do not change significantly in the presence of background noise. In ambient (no noise) and low noise conditions, both amphibian papilla and basilar papilla fibers phase lock strongly to the waveform periodicity (fundamental frequency) of the synthetic advertisement call. The higher harmonic spectral fine structure of the synthetic call is not accurately reflected in the timing of fiber firing, because firing is "captured" by the fundamental frequency. Only a small number of fibers synchronize preferentially to any harmonic in the call other than the first, and none synchronize to any higher than the third, even when fiber characteristic frequency is close to one of these higher harmonics. Background noise affects fiber temporal responses in two ways: It can reduce synchronization to the fundamental frequency, until fiber responses are masked; or it can shift synchronization from the fundamental to the second or third harmonic of the call. This second effect results in a preservation of temporal coding at high noise levels. These data suggest that bullfrog eighth nerve fibers extract the waveform periodicity of multiple-harmonic stimuli primarily by a temporal code.

  14. Differences in auditory timing between human and nonhuman primates

    NARCIS (Netherlands)

    Honing, H.; Merchant, H.

    2014-01-01

    The gradual audiomotor evolution hypothesis is proposed as an alternative interpretation to the auditory timing mechanisms discussed in Ackermann et al.'s article. This hypothesis accommodates the fact that the performance of nonhuman primates is comparable to humans in single-interval tasks (such

  15. Auditory-evoked cortical activity: contribution of brain noise, phase locking, and spectral power.

    Science.gov (United States)

    Harris, Kelly C; Vaden, Kenneth I; Dubno, Judy R

    2014-09-01

    The N1-P2 is an obligatory cortical response that can reflect the representation of spectral and temporal characteristics of an auditory stimulus. Traditionally, mean amplitudes and latencies of the prominent peaks in the averaged response are compared across experimental conditions. Analyses of the peaks in the averaged response only reflect a subset of the data contained within the electroencephalogram (EEG) signal. We used single-trial analysis techniques to identify the contribution of brain noise, neural synchrony, and spectral power to the generation of P2 amplitude and how these variables may change across age groups. This information is important for appropriate interpretation of event-related potential (ERP) results and for understanding age-related neural pathologies. EEG was measured from 25 younger and 25 older normal hearing adults. Age-related and individual differences in P2 response amplitudes, and variability in brain noise, phase locking value (PLV), and spectral power (4-8 Hz), were assessed from electrode FCz. Model testing and linear regression were used to determine the extent to which brain noise, PLV, and spectral power uniquely predicted P2 amplitudes and varied by age group. Younger adults had significantly larger P2 amplitudes, PLV, and power compared to older adults. Brain noise did not differ between age groups. The results of regression testing revealed that brain noise and PLV, but not spectral power, were unique predictors of P2 amplitudes. Model fit was significantly better in younger than in older adults. ERP analyses are intended to provide a better understanding of the underlying neural mechanisms that contribute to individual and group differences in behavior. The current results support that age-related declines in neural synchrony contribute to smaller P2 amplitudes in older normal hearing adults. Based on our results, we discuss potential models in which differences in neural synchrony and brain noise can account for
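
    As an illustration of the final analysis step described above, the sketch below fits a simple multiple regression of P2 amplitude on brain noise, phase-locking value, and spectral power to ask which predictors carry unique variance. The data are simulated and the variable names are assumptions, not the study's dataset or pipeline.

        # Illustrative only: regress P2 amplitudes on three predictors
        # (brain noise, PLV, 4-8 Hz power) to see which carry unique variance.
        # Data are simulated; variable names are assumptions, not the study's.
        import numpy as np

        rng = np.random.default_rng(0)
        n_subjects = 50
        brain_noise = rng.normal(1.0, 0.2, n_subjects)
        plv = rng.uniform(0.2, 0.8, n_subjects)
        theta_power = rng.normal(2.0, 0.5, n_subjects)
        # Toy ground truth: PLV helps, noise hurts, power is irrelevant.
        p2_amp = 3.0 * plv - 1.5 * brain_noise + rng.normal(0.0, 0.3, n_subjects)

        X = np.column_stack([np.ones(n_subjects), brain_noise, plv, theta_power])
        beta, *_ = np.linalg.lstsq(X, p2_amp, rcond=None)
        pred = X @ beta
        r_squared = 1.0 - np.sum((p2_amp - pred) ** 2) / np.sum((p2_amp - p2_amp.mean()) ** 2)
        print("coefficients (intercept, noise, PLV, power):", np.round(beta, 2))
        print("R^2:", round(r_squared, 3))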

  16. High background noise shapes selective auditory filters in a tropical cricket.

    Science.gov (United States)

    Schmidt, Arne K D; Riede, Klaus; Römer, Heiner

    2011-05-15

    Because of call frequency overlap and masking interference, the airborne sound channel represents a limited resource for communication in a species-rich cricket community like the tropical rainforest. Here we studied the frequency tuning of an auditory neuron mediating phonotaxis in the rainforest cricket Paroecanthus podagrosus, suffering from strong competition, in comparison with the same homologous neuron in two species of European field crickets, where such competition does not exist. As predicted, the rainforest species exhibited a more selective tuning compared with the European counterparts. The filter reduced background nocturnal noise levels by 26 dB, compared with only 16 and 10 dB in the two European species. We also quantified the performance of the sensory filter under the different filter regimes by examining the representation of the species-specific amplitude modulation of the male calling song, when embedded in background noise. Again, the filter of the rainforest cricket performed significantly better in terms of representing this important signal parameter. The neuronal representation of the calling song pattern within receivers was maintained for a wide range of signal-to-noise ratios because of the more sharply tuned sensory system and selective attention mechanisms. Finally, the rainforest cricket also showed an almost perfect match between the filter for sensitivity and the peripheral filter for directional hearing, in contrast to its European counterparts. We discuss the consequences of these adaptations for intraspecific acoustic communication and reproductive isolation between species.

  17. Neural coding and perception of pitch in the normal and impaired human auditory system

    DEFF Research Database (Denmark)

    Santurette, Sébastien

    2011-01-01

    Pitch is an important attribute of hearing that allows us to perceive the musical quality of sounds. Besides music perception, pitch contributes to speech communication, auditory grouping, and perceptual segregation of sound sources. In this work, several aspects of pitch perception in humans were...... investigated using psychophysical methods. First, hearing loss was found to affect the perception of binaural pitch, a pitch sensation created by the binaural interaction of noise stimuli. Specifically, listeners without binaural pitch sensation showed signs of retrocochlear disorders. Despite adverse effects...

  18. Cortical oscillations in auditory perception and speech: evidence for two temporal windows in human auditory cortex

    Directory of Open Access Journals (Sweden)

    Huan eLuo

    2012-05-01

    Full Text Available Natural sounds, including vocal communication sounds, contain critical information at multiple time scales. Two essential temporal modulation rates in speech have been argued to be in the low gamma band (~20-80 ms duration information) and the theta band (~150-300 ms), corresponding to segmental and syllabic modulation rates, respectively. On one hypothesis, auditory cortex implements temporal integration using time constants closely related to these values. The neural correlates of a proposed dual temporal window mechanism in human auditory cortex remain poorly understood. We recorded MEG responses from participants listening to non-speech auditory stimuli with different temporal structures, created by concatenating frequency-modulated segments of varied segment durations. We show that these non-speech stimuli with temporal structure matching speech-relevant scales (~25 ms and ~200 ms) elicit reliable phase tracking in the corresponding oscillatory frequencies (low gamma and theta bands). In contrast, stimuli with non-matching temporal structure do not. Furthermore, the topography of theta band phase tracking shows rightward lateralization while gamma band phase tracking occurs bilaterally. The results support the hypothesis that there exists multi-time resolution processing in cortex on discontinuous scales and provide evidence for an asymmetric organization of temporal analysis (asymmetrical sampling in time, AST). The data argue for a macroscopic-level neural mechanism underlying multi-time resolution processing: the sliding and resetting of intrinsic temporal windows on privileged time scales.

  19. Environmental noise and human prenatal growth

    Energy Technology Data Exchange (ETDEWEB)

    Schell, L.M.

    1981-09-01

    To determine whether chronic exposure to relatively loud noise has demonstrable biological effects in humans, a study was conducted on the effect of mother's exposure to airport noise while pregnant, and of social and biological characteristics of the family upon birthweight and gestation length. The sample of births was drawn from a community located adjacent to an international airport in the U.S., where noise levels had been measured previously. Mother's noise exposure was based upon noise levels near her residence in the community while she was pregnant. Data from 115 births were used, these being from mothers whose noise exposure history was most complete throughout the pregnancy. Using multivariate analysis to correct for family characteristics, the partial correlation coefficient for noise exposure and gestation length was negative, large, and significant in girls (r = -0.49, p < 0.001). In boys the partial correlation coefficient was also negative but was smaller and did not quite reach statistical significance. Partial correlations with birthweight were smaller in both boys and girls and not significant. These results agree best with previous studies that suggest that noise may reduce prenatal growth. The size of the observed effects may be related to a conservative research design biased towards underestimation, as well as to the real effects of noise upon human prenatal growth.
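
    For readers unfamiliar with partial correlation, the sketch below shows one standard way to compute it: correlate the residuals of both variables after regressing out covariates. All data and variable names are hypothetical and are not taken from the study.

        # Sketch of a partial correlation computed by residualization: correlate
        # noise exposure with gestation length after regressing both on family
        # covariates. All data and variable names here are hypothetical.
        import numpy as np

        def residualize(y, covariates):
            """Return the residuals of y after an ordinary least-squares fit on covariates."""
            X = np.column_stack([np.ones(len(y)), covariates])
            beta, *_ = np.linalg.lstsq(X, y, rcond=None)
            return y - X @ beta

        rng = np.random.default_rng(1)
        n = 115
        family = rng.normal(size=(n, 3))                    # e.g. parity, SES, maternal height (made up)
        noise_exposure = rng.normal(size=n) + family[:, 0]
        gestation = 40 - 0.5 * noise_exposure + family @ [0.3, 0.2, 0.1] + rng.normal(scale=0.5, size=n)

        r_partial = np.corrcoef(residualize(noise_exposure, family),
                                residualize(gestation, family))[0, 1]
        print("partial r(noise, gestation | family):", round(r_partial, 2))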

  20. Natural auditory scene statistics shapes human spatial hearing.

    Science.gov (United States)

    Parise, Cesare V; Knorre, Katharina; Ernst, Marc O

    2014-04-22

    Human perception, cognition, and action are laced with seemingly arbitrary mappings. In particular, sound has a strong spatial connotation: Sounds are high and low, melodies rise and fall, and pitch systematically biases perceived sound elevation. The origins of such mappings are unknown. Are they the result of physiological constraints, do they reflect natural environmental statistics, or are they truly arbitrary? We recorded natural sounds from the environment, analyzed the elevation-dependent filtering of the outer ear, and measured frequency-dependent biases in human sound localization. We find that auditory scene statistics reveals a clear mapping between frequency and elevation. Perhaps more interestingly, this natural statistical mapping is tightly mirrored in both ear-filtering properties and in perceived sound location. This suggests that both sound localization behavior and ear anatomy are fine-tuned to the statistics of natural auditory scenes, likely providing the basis for the spatial connotation of human hearing.

  1. Feature-Selective Attention Adaptively Shifts Noise Correlations in Primary Auditory Cortex.

    Science.gov (United States)

    Downer, Joshua D; Rapone, Brittany; Verhein, Jessica; O'Connor, Kevin N; Sutter, Mitchell L

    2017-05-24

    Sensory environments often contain an overwhelming amount of information, with both relevant and irrelevant information competing for neural resources. Feature attention mediates this competition by selecting the sensory features needed to form a coherent percept. How attention affects the activity of populations of neurons to support this process is poorly understood because population coding is typically studied through simulations in which one sensory feature is encoded without competition. Therefore, to study the effects of feature attention on population-based neural coding, investigations must be extended to include stimuli with both relevant and irrelevant features. We measured noise correlations (rnoise) within small neural populations in primary auditory cortex while rhesus macaques performed a novel feature-selective attention task. We found that the effect of feature-selective attention on rnoise depended not only on the population tuning to the attended feature, but also on the tuning to the distractor feature. To attempt to explain how these observed effects might support enhanced perceptual performance, we propose an extension of a simple and influential model in which shifts in rnoise can simultaneously enhance the representation of the attended feature while suppressing the distractor. These findings present a novel mechanism by which attention modulates neural populations to support sensory processing in cluttered environments. SIGNIFICANCE STATEMENT Although feature-selective attention constitutes one of the building blocks of listening in natural environments, its neural bases remain obscure. To address this, we developed a novel auditory feature-selective attention task and measured noise correlations (rnoise) in rhesus macaque A1 during task performance. Unlike previous studies showing that the effect of attention on rnoise depends on population tuning to the attended feature, we show that the effect of attention depends on the tuning to the
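
    As a minimal illustration of the rnoise measure referred to above, the sketch below correlates the trial-to-trial spike-count fluctuations of two simulated neurons after removing each stimulus condition's mean response. The counts are synthetic and the procedure is the generic textbook computation, not the study's analysis code.

        # Minimal sketch of a noise-correlation (rnoise) estimate: correlate the
        # trial-to-trial fluctuations of two neurons' spike counts after removing
        # each stimulus condition's mean. Spike counts are simulated.
        import numpy as np

        rng = np.random.default_rng(2)
        n_conditions, n_trials = 5, 40
        shared = rng.normal(size=(n_conditions, n_trials))            # common variability across the pair
        counts_a = rng.poisson(10, (n_conditions, n_trials)) + 2 * shared
        counts_b = rng.poisson(12, (n_conditions, n_trials)) + 2 * shared

        def zscore_within_condition(counts):
            mu = counts.mean(axis=1, keepdims=True)
            sd = counts.std(axis=1, keepdims=True)
            return ((counts - mu) / sd).ravel()

        r_noise = np.corrcoef(zscore_within_condition(counts_a),
                              zscore_within_condition(counts_b))[0, 1]
        print("rnoise:", round(r_noise, 2))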

  2. Stimulation of the human auditory nerve with optical radiation

    Science.gov (United States)

    Fishman, Andrew; Winkler, Piotr; Mierzwinski, Jozef; Beuth, Wojciech; Izzo Matic, Agnella; Siedlecki, Zygmunt; Teudt, Ingo; Maier, Hannes; Richter, Claus-Peter

    2009-02-01

    A novel, spatially selective method to stimulate cranial nerves has been proposed: contact-free stimulation with optical radiation. The radiation source is an infrared pulsed laser. This case report is the first to show that optical stimulation of the auditory nerve is possible in the human. The ethical approach to conducting any measurements or tests in humans requires efficacy and safety studies in animals, which have been conducted in gerbils. This report represents the first step in a translational research project to initiate a paradigm shift in neural interfaces. A patient was selected who required surgical removal of a large meningioma angiomatum WHO I by a planned transcochlear approach. Prior to cochlear ablation by drilling and subsequent tumor resection, the cochlear nerve was stimulated with a pulsed infrared laser at low radiation energies. Stimulation with optical radiation evoked compound action potentials from the human auditory nerve. Stimulation of the auditory nerve with infrared laser pulses is possible in the human inner ear. This finding is an important step in translating results from animal experiments to humans and furthers the development of a novel interface that uses optical radiation to stimulate neurons. Additional measurements are required to optimize the stimulation parameters.

  3. Segregation of vowels and consonants in human auditory cortex: Evidence for distributed hierarchical organization

    Directory of Open Access Journals (Sweden)

    Jonas eObleser

    2010-12-01

    Full Text Available The speech signal consists of a continuous stream of consonants and vowels, which must be de– and encoded in human auditory cortex to ensure the robust recognition and categorization of speech sounds. We used small-voxel functional magnetic resonance imaging (fMRI) to study information encoded in local brain activation patterns elicited by consonant-vowel syllables, and by a control set of noise bursts. First, activation of anterior–lateral superior temporal cortex was seen when controlling for unspecific acoustic processing (syllables versus band-passed noises), in a classic subtraction-based design. Second, a classifier algorithm, which was trained and tested iteratively on data from all subjects to discriminate local brain activation patterns, yielded separations of cortical patches discriminative of vowel category versus patches discriminative of stop-consonant category across the entire superior temporal cortex, yet with regional differences in average classification accuracy. Overlap (voxels correctly classifying both speech sound categories) was surprisingly sparse. Third, lending further plausibility to the results, classification of speech–noise differences was generally superior to speech–speech classifications, with the notable exception of a left anterior region, where speech–speech classification accuracies were significantly better. These data demonstrate that acoustic-phonetic features are encoded in complex yet sparsely overlapping local patterns of neural activity distributed hierarchically across different regions of the auditory cortex. The redundancy apparent in these multiple patterns may partly explain the robustness of phonemic representations.

  4. Functional changes in the human auditory cortex in ageing.

    Directory of Open Access Journals (Sweden)

    Oliver Profant

    Full Text Available Hearing loss, presbycusis, is one of the most common sensory declines in the ageing population. Presbycusis is characterised by a deterioration in the processing of temporal sound features as well as a decline in speech perception, thus indicating a possible central component. With the aim to explore the central component of presbycusis, we studied the function of the auditory cortex by functional MRI in two groups of elderly subjects (>65 years) and compared the results with young subjects. Acoustic stimulation (noise centered around 350 Hz, 700 Hz, 1.5 kHz, 3 kHz, and 8 kHz), with responses recorded by BOLD fMRI from an area centered on Heschl's gyrus, was used to determine age-related changes at the level of the auditory cortex. The fMRI showed only minimal activation in response to the 8 kHz stimulation, despite the fact that all subjects heard the stimulus. Both elderly groups showed greater activation in response to acoustical stimuli in the temporal lobes in comparison with young subjects. In addition, activation in the right temporal lobe was more expressed than in the left temporal lobe in both elderly groups, whereas in the young control subjects (YC), leftward lateralization was present. No statistically significant differences in activation of the auditory cortex were found between the MP and EP groups. The greater extent of cortical activation in elderly subjects in comparison with young subjects, with an asymmetry towards the right side, may serve as a compensatory mechanism for the impaired processing of auditory information appearing as a consequence of ageing.

  5. Auditory processing in the brainstem and audiovisual integration in humans studied with fMRI

    NARCIS (Netherlands)

    Slabu, Lavinia Mihaela

    2008-01-01

    Functional magnetic resonance imaging (fMRI) is a powerful technique because of the high spatial resolution and the noninvasiveness. The applications of the fMRI to the auditory pathway remain a challenge due to the intense acoustic scanner noise of approximately 110 dB SPL. The auditory system cons

  6. Association of auditory steady state responses with perception of temporal modulations and speech in noise.

    Science.gov (United States)

    Manju, Venugopal; Gopika, Kizhakke Kodiyath; Arivudai Nambi, Pitchai Muthu

    2014-01-01

    Amplitude modulations in speech convey important acoustic information for speech perception. The auditory steady state response (ASSR) is thought to be a physiological correlate of amplitude modulation perception. Limited research is available exploring the association between ASSR and modulation detection ability as well as speech perception. The correlation of modulation detection thresholds (MDT) and speech perception in noise with ASSR was investigated in two experiments. Thirty normal-hearing individuals and 11 normal-hearing individuals within the age range of 18-24 years participated in Experiments 1 and 2, respectively. MDTs were measured using ASSR and a behavioral method at 60 Hz, 80 Hz, and 120 Hz modulation frequencies in the first experiment. The ASSR threshold was obtained by estimating the minimum modulation depth required to elicit ASSR (ASSR-MDT). There was a positive correlation between behavioral MDT and ASSR-MDT at all modulation frequencies. In the second experiment, ASSR for amplitude modulation (AM) sweeps at four different frequency ranges (30-40 Hz, 40-50 Hz, 50-60 Hz, and 60-70 Hz) was recorded. The speech recognition threshold in noise (SRTn) was estimated using a staircase procedure. There was a positive correlation between the amplitude of ASSR for the AM sweep with a frequency range of 30-40 Hz and SRTn. Results of the current study suggest that ASSR provides substantial information about temporal modulation and speech perception.
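
    The sketch below illustrates the stimulus class that both measures rest on: a pure-tone carrier, sinusoidally amplitude modulated at a rate fm with modulation depth m, where the MDT is the smallest m that can still be detected. The carrier frequency, sampling rate, and other values are illustrative assumptions, not the parameters used in the study.

        # Sinusoidally amplitude-modulated tone with modulation depth m
        # (m = 1 is full modulation). Parameter values are illustrative only.
        import numpy as np

        def am_tone(fc=1000.0, fm=80.0, m=0.5, dur=1.0, fs=44100):
            t = np.arange(int(dur * fs)) / fs
            envelope = 1.0 + m * np.sin(2 * np.pi * fm * t)
            return envelope * np.sin(2 * np.pi * fc * t)

        stimulus = am_tone(m=0.25)          # 25% modulation depth
        print(stimulus.shape, stimulus.min(), stimulus.max())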

  7. Notched-noise embedded frequency specific chirps for objective audiometry using auditory brainstem responses

    Directory of Open Access Journals (Sweden)

    Farah I. Corona-Strauss

    2012-02-01

    Full Text Available It has been shown recently that chirp-evoked auditory brainstem responses (ABRs) show better performance than click stimulation, especially at low intensity levels. In this paper we present the development, test, and evaluation of a series of notched-noise embedded frequency specific chirps. ABRs were collected in healthy young control subjects using the developed stimuli. Results of the analysis of the corresponding ABRs using a time-scale phase synchronization stability (PSS) measure are also reported. The resultant wave V amplitude and latency measures showed a similar behavior as values reported in the literature. The PSS of frequency specific chirp-evoked ABRs reflected the presence of the wave V for all stimulation intensities. The scales that resulted in higher PSS are in line with previous findings, where ABRs evoked by broadband chirps were analyzed, and which stated that low frequency channels are better for the recognition and analysis of chirp-evoked ABRs. We conclude that the development and test of the series of notched-noise embedded frequency specific chirps allowed the assessment of frequency specific ABRs, showing an identifiable wave V for different intensity levels. Future work may include the development of a faster automatic recognition scheme for these frequency specific ABRs.
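
    As a generic illustration of the stimulus idea (not the authors' exact design), the sketch below builds a short band-limited chirp and embeds it in broadband noise that has been notch-filtered around the chirp's frequency region. All frequencies, durations, and levels are arbitrary assumptions.

        # Generic illustration: a 10-ms rising chirp centred on a target band,
        # added to broadband noise that has a spectral notch around that band.
        import numpy as np
        from scipy.signal import chirp, butter, sosfiltfilt

        fs = 16000
        t = np.arange(int(0.01 * fs)) / fs                   # 10-ms chirp
        probe = chirp(t, f0=1600, f1=2400, t1=t[-1], method='linear')

        rng = np.random.default_rng(3)
        noise = rng.standard_normal(int(0.5 * fs))           # 500 ms of white noise
        sos = butter(4, [1500, 2500], btype='bandstop', fs=fs, output='sos')
        notched_noise = sosfiltfilt(sos, noise)

        stimulus = notched_noise.copy()
        onset = len(stimulus) // 2                           # place the chirp mid-noise
        stimulus[onset:onset + len(probe)] += 0.5 * probe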

  8. An Auditory-Masking-Threshold-Based Noise Suppression Algorithm GMMSE-AMT[ERB] for Listeners with Sensorineural Hearing Loss

    Directory of Open Access Journals (Sweden)

    Hansen John HL

    2005-01-01

    Full Text Available This study describes a new noise suppression scheme for hearing aid applications based on the auditory masking threshold (AMT) in conjunction with a modified generalized minimum mean square error estimator (GMMSE) for individual subjects with hearing loss. The representation of cochlear frequency resolution is achieved in terms of auditory filter equivalent rectangular bandwidths (ERBs). Estimation of AMT and spreading functions for masking are implemented in two ways: with normal auditory thresholds and normal auditory filter bandwidths (GMMSE-AMT[ERB]-NH) and with elevated thresholds and broader auditory filters characteristic of cochlear hearing loss (GMMSE-AMT[ERB]-HI). Evaluation is performed using speech corpora with objective quality measures (segmental SNR, Itakura-Saito), along with formal listener evaluations of speech quality rating and intelligibility. While no measurable changes in intelligibility occurred, evaluations showed quality improvement with both algorithm implementations. However, the customized formulation based on individual hearing losses was similar in performance to the formulation based on the normal auditory system.
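
    The sketch below illustrates two ingredients mentioned above: the equivalent rectangular bandwidth (ERB) of the normal auditory filter, using Glasberg and Moore's commonly cited fit, and a deliberately crude masking-threshold-limited spectral gain. It is not the GMMSE-AMT[ERB] algorithm; the gain rule and the masking-threshold values are stand-ins used only to convey the idea that suppression need not go below what the masking threshold already hides.

        # ERB of the normal auditory filter (Glasberg & Moore fit) and a crude
        # masking-threshold-limited gain. Illustration only, not GMMSE-AMT[ERB].
        import numpy as np

        def erb_hz(f_hz):
            """Equivalent rectangular bandwidth (Hz) of the auditory filter at f_hz."""
            return 24.7 * (4.37 * f_hz / 1000.0 + 1.0)

        def masked_gain(noise_power, masking_threshold):
            """Attenuate only the part of the noise estimate above the masking threshold."""
            excess = np.maximum(noise_power - masking_threshold, 0.0)
            return masking_threshold / (masking_threshold + excess)

        freqs = np.array([250.0, 1000.0, 4000.0])
        print(np.round(erb_hz(freqs), 1))       # roughly 51.7, 132.6, 456.5 Hz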

  9. Mode-locking neurodynamics predict human auditory brainstem responses to musical intervals.

    Science.gov (United States)

    Lerud, Karl D; Almonte, Felix V; Kim, Ji Chul; Large, Edward W

    2014-02-01

    The auditory nervous system is highly nonlinear. Some nonlinear responses arise through active processes in the cochlea, while others may arise in neural populations of the cochlear nucleus, inferior colliculus and higher auditory areas. In humans, auditory brainstem recordings reveal nonlinear population responses to combinations of pure tones, and to musical intervals composed of complex tones. Yet the biophysical origin of central auditory nonlinearities, their signal processing properties, and their relationship to auditory perception remain largely unknown. Both stimulus components and nonlinear resonances are well represented in auditory brainstem nuclei due to neural phase-locking. Recently mode-locking, a generalization of phase-locking that implies an intrinsically nonlinear processing of sound, has been observed in mammalian auditory brainstem nuclei. Here we show that a canonical model of mode-locked neural oscillation predicts the complex nonlinear population responses to musical intervals that have been observed in the human brainstem. The model makes predictions about auditory signal processing and perception that are different from traditional delay-based models, and may provide insight into the nature of auditory population responses. We anticipate that the application of dynamical systems analysis will provide the starting point for generic models of auditory population dynamics, and lead to a deeper understanding of nonlinear auditory signal processing possibly arising in excitatory-inhibitory networks of the central auditory nervous system. This approach has the potential to link neural dynamics with the perception of pitch, music, and speech, and lead to dynamical models of auditory system development.

  10. Complex-tone pitch representations in the human auditory system

    DEFF Research Database (Denmark)

    Bianchi, Federica

    ) listeners and the effect of musical training for pitch discrimination of complex tones with resolved and unresolved harmonics. Concerning the first topic, behavioral and modeling results in listeners with sensorineural hearing loss (SNHL) indicated that temporal envelope cues of complex tones...... for the individual pitch-discrimination abilities, the musically trained listeners still allocated lower processing effort than did the non-musicians to perform the task at the same performance level. This finding suggests an enhanced pitch representation along the auditory system in musicians, possibly as a result......Understanding how the human auditory system processes the physical properties of an acoustical stimulus to give rise to a pitch percept is a fascinating aspect of hearing research. Since most natural sounds are harmonic complex tones, this work focused on the nature of pitch-relevant cues...

  11. The Effects of Aircraft Noise on the Auditory Language Processing Abilities of English First Language Primary School Learners in Durban, South Africa

    Science.gov (United States)

    Hollander, Cara; de Andrade, Victor Manuel

    2014-01-01

    Schools located near to airports are exposed to high levels of noise which can cause cognitive, health, and hearing problems. Therefore, this study sought to explore whether this noise may cause auditory language processing (ALP) problems in primary school learners. Sixty-one children attending schools exposed to high levels of noise were matched…

  12. A quiet NICU for improved infants' health, development and well-being: a systems approach to reducing noise and auditory alarms

    NARCIS (Netherlands)

    Freudenthal, A.; Van Stuijvenberg, M.; Van Goudoever, J.B.

    2012-01-01

    Noise is a direct cause of health problems, long-lasting auditory problems and developmental problems. Preterm infants are especially at risk with respect to auditory and neurocognitive development. Sound levels are very high at the neonatal intensive care unit (NICU) and may contribute to the frequently observ

  17. Interaction between auditory and visual stimulus relating to the vowel sounds in the auditory cortex in humans: a magnetoencephalographic study.

    Science.gov (United States)

    Miki, Kensaku; Watanabe, Shoko; Kakigi, Ryusuke

    2004-03-11

    We investigated the interaction between auditory and visual stimulus relating to the vowel sounds in the auditory cortex in humans, using magnetoencephalography. We compared the difference in the main component, M100 generated in the auditory cortex, in terms of peak latency, amplitude, dipole location and moment, following the vowel sound_/a/_between two conditions: (1) showing a face with closed mouth; and (2) showing the same face with mouth movement appearing to pronounce/a/using an apparent motion method. We found no significant difference in the M100 component between the two conditions within or between the right and left hemispheres. These findings indicated that the vowel sound perception in the auditory cortex, at least in the primary processing stage, was not affected by viewing mouth movement.

  18. An examination of the effects of various noise on physiological sensibility responses by using human EEG

    Energy Technology Data Exchange (ETDEWEB)

    Cho, W. H.; Lee, J. K.; Son, T. Y.; Hwang, S. H.; Choi, H. [Sungkyunkwan University, Suwon (Korea, Republic of); Lee, M. S. [Hyundai Motor Company, Hwaseong (Korea, Republic of)

    2013-12-15

    This study investigated human stress levels based on electroencephalogram (EEG) data and carried out a subjective evaluation analysis of noise. Visual information is very important for assessing a person's emotional state, and comparatively more previous work has used visual rather than auditory stimuli. Because auditory stimuli have been studied less, we considered them a good choice for this study. Twelve human subjects were exposed to classic piano, ocean wave, army alarm, ambulance, and mosquito noises. We used two groups of noises, comfortable and uncomfortable, so that the difference between two clearly distinct groups could confirm the usefulness of this experimental setting. EEG data were collected during the experimental session. The subjects were tested in a soundproof chamber and asked to minimize blinking, head movement, and swallowing during the experiment. The total time of the noise experiment included the time of the relaxation phase, during which the subjects relaxed in silence for 10 minutes. The relaxation phase was followed by a 20-second noise exposure. The alpha band activities of the subjects were significantly decreased for the ambulance and mosquito noises compared to the classic piano and ocean wave noises. The alpha band activities decreased by 12.8 ± 2.3% for the ocean wave noise, by 32.0 ± 5.4% for the army alarm noise, by 34.5 ± 6.7% for the ambulance noise and by 58.3 ± 9.1% for the mosquito noise compared to that of classic piano. On the other hand, their beta band activities were significantly increased for the ambulance and mosquito noises compared to the classic piano and ocean wave noises. The beta band activities increased by 7.9 ± 1.7% for the ocean wave noise, by 20.6 ± 5.3% for the army alarm noise, by 48.0 ± 7.5% for the ambulance noise and by 61.9 ± 11.2% for the mosquito noise, compared to that of the classic piano.
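
    To make the band-activity comparison concrete, the sketch below estimates alpha (8-13 Hz) and beta (13-30 Hz) band power from a simulated EEG segment with Welch's method. The signal, sampling rate, and band edges are conventional illustrative choices, not the study's recording parameters.

        # Band-power estimate from a simulated EEG segment via Welch's PSD,
        # integrated over conventional alpha and beta bands.
        import numpy as np
        from scipy.signal import welch

        fs = 250
        t = np.arange(20 * fs) / fs
        rng = np.random.default_rng(4)
        eeg = np.sin(2 * np.pi * 10 * t) + 0.3 * rng.standard_normal(t.size)   # alpha-dominated toy EEG

        f, psd = welch(eeg, fs=fs, nperseg=2 * fs)

        def band_power(f, psd, lo, hi):
            mask = (f >= lo) & (f < hi)
            return np.trapz(psd[mask], f[mask])

        print("alpha:", round(band_power(f, psd, 8, 13), 3),
              "beta:", round(band_power(f, psd, 13, 30), 3))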

  19. Perceptual Wavelet packet transform based Wavelet Filter Banks Modeling of Human Auditory system for improving the intelligibility of voiced and unvoiced speech: A Case Study of a system development

    OpenAIRE

    Ranganadh Narayanam*

    2015-01-01

    The objective of this project is to discuss a versatile speech enhancement method based on the human auditory model. This project describes a speech enhancement scheme that meets the demand for quality noise reduction algorithms capable of operating at a very low signal-to-noise ratio. We discuss how the proposed speech enhancement system is capable of reducing noise with little speech degradation in diverse noise environments. In this model to reduce the resi...

  20. Bimodal stimulus timing-dependent plasticity in primary auditory cortex is altered after noise exposure with and without tinnitus.

    Science.gov (United States)

    Basura, Gregory J; Koehler, Seth D; Shore, Susan E

    2015-12-01

    Central auditory circuits are influenced by the somatosensory system, a relationship that may underlie tinnitus generation. In the guinea pig dorsal cochlear nucleus (DCN), pairing spinal trigeminal nucleus (Sp5) stimulation with tones at specific intervals and orders facilitated or suppressed subsequent tone-evoked neural responses, reflecting spike timing-dependent plasticity (STDP). Furthermore, after noise-induced tinnitus, bimodal responses in DCN were shifted from Hebbian to anti-Hebbian timing rules with less discrete temporal windows, suggesting a role for bimodal plasticity in tinnitus. Here, we aimed to determine if multisensory STDP principles like those in DCN also exist in primary auditory cortex (A1), and whether they change following noise-induced tinnitus. Tone-evoked and spontaneous neural responses were recorded before and 15 min after bimodal stimulation in which the intervals and orders of auditory-somatosensory stimuli were randomized. Tone-evoked and spontaneous firing rates were influenced by the interval and order of the bimodal stimuli, and in sham-controls Hebbian-like timing rules predominated as was seen in DCN. In noise-exposed animals with and without tinnitus, timing rules shifted away from those found in sham-controls to more anti-Hebbian rules. Only those animals with evidence of tinnitus showed increased spontaneous firing rates, a purported neurophysiological correlate of tinnitus in A1. Together, these findings suggest that bimodal plasticity is also evident in A1 following noise damage and may have implications for tinnitus generation and therapeutic intervention across the central auditory circuit.

  1. A computational model of human auditory signal processing and perception

    DEFF Research Database (Denmark)

    Jepsen, Morten Løve; Ewert, Stephan D.; Dau, Torsten

    2008-01-01

    A model of computational auditory signal-processing and perception that accounts for various aspects of simultaneous and nonsimultaneous masking in human listeners is presented. The model is based on the modulation filterbank model described by Dau et al. [J. Acoust. Soc. Am. 102, 2892 (1997)] but includes major changes at the peripheral and more central stages of processing. The model contains outer- and middle-ear transformations, a nonlinear basilar-membrane processing stage, a hair-cell transduction stage, a squaring expansion, an adaptation stage, a 150-Hz lowpass modulation filter, a bandpass...
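
    As a rough illustration of a few of the stages listed above (and only those), the sketch below chains a bandpass filter, half-wave rectification as a crude stand-in for hair-cell transduction, and a 150-Hz lowpass filter on the resulting envelope. It omits the nonlinear basilar-membrane stage, adaptation loops, and the modulation filterbank, so it is not the model itself; all parameter values are assumptions.

        # Stripped-down peripheral channel: bandpass "auditory filter",
        # half-wave rectification, and a 150-Hz lowpass on the envelope.
        import numpy as np
        from scipy.signal import butter, sosfiltfilt

        def crude_peripheral_channel(x, fs, cf=1000.0, bw=200.0):
            band = butter(4, [cf - bw / 2, cf + bw / 2], btype='bandpass', fs=fs, output='sos')
            lowpass = butter(2, 150.0, btype='lowpass', fs=fs, output='sos')
            filtered = sosfiltfilt(band, x)
            rectified = np.maximum(filtered, 0.0)       # half-wave rectification
            return sosfiltfilt(lowpass, rectified)      # smoothed envelope

        fs = 16000
        t = np.arange(fs) / fs
        tone = np.sin(2 * np.pi * 1000 * t) * (1 + 0.5 * np.sin(2 * np.pi * 8 * t))
        envelope = crude_peripheral_channel(tone, fs)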

  2. The auditory representation of speech sounds in human motor cortex

    Science.gov (United States)

    Cheung, Connie; Hamilton, Liberty S; Johnson, Keith; Chang, Edward F

    2016-01-01

    In humans, listening to speech evokes neural responses in the motor cortex. This has been controversially interpreted as evidence that speech sounds are processed as articulatory gestures. However, it is unclear what information is actually encoded by such neural activity. We used high-density direct human cortical recordings while participants spoke and listened to speech sounds. Motor cortex neural patterns during listening were substantially different than during articulation of the same sounds. During listening, we observed neural activity in the superior and inferior regions of ventral motor cortex. During speaking, responses were distributed throughout somatotopic representations of speech articulators in motor cortex. The structure of responses in motor cortex during listening was organized along acoustic features similar to auditory cortex, rather than along articulatory features as during speaking. Motor cortex does not contain articulatory representations of perceived actions in speech, but rather, represents auditory vocal information. DOI: http://dx.doi.org/10.7554/eLife.12577.001 PMID:26943778

  3. Design of a New Audio Watermarking System Based on Human Auditory System

    Energy Technology Data Exchange (ETDEWEB)

    Shin, D.H. [Maqtech Co., Ltd., (Korea); Shin, S.W.; Kim, J.W.; Choi, J.U. [Markany Co., Ltd., (Korea); Kim, D.Y. [Bucheon College, Bucheon (Korea); Kim, S.H. [The University of Seoul, Seoul (Korea)

    2002-07-01

    In this paper, we propose a robust digital copyright-protection technique based on the concept of the human auditory system. First, we propose a watermarking technique that withstands various attacks such as time scaling, pitch shifting, added noise, and lossy compression formats such as MP3, AAC, and WMA. Second, we implement an audio PD (portable device) for copyright protection using the proposed method. The proposed watermarking technique is developed using digital filtering. Being designed according to the critical bands of the HAS (human auditory system), the digital filters embed the watermark with almost no effect on audio quality. Before the digital filtering stage, a wavelet transform decomposes the input audio signal into several signals composed of specific frequencies. Then, we embed the watermark in the decomposed signal (0 kHz-11 kHz) with the designed band-stop digital filter. The watermark detection algorithm is implemented on the audio PD (portable device). The proposed watermarking technology embeds 2 bits of information per 15 seconds. If the PD detects the watermark '11', which marks an illegal song, the PD displays an 'Illegal Song' message on the LCD, skips the song and plays the next song. The detection algorithm implemented on the PD requires 19 MHz of computational power, 7.9 kBytes of ROM and 10 kBytes of RAM. The suggested technique satisfies SDMI (Secure Digital Music Initiative) Platform 3 requirements based on an ARM9E core. (author). 9 refs., 8 figs.

  4. Auditory brainstem responses for click and CE-chirp stimuli in individuals with and without occupational noise exposure

    Directory of Open Access Journals (Sweden)

    Zeena Venkatacheluvaiah Pushpalatha

    2016-01-01

    Full Text Available Introduction: Encoding of CE-chirp and click stimuli in the auditory system was studied using auditory brainstem responses (ABRs) among individuals with and without noise exposure. Materials and Methods: The study consisted of two groups. Group 1 (experimental group) consisted of 20 individuals (40 ears) exposed to occupational noise, with hearing thresholds within 25 dB HL. They were further divided into three subgroups based on duration of noise exposure (0–5 years of exposure: T1; 5–10 years: T2; and >10 years: T3). Group 2 (control group) consisted of 20 individuals (40 ears). Absolute latency and amplitude of waves I, III, and V were compared between the two groups for both click and CE-chirp stimuli. The T1, T2, and T3 groups were compared on the same parameters to assess the effect of noise exposure duration on CE-chirp and click ABR. Results: In click ABR, while both parameters for wave III were significantly poorer in the experimental group, wave V showed a significant decline only in terms of amplitude. There was no significant difference in any of the parameters for wave I. In CE-chirp ABR, the latencies of all three waves were significantly prolonged in the experimental group, whereas only wave V showed a significant decrease in amplitude for the same group. Discussion: Compared to click-evoked ABR, CE-chirp ABR was found to be more sensitive in the comparison of latency parameters in individuals with occupational noise exposure. Early pathological changes at the brainstem level can be monitored more effectively using the CE-chirp stimulus than the click stimulus. Conclusion: This study indicates that ABRs obtained with CE-chirp stimuli serve as an effective tool to identify early pathological changes due to occupational noise exposure when compared to click-evoked ABR.

  5. Wiener kernels of chinchilla auditory-nerve fibers : Verification using responses to tones, clicks, and noise and comparison with basilar-membrane vibrations

    NARCIS (Netherlands)

    Temchin, AN; Recio-Spinoso, A; van Dijk, P; Ruggero, MA

    2005-01-01

    Responses to tones, clicks, and noise were recorded from chinchilla auditory-nerve fibers (ANFs). The responses to noise were analyzed by computing the zeroth-, first-, and second-order Wiener kernels (h(0), h(1), and h(2)). The h(1)s correctly predicted the frequency tuning and phases of responses

  6. Modeling the anti-masking effects of the olivocochlear reflex in auditory nerve responses to tones in sustained noise.

    Science.gov (United States)

    Chintanpalli, Ananthakrishna; Jennings, Skyler G; Heinz, Michael G; Strickland, Elizabeth A

    2012-04-01

    The medial olivocochlear reflex (MOCR) has been hypothesized to provide benefit for listening in noise. Strong physiological support for an anti-masking role for the MOCR has come from the observation that auditory nerve (AN) fibers exhibit reduced firing to sustained noise and increased sensitivity to tones when the MOCR is elicited. The present study extended a well-established computational model for normal-hearing and hearing-impaired AN responses to demonstrate that these anti-masking effects can be accounted for by reducing outer hair cell (OHC) gain, which is a primary effect of the MOCR. Tone responses in noise were examined systematically as a function of tone level, noise level, and OHC gain. Signal detection theory was used to predict detection and discrimination for different spontaneous rate fiber groups. Decreasing OHC gain decreased the sustained noise response and increased maximum discharge rate to the tone, thus modeling the ability of the MOCR to decompress AN fiber rate-level functions. Comparing the present modeling results with previous data from AN fibers in decerebrate cats suggests that the ipsilateral masking noise used in the physiological study may have elicited up to 20 dB of OHC gain reduction in addition to that inferred from the contralateral noise effects. Reducing OHC gain in the model also extended the dynamic range for discrimination over a wide range of background noise levels. For each masker level, an optimal OHC gain reduction was predicted (i.e., where maximum discrimination was achieved without increased detection threshold). These optimal gain reductions increased with masker level and were physiologically realistic. Thus, reducing OHC gain can improve tone-in-noise discrimination even though it may produce a “hearing loss” in quiet. Combining MOCR effects with the sensorineural hearing loss effects already captured by this computational AN model will be beneficial for exploring the implications of their interaction
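
    The signal detection theory step mentioned above can be illustrated with a toy d' computation on simulated auditory-nerve discharge rates for noise-alone versus tone-plus-noise trials. The rates below are fabricated for illustration and are not produced by the model described in the study.

        # Toy d' (sensitivity) from simulated discharge rates for noise-alone
        # versus tone-plus-noise trials, using the equal-weight pooled variance.
        import numpy as np

        rng = np.random.default_rng(5)
        rate_noise_alone = rng.normal(60.0, 8.0, 200)      # spikes/s, noise-only trials (made up)
        rate_tone_in_noise = rng.normal(75.0, 8.0, 200)    # spikes/s, tone-plus-noise trials (made up)

        d_prime = (rate_tone_in_noise.mean() - rate_noise_alone.mean()) / np.sqrt(
            0.5 * (rate_tone_in_noise.var(ddof=1) + rate_noise_alone.var(ddof=1)))
        print("d':", round(d_prime, 2))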

  7. Transient human auditory cortex activation during volitional attention shifting.

    Science.gov (United States)

    Uhlig, Christian Harm; Gutschalk, Alexander

    2017-01-01

    While strong activation of auditory cortex is generally found for exogenous orienting of attention, endogenous, intra-modal shifting of auditory attention has not yet been demonstrated to evoke transient activation of the auditory cortex. Here, we used fMRI to test if endogenous shifting of attention is also associated with transient activation of the auditory cortex. In contrast to previous studies, attention shifts were completely self-initiated and not cued by transient auditory or visual stimuli. Stimuli were two dichotic, continuous streams of tones, whose perceptual grouping was not ambiguous. Participants were instructed to continuously focus on one of the streams and switch between the two after a while, indicating the time and direction of each attentional shift by pressing one of two response buttons. The BOLD response around the time of the button presses revealed robust activation of the auditory cortex, along with activation of a distributed task network. To test if the transient auditory cortex activation was specifically related to auditory orienting, a self-paced motor task was added, where participants were instructed to ignore the auditory stimulation while they pressed the response buttons in alternation and at a similar pace. Results showed that attentional orienting produced stronger activity in auditory cortex, but auditory cortex activation was also observed for button presses without focused attention to the auditory stimulus. The response related to attention shifting was stronger contralateral to the side where attention was shifted to. Contralateral-dominant activation was also observed in dorsal parietal cortex areas, confirming previous observations for auditory attention shifting in studies that used auditory cues.

  8. How can the auditory efferent system protect our ears from noise-induced hearing loss? Let us count the ways

    Science.gov (United States)

    Marshall, Lynne; Miller, Judi A. Lapsley

    2015-12-01

    It is a cause for some debate as to how the auditory olivocochlear (OC) efferent system could protect hearing from noise trauma. In this review, we examined physiological research to find mechanisms that could effectively attenuate the response to sound. For each purported mechanism, we indicate which part of the OC-efferent system is responsible for the function and the site of action. These mechanisms include basilar-membrane phase shifts at high stimulus levels; changes in outer-hair-cell stiffness and phase lag associated with efferent slow effects; small decreases in endocochlear potentials causing small decreases in outer- and inner-hair-cell output; low-spontaneous-rate and medium-spontaneous-rate fibers showing OC-induced decrements at high levels; auditory-nerve initial-peak reduction; OC effect increasing over minutes; cholinergic activation of anti-apoptotic pathways; and anti-excitotoxicity. There are clearly multiple opportunities for the OC-efferent system to protect the inner ear from noise trauma. From further exploration into the mechanisms outlined here, as well as to-be-discovered mechanisms, we will gain a greater understanding of the protective nature of the OC-efferent system. These findings could aid our ability to design better predictive tests for people at risk for noise-induced hearing loss.

  9. Distractor Effect of Auditory Rhythms on Self-Paced Tapping in Chimpanzees and Humans.

    Directory of Open Access Journals (Sweden)

    Yuko Hattori

    Full Text Available Humans tend to spontaneously align their movements in response to visual (e.g., swinging pendulum) and auditory rhythms (e.g., hearing music while walking). Particularly in the case of the response to auditory rhythms, neuroscientific research has indicated that motor resources are also recruited while perceiving an auditory rhythm (or regular pulse), suggesting a tight link between the auditory and motor systems in the human brain. However, the evolutionary origin of spontaneous responses to auditory rhythms is unclear. Here, we report that chimpanzees and humans show a similar distractor effect in perceiving isochronous rhythms during rhythmic movement. We used isochronous auditory rhythms as distractor stimuli during self-paced alternate tapping of two keys of an electronic keyboard by humans and chimpanzees. When the tempo was similar to their spontaneous motor tempo, tapping onset was influenced by intermittent entrainment to auditory rhythms. Although this effect itself is not an advanced rhythmic ability such as dancing or singing, our results suggest that, to some extent, the biological foundation for spontaneous responses to auditory rhythms was already deeply rooted in the common ancestor of chimpanzees and humans, 6 million years ago. This also suggests the possibility of a common attentional mechanism, as proposed by the dynamic attending theory, underlying the effect of perceiving external rhythms on motor movement.

  10. Distractor Effect of Auditory Rhythms on Self-Paced Tapping in Chimpanzees and Humans.

    Science.gov (United States)

    Hattori, Yuko; Tomonaga, Masaki; Matsuzawa, Tetsuro

    2015-01-01

    Humans tend to spontaneously align their movements in response to visual (e.g., swinging pendulum) and auditory rhythms (e.g., hearing music while walking). Particularly in the case of the response to auditory rhythms, neuroscientific research has indicated that motor resources are also recruited while perceiving an auditory rhythm (or regular pulse), suggesting a tight link between the auditory and motor systems in the human brain. However, the evolutionary origin of spontaneous responses to auditory rhythms is unclear. Here, we report that chimpanzees and humans show a similar distractor effect in perceiving isochronous rhythms during rhythmic movement. We used isochronous auditory rhythms as distractor stimuli during self-paced alternate tapping of two keys of an electronic keyboard by humans and chimpanzees. When the tempo was similar to their spontaneous motor tempo, tapping onset was influenced by intermittent entrainment to auditory rhythms. Although this effect itself is not an advanced rhythmic ability such as dancing or singing, our results suggest that, to some extent, the biological foundation for spontaneous responses to auditory rhythms was already deeply rooted in the common ancestor of chimpanzees and humans, 6 million years ago. This also suggests the possibility of a common attentional mechanism, as proposed by the dynamic attending theory, underlying the effect of perceiving external rhythms on motor movement.

  11. Different Measures of Auditory and Visual Stroop Interference and Their Relationship to Speech Intelligibility in Noise.

    Science.gov (United States)

    Knight, Sarah; Heinrich, Antje

    2017-01-01

    Inhibition, the ability to suppress goal-irrelevant information, is thought to be an important cognitive skill in many situations, including speech-in-noise (SiN) perception. One way to measure inhibition is by means of Stroop tasks, in which one stimulus dimension must be named while a second, more prepotent dimension is ignored. The to-be-ignored dimension may be relevant or irrelevant to the target dimension, and the inhibition measure, Stroop interference (SI), is calculated as the reaction time difference between the relevant and irrelevant conditions. Both SiN perception and inhibition are suggested to worsen with age, yet attempts to connect age-related declines in these two abilities have produced mixed results. We suggest that the inconsistencies between studies may be due to methodological issues surrounding the use of Stroop tasks. First, the relationship between SI and SiN perception may differ depending on the modality of the Stroop task; second, the traditional SI measure may not account for generalized slowing or sensory declines, and thus may not provide a pure interference measure. We investigated both claims in a group of 50 older adults, who performed two Stroop tasks (visual and auditory) and two SiN perception tasks. For each Stroop task, we calculated interference scores using both the traditional difference measure and methods designed to address its various problems, and compared the ability of these different scoring methods to predict SiN performance, alone and in combination with hearing sensitivity. Results from the two Stroop tasks were uncorrelated and had different relationships to SiN perception. Changing the scoring method altered the nature of the predictive relationship between Stroop scores and SiN perception, which was additionally influenced by hearing sensitivity. These findings raise questions about the extent to which different Stroop tasks and/or scoring methods measure the same aspect of cognition. They also highlight the

  12. Development of a voltage-dependent current noise algorithm for conductance-based stochastic modelling of auditory nerve fibres.

    Science.gov (United States)

    Badenhorst, Werner; Hanekom, Tania; Hanekom, Johan J

    2016-12-01

    This study presents the development of an alternative noise current term and novel voltage-dependent current noise algorithm for conductance-based stochastic auditory nerve fibre (ANF) models. ANFs are known to have significant variance in threshold stimulus, which affects temporal characteristics such as latency. This variance is primarily caused by the stochastic behaviour, or microscopic fluctuations, of the node of Ranvier's voltage-dependent sodium channels, the intensity of which is a function of membrane voltage. Though easy to implement and low in computational cost, existing current noise models have two deficiencies: they are independent of membrane voltage, and they are unable to inherently determine the noise intensity required to produce in vivo measured discharge probability functions. The proposed algorithm overcomes these deficiencies while maintaining its low computational cost and ease of implementation compared to other conductance- and Markovian-based stochastic models. The algorithm is applied to a Hodgkin-Huxley-based compartmental cat ANF model and validated via comparison of the threshold probability and latency distributions to measured cat ANF data. Simulation results show the algorithm's adherence to in vivo stochastic fibre characteristics such as an exponential relationship between the membrane noise and transmembrane voltage, a negative linear relationship between the log of the relative spread of the discharge probability and the log of the fibre diameter, and a decrease in latency with an increase in stimulus intensity.
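
    The voltage dependence described above can be caricatured with a toy leaky-membrane simulation in which the standard deviation of an injected Gaussian current-noise term grows exponentially with depolarisation. This is only a hedged illustration of the idea, not the authors' algorithm or their Hodgkin-Huxley-based fibre model, and every constant below is invented.

        # Toy sketch (not the authors' algorithm): a leaky membrane integrated
        # with Euler steps, driven by Gaussian noise whose standard deviation
        # grows exponentially with depolarisation.
        import numpy as np

        rng = np.random.default_rng(6)
        dt, n_steps = 1e-5, 20000               # 10-microsecond steps, 200 ms total
        v_rest, tau = -70.0, 1e-3               # resting potential (mV), membrane time constant (s)
        sigma0, k = 50.0, 0.05                  # baseline noise intensity and voltage sensitivity (made up)

        v = np.empty(n_steps)
        v[0] = v_rest
        for i in range(1, n_steps):
            depol = v[i - 1] - v_rest
            sigma = sigma0 * np.exp(k * depol)              # noise s.d. grows with depolarisation
            leak = -depol / tau * dt                        # passive decay back to rest
            v[i] = v[i - 1] + leak + sigma * np.sqrt(dt) * rng.standard_normal()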

  13. Effects of background noise on inter-trial phase coherence and auditory N1-P2 responses to speech stimuli.

    Science.gov (United States)

    Koerner, Tess K; Zhang, Yang

    2015-10-01

    This study investigated the effects of a speech-babble background noise on inter-trial phase coherence (ITPC, also referred to as phase locking value (PLV)) and auditory event-related responses (AERP) to speech sounds. Specifically, we analyzed EEG data from 11 normal hearing subjects to examine whether ITPC can predict noise-induced variations in the obligatory N1-P2 complex response. N1-P2 amplitude and latency data were obtained for the /bu/ syllable in quiet and noise listening conditions. ITPC data in delta, theta, and alpha frequency bands were calculated for the N1-P2 responses in the two passive listening conditions. Consistent with previous studies, background noise produced significant amplitude reduction and latency increase in N1 and P2, which were accompanied by significant ITPC decreases in all the three frequency bands. Correlation analyses further revealed that variations in ITPC were able to predict the amplitude and latency variations in N1-P2. The results suggest that trial-by-trial analysis of cortical neural synchrony is a valuable tool in understanding the modulatory effects of background noise on AERP measures.
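
    As a minimal sketch of the ITPC measure itself (not the study's analysis pipeline), the code below band-pass filters simulated trials in the theta range, extracts instantaneous phase with the Hilbert transform, and takes the magnitude of the across-trial mean of the unit phasors at each time point. The trials are simulated and the band edges are conventional choices.

        # ITPC / PLV sketch: band-pass each trial, take the Hilbert phase, and
        # average unit phasors across trials at each time point.
        import numpy as np
        from scipy.signal import butter, sosfiltfilt, hilbert

        fs, n_trials, n_samples = 250, 60, 250
        rng = np.random.default_rng(7)
        t = np.arange(n_samples) / fs
        # Toy trials: a phase-consistent 6-Hz component plus noise.
        trials = np.sin(2 * np.pi * 6 * t) + rng.standard_normal((n_trials, n_samples))

        sos = butter(4, [4, 8], btype='bandpass', fs=fs, output='sos')
        phases = np.angle(hilbert(sosfiltfilt(sos, trials, axis=1), axis=1))
        itpc = np.abs(np.mean(np.exp(1j * phases), axis=0))    # one value per time point
        print("peak theta ITPC:", round(itpc.max(), 2))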

  14. NOISE-INDUCED TOUGHENING EFFECT IN WISTAR RATS: ENHANCED AUDITORY BRAINSTEM RESPONSES ARE RELATED TO CALRETININ AND NITRIC OXIDE SYNTHASE UPREGULATION.

    Directory of Open Access Journals (Sweden)

    Juan Carlos eAlvarado

    2016-03-01

    An appropriate conditioning noise exposure may reduce a subsequent noise-induced threshold shift. Although this toughening effect helps to protect the auditory system from a subsequent traumatic noise exposure, the mechanisms that regulate this protective process are not yet fully understood. Accordingly, the goal of the present study was to characterize physiological processes associated with 'toughening' and to determine their relationship to metabolic changes in the cochlea and cochlear nucleus (CN). Auditory brainstem responses (ABR) were evaluated in Wistar rats before and after exposures to a sound conditioning protocol consisting of a broad-band white noise of 118 dB SPL for 1 h every 72 h, 4 times. After the last ABR evaluation, animals were perfused and their cochleae and brains removed and processed for the activity markers calretinin (CR) and neuronal nitric oxide synthase (nNOS). Toughening was demonstrated by a progressively faster recovery of the threshold shift, as well as of wave amplitudes and latencies, over time. Immunostaining revealed an increase in CR and nNOS levels in the spiral ganglion, spiral ligament and CN in noise-conditioned rats. Overall, these results suggest that the protective mechanisms of the auditory toughening effect initiate in the cochlea and extend to the central auditory system. Such a phenomenon might be in part related to an interplay between CR and nitric oxide signalling pathways, and involve an increased cytosolic calcium buffering capacity induced by the noise conditioning protocol.

  15. 1/f Noise Outperforms White Noise in Sensitizing Baroreflex Function in the Human Brain

    Science.gov (United States)

    Soma, Rika; Nozaki, Daichi; Kwak, Shin; Yamamoto, Yoshiharu

    2003-08-01

    We show that externally added 1/f noise more effectively sensitizes the baroreflex centers in the human brain than white noise. We examined the compensatory heart rate response to a weak periodic signal introduced via venous blood pressure receptors while adding 1/f or white noise with the same variance to the brain stem through bilateral cutaneous stimulation of the vestibular afferents. In both cases, this noisy galvanic vestibular stimulation optimized covariance between the weak input signals and the heart rate responses. However, the optimal level with 1/f noise was significantly lower than with white noise, suggesting a functional benefit of 1/f noise for neuronal information transfer in the brain.
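
    The stimulus contrast in this record is between 1/f noise and white noise of equal variance. Purely as an illustration of the noise construction (not of the galvanic vestibular stimulation protocol), the sketch below shapes the spectrum of a white sequence so that its power falls off as 1/f and then normalizes both sequences to unit variance; the sequence length is arbitrary.

      import numpy as np

      def one_over_f_noise(n_samples, seed=0):
          """Generate noise with an approximately 1/f power spectrum, unit variance."""
          rng = np.random.default_rng(seed)
          white = rng.standard_normal(n_samples)
          spectrum = np.fft.rfft(white)
          freqs = np.fft.rfftfreq(n_samples)
          scale = np.ones_like(freqs)
          scale[1:] = 1.0 / np.sqrt(freqs[1:])   # 1/f power corresponds to 1/sqrt(f) amplitude
          pink = np.fft.irfft(spectrum * scale, n=n_samples)
          return (pink - pink.mean()) / pink.std()

      white = np.random.default_rng(1).standard_normal(4096)
      white = (white - white.mean()) / white.std()
      pink = one_over_f_noise(4096)
      print(white.var(), pink.var())   # both close to 1, i.e. matched variance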

  16. The effects of background noise on the neural responses to natural sounds in cat primary auditory cortex

    Directory of Open Access Journals (Sweden)

    Omer Bar-Yosef

    2007-11-01

    Animal vocalizations in natural settings are invariably accompanied by an acoustic background with a complex statistical structure. We have previously demonstrated that neuronal responses in primary auditory cortex of halothane-anesthetized cats depend strongly on the natural background. Here, we study in detail the neuronal responses to the background sounds and their relationships to the responses to the foreground sounds. Natural bird chirps as well as modifications of these chirps were used. The chirps were decomposed into three components: the clean chirps, their echoes, and the background noise. The last two were weaker than the clean chirp by 13 and 29 dB on average, respectively. The test stimuli consisted of the full natural stimulus, the three basic components, and their three pairwise combinations. When the level of the background components (echoes and background noise) presented alone was sufficiently loud to evoke neuronal activity, these background components had an unexpectedly strong effect on the responses of the neurons to the main bird chirp. In particular, the responses to the original chirps were more similar on average to the responses evoked by the two background components than to the responses evoked by the clean chirp, both in terms of the evoked spike count and in terms of the temporal pattern of the responses. These results suggest that some of the neurons responded specifically to the acoustic background even when presented together with the substantially louder main chirp, and may imply that neurons in A1 already participate in auditory source segregation.

  17. Hemodynamic responses in human multisensory and auditory association cortex to purely visual stimulation

    Directory of Open Access Journals (Sweden)

    Baumann Simon

    2007-02-01

    Background: Recent findings of a tight coupling between visual and auditory association cortices during multisensory perception in monkeys and humans raise the question whether consistent paired presentation of simple visual and auditory stimuli prompts conditioned responses in unimodal auditory regions or multimodal association cortex once visual stimuli are presented in isolation in a post-conditioning run. To address this issue, fifteen healthy participants partook in a "silent" sparse temporal event-related fMRI study. In the first phase (visual control habituation) they were presented with briefly flashing red visual stimuli. In the second phase (auditory control habituation) they heard brief telephone ringing. In the third phase (conditioning) we coincidently presented the visual stimulus (CS) paired with the auditory stimulus (UCS). In the fourth phase participants either viewed flashes paired with the auditory stimulus (maintenance, CS-) or viewed the visual stimulus in isolation (extinction, CS+) according to a 5:10 partial reinforcement schedule. The participants had no other task than attending to the stimuli and indicating the end of each trial by pressing a button. Results: During unpaired visual presentations (preceding and following the paired presentation) we observed significant brain responses beyond primary visual cortex in the bilateral posterior auditory association cortex (planum temporale, planum parietale) and in the right superior temporal sulcus, whereas the primary auditory regions were not involved. By contrast, the activity in auditory core regions was markedly larger when participants were presented with auditory stimuli. Conclusion: These results demonstrate involvement of multisensory and auditory association areas in the perception of unimodal visual stimulation, which may reflect the instantaneous forming of multisensory associations and cannot be attributed to sensation of an auditory event.

  18. The Efficacy of Short-term Gated Audiovisual Speech Training for Improving Auditory Sentence Identification in Noise in Elderly Hearing Aid Users

    Science.gov (United States)

    Moradi, Shahram; Wahlin, Anna; Hällgren, Mathias; Rönnberg, Jerker; Lidestam, Björn

    2017-01-01

    This study aimed to examine the efficacy and maintenance of short-term (one-session) gated audiovisual speech training for improving auditory sentence identification in noise in experienced elderly hearing-aid users. Twenty-five hearing aid users (16 men and 9 women), with an average age of 70.8 years, were randomly divided into an experimental (audiovisual training, n = 14) and a control (auditory training, n = 11) group. Participants underwent gated speech identification tasks comprising Swedish consonants and words presented at 65 dB sound pressure level with a 0 dB signal-to-noise ratio (steady-state broadband noise), in audiovisual or auditory-only training conditions. The Hearing-in-Noise Test was employed to measure participants' auditory sentence identification in noise before the training (pre-test), promptly after training (post-test), and 1 month after training (one-month follow-up). The results showed that audiovisual training improved auditory sentence identification in noise promptly after the training (post-test vs. pre-test scores); furthermore, this improvement was maintained 1 month after the training (one-month follow-up vs. pre-test scores). Such improvement was not observed in the control group, either promptly after the training or at the one-month follow-up. However, neither a significant between-groups difference nor a group-by-session interaction was observed. Conclusion: Audiovisual training may be considered in aural rehabilitation of hearing aid users to improve listening capabilities in noisy conditions. However, the lack of a significant between-groups effect (audiovisual vs. auditory) or an interaction between group and session calls for further research. PMID:28348542

  19. Empathy and the somatotopic auditory mirror system in humans

    NARCIS (Netherlands)

    Gazzola, Valeria; Aziz-Zadeh, Lisa; Keysers, Christian

    2006-01-01

    How do we understand the actions of other individuals if we can only hear them? Auditory mirror neurons respond both while monkeys perform hand or mouth actions and while they listen to sounds of similar actions [1, 2]. This system might be critical for auditory action understanding and language

  20. Effects of continuous conditioning noise and light on the auditory- and visual-evoked potentials of the guinea pig.

    Science.gov (United States)

    Goksoy, Cuneyt; Demirtas, Serdar; Ates, Kahraman

    2005-11-01

    Neurophysiological studies aiming to explore how the brain integrates information from different brain regions are increasing in the literature. The aim of the present study was to explore intramodal (binaural, binocular) and intermodal (audio-visual) interactions in the guinea pig brain by observing how evoked potentials are changed by generalized continuous background activity. Seven chronically prepared animals were used in the study, and the recordings were made while they were awake. Epidural electrodes were implanted on the skulls using stereotaxic methods. Continuous light (for retinal receptors) or continuous white noise (for cochlear receptors) was used as the continuous conditioning stimulus for generalized stimulation. To evoke auditory or visual potentials, clicks or flashes were used as transient imperative stimuli. The study data suggest that (a) white noise applied to one ear modifies the response to clicks in the contralateral ear, a binaural interaction; (b) continuous light applied to one eye modifies the response to flashes applied to the contralateral eye, which is interpreted as a binocular interaction; (c) regardless of the application side, white noise similarly modified the response to flashes applied to either eye, indicating a nonspecific effect of white noise on vision that is independent of spatial hearing mechanisms; and (d) continuous light, in either eye, did not affect the response to clicks applied to either ear, suggesting a 'one-way' interaction in which continuous aural stimulation affects the visual response but not vice versa.

  1. Noise-gated encoding of slow inputs by auditory brain stem neurons with a low-threshold K+ current.

    Science.gov (United States)

    Gai, Yan; Doiron, Brent; Kotak, Vibhakar; Rinzel, John

    2009-12-01

    Phasic neurons, which do not fire repetitively to steady depolarization, are found at various stages of the auditory system. Phasic neurons are commonly described as band-pass filters because they do not respond to low-frequency inputs even when the amplitude is large. However, we show that phasic neurons can encode low-frequency inputs when noise is present. With a low-threshold potassium current (I(KLT)), a phasic neuron model responds to rising and falling phases of a subthreshold low-frequency signal with white noise. When the white noise was low-pass filtered, the phasic model also responded to the signal's trough but still not to the peak. In contrast, a tonic neuron model fired mostly to the signal's peak. To test the model predictions, whole cell slice recordings were obtained in the medial (MSO) and lateral (LSO) superior olivary neurons in gerbil from postnatal day 10 (P10) to 22. The phasic MSO neurons with strong I(KLT), mostly from gerbils aged P17 or older, showed firing patterns consistent with the preceding predictions. Moreover, injecting a virtual I(KLT) into weak-phasic MSO and tonic LSO neurons with putative weak or no I(KLT) (from gerbils younger than P17) shifted the neural response from the signal's peak to the rising phase. These findings advance our knowledge about how noise gates the signal pathway and how phasic neurons encode slow envelopes of sounds with high-frequency carriers.

  2. Predicting phoneme and word recognition in noise using a computational model of the auditory periphery.

    Science.gov (United States)

    Moncada-Torres, Arturo; van Wieringen, Astrid; Bruce, Ian C; Wouters, Jan; Francart, Tom

    2017-01-01

    Several filterbank-based metrics have been proposed to predict speech intelligibility (SI). However, these metrics incorporate little knowledge of the auditory periphery. Neurogram-based metrics provide an alternative, incorporating knowledge of the physiology of hearing by using a mathematical model of the auditory nerve response. In this work, SI was assessed utilizing different filterbank-based metrics (the speech intelligibility index and the speech-based envelope power spectrum model) and neurogram-based metrics, using the biologically inspired model of the auditory nerve proposed by Zilany, Bruce, Nelson, and Carney [(2009), J. Acoust. Soc. Am. 126(5), 2390-2412] as a front-end and the neurogram similarity metric and spectro temporal modulation index as a back-end. Then, the correlations with behavioural scores were computed. Results showed that neurogram-based metrics representing the speech envelope showed higher correlations with the behavioural scores at a word level. At a per-phoneme level, it was found that phoneme transitions contribute to higher correlations between objective measures that use speech envelope information at the auditory periphery level and behavioural data. The presented framework could function as a useful tool for the validation and tuning of speech materials, as well as a benchmark for the development of speech processing algorithms.

  3. Human Auditory and Adjacent Nonauditory Cerebral Cortices Are Hypermetabolic in Tinnitus as Measured by Functional Near-Infrared Spectroscopy (fNIRS).

    Science.gov (United States)

    Issa, Mohamad; Bisconti, Silvia; Kovelman, Ioulia; Kileny, Paul; Basura, Gregory J

    2016-01-01

    Tinnitus is the phantom perception of sound in the absence of an acoustic stimulus. To date, the purported neural correlates of tinnitus from animal models have not been adequately characterized with translational technology in the human brain. The aim of the present study was to measure changes in oxy-hemoglobin concentration from regions of interest (ROI; auditory cortex) and non-ROI (adjacent nonauditory cortices) during auditory stimulation and silence in participants with subjective tinnitus appreciated equally in both ears and in nontinnitus controls using functional near-infrared spectroscopy (fNIRS). Control and tinnitus participants with normal/near-normal hearing were tested during a passive auditory task. Hemodynamic activity was monitored over ROI and non-ROI under episodic periods of auditory stimulation with 750 or 8000 Hz tones, broadband noise, and silence. During periods of silence, tinnitus participants maintained increased hemodynamic responses in ROI, while a significant deactivation was seen in controls. Interestingly, non-ROI activity was also increased in the tinnitus group as compared to controls during silence. The present results demonstrate that both auditory and select nonauditory cortices have elevated hemodynamic activity in participants with tinnitus in the absence of an external auditory stimulus, a finding that may reflect basic science neural correlates of tinnitus that ultimately contribute to phantom sound perception.

  4. Human Auditory and Adjacent Nonauditory Cerebral Cortices Are Hypermetabolic in Tinnitus as Measured by Functional Near-Infrared Spectroscopy (fNIRS)

    Directory of Open Access Journals (Sweden)

    Mohamad Issa

    2016-01-01

    Tinnitus is the phantom perception of sound in the absence of an acoustic stimulus. To date, the purported neural correlates of tinnitus from animal models have not been adequately characterized with translational technology in the human brain. The aim of the present study was to measure changes in oxy-hemoglobin concentration from regions of interest (ROI; auditory cortex) and non-ROI (adjacent nonauditory cortices) during auditory stimulation and silence in participants with subjective tinnitus appreciated equally in both ears and in nontinnitus controls using functional near-infrared spectroscopy (fNIRS). Control and tinnitus participants with normal/near-normal hearing were tested during a passive auditory task. Hemodynamic activity was monitored over ROI and non-ROI under episodic periods of auditory stimulation with 750 or 8000 Hz tones, broadband noise, and silence. During periods of silence, tinnitus participants maintained increased hemodynamic responses in ROI, while a significant deactivation was seen in controls. Interestingly, non-ROI activity was also increased in the tinnitus group as compared to controls during silence. The present results demonstrate that both auditory and select nonauditory cortices have elevated hemodynamic activity in participants with tinnitus in the absence of an external auditory stimulus, a finding that may reflect basic science neural correlates of tinnitus that ultimately contribute to phantom sound perception.

  5. Interaction of streaming and attention in human auditory cortex.

    Science.gov (United States)

    Gutschalk, Alexander; Rupp, André; Dykstra, Andrew R

    2015-01-01

    Serially presented tones are sometimes segregated into two perceptually distinct streams. An ongoing debate is whether this basic streaming phenomenon reflects automatic processes or requires attention focused to the stimuli. Here, we examined the influence of focused attention on streaming-related activity in human auditory cortex using magnetoencephalography (MEG). Listeners were presented with a dichotic paradigm in which left-ear stimuli consisted of canonical streaming stimuli (ABA_ or ABAA) and right-ear stimuli consisted of a classical oddball paradigm. In phase one, listeners were instructed to attend the right-ear oddball sequence and detect rare deviants. In phase two, they were instructed to attend the left ear streaming stimulus and report whether they heard one or two streams. The frequency difference (ΔF) of the sequences was set such that the smallest and largest ΔF conditions generally induced one- and two-stream percepts, respectively. Two intermediate ΔF conditions were chosen to elicit bistable percepts (i.e., either one or two streams). Attention enhanced the peak-to-peak amplitude of the P1-N1 complex, but only for ambiguous ΔF conditions, consistent with the notion that automatic mechanisms for streaming tightly interact with attention and that the latter is of particular importance for ambiguous sound sequences.

  6. Reverse correlation analysis of auditory-nerve fiber responses to broadband noise in a bird, the barn owl.

    Science.gov (United States)

    Fontaine, Bertrand; Köppl, Christine; Peña, Jose L

    2015-02-01

    While the barn owl has been extensively used as a model for sound localization and temporal coding, less is known about the mechanisms at its sensory organ, the basilar papilla (homologous to the mammalian cochlea). In this paper, we characterize, for the first time in the avian system, the auditory nerve fiber responses to broadband noise using reverse correlation. We use the derived impulse responses to study the processing of sounds in the cochlea of the barn owl. We characterize the frequency tuning, phase, instantaneous frequency, and relationship to input level of the impulse responses. We show that even features as complex as the phase dependence on input level can still be consistent with simple linear filtering. Where possible, we compare our results with mammalian data. We identify salient differences between the barn owl and mammals, e.g., a much smaller frequency glide slope and a bimodal impulse response for the barn owl, and discuss what they might indicate about cochlear mechanics. While important for research on the avian auditory system, the results from this paper also allow us to examine hypotheses put forward for the mammalian cochlea.

  7. Modulatory effects of spectral energy contrasts on lateral inhibition in the human auditory cortex: an MEG study.

    Directory of Open Access Journals (Sweden)

    Alwina Stein

    We investigated the modulation of lateral inhibition in the human auditory cortex by means of magnetoencephalography (MEG). In the first experiment, five acoustic masking stimuli (MS), consisting of noise passed through a digital notch filter centered at 1 kHz, were presented. The spectral energy contrasts of four MS were modified systematically by either amplifying or attenuating the edge-frequency bands around the notch (EFB) by 30 dB. Additionally, the width of EFB amplification/attenuation was varied (3/8 or 7/8 octave on each side of the notch). N1m and auditory steady state responses (ASSR), evoked by a test stimulus with a carrier frequency of 1 kHz, were evaluated. A consistent dependence of N1m responses upon the preceding MS was observed. The minimal N1m source strength was found in the narrowest amplified EFB condition, representing pronounced lateral inhibition of neurons with characteristic frequencies corresponding to the center frequency of the notch (NOTCH CF) in secondary auditory cortical areas. We tested in a second experiment whether an even narrower bandwidth of EFB amplification would result in further enhanced lateral inhibition of the NOTCH CF. Here three MS were presented, two of which were modified by amplifying 1/8 or 1/24 octave EFB widths around the notch. We found that N1m responses were again significantly smaller in both amplified EFB conditions as compared to the NFN condition. To our knowledge, this is the first study demonstrating that the energy and width of the EFB around the notch modulate lateral inhibition in human secondary auditory cortical areas. Because it is assumed that chronic tinnitus is caused by a lack of lateral inhibition, these new insights could be used as a tool for further improvement of tinnitus treatments focusing on the lateral inhibition of neurons corresponding to the tinnitus frequency, such as the tailor-made notched music training.

  8. Changes in size and shape of auditory hair cells in vivo during noise-induced temporary threshold shift.

    Science.gov (United States)

    Dew, L A; Owen, R G; Mulroy, M J

    1993-03-01

    In this study we describe changes in the size and shape of auditory hair cells of the alligator lizard in vivo during noise-induced temporary threshold shift. These changes consist of a decrease in cell volume, a decrease in cell length and an increase in cell width. We speculate that these changes are due to relaxation of cytoskeletal contractile elements and osmotic loss of intracellular water. We also describe a decrease in the surface area of the hair cell plasmalemma, and speculate that it is related to the endocytosis and intracellular accumulation of cell membrane during synaptic vesicle recycling. Finally we describe an increase in the endolymphatic surface area of the hair cell, and speculate that this could alter the micromechanics of the stereociliary tuft to attenuate the effective stimulus.

  9. Coding of melodic gestalt in human auditory cortex.

    Science.gov (United States)

    Schindler, Andreas; Herdener, Marcus; Bartels, Andreas

    2013-12-01

    The perception of a melody is invariant to the absolute properties of its constituting notes, but depends on the relation between them, that is, the melody's relative pitch profile. In fact, a melody's "Gestalt" is recognized regardless of the instrument or key used to play it. Pitch processing in general is assumed to occur at the level of the auditory cortex. However, it is unknown whether early auditory regions are able to encode pitch sequences integrated over time (i.e., melodies) and whether the resulting representations are invariant to specific keys. Here, we presented participants with different melodies composed of the same 4 harmonic pitches during functional magnetic resonance imaging recordings. Additionally, we played the same melodies transposed into different keys and on different instruments. We found that melodies were invariantly represented by their blood oxygen level-dependent activation patterns in primary and secondary auditory cortices across instruments, and also across keys. Our findings extend common hierarchical models of auditory processing by showing that melodies are encoded independently of absolute pitch and on the basis of their relative pitch profile as early as the primary auditory cortex.

  10. Human auditory steady state responses to binaural and monaural beats.

    Science.gov (United States)

    Schwarz, D W F; Taylor, P

    2005-03-01

    Binaural beat sensations depend upon a central combination of two different temporally encoded tones, separately presented to the two ears. We tested the feasibility of recording an auditory steady state evoked response (ASSR) at the binaural beat frequency in order to find a measure of temporal coding of sound in the human EEG. We stimulated each ear with a distinct tone, the two differing in frequency by 40 Hz, to record a binaural beat ASSR. As a control, we evoked a beat ASSR in response to both tones presented to the same ear. We band-pass filtered the EEG at 40 Hz, averaged with respect to stimulus onset, and compared ASSR amplitudes and phases, extracted from a sinusoidal non-linear regression fit to a 40 Hz period average. A 40 Hz binaural beat ASSR was evoked at a low mean stimulus frequency (400 Hz) but became undetectable beyond 3 kHz. Its amplitude was smaller than that of the acoustic beat ASSR, which was evoked at both low and high frequencies. Both ASSR types had maxima at fronto-central leads and displayed a fronto-occipital phase delay of several ms. The dependence of the 40 Hz binaural beat ASSR on stimuli at low, temporally coded tone frequencies suggests that it may objectively assess temporal sound coding ability. The phase shift across the electrode array is evidence for more than one origin of the 40 Hz oscillations. The binaural beat ASSR is an evoked response, with novel diagnostic potential, to a signal that is not present in the stimulus but is generated within the brain.
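
    The amplitude and phase extraction described above (averaging over 40 Hz periods and fitting a sinusoid) can be sketched as follows. For simplicity the fit here is linear in sine and cosine coefficients rather than the non-linear regression named in the record, and the sampling rate and synthetic test signal are invented for illustration.

      import numpy as np

      def assr_amplitude_phase(evoked, fs, f0=40.0):
          """Estimate steady-state response amplitude and phase at frequency f0.

          evoked : 1-D averaged response, already band-pass filtered around f0.
          Folds the signal into one f0 period and fits a sinusoid by least squares.
          """
          period = int(round(fs / f0))
          n_cycles = evoked.size // period
          cycle = evoked[:n_cycles * period].reshape(n_cycles, period).mean(axis=0)
          t = np.arange(period) / fs
          design = np.column_stack([np.sin(2 * np.pi * f0 * t),
                                    np.cos(2 * np.pi * f0 * t)])
          (a, b), *_ = np.linalg.lstsq(design, cycle, rcond=None)
          return np.hypot(a, b), np.arctan2(b, a)

      # Toy usage with a synthetic 40 Hz response embedded in noise.
      fs = 1000
      t = np.arange(0, 1.0, 1.0 / fs)
      rng = np.random.default_rng(2)
      signal = 2.0 * np.sin(2 * np.pi * 40 * t + 0.6) + rng.standard_normal(t.size)
      print(assr_amplitude_phase(signal, fs))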

  11. In Vitro Studies and Preliminary Mathematical Model for Jet Fuel and Noise Induced Auditory Impairment

    Science.gov (United States)

    2015-06-01

    impairment in stimulus encoding was exacerbated by low level (non-damaging) noise (8 kHz octave band at 85 dB sound pressure level) exposure. The results...and necrosis were enhanced when cells were exposed to JP-8 or hydrocarbons with oligomycin (in vitro noise surrogate). The PD model is designed to...death. The PD model will be parameterized using results from in vitro studies, and is designed to be interfaced with a physiologically-based

  12. BDNF in Lower Brain Parts Modifies Auditory Fiber Activity to Gain Fidelity but Increases the Risk for Generation of Central Noise After Injury.

    Science.gov (United States)

    Chumak, Tetyana; Rüttiger, Lukas; Lee, Sze Chim; Campanelli, Dario; Zuccotti, Annalisa; Singer, Wibke; Popelář, Jiří; Gutsche, Katja; Geisler, Hyun-Soon; Schraven, Sebastian Philipp; Jaumann, Mirko; Panford-Walsh, Rama; Hu, Jing; Schimmang, Thomas; Zimmermann, Ulrike; Syka, Josef; Knipper, Marlies

    2016-10-01

    For all sensory organs, the establishment of spatial and temporal cortical resolution is assumed to be initiated by the first sensory experience and a BDNF-dependent increase in intracortical inhibition. To address the potential of cortical BDNF for sound processing, we used mice with a conditional deletion of BDNF in which Cre expression was under the control of the Pax2 or TrkC promoter. BDNF deletion profiles between these mice differ in the organ of Corti (BDNF(Pax2)-KO) versus the auditory cortex and hippocampus (BDNF(TrkC)-KO). We demonstrate that BDNF(Pax2)-KO but not BDNF(TrkC)-KO mice exhibit reduced sound-evoked suprathreshold ABR waves at the level of the auditory nerve (wave I) and inferior colliculus (IC) (wave IV), indicating that BDNF in lower brain regions but not in the auditory cortex improves sound sensitivity during hearing onset. Extracellular recording of IC neurons of BDNF(Pax2) mutant mice revealed that the reduced sensitivity of auditory fibers in these mice went hand in hand with elevated thresholds, reduced dynamic range, prolonged latency, and increased inhibitory strength in IC neurons. Reduced parvalbumin-positive contacts were found in the ascending auditory circuit, including the auditory cortex and hippocampus, of BDNF(Pax2)-KO but not of BDNF(TrkC)-KO mice. Also, BDNF(Pax2)-WT but not BDNF(Pax2)-KO mice lost basal inhibitory strength in IC neurons after acoustic trauma. These findings suggest that BDNF in the lower parts of the auditory system drives auditory fidelity along the entire ascending pathway up to the cortex by increasing inhibitory strength in behaviorally relevant frequency regions. Fidelity and inhibitory strength can be lost following auditory nerve injury, leading to diminished sensory outcome and increased central noise.

  13. The effect of precision and power grips on activations in human auditory cortex

    Directory of Open Access Journals (Sweden)

    Patrik Alexander Wikman

    2015-10-01

    The neuroanatomical pathways interconnecting auditory and motor cortices play a key role in current models of human auditory cortex (AC). Evidently, auditory-motor interaction is important in speech and music production, but the significance of these cortical pathways in other auditory processing is not well known. We investigated the general effects of motor responding on AC activations to sounds during auditory and visual tasks. During all task blocks, subjects detected targets in the designated modality, reported the relative number of targets at the end of the block, and ignored the stimuli presented in the opposite modality. In each block, they were also instructed to respond to targets either using a precision grip, power grip, or to give no overt target responses. We found that motor responding strongly modulated AC activations. First, during both visual and auditory tasks, activations in widespread regions of AC decreased when subjects made precision and power grip responses to targets. Second, activations in AC were modulated by grip type during the auditory but not during the visual task. Further, the motor effects were distinct from the strong attention-related modulations in AC. These results are consistent with the idea that operations in AC are shaped by its connections with motor cortical regions.

  14. Categorical vowel perception enhances the effectiveness and generalization of auditory feedback in human-machine-interfaces.

    Directory of Open Access Journals (Sweden)

    Eric Larson

    Human-machine interface (HMI) designs offer the possibility of improving quality of life for patient populations as well as augmenting normal user function. Despite pragmatic benefits, auditory feedback remains underutilized for HMI control, in part due to observed limitations in effectiveness. The goal of this study was to determine the extent to which categorical speech perception could be used to improve an auditory HMI. Using surface electromyography, 24 healthy speakers of American English participated in 4 sessions to learn to control an HMI using auditory feedback (provided via vowel synthesis). Participants trained on 3 targets in sessions 1-3 and were tested on 3 novel targets in session 4. An "established categories with text cues" group of eight participants was trained and tested on auditory targets corresponding to standard American English vowels using auditory and text target cues. An "established categories without text cues" group of eight participants was trained and tested on the same targets using only auditory cuing of target vowel identity. A "new categories" group of eight participants was trained and tested on targets that corresponded to vowel-like sounds not part of American English. Analyses of user performance revealed significant effects of session and group (established categories groups and the new categories group), and a trend for an interaction between session and group. Results suggest that auditory feedback can be effectively used for HMI operation when paired with established categorical (native) vowel targets with an unambiguous cue.

  15. Long-term, passive exposure to non-traumatic acoustic noise induces neural adaptation in the adult rat medial geniculate body and auditory cortex.

    Science.gov (United States)

    Lau, Condon; Zhang, Jevin W; McPherson, Bradley; Pienkowski, Martin; Wu, Ed X

    2015-02-15

    Exposure to loud sounds can lead to permanent hearing loss, i.e., the elevation of hearing thresholds. Exposure at more moderate sound pressure levels (SPLs) (non-traumatic and within occupational limits) may not elevate thresholds, but could in the long term be detrimental to speech intelligibility by altering its spectrotemporal representation in the central auditory system. In support of this, electrophysiological and behavioral changes following long-term, passive (no conditioned learning) exposure at moderate SPLs have recently been observed in adult animals. To assess the potential effects of moderately loud noise on the entire auditory brain, we employed functional magnetic resonance imaging (fMRI) to study noise-exposed adult rats. We find that passive, pulsed broadband noise exposure for two months at 65 dB SPL leads to a decrease of the sound-evoked blood oxygenation level-dependent fMRI signal in the thalamic medial geniculate body (MGB) and in the auditory cortex (AC). This points to the thalamo-cortex as the site of the neural adaptation to the moderately noisy environment. The signal reduction is statistically significant in the MGB during 10 Hz pulsed acoustic stimulation, and noise exposure has a greater effect on the processing of higher pulse rate sounds. This study has enhanced our understanding of functional changes following exposure by mapping changes across the entire auditory brain. These findings have important implications for speech processing, which depends on accurate processing of sounds with a wide spectrum of pulse rates.

  16. Segmental processing in the human auditory dorsal stream.

    Science.gov (United States)

    Zaehle, Tino; Geiser, Eveline; Alter, Kai; Jancke, Lutz; Meyer, Martin

    2008-07-18

    In the present study we investigated the functional organization of sublexical auditory perception with specific respect to auditory spectro-temporal processing in speech and non-speech sounds. Participants discriminated verbal and nonverbal auditory stimuli according to either spectral or temporal acoustic features in the context of a sparse event-related functional magnetic resonance imaging (fMRI) study. Based on recent models of speech processing, we hypothesized that auditory segmental processing, as is required in the discrimination of speech and non-speech sound according to its temporal features, will lead to a specific involvement of a left-hemispheric dorsal processing network comprising the posterior portion of the inferior frontal cortex and the inferior parietal lobe. In agreement with our hypothesis results revealed significant responses in the posterior part of the inferior frontal gyrus and the parietal operculum of the left hemisphere when participants had to discriminate speech and non-speech stimuli based on subtle temporal acoustic features. In contrast, when participants had to discriminate speech and non-speech stimuli on the basis of changes in the frequency content, we observed bilateral activations along the middle temporal gyrus and superior temporal sulcus. The results of the present study demonstrate an involvement of the dorsal pathway in the segmental sublexical analysis of speech sounds as well as in the segmental acoustic analysis of non-speech sounds with analogous spectro-temporal characteristics.

  17. Auditory-Visual Perception of Changing Distance by Human Infants.

    Science.gov (United States)

    Walker-Andrews, Arlene S.; Lennon, Elizabeth M.

    1985-01-01

    Examines, in two experiments, 5-month-old infants' sensitivity to auditory-visual specification of distance and direction of movement. One experiment presented two films with soundtracks in either a match or mismatch condition; the second showed the two films side-by-side with a single soundtrack appropriate to one. Infants demonstrated visual…

  18. Neuronal representations of distance in human auditory cortex.

    Science.gov (United States)

    Kopčo, Norbert; Huang, Samantha; Belliveau, John W; Raij, Tommi; Tengshe, Chinmayi; Ahveninen, Jyrki

    2012-07-03

    Neuronal mechanisms of auditory distance perception are poorly understood, largely because contributions of intensity and distance processing are difficult to differentiate. Typically, the received intensity increases when sound sources approach us. However, we can also distinguish between soft-but-nearby and loud-but-distant sounds, indicating that distance processing can also be based on intensity-independent cues. Here, we combined behavioral experiments, fMRI measurements, and computational analyses to identify the neural representation of distance independent of intensity. In a virtual reverberant environment, we simulated sound sources at varying distances (15-100 cm) along the right-side interaural axis. Our acoustic analysis suggested that, of the individual intensity-independent depth cues available for these stimuli, direct-to-reverberant ratio (D/R) is more reliable and robust than interaural level difference (ILD). However, on the basis of our behavioral results, subjects' discrimination performance was more consistent with complex intensity-independent distance representations, combining both available cues, than with representations on the basis of either D/R or ILD individually. fMRI activations to sounds varying in distance (containing all cues, including intensity), compared with activations to sounds varying in intensity only, were significantly increased in the planum temporale and posterior superior temporal gyrus contralateral to the direction of stimulation. This fMRI result suggests that neurons in posterior nonprimary auditory cortices, in or near the areas processing other auditory spatial features, are sensitive to intensity-independent sound properties relevant for auditory distance perception.

  19. On the Origin of the 1,000 Hz Peak in the Spectrum of the Human Tympanic Electrical Noise

    Directory of Open Access Journals (Sweden)

    Javiera Pardo-Jadue

    2017-07-01

    The spectral analysis of the spontaneous activity recorded with an electrode positioned near the round window of the guinea pig cochlea shows a broad energy peak between 800 and 1,000 Hz. This spontaneous electric activity is called round window noise or ensemble background activity. In guinea pigs, the proposed origin of this peak is the random sum of the extracellular field potentials generated by action potentials of auditory nerve neurons. In this study, we used a non-invasive method to record the tympanic electric noise (TEN) in humans by means of a tympanic wick electrode. We recorded a total of 24 volunteers, under silent conditions or in response to stimuli of different modalities, including auditory, vestibular, and motor activity. Our results show a reliable peak of spontaneous activity at ~1,000 Hz in all studied subjects. In addition, we found stimulus-driven responses with broad-band noise that in most subjects produced an increase in the magnitude of the energy band around 1,000 Hz (between 650 and 1,200 Hz). Our results with the vestibular stimulation were not conclusive, as we found responses with all caloric stimuli, including 37°C. No responses were observed with motor tasks, like eye movements or blinking. We demonstrate the feasibility of recording neural activity from the electric noise of the tympanic membrane with a non-invasive method. From our results, we suggest that the 1,000 Hz component of the TEN has a mixed origin including peripheral and central auditory pathways. This research opens up the possibility of future clinical non-invasive techniques for the functional study of auditory and vestibular nerves in humans.

  20. Task-specific reorganization of the auditory cortex in deaf humans.

    Science.gov (United States)

    Bola, Łukasz; Zimmermann, Maria; Mostowski, Piotr; Jednoróg, Katarzyna; Marchewka, Artur; Rutkowski, Paweł; Szwed, Marcin

    2017-01-24

    The principles that guide large-scale cortical reorganization remain unclear. In the blind, several visual regions preserve their task specificity; ventral visual areas, for example, become engaged in auditory and tactile object-recognition tasks. It remains open whether task-specific reorganization is unique to the visual cortex or, alternatively, whether this kind of plasticity is a general principle applying to other cortical areas. Auditory areas can become recruited for visual and tactile input in the deaf. Although nonhuman data suggest that this reorganization might be task specific, human evidence has been lacking. Here we enrolled 15 deaf and 15 hearing adults into a functional MRI experiment during which they discriminated between temporally complex sequences of stimuli (rhythms). Both deaf and hearing subjects performed the task visually, in the central visual field. In addition, hearing subjects performed the same task in the auditory modality. We found that the visual task robustly activated the auditory cortex in deaf subjects, peaking in the posterior-lateral part of high-level auditory areas. This activation pattern was strikingly similar to the pattern found in hearing subjects performing the auditory version of the task. Although performing the visual task in deaf subjects induced an increase in functional connectivity between the auditory cortex and the dorsal visual cortex, no such effect was found in hearing subjects. We conclude that in deaf humans the high-level auditory cortex switches its input modality from sound to vision but preserves its task-specific activation pattern independent of input modality. Task-specific reorganization thus might be a general principle that guides cortical plasticity in the brain.

  1. Dichotic multiple-frequency auditory steady-state responses in evaluating the hearing thresholds of occupational noise-exposed workers

    Directory of Open Access Journals (Sweden)

    Ruey-Fen Hsu

    2011-08-01

    An objective, fast, and reasonably accurate assessment test that allows for easy interpretation of the responses of the hearing thresholds at all frequencies of a conventional audiogram is needed to resolve the medicolegal aspects of an occupational hearing injury. This study evaluated the use of dichotic multiple-frequency auditory steady-state responses (Mf-ASSR) to predict the hearing thresholds in workers exposed to high levels of noise. The study sample included 34 workers with noise-induced hearing impairment. Thresholds of pure-tone audiometry (PTA) and Mf-ASSRs at four frequencies were assessed. The differences and correlations between the thresholds of Mf-ASSRs and PTA were determined. The results showed that, on average, Mf-ASSR curves corresponded well with the thresholds of the PTA contours averaged across subjects. The Mf-ASSRs were 20±8 dB, 16±9 dB, 12±9 dB, and 11±12 dB above the thresholds of the PTA for 500 Hz, 1,000 Hz, 2,000 Hz, and 4,000 Hz, respectively. The thresholds of the PTA and the Mf-ASSRs were significantly correlated (r = 0.77–0.89). We found that the measurement of Mf-ASSRs is easy and potentially time saving, provides a response at all dichotic multiple frequencies of the conventional audiogram, reduces variability in the interpretation of the responses, and correlates well with the behavioral hearing thresholds in subjects with occupational noise-induced hearing impairment. Mf-ASSR can be a valuable aid in the adjustment of compensation cases.
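
    The comparison reported above reduces to threshold differences and correlations between Mf-ASSR and PTA thresholds at each audiometric frequency. A minimal sketch with invented threshold values (not the study's data) is shown below.

      import numpy as np
      from scipy.stats import pearsonr

      # Hypothetical thresholds (dB HL) for a few ears at one audiometric frequency;
      # real data would come from the behavioural PTA and the Mf-ASSR measurements.
      pta = np.array([35.0, 40.0, 55.0, 60.0, 70.0, 45.0, 50.0])
      assr = np.array([50.0, 58.0, 70.0, 72.0, 85.0, 60.0, 62.0])

      difference = assr - pta
      r, p = pearsonr(pta, assr)
      print(f"ASSR-PTA difference: {difference.mean():.1f} +/- {difference.std(ddof=1):.1f} dB")
      print(f"Pearson r = {r:.2f}, p = {p:.4f}")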

  2. Exponential processes in human auditory excitation and adaptation.

    Science.gov (United States)

    Formby, C; Rutledge, J C; Sherlock, L P

    2002-02-01

    Peripheral auditory adaptation has been studied extensively in animal models, and multiple exponential components have been identified. This study explores the feasibility of estimating these component processes for human listeners with a peripheral model of adaptation. The processes were estimated from off-frequency masked detection data that probed temporal masking responses to a gated narrowband masker. The resulting response patterns reflected step-like onset and offset features with characteristically little evidence of confounding backward and forward masking. The model was implemented with linear combinations of exponential functions to represent the unadapted excitation response to gating the masker on and then off and the opposing effects of adaptation in each instance. The onset and offset of the temporal masking response were assumed to be approximately inverse operations and were modeled independently in this scheme. The unadapted excitation response at masker onset and the reversed excitation response at masker offset were each represented in the model by a single exponential function. The adaptation processes were modeled by three independent exponential functions, which were reversed at masker offset. Each adaptation component was subtractive and partially negated the unadapted excitation response to the dynamic masker. This scheme allowed for quantification of the response amplitude, action latency, and time constant for the unadapted excitation component and for each adaptation component. The results reveal that (1) the amplitudes of the unadapted excitation and reversed excitation components grow nonlinearly with masker level and mirror the 'compressive' input-output velocity response of the basilar membrane; (2) the time constants for the unadapted excitation and reversed excitation components are related inversely to masker intensity, which is compatible with neural synchrony increasing at masker onset (or offset) with increasing masker strength
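
    The model structure described above, an unadapted excitation component combined with several subtractive exponential adaptation components, each with its own amplitude, action latency and time constant, can be sketched for the masker-onset response as follows; all parameter values are placeholders rather than estimates fitted to listeners' data.

      import numpy as np

      def onset_masking_response(t_ms, onset_amp=1.0, onset_tau=5.0,
                                 adapt_amps=(0.3, 0.2, 0.1),
                                 adapt_taus=(10.0, 60.0, 300.0),
                                 adapt_latencies=(2.0, 5.0, 20.0)):
          """Unadapted excitation minus three subtractive exponential adaptation terms.

          t_ms : array of times (ms) after masker onset. Parameter values are
          illustrative placeholders, not values estimated from masking data.
          """
          response = onset_amp * (1.0 - np.exp(-t_ms / onset_tau))
          for amp, tau, lat in zip(adapt_amps, adapt_taus, adapt_latencies):
              active = t_ms >= lat
              # Each adaptation component starts after its action latency and
              # partially negates the unadapted excitation response.
              response[active] -= amp * (1.0 - np.exp(-(t_ms[active] - lat) / tau))
          return response

      t = np.linspace(0.0, 500.0, 1001)
      print(onset_masking_response(t)[[0, 200, 1000]])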

  3. Qualitative and quantitative assessment of noise at moderate intensities on extra-auditory system in adult rats

    Directory of Open Access Journals (Sweden)

    Noura Gannouni

    2013-01-01

    Noise has long been recognized as an environmental stressor causing physiological, psychological and behavioral changes in humans. The aim of the present study was to determine the effect of chronic noise at moderate intensities on glandular and cardiac function and on oxidative status. The question arises from working conditions in call centers, where operators carry out simple and repetitive tasks; we therefore sought to ascertain the effects of moderate sound levels on rats exposed to the same noise levels during periods similar to those experienced by call center operators. Male Wistar rats were exposed to an octave-band noise (8-16 kHz) at 70 or 85 dB(A) for 6 h/day for 3 months. Corticosterone levels, oxidative status, and functional parameters of the adrenal and thyroid glands and of cardiac tissue were determined. Long-term noise exposure at both intensities (70 and 85 dB(A)) increased corticosterone levels and affected various parameters of endocrine gland and cardiac function. Markers of oxidative stress (catalase, superoxide dismutase and lipid peroxidation) were increased. These results imply that long-term exposure to noise, even at moderate levels, may affect physiological function through neuroendocrine modulation and oxidative imbalance. The physiological changes observed at the different sound levels are consistent with the concept of allostatic load, a homeostatic response of the body.

  4. Noise exposure and auditory thresholds of German airline pilots: a cross-sectional study.

    Science.gov (United States)

    Müller, Reinhard; Schneider, Joachim

    2017-05-30

    The cockpit workplace of airline pilots is a noisy environment. This study examines the hearing thresholds of pilots with respect to ambient noise and communication sound. The hearing of 487 German pilots was analysed by audiometry in the frequency range of 125 Hz-16 kHz in varying age groups. Cockpit noise (free-field) data and communication sound (acoustic manikin) measurements were evaluated. The ambient noise levels in cockpits were found to be between 74 and 80 dB(A), and the sound pressure levels under the headset were found to be between 84 and 88 dB(A). The left-right threshold differences at 3, 4 and 6 kHz show evidence of impaired hearing at the left ear, which worsens with age. In the age groups <40/≥40 years, the mean differences are 2/3 dB at 3 kHz, 2/4 dB at 4 kHz and 1/6 dB at 6 kHz. In the pilot group that used mostly the left ear for communication tasks (43 of 45 are in the older age group), the mean difference is 6 dB at 3 kHz, 7 dB at 4 kHz and 10 dB at 6 kHz. The pilots who used the headset only at the right ear also showed worse hearing at the left ear, by 2 dB at 3 kHz and 3 dB at 4 and 6 kHz. The frequency-corrected exposure levels under the headset are 7-11 dB(A) higher than the ambient noise, with an averaged signal-to-noise ratio for communication of about 10 dB(A). The left ear seems to be more susceptible to hearing loss than the right ear. Active noise reduction systems allow for a reduced sound level of the communication signal, below the upper exposure action value of 85 dB(A), and allow for a more relaxed working environment for pilots.

  5. Auditory evoked fields elicited by spectral, temporal, and spectral-temporal changes in human cerebral cortex

    Directory of Open Access Journals (Sweden)

    Hidehiko eOkamoto

    2012-05-01

    Natural sounds contain complex spectral components, which are temporally modulated as time-varying signals. Recent studies have suggested that the auditory system encodes spectral and temporal sound information differently. However, it remains unresolved how the human brain processes sounds containing both spectral and temporal changes. In the present study, we investigated human auditory evoked responses elicited by spectral, temporal, and spectral-temporal sound changes by means of magnetoencephalography (MEG). The auditory evoked responses elicited by the spectral-temporal change were very similar to those elicited by the spectral change, but those elicited by the temporal change were delayed by 30-50 ms and differed from the others in morphology. The results suggest that human brain responses corresponding to spectral sound changes precede those corresponding to temporal sound changes, even when the spectral and temporal changes occur simultaneously.

  6. Noise exposure of immature rats can induce different age-dependent extra-auditory alterations that can be partially restored by rearing animals in an enriched environment.

    Science.gov (United States)

    Molina, S J; Capani, F; Guelman, L R

    2016-04-01

    It has been previously shown that different extra-auditory alterations can be induced in animals exposed to noise at 15 days of age. However, data regarding exposure of younger animals, which do not yet have a functional auditory system, have not been obtained. Moreover, the possibility of finding a helpful strategy to restore these changes has not been explored so far. Therefore, the aims of the present work were to test age-related differences in diverse hippocampal-dependent behavioral measurements that might be affected in noise-exposed rats, as well as to evaluate the effectiveness of a potential neuroprotective strategy, the enriched environment (EE), on noise-induced behavioral alterations. Male Wistar rats of 7 and 15 days of age were exposed to moderate levels of noise for two hours. At weaning, animals were separated and reared either in standard or in EE cages for one week. At 28 days of age, different hippocampal-dependent behavioral assessments were performed. Results show that rats exposed to noise at 7 and 15 days were differentially affected. Moreover, EE was effective in restoring all altered variables when animals were exposed at 7 days, while only a few were restored in rats exposed at 15 days. The present findings suggest that noise exposure was capable of triggering significant hippocampal-related behavioral alterations that differed depending on the age of exposure. In addition, it could be proposed that hearing structures do not seem to be necessarily involved in the generation of noise-induced hippocampal-related behaviors, as these were observed even in animals with an immature auditory pathway. Finally, it could be hypothesized that the differential restoration achieved by EE rearing might also depend on the degree of maturation at the time of exposure and on the variable evaluated, with younger animals being more susceptible to environmental manipulations.

  7. High resolution 1H NMR-based metabonomic study of the auditory cortex analogue of developing chick (Gallus gallus domesticus) following prenatal chronic loud music and noise exposure.

    Science.gov (United States)

    Kumar, Vivek; Nag, Tapas Chandra; Sharma, Uma; Mewar, Sujeet; Jagannathan, Naranamangalam R; Wadhwa, Shashi

    2014-10-01

    Proper functional development of the auditory cortex (ACx) critically depends on early relevant sensory experiences. Exposure to high intensity noise (industrial/traffic) and music, a current public health concern, may disrupt the proper development of the ACx and associated behavior. The biochemical mechanisms associated with such activity dependent changes during development are poorly understood. Here we report the effects of prenatal chronic (last 10 days of incubation), 110dB sound pressure level (SPL) music and noise exposure on metabolic profile of the auditory cortex analogue/field L (AuL) in domestic chicks. Perchloric acid extracts of AuL of post hatch day 1 chicks from control, music and noise groups were subjected to high resolution (700MHz) (1)H NMR spectroscopy. Multivariate regression analysis of the concentration data of 18 metabolites revealed a significant class separation between control and loud sound exposed groups, indicating a metabolic perturbation. Comparison of absolute concentration of metabolites showed that overstimulation with loud sound, independent of spectral characteristics (music or noise) led to extensive usage of major energy metabolites, e.g., glucose, β-hydroxybutyrate and ATP. On the other hand, high glutamine levels and sustained levels of neuromodulators and alternate energy sources, e.g., creatine, ascorbate and lactate indicated a systems restorative measure in a condition of neuronal hyperactivity. At the same time, decreased aspartate and taurine levels in the noise group suggested a differential impact of prenatal chronic loud noise over music exposure. Thus prenatal exposure to loud sound especially noise alters the metabolic activity in the AuL which in turn can affect the functional development and later auditory associated behaviour.

  8. Sparse Spectro-Temporal Receptive Fields Based on Multi-Unit and High-Gamma Responses in Human Auditory Cortex.

    Directory of Open Access Journals (Sweden)

    Rick L Jenison

    Spectro-Temporal Receptive Fields (STRFs) were estimated from both multi-unit sorted clusters and high-gamma power responses in human auditory cortex. Intracranial electrophysiological recordings were used to measure responses to a random chord sequence of Gammatone stimuli. Traditional methods for estimating STRFs from single-unit recordings, such as spike-triggered averages, tend to be noisy and are less robust to other response signals such as local field potentials. We present an extension to recently advanced methods for estimating STRFs from generalized linear models (GLM). A new variant of regression using regularization that penalizes non-zero coefficients is described, which results in a sparse solution. The frequency-time structure of the STRF tends toward grouping in different areas of frequency-time, and we demonstrate that group sparsity-inducing penalties applied to GLM estimates of STRFs reduce the background noise while preserving the complex internal structure. The contribution of local spiking activity to the high-gamma power signal was factored out of the STRF using the GLM method, and this contribution was significant in 85 percent of the cases. Although the GLM methods have been used to estimate STRFs in animals, this study examines the detailed structure directly from auditory cortex in the awake human brain. We used this approach to identify an abrupt change in the best frequency of estimated STRFs along posteromedial-to-anterolateral recording locations along the long axis of Heschl's gyrus. This change correlates well with a proposed transition from core to non-core auditory fields previously identified using the temporal response properties of Heschl's gyrus recordings elicited by click-train stimuli.
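
    As a simplified stand-in for the group-sparse GLM estimation described above, the sketch below builds a time-lagged stimulus design matrix and fits an L1-penalized (lasso) linear model, which also yields a sparse receptive field; the stimulus, response, and regularization weight are synthetic, and the penalty is plain L1 rather than the group-sparsity penalty used in the study.

      import numpy as np
      from sklearn.linear_model import Lasso

      def sparse_strf(spectrogram, response, n_lags=20, alpha=0.01):
          """Estimate a sparse STRF by L1-penalized linear regression.

          spectrogram : (n_freqs, n_times) stimulus representation.
          response    : (n_times,) spike counts or high-gamma power.
          Returns an (n_freqs, n_lags) receptive field estimate.
          """
          n_freqs, n_times = spectrogram.shape
          rows = [spectrogram[:, t - n_lags:t].ravel() for t in range(n_lags, n_times)]
          X = np.asarray(rows)
          y = response[n_lags:]
          model = Lasso(alpha=alpha, max_iter=5000).fit(X, y)
          return model.coef_.reshape(n_freqs, n_lags)

      # Toy usage: a random stimulus and a response driven by one frequency channel.
      rng = np.random.default_rng(3)
      spec = rng.standard_normal((16, 2000))
      resp = np.convolve(spec[5], np.exp(-np.arange(10) / 3.0), mode="same")
      resp += 0.5 * rng.standard_normal(resp.size)
      print(sparse_strf(spec, resp).shape)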

  9. Dynamic Correlations between Intrinsic Connectivity and Extrinsic Connectivity of the Auditory Cortex in Humans

    Directory of Open Access Journals (Sweden)

    Zhuang Cui

    2017-08-01

    Full Text Available The arrival of sound signals in the auditory cortex (AC) triggers both local and inter-regional signal propagations over time up to hundreds of milliseconds and builds up both intrinsic functional connectivity (iFC) and extrinsic functional connectivity (eFC) of the AC. However, interactions between iFC and eFC are largely unknown. Using intracranial stereo-electroencephalographic recordings in people with drug-refractory epilepsy, this study mainly investigated the temporal dynamics of the relationship between iFC and eFC of the AC. The results showed that a Gaussian wideband-noise burst markedly elicited potentials in both the AC and numerous higher-order cortical regions outside the AC (non-auditory cortices). Granger causality analyses revealed that in the earlier time window, iFC of the AC was positively correlated with both eFC from the AC to the inferior temporal gyrus and that to the inferior parietal lobule. In later periods, by contrast, the iFC of the AC was positively correlated with eFC from the precentral gyrus to the AC and that from the insula to the AC. In conclusion, dual-directional interactions occur between iFC and eFC of the AC at different time windows following sound stimulation and may form the foundation underlying various central auditory processes, including auditory sensory memory, object formation, and integration between sensory, perceptual, attentional, motor, emotional, and executive processes.

  10. Positive and negative reinforcement activate human auditory cortex.

    Science.gov (United States)

    Weis, Tina; Puschmann, Sebastian; Brechmann, André; Thiel, Christiane M

    2013-01-01

    Prior studies suggest that reward modulates neural activity in sensory cortices, but less is known about punishment. We used functional magnetic resonance imaging and an auditory discrimination task, where participants had to judge the duration of frequency modulated tones. In one session correct performance resulted in financial gains at the end of the trial; in a second session incorrect performance resulted in financial loss. Incorrect performance in the rewarded condition as well as correct performance in the punishment condition resulted in a neutral outcome. The size of gains and losses was either low or high (10 or 50 Euro cent) depending on the direction of frequency modulation. We analyzed neural activity at the end of the trial, during reinforcement, and found increased neural activity in auditory cortex when gaining a financial reward as compared to gaining no reward and when avoiding financial loss as compared to receiving a financial loss. This was independent of the size of gains and losses. A similar pattern of neural activity for both gaining a reward and avoiding a loss was also seen in right middle temporal gyrus, bilateral insula and pre-supplementary motor area; here, however, neural activity was lower after correct responses compared to incorrect responses. To summarize, this study shows that the activation of sensory cortices previously reported for gaining a reward is also seen when avoiding a loss.

  11. Positive and negative reinforcement activate human auditory cortex

    Directory of Open Access Journals (Sweden)

    Tina eWeis

    2013-12-01

    Full Text Available Prior studies suggest that reward modulates neural activity in sensory cortices, but less is known about punishment. We used functional magnetic resonance imaging and an auditory discrimination task, where participants had to judge the duration of frequency modulated tones. In one session correct performance resulted in financial gains at the end of the trial; in a second session incorrect performance resulted in financial loss. Incorrect performance in the rewarded condition as well as correct performance in the punishment condition resulted in a neutral outcome. The size of gains and losses was either low or high (10 or 50 Euro cent) depending on the direction of frequency modulation. We analyzed neural activity at the end of the trial, during reinforcement, and found increased neural activity in auditory cortex when gaining a financial reward as compared to gaining no reward and when avoiding financial loss as compared to receiving a financial loss. This was independent of the size of gains and losses. A similar pattern of neural activity for both gaining a reward and avoiding a loss was also seen in right middle temporal gyrus, bilateral insula and pre-supplementary motor area; here, however, neural activity was lower after correct responses compared to incorrect responses. To summarize, this study shows that the activation of sensory cortices previously reported for gaining a reward is also seen when avoiding a loss.

  12. Early influence of auditory stimuli on upper-limb movements in young human infants: an overview

    Directory of Open Access Journals (Sweden)

    Priscilla Augusta Monteiro Ferronato

    2014-09-01

    Full Text Available Given that the auditory system is rather well developed at the end of the third trimester of pregnancy, it is likely that couplings between acoustics and motor activity can be integrated as early as the beginning of postnatal life. The aim of the present mini-review was to summarize and discuss studies on early auditory-motor integration, focusing particularly on upper-limb movements (one of the most crucial means to interact with the environment) in association with auditory stimuli, to develop further understanding of their significance with regard to early infant development. Many studies have investigated the relationship between various infant behaviors (e.g., sucking, visual fixation, head turning) and auditory stimuli, and established that human infants display couplings between action and environmental sensory stimulation from just after birth, clearly indicating a propensity for intentional behavior. Surprisingly few studies, however, have investigated the associations between upper-limb movements and different auditory stimuli in newborns and young infants, particularly infants born at risk for developmental disorders/delays. Findings from studies of early auditory-motor interaction support the view that the developing integration of sensory and motor systems is a fundamental part of the process guiding the development of goal-directed action in infancy, of great importance for continued motor, perceptual and cognitive development. At-risk infants (e.g., those born preterm) may be more likely to display central auditory processing disorders, negatively affecting early sensory-motor integration and resulting in long-term consequences for gesturing, language development and social communication. Consequently, there is a need for more studies on such implications.

  13. Mapping the after-effects of theta burst stimulation on the human auditory cortex with functional imaging.

    Science.gov (United States)

    Andoh, Jamila; Zatorre, Robert J

    2012-09-12

    Auditory cortex pertains to the processing of sound, which is at the basis of speech or music-related processing. However, despite considerable recent progress, the functional properties and lateralization of the human auditory cortex are far from being fully understood. Transcranial Magnetic Stimulation (TMS) is a non-invasive technique that can transiently or lastingly modulate cortical excitability via the application of localized magnetic field pulses, and represents a unique method of exploring plasticity and connectivity. It has only recently begun to be applied to understand auditory cortical function. An important issue in using TMS is that the physiological consequences of the stimulation are difficult to establish. Although many TMS studies make the implicit assumption that the area targeted by the coil is the area affected, this need not be the case, particularly for complex cognitive functions which depend on interactions across many brain regions. One solution to this problem is to combine TMS with functional Magnetic resonance imaging (fMRI). The idea here is that fMRI will provide an index of changes in brain activity associated with TMS. Thus, fMRI would give an independent means of assessing which areas are affected by TMS and how they are modulated. In addition, fMRI allows the assessment of functional connectivity, which represents a measure of the temporal coupling between distant regions. It can thus be useful not only to measure the net activity modulation induced by TMS in given locations, but also the degree to which the network properties are affected by TMS, via any observed changes in functional connectivity. Different approaches exist to combine TMS and functional imaging according to the temporal order of the methods. Functional MRI can be applied before, during, after, or both before and after TMS. Recently, some studies interleaved TMS and fMRI in order to provide online mapping of the functional changes induced by TMS. However, this

  14. Mapping the Tonotopic Organization in Human Auditory Cortex with Minimally Salient Acoustic Stimulation

    NARCIS (Netherlands)

    Langers, Dave R. M.; van Dijk, Pim

    2012-01-01

    Despite numerous neuroimaging studies, the tonotopic organization in human auditory cortex is not yet unambiguously established. In this functional magnetic resonance imaging study, 20 subjects were presented with low-level task-irrelevant tones to avoid spread of cortical activation. Data-driven an

  15. Basic fibroblast growth factor protects auditory neurons and hair cells from noise exposure and glutamate neurotoxicity

    Institute of Scientific and Technical Information of China (English)

    翟所强; 王大君; 王嘉陵

    2003-01-01

    The purpose of the present study was to determine the protective effects of basic fibroblast growth factor (bFGF) on cochlear neurons and hair cells in vitro and in vivo. In experiment I, cultured spiral ganglion neurons (SGNs) prepared from P3 mice were exposed to 20 mM glutamate for 2 hours before the culture medium was replaced with fresh medium containing 0, 25, 50, and 100 ng/ml bFGF, respectively. Fourteen days later, all cultures were fixed with 4% paraformaldehyde and stained with 1% toluidine blue. The number of surviving SGNs was counted and the length of SGN neurites was measured. Exposure to 20 mM glutamate for 24 hours resulted in an inhibition of neurite outgrowth of SGNs and elevated cell death. Treatment of the cultures with bFGF led to promotion of neurite outgrowth and an elevated number of surviving SGNs. The effects of bFGF were dose dependent, with the highest potency at 100 ng/ml. In experiment II, in vivo studies were carried out in guinea pigs in which bFGF or artificial perilymph was perfused into the cochlea to assess possible protective effects of bFGF on cochlear hair cells and compound action potentials (CAP). The CAPs were measured before, immediately after, and 48 hours after exposure to noise. Significant differences in CAP were observed (p < 0.05) between the bFGF-perfused group and the control group (t = 3.896) and the artificial perilymph-perfused group (t = 2.520) at 48 hours after noise exposure. Cochleae were removed and hair cell loss was analyzed in surface preparations prepared from all experimental animals. Acoustic trauma caused the loss of 651 and 687 inner hair cells in the control and artificial perilymph-perfused groups, respectively. In sharp contrast, only 31 inner hair cells were lost in the bFGF-perfused ears. Similarly, more outer hair cells died in the control and perilymph-perfused groups (41830 and 41968, respectively) than in the group treated with bFGF (34258). Our results demonstrate that bFGF protected SGNs against glutamate

  16. Speaking modifies voice-evoked activity in the human auditory cortex.

    Science.gov (United States)

    Curio, G; Neuloh, G; Numminen, J; Jousmäki, V; Hari, R

    2000-04-01

    The voice we most often hear is our own, and proper interaction between speaking and hearing is essential for both acquisition and performance of spoken language. Disturbed audiovocal interactions have been implicated in aphasia, stuttering, and schizophrenic voice hallucinations, but paradigms for a noninvasive assessment of auditory self-monitoring of speaking and its possible dysfunctions are rare. Using magnetoencephalography we show here that self-uttered syllables transiently activate the speaker's auditory cortex around 100 ms after voice onset. These phasic responses were delayed by 11 ms in the speech-dominant left hemisphere relative to the right, whereas during listening to a replay of the same utterances the response latencies were symmetric. Moreover, the auditory cortices did not react to rare vowel changes interspersed randomly within a series of repetitively spoken vowels, in contrast to regular change-related responses evoked 100-200 ms after replayed rare vowels. Thus, speaking primes the human auditory cortex at a millisecond time scale, dampening and delaying reactions to self-produced "expected" sounds, more prominently in the speech-dominant hemisphere. Such motor-to-sensory priming of early auditory cortex responses during voicing constitutes one element of speech self-monitoring that could be compromised in central speech disorders.

  17. The effect of occupational noise exposure on tinnitus and sound-induced auditory fatigue among obstetrics personnel: a cross-sectional study.

    Science.gov (United States)

    Fredriksson, Sofie; Hammar, Oscar; Torén, Kjell; Tenenbaum, Artur; Waye, Kerstin Persson

    2015-03-27

    There is a lack of research on effects of occupational noise exposure in traditionally female-dominated workplaces. Therefore, the aim of this study was to assess the risk of noise-induced hearing-related symptoms among obstetrics personnel. A cross-sectional study was performed at an obstetric ward in Sweden, including a questionnaire among all employees and sound level measurements in 61 work shifts at the same ward. A total of 115 female employees responded to the questionnaire (72% of the 160 employees invited). The outcomes were self-reported hearing-related symptoms in relation to the calculated occupational noise exposure dose and measured sound levels. Sound levels exceeded the 80 dB LAeq limit for protection of hearing in 46% of the measured work shifts. One or more hearing-related symptoms were reported by 55% of the personnel. In logistic regression models, a significant association was found between occupational noise exposure dose and tinnitus (OR=1.04, 95% CI 1.00 to 1.09) and sound-induced auditory fatigue (OR=1.04, 95% CI 1.00 to 1.07). Work-related stress and noise annoyance at work were reported by almost half of the personnel. Sound-induced auditory fatigue was associated with work-related stress and noise annoyance at work, although stress narrowly missed significance in a multivariable model. No significant interactions were found. This study presents new results showing that obstetrics personnel are at risk of noise-induced hearing-related symptoms. Current exposure levels at the workplace are high and occupational noise exposure dose has significant effects on tinnitus and sound-induced auditory fatigue among the personnel. These results indicate that preventative action regarding noise exposure is required in obstetrics care and that risk assessments may be needed in previously unstudied non-industrial, communication-intense sound environments.
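
    As a point of reference for the effect sizes quoted above, odds ratios from a logistic regression are obtained by exponentiating the fitted coefficient; the formulation below is the standard one and is stated here as background, not taken from the paper itself.

    $$
    \log\frac{p}{1-p} = \beta_0 + \beta_1 \cdot \text{dose}, \qquad \text{OR} = e^{\beta_1}, \qquad \text{CI}_{95\%} = e^{\,\beta_1 \pm 1.96\,\mathrm{SE}(\beta_1)}
    $$

    An OR of 1.04 per unit of noise exposure dose therefore corresponds to a fitted coefficient of roughly \(\beta_1 \approx \ln(1.04) \approx 0.039\), with a confidence interval whose lower bound of 1.00 sits at the boundary of no effect on the odds scale.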

  18. Mouth and Voice: A Relationship between Visual and Auditory Preference in the Human Superior Temporal Sulcus.

    Science.gov (United States)

    Zhu, Lin L; Beauchamp, Michael S

    2017-03-08

    Cortex in and around the human posterior superior temporal sulcus (pSTS) is known to be critical for speech perception. The pSTS responds to both the visual modality (especially biological motion) and the auditory modality (especially human voices). Using fMRI in single subjects with no spatial smoothing, we show that visual and auditory selectivity are linked. Regions of the pSTS were identified that preferred visually presented moving mouths (presented in isolation or as part of a whole face) or moving eyes. Mouth-preferring regions responded strongly to voices and showed a significant preference for vocal compared with nonvocal sounds. In contrast, eye-preferring regions did not respond to either vocal or nonvocal sounds. The converse was also true: regions of the pSTS that showed a significant response to speech or preferred vocal to nonvocal sounds responded more strongly to visually presented mouths than eyes. These findings can be explained by environmental statistics. In natural environments, humans see visual mouth movements at the same time as they hear voices, while there is no auditory accompaniment to visual eye movements. The strength of a voxel's preference for visual mouth movements was strongly correlated with the magnitude of its auditory speech response and its preference for vocal sounds, suggesting that visual and auditory speech features are coded together in small populations of neurons within the pSTS. SIGNIFICANCE STATEMENT: Humans interacting face to face make use of auditory cues from the talker's voice and visual cues from the talker's mouth to understand speech. The human posterior superior temporal sulcus (pSTS), a brain region known to be important for speech perception, is complex, with some regions responding to specific visual stimuli and others to specific auditory stimuli. Using BOLD fMRI, we show that the natural statistics of human speech, in which voices co-occur with mouth movements, are reflected in the neural architecture of

  19. Processing of location and pattern changes of natural sounds in the human auditory cortex.

    Science.gov (United States)

    Altmann, Christian F; Bledowski, Christoph; Wibral, Michael; Kaiser, Jochen

    2007-04-15

    Parallel cortical pathways have been proposed for the processing of auditory pattern and spatial information, respectively. We tested this segregation with human functional magnetic resonance imaging (fMRI) and separate electroencephalographic (EEG) recordings in the same subjects who listened passively to four sequences of repetitive spatial animal vocalizations in an event-related paradigm. Transitions between sequences constituted either a change of auditory pattern, location, or both pattern+location. This procedure allowed us to investigate the cortical correlates of natural auditory "what" and "where" changes independent of differences in the individual stimuli. For pattern changes, we observed significantly increased fMRI responses along the bilateral anterior superior temporal gyrus and superior temporal sulcus, the planum polare, lateral Heschl's gyrus and anterior planum temporale. For location changes, significant increases of fMRI responses were observed in bilateral posterior superior temporal gyrus and planum temporale. An overlap of these two types of changes occurred in the lateral anterior planum temporale and posterior superior temporal gyrus. The analysis of source event-related potentials (ERPs) revealed faster processing of location than pattern changes. Thus, our data suggest that passive processing of auditory spatial and pattern changes is dissociated both temporally and anatomically in the human brain. The predominant role of more anterior aspects of the superior temporal lobe in sound identity processing supports the role of this area as part of the auditory pattern processing stream, while spatial processing of auditory stimuli appears to be mediated by the more posterior parts of the superior temporal lobe.

  20. Differences in Speech Recognition Between Children with Attention Deficits and Typically Developed Children Disappear When Exposed to 65 dB of Auditory Noise.

    Science.gov (United States)

    Söderlund, Göran B W; Jobs, Elisabeth Nilsson

    2016-01-01

    The most common neuropsychiatric condition in children is attention deficit hyperactivity disorder (ADHD), affecting ∼6-9% of the population. ADHD is distinguished by inattention and hyperactive, impulsive behaviors as well as poor performance in various cognitive tasks, often leading to failures at school. Sensory and perceptual dysfunctions have also been noticed. Prior research has mainly focused on limitations in executive functioning, where differences are often explained by deficits in pre-frontal cortex activation. Less notice has been given to sensory perception and subcortical functioning in ADHD. Recent research has shown that children with an ADHD diagnosis have a deviant auditory brainstem response compared to healthy controls. The aim of the present study was to investigate whether the speech recognition threshold differs between attentive children and children with ADHD symptoms in two environmental sound conditions, with and without external noise. Previous research has shown that children with attention deficits can benefit from white noise exposure during cognitive tasks, and here we investigate whether this noise benefit is present during an auditory perceptual task. For this purpose we used a modified Hagerman's speech recognition test in which children with and without attention deficits performed a binaural speech recognition task to assess the speech recognition threshold in no-noise and noise conditions (65 dB). Results showed that the inattentive group displayed a higher speech recognition threshold than typically developed children and that the difference in speech recognition threshold disappeared when exposed to noise at supra-threshold level. From this we conclude that inattention can partly be explained by sensory perceptual limitations that can possibly be ameliorated through noise exposure.

  1. Differences in Speech Recognition Between Children with Attention Deficits and Typically Developed Children Disappear when Exposed to 65 dB of Auditory Noise

    Directory of Open Access Journals (Sweden)

    Göran B W Söderlund

    2016-01-01

    Full Text Available The most common neuropsychiatric condition in children is attention deficit hyperactivity disorder (ADHD), affecting approximately 6-9% of the population. ADHD is distinguished by inattention and hyperactive, impulsive behaviors as well as poor performance in various cognitive tasks, often leading to failures at school. Sensory and perceptual dysfunctions have also been noticed. Prior research has mainly focused on limitations in executive functioning, where differences are often explained by deficits in pre-frontal cortex activation. Less notice has been given to sensory perception and subcortical functioning in ADHD. Recent research has shown that children with an ADHD diagnosis have a deviant auditory brainstem response compared to healthy controls. The aim of the present study was to investigate whether the speech recognition threshold differs between attentive children and children with ADHD symptoms in two environmental sound conditions, with and without external noise. Previous research has shown that children with attention deficits can benefit from white noise exposure during cognitive tasks, and here we investigate whether this noise benefit is present during an auditory perceptual task. For this purpose we used a modified Hagerman's speech recognition test in which children with and without attention deficits performed a binaural speech recognition task to assess the speech recognition threshold in no-noise and noise conditions (65 dB). Results showed that the inattentive group displayed a higher speech recognition threshold than typically developed children (TDC) and that the difference in speech recognition threshold disappeared when exposed to noise at supra-threshold level. From this we conclude that inattention can partly be explained by sensory perceptual limitations that can possibly be ameliorated through noise exposure.

  2. Rapid Increase in Neural Conduction Time in the Adult Human Auditory Brainstem Following Sudden Unilateral Deafness.

    Science.gov (United States)

    Maslin, M R D; Lloyd, S K; Rutherford, S; Freeman, S; King, A; Moore, D R; Munro, K J

    2015-10-01

    Individuals with sudden unilateral deafness offer a unique opportunity to study plasticity of the binaural auditory system in adult humans. Stimulation of the intact ear results in increased activity in the auditory cortex. However, there are no reports of changes at sub-cortical levels in humans. Therefore, the aim of the present study was to investigate changes in sub-cortical activity immediately before and after the onset of surgically induced unilateral deafness in adult humans. Click-evoked auditory brainstem responses (ABRs) to stimulation of the healthy ear were recorded from ten adults during the course of translabyrinthine surgery for the removal of a unilateral acoustic neuroma. This surgical technique always results in abrupt deafferentation of the affected ear. The results revealed a rapid (within minutes) reduction in latency of wave V (mean pre = 6.55 ms; mean post = 6.15 ms; p < 0.001). A latency reduction was also observed for wave III (mean pre = 4.40 ms; mean post = 4.13 ms; p < 0.001). These reductions in response latency are consistent with functional changes including disinhibition or/and more rapid intra-cellular signalling affecting binaurally sensitive neurons in the central auditory system. The results are highly relevant for improved understanding of putative physiological mechanisms underlying perceptual disorders such as tinnitus and hyperacusis.

  3. Tram squealing noise and its impact on human health

    Directory of Open Access Journals (Sweden)

    Eva Panulinova

    2016-01-01

    Full Text Available Introduction: The tramway has become a serious urban noise source in densely populated areas. The disturbance from squealing noise is significant. Curve squeal is the very loud, tonal noise emitted during tram operation in tight-radius curves. Studies have reported a relationship between noise levels and health effects such as annoyance, sleep disturbance, and elevated systolic and diastolic blood pressure. Materials and Methods: This study aimed to analyze the wheel squeal noise along the tramway line in Košice, Slovakia, review its effects on human health, and discuss its inclusion in the design method. To observe the influence of a track curve on noise emission, several measurement points were selected, and the noise emission was measured both in curves and in straight sections employing the same type of permanent way. Results: The results in the sections with a radius below 50 m were greatly affected by the presence of squeal noise, while the resulting noise level in the sections with a radius above 50 m depended on their radius. The difference between the average values of LAeq with and without squeal in the measurement points with a radius below 50 m was 9 dB. The difference between the measurements in the curve sections with a radius below 50 m and those in the straight line was 2.7 dB. Conclusion: The resulting noise level in general was influenced by the car velocity and the technical shape of the permanent way. These results can be used in noise prognoses and in health effect predictions.

  4. Noise-induced transition in human reaction times

    Science.gov (United States)

    Medina, José M.; Díaz, José A.

    2016-09-01

    The human reaction/response time can be defined as the time elapsed from the onset of stimulus presentation until a response occurs in many sensory and cognitive processes. A reaction time model based on Piéron’s law is investigated. The model shows a noise-induced transition in the moments of reaction time distributions due to the presence of strong additive noise. The model also demonstrates that reaction times do not follow fluctuation scaling between the mean and the variance but follow a generalized version between the skewness and the kurtosis. The results indicate that noise-induced transitions in the moments govern fluctuations in sensory-motor transformations and open an insight into the macroscopic effects of noise in human perception and action. The conditions that lead to extreme reaction times are discussed based on the transfer of information in neurons.
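
    Piéron's law, on which the model above is based, relates mean reaction time to stimulus intensity. The standard form is reproduced below; the additive-noise term is indicated only schematically, since the authors' exact stochastic formulation is not given in the abstract.

    $$
    RT = R_0 + k\, I^{-\beta} + \eta
    $$

    Here \(R_0\) is an irreducible residual latency, \(I\) the stimulus intensity, \(k\) and \(\beta\) fitted constants, and \(\eta\) an additive noise term. In the study summarized above, it is the strength of this noise that drives the reported transition in the moments (mean, variance, skewness, kurtosis) of the reaction time distribution.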

  5. Consonance and dissonance of musical chords: neural correlates in auditory cortex of monkeys and humans.

    Science.gov (United States)

    Fishman, Y I; Volkov, I O; Noh, M D; Garell, P C; Bakken, H; Arezzo, J C; Howard, M A; Steinschneider, M

    2001-12-01

    Some musical chords sound pleasant, or consonant, while others sound unpleasant, or dissonant. Helmholtz's psychoacoustic theory of consonance and dissonance attributes the perception of dissonance to the sensation of "beats" and "roughness" caused by interactions in the auditory periphery between adjacent partials of complex tones comprising a musical chord. Conversely, consonance is characterized by the relative absence of beats and roughness. Physiological studies in monkeys suggest that roughness may be represented in primary auditory cortex (A1) by oscillatory neuronal ensemble responses phase-locked to the amplitude-modulated temporal envelope of complex sounds. However, it remains unknown whether phase-locked responses also underlie the representation of dissonance in auditory cortex. In the present study, responses evoked by musical chords with varying degrees of consonance and dissonance were recorded in A1 of awake macaques and evaluated using auditory-evoked potential (AEP), multiunit activity (MUA), and current-source density (CSD) techniques. In parallel studies, intracranial AEPs evoked by the same musical chords were recorded directly from the auditory cortex of two human subjects undergoing surgical evaluation for medically intractable epilepsy. Chords were composed of two simultaneous harmonic complex tones. The magnitude of oscillatory phase-locked activity in A1 of the monkey correlates with the perceived dissonance of the musical chords. Responses evoked by dissonant chords, such as minor and major seconds, display oscillations phase-locked to the predicted difference frequencies, whereas responses evoked by consonant chords, such as octaves and perfect fifths, display little or no phase-locked activity. AEPs recorded in Heschl's gyrus display strikingly similar oscillatory patterns to those observed in monkey A1, with dissonant chords eliciting greater phase-locked activity than consonant chords. In contrast to recordings in Heschl's gyrus

  6. Innervation of the Human Cavum Conchae and Auditory Canal: Anatomical Basis for Transcutaneous Auricular Nerve Stimulation

    Science.gov (United States)

    Bermejo, P.; López, M.; Larraya, I.; Chamorro, J.; Cobo, J. L.; Ordóñez, S.

    2017-01-01

    The innocuous transcutaneous stimulation of nerves supplying the outer ear has been demonstrated to be as effective as the invasive direct stimulation of the vagus nerve for the treatment of some neurological and nonneurological disturbances. Thus, the precise knowledge of external ear innervation is of maximal interest for the design of transcutaneous auricular nerve stimulation devices. We analyzed eleven outer ears, and the innervation was assessed by Masson's trichrome staining, immunohistochemistry, or immunofluorescence (neurofilaments, S100 protein, and myelin-basic protein). In both the cavum conchae and the auditory canal, nerve profiles were identified between the cartilage and the skin and out of the cartilage. The density of nerves and of myelinated nerve fibers was higher out of the cartilage and in the auditory canal with respect to the cavum conchae. Moreover, the nerves were more numerous in the superior and posterior-inferior than in the anterior-inferior segments of the auditory canal. The present study established a precise nerve map of the human cavum conchae and the cartilaginous segment of the auditory canal demonstrating regional differences in the pattern of innervation of the human outer ear. These results may provide additional neuroanatomical basis for the accurate design of auricular transcutaneous nerve stimulation devices.

  7. Sensitivity to an Illusion of Sound Location in Human Auditory Cortex

    Directory of Open Access Journals (Sweden)

    Nathan C. Higgins

    2017-05-01

    Full Text Available Human listeners place greater weight on the beginning of a sound compared to the middle or end when determining sound location, creating an auditory illusion known as the Franssen effect. Here, we exploited that effect to test whether human auditory cortex (AC) represents the physical vs. perceived spatial features of a sound. We used functional magnetic resonance imaging (fMRI) to measure AC responses to sounds that varied in perceived location due to interaural level differences (ILD) applied to sound onsets or to the full sound duration. Analysis of hemodynamic responses in AC revealed sensitivity to ILD in both full-cue (veridical) and onset-only (illusory) lateralized stimuli. Classification analysis revealed regional differences in the sensitivity to onset-only ILDs, where better classification was observed in posterior compared to primary AC. That is, restricting the ILD to sound onset—which alters the physical but not the perceptual nature of the spatial cue—did not eliminate cortical sensitivity to that cue. These results suggest that perceptual representations of auditory space emerge or are refined in higher-order AC regions, supporting the stable perception of auditory space in noisy or reverberant environments and forming the basis of illusions such as the Franssen effect.

  8. Using the Auditory Hazard Assessment Algorithm for Humans (AHAAH) With Hearing Protection Software, Release MIL-STD-1474E

    Science.gov (United States)

    2013-12-01


  9. Mapping auditory core, lateral belt, and parabelt cortices in the human superior temporal gyrus

    DEFF Research Database (Denmark)

    Sweet, Robert A; Dorph-Petersen, Karl-Anton; Lewis, David A

    2005-01-01

    The goal of the present study was to determine whether the architectonic criteria used to identify the core, lateral belt, and parabelt auditory cortices in macaque monkeys (Macaca fascicularis) could be used to identify homologous regions in humans (Homo sapiens). Current evidence indicates that auditory cortex in humans, as in monkeys, is located on the superior temporal gyrus (STG), and is functionally and structurally altered in illnesses such as schizophrenia and Alzheimer's disease. In this study, we used serial sets of adjacent sections processed for Nissl substance, acetylcholinesterase ... the location of the lateral belt and parabelt with respect to gross anatomical landmarks. Architectonic criteria for the core, lateral belt, and parabelt were readily adapted from monkey to human. Additionally, we found evidence for an architectonic subdivision within the parabelt, present in both species ...

  10. Guide to the evaluation of human exposure to noise from large wind turbines

    Science.gov (United States)

    Stephens, D. G.; Shepherd, K. P.; Hubbard, H. H.; Grosveld, F.

    1982-01-01

    Guidance for evaluating human exposure to wind turbine noise is provided and includes consideration of the source characteristics, the propagation to the receiver location, and the exposure of the receiver to the noise. The criteria for evaluation of human exposure are based on comparisons of the noise at the receiver location with the human perception thresholds for wind turbine noise and noise-induced building vibrations in the presence of background noise.

  11. Functional Magnetic Resonance Imaging Measures of Blood Flow Patterns in the Human Auditory Cortex in Response to Sound.

    Science.gov (United States)

    Huckins, Sean C.; Turner, Christopher W.; Doherty, Karen A.; Fonte, Michael M.; Szeverenyi, Nikolaus M.

    1998-01-01

    This study examined the feasibility of using functional magnetic resonance imaging (fMRI) in auditory research by testing the reliability of scanning parameters using high resolution and high signal-to-noise ratios. Findings indicated reproducibility within and across listeners for consonant-vowel speech stimuli and reproducible results within and…

  12. Selective attention modulates human auditory brainstem responses: relative contributions of frequency and spatial cues.

    Science.gov (United States)

    Lehmann, Alexandre; Schönwiesner, Marc

    2014-01-01

    Selective attention is the mechanism that allows focusing one's attention on a particular stimulus while filtering out a range of other stimuli, for instance, on a single conversation in a noisy room. Attending to one sound source rather than another changes activity in the human auditory cortex, but it is unclear whether attention to different acoustic features, such as voice pitch and speaker location, modulates subcortical activity. Studies using a dichotic listening paradigm indicated that auditory brainstem processing may be modulated by the direction of attention. We investigated whether endogenous selective attention to one of two speech signals affects amplitude and phase locking in auditory brainstem responses when the signals were either discriminable by frequency content alone, or by frequency content and spatial location. Frequency-following responses to the speech sounds were significantly modulated in both conditions. The modulation was specific to the task-relevant frequency band. The effect was stronger when both frequency and spatial information were available. Patterns of response were variable between participants, and were correlated with psychophysical discriminability of the stimuli, suggesting that the modulation was biologically relevant. Our results demonstrate that auditory brainstem responses are susceptible to efferent modulation related to behavioral goals. Furthermore they suggest that mechanisms of selective attention actively shape activity at early subcortical processing stages according to task relevance and based on frequency and spatial cues.

  13. Selective attention modulates human auditory brainstem responses: relative contributions of frequency and spatial cues.

    Directory of Open Access Journals (Sweden)

    Alexandre Lehmann

    Full Text Available Selective attention is the mechanism that allows focusing one's attention on a particular stimulus while filtering out a range of other stimuli, for instance, on a single conversation in a noisy room. Attending to one sound source rather than another changes activity in the human auditory cortex, but it is unclear whether attention to different acoustic features, such as voice pitch and speaker location, modulates subcortical activity. Studies using a dichotic listening paradigm indicated that auditory brainstem processing may be modulated by the direction of attention. We investigated whether endogenous selective attention to one of two speech signals affects amplitude and phase locking in auditory brainstem responses when the signals were either discriminable by frequency content alone, or by frequency content and spatial location. Frequency-following responses to the speech sounds were significantly modulated in both conditions. The modulation was specific to the task-relevant frequency band. The effect was stronger when both frequency and spatial information were available. Patterns of response were variable between participants, and were correlated with psychophysical discriminability of the stimuli, suggesting that the modulation was biologically relevant. Our results demonstrate that auditory brainstem responses are susceptible to efferent modulation related to behavioral goals. Furthermore they suggest that mechanisms of selective attention actively shape activity at early subcortical processing stages according to task relevance and based on frequency and spatial cues.

  14. Effects of passive, moderate-level sound exposure on the mature auditory cortex: spectral edges, spectrotemporal density, and real-world noise.

    Science.gov (United States)

    Pienkowski, Martin; Munguia, Raymundo; Eggermont, Jos J

    2013-02-01

    Persistent, passive exposure of adult cats to bandlimited tone pip ensembles or sharply-filtered white noise at moderate levels (∼70 dB SPL) leads to a long-term suppression of spontaneous and sound-evoked activity in the region(s) of primary auditory cortex (AI) normally tuned to the exposure spectrum, and to an enhancement of activity in one or more neighboring regions of AI, all in the apparent absence of hearing loss. Here, we first examined the effects of passive exposure to a more structured, real-world noise, consisting of a mix of power tool and construction sounds. This "factory noise" had less pronounced effects on adult cat AI than our previous random tone pip ensembles and white noise, and these effects appeared limited to the region of AI tuned to frequencies near the sharp factory noise cutoff at 16 kHz. To further investigate the role of sharp spectral edges in passive exposure-induced cortical plasticity, a second group of adult cats was exposed to a tone pip ensemble with a flat spectrum between 2 and 4 kHz and shallow cutoff slopes (12 dB/oct) on either side. Compared to our previous ensemble with the same power in the 2-4 kHz band but very steep slopes, exposure to the overall more intense, sloped stimulus had much weaker effects on AI. Finally, we explored the issue of exposure stimulus spectrotemporal density and found that low aggregate tone pip presentation rates of about one per second sufficed to induce changes in the adult AI similar to those characteristic of our previous, much denser exposures. These results are discussed in light of the putative mechanisms underlying exposure-induced auditory cortical plasticity, and the potential adverse consequences of working or living in moderately noisy environments.

  15. High Intensity Pressure Noise Transmission in Human Ear: A Three Dimensional Simulation Study

    Science.gov (United States)

    Hawa, Takumi; Gan, Rong; Leckness, Kegan

    2015-03-01

    High intensity pressure noise generated by explosions and jet engines causes auditory damage and hearing loss in military service personnel, which are among the most common disabilities in veterans. The authors investigated high intensity pressure noise transmission from the ear canal to the middle ear cavity. The study considers fluid-structure interaction with a viscoelastic model for the tympanic membrane (TM) as well as the ossicular chain. For the high intensity pressure simulation, the geometry of the ear was based on a 3D finite element (FE) model of the human ear reported by Gan et al. (Ann Biomed Eng 2004). The model consists of the ear canal, TM, ossicular chain, and the middle ear cavity. The numerical approach includes two steps: 1) an FE-based finite-volume simulation to compute pressure distributions in the ear canal and the middle ear cavity using CFX; and 2) FE modeling of the TM and middle ear ossicles in response to high intensity sound using multi-physics analysis in ANSYS. The simulations provide the displacement of the TM/ossicular chain and the pressure fields in the ear canal and the middle ear cavity. These results are compared with human temporal bone experimental data obtained by our group. This work was supported by DOD W81XWH-14-1-0228.

  16. Perceptual Wavelet packet transform based Wavelet Filter Banks Modeling of Human Auditory system for improving the intelligibility of voiced and unvoiced speech: A Case Study of a system development

    Directory of Open Access Journals (Sweden)

    Ranganadh Narayanam

    2015-10-01

    Full Text Available The objective of this work is to present a versatile speech enhancement method based on a model of the human auditory system. The described speech enhancement scheme meets the demand for noise reduction algorithms capable of operating at very low signal-to-noise ratios. We discuss how the proposed system reduces noise with little speech degradation in diverse noise environments. To reduce residual noise and improve speech intelligibility, a psychoacoustic model is incorporated into a generalized perceptual wavelet denoising method. This is a generalized time-frequency subtraction algorithm that exploits the wavelet multirate signal representation to preserve critical transient information. Simultaneous and temporal masking in the human auditory system are modeled by the perceptual wavelet packet transform via the frequency and temporal localization of speech components. The wavelet coefficients are used to calculate the Bark spreading energy and temporal spreading energy, from which a time-frequency masking threshold is deduced to adaptively adjust the subtraction parameters. To further increase intelligibility, an unvoiced speech enhancement algorithm is also integrated into the system.
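
    As a concrete, if simplified, illustration of the decompose-threshold-reconstruct structure described above, the Python sketch below performs plain wavelet-threshold denoising with PyWavelets. It uses a standard discrete wavelet transform and a universal threshold rather than the perceptual wavelet packet transform and psychoacoustic masking thresholds of the proposed system, so all parameter choices here are generic assumptions.

```python
# Minimal wavelet-threshold denoising sketch (generic, not the authors' perceptual
# wavelet packet method): decompose, soft-threshold detail coefficients with a
# universal threshold, reconstruct. Requires numpy and PyWavelets (pywt).
import numpy as np
import pywt

def wavelet_denoise(signal, wavelet="db8", level=5):
    # Multilevel discrete wavelet decomposition of the noisy frame.
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    # Noise level estimated from the finest detail band (median absolute deviation).
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745
    # Universal threshold; a psychoacoustic model would instead set this per band.
    thresh = sigma * np.sqrt(2.0 * np.log(len(signal)))
    # Soft-threshold all detail bands, keep the approximation band untouched.
    denoised = [coeffs[0]] + [pywt.threshold(c, thresh, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(denoised, wavelet)[: len(signal)]

if __name__ == "__main__":
    fs = 16000
    t = np.arange(fs) / fs
    clean = np.sin(2 * np.pi * 220 * t)            # stand-in for a voiced segment
    noisy = clean + 0.3 * np.random.randn(fs)      # additive white noise
    enhanced = wavelet_denoise(noisy)
    print("input SNR ~", 10 * np.log10(np.mean(clean**2) / np.mean((noisy - clean)**2)))
    print("output SNR ~", 10 * np.log10(np.mean(clean**2) / np.mean((enhanced - clean)**2)))
```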

  17. Reexamining the evidence for a pitch-sensitive region: a human fMRI study using iterated ripple noise.

    Science.gov (United States)

    Barker, Daphne; Plack, Christopher J; Hall, Deborah A

    2012-04-01

    Human neuroimaging studies have identified a region of auditory cortex, lateral Heschl's gyrus (HG), that shows a greater response to iterated ripple noise (IRN) than to a Gaussian noise control. Based in part on results using IRN as a pitch-evoking stimulus, it has been argued that lateral HG is a general "pitch center." However, IRN contains slowly varying spectrotemporal modulations, unrelated to pitch, that are not found in the control stimulus. Hence, it is possible that the cortical response to IRN is driven in part by these modulations. The current study reports the first attempt to control for these modulations. This was achieved using a novel type of stimulus that was generated by processing IRN to remove the fine temporal structure (and thus the pitch) but leave the slowly varying modulations. This "no-pitch IRN" stimulus is referred to as IRNo. Results showed a widespread response to the spectrotemporal modulations across auditory cortex. When IRN was contrasted with IRNo rather than with Gaussian noise, the apparent effect of pitch was no longer statistically significant. Our findings raise the possibility that a cortical response unrelated to pitch could previously have been errantly attributed to pitch coding.
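
    The IRN stimuli discussed above are typically generated with an iterated delay-and-add network. The Python sketch below shows one common "add-same" recipe; the delay, gain, and number of iterations are placeholder values and are not taken from the study.

```python
# Minimal iterated ripple noise (IRN) generator (delay-and-add, "add-same" network).
# A generic illustration of how IRN stimuli are typically built; delay, gain and
# iteration count are placeholder values, not the paper's.
import numpy as np

def make_irn(duration_s=1.0, fs=44100, delay_ms=8.0, gain=1.0, n_iter=16, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(int(duration_s * fs))        # start from Gaussian noise
    d = int(round(delay_ms * 1e-3 * fs))                 # delay in samples
    for _ in range(n_iter):
        delayed = np.concatenate([np.zeros(d), x[:-d]])  # delayed copy, zero-padded
        x = x + gain * delayed                           # add the delayed copy back in
    return x / np.max(np.abs(x))                         # normalise to +/- 1

# The repeated delay-and-add imposes temporal regularity at 1/delay (here ~125 Hz),
# which evokes a pitch; the slow spectrotemporal modulations that the "IRNo"
# control stimulus retains are a side effect of the same operation.
irn = make_irn()
```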

  18. Predicting dynamic range and intensity discrimination for electrical pulse-train stimuli using a stochastic auditory nerve model: the effects of stimulus noise.

    Science.gov (United States)

    Xu, Yifang; Collins, Leslie M

    2005-06-01

    This work investigates dynamic range and intensity discrimination for electrical pulse-train stimuli that are modulated by noise using a stochastic auditory nerve model. Based on a hypothesized monotonic relationship between loudness and the number of spikes elicited by a stimulus, theoretical prediction of the uncomfortable level has previously been determined by comparing spike counts to a fixed threshold, Nucl. However, no specific rule for determining Nucl has been suggested. Our work determines the uncomfortable level based on the excitation pattern of the neural response in a normal ear. The number of fibers corresponding to the portion of the basilar membrane driven by a stimulus at an uncomfortable level in a normal ear is related to Nucl at an uncomfortable level of the electrical stimulus. Intensity discrimination limens are predicted using signal detection theory via the probability mass function of the neural response and via experimental simulations. The results show that the uncomfortable level for pulse-train stimuli increases slightly as noise level increases. Combining this with our previous threshold predictions, we hypothesize that the dynamic range for noise-modulated pulse-train stimuli should increase with additive noise. However, since our predictions indicate that intensity discrimination under noise degrades, overall intensity coding performance may not improve significantly.
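
    To make the signal-detection logic of the abstract concrete, the toy Python sketch below treats the neural response as a Poisson spike count and asks how large an intensity step is needed to reach d' = 1. The rate-intensity function is a hypothetical stand-in, not the stochastic auditory nerve model used by the authors.

```python
# Toy illustration of intensity discrimination via signal detection theory:
# Poisson spike counts, d' between two intensities, and the smallest step
# reaching d' = 1. The rate-vs-intensity function below is hypothetical.
import numpy as np

def mean_count(intensity_db, slope=20.0, floor=5.0):
    # Hypothetical monotonic rate-intensity function (expected spikes per stimulus).
    return floor + slope * intensity_db / 10.0

def dprime(i1_db, i2_db):
    m1, m2 = mean_count(i1_db), mean_count(i2_db)
    # For Poisson counts the variance equals the mean.
    return (m2 - m1) / np.sqrt(0.5 * (m1 + m2))

def discrimination_limen(i_db, target_dprime=1.0):
    # Smallest intensity increment (dB) giving d' >= target, by simple search.
    for delta in np.arange(0.01, 10.0, 0.01):
        if dprime(i_db, i_db + delta) >= target_dprime:
            return delta
    return np.nan

for level in (30.0, 50.0, 70.0):
    print(f"{level:.0f} dB: JND ~ {discrimination_limen(level):.2f} dB")
```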

  19. Auditory event-related response in visual cortex modulates subsequent visual responses in humans.

    Science.gov (United States)

    Naue, Nicole; Rach, Stefan; Strüber, Daniel; Huster, Rene J; Zaehle, Tino; Körner, Ursula; Herrmann, Christoph S

    2011-05-25

    Growing evidence from electrophysiological data in animal and human studies suggests that multisensory interaction is not exclusively a higher-order process, but also takes place in primary sensory cortices. Such early multisensory interaction is thought to be mediated by means of phase resetting. The presentation of a stimulus to one sensory modality resets the phase of ongoing oscillations in another modality such that processing in the latter modality is modulated. In humans, evidence for such a mechanism is still sparse. In the current study, the influence of an auditory stimulus on visual processing was investigated by measuring the electroencephalogram (EEG) and behavioral responses of humans to visual, auditory, and audiovisual stimulation with varying stimulus-onset asynchrony (SOA). We observed three distinct oscillatory EEG responses in our data. An initial gamma-band response around 50 Hz was followed by a beta-band response around 25 Hz, and a theta response around 6 Hz. The latter was enhanced in response to cross-modal stimuli as compared to either unimodal stimuli. Interestingly, the beta response to unimodal auditory stimuli was dominant in electrodes over visual areas. The SOA between auditory and visual stimuli--albeit not consciously perceived--had a modulatory impact on the multisensory evoked beta-band responses; i.e., the amplitude depended on SOA in a sinusoidal fashion, suggesting a phase reset. These findings further support the notion that parameters of brain oscillations such as amplitude and phase are essential predictors of subsequent brain responses and might be one of the mechanisms underlying multisensory integration.

  20. Disentangling the effects of phonation and articulation: Hemispheric asymmetries in the auditory N1m response of the human brain

    Directory of Open Access Journals (Sweden)

    Mäkinen Ville

    2005-10-01

    Full Text Available Abstract Background: The cortical activity underlying the perception of vowel identity has typically been addressed by manipulating the first and second formant frequencies (F1 and F2) of the speech stimuli. These two values, originating from articulation, are already sufficient for the phonetic characterization of vowel category. In the present study, we investigated how the spectral cues caused by articulation are reflected in cortical speech processing when combined with phonation, the other major part of speech production, manifested as the fundamental frequency (F0) and its harmonic integer multiples. To study the combined effects of articulation and phonation we presented vowels with either high (/a/) or low (/u/) formant frequencies which were driven by three different types of excitation: a natural periodic pulseform reflecting the vibration of the vocal folds, an aperiodic noise excitation, or a tonal waveform. The auditory N1m response was recorded with whole-head magnetoencephalography (MEG) from ten human subjects in order to resolve whether brain events reflecting articulation and phonation are specific to the left or right hemisphere of the human brain. Results: The N1m responses for the six stimulus types displayed a considerable dynamic range of 115–135 ms, and were elicited faster (~10 ms) by the high-formant /a/ than by the low-formant /u/, indicating an effect of articulation. While excitation type had no effect on the latency of the right-hemispheric N1m, the left-hemispheric N1m elicited by the tonally excited /a/ was some 10 ms earlier than that elicited by the periodic and the aperiodic excitation. The amplitude of the N1m in both hemispheres was systematically stronger to stimulation with natural periodic excitation. Also, stimulus type had a marked (up to 7 mm) effect on the source location of the N1m, with periodic excitation resulting in more anterior sources than aperiodic and tonal excitation. Conclusion: The auditory brain areas
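
    The stimulus manipulation described above, the same formant ("articulation") filter driven by periodic, noise, or tonal excitation ("phonation"), can be illustrated with a minimal source-filter sketch in Python. Formant frequencies, bandwidths, and F0 below are generic placeholder values, not the study's stimuli.

```python
# Minimal source-filter sketch: one two-formant "articulation" filter driven either
# by a periodic pulse train (phonation-like), white noise (aperiodic) or a pure tone.
# Formant values are generic textbook-style placeholders, not the study's stimuli.
import numpy as np
from scipy.signal import lfilter

FS = 16000

def resonator(freq_hz, bw_hz):
    # Second-order digital resonator (Klatt-style), DC gain normalised to 1.
    r = np.exp(-np.pi * bw_hz / FS)
    theta = 2.0 * np.pi * freq_hz / FS
    b = [1.0 - 2.0 * r * np.cos(theta) + r**2]
    a = [1.0, -2.0 * r * np.cos(theta), r**2]
    return b, a

def excitation(kind, dur_s=0.5, f0=120.0):
    n = int(dur_s * FS)
    if kind == "periodic":                       # glottal-pulse-like impulse train
        x = np.zeros(n)
        x[:: int(FS / f0)] = 1.0
    elif kind == "noise":                        # aperiodic (whisper-like) excitation
        x = np.random.randn(n)
    else:                                        # tonal excitation
        x = np.sin(2 * np.pi * f0 * np.arange(n) / FS)
    return x

def vowel(kind, formants=((700.0, 90.0), (1100.0, 110.0))):   # /a/-like F1, F2
    y = excitation(kind)
    for freq, bw in formants:                    # cascade the formant resonators
        b, a = resonator(freq, bw)
        y = lfilter(b, a, y)
    return y / np.max(np.abs(y))

periodic_a, noisy_a, tonal_a = (vowel(k) for k in ("periodic", "noise", "tonal"))
```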

  1. Noise

    Science.gov (United States)

    Noise is all around you, from televisions and radios to lawn mowers and washing machines. Normally, you ... sensitive structures of the inner ear and cause noise-induced hearing loss. More than 30 million Americans ...

  2. Plasticity of the human auditory cortex related to musical training.

    Science.gov (United States)

    Pantev, Christo; Herholz, Sibylle C

    2011-11-01

    During the last decades, music neuroscience has become a rapidly growing field within the area of neuroscience. Music is particularly well suited for studying neuronal plasticity in the human brain because musical training is more complex and multimodal than most other daily life activities, and because prospective and professional musicians usually pursue the training with high and long-lasting commitment. Therefore, music has increasingly been used as a tool for the investigation of human cognition and its underlying brain mechanisms. Music relates to many brain functions like perception, action, cognition, emotion, learning and memory, and therefore music is an ideal tool to investigate how the human brain is working and how different brain functions interact. Novel findings have been obtained in the field of cortical plasticity induced by musical training. The positive effects, which music in its various forms has in the healthy human brain, are not only important in the framework of basic neuroscience, but they will also strongly affect the practices in neuro-rehabilitation.

  3. Altered temporal dynamics of neural adaptation in the aging human auditory cortex.

    Science.gov (United States)

    Herrmann, Björn; Henry, Molly J; Johnsrude, Ingrid S; Obleser, Jonas

    2016-09-01

    Neural response adaptation plays an important role in perception and cognition. Here, we used electroencephalography to investigate how aging affects the temporal dynamics of neural adaptation in human auditory cortex. Younger (18-31 years) and older (51-70 years) normal hearing adults listened to tone sequences with varying onset-to-onset intervals. Our results show long-lasting neural adaptation such that the response to a particular tone is a nonlinear function of the extended temporal history of sound events. Most important, aging is associated with multiple changes in auditory cortex; older adults exhibit larger and less variable response magnitudes, a larger dynamic response range, and a reduced sensitivity to temporal context. Computational modeling suggests that reduced adaptation recovery times underlie these changes in the aging auditory cortex and that the extended temporal stimulation has less influence on the neural response to the current sound in older compared with younger individuals. Our human electroencephalography results critically narrow the gap to animal electrophysiology work suggesting a compensatory release from cortical inhibition accompanying hearing loss and aging.
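
    The notion of "adaptation recovery time" invoked above can be made concrete with a toy exponential-recovery model. The sketch below is an illustrative assumption, not the computational model fitted by the authors; shortening the recovery time constant reproduces, qualitatively, the larger responses and reduced sensitivity to temporal context reported for older listeners.

```python
# Toy exponential-recovery adaptation model, included only to make the idea of an
# "adaptation recovery time" concrete; parameter values and model form are
# illustrative assumptions, not the computational model fitted by the authors.
import numpy as np

def simulate_responses(onset_intervals_s, tau_s=1.5, jump=0.6):
    """Response amplitude to each tone in a sequence.

    The adaptation state a in [0, 1] decays back to 0 between tones with time
    constant tau_s and jumps toward 1 after every tone; the response to a tone
    is 1 - a at its onset, so short onset-to-onset intervals yield smaller,
    history-dependent responses.
    """
    a = 0.0
    responses = []
    for dt in onset_intervals_s:
        a *= np.exp(-dt / tau_s)        # recovery since the previous tone
        responses.append(1.0 - a)       # response scales with unadapted resources
        a += jump * (1.0 - a)           # adaptation caused by the current tone
    return responses

fast = simulate_responses([0.3] * 8)    # short intervals: strong cumulative adaptation
slow = simulate_responses([3.0] * 8)    # long intervals: near-complete recovery
print([f"{r:.2f}" for r in fast])
print([f"{r:.2f}" for r in slow])
```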

  4. Sparse gammatone signal model optimized for English speech does not match the human auditory filters.

    Science.gov (United States)

    Strahl, Stefan; Mertins, Alfred

    2008-07-18

    Evidence that neurosensory systems use sparse signal representations as well as improved performance of signal processing algorithms using sparse signal models raised interest in sparse signal coding in the last years. For natural audio signals like speech and environmental sounds, gammatone atoms have been derived as expansion functions that generate a nearly optimal sparse signal model (Smith, E., Lewicki, M., 2006. Efficient auditory coding. Nature 439, 978-982). Furthermore, gammatone functions are established models for the human auditory filters. Thus far, a practical application of a sparse gammatone signal model has been prevented by the fact that deriving the sparsest representation is, in general, computationally intractable. In this paper, we applied an accelerated version of the matching pursuit algorithm for gammatone dictionaries allowing real-time and large data set applications. We show that a sparse signal model in general has advantages in audio coding and that a sparse gammatone signal model encodes speech more efficiently in terms of sparseness than a sparse modified discrete cosine transform (MDCT) signal model. We also show that the optimal gammatone parameters derived for English speech do not match the human auditory filters, suggesting for signal processing applications to derive the parameters individually for each applied signal class instead of using psychometrically derived parameters. For brain research, it means that care should be taken with directly transferring findings of optimality for technical to biological systems.
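
    The sparse coding approach referred to above can be sketched as greedy matching pursuit over a dictionary of gammatone atoms. The Python example below is written from the textbook definitions of the gammatone impulse response and of matching pursuit; the accelerated algorithm, dictionary layout, and parameters used by the authors are not reproduced here.

```python
# Sketch of a gammatone atom and a few greedy matching-pursuit steps over a small
# gammatone dictionary. Generic illustration only; frequencies, bandwidths and the
# test signal are placeholders.
import numpy as np

FS = 16000

def gammatone_atom(fc, n_samples=512, order=4, bw=120.0):
    # Gammatone impulse response: t^(n-1) * exp(-2*pi*b*t) * cos(2*pi*fc*t).
    t = np.arange(n_samples) / FS
    g = t**(order - 1) * np.exp(-2 * np.pi * bw * t) * np.cos(2 * np.pi * fc * t)
    return g / np.linalg.norm(g)

def matching_pursuit(signal, atoms, n_iter=10):
    residual = signal.copy()
    decomposition = []
    for _ in range(n_iter):
        # Correlate every atom with the residual at every time shift.
        scores = [np.correlate(residual, a, mode="valid") for a in atoms]
        k = int(np.argmax([np.max(np.abs(s)) for s in scores]))
        shift = int(np.argmax(np.abs(scores[k])))
        coef = scores[k][shift]
        # Subtract the best-matching shifted, scaled atom from the residual.
        residual[shift:shift + len(atoms[k])] -= coef * atoms[k]
        decomposition.append((k, shift, coef))
    return decomposition, residual

dictionary = [gammatone_atom(fc) for fc in (200, 400, 800, 1600, 3200)]
test = 0.8 * np.pad(dictionary[2], (1000, 2000)) + 0.05 * np.random.randn(3512)
code, res = matching_pursuit(test, dictionary)
```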

  5. Robustness of intrinsic connectivity networks in the human brain to the presence of acoustic scanner noise

    NARCIS (Netherlands)

    Langers, Dave R. M.; van Dijk, Pim

    2011-01-01

    Evoked responses in functional magnetic resonance imaging (fMRI) are affected by the presence of acoustic scanner noise (ASN). Particularly, stimulus-related activation of the auditory system and deactivation of the default mode network have repeatedly been shown to diminish. In contrast, little is

  6. Auditory capacities in Middle Pleistocene humans from the Sierra de Atapuerca in Spain

    Science.gov (United States)

    Martínez, I.; Rosa, M.; Arsuaga, J.-L.; Jarabo, P.; Quam, R.; Lorenzo, C.; Gracia, A.; Carretero, J.-M.; de Castro, J.-M. Bermúdez; Carbonell, E.

    2004-01-01

    Human hearing differs from that of chimpanzees and most other anthropoids in maintaining a relatively high sensitivity from 2 kHz up to 4 kHz, a region that contains relevant acoustic information in spoken language. Knowledge of the auditory capacities in human fossil ancestors could greatly enhance the understanding of when this human pattern emerged during the course of our evolutionary history. Here we use a comprehensive physical model to analyze the influence of skeletal structures on the acoustic filtering of the outer and middle ears in five fossil human specimens from the Middle Pleistocene site of the Sima de los Huesos in the Sierra de Atapuerca of Spain. Our results show that the skeletal anatomy in these hominids is compatible with a human-like pattern of sound power transmission through the outer and middle ear at frequencies up to 5 kHz, suggesting that they already had auditory capacities similar to those of living humans in this frequency range. PMID:15213327

  7. Auditory capacities in Middle Pleistocene humans from the Sierra de Atapuerca in Spain.

    Science.gov (United States)

    Martínez, I; Rosa, M; Arsuaga, J-L; Jarabo, P; Quam, R; Lorenzo, C; Gracia, A; Carretero, J-M; Bermúdez de Castro, J-M; Carbonell, E

    2004-07-06

    Human hearing differs from that of chimpanzees and most other anthropoids in maintaining a relatively high sensitivity from 2 kHz up to 4 kHz, a region that contains relevant acoustic information in spoken language. Knowledge of the auditory capacities in human fossil ancestors could greatly enhance the understanding of when this human pattern emerged during the course of our evolutionary history. Here we use a comprehensive physical model to analyze the influence of skeletal structures on the acoustic filtering of the outer and middle ears in five fossil human specimens from the Middle Pleistocene site of the Sima de los Huesos in the Sierra de Atapuerca of Spain. Our results show that the skeletal anatomy in these hominids is compatible with a human-like pattern of sound power transmission through the outer and middle ear at frequencies up to 5 kHz, suggesting that they already had auditory capacities similar to those of living humans in this frequency range.

  8. The possible influence of noise frequency components on the health of exposed industrial workers - A review

    Directory of Open Access Journals (Sweden)

    K V Mahendra Prashanth

    2011-01-01

    Full Text Available Noise is a common occupational health hazard in most industrial settings. An assessment of noise and its adverse health effects based on noise intensity alone is inadequate. For an efficient evaluation of noise effects, frequency spectrum analysis should also be included. This paper aims to substantiate the importance of studying the contribution of noise frequencies when evaluating health effects and their association with physiological behavior within the human body. Additionally, a review of studies published between 1988 and 2009 that investigate the impact of industrial/occupational noise on auditory and non-auditory effects, and the probable association and contribution of noise frequency components to these effects, is presented. The relevant studies in English were identified in Medknow, Medline, Wiley, Elsevier, and Springer publications. Data were extracted from studies that fulfilled the following criterion: the title and/or abstract involved industrial/occupational noise exposure in relation to auditory and non-auditory effects or health effects. Significant data on the study characteristics, including noise frequency characteristics, were considered in the assessment. It is demonstrated that only a few studies have considered frequency contributions in their investigations, and only for auditory effects, not for non-auditory effects. The data suggest that the significant adverse health effects of industrial noise include auditory and heart-related problems. The study provides strong evidence for the claim that noise with a dominant frequency component around 4 kHz has auditory effects but, being deficient in data, fails to show any influence of noise frequency components on non-auditory effects. Furthermore, specific noise levels and frequencies predicting the corresponding health impacts have not yet been validated. There is a need for further research to clarify the importance of the dominant noise frequency

  9. Auditory-model-based Feature Extraction Method for Mechanical Faults Diagnosis

    Institute of Scientific and Technical Information of China (English)

    LI Yungong; ZHANG Jinping; DAI Li; ZHANG Zhanyi; LIU Jie

    2010-01-01

    It is well known that the human auditory system possesses remarkable capabilities for analyzing and identifying signals. It would therefore be valuable to build an auditory model based on the mechanisms of the human auditory system, which may improve mechanical signal analysis and enrich the methods available for extracting mechanical fault features. Existing methods, however, are all based on explicit mathematical or physical formulations and have shortcomings in distinguishing different faults, in stability, and in suppressing disturbance noise. To improve the performance of feature extraction, an auditory model, the early auditory (EA) model, is introduced here for the first time. This model transforms a time-domain signal into an auditory spectrum via bandpass filtering, nonlinear compression, and lateral inhibition, simulating the principles of the human auditory system. The EA model is developed with a gammatone filterbank as the basilar-membrane stage. Based on the characteristics of vibration signals, a method is proposed for determining the parameters of the inner-hair-cell model of the EA model. The performance of the EA model is evaluated through experiments on four rotor faults: misalignment, rotor-to-stator rubbing, oil-film whirl, and pedestal looseness. The results show that the auditory spectrum, the output of the EA model, can effectively distinguish different faults with satisfactory stability and can suppress disturbance noise. It is therefore feasible and effective to apply the auditory model, as a new method, to feature extraction for mechanical fault diagnosis.
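
    A toy sketch of the processing chain just described (bandpass filterbank, nonlinear compression, lateral inhibition across channels) is given below. It is an illustration only: Butterworth bandpass filters stand in for the gammatone filterbank, and the compression and inhibition stages are simplified assumptions, not the parameters of the cited EA model.

      import numpy as np
      from scipy.signal import butter, lfilter

      def auditory_spectrum(x, fs, centres=(500, 1000, 2000, 4000), rel_bw=0.3):
          """Toy 'auditory spectrum': bandpass filterbank -> compressive nonlinearity
          -> lateral inhibition between neighbouring channels."""
          channel_energy = []
          for fc in centres:
              lo, hi = fc * (1 - rel_bw / 2), fc * (1 + rel_bw / 2)
              b, a = butter(2, [lo / (fs / 2), hi / (fs / 2)], btype="bandpass")
              y = lfilter(b, a, x)
              channel_energy.append(np.log1p(np.mean(np.abs(y))))  # crude compression
          e = np.asarray(channel_energy)
          neighbour_mean = np.convolve(e, [0.5, 0.0, 0.5], mode="same")
          return np.maximum(e - 0.5 * neighbour_mean, 0.0)         # simple lateral inhibition

      fs = 16000
      t = np.arange(fs) / fs
      vibration = np.sin(2 * np.pi * 1000 * t) + 0.2 * np.random.randn(fs)  # 1 kHz line in noise
      print(auditory_spectrum(vibration, fs))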

  10. The effect of human activity noise on the acoustic quality in open plan office

    DEFF Research Database (Denmark)

    Dehlbæk, Tania Stenholt; Jeong, Cheol-Ho; Brunskog, Jonas

    2016-01-01

    A disadvantage of open plan offices is the noise annoyance. Noise problems in open plan offices have been dealt with in several studies, and standards have been set up. Still, what has not been taken into account is the effect of human activity noise on acoustic conditions. In this study, measure...... D2,S have an impact on the variation in the activity noise. At 1 kHz, the technical background noise influences human activity noise positively. In both octave bands, the human activity noise level varies significantly with the office type, from a call center to a lawyer’s office....

  11. Tonotopic representation of missing fundamental complex sounds in the human auditory cortex.

    Science.gov (United States)

    Fujioka, Takako; Ross, Bernhard; Okamoto, Hidehiko; Takeshima, Yasuyuki; Kakigi, Ryusuke; Pantev, Christo

    2003-07-01

    The N1m component of the auditory evoked magnetic field in response to tones and complex sounds was examined in order to clarify whether the tonotopic representation in the human secondary auditory cortex is based on perceived pitch or the physical frequency spectrum of the sound. The investigated stimulus parameters were the fundamental frequencies (F0 = 250, 500 and 1000 Hz), the spectral composition of the higher harmonics of the missing fundamental sounds (2nd to 5th, 6th to 9th and 10th to 13th harmonic) and the frequencies of pure tones corresponding to F0 and to the lowest component of each complex sound. Tonotopic gradients showed that high frequencies were more medially located than low frequencies for the pure tones and for the centre frequency of the complex tones. Furthermore, in the superior-inferior direction, the tonotopic gradients were different between pure tones and complex sounds. The results were interpreted as reflecting different processing in the auditory cortex for pure tones and complex sounds. This hypothesis was supported by the result of evoked responses to complex sounds having longer latencies. A more pronounced tonotopic representation in the right hemisphere gave evidence for right hemispheric dominance in spectral processing.

  12. Quantification of airport community noise impact in terms of noise levels, population density, and human subjective response

    Science.gov (United States)

    Deloach, R.

    1981-01-01

    The Fraction Impact Method (FIM), developed by the National Research Council (NRC) for assessing the amount and physiological effect of noise, is described. Here, the number of people exposed to a given level of noise is multiplied by a weighting factor that depends on noise level. It is pointed out that the Aircraft-noise Levels and Annoyance MOdel (ALAMO), recently developed at NASA Langley Research Center, can perform the NRC fractional impact calculations for given modes of operation at any U.S. airport. The sensitivity of these calculations to errors in estimates of population, noise level, and human subjective response is discussed. It is found that a change in source noise causes a substantially smaller change in contour area than would be predicted simply on the basis of inverse square law considerations. Another finding is that the impact calculations are generally less sensitive to source noise errors than to systematic errors in population or subjective response.
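
    The fractional-impact bookkeeping described above, population exposed multiplied by a level-dependent weight and summed over noise-level bands, can be sketched as follows; the weighting curve and the example numbers are illustrative assumptions, not the NRC's published weighting function.

      def fractional_impact(exposure_bands, threshold_db=65.0, slope=0.05):
          """Sum of (population x weight(level)) over noise-level bands. The weight
          here is an illustrative linear ramp above a threshold, clipped to [0, 1]."""
          def weight(level_db):
              return max(0.0, min(1.0, slope * (level_db - threshold_db)))
          return sum(pop * weight(level) for level, pop in exposure_bands)

      # (day-night average sound level in dB, residents exposed) -- made-up numbers
      bands = [(60.0, 120000), (65.0, 45000), (70.0, 15000), (75.0, 4000)]
      print(f"Equivalent fully impacted population: {fractional_impact(bands):,.0f}")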

  13. Lipreading and covert speech production similarly modulate human auditory-cortex responses to pure tones.

    Science.gov (United States)

    Kauramäki, Jaakko; Jääskeläinen, Iiro P; Hari, Riitta; Möttönen, Riikka; Rauschecker, Josef P; Sams, Mikko

    2010-01-27

    Watching the lips of a speaker enhances speech perception. At the same time, the 100 ms response to speech sounds is suppressed in the observer's auditory cortex. Here, we used whole-scalp 306-channel magnetoencephalography (MEG) to study whether lipreading modulates human auditory processing already at the level of the most elementary sound features, i.e., pure tones. We further envisioned the temporal dynamics of the suppression to tell whether the effect is driven by top-down influences. Nineteen subjects were presented with 50 ms tones spanning six octaves (125-8000 Hz) (1) during "lipreading," i.e., when they watched video clips of silent articulations of Finnish vowels /a/, /i/, /o/, and /y/, and reacted to vowels presented twice in a row; (2) during a visual control task; (3) during a still-face passive control condition; and (4) in a separate experiment with a subset of nine subjects, during covert production of the same vowels. Auditory-cortex 100 ms responses (N100m) were equally suppressed in the lipreading and covert-speech-production tasks compared with the visual control and baseline tasks; the effects involved all frequencies and were most prominent in the left hemisphere. Responses to tones presented at different times with respect to the onset of the visual articulation showed significantly increased N100m suppression immediately after the articulatory gesture. These findings suggest that the lipreading-related suppression in the auditory cortex is caused by top-down influences, possibly by an efference copy from the speech-production system, generated during both own speech and lipreading.

  14. Hierarchical organization of speech perception in human auditory cortex

    Directory of Open Access Journals (Sweden)

    Colin eHumphries

    2014-12-01

    Full Text Available Human speech consists of a variety of articulated sounds that vary dynamically in spectral composition. We investigated the neural activity associated with the perception of two types of speech segments: (a) the period of rapid spectral transition occurring at the beginning of a stop-consonant vowel (CV) syllable and (b) the subsequent spectral steady-state period occurring during the vowel segment of the syllable. Functional magnetic resonance imaging (fMRI) was recorded while subjects listened to series of synthesized CV syllables and non-phonemic control sounds. Adaptation to specific sound features was measured by varying either the transition or steady-state periods of the synthesized sounds. Two spatially distinct brain areas in the superior temporal cortex were found that were sensitive to either the type of adaptation or the type of stimulus. In a relatively large section of the bilateral dorsal superior temporal gyrus (STG), activity varied as a function of adaptation type regardless of whether the stimuli were phonemic or non-phonemic. Immediately adjacent to this region, in a more limited area of the ventral STG, increased activity was observed for phonemic trials compared to non-phonemic trials; however, no adaptation effects were found. In addition, a third area in the bilateral medial superior temporal plane showed increased activity to non-phonemic compared to phonemic sounds. The results suggest a multi-stage hierarchical stream for speech sound processing extending ventrolaterally from the superior temporal plane to the superior temporal sulcus. At successive stages in this hierarchy, neurons code for increasingly more complex spectrotemporal features. At the same time, these representations become more abstracted from the original acoustic form of the sound.

  15. Individual differences in sound-in-noise perception are related to the strength of short-latency neural responses to noise.

    Directory of Open Access Journals (Sweden)

    Ekaterina Vinnik

    Full Text Available Important sounds can be easily missed or misidentified in the presence of extraneous noise. We describe an auditory illusion in which a continuous ongoing tone becomes inaudible during a brief, non-masking noise burst more than one octave away, which is unexpected given the frequency resolution of human hearing. Participants strongly susceptible to this illusory discontinuity did not perceive illusory auditory continuity (in which a sound subjectively continues during a burst of masking noise) when the noises were short, yet did so at longer noise durations. Participants who were not prone to illusory discontinuity showed robust early electroencephalographic responses at 40-66 ms after noise burst onset, whereas those prone to the illusion lacked these early responses. These data suggest that short-latency neural responses to auditory scene components reflect subsequent individual differences in the parsing of auditory scenes.

  16. Social and emotional values of sounds influence human (Homo sapiens) and non-human primate (Cercopithecus campbelli) auditory laterality.

    Science.gov (United States)

    Basile, Muriel; Lemasson, Alban; Blois-Heulin, Catherine

    2009-07-17

    The last decades have evidenced auditory laterality in vertebrates, offering important new insights into the origin of human language. Factors such as the social (e.g. specificity, familiarity) and emotional value of sounds have been shown to influence hemispheric specialization. However, little is known about the crossed effect of these two factors in animals. In addition, human-animal comparative studies using the same methodology are rare. In our study, we adapted the head-turn paradigm, a widely used non-invasive method, for 8-9-year-old schoolgirls and adult female Campbell's monkeys, focusing on head and/or eye orientations in response to sound playbacks. We broadcast communicative signals (monkeys: calls, humans: speech) emitted by familiar individuals presenting distinct degrees of social value (female monkeys: conspecific group members vs heterospecific neighbours; human girls: from the same vs a different classroom) and emotional value (monkeys: contact vs threat calls; humans: friendly vs aggressive intonation). We evidenced a crossed-categorical effect of social and emotional values in both species, since only "negative" voices from same-class/group members elicited a significant auditory laterality (Wilcoxon tests: monkeys, T = 0, p = 0.03; girls, T = 4.5, p = 0.03). Moreover, we found differences between species, as a left- and right-hemisphere preference was found in humans and monkeys, respectively. Furthermore, while monkeys almost exclusively responded by turning their head, girls sometimes also just moved their eyes. This study supports theories proposing differential roles for the two hemispheres in primates' auditory laterality and shows that more systematic species comparisons are needed before proposing evolutionary scenarios. Moreover, the choice of sound stimuli and behavioural measures in such studies should be the focus of careful attention.

  17. Social and emotional values of sounds influence human (Homo sapiens and non-human primate (Cercopithecus campbelli auditory laterality.

    Directory of Open Access Journals (Sweden)

    Muriel Basile

    Full Text Available The last decades have evidenced auditory laterality in vertebrates, offering important new insights into the origin of human language. Factors such as the social (e.g. specificity, familiarity) and emotional value of sounds have been shown to influence hemispheric specialization. However, little is known about the crossed effect of these two factors in animals. In addition, human-animal comparative studies using the same methodology are rare. In our study, we adapted the head-turn paradigm, a widely used non-invasive method, for 8-9-year-old schoolgirls and adult female Campbell's monkeys, focusing on head and/or eye orientations in response to sound playbacks. We broadcast communicative signals (monkeys: calls, humans: speech) emitted by familiar individuals presenting distinct degrees of social value (female monkeys: conspecific group members vs heterospecific neighbours; human girls: from the same vs a different classroom) and emotional value (monkeys: contact vs threat calls; humans: friendly vs aggressive intonation). We evidenced a crossed-categorical effect of social and emotional values in both species, since only "negative" voices from same-class/group members elicited a significant auditory laterality (Wilcoxon tests: monkeys, T = 0, p = 0.03; girls, T = 4.5, p = 0.03). Moreover, we found differences between species, as a left- and right-hemisphere preference was found in humans and monkeys, respectively. Furthermore, while monkeys almost exclusively responded by turning their head, girls sometimes also just moved their eyes. This study supports theories proposing differential roles for the two hemispheres in primates' auditory laterality and shows that more systematic species comparisons are needed before proposing evolutionary scenarios. Moreover, the choice of sound stimuli and behavioural measures in such studies should be the focus of careful attention.

  18. Task-specific modulation of human auditory evoked responses in a delayed-match-to-sample task

    Directory of Open Access Journals (Sweden)

    Feng eRong

    2011-05-01

    Full Text Available In this study, we focus our investigation on task-specific cognitive modulation of early cortical auditory processing in the human cerebral cortex. During the experiments, we acquired whole-head magnetoencephalography (MEG) data while participants were performing an auditory delayed-match-to-sample (DMS) task and associated control tasks. Using a spatial-filtering beamformer technique to simultaneously estimate multiple source activities inside the human brain, we observed a significant DMS-specific suppression of the auditory evoked response to the second stimulus in a sound pair, with the center of the effect being located in the vicinity of the left auditory cortex. For the right auditory cortex, a non-invariant suppression effect was observed in both DMS and control tasks. Furthermore, analysis of coherence revealed a beta-band (12-20 Hz) DMS-specific enhanced functional interaction between the sources in the left auditory cortex and those in the left inferior frontal gyrus, a region shown to be involved in short-term memory processing during the delay period of the DMS task. Our findings support the view that early evoked cortical responses to incoming acoustic stimuli can be modulated by task-specific cognitive functions by means of frontal-temporal functional interactions.

  19. Effect of prenatal loud music and noise on total number of neurons and glia, neuronal nuclear area and volume of chick brainstem auditory nuclei, field L and hippocampus: a stereological investigation.

    Science.gov (United States)

    Sanyal, Tania; Palanisamy, Pradeep; Nag, T C; Roy, T S; Wadhwa, Shashi

    2013-06-01

    The present study explores whether prenatal patterned and unpatterned sound of high sound pressure level (110 dB) has any differential effect on the morphology of brainstem auditory nuclei, field L (auditory cortex analog) and hippocampus in chicks (Gallus domesticus). The total number of neurons and glia, mean neuronal nuclear area and total volume of the brainstem auditory nuclei, field L and hippocampus of post-hatch day 1 chicks were determined in serial, cresyl violet-stained sections, using stereology software. All regions studied showed a significantly increased total volume, with increases in total neuron number and mean neuronal nuclear area, in the patterned-music-stimulated group as compared to controls. In contrast, the unpatterned-noise-stimulated group showed an attenuated volume with a reduction in total neuron number. The mean neuronal nuclear area was significantly reduced in the auditory nuclei and hippocampus but increased in field L. Glial cell number was significantly increased in both experimental groups, being highest in the noise group. The brainstem auditory nuclei and field L showed an increase in the glia-to-neuron ratio in the experimental groups as compared to controls. In the hippocampus the ratio remained unaltered between control and music groups, but was higher in the noise group. It is thus evident that, though the sound pressure level in both experimental groups was the same, there were differential changes in the morphological parameters of the brain regions studied, indicating that the characteristics of the sound had a role in mediating these effects.

  20. Auditory Contagious Yawning in Humans: An Investigation into Affiliation and Status Effects

    Directory of Open Access Journals (Sweden)

    Jorg J.M. Massen

    2015-11-01

    Full Text Available While comparative research on contagious yawning has grown substantially in the past few years, both the interpersonal factors influencing this response and the sensory modalities involved in its activation in humans remain relatively unknown. Extending upon previous studies showing various in-group and status effects in non-human great apes, we performed an initial study to investigate how the political affiliation (Democrat versus Republican) and status (high versus low) of target stimuli influence auditory contagious yawning, as well as the urge to yawn, in humans. Self-report responses and a subset of video recordings were analyzed from 118 undergraduate students in the US following exposure to either breathing (control) or yawning (experimental) vocalizations paired with images of former US Presidents (high status) and their respective Cabinet Secretaries of Commerce (low status). The overall results validate the use of auditory stimuli to prompt yawn contagion, with greater response in the experimental than the control condition. There was also a negative effect of political status on self-reported yawning and the self-reported urge to yawn irrespective of the condition. In contrast, we found no evidence for a political affiliation bias in this response. These preliminary findings are discussed in terms of the existing comparative evidence, though we highlight limitations in the current investigation and we provide suggestions for future research in this area.

  1. Differential effects of prenatal chronic high-decibel noise and music exposure on the excitatory and inhibitory synaptic components of the auditory cortex analog in developing chicks (Gallus gallus domesticus).

    Science.gov (United States)

    Kumar, V; Nag, T C; Sharma, U; Jagannathan, N R; Wadhwa, S

    2014-06-06

    Proper development of the auditory cortex depends on early acoustic experience that modulates the balance between excitatory and inhibitory (E/I) circuits. In the present social and occupational environment exposure to chronic loud sound in the form of occupational or recreational noise, is becoming inevitable. This could especially disrupt the functional auditory cortex development leading to altered processing of complex sound and hearing impairment. Here we report the effects of prenatal chronic loud sound (110-dB sound pressure level (SPL)) exposure (rhythmic [music] and arrhythmic [noise] forms) on the molecular components involved in regulation of the E/I balance in the developing auditory cortex analog/Field L (AuL) in domestic chicks. Noise exposure at 110-dB SPL significantly enhanced the E/I ratio (increased expression of AMPA receptor GluR2 subunit and glutamate with decreased expression of GABA(A) receptor gamma 2 subunit and GABA), whereas loud music exposure maintained the E/I ratio. Expressions of markers of synaptogenesis, synaptic stability and plasticity i.e., synaptophysin, PSD-95 and gephyrin were reduced with noise but increased with music exposure. Thus our results showed differential effects of prenatal chronic loud noise and music exposures on the E/I balance and synaptic function and stability in the developing auditory cortex. Loud music exposure showed an overall enrichment effect whereas loud noise-induced significant alterations in E/I balance could later impact the auditory function and associated cognitive behavior. Copyright © 2014 IBRO. Published by Elsevier Ltd. All rights reserved.

  2. Connectivity in the human brain dissociates entropy and complexity of auditory inputs.

    Science.gov (United States)

    Nastase, Samuel A; Iacovella, Vittorio; Davis, Ben; Hasson, Uri

    2015-03-01

    Complex systems are described according to two central dimensions: (a) the randomness of their output, quantified via entropy; and (b) their complexity, which reflects the organization of a system's generators. Whereas some approaches hold that complexity can be reduced to uncertainty or entropy, an axiom of complexity science is that signals with very high or very low entropy are generated by relatively non-complex systems, while complex systems typically generate outputs with entropy peaking between these two extremes. In understanding their environment, individuals would benefit from coding for both input entropy and complexity; entropy indexes uncertainty and can inform probabilistic coding strategies, whereas complexity reflects a concise and abstract representation of the underlying environmental configuration, which can serve independent purposes, e.g., as a template for generalization and rapid comparisons between environments. Using functional neuroimaging, we demonstrate that, in response to passively processed auditory inputs, functional integration patterns in the human brain track both the entropy and complexity of the auditory signal. Connectivity between several brain regions scaled monotonically with input entropy, suggesting sensitivity to uncertainty, whereas connectivity between other regions tracked entropy in a convex manner consistent with sensitivity to input complexity. These findings suggest that the human brain simultaneously tracks the uncertainty of sensory data and effectively models their environmental generators.

  3. Rate and adaptation effects on the auditory evoked brainstem response in human newborns and adults.

    Science.gov (United States)

    Lasky, R E

    1997-09-01

    Auditory evoked brainstem response (ABR) latencies increased and amplitudes decreased with increasing stimulus repetition rate for human newborns and adults. The wave V latency increases were larger for newborns than adults. The wave V amplitude decreases were smaller for newborns than adults. These differences could not be explained by developmental differences in frequency responsivity. The transition from the unadapted to the fully adapted response was less rapid in newborns than adults at short (≤10 ms) inter-stimulus intervals (ISIs). At longer ISIs (≥20 ms) there were no developmental differences in the transition to the fully adapted response. The newborn transition occurred in a two-stage process. The rapid initial stage observed in adults and newborns was complete by about 40 ms. A second, slower stage was observed only in newborns, although it has been observed in adults in other studies (Weatherby and Hecox, 1982; Lightfoot, 1991; Lasky et al., 1996). These effects were replicated at different stimulus intensities. After the termination of stimulation, the return to the unadapted wave V response took nearly 500 ms in newborns. Neither the newborn nor the adult data can be explained by forward masking of one click on the next click. These results indicate human developmental differences in adaptation to repetitive auditory stimulation at the level of the brainstem.

  4. Dynamic Range Adaptation to Spectral Stimulus Statistics in Human Auditory Cortex

    Science.gov (United States)

    Schlichting, Nadine; Obleser, Jonas

    2014-01-01

    Classically, neural adaptation refers to a reduction in response magnitude by sustained stimulation. In human electroencephalography (EEG), neural adaptation has been measured, for example, as frequency-specific response decrease by previous stimulation. Only recently and mainly based on animal studies, it has been suggested that statistical properties in the stimulation lead to adjustments of neural sensitivity and affect neural response adaptation. However, it is thus far unresolved which statistical parameters in the acoustic stimulation spectrum affect frequency-specific neural adaptation, and on which time scales the effects take place. The present human EEG study investigated the potential influence of the overall spectral range as well as the spectral spacing of the acoustic stimulation spectrum on frequency-specific neural adaptation. Tones randomly varying in frequency were presented passively and computational modeling of frequency-specific neural adaptation was used. Frequency-specific adaptation was observed for all presentation conditions. Critically, however, the spread of adaptation (i.e., degree of coadaptation) in tonotopically organized regions of auditory cortex changed with the spectral range of the acoustic stimulation. In contrast, spectral spacing did not affect the spread of frequency-specific adaptation. Therefore, changes in neural sensitivity in auditory cortex are directly coupled to the overall spectral range of the acoustic stimulation, which suggests that neural adjustments to spectral stimulus statistics occur over a time scale of multiple seconds. PMID:24381293

  5. Short GSM mobile phone exposure does not alter human auditory brainstem response

    Directory of Open Access Journals (Sweden)

    Thuróczy György

    2007-11-01

    Full Text Available Abstract Background There are about 1.6 billion GSM cellular phones in use throughout the world today. Numerous papers have reported various biological effects in humans exposed to electromagnetic fields emitted by mobile phones. The aim of the present study was to advance our understanding of potential adverse effects of GSM mobile phones on the human hearing system. Methods The auditory brainstem response (ABR) was recorded with three non-polarizing Ag-AgCl scalp electrodes in thirty young and healthy volunteers (age 18–26 years) with normal hearing. ABR data were collected before, and immediately after, a 10-minute exposure to a 900 MHz pulsed electromagnetic field (EMF) emitted by a commercial Nokia 6310 mobile phone. Fifteen subjects were exposed to genuine EMF and fifteen to sham EMF in a double-blind and counterbalanced order. Possible effects of irradiation were analyzed by comparing the latency of ABR waves I, III and V before and after genuine/sham EMF exposure. Results A paired-sample t-test was conducted for statistical analysis. Results revealed no significant differences in the latency of ABR waves I, III and V before and after 10 minutes of genuine/sham EMF exposure. Conclusion The present results suggest that, in our experimental conditions, a single 10-minute exposure to 900 MHz EMF emitted by a commercial mobile phone does not produce measurable immediate effects in the latency of auditory brainstem waves I, III and V.

  6. The possible influence of noise frequency components on the health of exposed industrial workers--a review.

    Science.gov (United States)

    Mahendra Prashanth, K V; Venugopalachar, Sridhar

    2011-01-01

    Noise is a common occupational health hazard in most industrial settings. An assessment of noise and its adverse health effects based on noise intensity alone is inadequate. For an efficient evaluation of noise effects, frequency spectrum analysis should also be included. This paper aims to substantiate the importance of studying the contribution of noise frequencies when evaluating health effects and their association with physiological behavior within the human body. Additionally, a review of studies published between 1988 and 2009 that investigate the impact of industrial/occupational noise on auditory and non-auditory effects, and the probable association and contribution of noise frequency components to these effects, is presented. The relevant studies in English were identified in Medknow, Medline, Wiley, Elsevier, and Springer publications. Data were extracted from studies that fulfilled the following criterion: the title and/or abstract involved industrial/occupational noise exposure in relation to auditory and non-auditory effects or health effects. Significant data on the study characteristics, including noise frequency characteristics, were considered in the assessment. It is demonstrated that only a few studies have considered frequency contributions in their investigations, and only for auditory effects, not for non-auditory effects. The data suggest that the significant adverse health effects of industrial noise include auditory and heart-related problems. The study provides strong evidence for the claim that noise with a dominant frequency component around 4 kHz has auditory effects but, being deficient in data, fails to show any influence of noise frequency components on non-auditory effects. Furthermore, specific noise levels and frequencies predicting the corresponding health impacts have not yet been validated. There is a need for further research to clarify the importance of the dominant noise frequency contribution in

  7. Shaping the aging brain: Role of auditory input patterns in the emergence of auditory cortical impairments

    Directory of Open Access Journals (Sweden)

    Brishna Soraya Kamal

    2013-09-01

    Full Text Available Age-related impairments in the primary auditory cortex (A1) include poor tuning selectivity, neural desynchronization and degraded responses to low-probability sounds. These changes have been largely attributed to reduced inhibition in the aged brain, and are thought to contribute to substantial hearing impairment in both humans and animals. Since many of these changes can be partially reversed with auditory training, it has been speculated that they might not be purely degenerative, but might rather represent negative plastic adjustments to noisy or distorted auditory signals reaching the brain. To test this hypothesis, we examined the impact of exposing young adult rats to 8 weeks of low-grade broadband noise on several aspects of A1 function and structure. We then characterized the same A1 elements in aging rats for comparison. We found that the impact of noise exposure on A1 tuning selectivity, temporal processing of auditory signal and responses to oddball tones was almost indistinguishable from the effect of natural aging. Moreover, noise exposure resulted in a reduction in the population of parvalbumin inhibitory interneurons and cortical myelin as previously documented in the aged group. Most of these changes reversed after returning the rats to a quiet environment. These results support the hypothesis that age-related changes in A1 have a strong activity-dependent component and indicate that the presence or absence of clear auditory input patterns might be a key factor in sustaining adult A1 function.

  8. Tracing the emergence of categorical speech perception in the human auditory system.

    Science.gov (United States)

    Bidelman, Gavin M; Moreno, Sylvain; Alain, Claude

    2013-10-01

    Speech perception requires the effortless mapping from smooth, seemingly continuous changes in sound features into discrete perceptual units, a conversion exemplified in the phenomenon of categorical perception. Explaining how/when the human brain performs this acoustic-phonetic transformation remains an elusive problem in current models and theories of speech perception. In previous attempts to decipher the neural basis of speech perception, it is often unclear whether the alleged brain correlates reflect an underlying percept or merely changes in neural activity that covary with parameters of the stimulus. Here, we recorded neuroelectric activity generated at both cortical and subcortical levels of the auditory pathway elicited by a speech vowel continuum whose percept varied categorically from /u/ to /a/. This integrative approach allows us to characterize how various auditory structures code, transform, and ultimately render the perception of speech material as well as dissociate brain responses reflecting changes in stimulus acoustics from those that index true internalized percepts. We find that activity from the brainstem mirrors properties of the speech waveform with remarkable fidelity, reflecting progressive changes in speech acoustics but not the discrete phonetic classes reported behaviorally. In comparison, patterns of late cortical evoked activity contain information reflecting distinct perceptual categories and predict the abstract phonetic speech boundaries heard by listeners. Our findings demonstrate a critical transformation in neural speech representations between brainstem and early auditory cortex analogous to an acoustic-phonetic mapping necessary to generate categorical speech percepts. Analytic modeling demonstrates that a simple nonlinearity accounts for the transformation between early (subcortical) brain activity and subsequent cortical/behavioral responses to speech (>150-200 ms) thereby describing a plausible mechanism by which the

  9. Sensitivity of the human auditory cortex to acoustic degradation of speech and non-speech sounds

    Directory of Open Access Journals (Sweden)

    Tiitinen Hannu

    2010-02-01

    Full Text Available Abstract Background Recent studies have shown that the human right-hemispheric auditory cortex is particularly sensitive to reduction in sound quality, with an increase in distortion resulting in an amplification of the auditory N1m response measured in the magnetoencephalography (MEG). Here, we examined whether this sensitivity is specific to the processing of acoustic properties of speech or whether it can be observed also in the processing of sounds with a simple spectral structure. We degraded speech stimuli (vowel /a/), complex non-speech stimuli (a composite of five sinusoidals), and sinusoidal tones by decreasing the amplitude resolution of the signal waveform. The amplitude resolution was impoverished by reducing the number of bits to represent the signal samples. Auditory evoked magnetic fields (AEFs) were measured in the left and right hemisphere of sixteen healthy subjects. Results We found that the AEF amplitudes increased significantly with stimulus distortion for all stimulus types, which indicates that the right-hemispheric N1m sensitivity is not related exclusively to degradation of acoustic properties of speech. In addition, the P1m and P2m responses were amplified with increasing distortion similarly in both hemispheres. The AEF latencies were not systematically affected by the distortion. Conclusions We propose that the increased activity of AEFs reflects cortical processing of acoustic properties common to both speech and non-speech stimuli. More specifically, the enhancement is most likely caused by spectral changes brought about by the decrease of amplitude resolution, in particular the introduction of periodic, signal-dependent distortion to the original sound. Converging evidence suggests that the observed AEF amplification could reflect cortical sensitivity to periodic sounds.
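
    The amplitude-resolution manipulation described in the abstract, reducing the number of bits used to represent each signal sample, can be illustrated with a short sketch; the bit depths and the test tone are arbitrary examples rather than the study's stimuli.

      import numpy as np

      def reduce_bit_depth(x, n_bits):
          """Uniformly quantize a signal in [-1, 1] to n_bits of amplitude resolution,
          introducing the periodic, signal-dependent distortion discussed above."""
          levels = 2 ** (n_bits - 1)
          return np.round(x * levels) / levels

      fs = 16000
      t = np.arange(fs // 10) / fs
      tone = 0.9 * np.sin(2 * np.pi * 440 * t)        # a simple stand-in stimulus
      for bits in (16, 4, 2):
          q = reduce_bit_depth(tone, bits)
          err = q - tone
          snr_db = 10 * np.log10(np.sum(tone ** 2) / np.sum(err ** 2))
          print(f"{bits:2d} bits -> quantization SNR ~ {snr_db:5.1f} dB")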

  10. Reducing the Effects of Background Noise during Auditory Functional Magnetic Resonance Imaging of Speech Processing: Qualitative and Quantitative Comparisons between Two Image Acquisition Schemes and Noise Cancellation

    Science.gov (United States)

    Blackman, Graham A.; Hall, Deborah A.

    2011-01-01

    Purpose: The intense sound generated during functional magnetic resonance imaging (fMRI) complicates studies of speech and hearing. This experiment evaluated the benefits of using active noise cancellation (ANC), which attenuates the level of the scanner sound at the participant's ear by up to 35 dB around the peak at 600 Hz. Method: Speech and…

  12. A general auditory bias for handling speaker variability in speech? Evidence in humans and songbirds

    Directory of Open Access Journals (Sweden)

    Buddhamas eKriengwatana

    2015-08-01

    Full Text Available Different speakers produce the same speech sound differently, yet listeners are still able to reliably identify the speech sound. How listeners can adjust their perception to compensate for speaker differences in speech, and whether these compensatory processes are unique to humans, is still not fully understood. In this study we compare the ability of humans and zebra finches to categorize vowels despite speaker variation in speech, in order to test the hypothesis that accommodating speaker and gender differences in isolated vowels can be achieved without prior experience with speaker-related variability. Using a behavioural Go/No-go task and identical stimuli, we compared Australian English adults' (naïve to Dutch) and zebra finches' (naïve to human speech) ability to categorize /ɪ/ and /ɛ/ vowels of a novel Dutch speaker after learning to discriminate those vowels from only one other speaker. Experiments 1 and 2 presented vowels of two speakers interspersed or blocked, respectively. The results demonstrate that categorization of vowels is possible without prior exposure to speaker-related variability in speech for zebra finches, and in non-native vowel categories for humans. Therefore, this study is the first to provide evidence for what might be a species-shared auditory bias that may supersede speaker-related information during vowel categorization. It additionally provides behavioural evidence contradicting a prior hypothesis that accommodation of speaker differences is achieved via the use of formant ratios. Therefore, investigations of alternative accounts of vowel normalization that incorporate the possibility of an auditory bias for disregarding inter-speaker variability are warranted.

  13. Effect of Bluetooth headset and mobile phone electromagnetic fields on the human auditory nerve.

    Science.gov (United States)

    Mandalà, Marco; Colletti, Vittorio; Sacchetto, Luca; Manganotti, Paolo; Ramat, Stefano; Marcocci, Alessandro; Colletti, Liliana

    2014-01-01

    The possibility that long-term mobile phone use increases the incidence of astrocytoma, glioma and acoustic neuroma has been investigated in several studies. Recently, our group showed that direct exposure (in a surgical setting) to cell phone electromagnetic fields (EMFs) induces deterioration of auditory evoked cochlear nerve compound action potential (CNAP) in humans. To verify whether the use of Bluetooth devices reduces these effects, we conducted the present study with the same experimental protocol. Randomized trial. Twelve patients underwent retrosigmoid vestibular neurectomy to treat definite unilateral Ménière's disease while being monitored with acoustically evoked CNAPs to assess direct mobile phone exposure or alternatively the EMF effects of Bluetooth headsets. We found no short-term effects of Bluetooth EMFs on the auditory nervous structures, whereas direct mobile phone EMF exposure confirmed a significant decrease in CNAPs amplitude and an increase in latency in all subjects. The outcomes of the present study show that, contrary to the finding that the latency and amplitude of CNAPs are very sensitive to EMFs produced by the tested mobile phone, the EMFs produced by a common Bluetooth device do not induce any significant change in cochlear nerve activity. The conditions of exposure, therefore, differ from those of everyday life, in which various biological tissues may reduce the EMF affecting the cochlear nerve. Nevertheless, these novel findings may have important safety implications. © 2013 The American Laryngological, Rhinological and Otological Society, Inc.

  14. Context-dependent encoding in the auditory brainstem subserves enhanced speech-in-noise perception in musicians.

    Science.gov (United States)

    Parbery-Clark, A; Strait, D L; Kraus, N

    2011-10-01

    Musical training strengthens speech perception in the presence of background noise. Given that the ability to make use of speech sound regularities, such as pitch, underlies perceptual acuity in challenging listening environments, we asked whether musicians' enhanced speech-in-noise perception is facilitated by increased neural sensitivity to acoustic regularities. To this aim we examined subcortical encoding of the same speech syllable presented in predictable and variable conditions and speech-in-noise perception in 31 musicians and nonmusicians. We anticipated that musicians would demonstrate greater neural enhancement of speech presented in the predictable compared to the variable condition than nonmusicians. Accordingly, musicians demonstrated more robust neural encoding of the fundamental frequency (i.e., pitch) of speech presented in the predictable relative to the variable condition than nonmusicians. The degree of neural enhancement observed to predictable speech correlated with subjects' musical practice histories as well as with their speech-in-noise perceptual abilities. Taken together, our findings suggest that subcortical sensitivity to speech regularities is shaped by musical training and may contribute to musicians' enhanced speech-in-noise perception. Copyright © 2011 Elsevier Ltd. All rights reserved.

  15. Noise Effects on Human Performance: A Meta-Analytic Synthesis

    Science.gov (United States)

    Szalma, James L.; Hancock, Peter A.

    2011-01-01

    Noise is a pervasive and influential source of stress. Whether through the acute effects of impulse noise or the chronic influence of prolonged exposure, the challenge of noise confronts many who must accomplish vital performance duties in its presence. Although noise has diffuse effects, which are shared in common with many other chronic forms of…

  16. Signals and noise in the octavolateralis systems: what is the impact of human activities on fish sensory function?

    Science.gov (United States)

    Braun, Christopher B

    2015-01-01

    The octavolateralis systems of fishes include the vestibular, auditory, lateral line and electrosensory systems. They are united by common developmental and neuro-computational features, including hair cell sensors and computations based on cross-neuron analyses of differential hair cell stimulation patterns. These systems also all use both spectral and temporal filters to separate signals from each other and from noise, and the distributed senses (lateral line and electroreception) add spatial filters as well. Like all sensory systems, these sensors must provide the animal with guidance for adaptive behavior within a sensory scene composed of multiple stimuli and varying levels of ambient noise, including that created by human activities. In the extreme, anthropogenic activities impact the octavolateralis systems by destroying or degrading the habitats that provide ecological resources and sensory inputs. At slightly lesser levels of effect, anthropogenic pollutants can be damaging to fish tissues, with sensory organs often the most vulnerable. The exposed sensory cells of the lateral line and electrosensory systems are especially sensitive to aquatic pollution. At still lesser levels of impact, anthropogenic activities can act as both acute and chronic stressors, activating hormonal changes that may affect behavioral and sensory function. Finally, human activities are now a nearly ubiquitous presence in aquatic habitats, often with no obvious effects on the animals exposed to them. Ship noise, indigenous and industrial fishing techniques, and all the ancillary noises of human civilization form a major part of the soundscape of fishes. How fish use these new sources of information about their habitat is a new and burgeoning field of study. © 2014 International Society of Zoological Sciences, Institute of Zoology/Chinese Academy of Sciences and Wiley Publishing Asia Pty Ltd.

  17. Functional Mapping of the Human Auditory Cortex: fMRI Investigation of a Patient with Auditory Agnosia from Trauma to the Inferior Colliculus.

    Science.gov (United States)

    Poliva, Oren; Bestelmeyer, Patricia E G; Hall, Michelle; Bultitude, Janet H; Koller, Kristin; Rafal, Robert D

    2015-09-01

    To use functional magnetic resonance imaging to map the auditory cortical fields that are activated, or nonreactive, to sounds in patient M.L., who has auditory agnosia caused by trauma to the inferior colliculi. The patient cannot recognize speech or environmental sounds. Her discrimination is greatly facilitated by context and visibility of the speaker's facial movements, and under forced-choice testing. Her auditory temporal resolution is severely compromised. Her discrimination is more impaired for words differing in voice onset time than place of articulation. Words presented to her right ear are extinguished with dichotic presentation; auditory stimuli in the right hemifield are mislocalized to the left. We used functional magnetic resonance imaging to examine cortical activations to different categories of meaningful sounds embedded in a block design. Sounds activated the caudal sub-area of M.L.'s primary auditory cortex (hA1) bilaterally and her right posterior superior temporal gyrus (auditory dorsal stream), but not the rostral sub-area (hR) of her primary auditory cortex or the anterior superior temporal gyrus in either hemisphere (auditory ventral stream). Auditory agnosia reflects dysfunction of the auditory ventral stream. The ventral and dorsal auditory streams are already segregated as early as the primary auditory cortex, with the ventral stream projecting from hR and the dorsal stream from hA1. M.L.'s leftward localization bias, preserved audiovisual integration, and phoneme perception are explained by preserved processing in her right auditory dorsal stream.

  18. Motor-Auditory-Visual Integration: The Role of the Human Mirror Neuron System in Communication and Communication Disorders

    Science.gov (United States)

    Le Bel, Ronald M.; Pineda, Jaime A.; Sharma, Anu

    2009-01-01

    The mirror neuron system (MNS) is a trimodal system composed of neuronal populations that respond to motor, visual, and auditory stimulation, such as when an action is performed, observed, heard or read about. In humans, the MNS has been identified using neuroimaging techniques (such as fMRI and mu suppression in the EEG). It reflects an…

  20. Music-induced cortical plasticity and lateral inhibition in the human auditory cortex as foundations for tonal tinnitus treatment

    Directory of Open Access Journals (Sweden)

    Christo ePantev

    2012-06-01

    Full Text Available Over the past 15 years, we have studied plasticity in the human auditory cortex by means of magnetoencephalography (MEG). Two main topics nurtured our curiosity: the effects of musical training on plasticity in the auditory system, and the effects of lateral inhibition. One of our plasticity studies found that listening to notched music for three hours inhibited the neuronal activity in the auditory cortex that corresponded to the center-frequency of the notch, suggesting suppression of neural activity by lateral inhibition. Crucially, the overall effects of lateral inhibition on human auditory cortical activity were stronger than the habituation effects. Based on these results we developed a novel treatment strategy for tonal tinnitus - tailor-made notched music training (TMNMT). By notching the music energy spectrum around the individual tinnitus frequency, we intended to attract lateral inhibition to auditory neurons involved in tinnitus perception. So far, the training strategy has been evaluated in two studies. The results of the initial long-term controlled study (12 months) supported the validity of the treatment concept: subjective tinnitus loudness and annoyance were significantly reduced after TMNMT but not when notching spared the tinnitus frequencies. Correspondingly, tinnitus-related auditory evoked fields (AEFs) were significantly reduced after training. The subsequent short-term (5 days) training study indicated that training was more effective in the case of tinnitus frequencies ≤ 8 kHz compared to tinnitus frequencies > 8 kHz, and that training should be employed over a long term in order to induce more persistent effects. Further development and evaluation of TMNMT therapy are planned. A goal is to transfer this novel, completely non-invasive, and low-cost treatment approach for tonal tinnitus into routine clinical practice.
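
    A minimal sketch of the spectral notching behind TMNMT, removing energy in roughly a one-octave band centred on the listener's tinnitus frequency, is given below. The filter design, notch width, and example tinnitus frequency are illustrative assumptions, and white noise stands in for the music signal.

      import numpy as np
      from scipy.signal import butter, sosfiltfilt

      def notch_around(audio, fs, tinnitus_hz, half_octaves=0.5, order=6):
          """Band-stop filter spanning +/- half an octave around tinnitus_hz."""
          lo = tinnitus_hz * 2.0 ** (-half_octaves)
          hi = tinnitus_hz * 2.0 ** (half_octaves)
          sos = butter(order, [lo, hi], btype="bandstop", fs=fs, output="sos")
          return sosfiltfilt(sos, audio)

      fs = 44100
      music = np.random.randn(fs)                     # stand-in for one second of music
      notched = notch_around(music, fs, tinnitus_hz=6000)

      # Verify that energy in the notched octave is strongly attenuated.
      freqs = np.fft.rfftfreq(music.size, 1.0 / fs)
      band = (freqs >= 6000 / np.sqrt(2)) & (freqs <= 6000 * np.sqrt(2))
      before = np.abs(np.fft.rfft(music)[band]) ** 2
      after = np.abs(np.fft.rfft(notched)[band]) ** 2
      print("attenuation in notch band: %.1f dB" % (10 * np.log10(before.sum() / after.sum())))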

  1. Guidelines for roadless area campsite spacing to minimize impact of human-related noises.

    Science.gov (United States)

    Tom Dailey; Dave. Redman

    1975-01-01

    This report offers guidelines for campsite spacing and location in roadless areas to allow several levels of insulation from noise impacts between camping parties. The guidelines are based on the distance that different human-related noises travel in a variety of outdoor settings. The physical and psychological properties of these noises are described and discussed....

  2. Auditory-like filterbank: An optimal speech processor for efficient human speech communication

    Indian Academy of Sciences (India)

    Prasanta Kumar Ghosh; Louis M Goldstein; Shrikanth S Narayanan

    2011-10-01

    The transmitter and the receiver in a communication system have to be designed optimally with respect to one another to ensure reliable and efficient communication. Following this principle, we derive an optimal filterbank for processing speech signal in the listener’s auditory system (receiver), so that maximum information about the talker’s (transmitter) message can be obtained from the filterbank output, leading to efficient communication between the talker and the listener. We consider speech data of 45 talkers from three different languages for designing optimal filterbanks separately for each of them. We find that the computationally derived optimal filterbanks are similar to the empirically established auditory (cochlear) filterbank in the human ear. We also find that the output of the empirically established auditory filterbank provides more than 90% of the maximum information about the talker’s message provided by the output of the optimal filterbank. Our experimental findings suggest that the auditory filterbank in human ear functions as a near-optimal speech processor for achieving efficient speech communication between humans.
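
    The record does not give the parameters of either the derived optimal filterbank or the reference auditory filterbank. As a rough, purely illustrative sketch of the "auditory-like" spacing referred to, the following Python snippet computes filter center frequencies spaced uniformly on the ERB-rate scale of Glasberg and Moore (1990), a standard approximation of cochlear filter spacing; the function names and parameter values are assumptions, not taken from the study.

```python
import numpy as np

def erb_center_frequencies(n_filters=30, f_low=100.0, f_high=8000.0):
    """Center frequencies (Hz) spaced uniformly on the ERB-rate scale
    (Glasberg & Moore, 1990), a common stand-in for cochlear filter spacing."""
    erb_rate = lambda f: 21.4 * np.log10(4.37e-3 * f + 1.0)    # Hz -> ERB number
    inv_erb = lambda e: (10.0 ** (e / 21.4) - 1.0) / 4.37e-3   # ERB number -> Hz
    return inv_erb(np.linspace(erb_rate(f_low), erb_rate(f_high), n_filters))

# Example: ten illustrative center frequencies between 100 Hz and 8 kHz.
print(np.round(erb_center_frequencies(10), 1))
```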

  3. Mechanism of auditory hypersensitivity in human autism using autism model rats.

    Science.gov (United States)

    Ida-Eto, Michiru; Hara, Nao; Ohkawara, Takeshi; Narita, Masaaki

    2017-04-01

    Auditory hypersensitivity is one of the major complications in autism spectrum disorder. The aim of this study was to investigate whether the auditory brain center is affected in autism model rats. Autism model rats were prepared by prenatal exposure to thalidomide on embryonic day 9 and 10 in pregnant rats. The superior olivary complex (SOC), a complex of auditory nuclei, was immunostained with anti-calbindin d28k antibody at postnatal day 50. In autism model rats, SOC immunoreactivity was markedly decreased. Strength of immunostaining of SOC auditory fibers was also weak in autism model rats. Surprisingly, the size of the medial nucleus of trapezoid body, a nucleus exerting inhibitory function in SOC, was significantly decreased in autism model rats. Auditory hypersensitivity may be, in part, due to impairment of inhibitory processing by the auditory brain center. © 2016 Japan Pediatric Society.

  4. Perceptual consequences of disrupted auditory nerve activity.

    Science.gov (United States)

    Zeng, Fan-Gang; Kong, Ying-Yee; Michalewski, Henry J; Starr, Arnold

    2005-06-01

    Perceptual consequences of disrupted auditory nerve activity were systematically studied in 21 subjects who had been clinically diagnosed with auditory neuropathy (AN), a recently defined disorder characterized by normal outer hair cell function but disrupted auditory nerve function. Neurological and electrophysiological evidence suggests that disrupted auditory nerve activity is due to desynchronized or reduced neural activity or both. Psychophysical measures showed that the disrupted neural activity has minimal effects on intensity-related perception, such as loudness discrimination, pitch discrimination at high frequencies, and sound localization using interaural level differences. In contrast, the disrupted neural activity significantly impairs timing-related perception, such as pitch discrimination at low frequencies, temporal integration, gap detection, temporal modulation detection, backward and forward masking, signal detection in noise, binaural beats, and sound localization using interaural time differences. These perceptual consequences are the opposite of what is typically observed in cochlear-impaired subjects who have impaired intensity perception but relatively normal temporal processing after taking their impaired intensity perception into account. These differences in perceptual consequences between auditory neuropathy and cochlear damage suggest the use of different neural codes in auditory perception: a suboptimal spike count code for intensity processing, a synchronized spike code for temporal processing, and a duplex code for frequency processing. We also proposed two underlying physiological models based on desynchronized and reduced discharge in the auditory nerve to successfully account for the observed neurological and behavioral data. These methods and measures cannot differentiate between these two AN models, but future studies using electric stimulation of the auditory nerve via a cochlear implant might. These results not only show the unique

  5. Visual activation and audiovisual interactions in the auditory cortex during speech perception: intracranial recordings in humans.

    Science.gov (United States)

    Besle, Julien; Fischer, Catherine; Bidet-Caulet, Aurélie; Lecaignard, Francoise; Bertrand, Olivier; Giard, Marie-Hélène

    2008-12-24

    Hemodynamic studies have shown that the auditory cortex can be activated by visual lip movements and is a site of interactions between auditory and visual speech processing. However, they provide no information about the chronology and mechanisms of these cross-modal processes. We recorded intracranial event-related potentials to auditory, visual, and bimodal speech syllables from depth electrodes implanted in the temporal lobe of 10 epileptic patients (altogether 932 contacts). We found that lip movements activate secondary auditory areas, very shortly (approximately 10 ms) after the activation of the visual motion area MT/V5. After this putatively feedforward visual activation of the auditory cortex, audiovisual interactions took place in the secondary auditory cortex, from 30 ms after sound onset and before any activity in the polymodal areas. Audiovisual interactions in the auditory cortex, as estimated in a linear model, consisted of both a total suppression of the visual response to lipreading and a decrease of the auditory responses to the speech sound in the bimodal condition compared with unimodal conditions. These findings demonstrate that audiovisual speech integration does not respect the classical hierarchy from sensory-specific to associative cortical areas, but rather engages multiple cross-modal mechanisms at the first stages of nonprimary auditory cortex activation.

  6. Spoken word memory traces within the human auditory cortex revealed by repetition priming and functional magnetic resonance imaging.

    Science.gov (United States)

    Gagnepain, Pierre; Chételat, Gael; Landeau, Brigitte; Dayan, Jacques; Eustache, Francis; Lebreton, Karine

    2008-05-14

    Previous neuroimaging studies in the visual domain have shown that neurons along the perceptual processing pathway retain the physical properties of written words, faces, and objects. The aim of this study was to reveal the existence of similar neuronal properties within the human auditory cortex. Brain activity was measured using functional magnetic resonance imaging during a repetition priming paradigm, with words and pseudowords heard in an acoustically degraded format. Both the amplitude and peak latency of the hemodynamic response (HR) were assessed to determine the nature of the neuronal signature of spoken word priming. A statistically significant stimulus type by repetition interaction was found in various bilateral auditory cortical areas, demonstrating either HR suppression and enhancement for repeated spoken words and pseudowords, respectively, or word-specific repetition suppression without any significant effects for pseudowords. Repetition latency shift only occurred with word-specific repetition suppression in the right middle/posterior superior temporal sulcus. In this region, both repetition suppression and latency shift were related to behavioral priming. Our findings highlight for the first time the existence of long-term spoken word memory traces within the human auditory cortex. The timescale of auditory information integration and the neuronal mechanisms underlying priming both appear to differ according to the level of representations coded by neurons. Repetition may "sharpen" word-nonspecific representations coding short temporal variations, whereas a complex interaction between the activation strength and temporal integration of neuronal activity may occur in neuronal populations coding word-specific representations within longer temporal windows.

  7. [Stapedial reflex under auditory masking of bone canal: 1- The effect of white noise on threshold values].

    Science.gov (United States)

    Cassano, P; Mininni, F; Paulillo, A

    1983-11-30

    The authors studied the behaviour of the acoustic reflex (A.R.) threshold under bone-conducted masking delivered to the vertex. The masking altered the recorded trace: a change in compliance was observed in 60% of the subjects, and a rhythmic waving of the (isoelectric) baseline was observed in 40% of the subjects. Despite these changes, the A.R. was recorded at test-tone levels similar (almost equal) to those recorded without any masker, even though a sometimes large threshold shift existed because of the high intensity of the masking noise. The authors are conducting further research to clarify the meaning of these changes.

  8. Efeitos auditivos da exposição combinada: interação entre monóxido de carbono, ruído e tabagismo Auditory effects of combined exposure: interaction between carbon monoxide, noise and smoking

    Directory of Open Access Journals (Sweden)

    Débora Gonçalves Ferreira

    2012-12-01

    Full Text Available PURPOSE: To analyze the auditory effects of combined exposure to carbon monoxide (CO) and noise, and the impact of smoking. METHODS: Participants were 80 male workers, smokers and non-smokers, from a steel plant; 40 were exposed to CO and noise and 40 only to noise. A retrospective analysis was conducted of the data on environmental hazards (CO and noise) and of the information in medical records concerning hearing health and blood CO concentrations (COHb). The baseline and most recent pure-tone audiograms were analyzed, along with the auditory thresholds as a function of smoking, type of exposure (CO and noise, or noise only), exposure time, noise level, and age. RESULTS: Both the CO concentration and the noise levels were above the tolerance limits set by Regulatory Standard 15 of the Ministry of Labor. The group exposed to CO and noise showed more cases of noise-induced hearing loss (22.5%) than the group exposed only to noise (7.5%), and also showed significantly worse auditory thresholds at 3, 4 and 6 kHz. Age, length of service, type of exposure, noise level, and smoking were all found to significantly influence the participants' auditory thresholds. Smoking potentiated the effects of both CO and noise on the auditory system. CONCLUSION: Significant auditory effects were identified in the hearing of steel plant workers exposed to CO.

  9. Representation of lateralization and tonotopy in primary versus secondary human auditory cortex

    NARCIS (Netherlands)

    Langers, Dave R. M.; Backes, Walter H.; van Dijk, Pim

    2007-01-01

    Functional MRI was performed to investigate differences in the basic functional organization of the primary and secondary auditory cortex regarding preferred stimulus lateralization and frequency. A modified sparse acquisition scheme was used to spatially map the characteristics of the auditory cortex…

  10. Selective attention and the auditory vertex potential. I - Effects of stimulus delivery rate. II - Effects of signal intensity and masking noise

    Science.gov (United States)

    Schwent, V. L.; Hillyard, S. A.; Galambos, R.

    1976-01-01

    The effects of varying the rate of delivery of dichotic tone pip stimuli on selective attention measured by evoked-potential amplitudes and signal detectability scores were studied. The subjects attended to one channel (ear) of tones, ignored the other, and pressed a button whenever occasional targets (tones of a slightly higher pitch) were detected in the attended ear. Under separate conditions, randomized interstimulus intervals were short, medium, and long. Another study compared the effects of attention on the N1 component of the auditory evoked potential for tone pips presented alone and when white noise was added to make the tones barely above detectability threshold in a three-channel listening task. Major conclusions are that (1) N1 is enlarged to stimuli in an attended channel only in the short interstimulus interval condition (averaging 350 msec), (2) N1 and P3 are related to different modes of selective attention, and (3) attention selectivity in a multichannel listening task is greater when tones are faint and/or difficult to detect.

  11. [Auditory fatigue].

    Science.gov (United States)

    Sanjuán Juaristi, Julio; Sanjuán Martínez-Conde, Mar

    2015-01-01

    Given the relevance of possible hearing losses caused by sound overload, and the scarcity of objective procedures for studying them, we describe a technique that gives precise data on the audiometric profile and the recruitment factor. Our objectives were to determine peripheral fatigue, through the cochlear microphonic response to sound-pressure overload stimuli, and to measure recovery time, establishing parameters that allow differentiation from current psychoacoustic and clinical findings. We used instruments specific to the study of the cochlear microphonic response, plus a function generator that provided stimuli of different intensities and harmonic content. In Wistar rats, we first measured the normal microphonic response and then the effect of auditory fatigue on it. Using a 60 dB pure-tone stimulus, we obtained a microphonic response at 20 dB. We then induced fatigue with 100 dB at the same frequency, reaching a loss of approximately 11 dB after 15 minutes; after that, the deterioration slowed and did not exceed 15 dB. Complex random-tone maskers or white noise caused no fatigue of the sensory receptors, even at levels of 100 dB and over an hour of overstimulation. Deterioration of peripheral perception through intense overstimulation may be due to biochemical desensitisation caused by exhaustion. The auditory fatigue seen in subjective clinical tests presumably arises in supracochlear structures, as the objective results obtained here do not match those obtained subjectively in clinical and psychoacoustic trials. Copyright © 2013 Elsevier España, S.L.U. y Sociedad Española de Otorrinolaringología y Patología Cérvico-Facial. All rights reserved.

  12. Music-induced cortical plasticity and lateral inhibition in the human auditory cortex as foundations for tonal tinnitus treatment

    Science.gov (United States)

    Pantev, Christo; Okamoto, Hidehiko; Teismann, Henning

    2012-01-01

    Over the past 15 years, we have studied plasticity in the human auditory cortex by means of magnetoencephalography (MEG). Two main topics nurtured our curiosity: the effects of musical training on plasticity in the auditory system, and the effects of lateral inhibition. One of our plasticity studies found that listening to notched music for 3 h inhibited the neuronal activity in the auditory cortex that corresponded to the center-frequency of the notch, suggesting suppression of neural activity by lateral inhibition. Subsequent research on this topic found that suppression was notably dependent upon the notch width employed, that the lower notch-edge induced stronger attenuation of neural activity than the higher notch-edge, and that auditory focused attention strengthened the inhibitory networks. Crucially, the overall effects of lateral inhibition on human auditory cortical activity were stronger than the habituation effects. Based on these results we developed a novel treatment strategy for tonal tinnitus—tailor-made notched music training (TMNMT). By notching the music energy spectrum around the individual tinnitus frequency, we intended to attract lateral inhibition to auditory neurons involved in tinnitus perception. So far, the training strategy has been evaluated in two studies. The results of the initial long-term controlled study (12 months) supported the validity of the treatment concept: subjective tinnitus loudness and annoyance were significantly reduced after TMNMT but not when notching spared the tinnitus frequencies. Correspondingly, tinnitus-related auditory evoked fields (AEFs) were significantly reduced after training. The subsequent short-term (5 days) training study indicated that training was more effective in the case of tinnitus frequencies ≤ 8 kHz compared to tinnitus frequencies >8 kHz, and that training should be employed over a long-term in order to induce more persistent effects. Further development and evaluation of TMNMT therapy

  13. Resolving the neural dynamics of visual and auditory scene processing in the human brain: a methodological approach

    Science.gov (United States)

    Teng, Santani

    2017-01-01

    In natural environments, visual and auditory stimulation elicit responses across a large set of brain regions in a fraction of a second, yielding representations of the multimodal scene and its properties. The rapid and complex neural dynamics underlying visual and auditory information processing pose major challenges to human cognitive neuroscience. Brain signals measured non-invasively are inherently noisy, the format of neural representations is unknown, and transformations between representations are complex and often nonlinear. Further, no single non-invasive brain measurement technique provides a spatio-temporally integrated view. In this opinion piece, we argue that progress can be made by a concerted effort based on three pillars of recent methodological development: (i) sensitive analysis techniques such as decoding and cross-classification, (ii) complex computational modelling using models such as deep neural networks, and (iii) integration across imaging methods (magnetoencephalography/electroencephalography, functional magnetic resonance imaging) and models, e.g. using representational similarity analysis. We showcase two recent efforts that have been undertaken in this spirit and provide novel results about visual and auditory scene analysis. Finally, we discuss the limits of this perspective and sketch a concrete roadmap for future research. This article is part of the themed issue ‘Auditory and visual scene analysis’. PMID:28044019

  14. Correlates of perceptual awareness in human primary auditory cortex revealed by an informational masking experiment.

    Science.gov (United States)

    Wiegand, Katrin; Gutschalk, Alexander

    2012-05-15

    The presence of an auditory event may remain undetected in crowded environments, even when it is well above the sensory threshold. This effect, commonly known as informational masking, allows for isolating neural activity related to perceptual awareness, by comparing repetitions of the same physical stimulus where the target is either detected or not. Evidence from magnetoencephalography (MEG) suggests that auditory-cortex activity in the latency range 50-250 ms is closely coupled with perceptual awareness. Here, BOLD fMRI and MEG were combined to investigate at which stage in the auditory cortex neural correlates of conscious auditory perception can be observed. Participants were asked to indicate the perception of a regularly repeating target tone, embedded within a random multi-tone masking background. Results revealed widespread activation within the auditory cortex for detected target tones, which was delayed but otherwise similar to the activation of an unmasked control stimulus. The contrast of detected versus undetected targets revealed activity confined to medial Heschl's gyrus, where the primary auditory cortex is located. These results suggest that activity related to conscious perception involves the primary auditory cortex and is not restricted to activity in secondary areas.

  15. Sinusoidal echo-planar imaging with parallel acquisition technique for reduced acoustic noise in auditory fMRI.

    Science.gov (United States)

    Zapp, Jascha; Schmitter, Sebastian; Schad, Lothar R

    2012-09-01

    To extend the parameter restrictions of a silent echo-planar imaging (sEPI) sequence using sinusoidal readout (RO) gradients, in particular with increased spatial resolution. The sound pressure level (SPL) of the most feasible configurations is compared to conventional EPI having trapezoidal RO gradients. We enhanced the sEPI sequence by integrating a parallel acquisition technique (PAT) on a 3 T magnetic resonance imaging (MRI) system. The SPL was measured for matrix sizes of 64 × 64 and 128 × 128 pixels, without and with PAT (R = 2). The signal-to-noise ratio (SNR) was examined for both sinusoidal and trapezoidal RO gradients. Compared to EPI PAT, the SPL could be reduced by up to 11.1 dB and 5.1 dB for matrix sizes of 64 × 64 and 128 × 128 pixels, respectively. The SNR of sinusoidal RO gradients is lower by a factor of 0.96 on average compared to trapezoidal RO gradients. The sEPI PAT sequence allows for 1) increased resolution, 2) expanded RO frequency range toward lower frequencies, which is in general beneficial for SPL, or 3) shortened TE, TR, and RO train length. At the same time, it generates lower SPL compared to conventional EPI for a wide range of RO frequencies while having the same imaging parameters. Copyright © 2012 Wiley Periodicals, Inc.

  16. Modeling of road traffic noise and estimated human exposure in Fulton County, Georgia, USA.

    Science.gov (United States)

    Seong, Jeong C; Park, Tae H; Ko, Joon H; Chang, Seo I; Kim, Minho; Holt, James B; Mehdi, Mohammed R

    2011-11-01

    Environmental noise is a major source of public complaints. Noise in the community causes physical and socio-economic effects and has been shown to be related to adverse health impacts. Noise, however, has not been actively researched in the United States compared with the European Union countries in recent years. In this research, we aimed at modeling road traffic noise and analyzing human exposure in Fulton County, Georgia, United States. We modeled road traffic noise levels using the United States Department of Transportation Federal Highway Administration Traffic Noise Model implemented in SoundPLAN®. After analyzing noise levels with raster, vector and façade maps, we estimated human exposure to high noise levels. Accurate digital elevation models and building heights were derived from Light Detection And Ranging survey datasets and building footprint boundaries. Traffic datasets were collected from the Georgia Department of Transportation and the Atlanta Regional Commission. Noise level simulation was performed with 62 computers in a distributed computing environment. Finally, the noise-exposed population was calculated using geographic information system techniques. Results show that 48% of the total county population [N=870,166 residents] is potentially exposed to 55 dB(A) or higher noise levels during daytime. About 9% of the population is potentially exposed to 67 dB(A) or higher noises. At nighttime, 32% of the population is expected to be exposed to noise levels higher than 50 dB(A). This research shows that large-scale traffic noise estimation is possible with the help of various organizations. We believe that this research is a significant stepping stone for analyzing community health associated with noise exposures in the United States.
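
    The abstract describes the final GIS step only in outline. The sketch below, under assumed per-building noise levels and resident counts (hypothetical values, not the study's SoundPLAN/TNM output), shows how exposed-population fractions like those reported above can be tallied once façade levels and population are joined.

```python
import numpy as np

# Hypothetical per-building facade noise levels (dB(A)) and resident counts,
# standing in for the joined noise-model and census output described above.
day_levels = np.array([52.0, 58.5, 61.0, 67.5, 70.2, 49.0])
residents = np.array([120, 80, 45, 200, 60, 30])

def exposed_fraction(levels_dba, population, threshold_dba):
    """Fraction of the total population living at or above a noise threshold."""
    return population[levels_dba >= threshold_dba].sum() / population.sum()

print(f"daytime >= 55 dB(A): {exposed_fraction(day_levels, residents, 55.0):.1%}")
print(f"daytime >= 67 dB(A): {exposed_fraction(day_levels, residents, 67.0):.1%}")
```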

  17. Human responses to noise and vibration aboard ships

    NARCIS (Netherlands)

    Houben, M.M.J.; Kurt, R.; Khalid, H.; Zoet, P.; Bos, J.E.; Turan, O.

    2012-01-01

    Within the EU FP7 project SILENV, noise and vibration measurements were carried out on several ships. These objective measures were accompanied by subjective measures recorded through questionnaires. With this, we developed models describing the relationship between the levels of noise and vibration

  18. Prediction of Human's Ability in Sound Localization Based on the Statistical Properties of Spike Trains along the Brainstem Auditory Pathway

    Directory of Open Access Journals (Sweden)

    Ram Krips

    2014-01-01

    Full Text Available The minimum audible angle test, which is commonly used for evaluating human localization ability, depends on interaural time delay, interaural level differences, and spectral information about the acoustic stimulus. These physical properties are estimated at different stages along the brainstem auditory pathway. The interaural time delay is ambiguous at certain frequencies, thus confusion arises as to the source of these frequencies. It is assumed that in a typical minimum audible angle experiment, the brain acts as an unbiased optimal estimator and thus the human performance can be obtained by deriving optimal lower bounds. Two types of lower bounds are tested: the Cramer-Rao and the Barankin. The Cramer-Rao bound only takes into account the approximation of the true direction of the stimulus; the Barankin bound considers other possible directions that arise from the ambiguous phase information. These lower bounds are derived at the output of the auditory nerve and of the superior olivary complex where binaural cues are estimated. Agreement with human experimental data was obtained only when the superior olivary complex was considered and the Barankin lower bound was used. This result suggests that sound localization is estimated by the auditory nuclei using ambiguous binaural information.
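
    The record states which bounds were used but not their form. For reference, a generic statement of the Cramer-Rao bound for an unbiased location estimator is given below; this is textbook material, not the paper's derivation, and the remark on the Barankin bound is likewise generic.

```latex
% Generic Cramer-Rao bound for an unbiased estimate \hat{\theta} of source azimuth
% \theta, given the likelihood p(\mathbf{x} \mid \theta) of the observed spike trains:
\operatorname{var}(\hat{\theta}) \;\ge\; \frac{1}{I(\theta)},
\qquad
I(\theta) = \mathbb{E}\!\left[\left(\frac{\partial}{\partial\theta}
            \ln p(\mathbf{x}\mid\theta)\right)^{\!2}\right].
% The Barankin bound additionally constrains the estimator at test points
% \theta_1, \dots, \theta_K (e.g., directions consistent with the ambiguous
% interaural phase) and is therefore never smaller than the Cramer-Rao bound.
```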

  19. Functional organization for musical consonance and tonal pitch hierarchy in human auditory cortex.

    Science.gov (United States)

    Bidelman, Gavin M; Grall, Jeremy

    2014-11-01

    Pitch relationships in music are characterized by their degree of consonance, a hierarchical perceptual quality that distinguishes how pleasant musical chords/intervals sound to the ear. The origins of consonance have been debated since the ancient Greeks. To elucidate the neurobiological mechanisms underlying these musical fundamentals, we recorded neuroelectric brain activity while participants listened passively to various chromatic musical intervals (simultaneously sounding pitches) varying in their perceptual pleasantness (i.e., consonance/dissonance). Dichotic presentation eliminated acoustic and peripheral contributions that often confound explanations of consonance. We found that neural representations for pitch in early human auditory cortex code perceptual features of musical consonance and follow a hierarchical organization according to music-theoretic principles. These neural correlates emerge pre-attentively within ~ 150 ms after the onset of pitch, are segregated topographically in superior temporal gyrus with a rightward hemispheric bias, and closely mirror listeners' behavioral valence preferences for the chromatic tone combinations inherent to music. A perceptual-based organization implies that parallel to the phonetic code for speech, elements of music are mapped within early cerebral structures according to higher-order, perceptual principles and the rules of Western harmony rather than simple acoustic attributes.

  20. 基于听觉掩蔽效应的多频带谱减语音增强方法%Multi-band spectral subtraction method for speech enhancement based on masking property of human auditory system

    Institute of Scientific and Technical Information of China (English)

    曹亮; 张天骐; 高洪兴; 易琛

    2013-01-01

    In order to reduce the musical noise introduced by the conventional spectral subtraction method for speech enhancement, a speech enhancement algorithm is proposed that combines multi-band spectral subtraction with the masking properties of the human auditory system. First, the noise power spectrum is estimated by weighted recursive averaging and multi-band spectral subtraction is applied to the noise-corrupted speech signal; then, the auditory masking threshold is computed from the estimated speech signal and the subtraction factor is adjusted according to the masking threshold; finally, the spectrum of the enhanced speech is obtained through the gain function derived from the subtraction factor. Simulation results show that, at low SNR, background noise and residual musical noise are suppressed more effectively than with the conventional spectral subtraction method, and the clarity and intelligibility of the speech signal are markedly improved.
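
    As a minimal illustration of the family of algorithms described (and not the authors' method, which additionally adapts the subtraction factor to an auditory masking threshold), the following sketch implements plain multi-band spectral subtraction with an SNR-dependent over-subtraction factor; all parameter values are assumptions.

```python
import numpy as np
from scipy.signal import stft, istft

def multiband_spectral_subtraction(x, fs, noise_seconds=0.25, n_bands=4, floor=0.02):
    """Plain multi-band spectral subtraction: estimate the noise spectrum from a
    leading noise-only segment, subtract a per-band over-subtraction factor, and
    resynthesize with the noisy phase. (Illustrative; no masking-threshold step.)"""
    nperseg, hop = 512, 256
    f, t, X = stft(x, fs, nperseg=nperseg)                  # default 50% overlap
    mag, phase = np.abs(X), np.angle(X)
    n_noise = max(1, int(noise_seconds * fs / hop))
    noise_mag = mag[:, :n_noise].mean(axis=1, keepdims=True)

    clean_mag = np.empty_like(mag)
    edges = np.linspace(0, mag.shape[0], n_bands + 1, dtype=int)
    for b in range(n_bands):
        lo, hi = edges[b], edges[b + 1]
        band_snr = 10 * np.log10(mag[lo:hi].mean() ** 2 /
                                 (noise_mag[lo:hi].mean() ** 2 + 1e-12))
        alpha = np.clip(4.0 - 0.15 * band_snr, 1.0, 6.0)    # over-subtraction factor
        diff = mag[lo:hi] ** 2 - alpha * noise_mag[lo:hi] ** 2
        clean_mag[lo:hi] = np.sqrt(np.maximum(diff, (floor * mag[lo:hi]) ** 2))

    _, y = istft(clean_mag * np.exp(1j * phase), fs, nperseg=nperseg)
    return y
```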

  1. Assessment of noise pollution from ...

    African Journals Online (AJOL)

    A model of the impact of sawmill noise on the metropolis was developed. Reported effects of such noise include auditory effects, such as auditory fatigue and hearing loss, and indirect non-auditory effects, such as speech interference and annoyance…

  2. Sustained selective attention to competing amplitude-modulations in human auditory cortex.

    Science.gov (United States)

    Riecke, Lars; Scharke, Wolfgang; Valente, Giancarlo; Gutschalk, Alexander

    2014-01-01

    Auditory selective attention plays an essential role for identifying sounds of interest in a scene, but the neural underpinnings are still incompletely understood. Recent findings demonstrate that neural activity that is time-locked to a particular amplitude-modulation (AM) is enhanced in the auditory cortex when the modulated stream of sounds is selectively attended to under sensory competition with other streams. However, the target sounds used in the previous studies differed not only in their AM, but also in other sound features, such as carrier frequency or location. Thus, it remains uncertain whether the observed enhancements reflect AM-selective attention. The present study aims at dissociating the effect of AM frequency on response enhancement in auditory cortex by using an ongoing auditory stimulus that contains two competing targets differing exclusively in their AM frequency. Electroencephalography results showed a sustained response enhancement for auditory attention compared to visual attention, but not for AM-selective attention (attended AM frequency vs. ignored AM frequency). In contrast, the response to the ignored AM frequency was enhanced, although a brief trend toward response enhancement occurred during the initial 15 s. Together with the previous findings, these observations indicate that selective enhancement of attended AMs in auditory cortex is adaptive under sustained AM-selective attention. This finding has implications for our understanding of cortical mechanisms for feature-based attentional gain control.

  3. Auditory evacuation beacons

    NARCIS (Netherlands)

    Wijngaarden, S.J. van; Bronkhorst, A.W.; Boer, L.C.

    2005-01-01

    Auditory evacuation beacons can be used to guide people to safe exits, even when vision is totally obscured by smoke. Conventional beacons make use of modulated noise signals. Controlled evacuation experiments show that such signals require explicit instructions and are often misunderstood. A new si

  4. Experience-dependent learning of auditory temporal resolution: evidence from Carnatic-trained musicians.

    Science.gov (United States)

    Mishra, Srikanta K; Panda, Manasa R

    2014-01-22

    Musical training and experience greatly enhance the cortical and subcortical processing of sounds, which may translate to superior auditory perceptual acuity. Auditory temporal resolution is a fundamental perceptual aspect that is critical for speech understanding in noise in listeners with normal hearing, auditory disorders, cochlear implants, and language disorders, yet very few studies have focused on music-induced learning of temporal resolution. This report demonstrates that Carnatic musical training and experience have a significant impact on temporal resolution assayed by gap detection thresholds. This experience-dependent learning in Carnatic-trained musicians exhibits the universal aspects of human perception and plasticity. The present work adds the perceptual component to a growing body of neurophysiological and imaging studies that suggest plasticity of the peripheral auditory system at the level of the brainstem. The present work may be intriguing to researchers and clinicians alike interested in devising cross-cultural training regimens to alleviate listening-in-noise difficulties.

  5. Functional Imaging of Human Vestibular Cortex Activity Elicited by Skull Tap and Auditory Tone Burst

    Science.gov (United States)

    Noohi, Fatemeh; Kinnaird, Catherine; Wood, Scott; Bloomberg, Jacob; Mulavara, Ajitkumar; Seidler, Rachael

    2014-01-01

    The aim of the current study was to characterize the brain activation in response to two modes of vestibular stimulation: skull tap and auditory tone burst. The auditory tone burst has been used in previous studies to elicit saccular Vestibular Evoked Myogenic Potentials (VEMP) (Colebatch & Halmagyi 1992; Colebatch et al. 1994). Some researchers have reported that airconducted skull tap elicits both saccular and utricle VEMPs, while being faster and less irritating for the subjects (Curthoys et al. 2009, Wackym et al., 2012). However, it is not clear whether the skull tap and auditory tone burst elicit the same pattern of cortical activity. Both forms of stimulation target the otolith response, which provides a measurement of vestibular function independent from semicircular canals. This is of high importance for studying the vestibular disorders related to otolith deficits. Previous imaging studies have documented activity in the anterior and posterior insula, superior temporal gyrus, inferior parietal lobule, pre and post central gyri, inferior frontal gyrus, and the anterior cingulate cortex in response to different modes of vestibular stimulation (Bottini et al., 1994; Dieterich et al., 2003; Emri et al., 2003; Schlindwein et al., 2008; Janzen et al., 2008). Here we hypothesized that the skull tap elicits the similar pattern of cortical activity as the auditory tone burst. Subjects put on a set of MR compatible skull tappers and headphones inside the 3T GE scanner, while lying in supine position, with eyes closed. All subjects received both forms of the stimulation, however, the order of stimulation with auditory tone burst and air-conducted skull tap was counterbalanced across subjects. Pneumatically powered skull tappers were placed bilaterally on the cheekbones. The vibration of the cheekbone was transmitted to the vestibular cortex, resulting in vestibular response (Halmagyi et al., 1995). Auditory tone bursts were also delivered for comparison. To validate

  6. The neurochemical basis of human cortical auditory processing: combining proton magnetic resonance spectroscopy and magnetoencephalography

    Directory of Open Access Journals (Sweden)

    Tollkötter Melanie

    2006-08-01

    Full Text Available Abstract Background A combination of magnetoencephalography and proton magnetic resonance spectroscopy was used to correlate the electrophysiology of rapid auditory processing and the neurochemistry of the auditory cortex in 15 healthy adults. To assess rapid auditory processing in the left auditory cortex, the amplitude and decrement of the N1m peak, the major component of the late auditory evoked response, were measured during rapidly successive presentation of acoustic stimuli. We tested the hypothesis that: (i) the amplitude of the N1m response and (ii) its decrement during rapid stimulation are associated with the cortical neurochemistry as determined by proton magnetic resonance spectroscopy. Results Our results demonstrated a significant association between the concentrations of N-acetylaspartate, a marker of neuronal integrity, and the amplitudes of individual N1m responses. In addition, the concentrations of choline-containing compounds, representing the functional integrity of membranes, were significantly associated with N1m amplitudes. No significant association was found between the concentrations of the glutamate/glutamine pool and the amplitudes of the first N1m. No significant associations were seen between the decrement of the N1m (the relative amplitude of the second N1m peak) and the concentrations of N-acetylaspartate, choline-containing compounds, or the glutamate/glutamine pool. However, there was a trend for higher glutamate/glutamine concentrations in individuals with higher relative N1m amplitude. Conclusion These results suggest that neuronal and membrane functions are important for rapid auditory processing. This investigation provides a first link between the electrophysiology, as recorded by magnetoencephalography, and the neurochemistry, as assessed by proton magnetic resonance spectroscopy, of the auditory cortex.

  7. Pitch-induced responses in the right auditory cortex correlate with musical ability in normal listeners.

    Science.gov (United States)

    Puschmann, Sebastian; Özyurt, Jale; Uppenkamp, Stefan; Thiel, Christiane M

    2013-10-23

    Previous work compellingly shows the existence of functional and structural differences in human auditory cortex related to superior musical abilities observed in professional musicians. In this study, we investigated the relationship between musical abilities and auditory cortex activity in normal listeners who had not received a professional musical education. We used functional MRI to measure auditory cortex responses related to auditory stimulation per se and the processing of pitch and pitch changes, which represents a prerequisite for the perception of musical sequences. Pitch-evoked responses in the right lateral portion of Heschl's gyrus were correlated positively with the listeners' musical abilities, which were assessed using a musical aptitude test. In contrast, no significant relationship was found for noise stimuli, lacking any musical information, and for responses induced by pitch changes. Our results suggest that superior musical abilities in normal listeners are reflected by enhanced neural encoding of pitch information in the auditory system.

  8. Evidence of functional connectivity between auditory cortical areas revealed by amplitude modulation sound processing.

    Science.gov (United States)

    Guéguin, Marie; Le Bouquin-Jeannès, Régine; Faucon, Gérard; Chauvel, Patrick; Liégeois-Chauvel, Catherine

    2007-02-01

    The human auditory cortex includes several interconnected areas. A better understanding of the mechanisms involved in auditory cortical functions requires a detailed knowledge of neuronal connectivity between functional cortical regions. In human, it is difficult to track in vivo neuronal connectivity. We investigated the interarea connection in vivo in the auditory cortex using a method of directed coherence (DCOH) applied to depth auditory evoked potentials (AEPs). This paper presents simultaneous AEPs recordings from insular gyrus (IG), primary and secondary cortices (Heschl's gyrus and planum temporale), and associative areas (Brodmann area [BA] 22) with multilead intracerebral electrodes in response to sinusoidal modulated white noises in 4 epileptic patients who underwent invasive monitoring with depth electrodes for epilepsy surgery. DCOH allowed estimation of the causality between 2 signals recorded from different cortical sites. The results showed 1) a predominant auditory stream within the primary auditory cortex from the most medial region to the most lateral one whatever the modulation frequency, 2) unidirectional functional connection from the primary to secondary auditory cortex, 3) a major auditory propagation from the posterior areas to the anterior ones, particularly at 8, 16, and 32 Hz, and 4) a particular role of Heschl's sulcus dispatching information to the different auditory areas. These findings suggest that cortical processing of auditory information is performed in serial and parallel streams. Our data showed that the auditory propagation could not be associated to a unidirectional traveling wave but to a constant interaction between these areas that could reflect the large adaptive and plastic capacities of auditory cortex. The role of the IG is discussed.

  9. Parcellation of Human and Monkey Core Auditory Cortex with fMRI Pattern Classification and Objective Detection of Tonotopic Gradient Reversals.

    Science.gov (United States)

    Schönwiesner, Marc; Dechent, Peter; Voit, Dirk; Petkov, Christopher I; Krumbholz, Katrin

    2015-10-01

    Auditory cortex (AC) contains several primary-like, or "core," fields, which receive thalamic input and project to non-primary "belt" fields. In humans, the organization and layout of core and belt auditory fields are still poorly understood, and most auditory neuroimaging studies rely on macroanatomical criteria, rather than functional localization of distinct fields. A myeloarchitectonic method has been suggested recently for distinguishing between core and belt fields in humans (Dick F, Tierney AT, Lutti A, Josephs O, Sereno MI, Weiskopf N. 2012. In vivo functional and myeloarchitectonic mapping of human primary auditory areas. J Neurosci. 32:16095-16105). We propose a marker for core AC based directly on functional magnetic resonance imaging (fMRI) data and pattern classification. We show that a portion of AC in Heschl's gyrus classifies sound frequency more accurately than other regions in AC. Using fMRI data from macaques, we validate that the region where frequency classification performance is significantly above chance overlaps core auditory fields, predominantly A1. Within this region, we measure tonotopic gradients and estimate the locations of the human homologues of the core auditory subfields A1 and R. Our results provide a functional rather than anatomical localizer for core AC. We posit that inter-individual variability in the layout of core AC might explain disagreements between results from previous neuroimaging and cytological studies.
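
    A bare-bones sketch of the decoding idea behind the proposed functional localizer is given below, using synthetic data and scikit-learn; it illustrates cross-validated classification of sound frequency from voxel patterns in one candidate region, not the authors' full parcellation pipeline.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import LinearSVC

# Synthetic stand-ins for the study's data: one row per trial of voxel responses
# within a candidate region, plus the sound-frequency bin presented on that trial.
rng = np.random.default_rng(0)
n_trials, n_voxels, n_freqs = 120, 50, 6
X = rng.normal(size=(n_trials, n_voxels))        # voxel pattern for one region
y = rng.integers(0, n_freqs, size=n_trials)      # frequency label per trial

# Cross-validated decoding accuracy; regions where accuracy exceeds chance
# (1/n_freqs) most strongly would be flagged as candidate core fields.
acc = cross_val_score(LinearSVC(dual=False), X, y, cv=5).mean()
print(f"decoding accuracy: {acc:.2f} (chance = {1 / n_freqs:.2f})")
```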

  10. Effects of exposure to noise and indoor air pollution on human perception and symptoms

    DEFF Research Database (Denmark)

    Witterseh, Thomas; Wargocki, Pawel; Fang, Lei

    1999-01-01

    The objective of the present study was to investigate human perception and SBS symptoms when people are exposed simultaneously to different levels of air pollution and ventilation noise. The air quality in an office was modified by placing or removing a carpet, and the background noise level was modified by playing a recording of ventilation noise. Thirty female subjects, six at a time, occupied the office for 4.4 hours. The subjects assessed the air quality, the noise, and the indoor environment upon entering the office and on six occasions during occupation. Furthermore, SBS symptoms…

  11. STUDY ON NOISE LEVEL GENERATED BY HUMAN ACTIVITIES IN SIBIU CITY, ROMANIA

    Directory of Open Access Journals (Sweden)

    Cristina STANCA-MOISE

    2014-10-01

    Full Text Available In this paper I propose an analysis and monitoring of open-air noise sources from air, rail and road traffic in Sibiu. From the centralized data obtained from the analysis of measurements performed with noise-level equipment, we concluded that the noise and vibration produced by means of transportation (air, road, rail) can affect human health if they exceed the permitted limits. Noise is ever present as part of our lives and is a source of pollution of which modern man is often not conscious.

  12. Acute stress alters auditory selective attention in humans independent of HPA: a study of evoked potentials.

    Directory of Open Access Journals (Sweden)

    Ludger Elling

    Full Text Available BACKGROUND: Acute stress is a stereotypical, but multimodal response to a present or imminent challenge overcharging an organism. Among the different branches of this multimodal response, the consequences of glucocorticoid secretion have been extensively investigated, mostly in connection with long-term memory (LTM). However, stress responses comprise other endocrine signaling and altered neuronal activity wholly independent of pituitary regulation. To date, knowledge of the impact of such "paracorticoidal" stress responses on higher cognitive functions is scarce. We investigated the impact of an ecological stressor on the ability to direct selective attention using event-related potentials in humans. Based on research in rodents, we assumed that a stress-induced imbalance of catecholaminergic transmission would impair this ability. METHODOLOGY/PRINCIPAL FINDINGS: The stressor consisted of a single cold pressor test. Auditory negative difference (Nd) and mismatch negativity (MMN) were recorded in a tonal dichotic listening task. A time series of such tasks confirmed an increased distractibility occurring 4-7 minutes after onset of the stressor as reflected by an attenuated Nd. Salivary cortisol began to rise 8-11 minutes after onset when no further modulations in the event-related potentials (ERP) occurred, thus precluding a causal relationship. This effect may be attributed to a stress-induced activation of mesofrontal dopaminergic projections. It may also be attributed to an activation of noradrenergic projections. Known characteristics of the modulation of ERP by different stress-related ligands were used for further disambiguation of causality. The conjuncture of an attenuated Nd and an increased MMN might be interpreted as indicating a dopaminergic influence. The selective effect on the late portion of the Nd provides another tentative clue for this. CONCLUSIONS/SIGNIFICANCE: Prior studies have deliberately tracked the adrenocortical influence

  13. Representation of auditory-filter phase characteristics in the cortex of human listeners

    DEFF Research Database (Denmark)

    Rupp, A.; Sieroka, N.; Gutschalk, A.;

    2008-01-01

    …which differently affect the flat envelopes of the Schroeder-phase maskers. We examined the influence of auditory-filter phase characteristics on the neural representation in the auditory cortex by investigating cortical auditory evoked fields (AEFs). We found that the P1m component exhibited larger amplitudes when a long-duration tone was presented in a repeating linearly downward sweeping (Schroeder positive, or m(+)) masker than in a repeating linearly upward sweeping (Schroeder negative, or m(-)) masker. We also examined the neural representation of short-duration tone pulses presented at different temporal positions within a single period of three maskers differing in their component phases (m(+), m(-), and sine phase m(0)). The P1m amplitude varied with the position of the tone pulse in the masker and depended strongly on the masker waveform. The neuromagnetic results in all cases were...

  14. Discrimination of timbre in early auditory responses of the human brain.

    Directory of Open Access Journals (Sweden)

    Jaeho Seol

    Full Text Available BACKGROUND: The issue of how differences in timbre are represented in the neural response has not been well addressed, particularly with regard to the relevant brain mechanisms. Here we employ phasing and clipping of tones to produce auditory stimuli differing in timbre, reflecting its multidimensional nature. We investigated the auditory response and sensory gating using magnetoencephalography (MEG). METHODOLOGY/PRINCIPAL FINDINGS: Thirty-five healthy subjects without hearing deficits participated in the experiments. Pairs of tones with the same or different timbre were presented in a conditioning (S1)-testing (S2) paradigm with an interval of 500 ms. The magnitudes of the auditory M50 and M100 responses differed with timbre in both hemispheres. This result suggests that timbre, at least as manipulated by phasing and clipping, is discriminated in early auditory processing. An effect of S1 on the second response of a pair occurred in the M100 of the left hemisphere, whereas it was only in the right hemisphere that both the M50 and M100 responses to S2 reflected whether the two stimuli in a pair were the same or not. Both M50 and M100 magnitudes also differed with presentation order (S1 vs. S2) for both same and different conditions in both hemispheres. CONCLUSIONS/SIGNIFICANCE: Our results demonstrate that the auditory response depends on timbre characteristics. Moreover, auditory sensory gating is determined not by the stimulus that directly evokes the response, but rather by whether or not the two stimuli are identical in timbre.

  15. A chamber-experiment investigation of the interaction between perceptions of noise and odor in humans.

    Science.gov (United States)

    Pan, Zhiwei; Kjaergaard, Søren K; Mølhave, Lars

    2003-10-01

    This study was designed to investigate human comfort and health effects following exposure to noise and odor and to explore the interaction between perceptions of noise and odor in humans. Nine healthy subjects were randomly exposed to noise, odor, and their combination, in a 3 x 3 Latin square design for 80 min in an exposure chamber. Continuous noise was broadcast at an average level of 75 dBA by a loudspeaker, and odor was provided by furfurylmercaptan (a coffee-aroma constituent). A standardized 28-item questionnaire, together with mood-scale ratings, nasal dimensions by acoustic rhinometry, addition tests for distraction, and skin humidity, were performed before and at the end of exposure. In the questionnaire investigation, the perceived "sound level" was significantly affected by noise and the combined exposures, while "odor intensity", "air quality", and "need more ventilation" were significantly affected by odor and the combined exposures. Perceptions of symptoms became worse with increasing exposure time, such as perceived "dry nose" and "sleepiness" by odor and combined exposures, "headache" by noise, "concentration difficulty", "general well being", and "stressed by being in the chamber" by noise, odor and combined exposures. In addition, the occurrence of interactions was analyzed by comparison of the ratings of perceived "sound level", "odor intensity", "air quality", and "need more ventilation" during the combined exposure with the two single exposures. No significant interaction was found, but there was a tendency toward decreased perception of discomfort from "odor intensity", "air quality", and "need for more ventilation" when noise was added to odor exposure. It may be concluded that noise and odor cause discomfort in humans. Moreover, the study might indicate that additions of noise reduce (mask) the perception of discomfort from odor, and additions of odor have little or no effect on the perception of noise.

  16. Atypical Bilateral Brain Synchronization in the Early Stage of Human Voice Auditory Processing in Young Children with Autism

    Science.gov (United States)

    Kurita, Toshiharu; Kikuchi, Mitsuru; Yoshimura, Yuko; Hiraishi, Hirotoshi; Hasegawa, Chiaki; Takahashi, Tetsuya; Hirosawa, Tetsu; Furutani, Naoki; Higashida, Haruhiro; Ikeda, Takashi; Mutou, Kouhei; Asada, Minoru; Minabe, Yoshio

    2016-01-01

    Autism spectrum disorder (ASD) has been postulated to involve impaired neuronal cooperation in large-scale neural networks, including cortico-cortical interhemispheric circuitry. In the context of ASD, alterations in both peripheral and central auditory processes have also attracted a great deal of interest because these changes appear to represent pathophysiological processes; therefore, many prior studies have focused on atypical auditory responses in ASD. The auditory evoked field (AEF), recorded by magnetoencephalography, and the synchronization of these processes between right and left hemispheres was recently suggested to reflect various cognitive abilities in children. However, to date, no previous study has focused on AEF synchronization in ASD subjects. To assess global coordination across spatially distributed brain regions, the analysis of Omega complexity from multichannel neurophysiological data was proposed. Using Omega complexity analysis, we investigated the global coordination of AEFs in 3–8-year-old typically developing (TD) children (n = 50) and children with ASD (n = 50) in 50-ms time-windows. Children with ASD displayed significantly higher Omega complexities compared with TD children in the time-window of 0–50 ms, suggesting lower whole brain synchronization in the early stage of the P1m component. When we analyzed the left and right hemispheres separately, no significant differences in any time-windows were observed. These results suggest lower right-left hemispheric synchronization in children with ASD compared with TD children. Our study provides new evidence of aberrant neural synchronization in young children with ASD by investigating auditory evoked neural responses to the human voice. PMID:27074011

  17. "To ear is human, to frogive is divine": Bob Capranica's legacy to auditory neuroethology.

    Science.gov (United States)

    Simmons, Andrea Megela

    2013-03-01

    Bob Capranica was a towering figure in the field of auditory neuroethology. Among his many contributions are the exploitation of the anuran auditory system as a general vertebrate model for studying communication, the introduction of a signal processing approach for quantifying sender-receiver dynamics, and the concept of the matched filter for efficient neural processing of complex vocal signals. In this paper, meant to honor Bob on his election to Fellow of the International Society for Neuroethology, I provide a description and analysis of some of his most important research, and I highlight how the concepts and data he contributed still inspire neuroethology today.

  18. Using Reaction Time and Equal Latency Contours to Derive Auditory Weighting Functions in Sea Lions and Dolphins.

    Science.gov (United States)

    Finneran, James J; Mulsow, Jason; Schlundt, Carolyn E

    2016-01-01

    Subjective loudness measurements are used to create equal-loudness contours and auditory weighting functions for human noise-mitigation criteria; however, comparable direct measurements of subjective loudness with animal subjects are difficult to conduct. In this study, simple reaction time to pure tones was measured as a proxy for subjective loudness in a Tursiops truncatus and Zalophus californianus. Contours fit to equal reaction-time curves were then used to estimate the shapes of auditory weighting functions.
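
    The following sketch, using made-up reaction-time data, illustrates the equal-latency-contour step described above: for each frequency, the sound level producing a criterion reaction time is interpolated, and the resulting contour is expressed relative to its most sensitive point, as a weighting function would be. The numbers and criterion are hypothetical.

```python
import numpy as np

# Hypothetical median reaction times (ms) at several sound levels for two tone
# frequencies; stand-ins for the behavioral data described in the record.
levels = np.array([90.0, 100.0, 110.0, 120.0, 130.0])        # dB re 1 uPa
rt_by_freq = {
    10_000: np.array([420.0, 350.0, 300.0, 265.0, 240.0]),
    40_000: np.array([480.0, 400.0, 330.0, 285.0, 255.0]),
}
criterion_rt = 300.0    # ms; the equal-latency criterion

# Reaction time falls with level, so interpolate on the reversed arrays to find
# the level at which each frequency reaches the criterion reaction time.
contour = {f: float(np.interp(criterion_rt, rt[::-1], levels[::-1]))
           for f, rt in rt_by_freq.items()}
best = min(contour.values())
weighting = {f: best - lvl for f, lvl in contour.items()}    # dB of de-emphasis
print(contour)     # equal-latency contour
print(weighting)   # contour re-expressed as a weighting-function shape
```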

  19. Modeling human auditory evoked brainstem responses based on nonlinear cochlear processing

    DEFF Research Database (Denmark)

    Harte, James; Rønne, Filip Munch; Dau, Torsten

    2010-01-01

    (ABR) to transient sounds and frequency following responses (FFR) to tones. The model includes important cochlear processing stages (Zilany and Bruce, 2006) such as basilar-membrane (BM) tuning and compression, inner hair-cell (IHC) transduction, and IHC auditory-nerve (AN) synapse adaptation...

  20. Auditory Pattern Memory: Mechanisms of Tonal Sequence Discrimination by Human Observers

    Science.gov (United States)

    1988-10-30

    and Creelman (1977) in a study of categorical perception. Tanner's model included a short-term decaying memory for the acoustic input to the system plus... auditory pattern components, J. Acoust. Soc. Am., 76, 1037-1044. Macmillan, N. A., Kaplan, H. L., & Creelman, C. D. (1977). The psychophysics of

  1. The relative importance of noise level and number of events on human reactions to noise: Community survey findings and study methods

    Science.gov (United States)

    Fields, J. M.

    1980-01-01

    The data from seven surveys of community response to environmental noise are reanalyzed to assess the relative influence of peak noise levels and the numbers of noise events on human response. The surveys do not agree on the value of the tradeoff between the effects of noise level and numbers of events. The value of the tradeoff cannot be confidently specified in any survey because the tradeoff estimate may have a large standard error of estimate and because the tradeoff estimate may be seriously biased by unknown noise measurement errors. Some evidence suggests a decrease in annoyance with very high numbers of noise events but this evidence is not strong enough to lead to the rejection of the conventionally accepted assumption that annoyance is related to a log transformation of the number of noise events.

  2. Investigating binocular summation in human vision using complementary fused external noise

    Science.gov (United States)

    Howell, Christopher L.; Olson, Jeffrey T.

    2016-05-01

    The impact noise has on the processing of visual information at various stages within the human visual system (HVS) is still an open research area. To gain additional insight, twelve experiments were administered to human observers using sine wave targets to determine their contrast thresholds. A single frame of additive white Gaussian noise (AWGN) and its complement were used to investigate the effect of noise on the summation of visual information within the HVS. A standard contrast threshold experiment served as the baseline for comparisons. In the standard experiment, a range of sine wave targets was shown to the observers and their ability to detect the targets at varying contrast levels was recorded. The remaining experiments added some form of noise (noise image or its complement) and/or an additional sine wave target separated by one to three octaves from the test target. All of these experiments were conducted using either a single monitor for viewing the targets or a dual monitor presentation method for comparison. In the dual monitor experiments, a ninety degree mirror was used to direct each target to a different eye, allowing for the information to be fused binocularly. The experiments in this study present different approaches for delivering external noise to the HVS, and should allow for an improved understanding regarding how noise enters the HVS and what impact noise has on the processing of visual information.
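
    The record does not define how the complementary noise frame was constructed. A common construction, assumed in the brief sketch below, mirrors each noise sample around the mean luminance so that binocular fusion (averaging) of the pair cancels the noise; all names and values are illustrative only.

      import numpy as np

      rng = np.random.default_rng(42)
      mean_lum, sigma = 0.5, 0.1                     # hypothetical display units

      noise = mean_lum + sigma * rng.standard_normal((256, 256))
      complement = 2 * mean_lum - noise              # mirrored around the mean

      fused = 0.5 * (noise + complement)             # per-pixel binocular average
      print(np.allclose(fused, mean_lum))            # True: the pair cancels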

  3. Application of Linear Mixed-Effects Models in Human Neuroscience Research: A Comparison with Pearson Correlation in Two Auditory Electrophysiology Studies.

    Science.gov (United States)

    Koerner, Tess K; Zhang, Yang

    2017-02-27

    Neurophysiological studies are often designed to examine relationships between measures from different testing conditions, time points, or analysis techniques within the same group of participants. Appropriate statistical techniques that can take into account repeated measures and multivariate predictor variables are integral and essential to successful data analysis and interpretation. This work implements and compares conventional Pearson correlations and linear mixed-effects (LME) regression models using data from two recently published auditory electrophysiology studies. For the specific research questions in both studies, the Pearson correlation test is inappropriate for determining the strength of association between the behavioral responses for speech-in-noise recognition and the multiple neurophysiological measures, as the neural responses across listening conditions would simply be treated as independent measures. In contrast, the LME models allow a systematic approach to incorporate both fixed-effect and random-effect terms to deal with the categorical grouping factor of listening conditions, between-subject baseline differences in the multiple measures, and the correlational structure among the predictor variables. Together, the comparative data demonstrate the advantages of, and the necessity of applying, mixed-effects models to properly account for the built-in relationships among the multiple predictor variables, which has important implications for proper statistical modeling and interpretation of human behavior in terms of neural correlates and biomarkers.
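
    As a concrete illustration of the modeling contrast described here, the sketch below fits a linear mixed-effects model with a per-subject random intercept using statsmodels; the data frame, variable names, and simulated values are hypothetical and are not the measures from either study.

      import numpy as np
      import pandas as pd
      import statsmodels.formula.api as smf

      rng = np.random.default_rng(0)
      rows = []
      for s in [f"s{i}" for i in range(20)]:
          baseline = rng.normal(0, 1)                  # between-subject offset
          for cond in ("quiet", "noise"):
              neural = rng.normal(1.0 if cond == "quiet" else 0.8, 0.2)
              behavior = (75 + 10 * neural + (-5 if cond == "noise" else 0)
                          + baseline + rng.normal(0, 1))
              rows.append({"subject": s, "condition": cond,
                           "neural": neural, "behavior": behavior})
      df = pd.DataFrame(rows)

      # Fixed effects for the neural measure and listening condition,
      # random intercept per subject to absorb baseline differences
      # (a Pearson r would ignore both the repeated measures and the factor).
      model = smf.mixedlm("behavior ~ neural + condition", df, groups=df["subject"])
      print(model.fit().summary())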

  4. Functional Imaging of Human Vestibular Cortex Activity Elicited by Skull Tap and Auditory Tone Burst

    Science.gov (United States)

    Noohi, F.; Kinnaird, C.; Wood, S.; Bloomberg, J.; Mulavara, A.; Seidler, R.

    2016-01-01

    The current study characterizes brain activation in response to two modes of vestibular stimulation: skull tap and auditory tone burst. The auditory tone burst has been used in previous studies to elicit either the vestibulo-spinal reflex (saccular-mediated cervical Vestibular Evoked Myogenic Potentials (cVEMP)), or the ocular muscle response (utricle-mediated ocular VEMP (oVEMP)). Some researchers have reported that air-conducted skull tap elicits both saccular and utricle-mediated VEMPs, while being faster and less irritating for the subjects. However, it is not clear whether the skull tap and auditory tone burst elicit the same pattern of cortical activity. Both forms of stimulation target the otolith response, which provides a measurement of vestibular function independent from semicircular canals. This is of high importance for studying otolith-specific deficits, including gait and balance problems that astronauts experience upon returning to earth. Previous imaging studies have documented activity in the anterior and posterior insula, superior temporal gyrus, inferior parietal lobule, inferior frontal gyrus, and the anterior cingulate cortex in response to different modes of vestibular stimulation. Here we hypothesized that skull taps elicit similar patterns of cortical activity as the auditory tone bursts, and previous vestibular imaging studies. Subjects wore bilateral MR compatible skull tappers and headphones inside the 3T GE scanner, while lying in the supine position, with eyes closed. Subjects received both forms of the stimulation in a counterbalanced fashion. Pneumatically powered skull tappers were placed bilaterally on the cheekbones. The vibration of the cheekbone was transmitted to the vestibular system, resulting in the vestibular cortical response. Auditory tone bursts were also delivered for comparison. To validate our stimulation method, we measured the ocular VEMP outside of the scanner. This measurement showed that both skull tap and auditory

  5. ICBEN review of research on the biological effects of noise 2011-2014.

    Science.gov (United States)

    Basner, Mathias; Brink, Mark; Bristow, Abigail; de Kluizenaar, Yvonne; Finegold, Lawrence; Hong, Jiyoung; Janssen, Sabine A; Klaeboe, Ronny; Leroux, Tony; Liebl, Andreas; Matsui, Toshihito; Schwela, Dieter; Sliwinska-Kowalska, Mariola; Sörqvist, Patrik

    2015-01-01

    The mandate of the International Commission on Biological Effects of Noise (ICBEN) is to promote a high level of scientific research concerning all aspects of noise-induced effects on human beings and animals. In this review, ICBEN team chairs and co-chairs summarize relevant findings, publications, developments, and policies related to the biological effects of noise, with a focus on the period 2011-2014 and for the following topics: Noise-induced hearing loss; nonauditory effects of noise; effects of noise on performance and behavior; effects of noise on sleep; community response to noise; and interactions with other agents and contextual factors. Occupational settings and transport have been identified as the most prominent sources of noise that affect health. These reviews demonstrate that noise is a prevalent and often underestimated threat for both auditory and nonauditory health and that strategies for the prevention of noise and its associated negative health consequences are needed to promote public health.

  6. Somatosensory stimulation activates human auditory cortex

    Institute of Scientific and Technical Information of China (English)

    蒋宇钢; 周倩; 张明铭

    2011-01-01

    Objective: To explore whether somatosensory stimulation can activate the human auditory cortex and provide new evidence for the auditory cortex as a multisensory area. Methods: Intrinsic optical signals from the superior temporal gyrus were measured intraoperatively in five anesthetized patients with temporal lobe tumors. Activation of the primary and secondary auditory cortex (BA 41, 42) was recorded under red illuminating light (610 ± 10 nm) during auditory and somatosensory stimulation, respectively. Results: Under red-light illumination, clear hemodynamic responses were detected in the primary and secondary auditory cortex (BA 41, 42) following 100 dB clicks (n = 5), and a similar response area, with a comparable response pattern, was observed during the somatosensory paradigm (n = 4). Conclusion: Somatosensory stimulation can activate the auditory cortex, which may provide new evidence that the auditory cortex acts as a multisensory area.

  7. Neurophysiological mechanisms involved in auditory perceptual organization

    Directory of Open Access Journals (Sweden)

    Aurélie Bidet-Caulet

    2009-09-01

    Full Text Available In our complex acoustic environment, we are confronted with a mixture of sounds produced by several simultaneous sources. However, we rarely perceive these sounds as incomprehensible noise. Our brain uses perceptual organization processes to independently follow the emission of each sound source over time. While the acoustic properties exploited in these processes are well established, the neurophysiological mechanisms involved in auditory scene analysis have attracted interest only recently. Here, we review the studies investigating these mechanisms using electrophysiological recordings from the cochlear nucleus to the auditory cortex, in animals and humans. Their findings reveal that basic mechanisms such as frequency selectivity, forward suppression and multi-second habituation shape the automatic brain responses to sounds in a way that can account for several important characteristics of perceptual organization of both simultaneous and successive sounds. One challenging question remains unresolved: how are the resulting activity patterns integrated to yield the corresponding conscious percepts?

  8. Prior implicit knowledge shapes human threshold for orientation noise

    DEFF Research Database (Denmark)

    Christensen, Jeppe H; Bex, Peter J; Fiser, József

    2015-01-01

    , resulting in an image-class-specific threshold that changes the shape and position of the dipper function according to image class. These findings do not fit a filter-based feed-forward view of orientation coding, but can be explained by a process that utilizes an experience-based perceptual prior...... of the expected local orientations and their noise. Thus, the visual system encodes orientation in a dynamic context by continuously combining sensory information with expectations derived from earlier experiences....

  9. Significant auditory threshold shift among workers exposed to different noise levels

    Directory of Open Access Journals (Sweden)

    Flavia Cardoso Oliva

    2011-09-01

    Full Text Available PURPOSE: To assess hearing status and the occurrence of significant auditory threshold shifts in meat-processing facility workers exposed to noise levels below nationally and internationally recommended exposure limits, and to compare these results with data from workers exposed to excessive noise levels. METHODS: A database containing longitudinal information on 266 workers was used. Workers with a minimum of three audiometric test results and with documented noise-exposure data were selected, leaving 63 records, which were classified into three noise-exposure levels: 79 to 84.9 dB(A), 85 to 89.9 dB(A), and 90 to 98.8 dB(A). The occurrence of hearing loss and of significant auditory threshold shift was evaluated for the participants in each subgroup. RESULTS: Differences were found at all frequencies when the mean auditory thresholds for each frequency were compared as a function of noise-exposure level. The correlation between the occurrence of noise-induced hearing loss and the years of noise exposure within the current company was significant (R=0.373; p=0.079). Permanent threshold shifts were found at all three noise-exposure levels. CONCLUSION: The findings of the present study suggest an association between significant auditory threshold shifts in workers and years of exposure to noise levels considered low-risk.

  10. Effects of exposure to noise and indoor air pollution on human perception and symptoms

    DEFF Research Database (Denmark)

    Witterseh, Thomas; Wargocki, Pawel; Fang, Lei

    1999-01-01

    The objective of the present study was to investigate human perception and SBS symptoms when people are exposed simultaneously to different levels of air pollution and ventilation noise. The air quality in an office was modified by placing or removing a carpet and the background noise level...... was modified by playing a recording of ventilation noise. Thirty female subjects, six at a time, occupied the office for 4.4 hours. The subjects assessed the air quality, the noise, and the indoor environment upon entering the office and on six occasions during occupation. Furthermore, SBS symptoms...... of the occupants were recorded throughout the exposure period. During occupation, the subjects performed simulated office work. The results show that elevated air pollution and noise in an office can interact and negatively affect office workers by increasing the prevalence of SBS symptoms. A moderate increase...

  11. Auditory agnosia.

    Science.gov (United States)

    Slevc, L Robert; Shell, Alison R

    2015-01-01

    Auditory agnosia refers to impairments in sound perception and identification despite intact hearing, cognitive functioning, and language abilities (reading, writing, and speaking). Auditory agnosia can be general, affecting all types of sound perception, or can be (relatively) specific to a particular domain. Verbal auditory agnosia (also known as (pure) word deafness) refers to deficits specific to speech processing, environmental sound agnosia refers to difficulties confined to non-speech environmental sounds, and amusia refers to deficits confined to music. These deficits can be apperceptive, affecting basic perceptual processes, or associative, affecting the relation of a perceived auditory object to its meaning. This chapter discusses what is known about the behavioral symptoms and lesion correlates of these different types of auditory agnosia (focusing especially on verbal auditory agnosia), evidence for the role of a rapid temporal processing deficit in some aspects of auditory agnosia, and the few attempts to treat the perceptual deficits associated with auditory agnosia. A clear picture of auditory agnosia has been slow to emerge, hampered by the considerable heterogeneity in behavioral deficits, associated brain damage, and variable assessments across cases. Despite this lack of clarity, these striking deficits in complex sound processing continue to inform our understanding of auditory perception and cognition. © 2015 Elsevier B.V. All rights reserved.

  12. Induction of plasticity in the human motor cortex by pairing an auditory stimulus with TMS.

    Science.gov (United States)

    Sowman, Paul F; Dueholm, Søren S; Rasmussen, Jesper H; Mrachacz-Kersting, Natalie

    2014-01-01

    Acoustic stimuli can cause a transient increase in the excitability of the motor cortex. The current study leverages this phenomenon to develop a method for testing the integrity of auditorimotor integration and the capacity for auditorimotor plasticity. We demonstrate that appropriately timed transcranial magnetic stimulation (TMS) of the hand area, paired with auditorily mediated excitation of the motor cortex, induces an enhancement of motor cortex excitability that lasts beyond the time of stimulation. This result demonstrates for the first time that paired associative stimulation (PAS)-induced plasticity within the motor cortex is applicable with auditory stimuli. We propose that the method developed here might provide a useful tool for future studies that measure auditory-motor connectivity in communication disorders.

  13. Spectro-temporal analysis of complex sounds in the human auditory system

    DEFF Research Database (Denmark)

    Piechowiak, Tobias

    2009-01-01

    Most sounds encountered in our everyday life carry information in terms of temporal variations of their envelopes. These envelope variations, or amplitude modulations, shape the basic building blocks for speech, music, and other complex sounds. Often a mixture of such sounds occurs in natural...... acoustic scenes, with each of the sounds having its own characteristic pattern of amplitude modulations. Complex sounds, such as speech, share the same amplitude modulations across a wide range of frequencies. This "comodulation" is an important characteristic of these sounds since it can enhance...... in conditions which are sensitive to cochlear suppression. The fourth chapter examines the role of cognitive processing in different stimulus paradigms: CMR, binaural masking level differences and modulation detection interference are investigated in contexts of auditory grouping. It is shown that auditory...

  14. Differential maturation of brain signal complexity in the human auditory and visual system

    Directory of Open Access Journals (Sweden)

    Sarah Lippe

    2009-11-01

    Full Text Available Brain development carries with it a large number of structural changes at the local level which impact on the functional interactions of distributed neuronal networks for perceptual processing. Such changes enhance information processing capacity, which can be indexed by estimation of neural signal complexity. Here, we show that during development, EEG signal complexity increases from one month to 5 years of age in response to auditory and visual stimulation. However, the rates of change in complexity were not equivalent for the two responses. Infants’ signal complexity for the visual condition was greater than auditory signal complexity, whereas adults showed the same level of complexity to both types of stimuli. The differential rates of complexity change may reflect a combination of innate and experiential factors on the structure and function of the two sensory systems.

  15. Effects of ultrasonic noise on the human body-a bibliographic review.

    Science.gov (United States)

    Smagowska, Bożena; Pawlaczyk-Łuszczyńska, Małgorzata

    2013-01-01

    Industrial noise in the working environment has adverse effects on human hearing; literature and private studies confirm that. It has been determined that significant changes in the hearing threshold level occur in the high frequency audiometry, i.e., in the 8-20 kHz frequency range. Therefore, it is important to determine the effect of ultrasonic noise (10-40 kHz) on the human body in the working environment. This review describes hearing and nonhearing effects (thermal effects, subjective symptoms and functional changes) of the exposure to noise emitted by ultrasound devices. Many countries have standard health exposure limits to prevent effects of the exposure to ultrasonic noise in the working environment.

  16. Evaluation of human exposure to the noise from large wind turbine generators

    Science.gov (United States)

    Shepherd, K. P.; Grosveld, F. W.; Stephens, D. G.

    1983-01-01

    The human perception of a nuisance level of noise was quantified in tests and attempts were made to define criteria for acceptable sound levels from wind turbines. Comparisons were made between the sound necessary to cause building vibration, which occurred near the Mod-1 wind turbine, and human perception thresholds for building noise and building vibration. Thresholds were measured for both broadband and impulsive noise, with the finding that noise in the 500-2000 Hz region, and impulses with a 1 Hz fundamental, were most noticeable. Curves were developed for matching a receiver location with expected acoustic output from a machine to determine if the sound levels were offensive. In any case, further data from operating machines are required before definitive criteria can be established.

  17. Contralateral Noise Stimulation Delays P300 Latency in School-Aged Children.

    Directory of Open Access Journals (Sweden)

    Thalita Ubiali

    Full Text Available The auditory cortex modulates auditory afferents through the olivocochlear system, which innervates the outer hair cells and the afferent neurons under the inner hair cells in the cochlea. Most of the studies that investigated the efferent activity in humans focused on evaluating the suppression of the otoacoustic emissions by stimulating the contralateral ear with noise, which assesses the activation of the medial olivocochlear bundle. The neurophysiology and the mechanisms involving efferent activity on higher regions of the auditory pathway, however, are still unknown. Also, the lack of studies investigating the effects of noise on the human auditory cortex, especially in the pediatric population, points to the need for recording the late auditory potentials in noise conditions. Assessing the auditory efferents in school-aged children is highly important due to some of their attributed functions such as selective attention and signal detection in noise, which are important abilities related to the development of language and academic skills. For this reason, the aim of the present study was to evaluate the effects of noise on P300 responses of children with normal hearing. P300 was recorded in 27 children aged from 8 to 14 years with normal hearing in two conditions: with and without contralateral white noise stimulation. P300 latencies were significantly longer in the presence of contralateral noise. No significant changes were observed for the amplitude values. Contralateral white noise stimulation delayed P300 latency in a group of school-aged children with normal hearing. These results suggest a possible influence of the medial olivocochlear activation on P300 responses under noise conditions.

  18. The Effect of Temporal Context on the Sustained Pitch Response in Human Auditory Cortex

    OpenAIRE

    Gutschalk, Alexander; Patterson, Roy D.; Scherg, Michael; Uppenkamp, Stefan; Rupp, André

    2006-01-01

    Recent neuroimaging studies have shown that activity in lateral Heschl’s gyrus covaries specifically with the strength of musical pitch. Pitch strength is important for the perceptual distinctiveness of an acoustic event, but in complex auditory scenes, the distinctiveness of an event also depends on its context. In this magnetoencephalography study, we evaluate how temporal context influences the sustained pitch response (SPR) in lateral Heschl’s gyrus. In 2 sequences of continuously alterna...

  19. Perceptual demand modulates activation of human auditory cortex in response to task-irrelevant sounds.

    Science.gov (United States)

    Sabri, Merav; Humphries, Colin; Verber, Matthew; Mangalathu, Jain; Desai, Anjali; Binder, Jeffrey R; Liebenthal, Einat

    2013-09-01

    In the visual modality, perceptual demand on a goal-directed task has been shown to modulate the extent to which irrelevant information can be disregarded at a sensory-perceptual stage of processing. In the auditory modality, the effect of perceptual demand on neural representations of task-irrelevant sounds is unclear. We compared simultaneous ERPs and fMRI responses associated with task-irrelevant sounds across parametrically modulated perceptual task demands in a dichotic-listening paradigm. Participants performed a signal detection task in one ear (Attend ear) while ignoring task-irrelevant syllable sounds in the other ear (Ignore ear). Results revealed modulation of syllable processing by auditory perceptual demand in an ROI in middle left superior temporal gyrus and in negative ERP activity 130-230 msec post stimulus onset. Increasing the perceptual demand in the Attend ear was associated with a reduced neural response in both fMRI and ERP to task-irrelevant sounds. These findings are in support of a selection model whereby ongoing perceptual demands modulate task-irrelevant sound processing in auditory cortex.

  20. How does stochastic resonance work within the human brain? - Psychophysics of internal and external noise

    Energy Technology Data Exchange (ETDEWEB)

    Aihara, Takatsugu [ATR Computational Neuroscience Laboratories, 2-2-2 Hikaridai, Seika-cho, Soraku-gun, Kyoto 619-0288 (Japan); Kitajo, Keiichi [Laboratory for Dynamics of Emergent Intelligence, RIKEN Brain Science Institute, Wako, Saitama 351-0198 (Japan); PRESTO, Japan Science and Technology Agency (JST), 4-1-8 Honcho Kawaguchi, Saitama 332-0012 (Japan); Nozaki, Daichi [Educational Physiology Laboratory, Graduate School of Education, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-0033 (Japan); Yamamoto, Yoshiharu, E-mail: yamamoto@p.u-tokyo.ac.jp [Educational Physiology Laboratory, Graduate School of Education, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-0033 (Japan)

    2010-10-05

    We review how research on stochastic resonance (SR) in neuroscience has evolved and point out that the previous studies have overlooked the interaction between internal and external noise. We propose a new psychometric function incorporating SR effects, and show that a Bayesian adaptive method applied to the function efficiently estimates the parameters of the function. Using this procedure in visual detection experiments, we provide significant insight into the relationship between internal and external noise in SR within the human brain.
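
    The psychometric function proposed by the authors is not reproduced in this record. As a generic illustration of why detection of a subthreshold signal can peak at an intermediate external noise level, the toy threshold model below computes a d'-like sensitivity from hit and false-alarm rates; the signal, threshold, and noise values are arbitrary.

      import numpy as np
      from scipy.stats import norm

      def dprime_vs_noise(signal=0.5, threshold=1.0, sigmas=np.linspace(0.05, 3, 60)):
          """Toy threshold model of stochastic resonance.

          A subthreshold signal (signal < threshold) is detected only when
          added zero-mean Gaussian noise pushes it over the threshold. Hit
          rate rises with noise SD, but so does the false-alarm rate on
          noise alone, so a d'-like sensitivity peaks at an intermediate SD.
          """
          hit = 1 - norm.cdf((threshold - signal) / sigmas)   # P(signal + noise > threshold)
          fa = 1 - norm.cdf(threshold / sigmas)               # P(noise alone > threshold)
          eps = 1e-6
          return sigmas, (norm.ppf(np.clip(hit, eps, 1 - eps))
                          - norm.ppf(np.clip(fa, eps, 1 - eps)))

      sigmas, d = dprime_vs_noise()
      print("optimal external noise SD ~", sigmas[np.argmax(d)])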

  1. Histological Basis of Laminar MRI Patterns in High Resolution Images of Fixed Human Auditory Cortex

    Science.gov (United States)

    Wallace, Mark N.; Cronin, Matthew J.; Bowtell, Richard W.; Scott, Ian S.; Palmer, Alan R.; Gowland, Penny A.

    2016-01-01

    Functional magnetic resonance imaging (fMRI) studies of the auditory region of the temporal lobe would benefit from the availability of image contrast that allowed direct identification of the primary auditory cortex, as this region cannot be accurately located using gyral landmarks alone. Previous work has suggested that the primary area can be identified in magnetic resonance (MR) images because of its relatively high myelin content. However, MR images are also affected by the iron content of the tissue and in this study we sought to confirm that different MR image contrasts did correlate with the myelin content in the gray matter and were not primarily affected by iron content as is the case in the primary visual and somatosensory areas. By imaging blocks of fixed post-mortem cortex in a 7 T scanner and then sectioning them for histological staining we sought to assess the relative contribution of myelin and iron to the gray matter contrast in the auditory region. Evaluating the image contrast in T2*-weighted images and quantitative R2* maps showed a reasonably high correlation between the myelin density of the gray matter and the intensity of the MR images. The correlation with T1-weighted phase sensitive inversion recovery (PSIR) images was better than with the previous two image types, and there were clearly differentiated borders between adjacent cortical areas in these images. A significant amount of iron was present in the auditory region, but did not seem to contribute to the laminar pattern of the cortical gray matter in MR images. Similar levels of iron were present in the gray and white matter and although iron was present in fibers within the gray matter, these fibers were fairly uniformly distributed across the cortex. Thus, we conclude that T1- and T2*-weighted imaging sequences do demonstrate the relatively high myelin levels that are characteristic of the deep layers in primary auditory cortex and allow it and some of the surrounding areas to be

  2. Using the Auditory Hazard Assessment Algorithm for Humans (AHAAH) Software, Beta Release W93e

    Science.gov (United States)

    2009-09-01

    Dearborn, MI. Price, G. R. (1997). “Noise hazard issues in the design of airbags.” Invited seminar presented to GM-NAO R&D Center, Warren, MI. Price...Invited presentation to seminar at GM-NAO R&D Center, Warren, MI. Price, G. R. (1994). “Hazard from impulse noise: Problems and prospects,” J. Acoust. Soc...from intense impulses from a mathematical model of the ear.” Paper in proceedings of Inter-Noise 87, meeting in Beijing, China, Sept 1987. 1986

  3. Sound identification in human auditory cortex: Differential contribution of local field potentials and high gamma power as revealed by direct intracranial recordings.

    Science.gov (United States)

    Nourski, Kirill V; Steinschneider, Mitchell; Rhone, Ariane E; Oya, Hiroyuki; Kawasaki, Hiroto; Howard, Matthew A; McMurray, Bob

    2015-09-01

    High gamma power has become the principal means of assessing auditory cortical activation in human intracranial studies, albeit at the expense of low frequency local field potentials (LFPs). It is unclear whether limiting analyses to high gamma impedes the ability to clarify auditory cortical organization. We compared the two measures obtained from posterolateral superior temporal gyrus (PLST) and evaluated their relative utility in sound categorization. Subjects were neurosurgical patients undergoing invasive monitoring for medically refractory epilepsy. Stimuli (consonant-vowel syllables varying in voicing and place of articulation and control tones) elicited robust evoked potentials and high gamma activity on PLST. LFPs had greater across-subject variability, yet yielded higher classification accuracy, relative to high gamma power. Classification was enhanced by including temporal detail of LFPs and combining LFP and high gamma. We conclude that future studies should consider utilizing both LFP and high gamma when investigating the functional organization of human auditory cortex. Copyright © 2015 Elsevier Inc. All rights reserved.

  4. Auditory streaming by phase relations between components of harmonic complexes: a comparative study of human subjects and bird forebrain neurons.

    Science.gov (United States)

    Dolležal, Lena-Vanessa; Itatani, Naoya; Günther, Stefanie; Klump, Georg M

    2012-12-01

    Auditory streaming describes a percept in which a sequential series of sounds either is segregated into different streams or is integrated into one stream based on differences in their spectral or temporal characteristics. This phenomenon has been analyzed in human subjects (psychophysics) and European starlings (neurophysiology), presenting harmonic complex (HC) stimuli with different phase relations between their frequency components. Such stimuli allow evaluating streaming by temporal cues, as these stimuli only vary in the temporal waveform but have identical amplitude spectra. The present study applied the commonly used ABA- paradigm (van Noorden, 1975) and matched stimulus sets in psychophysics and neurophysiology to evaluate the effects of fundamental frequency (f₀), frequency range (f(LowCutoff)), tone duration (TD), and tone repetition time (TRT) on streaming by phase relations of the HC stimuli. By comparing the percept of humans with rate or temporal responses of avian forebrain neurons, a neuronal correlate of perceptual streaming of HC stimuli is described. The differences in the pattern of the neurons' spike rate responses provide for a better explanation for the percept observed in humans than the differences in the temporal responses (i.e., the representation of the periodicity in the timing of the action potentials). Especially for HC stimuli with a short 40-ms duration, the differences in the pattern of the neurons' temporal responses failed to represent the patterns of human perception, whereas the neurons' rate responses showed a good match. These results suggest that differential rate responses are a better predictor for auditory streaming by phase relations than temporal responses.

  5. Decreases in energy and increases in phase locking of event-related oscillations to auditory stimuli occur during adolescence in human and rodent brain.

    Science.gov (United States)

    Ehlers, Cindy L; Wills, Derek N; Desikan, Anita; Phillips, Evelyn; Havstad, James

    2014-01-01

    Synchrony of phase (phase locking) of event-related oscillations (EROs) within and between different brain areas has been suggested to reflect communication exchange between neural networks and as such may be a sensitive and translational measure of changes in brain remodeling that occur during adolescence. This study sought to investigate developmental changes in EROs using a similar auditory event-related potential (ERP) paradigm in both rats and humans. Energy and phase variability of EROs collected from 38 young adult men (aged 18-25 years), 33 periadolescent boys (aged 10-14 years), 15 male periadolescent rats [at postnatal day (PD) 36] and 19 male adult rats (at PD103) were investigated. Three channels of ERP data (frontal cortex, central cortex and parietal cortex) were collected from the humans using an 'oddball plus noise' paradigm that was presented under passive (no behavioral response required) conditions in the periadolescents and under active conditions (where each subject was instructed to depress a counter each time he detected an infrequent target tone) in adults and adolescents. ERPs were recorded in rats using only the passive paradigm. In order to compare the tasks used in rats to those used in humans, we first studied whether three ERO measures [energy, phase locking index (PLI) within an electrode site and phase difference locking index (PDLI) between different electrode sites] differentiated the 'active' from 'passive' ERP tasks. Secondly, we explored our main question of whether the three ERO measures differentiated adults from periadolescents in a similar manner in both humans and rats. No significant changes were found in measures of ERO energy between the active and passive tasks in the periadolescent human participants. There was a smaller but significant increase in PLI but not PDLI as a function of active task requirements. Developmental differences were found in energy, PLI and PDLI values between the periadolescents and adults in
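
    Of the three ERO measures named here, the within-electrode phase locking index is the most compactly expressed: it is the magnitude of the across-trial mean of unit phase vectors. The sketch below assumes narrow-band filtered single-trial epochs and uses the analytic signal to obtain instantaneous phase; the study's actual time-frequency decomposition may differ.

      import numpy as np
      from scipy.signal import hilbert

      def phase_locking_index(trials):
          """Inter-trial phase locking of band-limited single-trial signals.

          trials: array (n_trials, n_samples) from one narrow-band filtered channel.
          Returns PLI(t) in [0, 1]: 1 = identical phase across trials,
          0 = uniformly scattered phases.
          """
          phases = np.angle(hilbert(trials, axis=1))
          return np.abs(np.mean(np.exp(1j * phases), axis=0))

      # Hypothetical use on band-passed epochs: pli = phase_locking_index(bandpassed_epochs)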

  6. Distributed neural signatures of natural audiovisual speech and music in the human auditory cortex.

    Science.gov (United States)

    Salmi, Juha; Koistinen, Olli-Pekka; Glerean, Enrico; Jylänki, Pasi; Vehtari, Aki; Jääskeläinen, Iiro P; Mäkelä, Sasu; Nummenmaa, Lauri; Nummi-Kuisma, Katarina; Nummi, Ilari; Sams, Mikko

    2016-12-06

    During a conversation or when listening to music, auditory and visual information are combined automatically into audiovisual objects. However, it is still poorly understood how specific types of visual information shape neural processing of sounds in lifelike stimulus environments. Here we applied multi-voxel pattern analysis to investigate how naturally matching visual input modulates supratemporal cortex activity during processing of naturalistic acoustic speech, singing and instrumental music. Bayesian logistic regression classifiers with sparsity-promoting priors were trained to predict whether the stimulus was audiovisual or auditory, and whether it contained piano playing, speech, or singing. The predictive performance of the classifiers was tested by leaving out one participant at a time for testing and training the model on the remaining 15 participants. The signature patterns associated with unimodal auditory stimuli encompassed distributed locations mostly in the middle and superior temporal gyrus (STG/MTG). A pattern regression analysis, based on a continuous acoustic model, revealed that activity in some of these MTG and STG areas was associated with acoustic features present in speech and music stimuli. Concurrent visual stimulus modulated activity in bilateral MTG (speech), lateral aspect of right anterior STG (singing), and bilateral parietal opercular cortex (piano). Our results suggest that specific supratemporal brain areas are involved in processing complex natural speech, singing, and piano playing, and other brain areas located in anterior (facial speech) and posterior (music-related hand actions) supratemporal cortex are influenced by related visual information. Those anterior and posterior supratemporal areas have been linked to stimulus identification and sensory-motor integration, respectively. Copyright © 2017 Elsevier Inc. All rights reserved.

  7. Attentional influences on functional mapping of speech sounds in human auditory cortex

    Directory of Open Access Journals (Sweden)

    Elbert Thomas

    2004-07-01

    Full Text Available Abstract Background The speech signal contains both information about phonological features such as place of articulation and non-phonological features such as speaker identity. These are different aspects of the 'what'-processing stream (speaker vs. speech content), and here we show that they can be further segregated as they may occur in parallel but within different neural substrates. Subjects listened to two different vowels, each spoken by two different speakers. During one block, they were asked to identify a given vowel irrespectively of the speaker (phonological categorization), while during the other block the speaker had to be identified irrespectively of the vowel (speaker categorization). Auditory evoked fields were recorded using 148-channel magnetoencephalography (MEG), and magnetic source imaging was obtained for 17 subjects. Results During phonological categorization, a vowel-dependent difference of N100m source location perpendicular to the main tonotopic gradient replicated previous findings. In speaker categorization, the relative mapping of vowels remained unchanged but sources were shifted towards more posterior and more superior locations. Conclusions These results imply that the N100m reflects the extraction of abstract invariants from the speech signal. This part of the processing is accomplished in auditory areas anterior to AI, which are part of the auditory 'what' system. This network seems to include spatially separable modules for identifying the phonological information and for associating it with a particular speaker that are activated in synchrony but within different regions, suggesting that the 'what' processing can be more adequately modeled by a stream of parallel stages. The relative activation of the parallel processing stages can be modulated by attentional or task demands.

  8. Auditory Sketches: Very Sparse Representations of Sounds Are Still Recognizable.

    Directory of Open Access Journals (Sweden)

    Vincent Isnard

    Full Text Available Sounds in our environment like voices, animal calls or musical instruments are easily recognized by human listeners. Understanding the key features underlying this robust sound recognition is an important question in auditory science. Here, we studied the recognition by human listeners of new classes of sounds: acoustic and auditory sketches, sounds that are severely impoverished but still recognizable. Starting from a time-frequency representation, a sketch is obtained by keeping only sparse elements of the original signal, here, by means of a simple peak-picking algorithm. Two time-frequency representations were compared: a biologically grounded one, the auditory spectrogram, which simulates peripheral auditory filtering, and a simple acoustic spectrogram, based on a Fourier transform. Three degrees of sparsity were also investigated. Listeners were asked to recognize the category to which a sketch sound belongs: singing voices, bird calls, musical instruments, and vehicle engine noises. Results showed that, with the exception of voice sounds, very sparse representations of sounds (10 features, or energy peaks, per second) could be recognized above chance. No clear differences could be observed between the acoustic and the auditory sketches. For the voice sounds, however, a completely different pattern of results emerged, with at-chance or even below-chance recognition performances, suggesting that the important features of the voice, whatever they are, were removed by the sketch process. Overall, these perceptual results were well correlated with a model of auditory distances, based on spectro-temporal excitation patterns (STEPs). This study confirms the potential of these new classes of sounds, acoustic and auditory sketches, to study sound recognition.

  9. Auditory Sketches: Very Sparse Representations of Sounds Are Still Recognizable.

    Science.gov (United States)

    Isnard, Vincent; Taffou, Marine; Viaud-Delmon, Isabelle; Suied, Clara

    2016-01-01

    Sounds in our environment like voices, animal calls or musical instruments are easily recognized by human listeners. Understanding the key features underlying this robust sound recognition is an important question in auditory science. Here, we studied the recognition by human listeners of new classes of sounds: acoustic and auditory sketches, sounds that are severely impoverished but still recognizable. Starting from a time-frequency representation, a sketch is obtained by keeping only sparse elements of the original signal, here, by means of a simple peak-picking algorithm. Two time-frequency representations were compared: a biologically grounded one, the auditory spectrogram, which simulates peripheral auditory filtering, and a simple acoustic spectrogram, based on a Fourier transform. Three degrees of sparsity were also investigated. Listeners were asked to recognize the category to which a sketch sound belongs: singing voices, bird calls, musical instruments, and vehicle engine noises. Results showed that, with the exception of voice sounds, very sparse representations of sounds (10 features, or energy peaks, per second) could be recognized above chance. No clear differences could be observed between the acoustic and the auditory sketches. For the voice sounds, however, a completely different pattern of results emerged, with at-chance or even below-chance recognition performances, suggesting that the important features of the voice, whatever they are, were removed by the sketch process. Overall, these perceptual results were well correlated with a model of auditory distances, based on spectro-temporal excitation patterns (STEPs). This study confirms the potential of these new classes of sounds, acoustic and auditory sketches, to study sound recognition.
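
    The sketch-generation step described above (keeping only sparse spectro-temporal peaks) can be approximated, under assumptions, with an ordinary STFT rather than an auditory spectrogram: keep the N largest-magnitude time-frequency bins per second of signal and resynthesize. The code below is such a rough stand-in, not the authors' peak-picking algorithm.

      import numpy as np
      from scipy.signal import stft, istft

      def sketch(signal, fs, peaks_per_second=10):
          """Keep only the largest time-frequency bins ("peaks") of a signal.

          Compute a spectrogram, zero all but the N largest-magnitude bins per
          second of signal, and resynthesize with the inverse STFT.
          """
          f, t, Z = stft(signal, fs=fs, nperseg=512)
          n_keep = max(1, int(peaks_per_second * len(signal) / fs))
          mag = np.abs(Z).ravel()
          thresh = np.partition(mag, -n_keep)[-n_keep]
          mask = np.abs(Z) >= thresh
          _, sparse = istft(Z * mask, fs=fs, nperseg=512)
          return sparse

      # Example with a hypothetical 1-s test tone at 16 kHz:
      fs = 16000
      tone = np.sin(2 * np.pi * 440 * np.arange(fs) / fs)
      print(sketch(tone, fs, peaks_per_second=10).shape)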

  10. A region finding method to remove the noise from the images of the human hand gesture recognition system

    Science.gov (United States)

    Khan, Muhammad Jibran; Mahmood, Waqas

    2015-12-01

    The performance of human hand gesture recognition systems depends on the quality of the images presented to the system. Since these systems work in a real-time environment, the images may be corrupted by environmental noise, and removing that noise can enhance system performance. Various noise removal methods have been proposed in previous studies, but each has its own limitations. We present a region finding method for dealing with environmental noise that gives better results and improves the recognition rate of human hand gesture recognition systems.
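
    The record does not detail the region finding method itself. A minimal sketch of one common variant is given below: label the connected foreground regions of a thresholded frame and keep only the largest, on the assumption that the hand is the biggest blob and smaller blobs are environmental noise. Function and variable names are illustrative.

      import numpy as np
      from scipy import ndimage

      def keep_largest_region(binary_mask):
          """Remove small noise blobs from a binary hand mask.

          Labels connected regions and keeps only the largest one.
          """
          labels, n = ndimage.label(binary_mask)
          if n == 0:
              return binary_mask
          sizes = ndimage.sum(binary_mask, labels, index=range(1, n + 1))
          largest = 1 + int(np.argmax(sizes))
          return labels == largest

      # Example on a hypothetical thresholded frame `mask` (boolean array):
      # clean = keep_largest_region(mask)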

  11. Animal models for auditory streaming.

    Science.gov (United States)

    Itatani, Naoya; Klump, Georg M

    2017-02-19

    Sounds in the natural environment need to be assigned to acoustic sources to evaluate complex auditory scenes. Separating sources will affect the analysis of auditory features of sounds. As the benefits of assigning sounds to specific sources accrue to all species communicating acoustically, the ability for auditory scene analysis is widespread among different animals. Animal studies allow for a deeper insight into the neuronal mechanisms underlying auditory scene analysis. Here, we will review the paradigms applied in the study of auditory scene analysis and streaming of sequential sounds in animal models. We will compare the psychophysical results from the animal studies to the evidence obtained in human psychophysics of auditory streaming, i.e. in a task commonly used for measuring the capability for auditory scene analysis. Furthermore, the neuronal correlates of auditory streaming will be reviewed in different animal models and the observations of the neurons' response measures will be related to perception. The across-species comparison will reveal whether similar demands in the analysis of acoustic scenes have resulted in similar perceptual and neuronal processing mechanisms in the wide range of species being capable of auditory scene analysis. This article is part of the themed issue 'Auditory and visual scene analysis'.

  12. Effects of Estrogen on Guinea Pig Auditory Function in Simulated Spaceship Weightlessness and Noise

    Institute of Scientific and Technical Information of China (English)

    王刚; 吴大蔚; 牛聪敏; 刘钢; 吴玮; 韩浩伦; 屈昌北; 王方园; 王鸿南; 李保卫; 孟令照; 虞学军

    2013-01-01

    Objective To investigate the effects of combined simulated weightlessness and noise on auditory brainstem response thresholds in guinea pigs and the protective effects of estrogen. Methods Forty guinea pigs were randomly divided into a simulated weightlessness only group, a weightlessness with noise group, an estrogen treatment group and an estrogen prevention group. Weightlessness was simulated by posterior limb suspension, at a -30° angle between the horizontal and the longitudinal axis of the body. Except for the weightlessness only group, animals in all other groups were also exposed to simulated spaceship flight and return stage noise for 5 days. Intramuscular injection of estradiol benzoate (0.08 mg/kg/day, with a double dose on Day 1) was administered for 3 days before the experiment in the estrogen prevention group, and from the start of the experiment to 3 days after the experiment in the estrogen treatment group. Auditory brainstem response (ABR) thresholds were recorded before, at the end of, and 3 days after the experiment. Results ABR thresholds did not differ among the groups before the experiment. ABR thresholds differed at the end of and 3 days after the experiment, with those in the estrogen treatment group being the lowest. ABR threshold differences were also seen at different times during the experiment in each group. Conclusion While both weightlessness and noise can lead to damage of auditory function in guinea pigs, their combination can cause even more severe damage. Estrogen appears to have protective effects against damage of auditory function caused by combined weightlessness and noise factors in space.

  13. Characterizing noise in nonhuman vocalizations: Acoustic analysis and human perception of barks by coyotes and dogs

    Science.gov (United States)

    Riede, Tobias; Mitchell, Brian R.; Tokuda, Isao; Owren, Michael J.

    2005-07-01

    Measuring noise as a component of mammalian vocalizations is of interest because of its potential relevance to the communicative function. However, methods for characterizing and quantifying noise are less well established than methods applicable to harmonically structured aspects of signals. Using barks of coyotes and domestic dogs, we compared six acoustic measures and studied how they are related to human perception of noisiness. Measures of harmonic-to-noise-ratio (HNR), percent voicing, and shimmer were found to be the best predictors of perceptual rating by human listeners. Both acoustics and perception indicated that noisiness was similar across coyote and dog barks, but within each species there was significant variation among the individual vocalizers. The advantages and disadvantages of the various measures are discussed.
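
    Two of the best-performing measures named here can be stated compactly. The sketch below shows a conventional local shimmer (relative amplitude perturbation between consecutive periods) and a harmonics-to-noise ratio derived from the normalized autocorrelation peak at the fundamental period; these are standard textbook forms and are only assumed to resemble the measures used in the study.

      import numpy as np

      def local_shimmer(peak_amplitudes):
          """Mean absolute amplitude difference between consecutive periods,
          relative to the mean amplitude (often reported in percent)."""
          a = np.asarray(peak_amplitudes, dtype=float)
          return np.mean(np.abs(np.diff(a))) / np.mean(a)

      def hnr_db(r_max):
          """Harmonics-to-noise ratio in dB from the normalized autocorrelation
          peak r_max (0 < r_max < 1) at the fundamental period."""
          return 10.0 * np.log10(r_max / (1.0 - r_max))

      # Hypothetical per-period peak amplitudes extracted from one bark:
      print(local_shimmer([0.82, 0.91, 0.78, 0.88, 0.75]), hnr_db(0.9))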

  14. Word Recognition in Auditory Cortex

    Science.gov (United States)

    DeWitt, Iain D. J.

    2013-01-01

    Although spoken word recognition is more fundamental to human communication than text recognition, knowledge of word-processing in auditory cortex is comparatively impoverished. This dissertation synthesizes current models of auditory cortex, models of cortical pattern recognition, models of single-word reading, results in phonetics and results in…

  15. An overview of health effects on noise

    Science.gov (United States)

    Osada, Y.

    1988-12-01

    Although noise can damage the inner ear and cause other pathological changes, its most common negative effects are non-somatic, such as a perception of noisiness and disturbance of daily activities. According to the definition of health by WHO, this should be considered as a health hazard. These health effects of noise can be classified into the following three categories: (I) hearing loss, perception of noisiness and masking are produced along the auditory pathway and are thus direct and specific effects of noise; (II) interference with performance, rest and sleep, a feeling of discomfort and some physiological effects are produced as indirect and non-specific effects via reticular formation of the midbrain; (III) annoyance is not merely a feeling of unpleasantness but the feeling of being bothered or troubled, and includes the development of a particular attitude toward the noise source. Individual or group behavioral responses will be evoked when annoyance develops. Annoyance and behavioral response are integrated and composite effects. The health effects of noise are modified by many factors related to both the noise and the individual. Noise level, frequency spectrum, duration and impulsiveness modify the effects. Sex, age, health status and mental character also have an influence on the effects. Direct effects of noise are most dependent on the physical nature of the noise and least dependent on human factors. Indirect effects are more dependent, and integrated effects most dependent, on human factors.

  16. Human Response to Helicopter Noise: A Test of A-Weighting

    Science.gov (United States)

    1991-11-01

    “Environmental Quality Technology” Work Unit NN-TGO, “Department of Defense (DOD) Noise Source Human Response Characterization.” Cosponsors of this research were...

  17. Objective measures of binaural masking level differences and comodulation masking release based on late auditory evoked potentials

    DEFF Research Database (Denmark)

    Epp, Bastian; Yasin, Ifat; Verhey, Jesko L.

    2013-01-01

    The audibility of important sounds is often hampered due to the presence of other masking sounds. The present study investigates if a correlate of the audibility of a tone masked by noise is found in late auditory evoked potentials measured from human listeners. The audibility of the target sound...

  18. Effects of Auditory Input in Individuation Tasks

    Science.gov (United States)

    Robinson, Christopher W.; Sloutsky, Vladimir M.

    2008-01-01

    Under many conditions auditory input interferes with visual processing, especially early in development. These interference effects are often more pronounced when the auditory input is unfamiliar than when the auditory input is familiar (e.g. human speech, pre-familiarized sounds, etc.). The current study extends this research by examining how…

  19. Deriving cochlear delays in humans using otoacoustic emissions and auditory evoked potentials

    DEFF Research Database (Denmark)

    Pigasse, Gilles

    A great deal of the processing of incoming sounds to the auditory system occurs within the cochlea. The organ of Corti within the cochlea has differing mechanical properties along its length that broadly give rise to frequency selectivity. Its stiffness is at maximum at the base and decreases...... relation between frequency and travel time in the cochlea defines the cochlear delay. This delay is directly associated with the signal analysis occurring in the inner ear and is therefore of primary interest to get a better knowledge of this organ. It is possible to estimate the cochlear delay by direct...... and ASSR latency estimates demonstrated similar rates of latency decrease as a function of frequency. It was further concluded, in this thesis, that OAE measurements are the most appropriate to estimate cochlear delays, since they had the best repeatability and the shortest recording time. Preliminary...

  20. Nonlinear dynamics of human locomotion: effects of rhythmic auditory cueing on local dynamic stability

    Directory of Open Access Journals (Sweden)

    Philippe eTerrier

    2013-09-01

    Full Text Available It has been observed that time series of gait parameters (stride length (SL), stride time (ST) and stride speed (SS)) exhibit long-term persistence and fractal-like properties. Synchronizing steps with rhythmic auditory stimuli modifies the persistent fluctuation pattern to anti-persistence. Another nonlinear method estimates the degree of resilience of gait control to small perturbations, i.e. the local dynamic stability (LDS). The method makes use of the maximal Lyapunov exponent, which estimates how fast a nonlinear system embedded in a reconstructed state space (attractor) diverges after an infinitesimal perturbation. We propose to use an instrumented treadmill to simultaneously measure basic gait parameters (time series of SL, ST and SS, from which the statistical persistence among consecutive strides can be assessed) and the trajectory of the center of pressure (from which the LDS can be estimated). In 20 healthy participants, the response to rhythmic auditory cueing (RAC) of LDS and of statistical persistence (assessed with detrended fluctuation analysis (DFA)) was compared. By analyzing the divergence curves, we observed that long-term LDS (computed as the inverse of the average logarithmic rate of divergence between the 4th and the 10th strides downstream from nearest neighbors in the reconstructed attractor) was strongly enhanced (relative change +47%). That is likely an indication of more dampened dynamics. The change in short-term LDS (divergence over one step) was smaller (+3%). DFA results (scaling exponents) confirmed an anti-persistent pattern in ST, SL and SS. Long-term LDS (but not short-term LDS) and scaling exponents exhibited a significant correlation between them (r=0.7). Both phenomena probably result from the more conscious/voluntary gait control that is required by RAC. We suggest that LDS and statistical persistence should be used to evaluate the efficiency of cueing therapy in patients with neurological gait disorders.
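
    Of the two nonlinear measures compared here, detrended fluctuation analysis is compact enough to sketch. The code below is a standard DFA implementation (integrate, detrend within windows, fit the log-log slope of the fluctuation function), not the authors' exact pipeline; the scales and the example stride-time series are arbitrary.

      import numpy as np

      def dfa_alpha(x, scales=(4, 8, 16, 32, 64)):
          """Detrended fluctuation analysis scaling exponent.

          alpha > 0.5 indicates persistent fluctuations, alpha < 0.5
          anti-persistent ones, alpha ~ 0.5 uncorrelated noise.
          """
          x = np.asarray(x, dtype=float)
          y = np.cumsum(x - x.mean())                 # integrated profile
          fluct = []
          for n in scales:
              n_seg = len(y) // n
              segs = y[:n_seg * n].reshape(n_seg, n)
              t = np.arange(n)
              rms = []
              for seg in segs:
                  coef = np.polyfit(t, seg, 1)        # local linear detrending
                  rms.append(np.sqrt(np.mean((seg - np.polyval(coef, t)) ** 2)))
              fluct.append(np.mean(rms))
          return np.polyfit(np.log(scales), np.log(fluct), 1)[0]

      # Example: 256 hypothetical stride times (seconds); white noise gives alpha ~ 0.5
      rng = np.random.default_rng(1)
      print(dfa_alpha(rng.normal(1.05, 0.02, 256)))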

  1. Compression of auditory space during forward self-motion.

    Directory of Open Access Journals (Sweden)

    Wataru Teramoto

    Full Text Available BACKGROUND: Spatial inputs from the auditory periphery can be changed with movements of the head or whole body relative to the sound source. Nevertheless, humans can perceive a stable auditory environment and appropriately react to a sound source. This suggests that the inputs are reinterpreted in the brain, while being integrated with information on the movements. Little is known, however, about how these movements modulate auditory perceptual processing. Here, we investigate the effect of the linear acceleration on auditory space representation. METHODOLOGY/PRINCIPAL FINDINGS: Participants were passively transported forward/backward at constant accelerations using a robotic wheelchair. An array of loudspeakers was aligned parallel to the motion direction along a wall to the right of the listener. A short noise burst was presented during the self-motion from one of the loudspeakers when the listener's physical coronal plane reached the location of one of the speakers (null point). In Experiments 1 and 2, the participants indicated which direction the sound was presented, forward or backward relative to their subjective coronal plane. The results showed that the sound position aligned with the subjective coronal plane was displaced ahead of the null point only during forward self-motion and that the magnitude of the displacement increased with increasing acceleration. Experiment 3 investigated the structure of the auditory space in the traveling direction during forward self-motion. The sounds were presented at various distances from the null point. The participants indicated the perceived sound location by pointing a rod. All the sounds that were actually located in the traveling direction were perceived as being biased towards the null point. CONCLUSIONS/SIGNIFICANCE: These results suggest a distortion of the auditory space in the direction of movement during forward self-motion. The underlying mechanism might involve anticipatory spatial

  2. A 3 year update on the influence of noise on performance and behavior

    Directory of Open Access Journals (Sweden)

    Charlotte Clark

    2012-01-01

    Full Text Available The effect of noise exposure on human performance and behavior continues to be a focus for research activities. This paper reviews developments in the field over the past 3 years, highlighting current areas of research, recent findings, and ongoing research in two main research areas: field studies of noise effects on children's cognition and experimental studies of auditory distraction. Overall, the evidence for the effects of external environmental noise on children's cognition has strengthened in recent years, with the use of larger community samples and better noise characterization. Studies have begun to establish exposure-effect thresholds for noise effects on cognition. However, the evidence remains predominantly cross-sectional, and future research needs to examine whether sound insulation might lessen the effects of external noise on children's learning. Research has also begun to explore the link between internal classroom acoustics and children's learning, aiming to further inform the design of the internal acoustic environment. Experimental studies of the effects of noise on cognitive performance are also reviewed, including functional differences in varieties of auditory distraction, semantic auditory distraction, individual differences in susceptibility to auditory distraction, and the role of cognitive control in the effects of noise on understanding and memory of target speech materials. In general, the results indicate that there are at least two functionally different types of auditory distraction: one due to the interruption of processes (as a result of attention being captured by the sound), another due to interference between processes. The magnitude of the former type is related to individual differences in cognitive control capacities (e.g., working memory capacity); the magnitude of the latter is not. Few studies address noise effects on behavioral outcomes, emphasizing the need for researchers to explore noise

  3. The effects of auditory contrast tuning upon speech intelligibility

    Directory of Open Access Journals (Sweden)

    Nathaniel J Killian

    2016-08-01

    Full Text Available We have previously identified neurons tuned to spectral contrast of wideband sounds in auditory cortex of awake marmoset monkeys. Because additive noise alters the spectral contrast of speech, contrast-tuned neurons, if present in human auditory cortex, may aid in extracting speech from noise. Given that this cortical function may be underdeveloped in individuals with sensorineural hearing loss, incorporating biologically-inspired algorithms into external signal processing devices could provide speech enhancement benefits to cochlear implantees. In this study we first constructed a computational signal processing algorithm to mimic auditory cortex contrast tuning. We then manipulated the shape of contrast channels and evaluated the intelligibility of reconstructed noisy speech using a metric to predict cochlear implant user perception. Candidate speech enhancement strategies were then tested in cochlear implantees with a hearing-in-noise test. Accentuation of intermediate contrast values or all contrast values improved computed intelligibility. Cochlear implant subjects showed significant improvement in noisy speech intelligibility with a contrast shaping procedure.

  4. An Acoustic Gap between the NICU and the Womb: A Potentially Overlooked Risk for Compromised Neuroplasticity of the Auditory System in Preterm Infants

    Directory of Open Access Journals (Sweden)

    Amir eLahav

    2014-12-01

    Full Text Available The intrauterine environment allows the fetus to begin hearing low-frequency sounds in a protected fashion, ensuring optimal development of the peripheral and central auditory system. However, the auditory nursery provided by the womb vanishes once the preterm newborn enters the high-frequency (HF) noisy environment of the neonatal intensive care unit (NICU). The present article draws a concerning line between auditory system development and HF noise in the NICU, which is not necessarily conducive to fostering this development. Overexposure to HF noise during critical periods disrupts the functional organization of auditory cortical circuits. As a result, we theorize, the ability to tune out noise and extract acoustic information in a noisy environment may be impaired, leading to a variety of auditory, language, and attention disorders. Additionally, HF noise in the NICU often masks human speech sounds potentially important to the preterm infant, whose exposure to linguistic stimuli is already restricted. Understanding the impact of the sound environment on the developing auditory system is an important first step in meeting the developmental demands of preterm newborns undergoing intensive care.

  5. An acoustic gap between the NICU and womb: a potential risk for compromised neuroplasticity of the auditory system in preterm infants.

    Science.gov (United States)

    Lahav, Amir; Skoe, Erika

    2014-01-01

    The intrauterine environment allows the fetus to begin hearing low-frequency sounds in a protected fashion, ensuring initial optimal development of the peripheral and central auditory system. However, the auditory nursery provided by the womb vanishes once the preterm newborn enters the high-frequency (HF) noisy environment of the neonatal intensive care unit (NICU). The present article draws a concerning line between auditory system development and HF noise in the NICU, which we argue is not necessarily conducive to fostering this development. Overexposure to HF noise during critical periods disrupts the functional organization of auditory cortical circuits. As a result, we theorize that the ability to tune out noise and extract acoustic information in a noisy environment may be impaired, leading to increased risks for a variety of auditory, language, and attention disorders. Additionally, HF noise in the NICU often masks human speech sounds, further limiting quality exposure to linguistic stimuli. Understanding the impact of the sound environment on the developing auditory system is an important first step in meeting the developmental demands of preterm newborns undergoing intensive care.

  6. CONTRALATERAL SUPPRESSION OF DISTORTION PRODUCT OTOACOUSTIC EMISSION IN CHILDREN WITH AUDITORY PROCESSING DISORDERS

    Institute of Scientific and Technical Information of China (English)

    Jessica Oppee; SUN Wei; Nancy Stecker

    2014-01-01

    Previous research has demonstrated that the amplitude of evoked emissions decreases in human subjects when the contralateral ear is stimulated by noise. The medial olivocochlear bundle (MOCB) is believed to control this phenomenon. Recent research has examined this effect in individuals with auditory processing disorders (APD), specifically those with difficulty understanding speech in noise. Results showed that transient evoked otoacoustic emissions (TEOAEs) were not affected by contralateral stimulation in these subjects. Much clinical research has measured the function of the MOCB through TEOAEs. This study used an alternative technique, distortion product otoacoustic emissions (DPOAEs), to examine this phenomenon and evaluate the function of the MOCB. DPOAEs of individuals in a control group with normal hearing and no significant auditory processing difficulties were compared to the DPOAEs of children with significant auditory processing difficulties. Results showed that the suppression effect was observed in the control group at 2 kHz with 3 kHz narrowband noise. For the auditory processing disorders group, no significant suppression was observed. Overall, DPOAEs showed suppression with contralateral noise, while the APD group levels increased overall. These results provide further evidence that the MOCB may have reduced function in children with APD.

  7. The Auditory Hazard Assessment Algorithm for Humans (AHAAH): Hazard Evaluation of Intense Sounds

    Science.gov (United States)

    2011-07-01

    Recovery from impulse-noise-induced temporary threshold shift (TTS) is delayed and may not go to completion (Hamernik et al., 1988; Luz and Hodge, 1971).

  8. Auditory hallucinations.

    Science.gov (United States)

    Blom, Jan Dirk

    2015-01-01

    Auditory hallucinations constitute a phenomenologically rich group of endogenously mediated percepts which are associated with psychiatric, neurologic, otologic, and other medical conditions, but which are also experienced by 10-15% of all healthy individuals in the general population. The group of phenomena is probably best known for its verbal auditory subtype, but it also includes musical hallucinations, echo of reading, exploding-head syndrome, and many other types. The subgroup of verbal auditory hallucinations has been studied extensively with the aid of neuroimaging techniques, and from those studies emerges an outline of a functional as well as a structural network of widely distributed brain areas involved in their mediation. The present chapter provides an overview of the various types of auditory hallucination described in the literature, summarizes our current knowledge of the auditory networks involved in their mediation, and draws on ideas from the philosophy of science and network science to reconceptualize the auditory hallucinatory experience, and point out directions for future research into its neurobiologic substrates. In addition, it provides an overview of known associations with various clinical conditions and of the existing evidence for pharmacologic and non-pharmacologic treatments.

  9. Noise-induced phase transition in the model of human virtual stick balancing

    CERN Document Server

    Zgonnikov, Arkady

    2016-01-01

    Humans face the task of balancing dynamic systems near an unstable equilibrium repeatedly throughout their lives. Much research has been aimed at understanding the mechanisms of intermittent control in the context of human balance control. The present paper deals with one of the recent developments in the theory of human intermittent control, namely, the double-well model of noise-driven control activation. We demonstrate that the double-well model can reproduce the whole range of experimentally observed distributions under different conditions. Moreover, we show that a slight change in the noise intensity parameter leads to a sudden shift of the action point distribution shape, that is, a phase transition is observed.

  10. Longitudinal Study of Human Hearings: Its Relationship to Noise and Other Factors. 1. Design of Five Year Study; Data from First Year

    Science.gov (United States)

    1977-03-01

    that exposure to loud noises causes more histological damage in young than in adult guinea pigs (Jauhiainen et al., 1972) and that kittens lose more… particular care. Further, serial studies offer several advantages over cross-sectional studies. The major reasons why serial studies of auditory

  11. Neural adaptation to silence in the human auditory cortex: a magnetoencephalographic study.

    Science.gov (United States)

    Okamoto, Hidehiko; Kakigi, Ryusuke

    2014-01-01

    Previous studies demonstrated that a decrement in the N1m response, a major deflection in the auditory evoked response, with sound repetition was mainly caused by bottom-up driven neural refractory periods following brain activation due to sound stimulations. However, it currently remains unknown whether this decrement occurs with a repetition of silences, which do not induce refractoriness. In the present study, we investigated decrements in N1m responses elicited by five repetitive silences in a continuous pure tone and by five repetitive pure tones in silence using magnetoencephalography. Repetitive sound stimulation differentially affected the N1m decrement in a sound type-dependent manner; while the N1m amplitude decreased from the 1st to the 2nd pure tone and remained constant from the 2nd to the 5th pure tone in silence, a gradual decrement was observed in the N1m amplitude from the 1st to the 5th silence embedded in a continuous pure tone. Our results suggest that neural refractoriness may mainly cause decrements in N1m responses elicited by trains of pure tones in silence, while habituation, which is a form of the implicit learning process, may play an important role in the N1m source strength decrements elicited by successive silences in a continuous pure tone.

  12. GF-GC Theory of Human Cognition: Differentiation of Short-Term Auditory and Visual Memory Factors.

    Science.gov (United States)

    McGhee, Ron; Lieberman, Lewis

    1994-01-01

    Study sought to determine whether separate short-term auditory and visual memory factors would emerge given a sufficient number of markers in a factor matrix. A principal component factor analysis with varimax rotation was performed. Short-term visual and short-term auditory memory factors emerged as expected. (RJM)

  13. Noise-driven activation in human intermittent control: a double-well potential model

    CERN Document Server

    Zgonnikov, Arkady

    2014-01-01

    In controlling unstable systems humans switch intermittently between passive and active behavior instead of controlling the system in a continuous manner. The notion of noise-driven control activation provides a richer alternative to the conventional threshold-based models of intermittent motor control. The present study represents the control activation as a random walk in a continuously changing double-well potential. The match between the proposed model and the previous data on human balancing of a virtual stick suggests that the double-well approach can aid in explaining the complex dynamics of human behavior in control processes.
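    A minimal sketch of the idea follows, assuming a static symmetric double-well potential V(x) = x^4/4 - x^2/2 and simple Euler-Maruyama integration; the published model couples the potential to the state of the controlled system, which is omitted here, and the noise level and time step are illustrative assumptions.

```python
import numpy as np

def simulate_double_well(n_steps=100_000, dt=1e-3, sigma=0.7, seed=1):
    """Random walk in a double-well potential, a toy stand-in for
    noise-driven activation of control."""
    rng = np.random.default_rng(seed)
    x = np.empty(n_steps)
    x[0] = -1.0                                  # start in the "passive" well
    for k in range(1, n_steps):
        drift = -(x[k - 1] ** 3 - x[k - 1])      # -dV/dx for V = x^4/4 - x^2/2
        x[k] = x[k - 1] + drift * dt + sigma * np.sqrt(dt) * rng.standard_normal()
    return x

x = simulate_double_well()

# Count well-to-well switches with a +/-0.5 hysteresis band; in this toy model
# a switch into the right-hand well corresponds to activating control, and the
# switching rate rises steeply with the noise intensity sigma.
state, switches = -1.0, 0
for v in x:
    if state < 0 and v > 0.5:
        state, switches = 1.0, switches + 1
    elif state > 0 and v < -0.5:
        state, switches = -1.0, switches + 1
print(switches)
```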

  14. Effects of noise overexposure on distortion product otoacoustic emissions

    DEFF Research Database (Denmark)

    de Toro, Miguel Angel Aranda

    The risk of noise-induced hearing loss (NIHL) at the workplace can be predicted according to the International Standard ISO 1999:1990. The standard is applicable to all types of noise and it is based on measurements of the total acoustic energy (L_EX,8h). Therefore, noises with equal energy … are assumed to be equally hazardous for our hearing. Nevertheless, the standard allows adding a +5 dB penalty to impulsive and tonal noises based on the presumption that they might pose a higher risk of hearing loss. This PhD thesis investigates the effect of different occupational noise exposures … on the auditory system and the need for penalization. A total of 16 normal-hearing human subjects were exposed under laboratory conditions to three noise stimuli with equal energy: (1) continuous broadband; (2) impulsive+continuous; and (3) tonal. Temporary changes in the hearing of the subjects were evaluated…
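    The equal-energy principle behind this prediction reduces to normalizing the day's acoustic energy to an 8-hour reference. The sketch below is a generic illustration of that normalization, not code from the thesis; the function name and the example exposure pattern are assumptions.

```python
import math

def l_ex_8h(levels_db, durations_h):
    """Daily noise exposure level normalized to an 8-h working day
    (equal-energy principle underlying ISO 1999).

    levels_db   : A-weighted SPL of each exposure segment, in dB
    durations_h : duration of each segment, in hours
    """
    t0 = 8.0
    energy = sum(t * 10 ** (l / 10.0) for l, t in zip(levels_db, durations_h))
    return 10.0 * math.log10(energy / t0)

# Hypothetical workday: 4 h at 85 dB, 2 h at 92 dB, 2 h of relative quiet (60 dB)
print(round(l_ex_8h([85, 92, 60], [4, 2, 2]), 1))   # ~87.4 dB
```

    Under this rule, halving the duration of an exposure is equivalent to lowering its level by 3 dB, which is exactly why equally energetic impulsive or tonal noises are treated as equally hazardous unless a penalty is added.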

  15. Auditory and Visual Sensations

    CERN Document Server

    Ando, Yoichi

    2010-01-01

    Professor Yoichi Ando, acoustic architectural designer of the Kirishima International Concert Hall in Japan, presents a comprehensive rational-scientific approach to designing performance spaces. His theory is based on systematic psychoacoustical observations of spatial hearing and listener preferences, whose neuronal correlates are observed in the neurophysiology of the human brain. A correlation-based model of neuronal signal processing in the central auditory system is proposed in which temporal sensations (pitch, timbre, loudness, duration) are represented by an internal autocorrelation representation, and spatial sensations (sound location, size, diffuseness related to envelopment) are represented by an internal interaural crosscorrelation function. Together these two internal central auditory representations account for the basic auditory qualities that are relevant for listening to music and speech in indoor performance spaces. Observed psychological and neurophysiological commonalities between auditor...

  16. Optimizing the imaging of the monkey auditory cortex: sparse vs. continuous fMRI.

    Science.gov (United States)

    Petkov, Christopher I; Kayser, Christoph; Augath, Mark; Logothetis, Nikos K

    2009-10-01

    The noninvasive imaging of the monkey auditory system with functional magnetic resonance imaging (fMRI) can bridge the gap between electrophysiological studies in monkeys and imaging studies in humans. Some of the recent imaging of monkey auditory cortical and subcortical structures relies on a technique of "sparse imaging," which was developed in human studies to sidestep the negative influence of scanner noise by adding periods of silence in between volume acquisition. Among the various aspects that have gone into the ongoing optimization of fMRI of the monkey auditory cortex, replacing the more common continuous-imaging paradigm with sparse imaging seemed to us to make the most obvious difference in the amount of activity that we could reliably obtain from awake or anesthetized animals. Here, we directly compare the sparse- and continuous-imaging paradigms in anesthetized animals. We document a strikingly greater auditory response with sparse imaging, both quantitatively and qualitatively, which includes a more expansive and robust tonotopic organization. There were instances where continuous imaging could better reveal organizational properties that sparse imaging missed, such as aspects of the hierarchical organization of auditory cortex. We consider the choice of imaging paradigm as a key component in optimizing the fMRI of the monkey auditory cortex.

  17. Non-linear laws of echoic memory and auditory change detection in humans

    Directory of Open Access Journals (Sweden)

    Takeshima Yasuyuki

    2010-07-01

    Full Text Available Abstract Background The detection of any abrupt change in the environment is important to survival. Since memory of preceding sensory conditions is necessary for detecting changes, such a change-detection system relates closely to the memory system. Here we used an auditory change-related N1 subcomponent (change-N1) of event-related brain potentials to investigate cortical mechanisms underlying change detection and echoic memory. Results Change-N1 was elicited by a simple paradigm with two tones, a standard followed by a deviant, while subjects watched a silent movie. The amplitude of change-N1 elicited by a fixed sound pressure deviance (70 dB vs. 75 dB) was negatively correlated with the logarithm of the interval between the standard sound and deviant sound (1, 10, 100, or 1000 ms), while positively correlated with the logarithm of the duration of the standard sound (25, 100, 500, or 1000 ms). The amplitude of change-N1 elicited by a deviance in sound pressure, sound frequency, and sound location was correlated with the logarithm of the magnitude of the physical difference between the standard and deviant sounds. Conclusions The present findings suggest that the temporal representation of echoic memory is non-linear and that the Weber-Fechner law holds for the automatic cortical response to sound changes within a suprathreshold range. Since the present results show that the behavior of echoic memory can be understood through change-N1, change-N1 would be a useful tool to investigate memory systems.
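    The reported logarithmic (Weber-Fechner-like) dependence amounts to fitting response amplitude as a linear function of the logarithm of the stimulus parameter. A minimal sketch of such a fit, using made-up, clearly hypothetical amplitude values rather than the study's data:

```python
import numpy as np

# Hypothetical illustration only: change-N1 amplitude modelled as a + b*log(deviance)
deviance_db = np.array([1, 2, 4, 8, 16])             # physical difference (assumed)
amplitude_uv = np.array([0.8, 1.5, 2.1, 2.9, 3.6])   # hypothetical response amplitudes
slope, intercept = np.polyfit(np.log(deviance_db), amplitude_uv, 1)
print(round(slope, 2), round(intercept, 2))
# A good linear fit of amplitude against log(deviance) is what the law predicts.
```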

  18. The encoding of auditory objects in auditory cortex: insights from magnetoencephalography.

    Science.gov (United States)

    Simon, Jonathan Z

    2015-02-01

    Auditory objects, like their visual counterparts, are perceptually defined constructs, but nevertheless must arise from underlying neural circuitry. Using magnetoencephalography (MEG) recordings of the neural responses of human subjects listening to complex auditory scenes, we review studies that demonstrate that auditory objects are indeed neurally represented in auditory cortex. The studies use neural responses obtained from different experiments in which subjects selectively listen to one of two competing auditory streams embedded in a variety of auditory scenes. The auditory streams overlap spatially and often spectrally. In particular, the studies demonstrate that selective attentional gain does not act globally on the entire auditory scene, but rather acts differentially on the separate auditory streams. This stream-based attentional gain is then used as a tool to individually analyze the different neural representations of the competing auditory streams. The neural representation of the attended stream, located in posterior auditory cortex, dominates the neural responses. Critically, when the intensities of the attended and background streams are separately varied over a wide intensity range, the neural representation of the attended speech adapts only to the intensity of that speaker, irrespective of the intensity of the background speaker. This demonstrates object-level intensity gain control in addition to the above object-level selective attentional gain. Overall, these results indicate that concurrently streaming auditory objects, even if spectrally overlapping and not resolvable at the auditory periphery, are individually neurally encoded in auditory cortex, as separate objects. Copyright © 2014 Elsevier B.V. All rights reserved.

  19. Acoustic fMRI noise : Linear time-invariant system model

    NARCIS (Netherlands)

    Sierra, Carlos V. Rizzo; Versluis, Maarten J.; Hoogduin, Johannes M.; Duifhuis, Hendrikus (Diek)

    2008-01-01

    Functional magnetic resonance imaging (fMRI) enables sites of brain activation to be localized in human subjects. For auditory system studies, however, the acoustic noise generated by the scanner tends to interfere with the assessments of this activation. Understanding and modeling fMRI acoustic noise…

  20. The Effect of a Noise Reducing Test Accommodation on Elementary Students with Learning Disabilities

    Science.gov (United States)

    Smith, Gregory W.; Riccomini, Paul J.

    2013-01-01

    Researchers in the fields of cognitive psychology and education have been studying the negative effects of noise on human performance for almost a century. A new empirical study that builds upon past relevant research on (1) test accommodations and (2) auditory distraction and academic performance was conducted with elementary age students.…

  1. Through-wall imaging and characterization of human activity using ultrawideband (UWB) random noise radar

    Science.gov (United States)

    Lai, Chieh-Ping; Narayanan, Ram M.

    2005-05-01

    Recent terrorist activities and law-enforcement hostage situations underscore the need for effective through-wall imaging. Current building interior imaging systems are based on short-pulse waveforms, which require specially designed antennas to subdue unwanted ringing. In addition, periodically transmitted pulses of energy are easily recognizable by the intelligent adversary who may employ appropriate countermeasures to confound detection. A coherent polarimetric random noise radar architecture is being developed based on UWB technology and software defined radio, which has great promise in its ability to covertly image obscured targets. The main advantages of the random noise radar lie in two aspects: first, the random noise waveform has an ideal "thumbtack" ambiguity function, i.e., its down range and cross range resolution can be separately controlled, thus providing unambiguous high resolution imaging at any distance; second, the random noise waveform is inherently low probability of intercept (LPI) and low probability of detection (LPD), i.e., it is immune from detection, jamming, and interference. Thus, it is an ideal candidate sensor for covert imaging of obscured regions in hostile environments. The coherency in the system can be exploited to field a fully-polarimetric system that can take advantage of polarization features in target recognition. Moving personnel can also be detected using Doppler processing. Simulation studies are used to analyze backscattered signals from the walls, and humans and other targets behind the walls. Real-time data processing shows human activity behind the wall and human target tracking. The high resolution provides excellent multipath and clutter rejection.
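    The "thumbtack" ambiguity argument can be illustrated with a toy matched-filter example: cross-correlating a received echo with the transmitted random-noise waveform yields a single sharp peak at the true delay. The sample count, delay, echo attenuation, and receiver noise level below are illustrative assumptions, not parameters of the described radar.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4096
tx = rng.standard_normal(n)                 # transmitted random-noise waveform
true_delay = 250                            # echo delay, in samples (assumed)
rx = np.zeros(n)
rx[true_delay:] = 0.5 * tx[:n - true_delay] # attenuated, delayed echo
rx += 0.5 * rng.standard_normal(n)          # additive receiver noise

xcorr = np.correlate(rx, tx, mode="full")   # matched filter
lags = np.arange(-n + 1, n)
print(lags[np.argmax(xcorr)])               # 250: recovered echo delay
```

    Because the noise waveform's autocorrelation is essentially a single spike, the correlation peak is unambiguous, which is the property the record credits for high-resolution, long-range imaging.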

  2. The specificity of stimulus-specific adaptation in human auditory cortex increases with repeated exposure to the adapting stimulus.

    Science.gov (United States)

    Briley, Paul M; Krumbholz, Katrin

    2013-12-01

    The neural response to a sensory stimulus tends to be more strongly reduced when the stimulus is preceded by the same, rather than a different, stimulus. This stimulus-specific adaptation (SSA) is ubiquitous across the senses. In hearing, SSA has been suggested to play a role in change detection as indexed by the mismatch negativity. This study sought to test whether SSA, measured in human auditory cortex, is caused by neural fatigue (reduction in neural responsiveness) or by sharpening of neural tuning to the adapting stimulus. For that, we measured event-related cortical potentials to pairs of pure tones with varying frequency separation and stimulus onset asynchrony (SOA). This enabled us to examine the relationship between the degree of specificity of adaptation as a function of frequency separation and the rate of decay of adaptation with increasing SOA. Using simulations of tonotopic neuron populations, we demonstrate that the fatigue model predicts independence of adaptation specificity and decay rate, whereas the sharpening model predicts interdependence. The data showed independence and thus supported the fatigue model. In a second experiment, we measured adaptation specificity after multiple presentations of the adapting stimulus. The multiple adapters produced more adaptation overall, but the effect was more specific to the adapting frequency. Within the context of the fatigue model, the observed increase in adaptation specificity could be explained by assuming a 2.5-fold increase in neural frequency selectivity. We discuss possible bottom-up and top-down mechanisms of this effect.

  3. Mutism and auditory agnosia due to bilateral insular damage--role of the insula in human communication.

    Science.gov (United States)

    Habib, M; Daquin, G; Milandre, L; Royere, M L; Rey, M; Lanteri, A; Salamon, G; Khalil, R

    1995-03-01

    We report a case of transient mutism and persistent auditory agnosia due to two successive ischemic infarcts mainly involving the insular cortex on both hemispheres. During the 'mutic' period, which lasted about 1 month, the patient did not respond to any auditory stimuli and made no effort to communicate. On follow-up examinations, language competences had re-appeared almost intact, but a massive auditory agnosia for non-verbal sounds was observed. From close inspection of the lesion site, as determined with brain magnetic resonance imaging, and from a study of auditory evoked potentials, it is concluded that bilateral insular damage was crucial to both expressive and receptive components of the syndrome. The role of the insula in verbal and non-verbal communication is discussed in the light of anatomical descriptions of the pattern of connectivity of the insular cortex.

  4. Speech distortion measure based on auditory properties

    Institute of Scientific and Technical Information of China (English)

    CHEN Guo; HU Xiulin; ZHANG Yunyu; ZHU Yaoting

    2000-01-01

    The Perceptual Spectrum Distortion (PSD) measure, based on the properties of human hearing, is presented for measuring speech distortion. The PSD measure calculates the speech distortion distance by simulating human auditory properties and converting the short-time speech power spectrum into an auditory perceptual spectrum. Preliminary simulation experiments comparing it with the Itakura measure have been performed. The results show that the PSD is a preferable speech distortion measure and is more consistent with subjective assessment of speech quality.

  5. Dynamic movement of N100m current sources in auditory evoked fields: comparison of ipsilateral versus contralateral responses in human auditory cortex.

    Science.gov (United States)

    Jin, Chun Yu; Ozaki, Isamu; Suzuki, Yasumi; Baba, Masayuki; Hashimoto, Isao

    2008-04-01

    We recorded auditory evoked magnetic fields (AEFs) to monaural 400 Hz tone bursts and investigated spatio-temporal features of the N100m current sources in both hemispheres during the period before the N100m reaches its peak strength and 5 ms after the peak. Hemispheric asymmetry was evaluated with an asymmetry index based on the ratio of N100m peak dipole strength between the right and left hemispheres for stimulation of either ear. The asymmetry indices showed right-hemispheric dominance for left ear stimulation but no hemispheric dominance for right ear stimulation. The current sources of the N100m in both hemispheres in response to monaural 400 Hz stimulation moved in an anterolateral direction along the long axis of Heschl's gyri before the N100m reached its peak strength; the ipsilateral N100m sources were located slightly posterior to the contralateral ones. The onset and peak latencies of the right-hemispheric N100m in response to right ear stimulation were shorter than those of the left-hemispheric N100m to left ear stimulation. The traveling distance of the right-hemispheric N100m sources following right ear stimulation was longer than that of the left-hemispheric ones following left ear stimulation. These results suggest a right-dominant hemispheric asymmetry in pure-tone processing.

  6. Binaural auditory beats affect vigilance performance and mood.

    Science.gov (United States)

    Lane, J D; Kasian, S J; Owens, J E; Marsh, G R

    1998-01-01

    When two tones of slightly different frequency are presented separately to the left and right ears the listener perceives a single tone that varies in amplitude at a frequency equal to the frequency difference between the two tones, a perceptual phenomenon known as the binaural auditory beat. Anecdotal reports suggest that binaural auditory beats within the electroencephalograph frequency range can entrain EEG activity and may affect states of consciousness, although few scientific studies have been published. This study compared the effects of binaural auditory beats in the EEG beta and EEG theta/delta frequency ranges on mood and on performance of a vigilance task to investigate their effects on subjective and objective measures of arousal. Participants (n = 29) performed a 30-min visual vigilance task on three different days while listening to pink noise containing simple tones or binaural beats either in the beta range (16 and 24 Hz) or the theta/delta range (1.5 and 4 Hz). However, participants were kept blind to the presence of binaural beats to control expectation effects. Presentation of beta-frequency binaural beats yielded more correct target detections and fewer false alarms than presentation of theta/delta frequency binaural beats. In addition, the beta-frequency beats were associated with less negative mood. Results suggest that the presentation of binaural auditory beats can affect psychomotor performance and mood. This technology may have applications for the control of attention and arousal and the enhancement of human performance.
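    A binaural-beat stimulus of the kind described is simply a dichotic pair of tones differing by the beat frequency. A minimal generation sketch follows; the carrier, beat rate, duration, and level are illustrative choices, not the study's exact stimuli, which were embedded in pink noise.

```python
import numpy as np

def binaural_beat(f_carrier=400.0, f_beat=4.0, duration=5.0, fs=44100):
    """Stereo signal whose left/right carriers differ by f_beat Hz.

    Presented dichotically (one tone per ear), listeners report a single tone
    beating at f_beat, even though neither ear receives an amplitude-modulated
    signal. Parameter values here are illustrative assumptions.
    """
    t = np.arange(int(duration * fs)) / fs
    left = np.sin(2 * np.pi * f_carrier * t)
    right = np.sin(2 * np.pi * (f_carrier + f_beat) * t)
    return np.stack([left, right], axis=1) * 0.3   # shape (n_samples, 2), scaled

stereo = binaural_beat()
print(stereo.shape)   # (220500, 2)
```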

  7. EFFECTS OF NOISE EXPOSURE ON CLICK- AND TONE BURST-EVOKED AUDITORY BRAINSTEM RESPONSE IN RATS

    Institute of Scientific and Technical Information of China (English)

    佘晓俊; 崔博; 吴铭权; 马强; 刘洪涛

    2011-01-01

    Objective: To study the effects of noise exposure on click-evoked auditory brainstem responses (C-ABR) and tone burst-evoked ABR (Tb-ABR) in rats, so as to provide a reference for the application of Tb-ABR to hearing evaluation. Methods: Fourteen adult SD rats were randomly divided into a noise exposure group and a control group, with seven rats (fourteen ears) per group. The noise exposure group was exposed to white noise [100 dB (SPL), 6 h/day, for 12 consecutive weeks]. C-ABR and Tb-ABR (stimulus frequencies 2, 4, 8, 16 and 32 kHz) were measured 24 hours after the noise exposure ended, and the ABR results of the two groups were compared statistically. Results: In normal rats the peak latencies of Tb-ABR lengthened as the stimulus frequency decreased from 32 kHz to 2 kHz, and the latencies of waves I, II and IV of Tb-ABR at every frequency were significantly longer than those of C-ABR (P < 0.01). Hearing was most sensitive, and the threshold lowest, at 16 kHz. In the noise exposure group the hearing loss was most severe at 8 kHz (an 11 dB threshold elevation); the thresholds of both C-ABR and Tb-ABR were higher than in the control group, while the peak and interpeak latencies of Tb-ABR showed no obvious change for any of the tone-burst stimuli. Conclusion: Normal rats differ in hearing threshold and sensitivity across frequencies, and noise affects hearing differently at each frequency; Tb-ABR reflects the frequency specificity of hearing loss better than C-ABR.

  8. Mismatch responses in the awake rat: evidence from epidural recordings of auditory cortical fields.

    Directory of Open Access Journals (Sweden)

    Fabienne Jung

    Full Text Available Detecting sudden environmental changes is crucial for the survival of humans and animals. In the human auditory system the mismatch negativity (MMN), a component of auditory evoked potentials (AEPs), reflects the violation of predictable stimulus regularities, established by the previous auditory sequence. Given the considerable potentiality of the MMN for clinical applications, establishing valid animal models that allow for detailed investigation of its neurophysiological mechanisms is important. Rodent studies, so far almost exclusively under anesthesia, have not provided decisive evidence whether an MMN analogue exists in rats. This may be due to several factors, including the effect of anesthesia. We therefore used epidural recordings in awake black hooded rats, from two auditory cortical areas in both hemispheres, and with bandpass filtered noise stimuli that were optimized in frequency and duration for eliciting MMN in rats. Using a classical oddball paradigm with frequency deviants, we detected mismatch responses at all four electrodes in primary and secondary auditory cortex, with morphological and functional properties similar to those known in humans, i.e., large amplitude biphasic differences that increased in amplitude with decreasing deviant probability. These mismatch responses significantly diminished in a control condition that removed the predictive context while controlling for presentation rate of the deviants. While our present study does not allow for disambiguating precisely the relative contribution of adaptation and prediction error processing to the observed mismatch responses, it demonstrates that MMN-like potentials can be obtained in awake and unrestrained rats.
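    The classical oddball paradigm referred to here can be sketched as a random sequence of frequent standards and rare frequency deviants; the probabilities and frequencies below are illustrative, not the stimulus values used in the study.

```python
import numpy as np

rng = np.random.default_rng(0)
n_trials, p_deviant = 400, 0.1
is_deviant = rng.random(n_trials) < p_deviant        # rare deviant trials
freqs_hz = np.where(is_deviant, 1200, 1000)          # deviant vs. standard frequency
print(freqs_hz[:20], round(is_deviant.mean(), 3))    # sequence excerpt and deviant rate
```

    Averaging responses to deviants and standards separately, and lowering p_deviant, is what produces the larger mismatch responses reported above.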

  9. A general auditory bias for handling speaker variability in speech? Evidence in humans and songbirds

    NARCIS (Netherlands)

    Kriengwatana, B.; Escudero, P.; Kerkhoven, A.H.; ten Cate, C.

    2015-01-01

    Different speakers produce the same speech sound differently, yet listeners are still able to reliably identify the speech sound. How listeners can adjust their perception to compensate for speaker differences in speech, and whether these compensatory processes are unique only to humans, is still

  10. BDNF Increases Survival and Neuronal Differentiation of Human Neural Precursor Cells Cotransplanted with a Nanofiber Gel to the Auditory Nerve in a Rat Model of Neuronal Damage

    Directory of Open Access Journals (Sweden)

    Yu Jiao

    2014-01-01

    Full Text Available Objectives. To study possible nerve regeneration of a damaged auditory nerve by the use of stem cell transplantation. Methods. We transplanted HNPCs to the rat AN trunk by the internal auditory meatus (IAM). Furthermore, we studied if addition of BDNF affects survival and phenotypic differentiation of the grafted HNPCs. A bioactive nanofiber gel (PA gel), in selected groups mixed with BDNF, was applied close to the implanted cells. Before transplantation, all rats had been deafened by a round window niche application of β-bungarotoxin. This neurotoxin causes a selective toxic destruction of the AN while keeping the hair cells intact. Results. Overall, HNPCs survived well for up to six weeks in all groups. However, transplants receiving the BDNF-containing PA gel demonstrated significantly higher numbers of HNPCs and neuronal differentiation. At six weeks, a majority of the HNPCs had migrated into the brain stem and differentiated. Differentiated human cells as well as neurites were observed in the vicinity of the cochlear nucleus. Conclusion. Our results indicate that human neural precursor cell (HNPC) integration with host tissue benefits from additional brain derived neurotrophic factor (BDNF) treatment and that these cells appear to be good candidates for further regenerative studies on the auditory nerve (AN).

  11. Enhanced peripheral visual processing in congenitally deaf humans is supported by multiple brain regions, including primary auditory cortex.

    Science.gov (United States)

    Scott, Gregory D; Karns, Christina M; Dow, Mark W; Stevens, Courtney; Neville, Helen J

    2014-01-01

    Brain reorganization associated with altered sensory experience clarifies the critical role of neuroplasticity in development. An example is enhanced peripheral visual processing associated with congenital deafness, but the neural systems supporting this have not been fully characterized. A gap in our understanding of deafness-enhanced peripheral vision is the contribution of primary auditory cortex. Previous studies of auditory cortex that use anatomical normalization across participants were limited by inter-subject variability of Heschl's gyrus. In addition to reorganized auditory cortex (cross-modal plasticity), a second gap in our understanding is the contribution of altered modality-specific cortices (visual intramodal plasticity in this case), as well as supramodal and multisensory cortices, especially when target detection is required across contrasts. Here we address these gaps by comparing fMRI signal change for peripheral vs. perifoveal visual stimulation (11-15° vs. 2-7°) in congenitally deaf and hearing participants in a blocked experimental design with two analytical approaches: a Heschl's gyrus region of interest analysis and a whole brain analysis. Our results using individually-defined primary auditory cortex (Heschl's gyrus) indicate that fMRI signal change for more peripheral stimuli was greater than perifoveal in deaf but not in hearing participants. Whole-brain analyses revealed differences between deaf and hearing participants for peripheral vs. perifoveal visual processing in extrastriate visual cortex including primary auditory cortex, MT+/V5, superior-temporal auditory, and multisensory and/or supramodal regions, such as posterior parietal cortex (PPC), frontal eye fields, anterior cingulate, and supplementary eye fields. Overall, these data demonstrate the contribution of neuroplasticity in multiple systems including primary auditory cortex, supramodal, and multisensory regions, to altered visual processing in congenitally deaf adults.

  12. Enhanced peripheral visual processing in congenitally deaf humans is supported by multiple brain regions, including primary auditory cortex

    Directory of Open Access Journals (Sweden)

    Gregory D. Scott

    2014-03-01

    Full Text Available Brain reorganization associated with altered sensory experience clarifies the critical role of neuroplasticity in development. An example is enhanced peripheral visual processing associated with congenital deafness, but the neural systems supporting this have not been fully characterized. A gap in our understanding of deafness-enhanced peripheral vision is the contribution of primary auditory cortex. Previous studies of auditory cortex that use anatomical normalization across participants were limited by inter-subject variability of Heschl’s gyrus. In addition to reorganized auditory cortex (cross-modal plasticity), a second gap in our understanding is the contribution of altered modality-specific cortices (visual intramodal plasticity in this case), as well as supramodal and multisensory cortices, especially when target detection is required across contrasts. Here we address these gaps by comparing fMRI signal change for peripheral versus perifoveal visual stimulation (11-15° vs. 2°-7°) in congenitally deaf and hearing participants in a blocked experimental design with two analytical approaches: a Heschl’s gyrus region of interest analysis and a whole brain analysis. Our results using individually-defined primary auditory cortex (Heschl’s gyrus) indicate that fMRI signal change for more peripheral stimuli was greater than perifoveal in deaf but not in hearing participants. Whole-brain analyses revealed differences between deaf and hearing participants for peripheral versus perifoveal visual processing in extrastriate visual cortex including primary auditory cortex, MT+/V5, superior-temporal auditory and multisensory and/or supramodal regions, such as posterior parietal cortex, frontal eye fields, anterior cingulate, and supplementary eye fields. Overall, these data demonstrate the contribution of neuroplasticity in multiple systems including primary auditory cortex, supramodal and multisensory regions, to altered visual processing in

  13. Computational spectrotemporal auditory model with applications to acoustical information processing

    Science.gov (United States)

    Chi, Tai-Shih

    A computational spectrotemporal auditory model based on neurophysiological findings in early auditory and cortical stages is described. The model provides a unified multiresolution representation of the spectral and temporal features of sound likely critical in the perception of timbre. Several types of complex stimuli are used to demonstrate the spectrotemporal information preserved by the model. As shown by these examples, this two-stage model reflects the apparent progressive loss of temporal dynamics along the auditory pathway, from rapid phase-locking (several kHz in the auditory nerve), to moderate rates of synchrony (several hundred Hz in the midbrain), to much lower rates of modulation in the cortex (around 30 Hz). To complete this model, several projection-based reconstruction algorithms are implemented to resynthesize the sound from the representations with reduced dynamics. One particular application of this model is to assess speech intelligibility. The spectro-temporal modulation transfer functions (MTFs) of this model are investigated and shown to be consistent with the salient trends in the human MTFs (derived from human detection thresholds), which exhibit a lowpass function with respect to both spectral and temporal dimensions, with 50% bandwidths of about 16 Hz and 2 cycles/octave. Therefore, the model is used to demonstrate the potential relevance of these MTFs to the assessment of speech intelligibility in noise and reverberant conditions. Another useful feature is the phase singularity that emerges in the scale space generated by this multiscale auditory model. The singularity is shown to have certain robust properties and to carry crucial information about the spectral profile. This claim is justified by perceptually tolerable resynthesized sounds from the nonconvex singularity set. In addition, the singularity set is demonstrated to encode the pitch and formants at different scales. These properties make the singularity set very suitable for traditional

  14. Complexity and multifractality of neuronal noise in mouse and human hippocampal epileptiform dynamics

    Science.gov (United States)

    Serletis, Demitre; Bardakjian, Berj L.; Valiante, Taufik A.; Carlen, Peter L.

    2012-10-01

    Fractal methods offer an invaluable means of investigating turbulent nonlinearity in non-stationary biomedical recordings from the brain. Here, we investigate properties of complexity (i.e. the correlation dimension, maximum Lyapunov exponent, 1/f^γ noise and approximate entropy) and multifractality in background neuronal noise-like activity underlying epileptiform transitions recorded at the intracellular and local network scales from two in vitro models: the whole-intact mouse hippocampus and lesional human hippocampal slices. Our results show evidence for reduced dynamical complexity and multifractal signal features following transition to the ictal epileptiform state. These findings suggest that pathological breakdown in multifractal complexity coincides with loss of signal variability or heterogeneity, consistent with an unhealthy ictal state that is far from the equilibrium of turbulent yet healthy fractal dynamics in the brain. Thus, it appears that background noise-like activity successfully captures complex and multifractal signal features that may, at least in part, be used to classify and identify brain state transitions in the healthy and epileptic brain, offering potential promise for therapeutic neuromodulatory strategies for afflicted patients suffering from epilepsy and other related neurological disorders. This paper is based on chapter 5 of Serletis (2010, PhD dissertation, Department of Physiology, Institute of Biomaterials and Biomedical Engineering, University of Toronto).
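    Of the complexity measures listed, approximate entropy is easy to state compactly: it compares how often patterns of length m and m+1 recur within a tolerance r. Below is a minimal sketch using common default parameters, not the settings of the cited study.

```python
import numpy as np

def approximate_entropy(x, m=2, r_factor=0.2):
    """Approximate entropy ApEn(m, r) of a 1-D series (Pincus' definition).

    Lower values indicate more regular (less complex) dynamics. m and
    r = r_factor * std(x) are common defaults, assumed for illustration.
    """
    x = np.asarray(x, dtype=float)
    r = r_factor * np.std(x)

    def phi(m):
        n = len(x) - m + 1
        templates = np.array([x[i:i + m] for i in range(n)])
        # Chebyshev distance between every pair of templates
        dist = np.max(np.abs(templates[:, None, :] - templates[None, :, :]), axis=2)
        counts = np.mean(dist <= r, axis=1)   # self-matches included, as in ApEn
        return np.mean(np.log(counts))

    return phi(m) - phi(m + 1)

rng = np.random.default_rng(0)
print(approximate_entropy(rng.standard_normal(500)))         # irregular -> higher
print(approximate_entropy(np.sin(np.linspace(0, 40, 500))))  # regular  -> lower
```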

  15. Complexity in neurobiology: perspectives from the study of noise in human motor systems.

    Science.gov (United States)

    Balasubramaniam, Ramesh; Torre, Kjerstin

    2012-01-01

    This article serves as an introduction to the themed special issue on "Complex Systems in Neurobiology." The study of complexity in neurobiology has been sensitive to the stochastic processes that dominate the micro-level architecture of neurobiological systems and the deterministic processes that govern the macroscopic behavior of these systems. A large body of research has traversed these scales of interest, seeking to determine how noise at one spatial or temporal scale influences the activity of the system at another scale. In introducing this special issue, we pay special attention to the history of inquiry in complex systems and why scientists have tended to favor linear, causally driven, reductionist approaches in neurobiology. We follow this with an elaboration of how an alternative approach might be formulated. To illustrate our position on how the sciences of complexity and the study of noise can inform neurobiology, we use three systematic examples from the study of human motor control and learning: 1) phase transitions in bimanual coordination; 2) balance, intermittency, and discontinuous control; and 3) sensorimotor synchronization and timing. Using these examples, and showing that noise is adaptively utilized by the nervous system, we make the case for studying complexity from the perspective of understanding macroscopic stability in biological systems by focusing on component processes at extended spatial and temporal scales. This special issue continues this theme with contributions in topics as diverse as neural network models, physical biology, motor learning, and statistical physics.

  16. Human Gravity-Gradient Noise in Interferometric Gravitational-Wave Detectors

    CERN Document Server

    Thorne, K S; Thorne, Kip S.; Winstein, Carolee J.

    1999-01-01

    Among all forms of routine human activity, the one which produces the strongest gravity-gradient noise in interferometric gravitational-wave detectors (e.g. LIGO) is the beginning and end of weight transfer from one foot to the other during walking. The beginning and end of weight transfer entail sharp changes (timescale tau ~ 20 msec) in the horizontal jerk (first time derivative of acceleration) of a person's center of mass. These jerk pairs, occurring about twice per second, will produce gravity-gradient noise in LIGO in the frequency band 2.5 Hz <~ f <~ 1/(2 tau) ~= 25 Hz, with a spectrum sqrt{S_h(f)} given by a sum over all the walking people, where r_i is the distance of the i'th person from the nearest interferometer test mass; we estimate this formula to be accurate to within a factor of 3. To ensure that this noise is negligible in advanced LIGO interferometers, people should be prevented from coming nearer to the test masses than r ~= 10 m. An r ~= 10 m exclusion zone will also reduce to an acceptable level gravity ...

  17. Effects of white noise on event-related potentials in somatosensory Go/No-go paradigms.

    Science.gov (United States)

    Ohbayashi, Wakana; Kakigi, Ryusuke; Nakata, Hiroki

    2017-09-06

    Exposure to auditory white noise has been shown to facilitate human cognitive function. This phenomenon is termed stochastic resonance, and a moderate amount of auditory noise has been suggested to benefit individuals in hypodopaminergic states. The present study investigated the effects of white noise on the N140 and P300 components of event-related potentials in somatosensory Go/No-go paradigms. A Go or No-go stimulus was presented to the second or fifth digit of the left hand, respectively, at the same probability. Participants performed somatosensory Go/No-go paradigms while listening to white noise at three different levels (45, 55, and 65 dB conditions). The peak amplitudes of Go-P300 and No-go-P300 in ERP waveforms were significantly larger under the 55 dB condition than under the 45 and 65 dB conditions. White noise did not affect the peak latency of N140 or P300, or the peak amplitude of N140. Behavioral data for the reaction time, SD of reaction time, and error rates showed no effect of white noise. This is the first event-related potential study to show that exposure to auditory white noise at 55 dB enhanced the amplitude of P300 during Go/No-go paradigms, reflecting changes in the neural activation of response execution and inhibition processing.
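    The stochastic resonance effect invoked here, where an intermediate amount of noise helps most, can be demonstrated with a toy threshold detector recovering a subthreshold periodic signal. The signal amplitude, threshold, and noise levels below are illustrative assumptions unrelated to the ERP experiment.

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 1000
t = np.arange(0, 10, 1 / fs)
signal = 0.8 * np.sin(2 * np.pi * 5.0 * t)         # stays below the threshold of 1.0

def recovery_score(noise_sd):
    noisy = signal + noise_sd * rng.standard_normal(len(t))
    detector_output = (noisy > 1.0).astype(float)  # 1 whenever the input exceeds threshold
    # correlation between the detector output and the hidden subthreshold signal
    return np.corrcoef(detector_output, signal)[0, 1]

for sd in (0.15, 0.5, 3.0):
    print(sd, round(recovery_score(sd), 3))
# Too little noise -> almost no threshold crossings; too much -> crossings everywhere;
# an intermediate level yields the highest correlation with the hidden signal.
```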

  18. Relationship between Sympathetic Skin Responses and Auditory Hypersensitivity to Different Auditory Stimuli.

    Science.gov (United States)

    Kato, Fumi; Iwanaga, Ryoichiro; Chono, Mami; Fujihara, Saori; Tokunaga, Akiko; Murata, Jun; Tanaka, Koji; Nakane, Hideyuki; Tanaka, Goro

    2014-07-01

    [Purpose] Auditory hypersensitivity has been widely reported in patients with autism spectrum disorders. However, the neurological background of auditory hypersensitivity is currently not clear. The present study examined the relationship between sympathetic nervous system responses and auditory hypersensitivity induced by different types of auditory stimuli. [Methods] We exposed 20 healthy young adults to six different types of auditory stimuli. The amounts of palmar sweating resulting from the auditory stimuli were compared between groups with (hypersensitive) and without (non-hypersensitive) auditory hypersensitivity. [Results] Although no group × type of stimulus × first stimulus interaction was observed for the extent of reaction, significant type of stimulus × first stimulus interaction was noted for the extent of reaction. For an 80 dB-6,000 Hz stimulus, the trends for palmar sweating differed between the groups. For the first stimulus, the variance became larger in the hypersensitive group than in the non-hypersensitive group. [Conclusion] Subjects who regularly felt excessive reactions to auditory stimuli tended to have excessive sympathetic responses to repeated loud noises compared with subjects who did not feel excessive reactions. People with auditory hypersensitivity may be classified into several subtypes depending on their reaction patterns to auditory stimuli.

  19. Auditory and extra-auditory effects of occupational exposure to noise and vibration

    Directory of Open Access Journals (Sweden)

    Márcia Fernandes

    2002-10-01

    compact hydraulic excavators. The 73 participants underwent an interview, otoscopy, and pure-tone audiometry. Regarding general health, group 2 workers, exposed to whole-body vibration, presented the highest number of complaints. Results: All the participants from group 1 use hearing protectors and 11% of them complained about tinnitus. Not all workers from group 2 use hearing protectors and 17% of them reported tinnitus. However, group 1 workers, exposed to hand-arm vibration, showed the highest percentage of abnormal audiograms. Conclusion: This study revealed a series of weaknesses in the health surveillance of these populations and indicated the need for the implementation of preventive programs focusing on their exposures to noise and vibration.

  20. Auditory filters at low-frequencies

    DEFF Research Database (Denmark)

    Orellana, Carlos Andrés Jurado; Pedersen, Christian Sejer; Møller, Henrik

    2009-01-01

    Prediction and assessment of low-frequency noise problems requires information about the auditory filter characteristics at low-frequencies. Unfortunately, data at low-frequencies is scarce and practically no results have been published for frequencies below 100 Hz. Extrapolation of ERB results … -ear transfer function), the asymmetry of the auditory filter changed from steeper high-frequency slopes at 1000 Hz to steeper low-frequency slopes below 100 Hz. Increasing steepness at low-frequencies of the middle-ear high-pass filter is thought to cause this effect. The dynamic range of the auditory filter
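    For reference, the ERB values whose extrapolation is questioned here are usually computed from the widely used Glasberg & Moore (1990) fit; the small sketch below shows that standard formula (not data from this record) and makes the low-frequency extrapolation issue concrete.

```python
def erb_glasberg_moore(f_hz):
    """Equivalent rectangular bandwidth (Hz) of the auditory filter at centre
    frequency f_hz, from the Glasberg & Moore (1990) fit. The fit was derived
    for roughly 100 Hz - 10 kHz; the record above cautions against naively
    extrapolating it below 100 Hz.
    """
    return 24.7 * (4.37 * f_hz / 1000.0 + 1.0)

for f in (50, 100, 1000, 4000):
    print(f, round(erb_glasberg_moore(f), 1))
# 50 Hz -> ~30 Hz ERB if naively extrapolated; 1000 Hz -> ~133 Hz ERB
```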

  1. Auditory short-term memory in the primate auditory cortex.

    Science.gov (United States)

    Scott, Brian H; Mishkin, Mortimer

    2016-06-01

    Sounds are fleeting, and assembling the sequence of inputs at the ear into a coherent percept requires auditory memory across various time scales. Auditory short-term memory comprises at least two components: an active 'working memory' bolstered by rehearsal, and a sensory trace that may be passively retained. Working memory relies on representations recalled from long-term memory, and their rehearsal may require phonological mechanisms unique to humans. The sensory component, passive short-term memory (pSTM), is tractable to study in nonhuman primates, whose brain architecture and behavioral repertoire are comparable to our own. This review discusses recent advances in the behavioral and neurophysiological study of auditory memory with a focus on single-unit recordings from macaque monkeys performing delayed-match-to-sample (DMS) tasks. Monkeys appear to employ pSTM to solve these tasks, as evidenced by the impact of interfering stimuli on memory performance. In several regards, pSTM in monkeys resembles pitch memory in humans, and may engage similar neural mechanisms. Neural correlates of DMS performance have been observed throughout the auditory and prefrontal cortex, defining a network of areas supporting auditory STM with parallels to that supporting visual STM. These correlates include persistent neural firing, or a suppression of firing, during the delay period of the memory task, as well as suppression or (less commonly) enhancement of sensory responses when a sound is repeated as a 'match' stimulus. Auditory STM is supported by a distributed temporo-frontal network in which sensitivity to stimulus history is an intrinsic feature of auditory processing. This article is part of a Special Issue entitled SI: Auditory working memory.

  2. Osteocyte apoptosis and absence of bone remodeling in human auditory ossicles and scleral ossicles of lower vertebrates: a mere coincidence or linked processes?

    Science.gov (United States)

    Palumbo, Carla; Cavani, Francesco; Sena, Paola; Benincasa, Marta; Ferretti, Marzia

    2012-03-01

    Considering the pivotal role as bone mechanosensors ascribed to osteocytes in bone adaptation to mechanical strains, the present study analyzed whether a correlation exists between osteocyte apoptosis and bone remodeling in peculiar bones, such as human auditory ossicles and scleral ossicles of lower vertebrates, which have been shown to undergo substantial osteocyte death and trivial or no bone turnover after cessation of growth. The investigation was performed with a morphological approach under LM (by means of an in situ end-labeling technique) and TEM. The results show that a large amount of osteocyte apoptosis takes place in both auditory and scleral ossicles after they reach their final size. Additionally, no morphological signs of bone remodeling were observed. These facts suggest that (1) bone remodeling is not necessarily triggered by osteocyte death, at least in these ossicles, and (2) bone remodeling does not need to mechanically adapt auditory and scleral ossicles since they appear to be continuously subjected to stereotyped stresses and strains; on the contrary, during the resorption phase, bone remodeling might severely impair the mechanical resistance of extremely small bony segments. Thus, osteocyte apoptosis could represent a programmed process devoted to stabilizing, when needed, bone structure and mechanical resistance.

  3. Survival of human embryonic stem cells implanted in the guinea pig auditory epithelium

    Science.gov (United States)

    Young Lee, Min; Hackelberg, Sandra; Green, Kari L.; Lunghamer, Kelly G.; Kurioka, Takaomi; Loomis, Benjamin R.; Swiderski, Donald L.; Duncan, R. Keith; Raphael, Yehoash

    2017-01-01

    Hair cells in the mature cochlea cannot spontaneously regenerate. One potential approach for restoring hair cells is stem cell therapy. However, when cells are transplanted into scala media (SM) of the cochlea, they promptly die due to the high potassium concentration. We previously described a method for conditioning the SM to make it more hospitable to implanted cells and showed that HeLa cells could survive for up to a week using this method. Here, we evaluated the survival of human embryonic stem cells (hESC) constitutively expressing GFP (H9 Cre-LoxP) in deaf guinea pig cochleae that were pre-conditioned to reduce potassium levels. GFP-positive cells could be detected in the cochlea for at least 7 days after the injection. The cells appeared spherical or irregularly shaped, and some were aggregated. Flushing SM with sodium caprate prior to transplantation resulted in a lower proportion of stem cells expressing the pluripotency marker Oct3/4 and increased cell survival. The data demonstrate that conditioning procedures aimed at transiently reducing the concentration of potassium in the SM facilitate survival of hESCs for at least one week. During this time window, additional procedures can be applied to initiate the differentiation of the implanted hESCs into new hair cells. PMID:28387239

  4. Long Term Memory for Noise: Evidence of Robust Encoding of Very Short Temporal Acoustic Patterns.

    Science.gov (United States)

    Viswanathan, Jayalakshmi; Rémy, Florence; Bacon-Macé, Nadège; Thorpe, Simon J

    2016-01-01

    Recent research has demonstrated that humans are able to implicitly encode and retain repeating patterns in meaningless auditory noise. Our study aimed at testing the robustness of long-term implicit recognition memory for these learned patterns. Participants performed a cyclic/non-cyclic discrimination task, during which they were presented with either 1-s cyclic noises (CNs) (the two halves of the noise were identical) or 1-s plain random noises (Ns). Among CNs and Ns presented once, target CNs were implicitly presented multiple times within a block, and implicit recognition of these target CNs was tested 4 weeks later using a similar cyclic/non-cyclic discrimination task. Furthermore, robustness of implicit recognition memory was tested by presenting participants with looped (shifting the origin) and scrambled (chopping sounds into 10- and 20-ms bits before shuffling) versions of the target CNs. We found that participants had robust implicit recognition memory for learned noise patterns after 4 weeks, right from the first presentation. Additionally, this memory was remarkably resistant to acoustic transformations, such as looping and scrambling of the sounds. Finally, implicit recognition of sounds was dependent on participants' discrimination performance during learning. Our findings suggest that meaningless temporal features as short as 10 ms can be implicitly stored in long-term auditory memory. Moreover, successful encoding and storage of such fine features may vary between participants, possibly depending on individual attention and auditory discrimination abilities. Significance Statement: Meaningless auditory patterns could be implicitly encoded and stored in long-term memory. Acoustic transformations of learned meaningless patterns could be implicitly recognized after 4 weeks. Implicit long-term memories can be formed for meaningless auditory features as short as 10 ms. Successful encoding and long-term implicit recognition of meaningless patterns may vary between participants.

  5. Physiological activation of the human cerebral cortex during auditory perception and speech revealed by regional increases in cerebral blood flow

    DEFF Research Database (Denmark)

    Lassen, N A; Friberg, L

    1988-01-01

    Studies of the physiological activation of the human cerebral cortex, performed by measuring regional cerebral blood flow (CBF) after intracarotid Xenon-133 injection, are reviewed with emphasis on tests involving auditory perception and speech, an approach allowing Wernicke's and Broca's areas and their contralateral homologues to be visualized in vivo. The completely atraumatic tomographic CBF

  6. Bimodal audio-visual training enhances auditory adaptation process.

    Science.gov (United States)

    Kawase, Tetsuaki; Sakamoto, Shuichi; Hori, Yoko; Maki, Atsuko; Suzuki, Yôiti; Kobayashi, Toshimitsu

    2009-09-23

    Effects of auditory training with bimodal audio-visual stimuli on monomodal aural speech intelligibility were examined in individuals with normal hearing using highly degraded noise-vocoded speech sounds. A visual cue presented simultaneously with the auditory stimuli during the training session significantly improved auditory speech intelligibility, not only for words used in the training session but also for untrained words, compared with training using auditory stimuli alone. Visual information is generally considered to complement the insufficient speech information conveyed by the auditory system during audio-visual speech perception. However, the present results reveal another beneficial effect of audio-visual training: the visual cue enhances the auditory adaptation process to degraded new speech sounds that differ from those presented during bimodal training.

  7. Development of a central auditory test battery for adults.

    NARCIS (Netherlands)

    Neijenhuis, C.A.M.; Stollman, M.H.P.; Snik, A.F.M.; Broek, P. van den

    2001-01-01

    There is little standardized test material in Dutch to document central auditory processing disorders (CAPDs). Therefore, a new central auditory test battery was composed and standardized for use with adult populations and older children. The test battery comprised seven tests (words in noise, filte

  9. Auditory Hallucination

    Directory of Open Access Journals (Sweden)

    MohammadReza Rajabi

    2003-09-01

    Full Text Available Auditory hallucination, or paracusia, is a form of hallucination that involves perceiving sounds without an auditory stimulus. A common form is hearing one or more talking voices, which is associated with psychotic disorders such as schizophrenia or mania. Hallucination itself is the most common form of perceiving a wrong stimulus or, better put, of perceiving an absent stimulus. Here we discuss four definitions of hallucination: 1. perceiving a stimulus without the presence of any subject; 2. hallucination proper, that is, false perceptions that are not falsifications of a real perception, although they manifest as a new subject and occur along with, and synchronously with, a real perception; 3. hallucination as an out-of-body perception that has no accordance with a real subject; 4. in a stricter sense, hallucinations defined as perceptions in a conscious and awake state, in the absence of external stimuli, which have the qualities of real perception in that they are vivid, substantial, and located in external objective space. These definitions are discussed here in detail.

  10. Frequency band-importance functions for auditory and auditory-visual speech recognition

    Science.gov (United States)

    Grant, Ken W.

    2005-04-01

    In many everyday listening environments, speech communication involves the integration of both acoustic and visual speech cues. This is especially true in noisy and reverberant environments where the speech signal is highly degraded, or when the listener has a hearing impairment. Understanding the mechanisms involved in auditory-visual integration is a primary interest of this work. Of particular interest is whether listeners are able to allocate their attention to various frequency regions of the speech signal differently under auditory-visual conditions and auditory-alone conditions. For auditory speech recognition, the most important frequency regions tend to be around 1500-3000 Hz, corresponding roughly to important acoustic cues for place of articulation. The purpose of this study is to determine the most important frequency region under auditory-visual speech conditions. Frequency band-importance functions for auditory and auditory-visual conditions were obtained by having subjects identify speech tokens under conditions where the speech-to-noise ratio of different parts of the speech spectrum is independently and randomly varied on every trial. Point biserial correlations were computed for each separate spectral region and the normalized correlations are interpreted as weights indicating the importance of each region. Relations among frequency-importance functions for auditory and auditory-visual conditions will be discussed.
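
    A rough illustration of the correlational approach described above can be sketched in a few lines of Python (assuming NumPy and SciPy are available): point-biserial correlations between per-band signal-to-noise ratios and trial-by-trial correctness are normalized into band-importance weights. The data, the number of bands and the weighting details below are hypothetical, not those of the actual study.

      import numpy as np
      from scipy.stats import pointbiserialr

      def band_importance(band_snr_db, correct):
          """Point-biserial correlation between each band's SNR and response
          correctness, normalized to sum to 1 (illustrative weighting only)."""
          band_snr_db = np.asarray(band_snr_db, float)   # shape: (n_trials, n_bands)
          correct = np.asarray(correct, int)             # shape: (n_trials,), 0/1
          r = np.array([pointbiserialr(correct, band_snr_db[:, b])[0]
                        for b in range(band_snr_db.shape[1])])
          r = np.clip(r, 0.0, None)                      # treat negative weights as 0
          return r / r.sum() if r.sum() > 0 else r

      # toy example: 200 trials, 5 spectral bands with random SNR perturbations;
      # band 2 (standing in for ~1500-3000 Hz) is assumed to drive correctness
      rng = np.random.default_rng(0)
      snr = rng.uniform(-12, 12, size=(200, 5))
      p_correct = 1 / (1 + np.exp(-0.3 * snr[:, 2]))
      correct = rng.random(200) < p_correct
      print(band_importance(snr, correct))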

  11. Auditory Masking Effects on Speech Fluency in Apraxia of Speech and Aphasia: Comparison to Altered Auditory Feedback

    Science.gov (United States)

    Jacks, Adam; Haley, Katarina L.

    2015-01-01

    Purpose: To study the effects of masked auditory feedback (MAF) on speech fluency in adults with aphasia and/or apraxia of speech (APH/AOS). We hypothesized that adults with AOS would increase speech fluency when speaking with noise. Altered auditory feedback (AAF; i.e., delayed/frequency-shifted feedback) was included as a control condition not…

  12. Large-Scale Analysis of Auditory Segregation Behavior Crowdsourced via a Smartphone App.

    Science.gov (United States)

    Teki, Sundeep; Kumar, Sukhbinder; Griffiths, Timothy D

    2016-01-01

    The human auditory system is adept at detecting sound sources of interest from a complex mixture of several other simultaneous sounds. The ability to selectively attend to the speech of one speaker whilst ignoring other speakers and background noise is of vital biological significance-the capacity to make sense of complex 'auditory scenes' is significantly impaired in aging populations as well as those with hearing loss. We investigated this problem by designing a synthetic signal, termed the 'stochastic figure-ground' stimulus that captures essential aspects of complex sounds in the natural environment. Previously, we showed that under controlled laboratory conditions, young listeners sampled from the university subject pool (n = 10) performed very well in detecting targets embedded in the stochastic figure-ground signal. Here, we presented a modified version of this cocktail party paradigm as a 'game' featured in a smartphone app (The Great Brain Experiment) and obtained data from a large population with diverse demographical patterns (n = 5148). Despite differences in paradigms and experimental settings, the observed target-detection performance by users of the app was robust and consistent with our previous results from the psychophysical study. Our results highlight the potential use of smartphone apps in capturing robust large-scale auditory behavioral data from normal healthy volunteers, which can also be extended to study auditory deficits in clinical populations with hearing impairments and central auditory disorders.
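
    For readers unfamiliar with the stimulus, the Python sketch below generates a crude stochastic figure-ground-like signal: a sequence of brief chords of random tones in which, for part of the sequence, a fixed set of "figure" tones repeats across chords. All parameters (tone counts, durations, frequency range) are illustrative guesses, not the published stimulus specification.

      import numpy as np

      def sfg_stimulus(fs=16000, n_chords=40, chord_dur=0.05,
                       n_bg=10, n_fig=4, fig_start=20, fig_len=10, seed=0):
          """Rough sketch of a stochastic figure-ground-like signal: each chord
          contains random background tones; during the 'figure', a fixed set of
          tones repeats across successive chords (all parameters illustrative)."""
          rng = np.random.default_rng(seed)
          freqs = np.geomspace(200, 7000, 60)            # candidate tone frequencies
          fig_freqs = rng.choice(freqs, n_fig, replace=False)
          t = np.arange(int(fs * chord_dur)) / fs
          chords = []
          for c in range(n_chords):
              components = list(rng.choice(freqs, n_bg, replace=False))
              if fig_start <= c < fig_start + fig_len:   # coherent figure present
                  components += list(fig_freqs)
              chord = sum(np.sin(2 * np.pi * f * t) for f in components)
              chords.append(chord / len(components))
          return np.concatenate(chords)

      signal = sfg_stimulus()
      print(signal.shape)   # about 2 s of audio at 16 kHz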

  13. Theory of Auditory Thresholds in Primates

    Science.gov (United States)

    Harrison, Michael J.

    2001-03-01

    The influence of thermal pressure fluctuations at the tympanic membrane has previously been investigated as a possible determinant of the threshold of hearing in humans (L.J. Sivian and S.D. White, J. Acoust. Soc. Am. 4, 288 (1933)). More recent work has focussed more precisely on the relation between statistical mechanics and sensory signal processing by biological means in creatures' brains (W. Bialek, in "Physics of Biological Systems: from molecules to species", H. Flyvberg et al. (Eds), p. 252; Springer 1997). Clinical data on the frequency dependence of hearing thresholds in humans and other primates (W.C. Stebbins, "The Acoustic Sense of Animals", Harvard 1983) have long been available. I have derived an expression for the frequency dependence of hearing thresholds in primates, including humans, by first calculating the frequency dependence of thermal pressure fluctuations at eardrums from damped normal modes excited in model ear canals of given simple geometry. I then show that most of the features of the clinical data are directly related to the frequency dependence of the ratio of the thermal noise pressure arising from outside the masking bandwidth to that arising from within it, which signals must dominate in order to be sensed. The higher intensity of threshold signals in primates smaller than humans, which is clinically observed over much but not all of the human auditory spectrum, is shown to arise from their smaller meatus dimensions.
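
    One geometric ingredient of such a model is easy to sketch: treating the ear canal as a tube open at the entrance and closed at the eardrum places its resonances at odd quarter-wavelength multiples, so a shorter canal pushes the modes to higher frequencies. The Python snippet below computes these idealized mode frequencies for assumed canal lengths; it is a toy illustration of the geometry only, not the paper's fluctuation derivation.

      import numpy as np

      def canal_mode_frequencies(length_m, c=343.0, n_modes=3):
          """Resonances of an idealized ear canal modeled as a tube open at the
          entrance and closed at the eardrum: f_n = (2n - 1) * c / (4 * L)."""
          n = np.arange(1, n_modes + 1)
          return (2 * n - 1) * c / (4.0 * length_m)

      # illustrative (assumed) canal lengths: adult human vs. a smaller primate
      for label, L in [("human, ~25 mm", 0.025), ("smaller primate, ~15 mm", 0.015)]:
          print(label, np.round(canal_mode_frequencies(L), 0), "Hz")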

  14. Fractal EEG analysis with Higuchi's algorithm of low-frequency noise exposition on humans

    Science.gov (United States)

    Panuszka, Ryszard; Damijan, Zbigniew; Kasprzak, Cezary

    2004-05-01

    The authors used methods based on fractal analysis of the EEG signal to assess the influence of a low-frequency sound field on human brain electro-potentials. The relations between LFN (low-frequency noise) and changes in the fractal dimension of the EEG signal were measured with stimulation tones. Three types of LFN stimuli were presented, each with a specified dominant frequency and sound-pressure level (7 Hz at 120 dB, 18 Hz at 120 dB, and 40 Hz at 110 dB). A standard EEG signal was recorded before, during, and after the subject's 35-min exposure to LFN. Higuchi's algorithm was applied to compute the fractal dimension of the EEG signal. The experiments show an influence of LFN on the complexity of the EEG signal as quantified by Higuchi's algorithm: an increase in the mean value of Higuchi's fractal dimension was observed during exposure to LFN.
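
    Higuchi's algorithm itself is compact enough to sketch. The Python function below implements the standard curve-length formulation; the kmax value and the white-noise sanity check are illustrative choices, not parameters taken from the study.

      import numpy as np

      def higuchi_fd(x, kmax=10):
          """Higuchi's fractal dimension of a 1-D signal (standard algorithm)."""
          x = np.asarray(x, float)
          N = len(x)
          mean_lengths = []
          for k in range(1, kmax + 1):
              Lmk = []
              for m in range(k):
                  n_max = (N - m - 1) // k
                  if n_max < 1:
                      continue
                  idx = m + np.arange(n_max + 1) * k
                  length = np.abs(np.diff(x[idx])).sum()
                  # normalized curve length at scale k for starting offset m
                  Lmk.append(length * (N - 1) / (n_max * k) / k)
              mean_lengths.append(np.mean(Lmk))
          k_vals = np.arange(1, kmax + 1)
          slope, _ = np.polyfit(np.log(1.0 / k_vals), np.log(mean_lengths), 1)
          return slope

      # sanity check: white noise should give a dimension close to 2
      rng = np.random.default_rng(1)
      print(higuchi_fd(rng.standard_normal(2000)))   # ~2.0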

  15. The discovery of human auditory-motor entrainment and its role in the development of neurologic music therapy.

    Science.gov (United States)

    Thaut, Michael H

    2015-01-01

    The discovery of rhythmic auditory-motor entrainment in clinical populations was a historical breakthrough in demonstrating for the first time a neurological mechanism linking music to retraining brain and behavioral functions. Early pilot studies from this research center were followed up by a systematic line of research studying rhythmic auditory stimulation on motor therapies for stroke, Parkinson's disease, traumatic brain injury, cerebral palsy, and other movement disorders. The comprehensive effects on improving multiple aspects of motor control established the first neuroscience-based clinical method in music, which became the bedrock for the later development of neurologic music therapy. The discovery of entrainment fundamentally shifted and extended the view of the therapeutic properties of music from a psychosocially dominated view to a view using the structural elements of music to retrain motor control, speech and language function, and cognitive functions such as attention and memory. © 2015 Elsevier B.V. All rights reserved.

  16. Nicotine, Auditory Sensory Memory, and sustained Attention in a Human Ketamine Model of Schizophrenia: Moderating Influence of a Hallucinatory Trait

    Science.gov (United States)

    Knott, Verner; Shah, Dhrasti; Millar, Anne; McIntosh, Judy; Fisher, Derek; Blais, Crystal; Ilivitsky, Vadim

    2012-01-01

    Background: The procognitive actions of the nicotinic acetylcholine receptor (nAChR) agonist nicotine are believed, in part, to motivate the excessive cigarette smoking in schizophrenia, a disorder associated with deficits in multiple cognitive domains, including low-level auditory sensory processes and higher-order attention-dependent operations. Objectives: As N-methyl-d-aspartate receptor (NMDAR) hypofunction has been shown to contribute to these cognitive impairments, the primary aims of this healthy volunteer study were to: (a) to shed light on the separate and interactive roles of nAChR and NMDAR systems in the modulation of auditory sensory memory (and sustained attention), as indexed by the auditory event-related brain potential – mismatch negativity (MMN), and (b) to examine how these effects are moderated by a predisposition to auditory hallucinations/delusions (HD). Methods: In a randomized, double-blind, placebo-controlled design involving a low intravenous dose of ketamine (0.04 mg/kg) and a 4 mg dose of nicotine gum, MMN, and performance on a rapid visual information processing (RVIP) task of sustained attention were examined in 24 healthy controls psychometrically stratified as being lower (L-HD, n = 12) or higher (H-HD) for HD propensity. Results: Ketamine significantly slowed MMN, and reduced MMN in H-HD, with amplitude attenuation being blocked by the co-administration of nicotine. Nicotine significantly enhanced response speed [reaction time (RT)] and accuracy (increased % hits and d′ and reduced false alarms) on the RVIP, with improved performance accuracy being prevented when nicotine was administered with ketamine. Both % hits and d′, as well as RT were poorer in H-HD (vs. L-HD) and while hit rate and d′ was increased by nicotine in H-HD, RT was slowed by ketamine in L-HD. Conclusions: Nicotine alleviated ketamine-induced sensory memory impairment and improved attention, particularly in individuals prone to HD. PMID:23060793
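
    The RVIP accuracy measures quoted above (% hits, false alarms, d′) follow standard signal-detection conventions. Purely as background, the Python sketch below shows the conventional d′ computation with a common correction for extreme rates; it is not the study's exact analysis pipeline.

      from scipy.stats import norm

      def d_prime(hits, misses, false_alarms, correct_rejections):
          """d' = Z(hit rate) - Z(false-alarm rate), with a 1/(2N) correction
          applied to rates of exactly 0 or 1 (conventional formula)."""
          n_sig = hits + misses
          n_noise = false_alarms + correct_rejections
          hr = min(max(hits / n_sig, 0.5 / n_sig), 1 - 0.5 / n_sig)
          far = min(max(false_alarms / n_noise, 0.5 / n_noise), 1 - 0.5 / n_noise)
          return norm.ppf(hr) - norm.ppf(far)

      # hypothetical hit/false-alarm counts for an RVIP-style block
      print(round(d_prime(42, 8, 6, 44), 2))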

  17. Nicotine, auditory sensory memory and attention in a human ketamine model of schizophrenia: moderating influence of a hallucinatory trait

    Directory of Open Access Journals (Sweden)

    Verner eKnott

    2012-09-01

    Full Text Available Background: The procognitive actions of the nicotinic acetylcholine receptor (nAChR) agonist nicotine are believed, in part, to motivate the excessive cigarette smoking in schizophrenia, a disorder associated with deficits in multiple cognitive domains, including low-level auditory sensory processes and higher-order attention-dependent operations. Objectives: As N-methyl-D-aspartate receptor (NMDAR) hypofunction has been shown to contribute to these cognitive impairments, the primary aims of this healthy volunteer study were: (a) to shed light on the separate and interactive roles of nAChR and NMDAR systems in the modulation of auditory sensory memory (and sustained attention), as indexed by the auditory event-related brain potential (ERP) – mismatch negativity (MMN), and (b) to examine how these effects are moderated by a predisposition to auditory hallucinations/delusions (HD). Methods: In a randomized, double-blind, placebo-controlled design involving a low intravenous dose of ketamine (0.04 mg/kg) and a 4 mg dose of nicotine gum, MMN and performance on a rapid visual information processing (RVIP) task of sustained attention were examined in 24 healthy controls psychometrically stratified as being lower (L-HD, n = 12) or higher (H-HD) for HD propensity. Results: Ketamine significantly slowed MMN, and reduced MMN in H-HD, with amplitude attenuation being blocked by the co-administration of nicotine. Nicotine significantly enhanced response speed (reaction time) and accuracy (increased % hits and d′ and reduced false alarms) on the RVIP, with improved performance accuracy being prevented when nicotine was administered with ketamine. Both % hits and d′, as well as reaction time, were poorer in H-HD (vs. L-HD), and while hit rate and d′ were increased by nicotine in H-HD, reaction time was slowed by ketamine in L-HD. Conclusions: Nicotine alleviated ketamine-induced sensory memory impairments and improved attention, particularly in individuals prone to HD.

  18. Age-Associated Reduction of Asymmetry in Human Central Auditory Function: A 1H-Magnetic Resonance Spectroscopy Study

    Directory of Open Access Journals (Sweden)

    Xianming Chen

    2013-01-01

    Full Text Available The aim of this study was to investigate the effects of age on hemispheric asymmetry in the auditory cortex after pure tone stimulation. Ten young and 8 older healthy volunteers took part in this study. Two-dimensional multivoxel 1H-magnetic resonance spectroscopy scans were performed before and after stimulation. The ratios of N-acetylaspartate (NAA), glutamate/glutamine (Glx), and γ-amino butyric acid (GABA) to creatine (Cr) were determined and compared between the two groups. The distribution of metabolites between the left and right auditory cortex was also determined. Before stimulation, left and right side NAA/Cr and right side GABA/Cr were significantly lower, whereas right side Glx/Cr was significantly higher in the older group compared with the young group. After stimulation, left and right side NAA/Cr and GABA/Cr were significantly lower, whereas left side Glx/Cr was significantly higher in the older group compared with the young group. There was obvious asymmetry in right side Glx/Cr and left side GABA/Cr after stimulation in the young group, but not in the older group. In summary, there is marked hemispheric asymmetry in auditory cortical metabolites following pure tone stimulation in young, but not older adults. This reduced asymmetry in older adults may at least in part underlie the speech perception difficulties/presbycusis experienced by aging adults.

  19. Maturation of visual and auditory temporal processing in school-aged children

    OpenAIRE

    Owens, Daniel; Dawes, Piers; Bishop, Dorothy V.M.

    2008-01-01

    Purpose: To examine the development of sensitivity to auditory and visual temporal processes in children and the association with standardized measures of auditory processing and communication. Methods: Normative data on tests of visual and auditory processing were collected on 18 adults and 98 children aged 6-10 years. Auditory processes included detection of pitch from temporal cues using iterated rippled noise and frequency modulation detection at 2 Hz, 40 Hz, and 240 Hz. Visual process...

  20. Research on road traffic noise and human health in India: Review of literature from 1991 to current

    Directory of Open Access Journals (Sweden)

    Dibyendu Banerjee

    2012-01-01

    Full Text Available This article reviews the literature on research conducted during the last two decades on traffic noise impacts in India. Road traffic noise studies in India are few and restricted to the metropolitan areas. Over the years, the studies have focused on monitoring, recording, analysis, modeling, and to some extent mapping themes. Hardly any studies address exposure-effect relationships for physiological outcomes or sleep. Most impact studies have been associated with annoyance and attitudinal surveys only. Little scientific literature exists on the effects of traffic noise on human physiology in the Indian context. This review finds that very few studies relate traffic noise to health impacts. All of them are subjective response studies, and only a small portion quantify the exposure-effect chain and model the noise index against annoyance. The reviewed papers show that road traffic noise causes annoyance to varying degrees among respondents. A generalization of impacts and a meta-analysis were not possible owing to the variability of the study designs and reported outputs.

  1. Simultaneous human detection and ranging using a millimeter-wave radar system transmitting wideband noise with an embedded tone

    Science.gov (United States)

    Gallagher, Kyle A.; Narayanan, Ram M.

    2012-06-01

    This paper describes a millimeter-wave (mm-wave) radar system that has been constructed to simultaneously range and detect humans at distances up to 82 meters. This is done by utilizing a composite signal consisting of two waveforms: a wideband noise waveform and a single tone. These waveforms are summed together and transmitted simultaneously. Matched filtering of the received and transmitted noise signals is performed to range targets with high resolution, while the received single tone signal is used for Doppler analysis. The Doppler measurements are used to distinguish between different human movements using characteristic micro-Doppler signals. Using hardware and software filters allows for simultaneous processing of both the noise and Doppler waveforms. Our measurements establish the mm-wave system's ability to detect humans up to and beyond 80 meters and distinguish between different human movements. In this paper, we describe the architecture of the multi-modal mm-wave radar system and present results on human target ranging and Doppler characterization of human movements. In addition, data are presented showing the differences in reflected signal strength between a human with and without a concealed metallic object.
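
    The ranging step described above amounts to matched filtering the received signal against the transmitted noise replica and converting the lag of the correlation peak into a two-way propagation delay. The baseband Python sketch below illustrates the idea with made-up sampling and range values; it is not a model of the actual mm-wave hardware chain.

      import numpy as np

      def range_from_noise_waveform(tx, rx, fs, c=3e8):
          """Estimate target range by cross-correlating the received signal
          with the transmitted wideband-noise replica (matched filtering)."""
          corr = np.correlate(rx, tx, mode="full")
          lag = np.argmax(np.abs(corr)) - (len(tx) - 1)   # delay in samples
          return (lag / fs) * c / 2.0                      # two-way propagation

      # toy example: 500 MHz-sampled noise waveform, target at 80 m
      fs = 500e6
      tx = np.random.default_rng(2).standard_normal(4096)
      true_delay = int(round(2 * 80.0 / 3e8 * fs))         # ~267 samples
      rx = np.concatenate([np.zeros(true_delay), tx]) + \
           0.5 * np.random.default_rng(3).standard_normal(4096 + true_delay)
      print(round(range_from_noise_waveform(tx, rx, fs), 1), "m")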

  2. Hidden Hearing Loss and Computational Models of the Auditory Pathway: Predicting Speech Intelligibility Decline

    Science.gov (United States)

    2016-11-28

    Title: Hidden Hearing Loss and Computational Models of the Auditory Pathway: Predicting Speech Intelligibility Decline Christopher J. Smalt...to utilize computational models of the auditory periphery and auditory cortex to study the effect of low spontaneous rate ANF loss on the cortical...clinical hearing thresholds is difficulty in understanding speech in noise. Recent animal studies have shown that noise exposure causes selective loss

  3. Frequently updated noise threat maps created with use of supercomputing grid

    Directory of Open Access Journals (Sweden)

    Szczodrak Maciej

    2014-09-01

    Full Text Available Innovative supercomputing grid services devoted to noise threat evaluation are presented. The services described in this paper concern two issues: the first is related to noise mapping, while the second focuses on assessment of the noise dose and its influence on the human hearing system. The discussed services were developed within the PL-Grid Plus Infrastructure, which brings together Polish academic supercomputer centers. Selected experimental results achieved with the proposed services are presented. The assessment of environmental noise threats includes creation of noise maps using either offline or online data acquired through a grid of monitoring stations. A concept of estimating the source model parameters from the measured sound levels, for the purpose of creating frequently updated noise maps, is presented. Connecting the noise mapping grid service with a distributed sensor network enables noise maps to be updated automatically for a specified time period. Moreover, a unique attribute of the developed software is the estimation of the auditory effects evoked by exposure to noise. The estimation method uses a modified psychoacoustic model of hearing and is based on the calculated noise level values and the given exposure period. Potential use scenarios of the grid services for research or educational purposes are introduced. Presenting the predicted hearing threshold shift caused by exposure to excessive noise can raise public awareness of noise threats.
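
    The service's modified psychoacoustic hearing model is not reproduced here, but the general idea of relating noise level and exposure period to risk can be illustrated with a standard occupational noise-dose calculation. The Python sketch below assumes a NIOSH-style 85 dBA criterion with a 3-dB exchange rate, an assumption chosen for illustration rather than the PL-Grid model.

      def allowed_minutes(level_dba, criterion=85.0, exchange=3.0, base_min=480.0):
          """Permissible exposure duration: 85 dBA for 8 h, halved for every
          additional 3 dB (assumed NIOSH-style rule, not the grid service's model)."""
          return base_min / (2.0 ** ((level_dba - criterion) / exchange))

      def daily_dose(exposures):
          """exposures: list of (level_dBA, minutes); a dose above 100% exceeds the limit."""
          return 100.0 * sum(minutes / allowed_minutes(level) for level, minutes in exposures)

      # e.g. 4 h at 88 dBA plus 2 h at 94 dBA
      print(round(daily_dose([(88.0, 240.0), (94.0, 120.0)]), 1), "% of daily dose")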

  4. Noise-Induced Hearing Loss (NIHL Prediction in Humans Using a Modified Back Propagation Neural Network

    Directory of Open Access Journals (Sweden)

    Muhammad Zubair Rehman

    2011-01-01

    Full Text Available Noise-Induced Hearing Loss (NIHL) has become a major health problem in industrial workers due to continuous exposure to high-frequency sounds emitted by machines. In the past, several studies have been carried out to identify NIHL in industrial workers. Unfortunately, these studies neglected some important factors that directly affect hearing ability in humans. An Artificial Neural Network (ANN) provides a very effective way to predict hearing loss in humans. However, the training process for an ANN requires the designer to arbitrarily select parameters such as the network topology, initial weights and biases, learning rate, activation function, gain value of the activation function, and momentum. An improper choice of any of these parameters can result in slow convergence or even network paralysis, where the training process comes to a standstill or gets stuck at local minima. Therefore, the current study proposes a new framework using a Gradient Descent Back Propagation Neural Network model with an improved momentum value to identify the important factors that directly affect the hearing ability of industrial workers. Results from the prediction will be used to determine the environmental health hazards which affect workers' health.
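
    The momentum term at issue is the classical addition to gradient-descent weight updates. The Python sketch below shows the standard update rule on a toy quadratic loss; the study's specific improvement to the momentum value is not reproduced here.

      import numpy as np

      def sgd_momentum_step(w, grad, velocity, lr=0.05, momentum=0.8):
          """One classical gradient-descent-with-momentum update."""
          velocity = momentum * velocity - lr * grad
          return w + velocity, velocity

      # toy quadratic loss ||w||^2 to show the effect of momentum on convergence
      w, v = np.array([4.0, -3.0]), np.zeros(2)
      for step in range(100):
          grad = 2 * w                     # gradient of ||w||^2
          w, v = sgd_momentum_step(w, grad, v)
      print(np.round(w, 4))                # approaches the minimum at the origin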

  5. Modulation of auditory percepts by transcutaneous electrical stimulation.

    Science.gov (United States)

    Ueberfuhr, Margarete Anna; Braun, Amalia; Wiegrebe, Lutz; Grothe, Benedikt; Drexl, Markus

    2017-07-01

    Transcutaneous, electrical stimulation with electrodes placed on the mastoid processes represents a specific way to elicit vestibular reflexes in humans without active or passive subject movements, for which the term galvanic vestibular stimulation was coined. It has been suggested that galvanic vestibular stimulation mainly affects the vestibular periphery, but whether vestibular hair cells, vestibular afferents, or a combination of both are excited, is still a matter of debate. Galvanic vestibular stimulation has been in use since the late 18th century, but despite the long-known and well-documented effects on the vestibular system, reports of the effect of electrical stimulation on the adjacent cochlea or the ascending auditory pathway are surprisingly sparse. The present study examines the effect of transcutaneous, electrical stimulation of the human auditory periphery employing evoked and spontaneous otoacoustic emissions and several psychoacoustic measures. In particular, level growth functions of distortion product otoacoustic emissions were recorded during electrical stimulation with alternating currents (2 Hz, 1-4 mA in 1 mA-steps). In addition, the level and frequency of spontaneous otoacoustic emissions were followed before, during, and after electrical stimulation (2 Hz, 1-4 mA). To explore the effect of electrical stimulation on the retrocochlear level (i.e. on the ascending auditory pathway beyond the cochlea), psychoacoustic experiments were carried out. Specifically, participants indicated whether electrical stimulation (4 Hz, 2 and 3 mA) induced amplitude modulations of the perception of a pure tone, and of auditory illusions after presentation of either an intense, low-frequency sound (Bounce tinnitus) or a faint band-stop noise (Zwicker tone). These three psychoacoustic measures revealed significant perceived amplitude modulations during electrical stimulation in the majority of participants. However, no significant changes of evoked and

  6. Noise analysis and single-channel observations of 4 pS chloride channels in human airway epithelia.

    OpenAIRE

    Duszyk, M; French, A S; Man, S F

    1992-01-01

    Apical membranes of human airway epithelial cells have significant chloride permeability, which is reduced in cystic fibrosis (CF), causing abnormal electrochemistry and impaired mucociliary clearance. At least four types of chloride channels have been identified in these cells, but their relative roles in total permeability and CF are unclear. Noise analysis was used to measure the conductance of chloride channels in human nasal epithelial cells. The data indicate that channels with a mean c...
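
    Stationary noise analysis of this kind typically infers the unitary current from the ratio of the current variance to the mean current when the open probability is low, and the conductance from the driving force. The Python sketch below illustrates that textbook relation on simulated data chosen to land near 4 pS; the membrane and reversal potentials and the simplified estimator are assumptions, not the paper's exact analysis.

      import numpy as np

      def single_channel_conductance(current_trace, v_m, v_rev):
          """Estimate unitary conductance from stationary noise: with many
          independent channels at low open probability, i ~ variance / mean,
          and the chord conductance is i / (Vm - Vrev)."""
          i_unit = current_trace.var() / current_trace.mean()
          return i_unit / (v_m - v_rev)

      # toy simulation: 5000 channels, 0.4 pA unitary current, Po = 0.02
      rng = np.random.default_rng(6)
      n_open = rng.binomial(5000, 0.02, size=100000)     # open channels per sample
      trace = 0.4e-12 * n_open                           # total current in amperes
      gamma = single_channel_conductance(trace, v_m=-0.06, v_rev=0.04)
      print(round(abs(gamma) * 1e12, 2), "pS")           # ~4 pS at a 100 mV driving force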

  7. LANGUAGE EXPERIENCE SHAPES PROCESSING OF PITCH RELEVANT INFORMATION IN THE HUMAN BRAINSTEM AND AUDITORY CORTEX: ELECTROPHYSIOLOGICAL EVIDENCE.

    Science.gov (United States)

    Krishnan, Ananthanarayan; Gandour, Jackson T

    2014-12-01

    Pitch is a robust perceptual attribute that plays an important role in speech, language, and music. As such, it provides an analytic window to evaluate how neural activity relevant to pitch undergo transformation from early sensory to later cognitive stages of processing in a well coordinated hierarchical network that is subject to experience-dependent plasticity. We review recent evidence of language experience-dependent effects in pitch processing based on comparisons of native vs. nonnative speakers of a tonal language from electrophysiological recordings in the auditory brainstem and auditory cortex. We present evidence that shows enhanced representation of linguistically-relevant pitch dimensions or features at both the brainstem and cortical levels with a stimulus-dependent preferential activation of the right hemisphere in native speakers of a tone language. We argue that neural representation of pitch-relevant information in the brainstem and early sensory level processing in the auditory cortex is shaped by the perceptual salience of domain-specific features. While both stages of processing are shaped by language experience, neural representations are transformed and fundamentally different at each biological level of abstraction. The representation of pitch relevant information in the brainstem is more fine-grained spectrotemporally as it reflects sustained neural phase-locking to pitch relevant periodicities contained in the stimulus. In contrast, the cortical pitch relevant neural activity reflects primarily a series of transient temporal neural events synchronized to certain temporal attributes of the pitch contour. We argue that experience-dependent enhancement of pitch representation for Chinese listeners most likely reflects an interaction between higher-level cognitive processes and early sensory-level processing to improve representations of behaviorally-relevant features that contribute optimally to perception. It is our view that long

  8. Noise suppression by noise

    OpenAIRE

    Vilar, J. M. G.; Rubí Capaceti, José Miguel

    2001-01-01

    We have analyzed the interplay between an externally added noise and the intrinsic noise of systems that relax fast towards a stationary state, and found that increasing the intensity of the external noise can reduce the total noise of the system. We have established a general criterion for the appearance of this phenomenon and discussed two examples in detail.

  9. Long term memory for noise: evidence of robust encoding of very short temporal acoustic patterns.

    Directory of Open Access Journals (Sweden)

    Jayalakshmi Viswanathan

    2016-11-01

    Full Text Available Recent research has demonstrated that humans are able to implicitly encode and retain repeating patterns in meaningless auditory noise. Our study aimed at testing the robustness of long-term implicit recognition memory for these learned patterns. Participants performed a cyclic/non-cyclic discrimination task, during which they were presented with either 1-s cyclic noises (CNs) (the two halves of the noise were identical) or 1-s plain random noises (Ns). Among CNs and Ns presented once, target CNs were implicitly presented multiple times within a block, and implicit recognition of these target CNs was tested 4 weeks later using a similar cyclic/non-cyclic discrimination task. Furthermore, robustness of implicit recognition memory was tested by presenting participants with looped (shifting the origin) and scrambled (chopping sounds into 10- and 20-ms bits before shuffling) versions of the target CNs. We found that participants had robust implicit recognition memory for learned noise patterns after 4 weeks, right from the first presentation. Additionally, this memory was remarkably resistant to acoustic transformations, such as looping and scrambling of the sounds. Finally, implicit recognition of sounds was dependent on participants' discrimination performance during learning. Our findings suggest that meaningless temporal features as short as 10 ms can be implicitly stored in long-term auditory memory. Moreover, successful encoding and storage of such fine features may vary between participants, possibly depending on individual attention and auditory discrimination abilities.

  10. Interaction of speech and script in human auditory cortex: insights from neuro-imaging and effective connectivity.

    Science.gov (United States)

    van Atteveldt, Nienke; Roebroeck, Alard; Goebel, Rainer

    2009-12-01

    In addition to visual information from the face of the speaker, a less natural, but nowadays extremely important visual component of speech is its representation in script. In this review, neuro-imaging studies are examined which were aimed to understand how speech and script are associated in the adult "literate" brain. The reviewed studies focused on the role of different stimulus and task factors and effective connectivity between different brain regions. The studies will be summarized in a neural mechanism for the integration of speech and script that can serve as a basis for future studies addressing (the failure of) literacy acquisition. In this proposed mechanism, speech sound processing in auditory cortex is modulated by co-presented visual letters, depending on the congruency of the letter-sound pairs. Other factors of influence are temporal correspondence, input quality and task instruction. We present results showing that the modulation of auditory cortex is most likely mediated by feedback from heteromodal areas in the superior temporal cortex, but direct influences from visual cortex are not excluded. The influence of script on speech sound processing occurs automatically and shows extended development during reading acquisition. This review concludes with suggestions to answer currently still open questions to get closer to understanding the neural basis of normal and impaired literacy.

  11. Noise-induced hearing loss: new animal models.

    Science.gov (United States)

    Christie, Kevin W; Eberl, Daniel F

    2014-10-01

    This article presents research findings from two invertebrate model systems with potential to advance both the understanding of noise-induced hearing loss mechanisms and the development of putative therapies to reduce human noise damage. Work on sea anemone hair bundles, which resemble auditory hair cells, has revealed secretions that exhibit astonishing healing properties not only for damaged hair bundles, but also for vertebrate lateral line neuromasts. We present progress on identifying functional components of the secretions, and their mechanisms of repair. The second model, the Johnston's organ in Drosophila, is also genetically homologous to hair cells and shows noise-induced hearing loss similar to vertebrates. Drosophila offers genetic and molecular insight into noise sensitivity and pathways that can be manipulated to reduce stress and damage from noise. Using the comparative approach is a productive avenue to understanding basic mechanisms, in this case cellular responses to noise trauma. Expanding study of these systems may accelerate identification of strategies to reduce or prevent noise damage in the human ear.

  12. Effect of Electroacupuncture on Threshold Shift of Auditory Middle Latency Response in Guinea Pigs after Noise Exposure%电针对噪声所致豚鼠听皮层中潜伏期诱发电位阈移的影响

    Institute of Scientific and Technical Information of China (English)

    周庆辉; 曾兆麟; 施建蓉; 郭瑞新; 庄剑青; 李鼎

    2001-01-01

    To study the effect of electroacupuncture on the threshold shift of the auditory middle latency response (MLR) in guinea pigs after noise exposure. Methods: 18 guinea pigs were divided into three groups: a control group, an ear-area EA group and a forefoot EA group, with 6 animals in each group. Animals were exposed to noise of 105 dB SPL for 10 minutes. Electroacupuncture of points on the ear area and forefoot was administered to animals in the latter two groups during the noise exposure. The threshold of the click-evoked auditory middle latency response was recorded before noise exposure and at different times after noise exposure. Results: A temporary threshold shift (TTS) of the MLR was induced in guinea pigs after noise exposure. The TTS of the MLR in the forefoot EA group and the ear-area EA group at different times after noise exposure was smaller than that of the control group (P<0.01 or P<0.05), and recovery of the TTS in the forefoot EA group and the ear-area EA group was also earlier than in the control group. Conclusion: Electroacupuncture of points on the ear area and forefoot can reduce the hearing impairment caused by noise exposure and promote the recovery of the impaired hearing.

  13. Task engagement selectively modulates neural correlations in primary auditory cortex.

    Science.gov (United States)

    Downer, Joshua D; Niwa, Mamiko; Sutter, Mitchell L

    2015-05-13

    Noise correlations (r(noise)) between neurons can affect a neural population's discrimination capacity, even without changes in mean firing rates of neurons. r(noise), the degree to which the response variability of a pair of neurons is correlated, has been shown to change with attention with most reports showing a reduction in r(noise). However, the effect of reducing r(noise) on sensory discrimination depends on many factors, including the tuning similarity, or tuning correlation (r(tuning)), between the pair. Theoretically, reducing r(noise) should enhance sensory discrimination when the pair exhibits similar tuning, but should impair discrimination when tuning is dissimilar. We recorded from pairs of neurons in primary auditory cortex (A1) under two conditions: while rhesus macaque monkeys (Macaca mulatta) actively performed a threshold amplitude modulation (AM) detection task and while they sat passively awake. We report that, for pairs with similar AM tuning, average r(noise) in A1 decreases when the animal performs the AM detection task compared with when sitting passively. For pairs with dissimilar tuning, the average r(noise) did not significantly change between conditions. This suggests that attention-related modulation can target selective subcircuits to decorrelate noise. These results demonstrate that engagement in an auditory task enhances population coding in primary auditory cortex by selectively reducing deleterious r(noise) and leaving beneficial r(noise) intact.
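
    A common operationalization of the two quantities is sketched below in Python: r(tuning) as the correlation of the two neurons' stimulus-averaged tuning curves, and r(noise) as the correlation of their within-condition trial-to-trial residuals. The toy data and the exact normalization are illustrative conventions, not the paper's analysis code.

      import numpy as np

      def pairwise_correlations(resp_a, resp_b):
          """resp_a, resp_b: spike counts of two neurons, shape (n_stimuli, n_trials).
          Returns (r_tuning, r_noise): correlation of stimulus-averaged tuning
          curves, and correlation of within-stimulus trial-to-trial residuals."""
          resp_a, resp_b = np.asarray(resp_a, float), np.asarray(resp_b, float)
          r_tuning = np.corrcoef(resp_a.mean(1), resp_b.mean(1))[0, 1]
          # remove each condition's mean, then correlate residuals across all trials
          da = (resp_a - resp_a.mean(1, keepdims=True)).ravel()
          db = (resp_b - resp_b.mean(1, keepdims=True)).ravel()
          r_noise = np.corrcoef(da, db)[0, 1]
          return r_tuning, r_noise

      # toy example: 8 AM depths x 50 trials with a shared trial-by-trial gain noise
      rng = np.random.default_rng(4)
      tuning = np.linspace(5, 25, 8)[:, None]
      shared = rng.standard_normal((8, 50))
      a = tuning + 2.0 * shared + rng.standard_normal((8, 50))
      b = tuning + 2.0 * shared + rng.standard_normal((8, 50))
      print(pairwise_correlations(a, b))   # similar tuning, positive noise correlation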

  14. White noise and synchronization shaping the age structure of the human population

    Science.gov (United States)

    Cebrat, Stanislaw; Biecek, Przemyslaw; Bonkowska, Katarzyna; Kula, Mateusz

    2007-06-01

    We have modified the standard diploid Penna model of ageing in such a way that, instead of a threshold of defective loci resulting in the genetic death of individuals, fluctuations of the environment and "personal" fluctuations of individuals were introduced. The sum of both fluctuations describes the health status of the individual. While the environmental fluctuations are the same for all individuals in the population, the personal component is composed of fluctuations corresponding to each physiological function (gene, genetic locus). It is a widely accepted hypothesis that physiological parameters of any organism fluctuate highly nonlinearly. A transition to synchronized behavior can be a very strong diagnostic signal of a life-threatening disorder. Thus, in our model, mutations of genes change the chaotic fluctuations representing the function of a wild-type gene into synchronized signals generated by mutated genes. Genes are switched on chronologically, as in the standard Penna model. The accumulation of defective genes predicted by Medawar's theory of ageing leads to the replacement of the uncorrelated white noise corresponding to the healthy organism by the correlated signals of defective functions. As a result, we obtain an age distribution of the population corresponding to human demographic data.

  15. Sexual dimorphism of the lateral angle of the internal auditory canal and its potential for sex estimation of burned human skeletal remains.

    Science.gov (United States)

    Gonçalves, David; Thompson, Tim J U; Cunha, Eugénia

    2015-09-01

    The potential of the petrous bone for sex estimation has been recurrently investigated in the past because it is very resilient and therefore tends to preserve rather well. The sexual dimorphism of the lateral angle of the internal auditory canal was investigated in two samples of cremated Portuguese individuals in order to assess its usefulness for sex estimation in burned remains. These comprised the cremated petrous bones from fleshed cadavers (N = 54) and from dry and disarticulated bones (N = 36). Although differences between males and females were more patent in the sample of skeletons, none presented a very significant sexual dimorphism, thus precluding any attempt of sex estimation. This may have been the result of a difficult application of the method and of a differential impact of heat-induced warping which is known to be less frequent in cremains from dry skeletons. Results suggest that the lateral angle method cannot be applied to burned human skeletal remains.

  16. Auditory object salience: human cortical processing of non-biological action sounds and their acoustic signal attributes.

    Science.gov (United States)

    Lewis, James W; Talkington, William J; Tallaksen, Katherine C; Frum, Chris A

    2012-01-01

    Whether viewed or heard, an object in action can be segmented as a distinct salient event based on a number of different sensory cues. In the visual system, several low-level attributes of an image are processed along parallel hierarchies, involving intermediate stages wherein gross-level object form and/or motion features are extracted prior to stages that show greater specificity for different object categories (e.g., people, buildings, or tools). In the auditory system, though relying on a rather different set of low-level signal attributes, meaningful real-world acoustic events and "auditory objects" can also be readily distinguished from background scenes. However, the nature of the acoustic signal attributes or gross-level perceptual features that may be explicitly processed along intermediate cortical processing stages remain poorly understood. Examining mechanical and environmental action sounds, representing two distinct non-biological categories of action sources, we had participants assess the degree to which each sound was perceived as object-like versus scene-like. We re-analyzed data from two of our earlier functional magnetic resonance imaging (fMRI) task paradigms (Engel et al., 2009) and found that scene-like action sounds preferentially led to activation along several midline cortical structures, but with strong dependence on listening task demands. In contrast, bilateral foci along the superior temporal gyri (STG) showed parametrically increasing activation to action sounds rated as more "object-like," independent of sound category or task demands. Moreover, these STG regions also showed parametric sensitivity to spectral structure variations (SSVs) of the action sounds-a quantitative measure of change in entropy of the acoustic signals over time-and the right STG additionally showed parametric sensitivity to measures of mean entropy and harmonic content of the environmental sounds. Analogous to the visual system, intermediate stages of the
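
    As a loose stand-in for the entropy-based acoustic measures mentioned above, the Python sketch below computes the Shannon entropy of the normalized short-time power spectrum and its variability over time. The window settings and the exact definition are assumptions and may differ from the published SSV metric.

      import numpy as np
      from scipy.signal import spectrogram

      def spectral_entropy_profile(x, fs, nperseg=512):
          """Shannon entropy of the normalized power spectrum in successive
          time windows, plus its standard deviation over time (a rough proxy
          for variation in spectral structure)."""
          f, t, Sxx = spectrogram(x, fs=fs, nperseg=nperseg)
          p = Sxx / (Sxx.sum(axis=0, keepdims=True) + 1e-12)
          ent = -(p * np.log2(p + 1e-12)).sum(axis=0)
          return t, ent, ent.std()

      # toy comparison: a tone-like 'object' vs. broadband 'scene' noise
      fs = 16000
      t_axis = np.arange(fs) / fs
      tone = np.sin(2 * np.pi * 800 * t_axis)
      noise = np.random.default_rng(5).standard_normal(fs)
      for name, sig in [("tone", tone), ("noise", noise)]:
          _, _, variation = spectral_entropy_profile(sig, fs)
          print(name, round(variation, 3))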

  17. Auditory Imagery: Empirical Findings

    Science.gov (United States)

    Hubbard, Timothy L.

    2010-01-01

    The empirical literature on auditory imagery is reviewed. Data on (a) imagery for auditory features (pitch, timbre, loudness), (b) imagery for complex nonverbal auditory stimuli (musical contour, melody, harmony, tempo, notational audiation, environmental sounds), (c) imagery for verbal stimuli (speech, text, in dreams, interior monologue), (d)…

  18. Auditory dysfunction associated with solvent exposure

    Directory of Open Access Journals (Sweden)

    Fuente Adrian

    2013-01-01

    Full Text Available Background: A number of studies have demonstrated that solvents may induce auditory dysfunction. However, there is still little knowledge regarding the main signs and symptoms of solvent-induced hearing loss (SIHL). The aim of this research was to investigate the association between solvent exposure and adverse effects on peripheral and central auditory functioning with a comprehensive audiological test battery. Methods: Seventy-two solvent-exposed workers and 72 non-exposed workers were selected to participate in the study. The test battery comprised pure-tone audiometry (PTA), transient evoked otoacoustic emissions (TEOAE), Random Gap Detection (RGD) and the Hearing-in-Noise Test (HINT). Results: Solvent-exposed subjects presented with poorer mean test results than non-exposed subjects. A bivariate and multivariate linear regression model analysis was performed. One model for each auditory outcome (PTA, TEOAE, RGD and HINT) was independently constructed. For all of the models, solvent exposure was significantly associated with the auditory outcome. Age also appeared significantly associated with some auditory outcomes. Conclusions: This study provides further evidence of a possible adverse effect of solvents on peripheral and central auditory functioning. These effects and the utility of selected hearing tests to assess SIHL are discussed.

  19. Auditory model inversion and its application

    Institute of Scientific and Technical Information of China (English)

    ZHAO Heming; WANG Yongqi; CHEN Xueqin

    2005-01-01

    Auditory models have been applied to several aspects of speech signal processing and appear to be effective in performance. This paper presents the inverse transform of each stage of one widely used auditory model. First, it is necessary to invert the correlogram and reconstruct phase information by repeated iterations in order to obtain the auditory-nerve firing rate. The next step is to recover the negative parts of the signal by reversing the HWR (half-wave rectification). Finally, the inner hair cell/synapse model and the Gammatone filters have to be inverted. Thus the whole auditory model inversion is achieved. An application to noisy speech enhancement based on the auditory model inversion algorithm is proposed. Many experiments show that this method is effective in reducing noise, especially when the SNR of the noisy speech is low, where it is more effective than other methods. Thus the auditory model inversion method given in this paper is applicable to the field of speech enhancement.
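
    One of these stages, undoing the half-wave rectification of a narrowband channel, can be approximated very simply: rectification adds a DC term and harmonic distortion, so band-pass filtering around the channel's characteristic frequency recovers the bipolar waveform up to a gain factor. The Python sketch below illustrates this on a single sinusoidal channel; it is a simplified stand-in, not the paper's full inversion procedure.

      import numpy as np
      from scipy.signal import butter, filtfilt

      def invert_hwr(hwr_signal, fs, cf, bandwidth):
          """Approximately undo half-wave rectification for one narrowband channel
          by band-pass filtering around its characteristic frequency (cf);
          the factor 2 compensates the amplitude loss of the fundamental."""
          lo = (cf - bandwidth / 2) / (fs / 2)
          hi = (cf + bandwidth / 2) / (fs / 2)
          b, a = butter(4, [lo, hi], btype="band")
          return 2.0 * filtfilt(b, a, hwr_signal)

      # toy check on a single 1-kHz channel
      fs = 16000
      t = np.arange(fs) / fs
      x = np.sin(2 * np.pi * 1000 * t)
      hwr = np.maximum(x, 0.0)                       # half-wave rectified channel output
      x_hat = invert_hwr(hwr, fs, cf=1000, bandwidth=400)
      print(round(np.corrcoef(x[2000:-2000], x_hat[2000:-2000])[0, 1], 3))   # ~1.0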

  20. Implicit learning of predictable sound sequences modulates human brain responses at different levels of the auditory hierarchy

    Directory of Open Access Journals (Sweden)

    Françoise eLecaignard

    2015-09-01

    Full Text Available Deviant stimuli, violating regularities in a sensory environment, elicit the Mismatch Negativity (MMN), largely described in the Event-Related Potential literature. While it is widely accepted that the MMN reflects more than basic change detection, a comprehensive description of the mental processes modulating this response is still lacking. Within the framework of predictive coding, deviance processing is part of an inference process where prediction errors (the mismatch between incoming sensations and predictions established through experience) are minimized. In this view, the MMN is a measure of prediction error, which yields specific expectations regarding its modulation by various experimental factors. In particular, it predicts that the MMN should decrease as the occurrence of a deviance becomes more predictable. We conducted a passive oddball EEG study and manipulated the predictability of sound sequences by means of different temporal structures. Importantly, our design allows comparing mismatch responses elicited by predictable and unpredictable violations of a simple repetition rule and therefore departs from previous studies that investigate violations of different time-scale regularities. We observed a decrease of the MMN with predictability and, interestingly, a similar effect at earlier latencies, within 70 ms after deviance onset. Following these pre-attentive responses, a reduced P3a was measured in the case of predictable deviants. We conclude that early and late deviance responses reflect prediction errors, triggering belief updating within the auditory hierarchy. Besides, in this passive study, such perceptual inference appears to be modulated by higher-level implicit learning of sequence statistical structures. Our findings argue for a hierarchical model of auditory processing where predictive coding enables implicit extraction of environmental regularities.

  1. Impact of Educational Level on Performance on Auditory Processing Tests.

    Science.gov (United States)

    Murphy, Cristina F B; Rabelo, Camila M; Silagi, Marcela L; Mansur, Letícia L; Schochat, Eliane

    2016-01-01

    Research has demonstrated that a higher level of education is associated with better performance on cognitive tests among middle-aged and elderly people. However, the effects of education on auditory processing skills have not yet been evaluated. Previous demonstrations of sensory-cognitive interactions in the aging process indicate the potential importance of this topic. Therefore, the primary purpose of this study was to investigate the performance of middle-aged and elderly people with different levels of formal education on auditory processing tests. A total of 177 adults with no evidence of cognitive, psychological or neurological conditions took part in the research. The participants completed a series of auditory assessments, including dichotic digit, frequency pattern and speech-in-noise tests. A working memory test was also performed to investigate the extent to which auditory processing and cognitive performance were associated. The results demonstrated positive but weak correlations between years of schooling and performance on all of the tests applied. The factor "years of schooling" was also one of the best predictors of frequency pattern and speech-in-noise test performance. Additionally, performance on the working memory, frequency pattern and dichotic digit tests was also correlated, suggesting that the influence of educational level on auditory processing performance might be associated with the cognitive demand of the auditory processing tests rather than with auditory sensory aspects themselves. Longitudinal research is required to investigate the causal relationship between educational level and auditory processing skills.

  2. Cardiovascular effects of environmental noise: Research in the United Kingdom

    Directory of Open Access Journals (Sweden)

    Stephen Stansfeld

    2011-01-01

    Full Text Available Although the auditory effects of noise on humans have been established, the non-auditory effects are not so well established. The emerging links between noise and cardiovascular disease (CVD) have potentially important implications for public health and policy. In the United Kingdom (UK), noise from transport is a problem: more than half of the population is exposed to more than the recommended maximum day-time noise level, and just under three-quarters of the population live in areas where the recommended night-time noise level is exceeded. This review focuses on findings from studies conducted in the UK that examined environmental noise and cardiovascular disease. There were no statistically significant associations between road traffic noise and incident ischemic heart disease in the Caerphilly and Speedwell studies, but there was a suggestion of effects when modifying factors such as length of residence, room orientation, and window opening were taken into account. In a sample stratified by pre-existing disease, a strongly increased odds of incident ischemic heart disease for the highest annoyance category compared to the lowest was found among men without pre-existing disease (OR = 2.45, 95% CI 1.13 - 5.31), which was not found in men with pre-existing disease. In the Hypertension and Exposure to Noise near Airports (HYENA) study, night-time aircraft noise exposure (Lnight) was associated with an increased risk of hypertension in fully adjusted analyses. A 10-dB increase in aircraft noise exposure was associated with an odds ratio of 1.14 (95% CI, 1.01 - 1.29). Aircraft noise was not consistently related to raised systolic blood pressure in children in the Road Traffic and Aircraft Noise Exposure and Children's Cognition and Health (RANCH) study. There is some evidence of an association between environmental noise exposure and hypertension and ischemic heart disease in the UK studies; further studies are required to explore gender differences, the

  3. High Capacity and Resistance to Additive Noise Audio Steganography Algorithm

    Directory of Open Access Journals (Sweden)

    Haider Ismael Shahadi

    2011-09-01

    Full Text Available Steganography is the art of hiding a message in a cover signal without attracting attention. The requirements of a good steganography algorithm are security, capacity, robustness and imperceptibility. These requirements conflict with one another, so satisfying all of them together is not easy, especially with an audio cover signal, because the human auditory system (HAS) is highly sensitive to audio modification. In this paper, we propose a high-capacity audio steganography algorithm with good resistance to additive noise. The proposed algorithm is based on the wavelet packet transform and block matching. It has a capacity above 35% of the input audio file size with an acceptable signal-to-noise ratio. It is also resistant to additive Gaussian noise to about 25 dB. Furthermore, the reconstruction of the secret message does not require the original cover audio signal.
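
    The abstract above describes the algorithm only at a high level, so the following Python sketch illustrates the general idea of wavelet-packet-domain audio data hiding rather than the authors' actual method: message bits are embedded by quantizing the coefficients of one wavelet-packet subband (a simple quantization-index-modulation scheme) and recovered without the original cover signal. The wavelet, subband, frame length and quantization step are arbitrary assumptions.

```python
# Illustrative sketch (not the authors' algorithm): hiding bits in wavelet-packet
# coefficients of an audio frame via simple quantization-index modulation.
# Assumes PyWavelets (pywt) is installed; all parameters are arbitrary choices.
import numpy as np
import pywt

def embed(frame, bits, wavelet="db4", level=3, node="ddd", step=0.02):
    """Embed a bit list into one wavelet-packet subband of an audio frame."""
    wp = pywt.WaveletPacket(data=frame, wavelet=wavelet,
                            mode="periodization", maxlevel=level)
    coeffs = wp[node].data.copy()
    for i, bit in enumerate(bits):
        # Force each coefficient to an even or odd multiple of `step`.
        q = int(np.round(coeffs[i] / step))
        if q % 2 != bit:
            q += 1
        coeffs[i] = q * step
    wp[node].data = coeffs
    return wp.reconstruct(update=True)[: len(frame)]

def extract(stego_frame, n_bits, wavelet="db4", level=3, node="ddd", step=0.02):
    """Recover the embedded bits without access to the original cover signal."""
    wp = pywt.WaveletPacket(data=stego_frame, wavelet=wavelet,
                            mode="periodization", maxlevel=level)
    coeffs = wp[node].data
    return [int(np.round(c / step)) % 2 for c in coeffs[:n_bits]]

# Toy usage: 1024-sample cover frame, 16 message bits.
rng = np.random.default_rng(0)
cover = rng.standard_normal(1024) * 0.1
message = [int(b) for b in rng.integers(0, 2, 16)]
stego = embed(cover, message)
assert extract(stego, len(message)) == message
```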

  4. Sentence Syntax and Content in the Human Temporal Lobe: An fMRI Adaptation Study in Auditory and Visual Modalities

    Energy Technology Data Exchange (ETDEWEB)

    Devauchelle, A.D.; Dehaene, S.; Pallier, C. [INSERM, Gif sur Yvette (France); Devauchelle, A.D.; Dehaene, S.; Pallier, C. [CEA, DSV, I2BM, NeuroSpin, F-91191 Gif Sur Yvette (France); Devauchelle, A.D.; Pallier, C. [Univ. Paris 11, Orsay (France); Oppenheim, C. [Univ Paris 05, Ctr Hosp St Anne, Paris (France); Rizzi, L. [Univ Siena, CISCL, I-53100 Siena (Italy); Dehaene, S. [Coll France, F-75231 Paris (France)

    2009-07-01

    Priming effects have been well documented in behavioral psycholinguistics experiments: The processing of a word or a sentence is typically facilitated when it shares lexico-semantic or syntactic features with a previously encountered stimulus. Here, we used fMRI priming to investigate which brain areas show adaptation to the repetition of a sentence's content or syntax. Participants read or listened to sentences organized in series which could or could not share similar syntactic constructions and/or lexico-semantic content. The repetition of lexico-semantic content yielded adaptation in most of the temporal and frontal sentence processing network, in both the visual and the auditory modalities, even when the same lexico-semantic content was expressed using variable syntactic constructions. No fMRI adaptation effect was observed when the same syntactic construction was repeated. Yet behavioral priming was observed at both syntactic and semantic levels in a separate experiment where participants detected sentence endings. We discuss a number of possible explanations for the absence of syntactic priming in the fMRI experiments, including the possibility that the conglomerate of syntactic properties defining 'a construction' is not an actual object assembled during parsing. (authors)

  5. Human response to wind turbine noise - perception, annoyance and moderating factors

    Energy Technology Data Exchange (ETDEWEB)

    Pedersen, Eja

    2007-05-15

    The aims of this thesis were to describe and gain an understanding of how people who live in the vicinity of wind turbines are affected by wind turbine noise, and how individual, situational and visual factors, as well as sound properties, moderate the response. Methods: A cross-sectional study was carried out in a flat, mainly rural area in Sweden, with the objective of estimating the prevalence of noise annoyance and examining the dose-response relationship between A-weighted sound pressure levels (SPLs) and perception of and annoyance with wind turbine noise. Subjective responses were obtained through a questionnaire (n = 513; response rate: 68%) and outdoor, A-weighted SPLs were calculated for each respondent. To gain a deeper understanding of the observed noise annoyance, 15 people living in an area were interviewed using open-ended questions. The interviews were analysed using the comparative method of Grounded Theory (GT). An additional cross-sectional study, mainly exploring the influence of individual and situational factors, was carried out in seven areas in Sweden that differed with regard to terrain (flat or complex) and degree of urbanization (n = 765; response rate: 58%). To further explore the impact of visual factors, data from the two cross-sectional studies were tested with structural equation modelling. A proposed model of the influence of visual attitude on noise annoyance, also comprising the influence of noise level and general attitude, was tested among respondents who could see wind turbines versus respondents who could not see wind turbines from their dwelling, and respondents living in flat versus complex terrain. Dose-response relationships were found both for perception of noise and for noise annoyance in relation to A-weighted SPLs. The risk of annoyance was enhanced among respondents who could see at least one turbine from their dwelling and among those living in a rural area in comparison with a suburban area. Noise from wind turbines was

  6. Auralization of NASA N+2 Aircraft Concepts from System Noise Predictions

    Science.gov (United States)

    Rizzi, Stephen A.; Burley, Casey L.; Thomas, Russel H.

    2016-01-01

    Auralization of aircraft flyover noise provides an auditory experience that complements integrated metrics obtained from system noise predictions. Recent efforts have focused on auralization methods development, specifically the process by which source noise information obtained from semi-empirical models, computational aeroacoustic analyses, and wind tunnel and flight test data is used to simulate flyover noise at a receiver on the ground. The primary focus of this work, however, is to develop full vehicle auralizations in order to explore the distinguishing features of NASA's N+2 aircraft vis-à-vis current fleet reference vehicles for single-aisle and large twin-aisle classes. Some features can be seen in metric time histories associated with aircraft noise certification, e.g., tone-corrected perceived noise level used in the calculation of effective perceived noise level. Other features can be observed in sound quality metrics, e.g., loudness, sharpness, roughness, fluctuation strength and tone-to-noise ratio. A psychoacoustic annoyance model is employed to establish the relationship between sound quality metrics and noise certification metrics. Finally, the auralizations will serve as the basis for a separate psychoacoustic study aimed at assessing how well aircraft noise certification metrics predict human annoyance for these advanced vehicle concepts.

  7. Auditory imagery: empirical findings.

    Science.gov (United States)

    Hubbard, Timothy L

    2010-03-01

    The empirical literature on auditory imagery is reviewed. Data on (a) imagery for auditory features (pitch, timbre, loudness), (b) imagery for complex nonverbal auditory stimuli (musical contour, melody, harmony, tempo, notational audiation, environmental sounds), (c) imagery for verbal stimuli (speech, text, in dreams, interior monologue), (d) auditory imagery's relationship to perception and memory (detection, encoding, recall, mnemonic properties, phonological loop), and (e) individual differences in auditory imagery (in vividness, musical ability and experience, synesthesia, musical hallucinosis, schizophrenia, amusia) are considered. It is concluded that auditory imagery (a) preserves many structural and temporal properties of auditory stimuli, (b) can facilitate auditory discrimination but interfere with auditory detection, (c) involves many of the same brain areas as auditory perception, (d) is often but not necessarily influenced by subvocalization, (e) involves semantically interpreted information and expectancies, (f) involves depictive components and descriptive components, (g) can function as a mnemonic but is distinct from rehearsal, and (h) is related to musical ability and experience (although the mechanisms of that relationship are not clear).

  8. Amplitude and phase equalization of stimuli for click evoked auditory brainstem responses.

    Science.gov (United States)

    Beutelmann, Rainer; Laumen, Geneviève; Tollin, Daniel; Klump, Georg M

    2015-01-01

    Although auditory brainstem responses (ABRs), the sound-evoked brain activity in response to transient sounds, are routinely measured in humans and animals, there are often differences in ABR waveform morphology across studies. One possible reason may be the method of stimulus calibration. To explore this hypothesis, click-evoked ABRs were measured from seven ears in four Mongolian gerbils (Meriones unguiculatus) using three common spectrum calibration strategies: minimum phase filter, linear phase filter, and no filter. The results show significantly higher ABR amplitude and signal-to-noise ratio, and better waveform resolution with the minimum phase filtered click than with the other strategies.
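
    As a rough illustration of what minimum-phase spectral calibration of a click can look like in practice, the sketch below designs a linear-phase FIR equalizer from a hypothetical measured transducer magnitude response and converts it to minimum phase with SciPy. This is an assumption-laden example, not the procedure used in the study; the sample rate and measured response values are invented.

```python
# Hedged sketch: a minimum-phase equalization filter for click stimuli, derived
# from a hypothetical transducer magnitude response (SciPy assumed).
import numpy as np
from scipy import signal

fs = 97656                                                    # assumed DAC rate (Hz)
freqs = np.array([0, 1000, 4000, 16000, 32000, fs / 2])       # Hz
meas_db = np.array([-30.0, -3.0, 0.0, -6.0, -18.0, -40.0])    # assumed speaker output (dB)

# Desired (inverse) magnitude, clipped so weak regions are not boosted excessively.
inv_db = np.clip(-meas_db, -20.0, 20.0)
desired = 10 ** (inv_db / 20)

# scipy.signal.minimum_phase (homomorphic method) yields a filter whose magnitude
# approximates the square root of the prototype's, so design the linear-phase
# prototype on the squared desired response.
proto = signal.firwin2(1023, freqs, desired ** 2, fs=fs)
eq = signal.minimum_phase(proto, method="homomorphic")

# Pre-filter a rectangular click so its spectrum is flattened at the output.
click = np.zeros(512)
click[10:15] = 1.0
calibrated_click = signal.lfilter(eq, 1.0, click)
```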

  9. Auditory processing in fragile x syndrome.

    Science.gov (United States)

    Rotschafer, Sarah E; Razak, Khaleel A

    2014-01-01

    Fragile X syndrome (FXS) is an inherited form of intellectual disability and autism. Among other symptoms, FXS patients demonstrate abnormalities in sensory processing and communication. Clinical, behavioral, and electrophysiological studies consistently show auditory hypersensitivity in humans with FXS. Consistent with observations in humans, the Fmr1 KO mouse model of FXS also shows evidence of altered auditory processing and communication deficiencies. A well-known and commonly used phenotype in pre-clinical studies of FXS is audiogenic seizures. In addition, increased acoustic startle response is seen in the Fmr1 KO mice. In vivo electrophysiological recordings indicate hyper-excitable responses, broader frequency tuning, and abnormal spectrotemporal processing in primary auditory cortex of Fmr1 KO mice. Thus, auditory hyper-excitability is a robust, reliable, and translatable biomarker in Fmr1 KO mice. Abnormal auditory evoked responses have been used as outcome measures to test therapeutics in FXS patients. The presence of similarly abnormal responses in Fmr1 KO mice suggests that the underlying cellular mechanisms can be addressed. Sensory cortical deficits are relatively more tractable from a mechanistic perspective than more complex social behaviors that are typically studied in autism and FXS. The focus of this review is to bring together clinical, functional, and structural studies in humans with electrophysiological and behavioral studies in mice to make the case that auditory hypersensitivity provides a unique opportunity to integrate molecular, cellular, circuit level studies with behavioral outcomes in the search for therapeutics for FXS and other autism spectrum disorders.

  10. Auditory Processing in Fragile X Syndrome

    Directory of Open Access Journals (Sweden)

    Sarah E Rotschafer

    2014-02-01

    Full Text Available Fragile X syndrome (FXS) is an inherited form of intellectual disability and autism. Among other symptoms, FXS patients demonstrate abnormalities in sensory processing and communication. Clinical, behavioral and electrophysiological studies consistently show auditory hypersensitivity in humans with FXS. Consistent with observations in humans, the Fmr1 KO mouse model of FXS also shows evidence of altered auditory processing and communication deficiencies. A well-known and commonly used phenotype in pre-clinical studies of FXS is audiogenic seizures. In addition, increased acoustic startle is also seen in the Fmr1 KO mice. In vivo electrophysiological recordings indicate hyper-excitable responses, broader frequency tuning and abnormal spectrotemporal processing in primary auditory cortex of Fmr1 KO mice. Thus, auditory hyper-excitability is a robust, reliable and translatable biomarker in Fmr1 KO mice. Abnormal auditory evoked responses have been used as outcome measures to test therapeutics in FXS patients. The presence of similarly abnormal responses in Fmr1 KO mice suggests that the underlying cellular mechanisms can be addressed. Sensory cortical deficits are relatively more tractable from a mechanistic perspective than more complex social behaviors that are typically studied in autism and FXS. The focus of this review is to bring together clinical, functional and structural studies in humans with electrophysiological and behavioral studies in mice to make the case that auditory hypersensitivity provides a unique opportunity to integrate molecular, cellular, circuit level studies with behavioral outcomes in the search for therapeutics for FXS and other autism spectrum disorders.

  11. Auditory stimuli mimicking ambient sounds drive temporal "delta-brushes" in premature infants.

    Directory of Open Access Journals (Sweden)

    Mathilde Chipaux

    Full Text Available In the premature infant, somatosensory and visual stimuli trigger an immature electroencephalographic (EEG) pattern, "delta-brushes," in the corresponding sensory cortical areas. Whether auditory stimuli evoke delta-brushes in the premature auditory cortex has not been reported. Here, responses to auditory stimuli were studied in 46 premature infants without neurologic risk aged 31 to 38 postmenstrual weeks (PMW) during routine EEG recording. Stimuli consisted of either low-volume technogenic "clicks" near the background noise level of the neonatal care unit, or a human voice at conversational sound level. Stimuli were administered pseudo-randomly during quiet and active sleep. In another protocol, the cortical response to a composite stimulus ("click" and voice) was manually triggered during EEG hypoactive periods of quiet sleep. Cortical responses were analyzed by event detection, power frequency analysis and stimulus locked averaging. Before 34 PMW, both voice and "click" stimuli evoked cortical responses with similar frequency-power topographic characteristics, namely a temporal negative slow-wave and rapid oscillations similar to spontaneous delta-brushes. Responses to composite stimuli also showed a maximal frequency-power increase in temporal areas before 35 PMW. From 34 PMW the topography of responses in quiet sleep was different for "click" and voice stimuli: responses to "clicks" became diffuse but responses to voice remained limited to temporal areas. After the age of 35 PMW auditory evoked delta-brushes progressively disappeared and were replaced by a low amplitude response in the same location. Our data show that auditory stimuli mimicking ambient sounds efficiently evoke delta-brushes in temporal areas in the premature infant before 35 PMW. Along with findings in other sensory modalities (visual and somatosensory), these findings suggest that sensory driven delta-brushes represent a ubiquitous feature of the human sensory cortex.

  12. Human decision making based on variations in internal noise: an EEG study.

    Directory of Open Access Journals (Sweden)

    Sygal Amitay

    Full Text Available Perceptual decision making is prone to errors, especially near threshold. Physiological, behavioural and modeling studies suggest this is due to the intrinsic or 'internal' noise in neural systems, which derives from a mixture of bottom-up and top-down sources. We show here that internal noise can form the basis of perceptual decision making when the external signal lacks the required information for the decision. We recorded electroencephalographic (EEG) activity in listeners attempting to discriminate between identical tones. Since the acoustic signal was constant, bottom-up and top-down influences were under experimental control. We found that early cortical responses to the identical stimuli varied in global field power and topography according to the perceptual decision made, and activity preceding stimulus presentation could predict both later activity and behavioural decision. Our results suggest that activity variations induced by internal noise of both sensory and cognitive origin are sufficient to drive discrimination judgments.
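
    Global field power, the measure referred to above, is conventionally computed as the spatial standard deviation across electrodes at each time point after average-referencing. A minimal numpy sketch, with array shapes assumed for illustration:

```python
# Minimal sketch: global field power (GFP) as the spatial standard deviation
# across electrodes at each time point, for single-trial EEG epochs.
# `epochs` is assumed to be shaped (n_trials, n_channels, n_samples).
import numpy as np

def global_field_power(epochs):
    # Average-reference each trial, then take the spatial standard deviation.
    referenced = epochs - epochs.mean(axis=1, keepdims=True)
    return referenced.std(axis=1)          # shape: (n_trials, n_samples)

# Toy usage: 100 trials, 64 channels, 300 samples of synthetic data.
rng = np.random.default_rng(1)
epochs = rng.standard_normal((100, 64, 300))
gfp = global_field_power(epochs)
# Trials could then be sorted by the listener's decision and GFP compared
# between decision categories, e.g. with a permutation test.
```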

  13. Introductory guide to noise

    CSIR Research Space (South Africa)

    Ferreira, T.M

    1973-01-01

    Full Text Available The difference between sound and noise varies from one human being to another. Noise, then, is simply unwanted sound, and to understand how it can be combatted we must know more about its nature. Acceptable levels of noise are also investigated....

  14. Auditory evoked field measurement using magneto-impedance sensors

    Energy Technology Data Exchange (ETDEWEB)

    Wang, K., E-mail: o-kabou@echo.nuee.nagoya-u.ac.jp; Tajima, S.; Song, D.; Uchiyama, T. [Graduate School of Engineering, Nagoya University, Nagoya (Japan); Hamada, N.; Cai, C. [Aichi Steel Corporation, Tokai (Japan)

    2015-05-07

    The magnetic field of the human brain is extremely weak, and it is mostly measured and monitored by the magnetoencephalography method using superconducting quantum interference devices. In this study, in order to measure the weak magnetic field of the brain, we constructed a Magneto-Impedance sensor (MI sensor) system that can cancel out the background noise without any magnetic shield. Based on our previous studies of brain wave measurements, we used two MI sensors in this system for monitoring both cerebral hemispheres. In this study, we recorded and compared the auditory evoked field signals of the subject, including the N100 (or N1) and the P300 (or P3) brain waves. The results suggest that the MI sensor can be applied to brain activity measurement.

  15. Deafness in cochlear and auditory nerve disorders.

    Science.gov (United States)

    Hopkins, Kathryn

    2015-01-01

    Sensorineural hearing loss is the most common type of hearing impairment worldwide. It arises as a consequence of damage to the cochlea or auditory nerve, and several structures are often affected simultaneously. There are many causes, including genetic mutations affecting the structures of the inner ear, and environmental insults such as noise, ototoxic substances, and hypoxia. The prevalence increases dramatically with age. Clinical diagnosis is most commonly accomplished by measuring detection thresholds and comparing these to normative values to determine the degree of hearing loss. In addition to causing insensitivity to weak sounds, sensorineural hearing loss has a number of adverse perceptual consequences, including loudness recruitment, poor perception of pitch and auditory space, and difficulty understanding speech, particularly in the presence of background noise. The condition is usually incurable; treatment focuses on restoring the audibility of sounds made inaudible by hearing loss using either hearing aids or cochlear implants.

  16. Signal recognition by frogs in the presence of temporally fluctuating chorus-shaped noise.

    Science.gov (United States)

    Vélez, Alejandro; Bee, Mark A

    2010-10-01

    The background noise generated in large social aggregations of calling individuals is a potent source of auditory masking for animals that communicate acoustically. Despite similarities with the so-called "cocktail-party problem" in humans, few studies have explicitly investigated how non-human animals solve the perceptual task of separating biologically relevant acoustic signals from ambient background noise. Under certain conditions, humans experience a release from auditory masking when speech is presented in speech-like masking noise that fluctuates in amplitude. We tested the hypothesis that females of Cope's gray treefrog (Hyla chrysoscelis) experience masking release in artificial chorus noise that fluctuates in level at modulation rates characteristic of those present in ambient chorus noise. We estimated thresholds for recognizing conspecific advertisement calls (pulse rate=40-50 pulses/s) in the presence of unmodulated and sinusoidally amplitude modulated (SAM) chorus-shaped masking noise. We tested two rates of modulation (5 Hz and 45 Hz) because the sounds of frog choruses are modulated at low rates (e.g., less than 5-10 Hz), and because those of species with pulsatile signals are additionally modulated at higher rates typical of the pulse rate of calls (e.g., between 15-50 Hz). Recognition thresholds were similar in the unmodulated and 5-Hz SAM conditions, and 12 dB higher in the 45-Hz SAM condition. These results did not support the hypothesis that female gray treefrogs experience masking release in temporally fluctuating chorus-shaped noise. We discuss our results in terms of modulation masking, and hypothesize that natural amplitude fluctuations in ambient chorus noise may impair mating call perception.
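
    For readers who want to reproduce the general stimulus idea, the sketch below generates a sinusoidally amplitude-modulated masker at a chosen modulation rate. It is only a hedged approximation: the spectral shaping here is a generic band-pass filter, not the chorus-shaped spectrum used in the study, and the sampling rate, band edges and modulation depth are assumptions.

```python
# Sketch of a sinusoidally amplitude-modulated (SAM) masker (numpy/scipy assumed).
import numpy as np
from scipy import signal

def sam_noise(duration_s, mod_rate_hz, fs=44100, band=(500.0, 4000.0), depth=1.0):
    rng = np.random.default_rng(0)
    t = np.arange(int(duration_s * fs)) / fs
    noise = rng.standard_normal(t.size)
    # Rough spectral shaping: band-limit the noise (stand-in for a chorus spectrum).
    sos = signal.butter(4, band, btype="bandpass", fs=fs, output="sos")
    shaped = signal.sosfiltfilt(sos, noise)
    # Apply sinusoidal amplitude modulation at the chosen rate.
    envelope = 1.0 + depth * np.sin(2 * np.pi * mod_rate_hz * t)
    return shaped * envelope

masker_5hz = sam_noise(2.0, 5.0)    # slow, chorus-like level fluctuation
masker_45hz = sam_noise(2.0, 45.0)  # pulse-rate-like fluctuation
```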

  17. Investigation of a glottal related harmonics-to-noise ratio and spectral tilt as indicators of glottal noise in synthesized and human voice signals.

    LENUS (Irish Health Repository)

    Murphy, Peter J

    2008-03-01

    The harmonics-to-noise ratio (HNR) of the voiced speech signal has implicitly been used to infer information regarding the turbulent noise level at the glottis. However, two problems exist for inferring glottal noise attributes from the HNR of the speech wave form: (i) the measure is fundamental frequency (f0) dependent for equal levels of glottal noise, and (ii) any deviation from signal periodicity affects the ratio, not just turbulent noise. An alternative harmonics-to-noise ratio formulation [glottal related HNR (GHNR')] is proposed to overcome the former problem. In GHNR' a mean over the spectral range of interest of the HNRs at specific harmonic/between-harmonic frequencies (expressed in linear scale) is calculated. For the latter issue [(ii)] two spectral tilt measures are shown, using synthesis data, to be sensitive to glottal noise while at the same time being comparatively insensitive to other glottal aperiodicities. The theoretical development predicts that the spectral tilt measures reduce as noise levels increase. A conventional HNR estimator, GHNR' and two spectral tilt measures are applied to a data set of 13 pathological and 12 normal voice samples. One of the tilt measures and GHNR' are shown to provide statistically significant differentiating power over a conventional HNR estimator.
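
    Read literally, the GHNR' described above is a mean of linear-scale harmonic-to-between-harmonic energy ratios taken over the spectral range of interest. The sketch below is one possible interpretation of that description; the bin selection, windowing and the upper frequency limit are assumptions, not the authors' exact implementation.

```python
# Hedged sketch of a GHNR'-style measure: average, in the linear domain, the
# harmonic-to-between-harmonic power ratios at individual harmonics.
import numpy as np

def ghnr_prime(frame, fs, f0, max_freq=5000.0):
    windowed = frame * np.hanning(len(frame))
    spectrum = np.abs(np.fft.rfft(windowed)) ** 2
    bin_hz = fs / len(frame)
    ratios = []
    k = 1
    while (k + 0.5) * f0 < max_freq:
        # In practice the bins should be aligned to an accurate f0 estimate.
        harmonic_bin = int(round(k * f0 / bin_hz))
        between_bin = int(round((k + 0.5) * f0 / bin_hz))
        ratios.append(spectrum[harmonic_bin] / spectrum[between_bin])
        k += 1
    # Mean of the linear-scale ratios, reported in dB.
    return 10 * np.log10(np.mean(ratios))

# Toy usage: synthetic vowel-like harmonic complex at f0 = 120 Hz, fs = 16 kHz.
fs, f0 = 16000, 120.0
t = np.arange(4096) / fs
frame = sum(np.sin(2 * np.pi * f0 * k * t) / k for k in range(1, 20))
print(round(ghnr_prime(frame, fs, f0), 1))
```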

  18. Auditory Brainstem Gap Responses Start to Decline in Middle Age Mice: A Novel Physiological Biomarker for Age-Related Hearing Loss

    Science.gov (United States)

    Williamson, Tanika T.; Zhu, Xiaoxia; Walton, Joseph P.; Frisina, Robert D.

    2014-01-01

    The CBA/CaJ mouse strain's auditory function is normal during the early phases of life and gradually declines over its lifespan, much like human age-related hearing loss (ARHL), but on a mouse life cycle “time frame”. This pattern of ARHL is relatively similar to that of most humans: difficult to clinically diagnose at its onset, and currently not treatable medically. To address the challenge of early diagnosis, CBA mice were used for the present study to analyze the beginning stages and functional onset biomarkers of ARHL. The results from Auditory Brainstem Response (ABR) audiogram and Gap-in-noise (GIN) ABR tests were compared for two groups of mice of different ages, young adult and middle age. ABR peak components from the middle age group displayed minor changes in audibility, but had a significantly higher prolonged peak latency and decreased peak amplitude in response to temporal gaps in comparison to the young adult group. The results for the younger subjects revealed gap thresholds and recovery rates that were comparable to previous studies of auditory neural gap coding. Our findings suggest that age-linked degeneration of the peripheral and brainstem auditory system is already beginning in middle age, allowing for the possibility of preventative biomedical or hearing protection measures being implemented to attenuate further damage to the auditory system due to ARHL. PMID:25307161

  19. Auditory brainstem gap responses start to decline in mice in middle age: a novel physiological biomarker for age-related hearing loss.

    Science.gov (United States)

    Williamson, Tanika T; Zhu, Xiaoxia; Walton, Joseph P; Frisina, Robert D

    2015-07-01

    The auditory function of the CBA/CaJ mouse strain is normal during the early phases of life and gradually declines over its lifespan, much like human age-related hearing loss (ARHL) but within the "time frame" of a mouse life cycle. This pattern of ARHL is similar to that of most humans: difficult to diagnose clinically at its onset and currently not treatable medically. To address the challenge of early diagnosis, we use CBA mice to analyze the initial stages and functional onset biomarkers of ARHL. The results from Auditory Brainstem Response (ABR) audiogram and Gap-in-noise (GIN) ABR tests were compared for two groups of mice of different ages, namely young adult and middle age. ABR peak components from the middle age group displayed minor changes in audibility but had a significantly higher prolonged peak latency and decreased peak amplitude in response to temporal gaps in comparison with the young adult group. The results for the younger subjects revealed gap thresholds and recovery rates that were comparable with previous studies of auditory neural gap coding. Our findings suggest that age-linked degeneration of the peripheral and brainstem auditory system begins in middle age, allowing for the possibility of preventative biomedical or hearing protection measures to be implemented in order to attenuate further damage to the auditory system attributable to ARHL.
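
    The gap-in-noise paradigm described in these two records embeds a brief silent interval in an otherwise continuous noise burst. The sketch below generates such a stimulus; the burst duration, ramp time and sampling rate are illustrative assumptions rather than the parameters used with the CBA/CaJ mice.

```python
# Simple sketch of a gap-in-noise (GIN) stimulus: a broadband noise burst with a
# brief silent gap inserted at its midpoint (numpy assumed).
import numpy as np

def gin_stimulus(gap_ms, burst_ms=50.0, ramp_ms=0.5, fs=97656):
    rng = np.random.default_rng(2)
    half = rng.standard_normal(int(burst_ms / 1000 * fs / 2))
    ramp = np.linspace(0.0, 1.0, int(ramp_ms / 1000 * fs))
    # Ramp the noise down into the gap and back up after it to limit spectral splatter.
    lead, lag = half.copy(), half.copy()
    lead[-ramp.size:] *= ramp[::-1]
    lag[:ramp.size] *= ramp
    gap = np.zeros(int(gap_ms / 1000 * fs))
    return np.concatenate([lead, gap, lag])

# The 0-ms condition serves as the no-gap control.
stimuli = {gap: gin_stimulus(gap) for gap in (0, 1, 2, 4, 8, 16)}  # gap widths in ms
```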

  20. Some remarks on the effects of drugs, lack of sleep and loud noise on human performance.

    NARCIS (Netherlands)

    Sanders, A.F. & A.A. Bunt.

    1971-01-01

    Some literature is reviewed on the effect of some drugs, (amphetamine, hypnotics, alcohol), loud noise and sleep loss in test of time estimation, decision making, long term performance and short term memory. Results are most clear with respect to amphetamine, hypnotics and lack of sleep, in that amp

  2. The Harmonic Organization of Auditory Cortex

    Directory of Open Access Journals (Sweden)

    Xiaoqin eWang

    2013-12-01

    Full Text Available A fundamental structure of sounds encountered in the natural environment is harmonicity. Harmonicity is an essential component of music found in all cultures. It is also a unique feature of vocal communication sounds such as human speech and animal vocalizations. Harmonics in sounds are produced by a variety of acoustic generators and reflectors in the natural environment, including the vocal apparatuses of humans and animal species as well as musical instruments of many types. We live in an acoustic world full of harmonicity. Given the widespread existence of harmonicity in many aspects of the hearing environment, it is natural to expect it to be reflected in the evolution and development of the auditory systems of both humans and animals, in particular the auditory cortex. Recent neuroimaging and neurophysiology experiments have identified regions of non-primary auditory cortex in humans and non-human primates that have selective responses to harmonic pitches. Accumulating evidence has also shown that neurons in many regions of the auditory cortex exhibit characteristic responses to harmonically related frequencies beyond the range of pitch. Together, these findings suggest that a fundamental organizational principle of auditory cortex is based on harmonicity. Such an organization likely plays an important role in music processing by the brain. It may also form the basis of the preference for particular classes of music and voice sounds.

  3. Experimental study of traffic noise and human response in an urban area: Deviations from standard annoyance predictions

    NARCIS (Netherlands)

    Salomons, E.M.; Janssen, S.A.; Verhagen, H.L.M.; Wessels, P.W.

    2014-01-01

    Annoyance and sleep disturbance by road and rail traffic noise in an urban area are investigated. Noise levels Lden and Lnight are determined with an engineering noise model that is optimized for the local situation, based on local noise measurements. The noise levels are combined with responses of

  4. Perspectives on the design of musical auditory interfaces

    OpenAIRE

    Leplatre, G.; Brewster, S.A.

    1998-01-01

    This paper addresses the issue of music as a communication medium in auditory human-computer interfaces. So far, psychoacoustics has had a great influence on the development of auditory interfaces, directly and through music cognition. We suggest that a better understanding of the processes involved in the perception of actual musical excerpts should allow musical auditory interface designers to exploit the communicative potential of music. In this respect, we argue that the real advantage of...

  5. How Might People Near National Roads Be Affected by Traffic Noise as Electric Vehicles Increase in Number? A Laboratory Study of Subjective Evaluations of Environmental Noise.

    Directory of Open Access Journals (Sweden)

    Ian Walker

    Full Text Available We face a likely shift to electric vehicles (EVs) but the environmental and human consequences of this are not yet well understood. Simulated auditory traffic scenes were synthesized from recordings of real conventional and EVs. These sounded similar to what might be heard by a person near a major national road. Versions of the simulation had 0%, 20%, 40%, 60%, 80% and 100% EVs. Participants heard the auditory scenes in random order, rating each on five perceptual dimensions such as pleasant-unpleasant and relaxing-stressful. Ratings of traffic noise were, overall, towards the negative end of these scales, but improved significantly when there were high proportions of EVs in the traffic mix, particularly when there were 80% or 100% EVs. This suggests a shift towards a high proportion of EVs is likely to improve the subjective experiences of people exposed to traffic noise from major roads. The effects were not a simple result of EVs being quieter: ratings of bandpass-filtered versions of the recordings suggested that people's perceptions of traffic noise were specifically influenced by energy in the 500-2000 Hz band. Engineering countermeasures to reduce noise in this band might be effective for improving the subjective experience of people living or working near major roads, even for conventional vehicles; energy in the 0-100 Hz band was particularly associated with people identifying sound as 'quiet' and, again, this might feed into engineering to reduce the impact of traffic noise on people.

  6. How Might People Near National Roads Be Affected by Traffic Noise as Electric Vehicles Increase in Number? A Laboratory Study of Subjective Evaluations of Environmental Noise.

    Science.gov (United States)

    Walker, Ian; Kennedy, John; Martin, Susanna; Rice, Henry

    2016-01-01

    We face a likely shift to electric vehicles (EVs) but the environmental and human consequences of this are not yet well understood. Simulated auditory traffic scenes were synthesized from recordings of real conventional and EVs. These sounded similar to what might be heard by a person near a major national road. Versions of the simulation had 0%, 20%, 40%, 60%, 80% and 100% EVs. Participants heard the auditory scenes in random order, rating each on five perceptual dimensions such as pleasant-unpleasant and relaxing-stressful. Ratings of traffic noise were, overall, towards the negative end of these scales, but improved significantly when there were high proportions of EVs in the traffic mix, particularly when there were 80% or 100% EVs. This suggests a shift towards a high proportion of EVs is likely to improve the subjective experiences of people exposed to traffic noise from major roads. The effects were not a simple result of EVs being quieter: ratings of bandpass-filtered versions of the recordings suggested that people's perceptions of traffic noise were specifically influenced by energy in the 500-2000 Hz band. Engineering countermeasures to reduce noise in this band might be effective for improving the subjective experience of people living or working near major roads, even for conventional vehicles; energy in the 0-100 Hz band was particularly associated with people identifying sound as 'quiet' and, again, this might feed into engineering to reduce the impact of traffic noise on people.
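
    Both records above tie perceived quietness and annoyance to energy in specific frequency bands. As a hedged illustration of how such a band-energy measure could be computed from a recording, the sketch below estimates the fraction of total power in the 500-2000 Hz band from a Welch spectral density; the function names, sampling rate and synthetic input are illustrative, not those used by the authors.

```python
# Sketch: fraction of a recording's power falling in the 500-2000 Hz band,
# estimated from a Welch power spectral density (scipy assumed).
import numpy as np
from scipy import signal

def band_energy_fraction(x, fs, band=(500.0, 2000.0)):
    f, psd = signal.welch(x, fs=fs, nperseg=4096)
    in_band = (f >= band[0]) & (f <= band[1])
    return np.trapz(psd[in_band], f[in_band]) / np.trapz(psd, f)

# Toy usage with synthetic "traffic" noise.
fs = 44100
rng = np.random.default_rng(3)
recording = rng.standard_normal(10 * fs)
print(f"{band_energy_fraction(recording, fs):.2%} of power in 500-2000 Hz")
```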

  7. Visual speech gestures modulate efferent auditory system.

    Science.gov (United States)

    Namasivayam, Aravind Kumar; Wong, Wing Yiu Stephanie; Sharma, Dinaay; van Lieshout, Pascal

    2015-03-01

    Visual and auditory systems interact at both cortical and subcortical levels. Studies suggest a highly context-specific cross-modal modulation of the auditory system by the visual system. The present study builds on this work by sampling data from 17 young healthy adults to test whether visual speech stimuli evoke different responses in the auditory efferent system compared to visual non-speech stimuli. The descending cortical influences on medial olivocochlear (MOC) activity were indirectly assessed by examining the effects of contralateral suppression of transient-evoked otoacoustic emissions (TEOAEs) at 1, 2, 3 and 4 kHz under three conditions: (a) in the absence of any contralateral noise (Baseline), (b) contralateral noise + observing facial speech gestures related to productions of vowels /a/ and /u/ and (c) contralateral noise + observing facial non-speech gestures related to smiling and frowning. The results are based on 7 individuals whose data met strict recording criteria and indicated a significant difference in TEOAE suppression between observing speech gestures relative to the non-speech gestures, but only at the 1 kHz frequency. These results suggest that observing a speech gesture compared to a non-speech gesture may trigger a difference in MOC activity, possibly to enhance peripheral neural encoding. If such findings can be reproduced in future research, sensory perception models and theories positing the downstream convergence of unisensory streams of information in the cortex may need to be revised.

  8. Criteria for environmental noise assessment

    OpenAIRE

    Hadzi-Nikolova, Marija; Mirakovski, Dejan; Doneva, Nikolinka

    2015-01-01

    The noise assessment generally refers to the assessment of noise impact from a specific source, such as noise originating from industrial plants or road traffic, and this is not always an easy task. In practically every environment, a number of different sources contribute to the ambient noise at a given point. Standardization of noise levels includes recommendations prescribed by legislation that enable people to remain in the environment without danger to human health...

  9. Auditory function in the Tc1 mouse model of Down syndrome suggests a limited region of human chromosome 21 involved in otitis media.

    Directory of Open Access Journals (Sweden)

    Stephanie Kuhn

    Full Text Available Down syndrome is one of the most common congenital disorders leading to a wide range of health problems in humans, including frequent otitis media. The Tc1 mouse carries a significant part of human chromosome 21 (Hsa21) in addition to the full set of mouse chromosomes and shares many phenotypes observed in humans affected by Down syndrome with trisomy of chromosome 21. However, it is unknown whether Tc1 mice exhibit a hearing phenotype and might thus represent a good model for understanding the hearing loss that is common in Down syndrome. In this study we carried out a structural and functional assessment of hearing in Tc1 mice. Auditory brainstem response (ABR) measurements in Tc1 mice showed normal thresholds compared to littermate controls and ABR waveform latencies and amplitudes were equivalent to controls. The gross anatomy of the middle and inner ears was also similar between Tc1 and control mice. The physiological properties of cochlear sensory receptors (inner and outer hair cells: IHCs and OHCs) were investigated using single-cell patch clamp recordings from the acutely dissected cochleae. Adult Tc1 IHCs exhibited normal resting membrane potentials and expressed all K(+) currents characteristic of control hair cells. However, the size of the large conductance (BK) Ca(2+) activated K(+) current (I(K,f)), which enables rapid voltage responses essential for accurate sound encoding, was increased in Tc1 IHCs. All physiological properties investigated in OHCs were indistinguishable between the two genotypes. The normal functional hearing and the gross structural anatomy of the middle and inner ears in the Tc1 mouse contrast to that observed in the Ts65Dn model of Down syndrome which shows otitis media. Genes that are trisomic in Ts65Dn but disomic in Tc1 may predispose to otitis media when an additional copy is active.

  10. Noise Pollution Control System in the Hospital Environment

    Science.gov (United States)

    Figueroa Gallo, LM; Olivera, JM

    2016-04-01

    Problems related to environmental noise are not a new subject, but they have become a major issue because human activities have grown in complexity and intensity with technological advances. Numerous international studies have dealt with the exposure of critical patients to noisy environments such as Neonatal Intensive Care Units; their results show difficulties in the organization of the developing brain, damage to delicate auditory structures and biorhythm disorders, especially in preterm infants. The objective of this paper is to present the development and implementation of a control system that includes technical, management and training aspects to regulate the levels of specific noise sources in the neonatal hospitalization environment. For this purpose, different tools were applied: observations, surveys, procedures, an electronic control device and a training program for a Neonatal Service Unit. As a result, all noise sources were identified (some of them eliminable); all the service's stable staff categories participated voluntarily; environmental noise measurements yielded values between 62.5 and 64.6 dBA, with maxima between 86.1 and 89.7 dBA; a noise control device was designed and installed; and the staff is being trained in noise reduction best practices.

  11. Dopamine Activation Preserves Visual Motion Perception Despite Noise Interference of Human V5/MT

    Science.gov (United States)

    Yousif, Nada; Fu, Richard Z.; Abou-El-Ela Bourquin, Bilal; Bhrugubanda, Vamsee; Schultz, Simon R.

    2016-01-01

    When processing sensory signals, the brain must account for noise, both noise in the stimulus and that arising from within its own neuronal circuitry. Dopamine receptor activation is known to enhance both visual cortical signal-to-noise ratio (SNR) and visual perceptual performance; however, it is unknown whether these two dopamine-mediated phenomena are linked. To assess this, we used single-pulse transcranial magnetic stimulation (TMS) applied to visual cortical area V5/MT to reduce the SNR focally and thus disrupt visual motion discrimination performance to visual targets located in the same retinotopic space. The hypothesis that dopamine receptor activation enhances perceptual performance by improving cortical SNR predicts that dopamine activation should antagonize TMS disruption of visual perception. We assessed this hypothesis via a double-blinded, placebo-controlled study with the dopamine receptor agonists cabergoline (a D2 agonist) and pergolide (a D1/D2 agonist) administered in separate sessions (separated by 2 weeks) in 12 healthy volunteers in a Williams balanced-order design. TMS degraded visual motion perception when the evoked phosphene and the visual stimulus overlapped in time and space in the placebo and cabergoline conditions, but not in the pergolide condition. This suggests that dopamine D1 or combined D1 and D2 receptor activation enhances cortical SNR to boost perceptual performance. That local visual cortical excitability was unchanged across drug conditions suggests the involvement of long-range intracortical interactions in this D1 effect. Because increased internal noise (and thus lower SNR) can impair visual perceptual learning, improving visual cortical SNR via D1/D2 agonist therapy may be useful in boosting rehabilitation programs involving visual perceptual training. SIGNIFICANCE STATEMENT In this study, we address the issue of whether dopamine activation improves visual perception despite increasing sensory noise in the visual cortex

  12. Measurement of the beauty of periodic noises

    CERN Document Server

    Manet, Vincent

    2012-01-01

    In this article, indicators to describe the "beauty" of noises are proposed. Rhythmic, tonal and harmonic suavity are introduced. They characterize a noise in terms of rhythmic regularity (rhythmic suavity), of the auditory pleasure of the "chords" constituting the signal (tonal suavity) and of the transitions between those chords (harmonic suavity). These indicators were developed for periodic noises typically produced by rotating machines such as engines, compressors... and have been used by our industrial customers for the past two years.

  13. Towards a general framework for including noise impacts in LCA.

    Science.gov (United States)

    Cucurachi, Stefano; Heijungs, Reinout; Ohlau, Katrin

    Several types of damage have been associated with the exposure of human beings to noise. These include auditory effects, i.e., hearing impairment, but also non-auditory physiological ones such as hypertension and ischemic heart disease, or psychological ones such as annoyance, depression, sleep disturbance, limited performance of cognitive tasks or inadequate cognitive development. Noise can also interfere with intended activities, both in daytime and nighttime. ISO 14040 also indicated the necessity of introducing noise, together with other less developed impact categories, into a complete LCA study, possibly changing the results of many LCA studies already available. The attempts available in the literature have focused on the integration of transportation noise in LCA. Although it is considered the most frequent source of intrusive impact, transportation noise is not the only type of noise that can have a malign impact on public health. Several other sources of noise, such as industrial or occupational noise, need to be taken into account for a complete consideration of noise in LCA. Major life cycle inventories (LCI) typically do not yet contain data on noise emissions and characterisation factors are not yet clearly defined. The aim of the present paper is to briefly review what is already available in the field and propose a new framework for the consideration of human health impacts of any type of noise that could be of interest in LCA practice, providing indications for the introduction of noise in LCI and analysing what data are already available and, in the form of a research agenda, what other resources would be needed to reach a complete coverage of the problem. The literature production related to the impacts of noise on human health has been analysed, with considerations of impacts caused by transportation noise as well as occupational and industrial noise. The analysis of the specialist medical literature allowed for a better understanding of how to deal with the

  14. Resizing Auditory Communities

    DEFF Research Database (Denmark)

    Kreutzfeldt, Jacob

    2012-01-01

    Heard through the ears of the Canadian composer and music teacher R. Murray Schafer, the ideal auditory community had the shape of a village. Schafer’s work with the World Soundscape Project in the 70s represents an attempt to interpret contemporary environments through musical and auditory

  15. The importance of laughing in your face: influences of visual laughter on auditory laughter perception.

    Science.gov (United States)

    Jordan, Timothy R; Abedipour, Lily

    2010-01-01

    Hearing the sound of laughter is important for social communication, but processes contributing to the audibility of laughter remain to be determined. Production of laughter resembles production of speech in that both involve visible facial movements accompanying socially significant auditory signals. However, while it is known that speech is more audible when the facial movements producing the speech sound can be seen, similar visual enhancement of the audibility of laughter remains unknown. To address this issue, spontaneously occurring laughter was edited to produce stimuli comprising visual laughter, auditory laughter, visual and auditory laughter combined, and no laughter at all (either visual or auditory), all presented in four levels of background noise. Visual laughter and no-laughter stimuli produced very few reports of auditory laughter. However, visual laughter consistently made auditory laughter more audible, compared to the same auditory signal presented without visual laughter, resembling findings reported previously for speech.

  16. The auditory brainstem is a barometer of rapid auditory learning.

    Science.gov (United States)

    Skoe, E; Krizman, J; Spitzer, E; Kraus, N

    2013-07-23

    To capture patterns in the environment, neurons in the auditory brainstem rapidly alter their firing based on the statistical properties of the soundscape. How this neural sensitivity relates to behavior is unclear. We tackled this question by combining neural and behavioral measures of statistical learning, a general-purpose learning mechanism governing many complex behaviors including language acquisition. We recorded complex auditory brainstem responses (cABRs) while human adults implicitly learned to segment patterns embedded in an uninterrupted sound sequence based on their statistical characteristics. The brainstem's sensitivity to statistical structure was measured as the change in the cABR between a patterned and a pseudo-randomized sequence composed from the same set of sounds but differing in their sound-to-sound probabilities. Using this methodology, we provide the first demonstration that behavioral indices of rapid learning relate to individual differences in brainstem physiology. We found that neural sensitivity to statistical structure manifested along a continuum, from adaptation to enhancement, where cABR enhancement (patterned>pseudo-random) tracked with greater rapid statistical learning than adaptation. Short- and long-term auditory experiences (days to years) are known to promote brainstem plasticity, and here we provide a conceptual advance by showing that the brainstem is also integral to rapid learning occurring over minutes.

  17. Prevalence of auditory and vestibular symptoms among workers exposed to occupational noise

    Directory of Open Access Journals (Sweden)

    Rosalina Ogido

    2009-04-01

    Full Text Available The objective of the study was to estimate the prevalence of auditory and vestibular symptoms in workers exposed to occupational noise. The medical records of 175 workers with noise-induced hearing loss, seen at an occupational health reference center in Campinas, Southeastern Brazil, from 1997 to 2003, were analyzed. The variables studied were the frequency of symptoms of hearing loss, tinnitus and vertigo. Associations with age, duration of noise exposure and pure-tone thresholds were analyzed using the chi-square and Fisher's exact tests. Hearing loss was reported in 74% of cases, tinnitus in 81% and vertigo in 13.2%. Associations were found between hearing loss and age, duration of noise exposure and pure-tone thresholds, and between vertigo and duration of noise exposure; no other significant associations were found.

  18. Auditory-motor learning influences auditory memory for music.

    Science.gov (United States)

    Brown, Rachel M; Palmer, Caroline

    2012-05-01

    In two experiments, we investigated how auditory-motor learning influences performers' memory for music. Skilled pianists learned novel melodies in four conditions: auditory only (listening), motor only (performing without sound), strongly coupled auditory-motor (normal performance), and weakly coupled auditory-motor (performing along with auditory recordings). Pianists' recognition of the learned melodies was better following auditory-only or auditory-motor (weakly coupled and strongly coupled) learning than following motor-only learning, and better following strongly coupled auditory-motor learning than following auditory-only learning. Auditory and motor imagery abilities modulated the learning effects: Pianists with high auditory imagery scores had better recognition following motor-only learning, suggesting that auditory imagery compensated for missing auditory feedback at the learning stage. Experiment 2 replicated the findings of Experiment 1 with melodies that contained greater variation in acoustic features. Melodies that were slower and less variable in tempo and intensity were remembered better following weakly coupled auditory-motor learning. These findings suggest that motor learning can aid performers' auditory recognition of music beyond auditory learning alone, and that motor learning is influenced by individual abilities in mental imagery and by variation in acoustic features.

  19. Cognitive factors shape brain networks for auditory skills: spotlight on auditory working memory.

    Science.gov (United States)

    Kraus, Nina; Strait, Dana L; Parbery-Clark, Alexandra

    2012-04-01

    Musicians benefit from real-life advantages, such as a greater ability to hear speech in noise and to remember sounds, although the biological mechanisms driving such advantages remain undetermined. Furthermore, the extent to which these advantages are a consequence of musical training or innate characteristics that predispose a given individual to pursue music training is often debated. Here, we examine biological underpinnings of musicians' auditory advantages and the mediating role of auditory working memory. Results from our laboratory are presented within a framework that emphasizes auditory working memory as a major factor in the neural processing of sound. Within this framework, we provide evidence for music training as a contributing source of these abilities. © 2012 New York Academy of Sciences.

  20. Cognitive factors shape brain networks for auditory skills: spotlight on auditory working memory

    Science.gov (United States)

    Kraus, Nina; Strait, Dana; Parbery-Clark, Alexandra

    2012-01-01

    Musicians benefit from real-life advantages such as a greater ability to hear speech in noise and to remember sounds, although the biological mechanisms driving such advantages remain undetermined. Furthermore, the extent to which these advantages are a consequence of musical training or innate characteristics that predispose a given individual to pursue music training is often debated. Here, we examine biological underpinnings of musicians’ auditory advantages and the mediating role of auditory working memory. Results from our laboratory are presented within a framework that emphasizes auditory working memory as a major factor in the neural processing of sound. Within this framework, we provide evidence for music training as a contributing source of these abilities. PMID:22524346

  1. Biological impact of auditory expertise across the life span: musicians as a model of auditory learning.

    Science.gov (United States)

    Strait, Dana L; Kraus, Nina

    2014-02-01

    Experience-dependent characteristics of auditory function, especially with regard to speech-evoked auditory neurophysiology, have garnered increasing attention in recent years. This interest stems from both pragmatic and theoretical concerns as it bears implications for the prevention and remediation of language-based learning impairment in addition to providing insight into mechanisms engendering experience-dependent changes in human sensory function. Musicians provide an attractive model for studying the experience-dependency of auditory processing in humans due to their distinctive neural enhancements compared to nonmusicians. We have only recently begun to address whether these enhancements are observable early in life, during the initial years of music training when the auditory system is under rapid development, as well as later in life, after the onset of the aging process. Here we review neural enhancements in musically trained individuals across the life span in the context of cellular mechanisms that underlie learning, identified in animal models. Musicians' subcortical physiologic enhancements are interpreted according to a cognitive framework for auditory learning, providing a model in which to study mechanisms of experience-dependent changes in human auditory function.

  2. Does exposure to noise from human activities compromise sensory information from cephalopod statocysts?

    Science.gov (United States)

    Solé, Marta; Lenoir, Marc; Durfort, Mercè; López-Bejar, Manel; Lombarte, Antoni; van der Schaar, Mike; André, Michel

    2013-10-01

    Many anthropogenic noise sources are nowadays contributing to the general noise budget of the oceans. The extent to which sound in the sea impacts and affects marine life is a topic of considerable current interest both to the scientific community and to the general public. Cephalopods potentially represent a group of species whose ecology may be influenced by artificial noise that would have a direct consequence on the functionality and sensitivity of their sensory organs, the statocysts. These are responsible for their equilibrium and movements in the water column. Controlled Exposure Experiments, including the use of a 50-400 Hz sweep (RL=157±5dB re 1μPa with peak levels up to SPL=175dB re 1μPa), revealed lesions in the statocysts of four cephalopod species of the Mediterranean Sea when exposed to low frequency sounds: Sepia officinalis (n=76), Octopus vulgaris (n=4), Loligo vulgaris (n=5) and Illex coindetii (n=2). The analysis was performed through scanning (SEM) and transmission (TEM) electron microscopy techniques on the whole inner structure of the cephalopods' statocyst, especially on the macula and crista. All exposed individuals presented the same lesions and the same incremental effects over time, consistent with a massive acoustic trauma observed in other species that have been exposed to much higher intensities of sound: Immediately after exposure, the damage was observed in the macula statica princeps (msp) and in the crista sensory epithelium. Kinocilia on hair cells were either missing or were bent or flaccid. A number of hair cells showed protruding apical poles and ruptured lateral plasma membranes, most probably resulting from the extrusion of cytoplasmic material. Hair cells were also partially ejected from the sensory epithelium, and spherical holes corresponding to missing hair cells were visible in the epithelium. The cytoplasmic content of the damaged hair cells showed obvious changes, including the presence of numerous vacuoles

  3. The meaning of city noises: Investigating sound quality in Paris (France)

    Science.gov (United States)

    Dubois, Daniele; Guastavino, Catherine; Maffiolo, Valerie; Guastavino, Catherine; Maffiolo, Valerie

    2001-05-01

    The sound quality of Paris (France) was investigated using field inquiries in actual environments (open questionnaires) and recordings under laboratory conditions (free-sorting tasks). Cognitive categories of soundscapes were inferred by means of psycholinguistic analyses of verbal data and of mathematical analyses of similarity judgments. Results show that auditory judgments mainly rely on source identification. The appraisal of urban noise therefore depends on the qualitative evaluation of noise sources. The salience of human sounds in public spaces has been demonstrated, in relation to pleasantness judgments: soundscapes with human presence tend to be perceived as more pleasant than soundscapes consisting solely of mechanical sounds. Furthermore, human sounds are qualitatively processed as indicators of human outdoor activities, such as open markets, pedestrian areas, and sidewalk cafe districts that reflect city life. In contrast, mechanical noises (mainly traffic noise) are commonly described in terms of physical properties (temporal structure, intensity) of a permanent background noise that also characterizes urban areas. This calls for considering both quantitative and qualitative descriptions to account for the diversity of cognitive interpretations of urban soundscapes, since subjective evaluations depend both on the meaning attributed to noise sources and on inherent properties of the acoustic signal.